This is a fork of another topic.
I think it is high time that we stop talking about programming languages in isolation from the tools that support them. The tools have always depended on the languages, obviously, but increasingly the languages depend on the tools, especially in professional language design contexts. Dart and TypeScript are extreme examples of this, where the type systems exist primarily to support tooling and are much less about early error detection. As another example, C# design is heavily influenced by Visual Studio.
Designing a language invariably involves trade-offs, and if we accept tools as part of the core experience, we can make decisions that are impossible given the language by itself. For example, many people object to type inference because it obscures developer documentation, especially when the inference is not completely local. However, given an editor where inferred type annotations are easily viewed when needed, this is no longer a problem. Likewise, a type system whose error messages are incomprehensible in isolation would be fundamentally broken, but tooling can be enhanced to reach that point of comprehensibility. Type systems in general are heavily intertwined with tooling these days, playing a huge role in features like code completion that many developers refuse to live without.
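To make the inference point concrete, here is a minimal Python sketch (the function names are invented for illustration): neither function carries a return annotation, yet a checker-backed editor (e.g. one running mypy or pyright) can display the inferred types on hover, so the annotation need not appear in the source to serve as documentation.

```python
def parse_prices(lines):
    # No annotation here; a type checker infers the return type list[float]
    # from the body, and an editor can surface that inferred type on hover.
    return [float(s.strip()) for s in lines]

def total(lines):
    # The inferred type of parse_prices propagates non-locally to here,
    # which is exactly the case where inference "obscures" documentation
    # on paper but not in a tooling-aware reading of the code.
    return sum(parse_prices(lines))

print(total(["1.50", "2.25"]))  # 3.75
```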
And of course, there are huge advances to be had when language and programming model design can complement the debugging experience.
There are alternative opinions about this; to quote and reply to Andreas from the other topic:
This is a good point: code should stand on its own as a human communication medium. However, even our human communication is increasingly tool dependent, as we rely on things like Google to extend our capabilities. We are becoming cyborgs whether we like it or not.
Code isn't bound to paper. Even when printed out on paper, it does not need to be printed exactly as it was typed in.
This is more of an observation about current and future success of programming experiences. I would say it has already happened: successful programming languages already pay these costs, and new languages must either devote a lot of resources to tooling, or build them in at the beginning, or they simply won't be successful. Some people wonder why language X isn't successful, and then chalk it up to unenlightened developers...no.
Since an appeal to common sense is being made, I would just say that we have different world views. We at least exist in different markets.
Recent LtU discussions have mentioned (e.g. here) that it is hard to get programming language "design work" recognized academically. Good design work that is *also* accompanied by solid benchmarks or a strong correctness argument will get recognized for the benchmarks or formal guarantees, but design work in domains that do not lend themselves to easy performance metrics, or done by people who do not use formalization as a validation tool, will be hard to get recognized. A recent blog post by Jonathan Edwards echoes the same frustration (note that it also complains that the worse-is-better philosophy prevalent in industry is not good at fostering radical changes either).
(I would like to apologize to the part of the LtU community that does not care about academia. I try to keep my usual submissions/topics independent of professional occupation, and this one is the exception. Feel free to just skip it.)
There are some academic spaces where design work can stand on its own. I know of the Future of Programming workshop, the Onward! conference, and maybe (it's probably too soon to tell) the new SNAPL conference. However, at least the first two events are considered somewhat second-class in the academic community (which has an unfortunate tendency to foster competitiveness through metrics and will keep a core of "top tier" conferences, creating strong incentives to publish only at the most selective/established places). ICFP has a "functional pearl" category (see the Call For Papers (CFP), section "Special categories of papers") that counts as a first-class ICFP publication, ITP has a "rough diamond" category (CFP, second-to-last paragraph), and Oleg just mentioned FLOPS "system descriptions" (it also has "declarative pearls"; CFP, "Scope"). What would a category of "design papers" look like? (Yes, the idea is to have a niche to protect endangered species that the free market would tend to root out; I think we can win big by dedicating a reasonable proportion of our scarce resources to fostering intellectual and methodological diversity.)
Here is a proposal for how it would be defined, or rather, how it would be reviewed (which I think is the subtle question):
(I don't know where such a category would fit; PLDI has a D for "Design" in the name, but then I don't know much about this conference.)
FLOPS 2016: promoting cross-fertilization across the whole of declarative programming, theory and practice
LtU is generally not an appropriate venue for posting calls for papers, but there have been exceptions when the CFP has exceptionally wide appeal. Hopefully FLOPS 2016 might qualify.
FLOPS was established to promote cooperation between logic and functional programmers, hence the name. This year we have taken the name exceptionally seriously, covering the whole extent of declarative programming, which also includes program transformation, rewriting, and extracting programs from proofs of their correctness. There is another strong emphasis: on cross-fertilization among people developing theory, people writing tools and language systems using that theory, and the users of these tools. We specifically ask the authors to make their papers understandable by the wide audience of declarative programmers and researchers.
As you can see from the Program Committee list, the members have done first-rate theoretical work, and are also known for their languages, tools and libraries. The PC will appreciate good practical work. Incidentally, there is a special category, ``System Descriptions'', that FLOPS has always been known for. We really want to have more submissions in that category.
One can see even on LtU that there is some rift between theoreticians and practitioners: Sean McDirmid's messages come to mind. He does have many good points. We really hope that FLOPS will help repair this rift.
For an Actor to cast an address of another Actor is a dubious operation. Instead, an Actor should only cast its own address to interfaces that it implements. For example,
Consider the following definition:

    Interface Account with
      getBalance[ ] ↦ Currency,
      deposit[Currency] ↦ Void,
      withdraw[Currency] ↦ Void

The following is an implementation of Account:

    Actor SimpleAccount[aBalance:Currency]
      myBalance ≔ aBalance｡
      implements Account using
        getBalance[ ] → myBalance
        deposit[anAmount] → Void afterward myBalance ≔ myBalance+anAmount
        withdraw[anAmount] →
          (anAmount > myBalance) �
            True ⦂ Throw Overdrawn[ ]
            False ⦂ Void afterward myBalance ≔ myBalance–anAmount

The above implementation of Account can be extended as follows to provide the ability to revoke some abilities to change an account, by providing AccountSupervisor and AccountRevoker interfaces. The implementation AccountSupervisor below implements the Account interface as well as the AccountSupervisor and AccountRevoker interfaces, as an extension of the implementation SimpleAccount:

    Actor AccountSupervisor[initialBalance:Currency] extends SimpleAccount[initialBalance]
      withdrawableIsRevoked ≔ False,
      depositableIsRevoked ≔ False｡
      implements AccountSupervisor using
        getRevoker[ ] → ⍠AccountRevoker
        getAccount[ ] → ⍠Account
        withdrawFee[anAmount] → Void afterward myBalance ≔ myBalance–anAmount
          // withdraw fee even if balance goes negative
      also partially reimplements exportable Account using
        withdraw[anAmount] →
          withdrawableIsRevoked �
            True ⦂ Throw Revoked[ ]
            False ⦂ ⍠SimpleAccount.withdraw[anAmount]
        deposit[anAmount] →
          depositableIsRevoked �
            True ⦂ Throw Revoked[ ]
            False ⦂ ⍠SimpleAccount.deposit[anAmount]
      also implements exportable AccountRevoker using
        revokeDepositable[ ] → Void afterward depositableIsRevoked ≔ True
        revokeWithdrawable[ ] → Void afterward withdrawableIsRevoked ≔ True

For example, the following expression returns negative €3:

    Let anAccountSupervisor ← AccountSupervisor.[€3]｡
    Let anAccount ← anAccountSupervisor.getAccount[ ],
        aRevoker ← anAccountSupervisor.getRevoker[ ]｡
      anAccount.withdraw[€2]               // the balance is €1
      aRevoker.revokeWithdrawable[ ]       // withdrawableIsRevoked is True
      Try anAccount.withdraw[€5]           // try another withdraw
        catch� _ ⦂ Void                    // ignore the thrown exception; the balance remains €1
      anAccountSupervisor.withdrawFee[€4]  // €4 is withdrawn even though withdrawableIsRevoked
      anAccount.getBalance[ ]              // the balance is negative €3
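The revocation pattern above can be mirrored in ordinary Python. This is only a sketch: the Account/AccountRevoker facets returned by getAccount and getRevoker are collapsed into methods on a single object, there is no actor semantics, and the names are adapted from the example rather than being real ActorScript.

```python
class Revoked(Exception): pass
class Overdrawn(Exception): pass

class SimpleAccount:
    def __init__(self, balance):
        self.balance = balance
    def get_balance(self):
        return self.balance
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        if amount > self.balance:
            raise Overdrawn()
        self.balance -= amount

class AccountSupervisor(SimpleAccount):
    def __init__(self, balance):
        super().__init__(balance)
        self.withdrawable_is_revoked = False
        self.depositable_is_revoked = False
    # The Account facet checks revocation before delegating.
    def withdraw(self, amount):
        if self.withdrawable_is_revoked:
            raise Revoked()
        super().withdraw(amount)
    def deposit(self, amount):
        if self.depositable_is_revoked:
            raise Revoked()
        super().deposit(amount)
    # The supervisor-only operation bypasses revocation and may overdraw.
    def withdraw_fee(self, amount):
        self.balance -= amount
    # The Revoker facet.
    def revoke_withdrawable(self):
        self.withdrawable_is_revoked = True
    def revoke_depositable(self):
        self.depositable_is_revoked = True

supervisor = AccountSupervisor(3)
supervisor.withdraw(2)             # the balance is 1
supervisor.revoke_withdrawable()
try:
    supervisor.withdraw(5)         # rejected: withdrawal capability revoked
except Revoked:
    pass                           # the balance remains 1
supervisor.withdraw_fee(4)         # fee taken even though withdraw is revoked
print(supervisor.get_balance())    # -3
```

The key point survives the translation: clients holding only the Account facet lose the ability to withdraw after revocation, while the supervisor retains a privileged operation that ignores the revocation flag.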
Callisto -- Versioned objects (github)
I think the language lacks examples.
Deny Capabilities for Safe, Fast Actors.
Combining the actor-model with shared memory for performance is efficient but can introduce data-races. Existing approaches to static data-race freedom are based on uniqueness and immutability, but lack flexibility and high-performance implementations. Our approach, based on deny properties, allows reading, writing and traversing unique references, introduces a new form of write uniqueness, and guarantees atomic behaviours.
As a subject, garbage collection has a nice glossary that can be used as a reference while reading the literature. Do type systems/type theory have a comparable online glossary?
I haven't come across one, but it seems like it could be a valuable resource.
Resourceable, Retargetable, Modular Instruction Selection Using a Machine-Independent, Type-Based Tiling of Low-Level Intermediate Code
Ramsey and Dias have a series of papers about making it ever easier to generate compiler backends, and the claim is that they produce decent code to boot. I wonder whether this stuff has shown up, or will show up, in compilers I can use? (Or do you think it does not actually matter, for some pragmatic reason or other?)
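For readers unfamiliar with the idea, instruction selection by tiling covers an IR tree with patterns, each pattern emitting one target instruction. Here is a toy Python sketch of the classic greedy "maximal munch" strategy (this is an illustration of the general technique, not of Ramsey and Dias's system; all node names and pseudo-instructions are invented):

```python
from dataclasses import dataclass

@dataclass
class Node:
    op: str            # e.g. "MEM", "ADD", "CONST", "TEMP"
    kids: tuple = ()
    value: object = None

def munch(node, out):
    """Emit pseudo-instructions for `node`, trying the largest tile first."""
    # Big tile: MEM(ADD(e, CONST k)) becomes a single load with offset.
    if (node.op == "MEM" and node.kids[0].op == "ADD"
            and node.kids[0].kids[1].op == "CONST"):
        base = munch(node.kids[0].kids[0], out)
        k = node.kids[0].kids[1].value
        out.append(f"load r, [{base}+{k}]")
        return "r"
    # Smaller tiles, one IR operator each.
    if node.op == "MEM":
        out.append(f"load r, [{munch(node.kids[0], out)}]")
        return "r"
    if node.op == "ADD":
        a = munch(node.kids[0], out)
        b = munch(node.kids[1], out)
        out.append(f"add r, {a}, {b}")
        return "r"
    if node.op == "CONST":
        out.append(f"li r, {node.value}")
        return "r"
    if node.op == "TEMP":
        return node.value
    raise ValueError(node.op)

# MEM(ADD(TEMP fp, CONST 8)) is covered by the big tile: one load,
# rather than a separate add followed by a load.
tree = Node("MEM", (Node("ADD", (Node("TEMP", value="fp"),
                                 Node("CONST", value=8))),))
code = []
munch(tree, code)
print(code)  # ['load r, [fp+8]']
```

Retargetability then amounts to swapping in a different tile set for a different machine, which is roughly the dimension the paper's title is about.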
Some of you might have followed some of my posts during 2012/2013 where I described the design of a programming language which allows static verification.
The previous working title had been "Modern Eiffel", since my first attempt had been to create a variant of Eiffel which allows static verification. During the language design, more and more weaknesses of Eiffel popped up which stand in the way of static verification. The most prominent weak point is that classical Eiffel is not type safe.
The result of my language design activities is a new language which has some syntactic resemblance to Eiffel.
However, semantically the new language, now called Albatross, is more like Coq or Isabelle, because it has a functional core and it allows one to state assertions and prove them.
For the last two years I have worked on a compiler and verifier for the language. I clearly underestimated the effort: the one-year project has already taken two years, and the available implementation still has a lot of restrictions. However, the theorem prover works quite well, and it will certainly take some more time to complete the compiler.
Those of you who know Coq might enjoy the capabilities of the proof engine, and the fact that there is no extra proof language: all proofs are expressed as pure statements in predicate logic. Proof automation is done by predicate calculus as well, not by an integrated tactic language like Ltac. Therefore the language is easier to learn than Coq, because every programmer has a well-developed intuition about logic.
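To illustrate what a proof "expressed by pure statements in predicate logic" looks like in contrast to a tactic script, here is a trivial commutativity lemma written as a chain of justified statements (ordinary logical notation, not actual Albatross syntax):

```latex
\begin{array}{lll}
1. & a \land b     & \text{assumption} \\
2. & a             & \land\text{-elimination, from 1} \\
3. & b             & \land\text{-elimination, from 1} \\
4. & b \land a     & \land\text{-introduction, from 3 and 2}
\end{array}
\qquad \text{hence } \vdash (a \land b) \Rightarrow (b \land a).
```

Each line is itself a statement of the logic, so the proof is checked by the same predicate calculus the programmer already writes assertions in, rather than by a separate tactic interpreter.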
You can find the downloadable version of the compiler and a language description at the Albatross website.
[Church 1934] stated the fundamental paradox as follows:
"in the case of any system of symbolic logic, the set of all provable theorems is [computationally] enumerable... any system of symbolic logic not hopelessly inadequate ... would contain the formal theorem that this same system ... was either insufficient [theorems are not computationally enumerable] or over-sufficient [that theorems are computationally enumerable means that the system is inconsistent]...
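The phrase "computationally enumerable" can be made concrete with a toy formal system. The sketch below enumerates theorems of Hofstadter's MIU system (axiom MI plus four rewrite rules) by breadth-first search; it is an illustration of enumerability in general, not of any system Church had in mind.

```python
from collections import deque

def successors(s):
    """Apply each MIU inference rule to theorem s, in a fixed order."""
    out = []
    if s.endswith("I"):
        out.append(s + "U")                    # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.append("M" + s[1:] * 2)            # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.append(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.append(s[:i] + s[i + 2:])      # rule 4: UU  -> (removed)
    return out

def enumerate_theorems(limit):
    """List the first `limit` theorems reachable from the axiom MI."""
    seen, queue, order = {"MI"}, deque(["MI"]), ["MI"]
    while queue and len(order) < limit:
        for t in successors(queue.popleft()):
            if t not in seen and len(order) < limit:
                seen.add(t)
                queue.append(t)
                order.append(t)
    return order

theorems = enumerate_theorems(10)
print(theorems)
```

The set of provable strings is exactly what such a program prints in the limit; Church's point is that for any adequate logic this enumerability, combined with consistency, forces the incompleteness-style alternatives quoted above.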