References about the importance of formalism in programming language design

Hi!

I have an unusual question: is there any paper/essay/... that discusses the importance of having a coherent formal design of a programming language to guide and inform its implementation?

Thanks

Coherent Formal Design?

What does that even mean? It's incoherent, I tell you!

Anyhow, I've recently been going through the exercises at:

Coq for POPL folk: A tutorial on the use of the Coq proof assistant in formalizing programming language metatheory.

This tutorial doesn't bother motivating its own existence. It just dives straight in. But perhaps POPL would be a good place to search for papers and essays of the sort you seek.

I was born with search

I was born with search engines in hand, so thanks but no thanks :)

I was looking for something that some LTUer found specifically illuminating or interesting, especially since it can be seen as a matter of opinion and not necessarily as a provable assertion.

I don't think the paper you

I don't think the paper you are looking for exists; no one has made that argument yet, or at least not in a convincing enough way to put it to paper. Formalisms are nice for polishing and for evaluating and understanding an existing language (e.g., Featherweight Java), but rarely play much of a part in the design process.

That's not been my experience...

...my work on FRP has been very, very strongly driven by formal semantics. Getting higher-order functions to interact correctly with the concerns of reactive programming (e.g., avoiding causality violations and space leaks, interacting safely with scene graphs and the DOM) required significant formal machinery just to formulate the problem.

Of course, Greek letters can't actually replace creativity in solving our problems, but I find them very useful tools for recognizing when a candidate solution works, or doesn't.

When you get to the point

When you get to the point that you must rely heavily on formal reasoning to determine whether your abstractions are "good" or not, they are probably going to be too complex for normal people to use. I would call this a "language design smell." It's one of the biggest problems I have with classic FRP, which I honestly didn't get until I took in FrTime, where the reasoning behind signals is much more mundane and accessible. Maybe I just wasn't smart enough, though.

Formal reasoning can provide clarity to a problem space; it can also be used to explain important nuances of relatively simple abstractions. But this reasoning and clarity generally come after design, acting to grow the knowledge of the community (designers and users) and informing future designs in an indirect way.

Huh?

Seriously, I don't know what you are talking about. Quite a few languages (or language features) in the last decades have been heavily influenced, informed, and/or guided by formal work. Take functional languages in particular, or more recently e.g. Scala, which you know better than me. And when you contrast those with the average mess of all the Perls, C++s, JavaScripts, and Rubys of the world then it is pretty obvious what the advantage of formalism is as a guiding principle in language design.

I'm not arguing that

I'm not arguing that formalisms don't guide language design. Rather, I'm arguing that their effects are in practice indirect, and that this is as it should be. It was much easier to call out covariance as having formal problems in Eiffel after the fact than it would have been to overthink it during language design. Future languages benefit from an understanding of past designs; i.e., the benefits of formalism trail design, in that you have to design the languages first before you can understand their problems.

How did formalisms explicitly guide Scala's design? The designer of course understood a lot from past experience, which probably included formal work. However, I don't believe that Scala was worked out completely formally first, beyond what was already understood from that experience. But then the compiler was written, code was written in the language, and then many interesting formal issues were uncovered via the new experiences people had using the language. This pattern repeats in almost all real languages; formalism is part of a feedback loop where it informs but does not itself dominate the other stages of the design.

When formalisms are really required to work out an abstraction before it has any hope of being used, I would argue that the abstraction is too complicated. I admit this statement is controversial, but even the simple abstractions that we think we understand will have nuanced issues, while the abstractions that we can't convince ourselves that we understand without formalisms will just be impossible to use. This is how I feel whenever someone mentions space leaks in Haskell. It is quite an interesting formal problem, but it appears to put the language beyond mortals.

Indirect effect?

Where do you draw the line between direct and indirect effect on language design? And why would the difference matter much?

Even if you restrict to very direct examples, you don't have to look far: e.g. Standard ML as a whole language, any new feature that went into ML modules, monads in Haskell, newer Haskell features such as type families---all examples where a formalisation came first. And that's just random examples from the area I am following most closely. Similarly, it was my understanding that the nuObj calculus was sort of a formal template for Scala (even if the final result deviated from it).

It was much easier to call out covariance as having formal problems in Eiffel after the fact than it would have been to overthink it during language design.

Well, "easier" for sure, but what are you implying? That it was better doing it that way?

Well, "easier" for sure, but

Well, "easier" for sure, but what are you implying? That it was better doing it that way?

Yes, I think it is natural for formalism to chase design and vice versa. Doing formal reasoning requires understanding that mostly doesn't come from within itself. Rather, problems are observed in the "wild" and then we are able to describe them formally.

There are definitely languages that have been formalisation-first. The OP's question is whether there have been any arguments put to paper that this is the way it should be; that this is somehow better. I have yet to see this argument, and I believe there is a good reason for that: it limits your language to features that are already understood very well, because without experience that is all you can formalize.

Edit: grammar.

Arguments

I was alluding to the (IMO obvious, and quite folklore) argument two posts up: formalisation leads to much cleaner and less buggy languages. It's a very thorough sanity check. I'd argue that e.g. SML is a good case in point.

In fact, there are things that are practically impossible to get right, or even understand the design space properly, without formalisation -- e.g. any non-trivial type system feature. So I strongly disagree with what you're saying.

Are we confusing cause and

Are we confusing cause and effect? Maybe formalisms can lead to much cleaner languages, but maybe because the abstractions used in the language and their interactions are already very well understood (e.g., lambda calculus languages)? Is it cleaner because formalisms were used or is it cleaner because it only used abstractions that were understood well enough to be formalized? Some kind of combination?

We can really just agree to disagree then, and I suspect we are arguing about the wrong thing anyways. Obviously, theory plays a role in the process somewhere, we are only arguing about how early it comes in what is essentially a loop, which seems kind of pointless to me.

No

I already gave the example of type system design, which does not fly at all without proper formal development, I can assure you of that. And any interesting idea there involves more than just consolidating known stuff. So the answer to your rhetorical question is a clear no, at least for areas like that, where the design goals are some sufficiently "deep" semantic properties.

How many people actually use

How many people actually use SML? Yes, there are examples out there of formalism-driven languages, where formalism comes first in a waterfall model of language design and everything else is an afterthought. Are we better for that? I argue not; again, we can disagree.

Bad ideas can be written in

Bad ideas can be written in any language. Even Coq.

Bad ideas can be formalized

Bad ideas can be formalized just as easily as (or maybe even more easily than) good ideas. Tools are just tools.

Shall we throw our model

Shall we throw our model away, or fix corner cases, or issue warnings, or introduce artificial constraints, or do more code generation, incurring performance penalties which can be removed later ... ?

Decision making happens in a living zone which takes years if not decades to become woody. It is those concerns that motivate our analysis in the first place. If we suggest a solution, we would like to know whether it is one and whether it still doesn't violate our principles. It is algorithmic analysis, and everyone can understand this. If you are able to create a closed model for everything in the first place, you are lucky, and probably your problem wasn't even ambitious ;)

Type systems are a rewarding example for formal modeling, because they are formal systems anyway. However, type systems are often needed the least and cause the most work to get right. Of course you need them in the design of the next big language, the Java successor which revolutionizes the industry. What else shall young men try?

Type systems are a rewarding

Type systems are a rewarding example for formal modeling, because they are formal systems anyway. However, type systems are often needed the least and cause the most work to get right. Of course you need them in the design of the next big language, the Java successor which revolutionizes the industry. What else shall young men try?

The definition of "type system" is large and can mostly be considered to encompass all formal analyses that we could want to run on the programs of a given language -- though there are some people arguing for keeping the name "type systems" for reasonably-intuitive and inferable systems and using names such as "static analyses" or "automated proof systems" for the rest.

Want to study the security properties of your language? Purity? Safety of message-passing and logical "ownership" of resources? Analyses of time and memory complexity? All this is already being done under the "type system" umbrella -- though they are also attacked by other approaches and toolsets such as "abstract interpretation" or "program verification".

There is a different point of view among the people who are sufficiently familiar with proof assistants to consider programming and just proving programs safe on a case-by-case basis (using a particular system as a proof technique or tactic). neelk has already mentioned that trend here before. I'm not sure, however, that's what you should advise "young men" to try: given the state of the tooling, it requires a certain amount of insanity.

Popularity?

How many people actually use SML?

Er, I wasn't talking about SML there, but about any kind of serious type system design. In any case, come on, what's with pulling out the popularity argument?

Is the approach popular, is

Is the approach popular, is the language popular? We are talking about "the" approach to language design, and the SML and Haskell methods, while very interesting, don't seem to be very mainstream.

Since when

Since when does "popular" equate "superior"? Especially in science and engineering? Especially in PL, where "the mainstream" tends to be neither great nor particularly interesting? How to best achieve mediocrity isn't what we are talking about, or did I miss something?

If you make a fantastic

If you make a fantastic hammer but only people with 99th-percentile hand strength can use it, you're going to get a lot fewer nails hammered than with a merely good hammer almost everyone can use.

Even if formal design aids in making a better language, even science and engineering understand tolerances. At some point, a programming language will be 'good enough' to let people instruct the computer what to do.

Especially in science and

Especially in science and engineering?

The difference between craft and engineering is basically a matter of organization and scale. You can develop nicely crafted, unique and masterful artefacts in which you invest all your skills and education, and also prove that you have done what you have claimed, but that's not engineering, at least not after the first industrial revolution, which was already a while back. Assembling stuff from replaceable components was the last attempt to define software engineering in a way that is on par with what was achieved in other fields more than 100 years ago. Everyone can judge whether it succeeded.

Instead we use the second-best approach: large software "ecosystems" with lots of contributors who build stuff that matters because it is widely used and backs our infrastructure. Since this happens on a scale beyond strict organization, the call for popularity isn't merely populism, and it isn't entirely irrelevant.

Misunderstanding

OK, time out. I don't dispute that there is a cost to formalism, and that you can get somewhere without it. But if you go back in the (sub)thread, that wasn't the point of the discussion. The question was (1) what the advantages of formalism are, and (2) whether it is used or even necessary sometimes. Language popularity is irrelevant for these questions.

I thought the question was

I thought the question was about the benefits of, or need for, big upfront formalization in language design vs. using formalisms to understand things over time. My point was merely that those languages that had big upfront formalization weren't very common, and were not noticeably more successful than those languages designed in a more agile fashion (where formalism informs after the design, or indirectly based on experience). I'm not arguing against the advantages of formalisms, but about when the best time to use them is.

"Successful"

... and were not noticeably more successful than those languages designed in a more agile fashion ...

It seems to me that this argument turns on the definition of the word "successful," with Sean weighting adoption much more heavily than Andreas.

Exactly

Adoption is a completely useless metric because it depends on so many social, political, commercial, and accidental variables that it tells zero about the technical quality of a language. Admittedly, we have no good and objective measure for this "technical quality", and probably never will. But that is a different discussion (which we had before).

I think that what Sean is

I think that what Sean is frustrated about is that even though we have no good objective measure for technical quality, the proxy that academia uses to evaluate whether a paper is publishable is the level of formalism. The original post was asking for justifications for this, or perhaps for other, better proxies.

sort of...

I just think that being formal at the wrong time is not very useful, and that it is often better to do an explicit formal analysis "after" you have done the design and implementation rather than before. Do it if and when it feels like it will be useful; don't go with any dogma that says you have to start there.

I've seen plenty of papers that use formalism as their scientific contribution when they are really providing design contributions and the formal discussion is quite uninteresting. In these cases the formalism is just used as a checkmark for the PC, but really that's another issue.

To be fair, in process

To be fair, in process methodology terms, Guiseppe refuses an iterative, incremental approach but attempts a single big-step transition, after an initial exploration phase, which is perfectly in line with "having a complete formal model in the end" - so it is only vaguely upfront.

The reason why we can't make any recommendations is basically that this is still pioneering work. No one has an idea whether it scales, or whether the modeling phase turns out to be a huge time sink with little gain other than proving lots of uninteresting theorems correct in cumbersome formalisms. So may a new generation of self-taught hotshots with their open source projects and the communities that grow around them check that out, engage in the ubiquitous debates, become popular, and prove us skeptics wrong.

Scala's evolution

I would rather we use Martin's own words than anything else:

The Origins of Scala:

Now, during the Pizza and GJ experience I sometimes felt frustrated, because Java is an existing language with very hard constraints. As a result, I couldn't do a lot of things the way I would have wanted to do them—the way I was convinced would be the right way to do them. So after that time, when essentially the focus of my work was to make Java better, I decided that it was time to take a step back. I wanted to start with a clean sheet, and see whether I could design something that's better than Java. But at the same time I knew that I couldn't start from scratch. I had to connect to an existing infrastructure, because otherwise it's just impractical to bootstrap yourself out of nothing without any libraries, tools, and things like that.

So I decided that even though I wanted to design a language that was different from Java, it would always connect to the Java infrastructure—to the JVM and its libraries. That was the idea. It was a great opportunity for me that at that time I became a professor at EPFL, which provides an excellent environment for independent research. I could form a small group of researchers that could work without having to chase all the time after external grants.

At first we were pretty radical. We wanted to create something that built on a very beautiful model of concurrency called the join calculus. We created an object-oriented version of the join calculus called Functional Nets and a language called Funnel. After a while, however, we found out that Funnel, being a very pure language, wasn't necessarily very practical to use. Funnel was built on a very small core. A lot of things that people usually take for granted (such as classes, or pattern matching) were provided only by encodings into that core. This is a very elegant technique from an academic point of view. But in practice it does not work so well. Beginners found the necessary encodings rather difficult, whereas experts found it boring to have to do them time and time again.

As a result, we decided to start over again and do something that was sort of midway between the very pure academic language Funnel, and the very pragmatic but at some points restrictive GJ. We wanted to create something that would be at the same time practical and useful and more advanced than what we could achieve with Java. We started working on this language, which we came to call Scala, in about 2002. The first public release was in 2003. A relatively large redesign happened early 2006. And it's been growing and stabilizing since.

Derive from this quote whatever interpretation you like, but it stands on its own merit!

See also, A Brief History of Scala (LtU):

In 1999, I joined EPFL. I thought that I would now use my academic freedom to do language design starting from a clean sheet. I still wanted to combine functional and object-oriented programming, but without the restrictions imposed by Java. I had found out about join calculus, and believed that this would be a great foundation on which to base such a unification. The result was Funnel, a programming language for functional nets. This was a beautifully simple design, with very few primitive language features. Almost everything, including classes and pattern matching, would be done by libraries and encodings.

However, it turned out that the language was not very pleasant to use in practice. Minimalism is great for language designers but not for users. Non-expert users don't know how to do the necessary encodings, and expert users get bored having to do them over and over again. Also, it became quickly apparent that any new language has a chance of being accepted only if it comes with a large set of standard libraries.

Funnel led to Scala, whose design began in 2001, and which was first released in 2003. Scala is not an extension of Java, but it is completely interoperable with it. Scala translates to Java bytecodes, and the efficiency of its compiled programs usually equals Java's. A second implementation of Scala compiles to .NET. (this version is currently out of date, however).

Historical point

Rather, I'm arguing that their effects are in practice indirect, and that this is as it should be. It was much easier to call out covariance as having formal problems in Eiffel after the fact than it would have been to overthink it during language design.

At the time of the creation of Eiffel, the relation between inheritance and subtyping, and in particular the idea that inheritance should preserve substitutability, was already understood in the research community. See for example Cardelli's "On Understanding Types, Data Abstraction, and Polymorphism" (1985), which says quite clearly that overridden field types should be subtypes of the original field type.

I tend to think that if Meyer had gone through the process of looking for a theoretical framework in which to check the soundness of his language design, he would have found the covariance flaw -- just as with the Java array situation, which came even later.

Of course, you could say that this "check with the theory experts" step is impractical or should not be necessary. The first attempt fails, and then the next ones only need to "check with previous practical experience in the field". That puts the theory in the eternally-unsatisfying position of "I would have told you so", but the asymptotic cost over many repeated attempts is not so high.

I do agree though with the underlying meta-argument that a beautiful semantics is more costly to develop than a practical design, and for this economical reason it tends to happen as a second step rather than simultaneously. I would say, though, that one should not consider the design "complete" until this theoretical feedback loop has been performed.

Of course, you could say

Of course, you could say that this "check with the theory experts" step is impractical or should not be necessary. The first attempt fails, and then the next ones only need to "check with previous practical experience in the field".

That isn't really my argument. My point is that you can't really build your theoretical framework very well without experience using the feature. It's not that the theoretical framework is so expensive to build up front; rather, it is much cheaper to build in hindsight.

I would say, though, that one should not consider the design "complete" until this theoretical feedback loop has been performed.

I wouldn't disagree with you on this. However, I would argue that the design of a language is never really "complete." There is always that new version that fixes old problems and introduces new ones.

Hmm...

This is how I feel whenever someone mentions space leaks in Haskell. It is quite an interesting formal problem, but it appears to put the language beyond mortals.

I think the technical challenge is quite manageable, but the problems with space usage in Haskell actually reflect a deep fault line in the design of the language, or perhaps better, the intuitions behind the language.

From the point of view of what results we compute, Haskell is pure, and so supports a very rich notion of equational reasoning. Getting comfortable with this is basically what is meant by learning to think in Haskell. However, the performance model of Haskell (call-by-need) is not compositional, since the cost of a variable reference depends on whether the program context has forced it yet or not. So reasoning accurately about costs requires you to reason about the aliasing of thunks. But this means performance is not preserved under equational transformations.

Formally solving the problem seems fairly straightforward to me: you need to develop a version of separation logic for call-by-need, which will let you reason about the identity of thunks. But this would be against "the spirit of Haskell", since reasoning about identity and aliasing forces you to abandon many of the methods of functional programming.
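For concreteness, the textbook illustration of that non-compositionality (a generic Haskell example of my own, not specific to FRP): the two definitions below are equal as functions, so equational reasoning treats them as interchangeable, yet their space behaviour under call-by-need is completely different.

    import Data.List (foldl')

    -- Equationally, both compute the sum of the list. Under call-by-need,
    -- however, the first builds a chain of unevaluated (+) thunks as long as
    -- the list before anything gets added (unless strictness analysis happens
    -- to rescue it), while the second forces the accumulator at each step and
    -- runs in constant space.
    sumLeaky :: [Integer] -> Integer
    sumLeaky = foldl (+) 0     -- lazy accumulator: thunk build-up

    sumStrict :: [Integer] -> Integer
    sumStrict = foldl' (+) 0   -- strict accumulator: constant space

    main :: IO ()
    main = print (sumStrict [1 .. 1000000])

Whether the leak actually bites also depends on whether some other part of the program has already forced the accumulator, which is exactly the aliasing-of-thunks point above.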

Formalism and Coherence

I've never actually seen a paper about the relevance of formalism in language design, as a topic of its own. So I'm somewhat interested in answers here, too.

If I were to write a thesis on the subject, it would be along the lines of: Formalism is for validation. What the PL community needs, desperately, is vision and verification.

Though, I do appreciate formalism. I think we should design with it in mind, much like we design with one eye on efficient implementation.

An Idealized MetaML: Simpler, and More Expressive

By Eugenio Moggi, Walid Taha, Zine El-abidine Benaissa and Tim Sheard, 1999.

Link to the paper on CiteSeer

MetaML is a multi-stage functional programming language featuring three constructs that can be viewed as statically-typed refinements of the back-quote, comma, and eval of Scheme. Thus it provides special support for writing code generators and serves as a semantically sound basis for systems involving multiple interdependent computational stages. In previous work, we reported on an implementation of MetaML, and on a small-step semantics and type-system for MetaML. In this paper, we present An Idealized MetaML (AIM) that is the result of our study of a categorical model for MetaML. An important outstanding problem is finding a type system that provides the user with a means for manipulating both open and closed code. This problem has eluded efforts by us and other researchers for over three years. AIM solves the issue by providing two type constructors, one classifies closed code and the other open code, and describing how they interact.

MetaML is an example at the meta level: they used formalism (categories) in the design of a programming language formalism.
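For readers who haven't seen staging: here is roughly what the bracket/escape discipline buys you, sketched in GHC's typed Template Haskell rather than MetaML itself (the Code Q type is from recent GHC versions, older ones spell it Q (TExp a), and power/cube are just illustrative names of mine).

    {-# LANGUAGE TemplateHaskell #-}
    module Staged where

    import Language.Haskell.TH.Syntax (Code, Q)

    -- MetaML-style staged power: the recursion on the exponent runs at code
    -- generation time, so the generated code is a fully unrolled product.
    power :: Int -> Code Q Integer -> Code Q Integer
    power 0 _ = [|| 1 ||]
    power n x = [|| $$x * $$(power (n - 1) x) ||]

    -- In a separate module (because of TH's stage restriction):
    --   cube :: Integer -> Integer
    --   cube y = $$(power 3 [|| y ||])   -- elaborates to y * (y * (y * 1))

The typed quotations play the role of MetaML's brackets; what the AIM paper adds on top of this kind of sketch is a type-level account of open versus closed code.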

Looks interesting, off to

Looks interesting, off to reading then :)

Just a quick note

I am implementing a language for game development (I have extensive experience in the field); the language is a relatively simple variation of F# plus knowledge about the game world, the flow of time, coroutines, optimizations, rendering and input management (http://casanova.codeplex.com/).

The path we are following is:
a) vision + samples + informal language description
b) more formal semantics and actual implementation as a meta library (templates in C++ for the first prototype, F# type providers for the second, code generator for the third; we are currently at step two)
c, d, ...) extensions, samples, case studies, etc.

I would like to convince the game dev community that steps (a) and (b) are very different, in that without (b) you cannot move to (c) without introducing a load of subtle problems.

More inspiration

Luca Cardelli's work on Quest.

Joe Goguen's work on ADJ and general systems theory which led to Clear and OBJ and its very expressive module systems.

I am not sure there is a single paper here for you to read, but the authors do take a formal approach to things.

[Edit: What I am trying to say is, if you watch and observe how people do things well, you can learn without ever having it concretely explained to you.]

I don't see how you can avoid some

The purpose in formalizing a language is primarily to reason about the set of all programs rather than to reason about an individual program. So for a language designer, whose primary interest is the set of all possible programs in their language, I don't see how you can avoid some level of formalism.

Formalism also helps in reasoning about individual programs. However, most programmers are fine with ad-hoc reasoning rather than anything formal, but ad-hoc reasoning about something as abstract as the set of all programs in a specific language won't really convince anyone of anything.
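To make that concrete with a toy sketch (the standard booleans-and-numerals calculus; the names are my own): once the language is data, a claim like "well-typed terms that aren't values can always take a step" quantifies over every program of the language, which is exactly the kind of statement a designer cares about and ad-hoc reasoning can't reach.

    -- A toy language, formalized as data.
    data Ty = TyBool | TyNat deriving (Eq, Show)
    data Tm = TmTrue | TmFalse | TmIf Tm Tm Tm
            | TmZero | TmSucc Tm | TmIsZero Tm
            deriving Show

    typeOf :: Tm -> Maybe Ty
    typeOf TmTrue       = Just TyBool
    typeOf TmFalse      = Just TyBool
    typeOf TmZero       = Just TyNat
    typeOf (TmSucc t)   = case typeOf t of
                            Just TyNat -> Just TyNat
                            _          -> Nothing
    typeOf (TmIsZero t) = case typeOf t of
                            Just TyNat -> Just TyBool
                            _          -> Nothing
    typeOf (TmIf c t e) = case (typeOf c, typeOf t, typeOf e) of
                            (Just TyBool, Just tt, Just te) | tt == te -> Just tt
                            _                                          -> Nothing

    isNumVal :: Tm -> Bool
    isNumVal TmZero     = True
    isNumVal (TmSucc t) = isNumVal t
    isNumVal _          = False

    -- One small step of evaluation; Nothing means "no step possible".
    step :: Tm -> Maybe Tm
    step (TmIf TmTrue  t _)    = Just t
    step (TmIf TmFalse _ e)    = Just e
    step (TmIf c t e)          = (\c' -> TmIf c' t e) <$> step c
    step (TmSucc t)            = TmSucc <$> step t
    step (TmIsZero TmZero)     = Just TmTrue
    step (TmIsZero (TmSucc n))
      | isNumVal n             = Just TmFalse
    step (TmIsZero t)          = TmIsZero <$> step t
    step _                     = Nothing

    -- The designer's claim ("progress"): for *every* term, if typeOf succeeds
    -- and the term is not already a value, then step succeeds. That is a
    -- statement about the whole language, not about one program.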

I recommend this paper:

Programming pearls: little languages, by Jon Bentley
Communications of the ACM, 29(8):711-21, August 1986
http://dl.acm.org/citation.cfm?id=315691

Use of formalism in the design of Oz

The design of Oz is an interesting case study of how to use formalism in language design. At all times during the long design process (which has never ended, in fact), there must coexist a simple formalization and an efficient implementation. Here's a quote from Logic Programming in the Context of Multiparadigm Programming: The Oz Experience, which is the one paper that probably best describes the overall approach:

At all times during development, there is a robust implementation. However, the system's design is in continuous flux. The system's developers continuously introduce new abstractions as solutions to practical problems. The burden of proof is on the developer proposing the abstraction: he must prototype it and show an application for which it is necessary. The net effect of a new abstraction must be either to simplify the system or to greatly increase its expressive power. If this seems to be the case, then intense discussion takes place among all developers to simplify the abstraction as much as possible. Often it vanishes: it can be completely expressed without modifying the system. This is not always possible. Sometimes it is better to modify the system: to extend it or to replace an existing abstraction by a new one.

The decision whether to accept an abstraction is made according to several criteria including aesthetic ones. Two major acceptance criteria are related to implementation and formalization. The abstraction is acceptable only if its implementation is efficient and its formalization is simple.

Reminds me of PLT/Racket

This sounds a lot like the approach taken by the PLT/Racket guys, at least based on the papers I've read...

Peter, the paper describes

Peter, the paper describes your process without any analysis or discussion of experience with how the process actually worked in practice. It would be nice to eventually see something more in depth on when the process was applied, what specific abstractions were accepted or rejected by it, and so on... this would be useful to us even if it was not a very scientific analysis.

I like the idea that the formalizations should be simple, but this criterion would seem to immediately filter out more inventive, and hence less understood, abstractions. I think this is fine for a production or teaching language, where you don't want to incur that much risk, but for a research language I'd think you would want to take more risks. I'm not sure where you think Oz is placed on that spectrum.

I don't read "formal X" sections in papers

Since programming languages are already themselves formalisms, I see little point in translating a formalism into another, much less perspicuous, formalism and claiming that something is proved or explained thereby. The "formal semantics" is just as likely to have bugs as any other program, and more likely than the prose.

Wow

Let me get this right. You are saying that generations of researchers don't know what they are doing? And you can tell that without even reading what they are doing?

In the years I've been reading

In the years I've been reading this blog, I can't remember any technical criticism articulated against a formal proof in a paper, of which hundreds have been cited. This means they are either flawless, which is hard to believe, or they just get skipped by everyone.

Proofs are tedious

It is just tedious work to do a proof when a claim is correct anyway. Except that it usually isn't.

In other words, it's perfectly fine if a reader doesn't bother with a proof. It's something else entirely when an author doesn't bother with a proof, or a reviewer does not check its plausibility.

Actually, a proof is tedious

Actually, a proof is tedious work, and it's very much a detail that often doesn't belong in a modern conference paper (barring exceptions of non-tedious, interesting proofs). Hopefully, the rise of machine-checked proofs will eliminate tedious proofs from papers, simply because you can't really publish the code of your Coq proof...

Providing a proof in your paper does open it up to attack if it's incorrect. I've championed a paper on a PC before that got torpedoed in this manner. I'm confident that the paper would have gotten in without the proof, which wasn't very interesting even if it was correct.

Hm

Actually, a proof is tedious work, and it's very much a detail that often doesn't belong in a modern conference paper (barring exceptions of non-tedious, interesting proofs).

It's true that the proof is often of little interest to the reader. The (detailed) outline is often important, though, for people that want to "reproduce" the result by redoing the proof -- but it would be ok to publish the details of the proof in a separate Technical Report for example.

However, either way the proof must be shown to the reviewers at some point, which means that it must be part of the "publication" -- whether it is exposed to the general reader in the main part of the article or not. Because otherwise it's just too easy to slip away without doing the proof, or even to believe that one has a (sketchy) proof of a false result.

Hopefully, the rise of machine-checked proofs will eliminate tedious proofs from papers, simply because you can't really publish the code of your Coq proof...

I suppose you mean that it can't be published as part of the paper because it is too long. But it could be distributed separately, and I believe it must be -- just as you should publish the code of a software prototype you implemented and discussed in your paper.

If the mechanized proof is publicly accessible, I agree that the proof burden can be removed from the paper. There still needs to be an outline of the formalization, though, so that the reviewer can understand which statements (in the paper) are claimed to be proved, and confront them with the formal statements (in the mechanized proof) to make sure that they faithfully represent the claims: you don't need to check the proof anymore (Coq does that for you), but you need to trust the statement.
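To illustrate the "trust the statement" point with a tiny sketch of my own (in Lean rather than Coq, but the point is the same; Safe and all_programs_safe are made-up names): the checker happily accepts the following, and the grand-sounding theorem name tells you nothing until you read what Safe actually says.

    -- The proof is machine-checked; the reader's remaining job is to confirm
    -- that this *statement* really is the claim made in the paper's prose.
    def Safe (n : Nat) : Prop := n + 0 = n

    theorem all_programs_safe : ∀ n, Safe n := fun _ => rfl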

Providing a proof in your paper does open it up to attack if it's incorrect. I've championed a paper on a PC before that got torpedoed in this manner. I'm confident that the paper would have gotten in without the proof [..]

In the absence of outside pressure that should be a non-event: the paper gets rejected with an explanation of the problem, and the author can fix the proof and resubmit next time. You describe this as a negative experience because of the pressure to publish; indeed, in some singular situations it is sometimes possible to have a bigger chance of getting accepted by doing less work. But the quality of the research would also have been lessened.

I understand and do not blame optimizing for publications rather than quality, but that's an economic, rather than fundamental, argument against formalization.

Don't conflate pressure to

Don't conflate pressure to publish with wasting time. Things like this, as far as I've seen, are generally excuses by reviewers to sink research that they don't like.

Goes both ways

That's a pretty serious accusation you are making there. I can't say that I have witnessed that kind of thing more often than the inverse: reviewers waving through papers that are fundamentally flawed, or that make wild hand-waving claims, just because they are supposedly "relevant" or superficially "nice" (or because the reviewers are just being lazy).

[Redacted]

[Redacted]

This is too controversial for LtU. I will say that I've talked to a surprising number (to me) of lauded PL researchers that are unhappy about the papers that make it through the review process at the top-tier.

PCs are weird. You'll see a

PCs are weird. You'll see a bit of both, but I think torpedoing a paper for a flaw is more common than hand waving a flawed one through. However, if you have any obvious examples (especially in a 1st tier conference), I'd love to see them.

I could easily name papers

I could easily name papers from PLDI or ICFP, and even POPL, which should never have made it. But I'm not stupid enough to do that here. ;)

Now you've got me curious...

Could you email the list privately to William Cook?

Ok, I lolled...

Ok, I lolled...

The aim of the paper itself

The aim of the paper itself is to convey an idea to the reader. The aim of a proof is to convince the reader of something. Sometimes the two aims can happily be handled by a proof, but many times a proof is not the best way to convey understanding. From my math classes I remember that quite often the proof conveyed no real understanding. Often these proofs would go like this:

Let x be {some magical value}. [... derivation involving x ...] QED.

The particular x that was chosen made the proof work out perfectly, but it did not explain how the author of the proof arrived at that x. A simple example is proving the quadratic formula correct by substituting it into ax^2 + bx + c and simplifying. While this convinces the reader that the formula is correct, no new idea or technique is taught. PLT proofs are often bad at conveying ideas in another way: the interesting bit is drowned in a long and boring derivation (it also happens a lot that they leave out the proof entirely, e.g. "by structural induction, QED"). Therefore, unless the proof is the best way to convey the idea, it belongs in an appendix or on an external web page.
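For concreteness, that quadratic check is just the following mechanical calculation (writing D = b^2 - 4ac); every step is forced, and nothing in it hints at where the formula came from (completing the square):

\[
x_{\pm} = \frac{-b \pm \sqrt{D}}{2a}
\;\Longrightarrow\;
a x_{\pm}^2 + b x_{\pm} + c
= \frac{2b^2 \mp 2b\sqrt{D} - 4ac}{4a}
+ \frac{-2b^2 \pm 2b\sqrt{D}}{4a}
+ \frac{4ac}{4a}
= 0.
\]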

When reading for example a type system paper, it feels like the whole paper is structured around soundness proofs and around giving a complete formalization including features that are well known (like conditionals). The formalization & proofs are the justification necessary for publishing the paper. What should be the justification is the usefulness or beauty. Give me more motivating examples over a boring proof any day. Preferably real world sized executable examples on an external web page, but that's a lot to ask (unfortunately).

Type systems have the burden of proof

The type systems I know of that are worked on in research circles are generally used for purposes such as:

  • providing easy, efficient compilation by dropping all type information after static type-checking -- in direct violation of, say, Bracha's "pluggable type systems" idea
  • being the proof-term calculus of a theorem prover
  • enforcing security properties
  • enforcing race-free concurrent programs

It seems to me that having a proof of the type system's soundness, for those intended purposes, is essential. Because your proof system serves as a proof technique (i.e. it is considered by its users to provide absolute mathematical certainty of a fact), and proof techniques need to be themselves proven for their results to hold.

Now, there are also cases where things that could be called "type systems" are described that are not used as proof engines. The work on static analysis of programs in existing dynamically typed languages is often of that kind; such systems report some errors but not all, and don't claim that type-checked programs are correct in *any* sense. Those can say "we implemented our system and found N bugs in the standard library and got N' reports from programmers that the tool was useful", and that's a very good thing -- I'm thinking of the work on Erlang's Dialyzer. Sometimes they can prove that they don't have any false negatives, that all programs they flag as ill-typed really contain runtime errors; and that's a very good sign of the quality of the work.

Wrapping up, I would say that most uses of type systems have soundness as a prerequisite for usefulness. But of course it all depends on how you use the type system.

We are not in disagreement

We are not in disagreement that a proof is important. I do not think that it deserves prime real estate in the paper, because the space is better used for an explanation optimized for conveying the idea rather than convincing.

Exactly so

The aim of the paper itself is to convey an idea to the reader. The aim of a proof is to convince the reader of something.

Exactly so, and I am saying no more (and no less) than that drawing a bunch of horizontal rules and putting some equations above and below them is not a rhetorical strategy designed to convince me of anything. This is a fact mostly about me and rather less about formal semantics sections.

In this case my primary

In this case my primary complaint was that the proof did not really seem like evidence of anything; it was just included because it was expected, which incidentally led to its downfall (the paper got into the next competing conference, so it's not a big deal).

Less is actually more. Authors need to focus on content in their papers that really makes their case, and not include something as filler. Actually, the author isn't going to be lazier for it; focus is actually harder than dumping into a paper all the work that was done. Much of the work we do is necessary but not fit for publishing.

More specifically:

  1. Bad: a PL idea paper whose evidence is an uninteresting proof about its type system (consistency, completeness, etc.). The proof isn't evidence that the idea is good.
  2. Good: a PL idea paper that spends time convincing us that the idea is useful. Include a proof if you have some crazy idea that seems wrong (e.g., the type system appears to be incomplete). These papers were common in the 80s and 90s but have virtually disappeared today.
  3. Good: a PL theory paper that shows some interesting properties of a well-accepted or emergingly popular PL idea. The PL theory paper doesn't originate the idea itself. You might discover something bad about a good PL idea that wasn't obvious before (e.g., that the seemingly simple type system is actually incomplete).
  4. Good: a PL idea/theory paper that talks about some idea-like feature/property/technique with interesting formal implications. These papers basically enhance our toolbox.

I think most PL conference communities err on the side of formalism given that idea evaluation is so unscientific. So throw the proof into the paper so the PC has something they can evaluate with certainty, even if it's pointless to what the paper is claiming. In some cases, it's the PL community that is being lazy because they don't want to figure out how to handle idea papers. This is a bit ironic since many of our senior PL people today wrote those great idea papers in the 90s.

In fact, the safest course of action today, if you don't want to write a bad paper but do want to get it accepted, is just to write a 3 or 4 paper and avoid 1 (bad) and 2 (good, but hard to validate), where formalism can just as easily be replaced by empiricism.

I don't get your (1)

I don't get your (1). I think you have to distinguish two quite different instances:

1a. A paper presenting an uninteresting type system idea and proving it correct. Such a paper is "bad" with or without a soundness proof.

1b. A paper presenting an interesting type system idea. A soundness proof is crucial (and a decidability proof can't harm either), even if it's boring, because the nicest idea is worth nothing if it doesn't fly. Proving that it actually flies is an integral part of designing it. The paper would be "bad" if the author didn't bother doing that (whether that proof has to be put in the paper itself is a different question).

In either case, the paper isn't bad because of formalism or a proof. It can only be bad despite it.

Don't underestimate the subtleness of areas like type systems. It is practically impossible to get a novel idea right out of thin air. There is a tight feedback loop between "design" and "proof". Just like with writing a program and testing it, you cannot do one without the other and be credible.

Type systems might be PL,

Type systems might be PL, but not all of PL is type systems. Actually, the sort of type systems you are referring to, the ML-ish ones really, that are subject to standard safety/soundness/decidability proofs, don't even represent all the interesting type systems in PL. There are plenty of undecidable and/or unsound type systems out there that are interesting, even from a formal perspective. That we seem to focus on a rather narrow segment of interesting type systems in PL is probably related to our proof culture. But this is really orthogonal to our argument.

When I read a PL idea paper, I care about whether the idea is useful and plausible. A proof can provide evidence that the type system is plausible, but it says nothing about it being useful. Plausibility can also be shown through an implementation, which is both weaker and stronger than a proof (much of what we can prove cannot be implemented, and vice versa). Finally, plausibility can be argued, although I would definitely feel more comfortable with at least a prototype.

Now, given limited time/resources, do you want to focus on your design, proof, or implementation? You probably can't afford to do it all, since most ideas ultimately fail (even if published), so pick two, and design had better be one of them. You could maybe skimp and do all three, but you are setting yourself up for mediocrity.

So you do a lot of design, build a prototype, write some programs to get some experience with the idea. Of course, there are subtle bugs in your design, but so what? It's not important right now. Or you do lots of design, do a proof, and... well, you have a proof. You have no bugs, but you have no experience either.

Edit: if the plausibility of your PL is strongly doubted, a proof can be useful, but that proof also becomes interesting because of the strong doubt. On the other hand, you might have other bigger problems at that point (e.g., difficult/impossible implementation).

Now, if the idea flies after the initial work, you can spend more time on it: you do the proof, find the bugs, refine your idea... in a feedback loop. It's just that the feedback loop will terminate early for most ideas because of some conceptual flaw that kills them (rarely a formal one, I'd bet).

We can also argue that only the really great, proven ideas deserve to be published, the ones that have gone through a few years of work and validation and have really stood the test of time. But this would make for incredibly empty conferences. The PL designer would be deprived of peer review (not just from the PC, but from the community after the paper is accepted); peers would be deprived of early access to useful/plausible ideas, and so on. Edit: a lot of interesting PL work is currently going this route, given the idea that publishing is not useful unless tenure is involved; I don't think this is how it should be.

Soundness is necessary, but not sufficient

This whole conversation seems to me to be based on the mistaken idea that there are lots of papers accepted to top conferences based solely on their inclusion of boring (type safety?) proofs. In my experience, there are actually very few papers nowadays that describe such proofs in any detail, and if anything, there are far too many papers that don't present much formal evidence for their claims at all. I've been on PCs for POPL and ICFP, and I can say with confidence that no paper was accepted *because* the author included a type safety proof. The existence of a type safety (or whatever the appropriate type of soundness) proof was considered a necessary but not sufficient condition for publishing. Most papers are accepted or rejected based on their perceived coolness and originality, and the hard work of producing sound type systems is, if anything, undervalued by comparison.

I think we value different

I think we value different things, so as a result we are seeing different biases. I completely agree that the proof (of soundness or otherwise) is not sufficient, it is just necessary (often), and that an idea that is bad or boring will be rejected before the reviewer even gets to the proof. No one here is really arguing about that (at least I'm not).

If the proof is just ceremony, why is it there? Your position is that it is very important and deserves more value; my point is that it is not important at all and deserves less value, even to the point of non-inclusion. I really miss the idea papers of the 90s: they were interesting, useful, and influential, with nary a proof to be seen. In fact, you used to have to go all the way to POPL* to get any proofs at all.

* Or ICFP.

Scientific maturity

If the proof is just ceremony, why is it there?

Again, doing the proof isn't ceremony, it is your obligation as a designer. If you don't do it, it is practically guaranteed that your design is broken as stated. (Unless the idea is so trivial that it is not worth publishing anyway.)

I really miss the idea papers of the 90s: they were interesting, useful, and influential, with nary a proof to be seen.

I suppose many people consider that progress: the field has matured from whacky hacky to actual science.

(Btw, I don't really see that many great informal PL idea papers in the 90s. Care to give examples?)

Proofs aren't guarantees

[D]oing the proof [...] is your obligation as a designer. If you don't do it, it is practically guaranteed that your design is broken as stated.

The second would follow from the first only if having a proof practically guaranteed that your design was not broken as stated. But unfortunately it does no such thing: "[a]lmost all theorems are correct, but almost all [non-trivial] proofs have bugs." If proofs aren't a successful strategy for convincing the reader, then their benefit is social: having a proof shows that you know the rules of the club. (I started to write "merely social", but that's unfair.)

Non sequitur

The second would follow from the first only if having a proof practically guaranteed that your design was not broken as stated.

No, that's a non-sequitur. !A=>!B does not entail A=>B.

But unfortunately it does no such thing: "[a]lmost all theorems are correct, but almost all [non-trivial] proofs have bugs."

You misunderstood the quote. "All theorems" refers to all theorems for which proofs have been given, not literally "all" theorems.

If proofs aren't a successful strategy for convincing the reader, then their benefit is social

Also a non-sequitur. "Necessary" doesn't imply "sufficient", but "not sufficient" doesn't imply "not necessary" either. See above.

For something to be a

For something to be a theorem, a proof has to have been given. So the set "all theorems for which proofs have been given" is the same set as "all theorems". The statement is trying to say the following: almost all things that have a published "proof" are true, but almost all non-trivial "proofs" have bugs. For example, Fermat's Last Theorem is probably true, but the proof probably contains bugs.

In PL, a proof cannot be

In PL, a proof cannot be used as black-or-white evidence. It's not that the proof will have bugs; it's that the abstraction performed on the language to make formal specification possible at all is often just a huge hand wave. Instead, the proof is merely evidence, and a strong prose argument is still necessary to convince the reader that it is actually relevant.

Tautology

If you want to be that precise then note that "all theorems are true" holds by definition, so the sentence wouldn't convey much under this strict reading.

Anyway, I was merely trying to explain in a few words why John's interpretation of the quote was bogus (apparently he was using it as an argument that proofs don't have value, by implying that most plausible statements are correct despite them -- and missing the fact that the quote only talks about statements for which people have at least made a comprehensive effort to prove them, which typically weeds out most errors in the statement, even if there remain minor bugs in the proof itself).

What I meant

By "theorem" I mean a claim supported by a proof. The quotation would have no point or meaning if "theorem" always meant "true theorem" (or for that matter if "proof" always meant "valid proof"). There are many unknown true statements, but they are not theorems in my sense until someone utters them along with a proof (uttered without a proof, they are conjectures). Certainly not all such claims are true (hence some theorems are false, or pseudo-theorems if you prefer). However, I believe that these are few, whereas incorrect proofs (or proofs too vague to be called correct or incorrect) are legion.

You're right that "has no proof" => "broken" is not equivalent to "has a proof" => "not broken". What I said was that the latter, and not the former, is what is needed to establish the utility of proofs.

Again, doing the proof isn't

Again, doing the proof isn't ceremony, it is your obligation as a designer. If you don't do it, it is practically guaranteed that your design is broken as stated.

No, I disagree. Plenty of languages are designed and used without this step occurring up front, if it ever occurs at all. The proofs-first viewpoint of PL design is only held by a very narrow segment of academia, which doesn't mean it's wrong, but it's definitely not mainstream.

I suppose many people consider that progress: the field has matured from whacky hacky to actual science.

No it hasn't. Ours is still very much a big-D field, and we are only pretending to be more scientific by often requiring pointless, unrelated, and very unscientific rigor in our publications. Science can and does occur in our field (defined as great empirical and theory work), just not as often as our publications would imply.

(Btw, I don't really see that many great informal PL idea papers in the 90s. Care to give examples?)

In my field, I could leaf through pre-'00 OOPSLA proceedings all day; I don't know where to start! Take "Self: The Power of Simplicity", "Virtual Classes: A Powerful Mechanism in Object-Oriented Programming", or "Subtypes vs. Where Clauses", which is a great types paper without any proofs whatsoever (by people who know how to do proofs). I believe part of the reason for OOPSLA's fall is that it has dabbled in too much pseudo-science of papers with meaningless proofs.

Funny

Plenty of languages are designed and used without this step occurring up front

Yes, and plenty of languages suck. Don't you think there might be some causality underlying this correlation?

[...] often requiring pointless, unrelated, and very unscientific rigor in our publications.

I don't really know what you are referring to. Can you give examples of rigour that would be pointless or unrelated? And is frequent?

Science can and does occur in our field (defined as great empirical and theory work)

So an approach is only scientific if it leads to "great" results? That would be a rather backwards definition...

I believe part of the reason for OOPSLA's fall is that it has dabbled in too much pseudo-science of papers with meaningless proofs.

It is funny you should say so. My interpretation actually is that OOPSLA grew too much of a reputation of being an inflated conference with far too much babble and far too little content, that failed to ever establish proper quality standards. ;) (Which isn't to say that there haven't been great papers at OOPSLA, but they have rather been the exception IMHO.)

Yes, and plenty of

Yes, and plenty of languages suck. Don't you think there might be some causality underlying this correlation?

I disagree. Plenty of languages don't suck, and plenty of languages done the "right" way aren't useful. I don't see causality.

I don't really know what you are referring to. Can you give examples of rigour that would be pointless or unrelated? And is frequent?

I'd rather not, but it seems to be a majority of papers in the conferences I go to (not to say the papers are bad, but I scratch my head over why they include evidence sections that aren't much evidence).

So an approach is only scientific if it leads to "great" results? That would be a rather backwards definition...

I used "great" as an adjective for the work, to distinguish them from the posers; I'm not against all theory papers, just the pointless theory sections of idea papers. You can get great results using any method that works for that.

It is funny you should say so. My interpretation actually is that OOPSLA grew too much of a reputation of being an inflated conference with far too much babble and far too little content, that failed to ever establish proper quality standards. ;)

Quality at OOPSLA can actually be quite high, and OOPSLA has much larger participation from and impact on industry than ICFP or POPL, so it's not a purely academic conference. Anyway, we obviously have different opinions about what we want to see, so it's understandable that we'd think something is broken for opposing reasons.

I suppose you mean that it

I suppose you mean that it can't be published as part of the paper because it is too long. But it could be distributed separately, and I believe it must -- just as you should publish the code of a software prototype you implemented and discussed in your paper.

Which naturally raises questions about the quality of storage and archival, and about the ability to run those external programs together with the referenced data. This becomes not much different from certification procedures, which require a detailed protocol for reproducing the exact same results with the described hardware and tooling.

If you are asking a question...

My first question back to you would be: Do you document the pre-conditions and post-conditions of your function definitions? If you do, can they be automatically verified by a computer?
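
To make the question concrete, below is a minimal sketch in Python of what machine-checkable pre- and post-conditions might look like. The `requires` and `ensures` decorators are hypothetical, written from scratch here rather than taken from any particular contracts library; real contract systems may check such conditions at runtime (Eiffel-style) or statically with a verifier.

    from functools import wraps

    def requires(pred):
        """Hypothetical precondition decorator: check pred over the arguments."""
        def deco(f):
            @wraps(f)
            def wrapper(*args, **kwargs):
                assert pred(*args, **kwargs), f"precondition of {f.__name__} violated"
                return f(*args, **kwargs)
            return wrapper
        return deco

    def ensures(pred):
        """Hypothetical postcondition decorator: check pred over the result."""
        def deco(f):
            @wraps(f)
            def wrapper(*args, **kwargs):
                result = f(*args, **kwargs)
                assert pred(result), f"postcondition of {f.__name__} violated"
                return result
            return wrapper
        return deco

    @requires(lambda xs: all(isinstance(x, int) for x in xs))
    @ensures(lambda r: r >= 0)
    def sum_of_squares(xs):
        return sum(x * x for x in xs)

    print(sum_of_squares([3, 4]))   # 25; both conditions are checked at call time

The point of the question is whether such conditions exist at all, and whether they live somewhere a machine can see them.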

Mathematics is a great medium for testing ideas. The number of bugs in mathematics has not reduced its usefulness. Euclid wrote the greatest books in maths, but mathematicians have spent the following millennia fixing the bugs in his work and searching for bugs where there are none in the specification itself, but rather in its misapplication.

It was not until János Bolyai that mathematicians saw the error of their ways in trying to prove the Euclidean parallel postulate and began to realize the benefits of different geometries.

Most importantly, Euclid still got the big picture right, and all his descendants had to do was fill in the detail.

Now, how many programming languages are as practical and theoretical as "measuring the earth", and how much their formalisms will contribute to the benefit of future millennia, who can say?

Yet we are beginning to see practical evidence of mechanizing formal semantics, such as the work on certified compilers. I agree with you that the likelihood of bugs in a program is a very important criterion, which is why I am confused that you suggest, unilaterally, that a formal semantics is just as likely to have bugs as any other program. Xavier Leroy has recently contributed a novel notion: the architecture for formalizing semantics mechanically greatly changes how much and how big of a system we can verify.

In fact, if I had to guess why most programs have bugs, it is because of volatile requirements and the difficulty of connecting prose. Most program prose is connected as if William S. Burroughs' Language-is-a-Virus "cutups" were a rallying cry for programmers. For example, see the MIT Technology Review article An oracle for object-oriented programmers, discussing the Matchmaker software:

[...] it takes as input the names of two objects and describes how to get them to interact with each other. To demonstrate how it works, the researchers applied it to an open-source program called Eclipse, which computer scientists use to develop programming tools for new computer languages.

In the Eclipse framework, the window that displays code written in the new language is called an Editor; a function that searches the code for symbols or keywords is called a Scanner. That much a seasoned developer could probably glean by looking over the Eclipse source code. But say you want to add a new Scanner to Eclipse, one that allows you to highlight particular symbols. It turns out that, in addition to your Editor and Scanner objects, you would need to invoke a couple of objects with the unintuitive names of DamageRepairer and PresentationReconciler and then overwrite a function called getPresentationReconciler in yet a third object called a SourceViewerConfiguration.

With Matchmaker, the developer would simply type the words “editor” and “scanner” into the query fields, and the program would return the names of the objects that link them and a description of the modifications required to any existing functions.

But programming languages generally do not have volatile requirements because they have massive source code dependencies; changing a language by growing it is hard for many reasons, and even when "changed" it is "growth" rather than "mutation". I can provide several reasons why "growth" is still difficult:

  1. Most languages are implemented using LL(1) or LR(1) parsers with semantic actions in the lexer. Extending the language generally implies "rule cracking", and rule cracking presents several formal difficulties: production rules are not composable, and testing for ambiguities doesn't prove the absence of ambiguities.
  2. Even when a language's parser is written using a parser generator that rejects ambiguous grammars, achieving desirable and intuitive syntax requires unnatural production rule factorings. Eventually the factorings decrease the auditability of the grammar.
  3. Parser generators with composable grammar properties have practical and theoretical concerns:
    1. Laurie Tratt has an excellent summary of many of the issues on his blog, Parsing: The Solved Problem That Isn't. The biggest problem for scannerless parsing techniques is ordered choice. While ordered choice helps solve the ambiguity crisis, it has the side effect that sentences parsable in the individual languages might no longer be parsable when the languages are combined (see the sketch after this list). Determining whether sentences no longer parse is not decidable.
    2. As projects become bigger and source code enters the millions of lines, batch throughput becomes extremely important. For the time being, parser generators tend to generate code that is less efficient and poorer at error handling than hand-rolled recursive descent parsers; usually the most senior member of an industrial compiler team maintains the parser.
      1. Dramatic changes in hardware architecture due to Moore's Law still do not change the ultimate reality: splitting the scanner out into its own component has practical advantages, such as handling off-side rules and eliminating comments, that scannerless parser generators with composable grammar properties do not offer.
  4. Many compilers currently rely on complex representation invariants across stages in the compiling pipeline that are too cumbersome to guarantee. Even pseudo-"Platform" systems like LLVM suffer from these issues. Changing the language can jeopardize these invariants, making battling for new features at the language design team table extremely contentious. C# designers use the Minus 100 Points triage system when deciding whether candidate features for a Future C# are "so compelling that they are worth the enormous dollar costs of designing, implementing, testing, documenting and shipping the feature." Modern languages with millions of users must also pay a "Language Design Tax".
  5. Most compiler engineers do not follow Andrew Appel's advice in Axiomatic Bootstrapping: A Guide for Compiler Hackers. Besides the portability and data independence concerns Appel addresses, there are other, tangential bootstrapping concerns:
    1. If a compiler is a program that takes an input, builds an intermediate representation, and produces an output, then what is the best way to do that? Beyond the simple problem of taking N languages, targeting M machines, and avoiding writing N*M compilers, how should you structure the system so that each transformation phase is easy to statically analyze? For advice on that, Andrew Appel wrote what can be thought of as Part 2 to the above bootstrapping paper: Foundational High-Level Static Analysis.
    2. If you haven't thought deeply about your methodology for bootstrapping your compiler, then how can you guarantee your compiler doesn't compile a backdoor into the UNIX login command? How can you guarantee it is trustworthy, at least insofar as Countering Trojan Horse attacks on Compilers, including scenarios where the compiler is not self-parenting? This is a property you can never achieve by accident; it must be established through formal semantics.
    3. In addition to the batch throughput of parsing and the fundamental problem of trustworthy, correct bootstrapping, modern languages have complex build dependencies, and recompiling everything from square one every time there is a change is tedious and not that helpful. Thanks to formalization efforts, there exists a build system whose build configuration is a purely functional language: Vesta. Vesta's execution model avoids unnecessary builds and automatically extracts dependencies (a toy sketch of the idea appears after this list).
  6. Even if dramatic changes in hardware architecture due to Moore's Law continue, and we can rely on parser generators more often, and our language never changes on the exterior, its run-time environment might have to. Those dramatic changes in hardware will continue to make reasoning about concurrent interactions more difficult, and guaranteeing sequential consistency an increasing challenge.
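
To make point 3.1 concrete, here is a minimal sketch in Python of PEG-style ordered choice, using tiny hand-rolled combinators (not any particular parsing library). A string that parses in one language on its own can stop parsing once that language is naively combined with another, because the ordered choice commits to the first alternative that matches:

    # Each parser takes (text, pos) and returns the new position on success, or None.

    def lit(s):
        def p(text, pos):
            return pos + len(s) if text.startswith(s, pos) else None
        return p

    def choice(*parsers):
        # PEG-style *ordered* choice: the first alternative that matches wins,
        # and later alternatives are never reconsidered.
        def p(text, pos):
            for q in parsers:
                r = q(text, pos)
                if r is not None:
                    return r
            return None
        return p

    def parses_fully(parser, text):
        return parser(text, 0) == len(text)

    keyword    = lit("if")    # from language 1
    identifier = lit("ifx")   # from language 2

    combined = choice(keyword, identifier)   # naive union of the two grammars

    print(parses_fully(identifier, "ifx"))   # True: "ifx" parses in language 2 alone
    print(parses_fully(combined, "ifx"))     # False: the choice commits to "if"
                                             # and never tries the "ifx" alternative

Detecting every such change in the combined language amounts to comparing the languages the two grammars accept, which, as noted above, is not decidable in general.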
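And to make point 5.3 concrete, here is a toy sketch in Python of the idea behind a purely functional build model. This is not Vesta's actual description language, just an illustration of the principle: when each build step is a pure function of its inputs, results can be cached by an input fingerprint, and only steps whose inputs actually changed need to be rerun.

    import hashlib

    _cache = {}   # (step name, input fingerprint) -> previously built result

    def fingerprint(*inputs):
        h = hashlib.sha256()
        for item in inputs:
            h.update(repr(item).encode())
        return h.hexdigest()

    def build_step(name, fn, *inputs):
        key = (name, fingerprint(*inputs))
        if key not in _cache:             # rebuild only when the inputs changed
            print("rebuilding", name)
            _cache[key] = fn(*inputs)
        return _cache[key]

    # "Dependencies" are simply the arguments each step consumes, which is
    # roughly how dependencies fall out of a functional build description.
    source   = "print('hello')"
    tokens   = build_step("lex",   lambda src: src.split(), source)
    compiled = build_step("parse", lambda ts: tuple(ts),    tokens)

    # Rerunning with unchanged inputs hits the cache; editing `source`
    # invalidates exactly the steps downstream of it.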

All of these issues are exactly the central issue of formal semantics: deterministically translating one meaning into another, without losing meaning or the ability to reason further.

On the human factors side, designing a language that is easy for its users to understand is difficult, but even more so when the language is designed with a ton of "cutups" (Perl), as in a study we recently linked on LtU: Perl vs. Random Syntax

I hope this answers your questions.

But programming languages generally do not have volatile requirements because they have massive source code dependencies; changing a language by growing it is hard for many reasons, and even when "changed" it is "growth" rather than "mutation". I can provide several reasons why "growth" is still difficult:

My overarching goal is to make a language that, among other things, makes growing the language easy.

I have written amethyst, a pattern-matching tool in which I plan to write all stages of an incremental compiler.

As for 1, 2, and 3, using amethyst as the parser generator solves most of the issues.

For composability and related properties, a crucial point is to write grammars in a structured way.

Most important is to restrict recursion. I allow only iteration, left and right recursion, and recursion annotated as nested(start, may_be_recursive, end).

This makes problems such as misplaced parentheses tractable even under composition.

The nesting information allows parsing to be equivalent to a top-down parser and to run in linear time.
I discovered this a year ago, but I have not yet integrated the approach into amethyst.

Another result that I have yet to integrate is that the parser can run incrementally, with overhead proportional to the rules that must be recomputed.

Doron Zeilberger's Opinion #112

Re: comment 69764: "I don't read 'formal X' sections in papers".

Doron Zeilberger's Opinion #112:

… an average paper is read by at most two people (the author and the referee). Most people are so busy writing papers, that they don't have time to read them, and look for mistakes.

Negative space: formalism helps non-disruptive design

Todd Proebsting famously claimed that the formalism performed for POPL and PLDI papers serves non-disruptive language research. He similarly critiques performance research that doesn't deliver orders-of-magnitude improvements (e.g., distributed computing is still disruptive).

I somewhat disagree in that I've observed formalism to help design languages for fundamentally new domains (e.g., a basis for biological or probabilistic computing). However, as Jules and Sean are saying, the formalism involved isn't very important to the presentation. More important are things like examples and how to handle them (or when it is unknown how). There may be a deep underlying theory, but rigorous exploration is unimportant at that stage, and if it fills the paper, it seems like a design smell as in Backus's famous Turing Award lecture. I'm optimistic that we have enough important order-of-magnitude opportunities for significant research like this at the top tier.

Disclaimer: incremental improvements are a bedrock of science and, given current foundations, non-trivial. PLDI/POPL may simply not be the place for orders-of-magnitude PL research; these conferences follow a quantitative approach to science and research design. That is perfectly valid and makes neither process unscientific (in the modern sense) -- I believe both are important and natural.

PPS: Despite the disclaimer, it appears orders-of-magnitude / blue sky PL research is underrepresented relative to formal methods by the PL community at large, and I think this is a systemic/cultural thing. A lot of the coolest work I know is *outside* of the community (look to SIGMOD, NIPS, CHI/UIST, music, startups, etc.).

CHI/UIST used to be cooler...

Great post; I agree with it, and I think you make your case much better than I could make mine. There are some really great theory papers out there.

But we have to be careful when looking to other areas for guidance on how to improve ours. CHI/UIST papers include user studies the way PL papers increasingly include proofs. Sometimes these user studies provide insights, but often they are just as pointless as an uninteresting PL proof in an idea paper. It seems the problem is spread throughout academic computer science, but industry seems to be immune :)

View online

It just happens that I have no .ppt viewer installed on my current system. I looked around, and "Google Docs Viewer" works surprisingly well:
https://docs.google.com/viewer?url=http://research.microsoft.com/~toddpro/papers/disruptive.ppt

The above links are the

The above links are the April 24 '02 version. The following is the November 22 '02 version which coincidentally happens to be a PDF file rather than a PPT file.

What are some other order of magnitude opportunities?

I'm an optimist: I think

I'm an optimist: I think there are a lot of opportunities out there. For each of these things, I believe simple changes to programming models can have OOM-style benefits (as in, I have concrete ideas and am acting on some).

** Richard Hamming asks to look for big problems -- we have many: correctness, speed, usability, visibility/analytics, etc.

** Bill Buxton asks to change orders of magnitude: size of code, size of device (tiny or huge), number of developers/users, etc.

** David Patterson has a history of high-impact, problem-oriented research; similarly, Philip Wadler is on record asking the FP community to do more research on tools and applications, as those are the actual relative pain points.

** Testing. We've mined types pretty hard; it's clear that we're behind on other forms of programmer hints. SMT solving and symbolic execution are clear opportunities for language design (and that, luckily, *is* happening).

I'm on the fence on security. I think there are big opportunities here -- e.g., hurdling the data wall -- but that's not generally the form of security research going on in the PL community. To give some insight here, one of the arguable successes of Facebook early on was that a walled garden for your data gave you confidence to share. Unfortunately, that's obviously incorrect, which is one reason I'm not convinced that technological solutions to security are significantly more necessary than simple changes to the social structure / software model. As cool data points, Jean Yang, Monica Lam, and others are looking more carefully at flexible relationships and control between data, code, and policy.

The downside is that this optimism makes me disappointed when I read many papers :)

It sounds like we should

It sounds like we should also write down a list of what could be disruptive in the future. Sounds like time for a new topic!

Can you give a sneak peek of

Can you give a sneak peek of some of those concrete ideas? :)

Interesting but somewhat outdated

That is an interesting talk, and a lot in there to agree with. On the other hand, some of its premise no longer matches recent developments. In particular, on slide 7 it says about recent PL progress:

Almost void of innovation on type theory, functional programming, OO programming, optimization, etc!

If you look back at the last decade+, then almost all interesting progress in mainstream languages basically consisted of adoption of ideas from functional programming -- i.e. former PL research. It just came 20 or 30 years late, as usual. (And arguably, that development was already ongoing in 2002 when he gave the talk.) At the same time, the languages he cites as recent mainstream successes (Perl, Python, VBA, and even Java) have started to decline significantly since then.

What are you basing the

What are you basing the supposed significant decline of Perl, Python and Java on? Python has certainly seen a lot of growth since 2002, and Java probably has too. Although Java and Perl are probably starting to decline, that is mostly due to languages that fit his thesis equally well, namely that they are void of innovation on type theory, etc. (e.g. Ruby, Javascript, Objective-C).

all interesting progress in

all interesting progress in mainstream languages basically consisted of adoption of ideas from functional programming -- i.e. former PL research.

You may mean interesting in a different way than Proebsting's disruptive. I -- and Proebsting -- explicitly believe that POPL/PLDI research will show up in popular languages. We all agree Java/Eclipse, Scala, JS, and C#/Visual Studio contain great examples of impact. Proebsting went on to observe a pipeline of incremental improvements to performance, types, etc. and dubbed this non-disruptive PL research. That doesn't make it uninteresting nor unimportant: the POPL/PLDI model has a reliable benefit because we can put in X money and get c * X improvements in safety and speed.

However, it is not interesting in the sense of disruptive and orders-of-magnitude. What was?

** MapReduce: simple high-level structuring primitives are enough for a good chunk of big data (a minimal sketch follows this list)

** Ruby on Rails etc: database integration and structuring for MVC

** JavaScript/web: streamlining composition via an exposed DOM (mashups, extensions, search engines). Furthermore, lowering the bar for programming through JSVM optimizations, heavy tolerance to programming error, and having a smaller language than its predecessors.

** Scratch: visual computing simplified enough that kids can quickly pick them up yet still do powerful things + streamlined sharing

** DSLs: Max/MSP is currently my favorite :)

** Unit testing: significant enough that we can compare it in cost/benefit to type-rich programming
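
For what "simple high-level structuring primitives" means in the MapReduce bullet above, here is a minimal single-machine sketch in Python (word count). It deliberately ignores the distribution, shuffling, and fault tolerance that make the real system interesting; the point is only how little the programmer has to write:

    from collections import defaultdict

    def map_phase(records, mapper):
        """Apply the user's mapper to every record, yielding (key, value) pairs."""
        for record in records:
            yield from mapper(record)

    def reduce_phase(pairs, reducer):
        """Group values by key, then apply the user's reducer to each group."""
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return {key: reducer(key, values) for key, values in groups.items()}

    # The programmer supplies only these two small functions:
    def word_count_mapper(doc):
        for word in doc.split():
            yield word, 1

    def word_count_reducer(word, counts):
        return sum(counts)

    docs = ["formal semantics guide design", "semantics guide implementation"]
    print(reduce_phase(map_phase(docs, word_count_mapper), word_count_reducer))
    # {'formal': 1, 'semantics': 2, 'guide': 2, 'design': 1, 'implementation': 1}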

I'm on the fence on whether to call Scala disruptive: it is explicitly marketed as a successor to Java, taking the torch from Sun/Oracle to continue the fight against MS (C#). There is some disruption in its approach to concurrency, but, playing a big role, it has community-driven frameworks that are independent of the basic language in the same sense as in MapReduce.

In all of these cases, the languages used benefit from decades of PLDI/POPL: the features come from earlier languages. However, that's not what makes them interesting from a research perspective in Proebsting's, Bill Buxton's, Richard Hamming's, etc. sense of it.