A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language

I present a certified compiler from simply-typed lambda calculus to assembly language. The compiler is certified in the sense that it comes with a machine-checked proof of semantics preservation, performed with the Coq proof assistant. The compiler and the terms of its several intermediate languages are given dependent types that guarantee that only well-typed programs are representable. Thus, type preservation for each compiler pass follows without any significant "proofs" of the usual kind. Semantics preservation is proved based on denotational semantics assigned to the intermediate languages. I demonstrate how working with a type-preserving compiler enables type-directed proof search to discharge automatically large parts of the proof obligations.
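
To give a flavor of what "dependent types that guarantee that only well-typed programs are representable" means in practice, here is a minimal Coq sketch of that style of representation. These definitions are purely illustrative (the names ty, member, and term are mine), not the paper's actual ones.

    (* Object-language types. *)
    Inductive ty : Type :=
    | Bool  : ty
    | Arrow : ty -> ty -> ty.

    (* Each object-language type is interpreted as a Coq type: the seed of a
       denotational semantics. *)
    Fixpoint tyDenote (t : ty) : Type :=
      match t with
      | Bool => bool
      | Arrow t1 t2 => tyDenote t1 -> tyDenote t2
      end.

    (* De Bruijn variables as proofs of membership in the typing context. *)
    Inductive member (t : ty) : list ty -> Type :=
    | MFirst : forall G, member t (t :: G)
    | MNext  : forall t' G, member t G -> member t (t' :: G).

    (* Terms are indexed by their typing context and result type, so an
       ill-typed term cannot even be written down. *)
    Inductive term : list ty -> ty -> Type :=
    | Var : forall G t, member t G -> term G t
    | App : forall G dom ran,
        term G (Arrow dom ran) -> term G dom -> term G ran
    | Abs : forall G dom ran,
        term (dom :: G) ran -> term G (Arrow dom ran).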

Software/proof source code and documentation

Slides are available from a talk I gave at the Projet Gallium seminar at INRIA Rocquencourt, in OpenOffice and PDF formats.

I found this while attempting to follow up on this post. The approach taken here—dependently-typed ASTs and denotational semantics—and the observation that, carried to its logical conclusion, the result is typed intermediate and assembly languages, proof-carrying code, etc. suggests to me a connection to this work. It also seems to offer a denotational counterpoint to Leroy's coinductive big-step operational semantics, also done in Coq. Fascinating stuff.

Wow

I have not read the paper (it is definitely on my to-read pile now), but I did read the slides. Very impressive. This has a very elegant feel to it.

Of course, such success really just screams for an obvious conclusion: why don't we have programming languages that support the features from Coq that allow all of this to 'work'? If it is OK to work with dependent types and generic functions in Coq, why is it not OK for a PL to have the same features?

One thing to notice though: a number of things are outside the logic. In particular, Coq's Ltac is used -- and Ltac is an untyped language! Doesn't this strike others as really strange?

<plug>If you think that Ltac (and Template Haskell) being untyped is really weird, then you should plan to attend the PLMMS Workshop (mentioned before)</plug>

I agree in principle, but...

Of course, all of us in the statically-typed FP world recoil at the thought of using a dynamically-typed language like Ltac for writing decision procedures. However, I've found that constructing proofs really is fundamentally different from, and harder than, constructing programs. If someone does manage to design a type system for tactics that allows all of the important stuff, it's still not clear to me that the cost-benefit trade-off will be good enough.

I like to do outrageous things (from an ML programmer's perspective!) in Ltac, like using dynamic type errors to trigger backtracking in exhaustive search. :-D
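
For concreteness, here is a toy tactic in that spirit (a made-up example, not Adam's actual code): match goal backtracks whenever a branch fails, so a candidate term that does not type-check simply sends the search to the next alternative.

    Ltac exhaust :=
      match goal with
      | [ H : _ |- _ ] => exact H
          (* try every hypothesis; an ill-typed pick fails and backtracks *)
      | [ |- _ /\ _ ] => split; exhaust
      | [ |- _ \/ _ ] => first [ left; exhaust | right; exhaust ]
      end.

    (* The failures along the way are invisible; only the successful path remains. *)
    Goal forall P Q : Prop, P -> Q -> (Q /\ P) \/ False.
      intros; exhaust.
    Qed.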

Typing and all that

Just to be clear: I currently use 3 languages all the time - MetaOCaml, Haskell and Maple.

When I am doing math or crazy generic programming, the freedom of Maple is hard to beat (dynamic typing and full introspection, yeah!). When I am serious about programming, Haskell. And when I am doing research in generative programming (mostly applied to math algorithms), MetaOCaml, because I want types everywhere.

So I know all about the crazy stuff allowed by dynamically typed languages. In some of my Maple code, I do dynamic dispatch on some pretty crazy things. But after 10 years of that, I find the types of both Haskell and MetaOCaml really much better. Then again, I get frustrated that I have a lot more invariants in my code than the type system will let me encode.

I see this as something currently evolving: Ltac is to proofs what LISP was to ML a few years before ML was born. Once people have figured out how they typically write proofs, someone will come up with ways of giving a type system to (most of) those proofs. You'll still want to have that crazy language, to explore the edges, but most of the time the typed one will be much better.

Elegant Programming Languages

Well, you can't have that in a programming language, because in that case you wouldn't be able to do the proofs!

Note: I have read the paper, seen the seminar, and know the author :)

Essentially, dependently typed programming relies on the conversion rule and on dependent pattern matching, which, if done badly (i.e., with potentially non-terminating programs), lead to undecidable type checking, which is certainly not what you want (I don't).

Worse, in some cases he is forced to use eq_rec to rewrite in the types of the objects, which means you have to have a logic in which you can manipulate equalities and so on inside the type system.
Again, you can certainly have a logic with some automatic deduction going on and rely on it, but you will quickly face undecidability in interesting cases (if you have full-fledged dependent types), or you work in a proof assistant made just for proving these equalities (hint: Coq).
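
A toy illustration of the eq_rec/eq_rect point (again made up, not code from the paper): when two type indices are equal only propositionally, the object has to be transported across the equality, and those casts then haunt later proofs.

    Require Import Arith.

    (* [P (n + 0)] and [P n] are not convertible, so the value must be cast
       along a proof of [n + 0 = n] using eq_rect. *)
    Definition strip_add_0 (P : nat -> Type) (n : nat) (x : P (n + 0)) : P n :=
      @eq_rect nat (n + 0) P x n (Nat.add_0_r n).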

So for me the question is: why don't we have, inside proof assistants, certified programming languages with the same features as regular programming languages? And I'm working on it for my PhD :)

On a side note, Ltac being untyped is not a problem: the only thing that needs to be well typed and terminating and all that is the final proof produced by the tactics, not the tactics themselves.

Undecidability: Is it really so bad?

... lead to undecidable type checking, which is certainly not what you want (I don't)

I, for one, am not certain that I don't want undecidable type checking.

My simple human brain cannot solve the halting problem in general, yet this limitation has never interfered with my ability to understand worthwhile code solving a real-world problem. Judging the correctness of the vast majority of well-written code is easy for a human, although we often come to correctness conclusions more quickly with the help of good documentation. Thus I hypothesize that generating automatic correctness proofs of the vast majority of correct or accidentally incorrect programs should be possible by machine, although the result would arrive more quickly (or only) with good hints.

If I can convince another programmer that some code is either correct or incorrect, shouldn't I be able to convince a sufficiently resourceful compiler? It might be as hard as passing the Turing Test, or harder, but I think it's too early to rule out the possibility of eventually creating such a typechecker.

And if it existed, I would be thrilled to have a typechecker for dependent types that used heuristic search to create typing proofs or disproofs for at least as many programs as the world's best computer scientist could.

Yes it is when you want certification.

I understand that we should not rule out this possibility completely (and Cayenne did a good job popularizing it), but I think you're misled by your programmer's intuition. You said that "judging the correctness of the vast majority of well-written code is easy for a human", but (even if I doubt that statement) what we want here is solely computer-checked correctness proofs. Now if you want to do certified programming and you allow, for example, some programs not to terminate (which is one major source of undecidability of type checking), then, well, you have already failed! You will have a hard time using it and selling it to others, because certification people want definite, yes-or-no answers, not maybes (which could be _|_, or yes, or no... and type checking can take a very long time once you add proof search to it, too :).
The very point of certification is to know precisely which behaviors programs will have, and to do that in a composable way. If you throw non-terminating functions in there, you lose both (note that you can still have infinite datatypes and infinitely-producing functions using particular constructs in Coq, for example).
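
For instance (a standard example, not specific to the compiler under discussion), Coq's coinductive types give you infinite data and productive, never-terminating functions without giving up decidable checking:

    CoInductive stream (A : Type) : Type :=
    | Cons : A -> stream A -> stream A.

    (* An infinitely-producing function: accepted because every corecursive
       call is guarded by the constructor [Cons], even though it never
       terminates in the usual sense. *)
    CoFixpoint nats_from (n : nat) : stream nat :=
      Cons nat n (nats_from (S n)).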

About undecidability in general, I agree that many proofs could potentially be done by heuristic search, and I think that's perfectly fine, but here I see two separate problems: one is checking programs/proofs for correctness w.r.t. a specification (a.k.a. typing, which must be decidable, e.g. so they can be safely rechecked), and the other is solving the correctness conditions by building such proofs (using your brain or the computer's resources, with no time limit).

Now, my opinion is that we won't have automatic theorem provers that work for the kind of proofs I do for a very long time, if ever. Anyway, as you said, if the theory you work with is undecidable or just sufficiently complicated, you will sometimes have to 'hint' the prover/compiler, that is, do part of the proof yourself, so semi-automatic systems seem most promising to me.

Another answer

I'll just summarize what I view as the important part of Matthieu's answer.

First, there is a big difference between inference and checking. It's the checking of already-constructed proofs that is decidable in Coq. I hope you agree that, while we want to allow freedom in how proofs are found, we should be able to rely on anyone's ability to check them afterward. This wouldn't be possible with undecidable checking.

It's also true that Coq proof/type-checking is decidable by an algorithm that's fairly easy to explain to a human, for the wide variety of proofs that it's able to check. We just have a dependently-typed lambda calculus with a few extensions! Any alternative system that requires search to check proofs seems fundamentally less desirable to me. Even if it manages to terminate checking all of the proofs that come up in practice, it will probably take much longer to do so than a checker that is just doing traditional type-checking.
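
To make the division of labor concrete (a trivial example of my own, nothing from the paper): the tactics may search however they like, but what gets checked in the end is an ordinary lambda term, by the ordinary, decidable type-checking algorithm.

    Lemma and_comm' : forall P Q : Prop, P /\ Q -> Q /\ P.
    Proof. tauto. Qed.  (* [tauto] searches; [Qed] re-checks the result *)

    Print and_comm'.    (* the plain dependently-typed term that was checked *)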

Decidable checking

Here we agree. I definitely want decidable checking of proofs.

But I also want staging (like Omega, say, but with more freedom). More like AUTOMATH had, in fact, with its 'telescopes'.

What I don't want is for my type-level programs to be forced to terminate by construction. Why is it OK for my program to diverge, but my types can't? A bug is a bug, and there are times when I want the freedom to put my bugs where I want them! But seriously, I am happier being forced to show that my programs (at whatever level!) are meaningful than being straitjacketed into a system so inexpressive that I can't even phrase what I know to be true.

Devil's advocate...

On a side note, Ltac being untyped is not a problem: the only thing that needs to be well typed and terminating and all that is the final proof produced by the tactics, not the tactics themselves.

Well, just to take the other side for a moment, the same argument has been made against typed macros and metaprogramming: only the final program needs to be well-typed, not all the macros, etc.

Well, there are a few problems with this:

  • First, the tactics had better in fact be well-typed, or else you're in trouble whether you realize it or not... But this is sort of a semantic argument.
  • Won't you eventually want to reuse, compose, or maintain your tactics?
  • What about debugging your tactics?

I can understand the argument that an untyped language for tactics is fine for now, but unless we're positive that no one will ever do large-scale or complex "real" programming in the language, it might pay to be open-minded... My intuition is that all the same arguments that apply to macros or other small-scale "scripting" languages might come to apply here, too.

Typed tactics

Indeed you are right: tactics are an awful beast in terms of language engineering. We don't know how to check them, and the debugging support is very limited (at least in Coq).

About type errors, as Adam said, they can even help sometimes. I think it's just that we use tactics in a lousy way; we sometimes 'try' tactics knowing full well that most of the time they will fail, but if progress is made, that's nice. I don't think failure of a tactic is comparable to failure of a program in this sense: it's not an exceptional, unexpected behavior.
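
A tiny example of that style (a made-up goal, nothing to do with the paper): [try] turns a failing tactic into a no-op, so speculative steps can be aimed at every subgoal and their failures are harmless.

    Goal forall n m : nat, n = n /\ (m = m \/ False).
      intros.
      split; try reflexivity.  (* closes [n = n]; fails harmlessly on the disjunction *)
      left; reflexivity.
    Qed.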

The semantic question of what the type of a tactic is strikes me as important too, but indeed I meant 'for now', and even a weaker statement: "this does not endanger the coherence of the system at all".

The devil rules!

If the position you outline is that of the devil, then I'm with him. [And yes, I know the meaning of that expression].

When I decided that I could not stand doing metaprogramming in Maple anymore, I looked around for my alternatives. C++ seemed much worse. Template Haskell was cool, but only 1/2 typed. MetaOCaml seemed to be the only viable alternative. It is astonishingly hard to get started, but with some experience (and some help from Oleg and a little syntactic sugar), it's not so bad.

Anyways, for metaprogramming, I am convinced that it pays off. And I do not perceive any significant difference between metaprogramming and proofs. In fact, Curry-Howard says there isn't...

It is astonishingly hard to

It is astonishingly hard to get started, but with some experience (and some help from Oleg and a little syntactic sugar), it's not so bad.

Any language that can be described as being 'astonishingly hard to get started' to the point that one needs Oleg-level help is going to have a serious problem getting much actual use!

Mind you, OCaml has a syntax only its mother could love in the first place, so perhaps there's hope...

However

However, the point is that once a 'body of knowledge' has been developed (by early adopters), the learning curve for later adopters can be greatly reduced. And my personal belief is that most code-generation frameworks should be used by the very few to build (maintainable!) libraries for the masses.

The hackers behind most programming languages (Java included) are on average way smarter than the average programmer using the language for applications. Haskell and LISP seem to be a bit different, in that the gap between language hackers and users is smaller than average. Some languages are explicitly designed to have a very large gap!