
Verifying Semantic Type Soundness of a Simple Compiler

Having been entirely too dismissive of one of Nick Benton's papers before, I find it only fitting that I should attempt expiation for my sins by discussing Formalizing and Verifying Semantic Type Soundness of a Simple Compiler:

We describe a semantic type soundness result, formalized in the Coq proof assistant, for a compiler from a simple imperative language with heap-allocated data into an idealized assembly language. Types in the high-level language are interpreted as binary relations, built using both second-order quantification and a form of separation structure, over stores and code pointers in the low-level machine.

I found this work striking for its strong defense of the utility of semantic, as opposed to syntactic, type soundness, and for its use of logical relations rather than unary predicates. I could swear that I recently read another paper making quite similar observations about semantic vs. syntactic type soundness. I'm not sure which paper it was, but regardless, there seems to me to be a great deal of overlap in insight, and to a lesser extent in approach, with A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language, and by extension with Leroy's work on formalizing big-step operational semantics, allocation, and so on. Finally, we have the very exciting news about progress on a mechanized metatheory for Standard ML in Twelf. It's an exciting time for mechanized metatheory! I'd love to get at least Nick Benton, Adam Chlipala, Xavier Leroy, and Tim Sweeney in a room for a few months so they could create the Next Mainstream Programming Language in Coq. Of course, we have Dr. Benton, Mr. Chlipala, and Mr. Sweeney right here on LtU... :-)

Update: Mea culpa; the paper I had in mind was A Very Modal Model of a Modern, Major, General Type System. It would seem that Dr. Benton, Mr. Chlipala, and Dr. Appel and colleagues have arrived at quite similar perspectives on good approaches to mechanizing metatheory for very interesting programming language design efforts.

Generators/Iterators and lazy evaluation

Hi all

I've been using OCaml for a year or so building an interpreter/compiler for the Python programming language as a research project.

It's worked pretty well for the most part, but I'm trying to implement some of Python's 'lazy' features, such as generator functions and list comprehensions.

A generator in Python can be thought of as a 'lazy list' whose elements are produced on demand. As an example, the following is a generator capable of generating powers of 2 up to a maximum exponent.

def pow2(max):
  start = 0
  while start < max:
    yield 2 ** start
    start += 1

The 'yield' statement is the point where the function returns the next value and suspends itself until the next time it is 'forced'. At that time it resumes execution where it left off.

OCaml makes this particularly hard to implement due to its lack of 'control flow' features, without even a call/cc. The only way I can see right now is to break this code down into little functions and keep track of the right function to call the next time.
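For this particular example the bookkeeping is small enough to sketch: the suspended state is just one mutable counter, captured in a closure (a rough sketch only; the name make_pow2 is purely illustrative):

let make_pow2 max =
  (* One piece of mutable state per generator; each call to the returned
     function "resumes where it left off" by reading and bumping the counter. *)
  let start = ref 0 in
  fun () ->
    if !start < max then begin
      let v = 1 lsl !start in  (* 1 lsl n = 2 to the power n *)
      incr start;
      Some v
    end else
      None

(* let next = make_pow2 5 in
   successive calls to next () return Some 1, Some 2, Some 4, Some 8, Some 16, then None *)

That works here only because the loop state is a single counter; for arbitrary Python control flow the same transformation turns into exactly the tedious function-splitting described above.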

I've been thinking about whether I can express this in some elegant way with a lazy construct in OCaml, since this is basically a form of 'lazy' evaluation; a rough attempt along those lines is sketched below. However, I come from the C world and am not quite used to 'lazy' thinking :) Any ideas?
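Here is the kind of encoding I have been playing with: a minimal, hand-rolled lazy stream (nothing from a library; the type and the helper take are just illustrative), where forcing the tail plays the role of resuming the generator:

(* A lazy stream: the tail is a thunk, evaluated only when forced. *)
type 'a stream = Nil | Cons of 'a * 'a stream Lazy.t

(* Powers of two with exponent below max, as a lazy stream. *)
let pow2 max =
  let rec loop start =
    if start < max then Cons (1 lsl start, lazy (loop (start + 1)))
    else Nil
  in
  loop 0

(* Take the first n elements; each Lazy.force is where the "suspended"
   rest of the generator actually runs. *)
let rec take n s =
  match n, s with
  | 0, _ | _, Nil -> []
  | n, Cons (x, rest) -> x :: take (n - 1) (Lazy.force rest)

(* take 3 (pow2 10) computes only the first three elements: [1; 2; 4] *)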