Lightweight Static Capabilities (II)

The slides for the talk discussed here are now available for download. Like all of Ken and Oleg's work, this stuff is both cool and important.

Keep in mind that the talk is quite different from the paper. The safety claims were formalized and proved in Twelf: list example, array example. To follow the proofs you should read them alongside the presentation slides. I am told that the first file might change soon, to reflect a more general proof. Perhaps Ken or Oleg would like to comment on the experience of doing mechanized proofs, a subject that comes up regularly on LtU.

LtU newcomers who have already taken in a little Haskell or ML may want to spend some time chewing on this, and to ask questions if something remains unclear: this work may eventually have practical importance, and it can teach quite a few interesting techniques.


Syntactic approach to type soundness, in Twelf

Ken and I have generalized safety.elf: the new code formalizes a
small-step semantics for System F with constants and additional typing
rules, closely following the `Syntactic Approach to Type Soundness',
with contexts, focusing, and redexing. The type preservation and
progress proofs indeed factor into type preservation for redexes and
progress for typed pre-values, plus the context typing rules, which
are shared between the preservation and progress proofs.


http://pobox.com/~oleg/ftp/Computation/Computation.html#type-soundness-formal
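
To give a flavour of what such statements look like as Twelf type
families, here is a minimal sketch. The signature and the names below
are purely illustrative and much coarser than the actual development
(which treats System F with constants and factors the proofs through
redex and context lemmas); only the shape of the two top-level
statements is shown.

    exp   : type.                 % expressions
    tp    : type.                 % types
    of    : exp -> tp -> type.    % typing judgment  |- E : T
    step  : exp -> exp -> type.   % small-step reduction  E --> E'
    value : exp -> type.          % E is a value

    % Type preservation, stated as a relation between derivations:
    % from  |- E : T  and  E --> E'  we must produce  |- E' : T.
    preserv : of E T -> step E E' -> of E' T -> type.
    %mode preserv +D +S -D'.

    % Progress: a well-typed expression is a value or can take a step.
    val-or-step : exp -> type.
    vos-val  : value E   -> val-or-step E.
    vos-step : step E E' -> val-or-step E.

    progress : of E T -> val-or-step E -> type.
    %mode progress +D -P.

The proofs proper then populate these families with one clause per
case and are closed off with %worlds and %total declarations, at
which point Twelf checks coverage and termination.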

In our experience Twelf is quite an insightful framework for doing
proofs. It has been quite revealing to see why exactly some particular
property holds and which premises precisely have to be satisfied. A
different way of using Twelf is demonstrated in Andrew Appel's Twelf
tutorial:


http://www.cs.princeton.edu/~appel/twelf-tutorial/

Rather than showing totality of a type family, Appel relies on the
Curry-Howard correspondence directly and builds a proof as a series of
proof constructor applications, that is, as a series of implications.
These proofs are very low-level -- that is why they retain the feeling
and intuition of detailed pen-and-paper proofs. Twelf merely verifies
the proofs and fills in small details. Doing proofs in a far more
capable system like Coq or Isabelle/HOL does not give the same feeling
for what the proofs are and what follows from what: in those systems
we learn tactics, but we don't learn proofs.
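
To make the contrast concrete, here is a toy example (illustrative
only; it is taken neither from Appel's tutorial nor from safety.elf).
In the first style the proof of a particular fact is an explicit LF
term written by hand, which Twelf merely type-checks; in the second
style one states a relation between derivations and asks Twelf to
verify its totality.

    nat : type.
    z   : nat.
    s   : nat -> nat.

    even : nat -> type.
    even-z  : even z.
    even-ss : even N -> even (s (s N)).

    % Curry-Howard style: an explicit proof term for one closed fact.
    four-even : even (s (s (s (s z)))) = even-ss (even-ss even-z).

    % Metatheorem style: `the double of any number is even', stated as
    % a type family whose totality Twelf verifies.
    double : nat -> nat -> type.
    double-z : double z z.
    double-s : double N M -> double (s N) (s (s M)).

    double-even : double N M -> even M -> type.
    %mode double-even +D -E.
    de-z : double-even double-z even-z.
    de-s : double-even (double-s D) (even-ss E)
            <- double-even D E.
    %worlds () (double-even _ _).
    %total D (double-even D _).

The %total declaration is what turns the type family into a theorem:
Twelf checks that the clauses cover every case of the induction on D
and that the recursion terminates.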

That said, the problem, at least in our proofs, is too much
boilerplate. For each trivial case distinction we have to write a new
lemma (and thus come up with a name), plus its mode declaration, world
declaration, and totality declaration. All of these are trivial, and
yet we have to type them anyway. The closed-world declaration could be
simplified: why do we need to type %worlds () (foo _ _ _). when a
simpler %closedworld foo. would have sufficed? Must we really
enumerate all the arguments? One can see that the longest and most
boring part of our type soundness proof was proving the classification
of expressions (each expression is either a value or can be focused).
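
The following toy lemma shows the kind of ceremony meant here (it is
only an illustration, not the classification lemma from the actual
proof). Even the trivial case distinction `every number is even or
odd' needs its own named helper with a %mode, a %worlds and a %total
declaration -- and since Twelf checks output coverage clause by
clause, the case analysis on the result of the recursive call has to
be factored out into yet another lemma with the same ceremony.

    nat : type.
    z   : nat.
    s   : nat -> nat.

    even : nat -> type.
    odd  : nat -> type.
    even-z : even z.
    even-s : odd N  -> even (s N).
    odd-s  : even N -> odd (s N).

    even-or-odd : nat -> type.
    eoo-even : even N -> even-or-odd N.
    eoo-odd  : odd N  -> even-or-odd N.

    % Factoring lemma: from the parity of N, compute that of (s N).
    parity-s : even-or-odd N -> even-or-odd (s N) -> type.
    %mode parity-s +P -Q.
    ps-e : parity-s (eoo-even DE) (eoo-odd (odd-s DE)).
    ps-o : parity-s (eoo-odd DO) (eoo-even (even-s DO)).
    %worlds () (parity-s _ _).
    %total P (parity-s P _).

    % The case distinction itself: every number is even or odd.
    parity : {N:nat} even-or-odd N -> type.
    %mode parity +N -P.
    par-z : parity z (eoo-even even-z).
    par-s : parity (s N) Q
             <- parity N P
             <- parity-s P Q.
    %worlds () (parity _ _).
    %total N (parity N _).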

The following discussion describes other people's experiences with
Twelf:


http://lists.seas.upenn.edu/pipermail/poplmark/2005-August.txt

It seems that some Emacs voodoo helps reduce the typing of the
boilerplate. One may also wonder why not use SML itself: instead of
going through the Twelf server, one may as well enter Twelf.make "string";
or Twelf.decl "string"; at the SML prompt, having loaded the Twelf
structure. It is thus possible to write an ML function that performs
some `macro-expansion' on a string before passing it to Twelf.decl.