Semantics
Joe-E: A Security-Oriented Subset of Java. Adrian Mettler, David Wagner, and Tyler Close. To appear at ISOC NDSS 2010.
We present Joe-E, a language designed to support the development of secure software systems. Joe-E is a subset of Java that makes it easier to architect and implement programs with strong security properties that can be checked during a security review. It enables programmers to apply the principle of least privilege to their programs; implement application-specific reference monitors that cannot be bypassed; introduce and use domain-specific security abstractions; safely execute and interact with untrusted code; and build secure, extensible systems. Joe-E demonstrates how it is possible to achieve the strong security properties of an object-capability language while retaining the features and feel of a mainstream object-oriented language...
Section 5.2 discusses how Joe-E leverages Java's static typing. Joe-E is implemented as a source-code verifier, not a bytecode verifier; Section 6 of the paper explains this design choice.
Joe-E was mentioned on LtU in the past and is available here.
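The object-capability discipline the abstract describes can be illustrated outside Java as well. Here is a minimal Python sketch (names are mine, not Joe-E code) of the core move: authority is reachable only through references you are explicitly given, so attenuating a capability is just handing out a narrower facet of an object.

```python
# A sketch of the object-capability idea (illustration only, not Joe-E):
# the holder of `incr` can mutate the counter; the holder of only the
# read facet cannot, and there is no ambient way to recover the authority.

def make_counter():
    state = {"n": 0}
    def incr():
        state["n"] += 1
    def read():
        return state["n"]
    return incr, read

def read_only(read):
    # Attenuation: wrap the capability in a strictly narrower interface.
    return lambda: read()

incr, read = make_counter()
observer = read_only(read)
incr()
incr()
assert observer() == 2   # can observe...
# ...but `observer` carries no reference through which to mutate.
```

The same pattern scales up to the application-specific reference monitors mentioned in the abstract: the monitor is simply the only code holding the full-authority reference.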
Monads in Action, Andrzej Filinski, POPL 2010.
In functional programming, monadic characterizations of computational effects are normally understood denotationally: they describe how an effectful program can be systematically expanded or translated into a larger, pure program, which can then be evaluated according to an effect-free semantics. Any effect-specific operations expressible in the monad are also given purely functional definitions, but these definitions are only directly executable in the context of an already translated program. This approach thus takes an inherently Church-style view of effects: the nominal meaning of every effectful term in the program depends crucially on its type.
We present here a complementary, operational view of monadic effects, in which an effect definition directly induces an imperative behavior of the new operations expressible in the monad. This behavior is formalized as additional operational rules for only the new constructs; it does not require any structural changes to the evaluation judgment. Specifically, we give a small-step operational semantics of a prototypical functional language supporting programmer-definable, layered effects, and show how this semantics naturally supports reasoning by familiar syntactic techniques, such as showing soundness of a Curry-style effect-type system by the progress+preservation method.
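The denotational ("translation") view the first paragraph describes can be made concrete with the state monad: a computation is a pure function from an initial state to a (result, final state) pair, and `get`/`put` are ordinary pure definitions, executable only once the whole program is written in the monad. A minimal Python sketch (names mine):

```python
# The state monad, defined purely: a computation is s -> (result, s').

def ret(x):
    return lambda s: (x, s)

def bind(m, f):
    def run(s):
        x, s1 = m(s)        # run m, thread the state...
        return f(x)(s1)     # ...into the continuation f
    return run

# Effect-specific operations, given purely functional definitions:
get = lambda s: (s, s)
def put(s_new):
    return lambda _s: (None, s_new)

# A small effectful program: read the state, double it, return the old value.
prog = bind(get, lambda x: bind(put(2 * x), lambda _: ret(x)))

result, final = prog(21)
assert (result, final) == (21, 42)
```

Filinski's operational view, by contrast, would give `get` and `put` their own evaluation rules directly, rather than executing them via this expansion into pure code.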
The idea of monadic reflection was one I never felt I really understood properly until I read this paper, so now I'll have to go back and reread some of his older papers on the subject.
Delimited Control in OCaml, Abstractly and Concretely: System Description
We describe the first implementation of multi-prompt delimited control operators in OCaml that is direct in that it captures only the needed part of the control stack. The implementation is a library that requires no changes to the OCaml compiler or runtime, so it is perfectly compatible with existing OCaml source code and bytecode. The library has been in fruitful practical use for four years.
We present the library as an implementation of an abstract machine derived by elaborating the definitional machine. The abstract view lets us distill a minimalistic API, scAPI, sufficient for implementing multi-prompt delimited control. We argue that a language system that supports exception and stack-overflow handling supports scAPI. Our library illustrates how to use scAPI to implement multi-prompt delimited control in a typed language. The approach is general and can be used to add multi-prompt delimited control to other existing language systems.
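The claim that exception handling gets you partway there can be seen in miniature: exceptions already provide the *abortive* half of delimited control. In this toy sketch (mine, not the library's code), each prompt is a distinct exception class, `push_prompt` delimits a computation, and `abort` discards the rest of it; what the paper's scAPI adds on top is capturing, rather than discarding, the unwound stack, via stack-overflow recovery.

```python
# Abortive multi-prompt control from plain exceptions (illustration only).

def new_prompt():
    class Answer(Exception):
        def __init__(self, value):
            self.value = value
    return Answer            # a fresh class per prompt => prompts are distinct

def push_prompt(p, body):
    try:
        return body()
    except p as e:           # only answers thrown to *this* prompt are caught
        return e.value

def abort(p, value):
    raise p(value)

p, q = new_prompt(), new_prompt()

# The addition after abort is discarded: the prompt answers 7, not 107.
assert push_prompt(p, lambda: 100 + abort(p, 7)) == 7

# Aborting to the outer prompt unwinds past the inner one (multi-prompt):
assert push_prompt(p, lambda: 10 + push_prompt(q, lambda: abort(p, 1))) == 1
```

Full `shift`/`control` would additionally reify the aborted frames as a function; that is exactly the part that needs the stack-copying machinery the paper describes.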
Oleg was kind enough to send me an email letting me know of this paper's existence (it appears not yet to be linked from the "Computation" page under which it is stored) and to include me in the acknowledgements. Since the paper in its current form has been accepted for publication, he indicated that it can be made more widely available, so here it is. In typical Oleg fashion, it offers insights at both the theoretical and implementation levels.
Verified Just-in-Time Compiler on x86
Magnus O. Myreen
This paper presents a method for creating formally correct just-in-time (JIT) compilers. The tractability of our approach is demonstrated through what we believe is the first verification of a JIT compiler with respect to a realistic semantics of self-modifying x86 machine code. Our semantics includes a model of the instruction cache. Two versions of the verified JIT compiler are presented: one generates all of the machine code at once; the other is incremental, i.e., it produces code on demand. All proofs have been performed inside the HOL4 theorem prover.
(To appear in next week's POPL.)
I've been enjoying this paper on my commute this week. It's a nice little distillation of some of the basics of the engineering structure of a JITted language and how the pieces fit together in a correct implementation. As JIT compilers become more and more commonplace, I'd like to see them presented in such a way that they're no more scary or daunting, at least in principle, than traditional offline compilers. Perhaps a chapter in EoPL4?
Syntactic Proofs of Compositional Compiler Correctness
Semantic preservation by compilers for higher-order languages can be verified using simple syntactic methods. At the heart of classic techniques are relations between source-level and target-level values. Unfortunately, these relations are specific to particular compilers, leading to correctness theorems that have nothing to say about linking programs with functions compiled by other compilers or written by hand in the target language. Theorems based on logical relations manage to avoid this problem, but at a cost: standard logical relations do not apply directly to programs with non-termination or impurity, and extensions to handle those features are relatively complicated, compared to the classical compiler verification literature.
In this paper, we present a new approach to "open" compiler correctness theorems that is "syntactic" in the sense that the core relations do not refer to semantics. Though the technique is much more elementary than previous proposals, it scales up nicely to realistic languages. In particular, untyped and impure programs may be handled simply, while previous work has addressed neither in this context.
Our approach is based on the observation that it is an unnecessary handicap to consider proofs as black boxes. We identify some theorem-specific proof skeletons, such that we can define an algebra of nondeterministic compilations and their proofs, and we can compose any two compilations to produce a correct-by-construction result. We have prototyped these ideas with a Coq implementation of multiple CPS translations for an untyped Mini-ML source language with recursive functions, sums, products, mutable references, and exceptions.
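For readers who haven't seen one, a CPS translation of the kind the prototype composes can be sketched quite compactly. This is the classic call-by-value transform on an untyped lambda calculus (my illustration in Python, not the paper's Coq development); terms are tuples `("const", n)`, `("var", x)`, `("lam", x, body)`, `("app", e1, e2)`.

```python
# Call-by-value CPS translation: every term becomes a function of its
# continuation k. (Illustration only; fresh-name generation is naive.)

counter = [0]
def fresh(base):
    counter[0] += 1
    return f"{base}{counter[0]}"

def cps(t):
    tag = t[0]
    if tag in ("const", "var"):          # values: pass them to k directly
        k = fresh("k")
        return ("lam", k, ("app", ("var", k), t))
    if tag == "lam":                     # a lambda is a value whose body is CPS'd
        _, x, body = t
        k = fresh("k")
        return ("lam", k, ("app", ("var", k), ("lam", x, cps(body))))
    _, e1, e2 = t                        # application: evaluate e1, then e2,
    k, f, v = fresh("k"), fresh("f"), fresh("v")   # then call f with v and k
    return ("lam", k,
            ("app", cps(e1),
             ("lam", f,
              ("app", cps(e2),
               ("lam", v, ("app", ("app", ("var", f), ("var", v)), ("var", k)))))))

# A small environment-based evaluator to check the translation.
def eval_term(t, env):
    tag = t[0]
    if tag == "const":
        return t[1]
    if tag == "var":
        return env[t[1]]
    if tag == "lam":
        _, x, body = t
        return lambda a, env=env: eval_term(body, {**env, x: a})
    _, e1, e2 = t
    return eval_term(e1, env)(eval_term(e2, env))

# ((lambda x. x) 42) in CPS, run with the identity continuation.
prog = ("app", cps(("app", ("lam", "x", ("var", "x")), ("const", 42))),
        ("lam", "a", ("var", "a")))
assert eval_term(prog, {}) == 42
```

The paper's point is about the *proofs* of such translations: structuring them so that two independently verified passes compose into a verified whole.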
A submitted draft of another paper from Adam, continuing to expand LambdaTamer's reach.
A Verified Compiler for an Impure Functional Language
We present a verified compiler to an idealized assembly language from a small, untyped functional language with mutable references and exceptions. The compiler is programmed in the Coq proof assistant and has a proof of total correctness with respect to big-step operational semantics for the source and target languages. Compilation is staged and includes standard phases like translation to continuation-passing style and closure conversion, as well as a common subexpression elimination optimization. In this work, our focus has been on discovering and using techniques that make our proofs easy to engineer and maintain. While most programming language work with proof assistants uses very manual proof styles, all of our proofs are implemented as adaptive programs in Coq's tactic language, making it possible to reuse proofs unchanged as new language features are added.
In this paper, we focus especially on phases of compilation that rearrange the structure of syntax with nested variable binders. That aspect has been a key challenge area in past compiler verification projects, with much more effort expended in the statement and proof of binder-related lemmas than is found in standard pencil-and-paper proofs. We show how to exploit the representation technique of parametric higher-order abstract syntax to avoid the need to prove any of the usual lemmas about binder manipulation, often leading to proofs that are actually shorter than their pencil-and-paper analogues. Our strategy is based on a new approach to encoding operational semantics which delegates all concerns about substitution to the meta language, without using features incompatible with general-purpose type theories like Coq's logic.
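The representation trick doing the heavy lifting here can be shown in miniature. In (parametric) higher-order abstract syntax, binders in the object language are host-language functions, so an evaluator needs no substitution code, and hence no substitution lemmas. A Python sketch of the shape (the paper's parametric version additionally keeps the variable type abstract to rule out "exotic" terms, which Python cannot enforce):

```python
# HOAS-style terms: Lam carries a host-language function, so "substitution"
# is just function application. (Illustration only, not the paper's Coq.)

def Const(n):    return ("const", n)
def Var(v):      return ("var", v)          # v is the variable's denotation
def App(e1, e2): return ("app", e1, e2)
def Lam(f):      return ("lam", f)          # f: denotation -> term

def eval_term(t):
    tag = t[0]
    if tag == "const":
        return t[1]
    if tag == "var":
        return t[1]                          # a variable is just its value
    if tag == "lam":
        return lambda a: eval_term(t[1](a))  # binding = function application
    return eval_term(t[1])(eval_term(t[2]))

identity = Lam(lambda x: Var(x))
const_k  = Lam(lambda x: Lam(lambda y: Var(x)))
assert eval_term(App(identity, Const(3))) == 3
assert eval_term(App(App(const_k, Const(1)), Const(2))) == 1
```

Note there is no environment, no renaming, and no capture-avoidance anywhere: all of it is delegated to the metalanguage, which is exactly the property the paper exploits in Coq.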
Further work on/with LambdaTamer for certified compiler development.
Certified Programming With Dependent Types
From the introduction:
We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant.
This is the best Coq tutorial that I know of, partially for being comprehensive, and partially for taking a very different tack than most with Adam's emphasis on proof automation using Coq's Ltac tactic language. It provides an invaluable education toward understanding what's going on either in LambdaTamer or Ynot, both of which are important projects in their own rights.
Please note that Adam is explicitly requesting feedback on this work.
A Type-theoretic Foundation for Programming with Higher-order Abstract Syntax and First-class Substitutions by Brigitte Pientka, which appeared at POPL '08.
Higherorder abstract syntax (HOAS) is a simple, powerful technique
for implementing object languages, since it directly supports
common and tricky routines dealing with variables, such as
captureavoiding substitution and renaming. This is achieved by
representing binders in the objectlanguage via binders in the metalanguage.
However, enriching functional programming languages
with direct support for HOAS has been a major challenge, because
recursion over HOAS encodings requires one to traverse 
abstractions and necessitates programming with open objects.
We present a novel typetheoretic foundation based on contextual
modal types which allows us to recursively analyze open terms
via higherorder pattern matching. By design, variables occurring
in open terms can never escape their scope. Using several examples,
we demonstrate that our framework provides a namesafe foundation
to operations typically found in nominal systems. In contrast
to nominal systems however, we also support captureavoiding
substitution operations and even provide firstclass substitutions to
the programmer. The main contribution of this paper is a syntax directed
bidirectional type system where we distinguish between
the data language and the computation language together with the
progress and preservation proof for our language.
It's been a while since I posted, but this paper may be of interest to some members of this community. It looks interesting to me, but I wish I understood all the terminology: I don't know what "open objects" are or why they are a problem, I don't understand what HOAS is, and I don't even know what binders are. The list goes on. I surely can't be the only person who is interested but feels this is just out of their grasp, and I could probably understand these things with a minimum of explanation, given that I have experience implementing languages. If anyone is interested in rephrasing the abstract in more basic terms, I would be very appreciative.
[Edit: corrected spelling of Brigitte Pientka. My apologies.]
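To unpack a little of the terminology: a "binder" is any construct that introduces a variable (lambda, let, forall); HOAS represents binders as functions of the implementation language; and an "open" term is one containing free variables. The catch the abstract alludes to is that with HOAS you can only look under a binder by applying its function to some variable, which produces an open term. A small Python sketch (names mine) of the problem: even computing the *size* of a HOAS term forces us to invent placeholder variables.

```python
# HOAS terms: a binder is a host-language function. (Illustration only.)

def Var(name):   return ("var", name)
def App(e1, e2): return ("app", e1, e2)
def Lam(f):      return ("lam", f)       # f: term -> term

gensym_counter = [0]
def gensym():
    gensym_counter[0] += 1
    return f"_x{gensym_counter[0]}"

def size(t):
    tag = t[0]
    if tag == "var":
        return 1
    if tag == "app":
        return 1 + size(t[1]) + size(t[2])
    # To look under the binder we must apply f to a made-up fresh variable,
    # and the body we then traverse is an open term: these are the "open
    # objects" the abstract mentions. Contextual modal types are, roughly,
    # a type discipline that keeps such variables from escaping their scope.
    return 1 + size(t[1](Var(gensym())))

assert size(Lam(lambda x: App(x, x))) == 4   # lam + app + var + var
```

This is only my gloss on the abstract, not a summary of the paper's actual machinery.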
A Verified Compiler for an Impure Functional Language
We present a verified compiler to an idealized assembly language from a small, untyped functional language with mutable references and exceptions. The compiler is programmed in the Coq proof assistant and has a proof of total correctness with respect to big-step operational semantics for the source and target languages. Compilation is staged and includes standard phases like translation to continuation-passing style and closure conversion, as well as a common subexpression elimination optimization. In this work, our focus has been on discovering and using techniques that make our proofs easy to engineer and maintain. While most programming language work with proof assistants uses very manual proof styles, all of our proofs are implemented as adaptive programs in Coq's tactic language, making it possible to reuse proofs unchanged as new language features are added.
In this paper, we focus especially on phases of compilation that rearrange the structure of syntax with nested variable binders. That aspect has been a key challenge area in past compiler verification projects, with much more effort expended in the statement and proof of binder-related lemmas than is found in standard pencil-and-paper proofs. We show how to exploit the representation technique of parametric higher-order abstract syntax to avoid the need to prove any of the usual lemmas about binder manipulation, often leading to proofs that are actually shorter than their pencil-and-paper analogues. Our strategy is based on a new approach to encoding operational semantics which delegates all concerns about substitution to the meta language, without using features incompatible with general-purpose type theories like Coq's logic.
The latest from Adam Chlipala. Yet another evolutionary step for Lambda Tamer. Between this and Ynot the Coq/certified compiler story seems to be getting more impressive nearly daily.
LNgen
LNgen generates locally nameless infrastructure for the Coq proof assistant from Ott-like specifications. Its output is based on the locally nameless style advocated in Engineering Formal Metatheory and includes all of the "infrastructure" lemmas associated with that style. Compared to Ott's locally nameless backend, LNgen favors generating a large collection of infrastructure lemmas over handling complex binding specifications and methods of defining syntax and relations.
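For readers unfamiliar with the representation itself: in locally nameless style, bound variables are de Bruijn indices while free variables are names, and the "infrastructure" LNgen generates consists of lemmas about operations like *opening*, which replaces index 0 with a given term when you step under a binder. A minimal Python sketch of that operation (mine, not LNgen's generated Coq code):

```python
# Locally nameless terms: indices for bound variables, names for free ones.

def BVar(i):     return ("bvar", i)      # bound variable: de Bruijn index
def FVar(x):     return ("fvar", x)      # free variable: name
def Lam(body):   return ("lam", body)    # body refers to its variable as index 0
def App(e1, e2): return ("app", e1, e2)

def open_rec(k, u, t):
    tag = t[0]
    if tag == "bvar":
        return u if t[1] == k else t
    if tag == "fvar":
        return t
    if tag == "lam":
        return Lam(open_rec(k + 1, u, t[1]))   # under a binder, shift the index
    return App(open_rec(k, u, t[1]), open_rec(k, u, t[2]))

def open_term(body, u):
    return open_rec(0, u, body)

# Opening the body of (lam. 0) with fresh name x yields x itself:
assert open_term(BVar(0), FVar("x")) == FVar("x")
# In the body (lam. (1 0)), index 1 points one binder further out, so opening
# with x substitutes it while leaving the inner bound variable alone:
assert open_term(Lam(App(BVar(1), BVar(0))), FVar("x")) \
       == Lam(App(FVar("x"), BVar(0)))
```

Each such operation drags along a family of lemmas (commutation, freshness, substitution), which is exactly the boilerplate LNgen automates.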
There are really three stories here:
 Coq 8.2 shipped a while ago.
 Ott, a tool for PLT semantics work, has acquired a backend in support of the increasingly popular "locally nameless" representation of binding structure in mechanized programming language metatheory.
 LNgen is another tool, using a subset of Ott syntax, that takes a slightly different approach from Ott's new backend to addressing the same issues.
From the U. Penn folks who brought us the Coq tutorial at POPL '08.
