Type Theory
A Verified Compiler for an Impure Functional Language
We present a verified compiler to an idealized assembly language from a small, untyped functional language with mutable references and exceptions. The compiler is programmed in the Coq proof assistant and has a proof of total correctness with respect to big-step operational semantics for the source and target languages. Compilation is staged and includes standard phases like translation to continuation-passing style and closure conversion, as well as a common subexpression elimination optimization. In this work, our focus has been on discovering and using techniques that make our proofs easy to engineer and maintain. While most programming language work with proof assistants uses very manual proof styles, all of our proofs are implemented as adaptive programs in Coq's tactic language, making it possible to reuse proofs unchanged as new language features are added.
In this paper, we focus especially on phases of compilation that rearrange the structure of syntax with nested variable binders. That aspect has been a key challenge area in past compiler verification projects, with much more effort expended in the statement and proof of binder-related lemmas than is found in standard pencil-and-paper proofs. We show how to exploit the representation technique of parametric higher-order abstract syntax to avoid the need to prove any of the usual lemmas about binder manipulation, often leading to proofs that are actually shorter than their pencil-and-paper analogues. Our strategy is based on a new approach to encoding operational semantics which delegates all concerns about substitution to the meta language, without using features incompatible with general purpose type theories like Coq's logic.
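For readers who haven't met parametric higher-order abstract syntax, here is a minimal Coq sketch of the idea (mine, not the paper's actual development): object-language binders are represented by Coq-level functions, so no substitution or alpha-conversion lemmas ever need to be stated.

```coq
(* A minimal PHOAS sketch (not the paper's development): object-level
   binders are Coq-level functions, so no substitution machinery is needed. *)
Inductive term (var : Type) : Type :=
| Var : var -> term var
| App : term var -> term var -> term var
| Abs : (var -> term var) -> term var.

Arguments Var {var} _.
Arguments App {var} _ _.
Arguments Abs {var} _.

(* A closed term must work for every choice of variable type; this
   parametricity is what rules out "exotic" terms. *)
Definition Term := forall var, term var.

(* Examples: the identity function \x. x and self-application \x. x x. *)
Definition ident : Term := fun var => Abs (fun x => Var x).
Definition self_app : Term := fun var => Abs (fun x => App (Var x) (Var x)).

(* Syntactic analyses pick a convenient variable type; with var := unit we
   can count variable occurrences, going under binders by supplying tt. *)
Fixpoint count_vars (e : term unit) : nat :=
  match e with
  | Var _ => 1
  | App e1 e2 => count_vars e1 + count_vars e2
  | Abs f => count_vars (f tt)
  end.
```

Keeping the variable type abstract, and quantifying over it for closed terms, is what lets analyses like the one above recurse under binders without any variable bookkeeping.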
Further work on/with LambdaTamer for certified compiler development.
Certified Programming With Dependent Types
From the introduction:
We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant.
This is the best Coq tutorial that I know of, partly for being comprehensive, and partly for taking a very different tack than most, with Adam's emphasis on proof automation using Coq's Ltac tactic language. It provides an invaluable education toward understanding what's going on in either LambdaTamer or Ynot, both of which are important projects in their own right.
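To give a taste of that style, here is a small, self-contained example (mine, not from the book) of the kind of adaptive Ltac script Adam has in mind: the proof names no constructors, so it keeps working if the datatype grows.

```coq
(* A small example (not from the book) of an "adaptive" Ltac proof:
   the script names no constructors, so it survives changes to the AST. *)
Require Import Arith.

Inductive exp : Type :=
| Const : nat -> exp
| Plus : exp -> exp -> exp
| Mult : exp -> exp -> exp.

Fixpoint eval (e : exp) : nat :=
  match e with
  | Const n => n
  | Plus a b => eval a + eval b
  | Mult a b => eval a * eval b
  end.

(* Mirror every operator's arguments. *)
Fixpoint swap (e : exp) : exp :=
  match e with
  | Const n => Const n
  | Plus a b => Plus (swap b) (swap a)
  | Mult a b => Mult (swap b) (swap a)
  end.

(* One generic script: rewrite with whatever induction hypotheses are
   around, then finish with commutativity facts. Adding another
   commutative operator to [exp] would not require touching the proof. *)
Lemma swap_sound : forall e, eval (swap e) = eval e.
Proof.
  induction e; simpl;
    repeat match goal with
           | [ IH : eval (swap _) = eval _ |- _ ] => rewrite IH
           end;
    auto using Nat.add_comm, Nat.mul_comm.
Qed.
```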
Please note that Adam is explicitly requesting feedback on this work.
Noam (uid 3913) announced on his weblog at the beginning of the year a technique that allows one to bridge the gap between meta-theoretic notions of function space and theory-internal notions, in a way that is compatible with structural induction over Higher-Order Abstract Syntax, by applying Reynolds' Definitional Interpreters for Higher-Order Programming Languages [pdf] (cf. LtU classic), and realised it as an implementation in Twelf.
That's a lot of ideas in one sentence, but since HOAS is in the air this month, I guess there are a good number of LtUers who will want to get to grips with this stuff.
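As a quick Coq-side illustration of the gap Noam is bridging (my aside, not part of his Twelf development): the naive HOAS datatype that borrows the meta-level function space for binders is rejected outright by Coq's strict positivity check.

```coq
(* The naive HOAS datatype reuses Coq's function space for binders, but the
   negative occurrence of [term] violates strict positivity, so Coq refuses
   the declaration (the Fail command records the expected failure). *)
Fail Inductive term : Type :=
| App : term -> term -> term
| Lam : (term -> term) -> term.
```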
Refs:
Effective Interactive Proofs for Higher-Order Imperative Programs
We present a new approach for constructing and verifying higher-order, imperative programs using the Coq proof assistant. We build on the past work on the Ynot system, which is based on Hoare Type Theory. That original system was a proof of concept, where every program verification was accomplished via laborious manual proofs, with much code devoted to uninteresting low-level details. In this paper, we present a re-implementation of Ynot which makes it possible to implement fully-verified, higher-order imperative programs with reasonable proof burden. At the same time, our new system is implemented entirely in Coq source files, showcasing the versatility of that proof assistant as a platform for research on language design and verification.
Both versions of the system have been evaluated with case studies in the verification of imperative data structures, such as hash tables with higher-order iterators. The verification burden in our new system is reduced by at least an order of magnitude compared to the old system, by replacing manual proof with automation. The core of the automation is a simplification procedure for implications in higher-order separation logic, with hooks that allow programmers to add domain-specific simplification rules.
We argue for the effectiveness of our infrastructure by verifying a number of data structures and a packrat parser, and we compare to similar efforts within other projects. Compared to competing approaches to data structure verification, our system includes much less code that must be trusted; namely, about a hundred lines of Coq code defining a program logic. All of our theorems and decision procedures have or build machine-checkable correctness proofs from first principles, removing opportunities for tool bugs to create faulty verifications.
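To make the flavour of those "implications in higher-order separation logic" concrete, here is a minimal, self-contained Coq sketch of the assertion language involved; the definitions are mine for illustration and are not Ynot's actual library.

```coq
(* A minimal, self-contained sketch of separation-logic assertions in Coq.
   These definitions are illustrative only; they are not Ynot's library. *)
Require Import Coq.Logic.FunctionalExtensionality.

Definition loc := nat.
Definition heap := loc -> option nat.
Definition hprop := heap -> Prop.

Definition emp : hprop := fun h => forall l, h l = None.

(* l |-> v : the heap holds exactly one cell, at l, containing v. *)
Definition ptsto (l : loc) (v : nat) : hprop :=
  fun h => h l = Some v /\ forall l', l' <> l -> h l' = None.

Definition disjoint (h1 h2 : heap) : Prop :=
  forall l, h1 l = None \/ h2 l = None.

Definition union (h1 h2 : heap) : heap :=
  fun l => match h1 l with Some v => Some v | None => h2 l end.

(* Separating conjunction: the heap splits into two disjoint pieces. *)
Definition star (p q : hprop) : hprop :=
  fun h => exists h1 h2,
      disjoint h1 h2 /\ h = union h1 h2 /\ p h1 /\ q h2.

Notation "p ** q" := (star p q) (at level 40).

(* The simplifier discharges implications between such assertions; here is
   one unit law proved by hand. *)
Lemma star_emp_r : forall (p : hprop) (h : heap), (p ** emp) h -> p h.
Proof.
  intros p h [h1 [h2 [Hd [Hu [Hp He]]]]].
  unfold emp in He.
  assert (Heq : union h1 h2 = h1).
  { apply functional_extensionality; intro l.
    unfold union. rewrite He. destruct (h1 l); reflexivity. }
  rewrite Hu, Heq. exact Hp.
Qed.
```

Proving dozens of such implications by hand is exactly the burden the paper's simplification procedure automates away.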
Adam Chlipala has been telling us how underutilized Coq's Ltac tactic programming language for proof automation is for years. Here is the... er... proof.
Certified Web Services in Ynot
In this paper we demonstrate that it is possible to implement certified web systems in a way not much different from writing Standard ML or Haskell code, including use of imperative features like pointers, files, and socket I/O. We present a web-based course gradebook application developed with Ynot, a Coq library for certified imperative programming. We add a dialog-based I/O system to Ynot, and we extend Ynot's underlying Hoare logic with event traces to reason about I/O behavior. Expressive abstractions allow the modular certification of both high level specifications like privacy guarantees and low level properties like memory safety and correct parsing.
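The trace idea is easy to sketch: I/O behaviour is specified as a predicate over the list of events a program performs. The fragment below is my own illustration of the shape such specifications take, not Ynot's actual trace machinery.

```coq
(* An illustrative sketch of trace-based I/O specifications in Coq;
   these definitions are mine, not Ynot's actual trace machinery. *)
Require Import List String.

(* Observable I/O events; a trace is the sequence of events performed. *)
Inductive event : Type :=
| ReadLine : string -> event
| WriteLine : string -> event.

Definition trace := list event.

(* A sample behavioural specification: every line that is read is
   eventually echoed back somewhere in the trace. *)
Definition echoes (t : trace) : Prop :=
  forall s, In (ReadLine s) t -> In (WriteLine s) t.

(* A privacy-style property: a given secret never appears in the output. *)
Definition never_outputs (secret : string) (t : trace) : Prop :=
  ~ In (WriteLine secret) t.

(* Trivially, the empty trace echoes everything it read. *)
Lemma echoes_nil : echoes nil.
Proof. intros s H. simpl in H. destruct H. Qed.
```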
Ynot, always ambitious, takes another serious swing: extracting a real web application from a proof development. In some respects the big news here is the additional coverage that Ynot now offers in terms of support for file and socket I/O, and the event trace mechanism. But there's even bigger news, IMHO, which is the subject of another paper that warrants a separate post.
In Observational Equality, Now!, Thorsten Altenkirch, Conor McBride, and Wouter Swierstra have something new and positive to say about propositional equality in programming and proof systems based on the Curry-Howard correspondence between propositions and types:
We have found a way to present a propositional equality type
- which is substitutive, allowing us to reason by replacing equal for equal in propositions;
- which reflects the observable behaviour of values rather than their construction: in particular, we have extensionality — functions are equal if they take equal inputs to equal outputs;
- which retains strong normalisation, decidable typechecking and canonicity — the property that closed normal forms inhabiting datatypes have canonical constructors;
- which allows inductive data structures to be expressed in terms of a standard characterisation of well-founded trees;
- which is presented syntactically — you can implement it directly, and we are doing so—this approach stands at the core of Epigram 2;
- which you can play with now: we have simulated our system by a shallow embedding in Agda 2, shipping as part of the standard examples package for that system [21].
Until now, it has always been necessary to sacrifice some of these aspects. The closest attempt in the literature is Altenkirch's construction of a setoid-model for a system with canonicity and extensionality on top of an intensional type theory with proof-irrelevant propositions [4]. Our new proposal simplifies Altenkirch's construction by adopting McBride's heterogeneous approach to equality.
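To see concretely what the extensionality point buys, here is a small Coq experiment (my example, not from the paper): in Coq's intensional theory, pointwise-equal functions are only propositionally equal if you assume an axiom, whereas OTT makes such equalities hold by construction.

```coq
(* In Coq's intensional theory, pointwise-equal functions need not be
   propositionally equal; an extensionality axiom has to be assumed.
   OTT makes this kind of equality hold by construction. *)
Require Import Coq.Logic.FunctionalExtensionality.

Definition f (n : nat) : nat := n + 0.
Definition g (n : nat) : nat := n.

(* Pointwise equality is provable by induction. *)
Lemma fg_pointwise : forall n, f n = g n.
Proof.
  intros n; unfold f, g; induction n; simpl.
  - reflexivity.
  - rewrite IHn; reflexivity.
Qed.

(* But [f = g] itself is only provable via the axiom packaged in
   the FunctionalExtensionality library. *)
Lemma fg_equal : f = g.
Proof. apply functional_extensionality. exact fg_pointwise. Qed.
```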
Verified Programming in Guru is a tutorial introduction to Guru:
GURU is a pure functional programming language, which is similar in some ways to Caml and Haskell. But GURU also contains a language for writing formal proofs demonstrating the properties of programs. So there are really two languages: the language of programs, and the language of proofs.
In comparison to Coq:
GURU is inspired largely by the COQ theorem prover, used for formalized mathematics and theoretical computer science, as well as program verification. Like COQ, GURU has syntax for both proofs and programs, and supports dependent types. GURU does not have as complex forms of polymorphism and dependent types as COQ does. But GURU supports some features that are difficult or impossible for COQ to support, which are useful for practical program verification. In COQ, the compiler must be able to confirm that all programs are uniformly terminating: they must terminate on all possible inputs. We know from basic recursion theory or theoretical computer science that this means there are some programs which really do terminate on all inputs that the compiler will not be able to confirm do so. Furthermore, some programs, like web servers or operating systems, are not intended to terminate. So that is a significant limitation. Other features GURU has that COQ lacks include support for functional modeling of non-functional constructs like destructive updates of data structures and arrays; and better support for proving properties of dependently typed functions.
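For concreteness, here is the kind of program the comparison is about, in Coq (my example, not from the tutorial): Euclid's algorithm terminates on every input, but the recursive call is not on a structural subterm, so Coq rejects the direct definition and one falls back on tricks like an explicit fuel argument.

```coq
(* My illustration (not from the Guru tutorial): Euclid's algorithm
   terminates on all inputs, but the recursive call is not on a structural
   subterm of either argument, so Coq rejects the direct definition. *)
Require Import Arith.

Fail Fixpoint gcd (a b : nat) : nat :=
  match b with
  | 0 => a
  | S _ => gcd b (a mod b)
  end.

(* A standard workaround: recurse structurally on an explicit fuel bound. *)
Fixpoint gcd_fuel (fuel a b : nat) : nat :=
  match fuel with
  | 0 => a        (* out of fuel; never reached when fuel exceeds b *)
  | S fuel' =>
      match b with
      | 0 => a
      | S _ => gcd_fuel fuel' b (a mod b)
      end
  end.

Definition gcd (a b : nat) : nat := gcd_fuel (S b) a b.
```

Guru's point is that a language with general recursion sidesteps this kind of encoding entirely, at the cost of tracking termination in proofs rather than in the definition.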
The tutorial is worth a read to anybody new to this style of programming as it is one of the most gentle introductions to dependent types and automated program verification that I've seen.
On Monday, July 13th the C++ standards committee voted "Concepts" out of consideration for C++0x.
First, skepticism regarding the feasibility and usefulness of concepts intensified the antipathy towards this proposal. Some people expressed concerns about compile-time and runtime overhead. Second, the creators of the Concepts proposal tried desperately to improve and patch Concepts. The last nail in the coffin was Bjarne Stroustrup's paper "Simplifying the Use of Concepts" from June. It's a masterpiece in terms of presenting Concepts but it also scared folks. The general sense was that concepts were broken, the committee was not sure what the correct direction was to fix them, and it would probably take several more years to come up with a reasonable fix that would achieve consensus. Considering that Concepts were originally designed to simplify C++, a critical mass of committee members agreed in July 2009 that it was time to bid Concepts goodbye.
Edit:
For more on the meeting see "The View (or trip report) from the July 2009 C++ Standard Meeting" part 1 and part 2
Edit 2:
Bjarne Stroustrup on The C++0x "Remove Concepts" Decision.
Translation of Tree-processing Programs into Stream-processing Programs based on Ordered Linear Types, Koichi Kodama, Kohei Suenaga, Naoki Kobayashi, JFP 2008.
There are two ways to write a program for manipulating tree-structured data such as XML documents: One is to write a tree-processing program focusing on the logical structure of the data and the other is to write a stream-processing program focusing on the physical structure. While tree-processing programs are easier to write than stream-processing programs, tree-processing programs are less efficient in memory usage since they use trees as intermediate data. Our aim is to establish a method for automatically translating a tree-processing program to a stream-processing one in order to take the best of both worlds.
We first define a programming language for processing binary trees and a type system based on ordered linear types, and show that every well-typed program can be translated to an equivalent stream-processing program. We then extend the language and the type system to deal with XML documents. We have implemented an XML stream processor generator based on our algorithm, and obtained promising experimental results.
Linear logic is what you get when you give up the structural rules of weakening and contraction -- it's logic when you cannot reuse or forget any hypothesis you make in the course of a proof. This means that every formula is used exactly once, which makes it useful (via Curry-Howard) for programming, as well: linear types give us a way of tracking resources and state in a type system. Intuitively, you can think of it as a kind of static analysis that ensures that an object's runtime reference count will always be one.
However, linear logic retains the structural rule of exchange, which lets us use hypotheses in a different order than we introduced them. Ordered logic is what you get when you drop exchange, as well. (Amusingly, it was invented long before linear logic -- in the 1950s, Lambek introduced it with an eye towards applications in linguistics: he wanted to express the difference between one word occurring either before, or after, another.)
This means that you can use ordered logic to express, in your types, the order in which you use data as well, which the authors here use to design a language whose typechecker statically rules out memory-inefficient programs.
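To make the tree-versus-stream distinction concrete, here is a small Coq sketch (mine, not the paper's calculus, and with none of the ordered-type machinery): the same data viewed as a tree and as the kind of token stream the generated stream processors consume.

```coq
(* A small Coq sketch (not the paper's calculus, and without the
   ordered-type machinery): the same data as a tree and as a token stream. *)
Require Import List.

Inductive tree : Type :=
| Leaf : nat -> tree
| Node : tree -> tree -> tree.

(* Stream tokens; since Nodes always have exactly two children, a single
   prefix marker is enough to parse the stream unambiguously. *)
Inductive token : Type :=
| TLeaf : nat -> token
| TNode : token.

(* Tree-processing style: walk the logical structure directly. *)
Fixpoint sum_tree (t : tree) : nat :=
  match t with
  | Leaf n => n
  | Node l r => sum_tree l + sum_tree r
  end.

(* The corresponding stream: a prefix-order serialization of the tree.
   The paper's translation produces programs that consume such streams
   directly, without ever rebuilding the intermediate tree in memory. *)
Fixpoint serialize (t : tree) : list token :=
  match t with
  | Leaf n => TLeaf n :: nil
  | Node l r => TNode :: (serialize l ++ serialize r)
  end.
```

The ordered linear type system is what guarantees that a tree-processing program like sum_tree consumes its input in exactly the order serialize produces it, which is what makes the automatic translation to a stream processor sound.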
