Semantics
Peter Smith, The Galois connection between syntax and semantics.
explains Lawvere's remark about 'the familiar Galois connection between sets of axioms and classes of models, for a fixed [signature]'
Seems like a rather nice introduction to the notion of Galois connection (I seem to remember someone asking about this a while back).
One place Galois connections pop up is in the realm of Abstract Interpretation.
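To make the connection concrete, here is a small sketch (my own illustration, not from Smith's note) of the classic sign-abstraction Galois connection from abstract interpretation: an abstraction map `alpha` and a concretization map `gamma` satisfying `alpha(s) <= a` iff `s` is a subset of `gamma(a)`.

```python
# Sketch of a Galois connection between sets of integers (ordered by
# inclusion) and a five-point sign lattice (ordered by precision).
# The integer universe is truncated to [-100, 100] to keep the demo finite.

BOT, NEG, ZERO, POS, TOP = "bot", "neg", "zero", "pos", "top"

# Partial order on the abstract lattice: leq(a, b) means a is at least
# as precise as b.
LEQ = {
    (BOT, BOT), (BOT, NEG), (BOT, ZERO), (BOT, POS), (BOT, TOP),
    (NEG, NEG), (NEG, TOP), (ZERO, ZERO), (ZERO, TOP),
    (POS, POS), (POS, TOP), (TOP, TOP),
}

def leq(a, b):
    return (a, b) in LEQ

def gamma(a):
    """Concretization: the set of integers an abstract value describes."""
    return {
        BOT: set(),
        NEG: set(range(-100, 0)),
        ZERO: {0},
        POS: set(range(1, 101)),
        TOP: set(range(-100, 101)),
    }[a]

def alpha(s):
    """Abstraction: the most precise abstract value covering the set s.
    The scan order is a linear extension of `leq`, so the first hit
    is the least (best) abstraction."""
    for a in (BOT, NEG, ZERO, POS, TOP):
        if s <= gamma(a):
            return a
```

The defining adjunction — `leq(alpha(s), a)` exactly when `s <= gamma(a)` — is what makes this a Galois connection, and it is the same shape as Lawvere's axioms-and-models example: adding axioms shrinks the class of models, and vice versa.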
Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic
Milawa is a "self-verifying" theorem prover for an ACL2-like logic.
We begin with a simple proof checker, call it A, which is short enough to verify by the "social process" of mathematics.
We then develop a series of increasingly powerful proof checkers, call them B, C, D, and so on. We show that each of these is sound: they accept only the same formulas as A. We use A to verify B, and B to verify C, and so on. Then, since we trust A, and A says B is sound, we can trust B, and so on for C, D, and the rest.
Our final proof checker is really a theorem prover; it can carry out a goal-directed proof search using assumptions, calculation, rewrite rules, and so on. We use this theorem prover to discover the proofs of soundness for B, C, and so on, and to emit these proofs in a format that A can check. Hence, "self-verifying."
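The bootstrapping shape can be sketched in miniature (a toy entirely of my own devising, vastly simpler than Milawa's logic): checker A accepts only primitive inference steps, while checker B also accepts a derived rule — and we trust B precisely because every B-proof expands into an A-proof that A accepts.

```python
# Toy model of the Milawa bootstrapping idea. Formulas are strings;
# a proof is a list of (rule, premises, conclusion) steps.

AXIOMS = {"p", "p -> q", "q -> r"}

def check_a(proof, goal):
    """Checker A: accepts only axioms and modus ponens."""
    proved = set(AXIOMS)
    for rule, premises, conclusion in proof:
        if rule != "mp":
            return False
        a, imp = premises
        if a in proved and imp in proved and imp == f"{a} -> {conclusion}":
            proved.add(conclusion)
        else:
            return False
    return goal in proved

def expand_chain(a, b, c):
    """Expand B's derived 'chain' rule into primitive A-steps."""
    return [("mp", (a, f"{a} -> {b}"), b),
            ("mp", (b, f"{b} -> {c}"), c)]

def check_b(proof, goal):
    """Checker B: also accepts 'chain' (from a, a->b, b->c infer c).
    Its soundness argument is the expansion: B accepts a proof exactly
    when A accepts the expanded proof."""
    expanded = []
    for rule, premises, conclusion in proof:
        if rule == "chain":
            expanded += expand_chain(premises[0], premises[1], conclusion)
        else:
            expanded.append((rule, premises, conclusion))
    return check_a(expanded, goal)
```

In Milawa the expansion argument is itself a formal proof emitted in A's format, rather than a meta-level function call as here — that is the part that makes the real system self-verifying.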
This might help inform discussions of the relationship between the de Bruijn criterion (the "social process" of mathematics) and formal verification. I think it also serves as an interesting signpost on the road forward: it's one thing to say that starting with a de Bruijn core and evolving a more powerful prover is possible in principle; it's another thing for it to actually have been done. The author's thesis defense slides provide a nice, quick overview.
A Lambda Calculus for Real Analysis
Abstract Stone Duality is a revolutionary paradigm for general topology that describes computable continuous functions directly, without using set theory, infinitary lattice theory or a prior theory of discrete computation. Every expression in the calculus denotes both a continuous function and a program, and the reasoning looks remarkably like a sanitised form of that in classical topology. This is an introduction to ASD for the general mathematician, with application to elementary real analysis.
This language is applied to the Intermediate Value Theorem: the solution of equations for continuous functions on the real line. As is well known from both numerical and constructive considerations, the equation cannot be solved if the function "hovers" near 0, whilst tangential solutions will never be found.
In ASD, both of these failures and the general method of finding solutions of the equation when they exist are explained by the new concept of overtness. The zeroes are captured, not as a set, but by higher-type modal operators. Unlike the Brouwer degree, these are defined and (Scott) continuous across singularities of a parametric equation.
Expressing topology in terms of continuous functions rather than sets of points leads to treatments of open and closed concepts that are very closely lattice (or de Morgan) dual, without the double negations that are found in intuitionistic approaches. In this, the dual of compactness is overtness. Whereas meets and joins in locale theory are asymmetrically finite and infinite, they have overt and compact indices in ASD.
Overtness replaces metrical properties such as total boundedness, and cardinality conditions such as having a countable dense subset. It is also related to locatedness in constructive analysis and recursive enumerability in recursion theory.
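The "higher-type modal operators" of the abstract can be glossed roughly as follows (the notation is my own paraphrase, not ASD's calculus, which pointedly avoids the set $Z$): writing $Z$ informally for the zero set of $f$, compactness and overtness of $Z$ amount to the two quantifiers

```latex
% Informal gloss only; ASD expresses these operators without sets of points.
\Box\,\phi \;\equiv\; \forall x \in Z.\;\phi(x)
  \qquad \text{(compactness of } Z \text{: a universal modality)}
\Diamond\,\phi \;\equiv\; \exists x \in Z.\;\phi(x)
  \qquad \text{(overtness of } Z \text{: an existential modality)}
```

Because $\Box$ and $\Diamond$ act on open predicates $\phi$ rather than membership questions about points, they can remain Scott-continuous as $f$ varies — which is the point of the abstract's contrast with the Brouwer degree.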
Paul Taylor is deadly serious about the intersection of logic, mathematics, and computation. I came across this after beating my head against Probability Theory: The Logic of Science and Axiomatic Theory of Economics over the weekend, realizing that my math just wasn't up to the tasks, and doing a Google search for "constructive real analysis." "Real analysis" because it was obvious that that was what both of the aforementioned texts were relying on; "constructive" because I'd really like to develop proofs in Coq/extract working code from them. This paper was on the second page of results. Paul's name was familiar (and not just because I share it with him); he translated Jean-Yves Girard's regrettably out-of-print Proofs and Types to English and maintains a very popular set of tools for typesetting commutative diagrams using LaTeX.
Joe-E: A Security-Oriented Subset of Java. Adrian Mettler, David Wagner, and Tyler Close. To appear at ISOC NDSS 2010.
We present Joe-E, a language designed to support the development of secure software systems. Joe-E is a subset of Java that makes it easier to architect and implement programs with strong security properties that can be checked
during a security review. It enables programmers to apply the principle of least privilege to their programs; implement application-specific reference monitors that cannot be bypassed; introduce and use domain-specific security abstractions; safely execute and interact with untrusted code; and build secure, extensible systems. Joe-E demonstrates how it is possible to achieve the strong security properties of an object-capability language while retaining the features and feel of a mainstream object-oriented language...
Section 5.2 discusses how Joe-E leverages Java's static typing. Joe-E is implemented as a source-code verifier, not a bytecode verifier. Section 6 of the paper explains this design choice.
Joe-E was mentioned on LtU in the past and is available here.
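The object-capability patterns Joe-E enforces can be sketched in any language (here Python, purely for illustration — unlike Joe-E's verified Java subset, Python offers no actual enforcement, so this shows the shape of the discipline rather than its guarantees): authority is an object reference, and least privilege means handing untrusted code an attenuated facet rather than the full object.

```python
# Object-capability sketch: a full-authority object, an attenuating
# wrapper acting as a tiny reference monitor, and an untrusted client
# that only ever receives the attenuated facet.

class Counter:
    """Full authority: read and increment."""
    def __init__(self):
        self._n = 0
    def increment(self):
        self._n += 1
    def value(self):
        return self._n

def read_only(counter):
    """Attenuation: clients of the facet can read but not write.
    The monitor cannot be bypassed as long as the underlying reference
    is never leaked (Joe-E's subset rules out the leaks; Python doesn't)."""
    class ReadOnlyCounter:
        def value(self):
            return counter.value()
    return ReadOnlyCounter()

def untrusted_client(view):
    # Least privilege: this code holds only the read-only capability,
    # so it has no means of incrementing the counter.
    return view.value()
```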
Monads in Action, Andrzej Filinski, POPL 2010.
In functional programming, monadic characterizations of computational effects are normally understood denotationally: they describe how an effectful program can be systematically expanded or translated into a larger, pure program, which can then be evaluated according to an effect-free semantics. Any effect-specific operations expressible in the monad are also given purely functional definitions, but these definitions are only directly executable in the context of an already translated program. This approach thus takes an inherently Church-style view of effects: the nominal meaning of every effectful term in the program depends crucially on its type.
We present here a complementary, operational view of monadic effects, in which an effect definition directly induces an imperative behavior of the new operations expressible in the monad. This behavior is formalized as additional operational rules for only the new constructs; it does not require any structural changes to the evaluation judgment. Specifically, we give a small-step operational semantics of a prototypical functional language supporting programmer-definable, layered effects, and show how this semantics naturally supports reasoning by familiar syntactic techniques, such as showing soundness of a Curry-style effect-type system by the progress+preservation method.
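The denotational "expansion" view the abstract starts from can be made concrete with the state monad (my own minimal illustration, not Filinski's formalism): the effectful program below is nothing but a pure state-passing function once the monadic plumbing is unfolded.

```python
# The state monad, explicitly: a computation of type "State s a" is a
# pure function s -> (a, s). An "effectful" program written with get/put
# expands into ordinary function composition.

def unit(a):
    return lambda s: (a, s)

def bind(m, f):
    def run(s):
        a, s1 = m(s)        # run m, threading the state through
        return f(a)(s1)     # feed its result and new state to f
    return run

get = lambda s: (s, s)              # read the current state

def put(s_new):
    return lambda s: (None, s_new)  # overwrite the state

# "Increment the state and return the old value" -- pure after expansion.
prog = bind(get, lambda n: bind(put(n + 1), lambda _: unit(n)))
```

Running `prog` on an initial state is just applying a pure function: `prog(41)` yields the pair `(41, 42)`. The paper's contribution is the complementary reading, where `get`/`put` instead get their own small-step rules and run imperatively.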
The idea of monadic reflection was one I never felt I really understood properly until I read this paper, so now I'll have to go back and reread some of his older papers on the subject.
Delimited Control in OCaml, Abstractly and Concretely, System Description
We describe the first implementation of multi-prompt delimited control operators in OCaml that is direct in that it captures only the needed part of the control stack. The implementation is a library that requires no changes to the OCaml compiler or runtime, so it is perfectly compatible with existing OCaml source code and bytecode. The library has been in fruitful practical use for four years.
We present the library as an implementation of an abstract machine derived by elaborating the definitional machine. The abstract view lets us distill a minimalistic API, scAPI, sufficient for implementing multi-prompt delimited control. We argue that a language system that supports exception and stack-overflow handling supports scAPI. Our library illustrates how to use scAPI to implement multi-prompt delimited control in a typed language. The approach is general and can be used to add multi-prompt delimited control to other existing language systems.
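The half of multi-prompt control that ordinary exception handling already provides — abortively escaping to a dynamically enclosing prompt — is easy to sketch (names are mine; this is an illustration of the idea, not the paper's scAPI, whose other half, capturing the stack segment for later reinstatement, needs the runtime-level machinery the paper describes):

```python
# Abortive multi-prompt control via exceptions: push_prompt delimits a
# computation, and abort_to discards the context up to a chosen prompt.

class Abort(Exception):
    def __init__(self, prompt, value):
        self.prompt, self.value = prompt, value

class Prompt:
    """A fresh, unforgeable prompt: identity is all that matters."""

def push_prompt(p, thunk):
    """Run thunk; if it aborts to prompt p, answer with the aborted value."""
    try:
        return thunk()
    except Abort as e:
        if e.prompt is p:
            return e.value
        raise               # someone else's prompt: keep unwinding

def abort_to(p, value):
    raise Abort(p, value)
```

Because prompts are compared by identity, an abort correctly unwinds through prompts it does not match — the "multi-prompt" part — while full delimited control would additionally save the unwound frames so the continuation could be resumed.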
Oleg was kind enough to send me an email letting me know of this paper's existence (it appears not yet to be linked from the "Computation" page under which it is stored) and to include me in the acknowledgements. Since the paper in its current form has been accepted for publication, he indicated that it can be made more widely available, so here it is. In typical Oleg fashion, it offers insights at both the theoretical and implementation levels.
Verified Just-In-Time Compiler on x86
Magnus O. Myreen
This paper presents a method for creating formally correct just-in-time (JIT) compilers. The tractability of our approach is demonstrated through, what we believe is the first, verification of a JIT compiler with respect to a realistic semantics of self-modifying x86 machine code. Our semantics includes a model of the instruction cache. Two versions of the verified JIT compiler are presented: one generates all of the machine code at once; the other is incremental, i.e. it produces code on demand. All proofs have been performed inside the HOL4 theorem prover.
(To appear in next week's POPL.)
I've been enjoying this paper on my commute this week. It's a nice little distillation of some of the basics of the engineering structure of a JITted language and how the pieces fit together in a correct implementation. As JIT compilers become more and more commonplace, I'd like to see them presented in such a way that they're no more scary or daunting (at least in principle) than traditional offline compilers. Perhaps a chapter in EoPL4?
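The incremental, on-demand shape of the second verified compiler is worth making concrete. A toy sketch (my own, nowhere near the paper's self-modifying x86 setting) using Python's built-in `compile`: each function is compiled the first time it is called, and the generated code is cached for subsequent calls.

```python
# Toy on-demand ("lazy") JIT: sources are compiled at first call and
# cached, so later calls skip compilation entirely. The hypothetical
# SOURCES table stands in for whatever front-end produces the code.

SOURCES = {
    "double": "lambda x: x + x",
    "square": "lambda x: x * x",
}

code_cache = {}

def jit_call(name, arg):
    if name not in code_cache:
        # First call: compile the source and install the result.
        code_obj = compile(SOURCES[name], f"<jit:{name}>", "eval")
        code_cache[name] = eval(code_obj)
    return code_cache[name](arg)
```

In the real system, installing compiled code means writing machine code into executable memory — which is exactly why the verified semantics has to model self-modification and the instruction cache.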
Syntactic Proofs of Compositional Compiler Correctness
Semantic preservation by compilers for higher-order languages can be verified using simple syntactic methods. At the heart of classic techniques are relations between source-level and target-level values. Unfortunately, these relations are specific to particular compilers, leading to correctness theorems that have nothing to say about linking programs with functions compiled by other compilers or written by hand in the target language. Theorems based on logical relations manage to avoid this problem, but at a cost: standard logical relations do not apply directly to programs with non-termination or impurity, and extensions to handle those features are relatively complicated, compared to the classical compiler verification literature.
In this paper, we present a new approach to "open" compiler correctness theorems that is "syntactic" in the sense that the core relations do not refer to semantics. Though the technique is much more elementary than previous proposals, it scales up nicely to realistic languages. In particular, untyped and impure programs may be handled simply, while previous work has addressed neither in this context.
Our approach is based on the observation that it is an unnecessary handicap to consider proofs as black boxes. We identify some theorem-specific proof skeletons, such that we can define an algebra of nondeterministic compilations and their proofs, and we can compose any two compilations to produce a correct-by-construction result. We have prototyped these ideas with a Coq implementation of multiple CPS translations for an untyped Mini-ML source language with recursive functions, sums, products, mutable references, and exceptions.
A submitted draft of another paper from Adam, continuing to expand LambdaTamer's reach.
A Verified Compiler for an Impure Functional Language
We present a verified compiler to an idealized assembly language from a small, untyped functional language with mutable references and exceptions. The compiler is programmed in the Coq proof assistant and has a proof of total correctness with respect to big-step operational semantics for the source and target languages. Compilation is staged and includes standard phases like translation to continuation-passing style and closure conversion, as well as a common subexpression elimination optimization. In this work, our focus has been on discovering and using techniques that make our proofs easy to engineer and maintain. While most programming language work with proof assistants uses very manual proof styles, all of our proofs are implemented as adaptive programs in Coq's tactic language, making it possible to reuse proofs unchanged as new language features are added.
In this paper, we focus especially on phases of compilation that rearrange the structure of syntax with nested variable binders. That aspect has been a key challenge area in past compiler verification projects, with much more effort expended in the statement and proof of binder-related lemmas than is found in standard pencil-and-paper proofs. We show how to exploit the representation technique of parametric higher-order abstract syntax to avoid the need to prove any of the usual lemmas about binder manipulation, often leading to proofs that are actually shorter than their pencil-and-paper analogues. Our strategy is based on a new approach to encoding operational semantics which delegates all concerns about substitution to the meta language, without using features incompatible with general-purpose type theories like Coq's logic.
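The core trick — delegating substitution to the meta language — can be illustrated outside Coq (a sketch of my own in Python; the paper's parametric HOAS adds a parametricity discipline on top of this that Python cannot express): binders are represented as host-language functions, so "substitution" is simply function application.

```python
# Higher-order abstract syntax: the body of a lambda is a Python
# function from terms to terms, so beta-reduction never manipulates
# variable names -- the meta language does all the work.
# Terms: ("lit", value) | ("lam", python_function) | ("app", fn, arg)

def lam(body):
    return ("lam", body)   # body: term -> term, i.e. the binder itself

def app(f, a):
    return ("app", f, a)

def eval_term(t):
    tag = t[0]
    if tag == "lit":
        return t[1]
    if tag == "lam":
        return t[1]                        # a value: the Python closure
    if tag == "app":
        f = eval_term(t[1])
        a = eval_term(t[2])
        return eval_term(f(("lit", a)))    # substitution = function call
    raise ValueError(f"unknown term tag: {tag}")
```

Notice there is no environment, no de Bruijn arithmetic, and no capture-avoiding substitution function — the usual sources of the "binder-related lemmas" the abstract mentions.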
Further work on/with LambdaTamer for certified compiler development.
Certified Programming With Dependent Types
From the introduction:
We would all like to have programs check that our programs are correct. Due in no small part to some bold but unfulfilled promises in the history of computer science, today most people who write software, practitioners and academics alike, assume that the costs of formal program verification outweigh the benefits. The purpose of this book is to convince you that the technology of program verification is mature enough today that it makes sense to use it in a support role in many kinds of research projects in computer science. Beyond the convincing, I also want to provide a handbook on practical engineering of certified programs with the Coq proof assistant.
This is the best Coq tutorial that I know of, partially for being comprehensive, and partially for taking a very different tack than most, with Adam's emphasis on proof automation using Coq's Ltac tactic language. It provides an invaluable education toward understanding what's going on in either LambdaTamer or Ynot, both of which are important projects in their own rights.
Please note that Adam is explicitly requesting feedback on this work.
