Semantics

Handlers of Algebraic Effects

Gordon Plotkin and Matija Pretnar (2009). Handlers of Algebraic Effects:

We present an algebraic treatment of exception handlers and,
more generally, introduce handlers for other computational effects repre-
sentable by an algebraic theory. These include nondeterminism, interac-
tive input/output, concurrency, state, time, and their combinations; in
all cases the computation monad is the free-model monad of the theory.
Each such handler corresponds to a model of the theory for the effects
at hand. The handling construct, which applies a handler to a compu-
tation, is based on the one introduced by Benton and Kennedy, and is
interpreted using the homomorphism induced by the universal property
of the free model. This general construct can be used to describe previ-
ously unrelated concepts from both theory and practice.

Handling a computational effect, such as raising an exception, amounts to homomorphically mapping the handled computation onto another computation. So, for example, the raise operation is interpreted as the exception-handling code given to the handler.

While encompassing both returning and non-returning handlers, the idea becomes more interesting when you start to handle other effects, such as lookup and update. Then you can get things like state rollback when an exception occurs, and more besides (CCS renaming and hiding, stream redirection, timeouts). Thus the semantics of handlers gives rise to a new programming construct.
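A minimal OCaml sketch may help fix the idea (the encoding and all names are mine, not the paper's): computations are trees built from the operations of an algebraic theory, and handling is the homomorphism that replaces each operation with its interpretation. Interpreting Raise so that it discards pending updates gives exactly the state-rollback behaviour just mentioned.

    (* Computations over a theory with exceptions and integer state. *)
    type 'a comp =
      | Return of 'a
      | Raise  of string                 (* exception: no continuation *)
      | Lookup of (int -> 'a comp)       (* read the state, then continue *)
      | Update of int * 'a comp          (* write the state, then continue *)

    (* Handling is the induced homomorphism into a target model; here the
       target is state-passing functions, and Raise restores the initial
       state s0 -- rollback on exception. *)
    let rec handle (c : 'a comp) (s0 : int) (s : int)
        : ('a, string) result * int =
      match c with
      | Return x       -> (Ok x, s)
      | Raise msg      -> (Error msg, s0)   (* roll back to initial state *)
      | Lookup k       -> handle (k s) s0 s
      | Update (s', k) -> handle k s0 s'

    let run c s = handle c s s

    (* The update to 99 happens, then Raise discards it: final state is 1. *)
    let () =
      let prog = Update (99, Raise "boom") in
      assert (run prog 1 = (Error "boom", 1))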

If you want a gentler introduction to the subject (along with Plotkin's algebraic theory of effects and Levy's Call-by-Push-Value), try Pretnar's 2010 thesis.

The Galois connection between syntax and semantics

Peter Smith, The Galois connection between syntax and semantics.

explains Lawvere’s remark about ‘the familiar Galois connection between sets of axioms and classes of models, for a fixed [signature]’.

Seems like a rather nice introduction to the notion of Galois connection (I seem to remember someone asking about this a while back).

One place Galois connections pop up is in the realm of Abstract Interpretation.
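For a concrete instance, here is a textbook sign-domain Galois connection sketched in OCaml (standard abstract-interpretation material, not taken from Smith's note): abstraction alpha and concretization gamma satisfy alpha(S) ⊑ a exactly when every element of S satisfies gamma(a).

    type sign = Bot | Neg | Zero | Pos | Top

    (* Abstraction of a single integer. *)
    let alpha1 n = if n < 0 then Neg else if n = 0 then Zero else Pos

    (* Least upper bound in the sign lattice. *)
    let join a b = match a, b with
      | Bot, x | x, Bot -> x
      | x, y when x = y -> x
      | _ -> Top

    (* alpha: abstraction of a finite set of integers. *)
    let alpha ns = List.fold_left (fun acc n -> join acc (alpha1 n)) Bot ns

    (* gamma: concretization, given as a predicate on integers. *)
    let gamma = function
      | Bot  -> (fun _ -> false)
      | Neg  -> (fun n -> n < 0)
      | Zero -> (fun n -> n = 0)
      | Pos  -> (fun n -> n > 0)
      | Top  -> (fun _ -> true)

    (* The order on this flat lattice: Bot below everything, Top above. *)
    let leq a b = a = Bot || b = Top || a = b

    (* Spot-check the Galois condition on an example. *)
    let () =
      let ns = [1; 2; 3] in
      assert (leq (alpha ns) Pos);
      assert (List.for_all (gamma Pos) ns)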

Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic

Milawa: A Self-Verifying Theorem Prover for an ACL2-Like Logic

Milawa is a "self-verifying" theorem prover for an ACL2-like logic.

We begin with a simple proof checker, call it A, which is short enough to verify by the "social process" of mathematics.

We then develop a series of increasingly powerful proof checkers, call them B, C, D, and so on. We show that each of these is sound: they accept only the same formulas as A. We use A to verify B, and B to verify C, and so on. Then, since we trust A, and A says B is sound, we can trust B, and so on for C, D, and the rest.

Our final proof checker is really a theorem prover; it can carry out a goal-directed proof search using assumptions, calculation, rewrite rules, and so on. We use this theorem prover to discover the proofs of soundness for B, C, and so on, and to emit these proofs in a format that A can check. Hence, "self verifying."
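A toy caricature of that trust chain, sketched in OCaml (entirely illustrative; in Milawa the soundness argument for each layer is a genuine proof object that A checks, whereas this toy merely spot-checks the inclusion):

    (* "Formulas" are claims Eq (m, n) meaning m = n. *)
    type formula = Eq of int * int

    (* Checker A: the trusted core, simple enough to inspect by hand. *)
    let check_a (Eq (m, n)) = (m = n)

    (* Checker B: nominally more powerful; not yet trusted. *)
    let check_b (Eq (m, n)) = (m - n = 0)

    (* Trusting B means establishing: whatever B accepts, A accepts.
       In Milawa this is a real soundness proof emitted in a format A can
       check; here we only test the inclusion on sample formulas. *)
    let b_sound_on fs =
      List.for_all (fun f -> not (check_b f) || check_a f) fs

    let () = assert (b_sound_on [Eq (2, 2); Eq (3, 4); Eq (0, 0)])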

This might help inform discussions of the relationship between the de Bruijn criterion (the "social process" of mathematics) and formal verification. I think it also serves as an interesting signpost on the road forward: it's one thing to say that starting with a de Bruijn core and evolving a more powerful prover is possible in principle; it's another thing for it to actually have been done. The author's thesis defense slides provide a nice, quick overview.

A Lambda Calculus for Real Analysis

A Lambda Calculus for Real Analysis

Abstract Stone Duality is a revolutionary paradigm for general topology that describes computable continuous functions directly, without using set theory, infinitary lattice theory or a prior theory of discrete computation. Every expression in the calculus denotes both a continuous function and a program, and the reasoning looks remarkably like a sanitised form of that in classical topology. This is an introduction to ASD for the general mathematician, with application to elementary real analysis.

This language is applied to the Intermediate Value Theorem: the solution of equations for continuous functions on the real line. As is well known from both numerical and constructive considerations, the equation cannot be solved if the function "hovers" near 0, whilst tangential solutions will never be found.

In ASD, both of these failures and the general method of finding solutions of the equation when they exist are explained by the new concept of overtness. The zeroes are captured, not as a set, but by higher-type modal operators. Unlike the Brouwer degree, these are defined and (Scott) continuous across singularities of a parametric equation.

Expressing topology in terms of continuous functions rather than sets of points leads to treatments of open and closed concepts that are very closely lattice- (or de Morgan-) dual, without the double negations that are found in intuitionistic approaches. In this, the dual of compactness is overtness. Whereas meets and joins in locale theory are asymmetrically finite and infinite, they have overt and compact indices in ASD.

Overtness replaces metrical properties such as total boundedness, and cardinality conditions such as having a countable dense subset. It is also related to locatedness in constructive analysis and recursive enumerability in recursion theory.
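To make the "hovering" point concrete, here is ordinary floating-point bisection in OCaml (standard numerics, nothing specific to ASD): the search needs a definite sign change to make progress, which is precisely what a hovering function or a tangential zero denies it.

    (* Bisection under the usual precondition f a < 0 < f b. *)
    let rec bisect f a b eps =
      if b -. a < eps then (a +. b) /. 2.0
      else
        let m = (a +. b) /. 2.0 in
        if f m < 0.0 then bisect f m b eps else bisect f a m eps

    (* Solve x^3 = 2 on [0, 2]: the sign change is definite, so this works. *)
    let () =
      let r = bisect (fun x -> x *. x *. x -. 2.0) 0.0 2.0 1e-9 in
      assert (abs_float (r -. 1.2599210498948732) < 1e-6)

    (* For f x = x *. x, with a tangential zero at 0, no interval exhibits
       a definite sign change, so no such search can even get started --
       the failure the abstract attributes to "hovering" near 0. *)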

Paul Taylor is deadly serious about the intersection of logic, mathematics, and computation. I came across this after beating my head against Probability Theory: The Logic of Science and Axiomatic Theory of Economics over the weekend, realizing that my math just wasn't up to the tasks, and doing a Google search for "constructive real analysis." "Real analysis" because it was obviously what both of the aforementioned texts were relying on; "constructive" because I'd really like to develop proofs in Coq and extract working code from them. This paper was on the second page of results. Paul's name was familiar (and not just because I share it with him); he translated Jean-Yves Girard's regrettably out-of-print Proofs and Types into English and maintains a very popular set of tools for typesetting commutative diagrams in LaTeX.

Joe-E: A Security-Oriented Subset of Java

Joe-E: A Security-Oriented Subset of Java. Adrian Mettler, David Wagner, and Tyler Close. To appear at ISOC NDSS 2010.

We present Joe-E, a language designed to support the development of secure software systems. Joe-E is a subset of Java that makes it easier to architect and implement programs with strong security properties that can be checked during a security review. It enables programmers to apply the principle of least privilege to their programs; implement application-specific reference monitors that cannot be bypassed; introduce and use domain-specific security abstractions; safely execute and interact with untrusted code; and build secure, extensible systems. Joe-E demonstrates how it is possible to achieve the strong security properties of an object-capability language while retaining the features and feel of a mainstream object-oriented language...

Section 5.2 discusses how Joe-E leverages Java's static typing. Joe-E is implemented as a source-code verifier, not a bytecode verifier; Section 6 of the paper explains this design choice.

Joe-E was mentioned on LtU in the past and is available here.
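The object-capability discipline itself is easy to suggest outside Java. Here is a hypothetical OCaml sketch (not Joe-E code, and every name is mine) in which authority is an ordinary value, capabilities can be attenuated, and a reference monitor cannot be bypassed because the unguarded write is never handed out:

    (* A capability to a log: holding the record is holding the authority. *)
    type rw_log = { read : unit -> string; write : string -> unit }

    (* Attenuation: derive a weaker capability from a stronger one. *)
    let read_only (c : rw_log) = c.read

    (* An application-specific reference monitor: the only way a client
       can write is through this guard, which enforces the policy. *)
    let guarded (c : rw_log) ~allow =
      fun msg -> if allow msg then c.write msg else failwith "denied"

    let () =
      let buf = Buffer.create 16 in
      let log = { read  = (fun () -> Buffer.contents buf);
                  write = (fun s -> Buffer.add_string buf s) } in
      let w = guarded log ~allow:(fun s -> String.length s < 10) in
      w "ok";
      assert (read_only log () = "ok")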

Monads in Action

Monads in Action, Andrzej Filinski, POPL 2010.

In functional programming, monadic characterizations of computational effects are normally understood denotationally: they describe how an effectful program can be systematically expanded or translated into a larger, pure program, which can then be evaluated according to an effect-free semantics. Any effect-specific operations expressible in the monad are also given purely functional definitions, but these definitions are only directly executable in the context of an already translated program. This approach thus takes an inherently Church-style view of effects: the nominal meaning of every effectful term in the program depends crucially on its type.

We present here a complementary, operational view of monadic effects, in which an effect definition directly induces an imperative behavior of the new operations expressible in the monad. This behavior is formalized as additional operational rules for only the new constructs; it does not require any structural changes to the evaluation judgment. Specifically, we give a small-step operational semantics of a prototypical functional language supporting programmer-definable, layered effects, and show how this semantics naturally supports reasoning by familiar syntactic techniques, such as showing soundness of a Curry-style effect-type system by the progress+preservation method.
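The "Church-style" denotational reading in the first paragraph is the familiar one; a tiny OCaml state monad (standard material, not from the paper) shows what it means for effect operations to have purely functional definitions that are only executable in the context of an already translated program:

    (* The state monad: a computation is a pure state-passing function. *)
    type ('a, 's) state = 's -> 'a * 's

    let return x : ('a, 's) state = fun s -> (x, s)

    let ( >>= ) (m : ('a, 's) state) (f : 'a -> ('b, 's) state)
        : ('b, 's) state =
      fun s -> let (x, s') = m s in f x s'

    (* The effect-specific operations, also purely functional. *)
    let get : ('s, 's) state = fun s -> (s, s)
    let put s' : (unit, 's) state = fun _ -> ((), s')

    (* "increment the counter and return it", already expanded into
       pure code; running it means applying it to an initial state. *)
    let tick = get >>= fun n -> put (n + 1) >>= fun () -> return (n + 1)

    let () = assert (tick 0 = (1, 1))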

The idea of monadic reflection was one I never felt I really understood properly until I read this paper, so now I'll have to go back and re-read some of his older papers on the subject.

Delimited Control in OCaml, Abstractly and Concretely, System Description

Delimited Control in OCaml, Abstractly and Concretely, System Description

We describe the first implementation of multi-prompt delimited control operators in OCaml that is direct in that it captures only the needed part of the control stack. The implementation is a library that requires no changes to the OCaml compiler or run-time, so it is perfectly compatible with existing OCaml source code and byte-code. The library has been in fruitful practical use for four years.

We present the library as an implementation of an abstract machine derived by elaborating the definitional machine. The abstract view lets us distill a minimalistic API, scAPI, sufficient for implementing multi-prompt delimited control. We argue that a language system that supports exception and stack-overflow handling supports scAPI. Our library illustrates how to use scAPI to implement multi-prompt delimited control in a typed language. The approach is general and can be used to add multi-prompt delimited control to other existing language systems.
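A usage sketch, assuming the Delimcc module as found in the library's distribution (new_prompt, push_prompt, and shift):

    open Delimcc

    let () =
      let p = new_prompt () in
      (* shift captures the delimited context [fun v -> 1 + v] as k,
         so the answer is 1 + (1 + 10) = 12. *)
      let r = push_prompt p (fun () -> 1 + shift p (fun k -> k (k 10))) in
      assert (r = 12)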

Oleg was kind enough to send me an e-mail letting me know of this paper's existence (it appears not yet to be linked from the "Computation" page under which it is stored) and to include me in the acknowledgements. Since the paper in its current form has been accepted for publication, he indicated that it can be made more widely available, so here it is. In typical Oleg fashion, it offers insights at both the theoretical and implementation levels.

Verified Just-In-Time Compiler on x86

Verified Just-In-Time Compiler on x86
Magnus O. Myreen

This paper presents a method for creating formally correct just-in-time (JIT) compilers. The tractability of our approach is demonstrated through what we believe is the first verification of a JIT compiler with respect to a realistic semantics of self-modifying x86 machine code. Our semantics includes a model of the instruction cache. Two versions of the verified JIT compiler are presented: one generates all of the machine code at once, the other is incremental, i.e. produces code on demand. All proofs have been performed inside the HOL4 theorem prover.

(To appear in next week's POPL.)

I've been enjoying this paper on my commute this week. It's a nice little distillation of some of the basics of the engineering structure of a JITted language and how the pieces fit together in a correct implementation. As JIT compilers become more and more commonplace, I'd like to see them presented in such a way that they're no more scary or daunting -- at least in principle -- than traditional offline compilers. Perhaps a chapter in EoPL4?
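The incremental, on-demand structure is easy to suggest in miniature. Here is a toy OCaml sketch of that shape (compile on first call, then reuse the cached code), with no connection to the paper's verified x86 code generator:

    (* Defer compilation until the first call, then cache the result. *)
    let jit (compile : 'src -> ('a -> 'b)) (src : 'src) : 'a -> 'b =
      let code = ref None in
      fun x ->
        let f =
          match !code with
          | Some f -> f
          | None -> let f = compile src in code := Some f; f
        in
        f x

    let () =
      let square =
        jit (fun () -> print_endline "compiling"; fun n -> n * n) ()
      in
      ignore (square 3);       (* compiles on first call *)
      assert (square 4 = 16)   (* second call reuses the cached code *)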

Syntactic Proofs of Compositional Compiler Correctness

Syntactic Proofs of Compositional Compiler Correctness

Semantic preservation by compilers for higher-order languages can be verified using simple syntactic methods. At the heart of classic techniques are relations between source-level and target-level values. Unfortunately, these relations are specific to particular compilers, leading to correctness theorems that have nothing to say about linking programs with functions compiled by other compilers or written by hand in the target language. Theorems based on logical relations manage to avoid this problem, but at a cost: standard logical relations do not apply directly to programs with non-termination or impurity, and extensions to handle those features are relatively complicated, compared to the classical compiler verification literature.

In this paper, we present a new approach to “open” compiler correctness theorems that is “syntactic” in the sense that the core relations do not refer to semantics. Though the technique is much more elementary than previous proposals, it scales up nicely to realistic languages. In particular, untyped and impure programs may be handled simply, while previous work has addressed neither in this context.

Our approach is based on the observation that it is an unnecessary handicap to consider proofs as black boxes. We identify some theorem-specific proof skeletons, such that we can define an algebra of nondeterministic compilations and their proofs, and we can compose any two compilations to produce a correct-by-construction result. We have prototyped these ideas with a Coq implementation of multiple CPS translations for an untyped Mini-ML source language with recursive functions, sums, products, mutable references, and exceptions.
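The shape of composing compilations together with their correctness evidence can be suggested in OCaml. In the paper the evidence is a genuine Coq proof; the boolean field below is only a runtime stand-in, and the whole sketch is mine, not the paper's:

    (* A pass packaged with the source/target relation it preserves. *)
    type ('s, 't) pass = {
      compile : 's -> 't;
      related : 's -> 't -> bool;   (* stand-in for the correctness theorem *)
    }

    (* Composing two passes composes their compilations and chains their
       relations through the intermediate program. *)
    let compose (p1 : ('a, 'b) pass) (p2 : ('b, 'c) pass) : ('a, 'c) pass =
      { compile = (fun s -> p2.compile (p1.compile s));
        related =
          (fun s t ->
             let m = p1.compile s in
             p1.related s m && p2.related m t) }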

A submitted draft of another paper from Adam, continuing to expand LambdaTamer's reach.

A Verified Compiler for an Impure Functional Language

A Verified Compiler for an Impure Functional Language

We present a verified compiler to an idealized assembly language from a small, untyped functional language with mutable references and exceptions. The compiler is programmed in the Coq proof assistant and has a proof of total correctness with respect to big-step operational semantics for the source and target languages. Compilation is staged and includes standard phases like translation to continuation-passing style and closure conversion, as well as a common subexpression elimination optimization. In this work, our focus has been on discovering and using techniques that make our proofs easy to engineer and maintain. While most programming language work with proof assistants uses very manual proof styles, all of our proofs are implemented as adaptive programs in Coq's tactic language, making it possible to reuse proofs unchanged as new language features are added.

In this paper, we focus especially on phases of compilation that rearrange the structure of syntax with nested variable binders. That aspect has been a key challenge area in past compiler verification projects, with much more effort expended in the statement and proof of binder-related lemmas than is found in standard pencil-and-paper proofs. We show how to exploit the representation technique of parametric higher-order abstract syntax to avoid the need to prove any of the usual lemmas about binder manipulation, often leading to proofs that are actually shorter than their pencil-and-paper analogues. Our strategy is based on a new approach to encoding operational semantics which delegates all concerns about substitution to the meta language, without using features incompatible with general purpose type theories like Coq's logic.
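For readers unfamiliar with the representation, here is a minimal OCaml sketch of parametric higher-order abstract syntax (standard PHOAS, not the paper's Coq development): binders are meta-language functions, so the evaluator needs no substitution machinery at all.

    (* Terms are parametric in the type 'v of variables. *)
    type 'v term =
      | Var of 'v
      | Lam of ('v -> 'v term)
      | App of 'v term * 'v term

    (* An evaluator instantiates 'v with the value type. *)
    type value = VFun of (value -> value)

    let rec eval : value term -> value = function
      | Var v -> v
      | Lam f -> VFun (fun x -> eval (f x))
      | App (t1, t2) ->
          let VFun f = eval t1 in
          f (eval t2)

    (* (fun x -> x) applied to itself: no substitution lemmas needed. *)
    let id = Lam (fun x -> Var x)
    let () = ignore (eval (App (id, id)))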

Further work on/with LambdaTamer for certified compiler development.
