A functional correspondence between evaluators and abstract machines

A functional correspondence between evaluators and abstract machines by Mads Sig Ager, Dariusz Biernacki, Olivier Danvy, and Jan Midtgaard, 2003.

We bridge the gap between functional evaluators and abstract machines for the lambda-calculus, using closure conversion, transformation into continuation-passing style, and defunctionalization of continuations.
We illustrate this bridge by deriving Krivine's abstract machine from an ordinary call-by-name evaluator and by deriving an ordinary call-by-value evaluator from Felleisen et al.'s CEK machine. The first derivation is strikingly simpler than what can be found in the literature. The second one is new. Together, they show that Krivine's abstract machine and the CEK machine correspond to the call-by-name and call-by-value facets of an ordinary evaluator for the lambda-calculus.
We then reveal the denotational content of Hannan and Miller's CLS machine and of Landin's SECD machine. We formally compare the corresponding evaluators and we illustrate some relative degrees of freedom in the design spaces of evaluators and of abstract machines for the lambda-calculus with computational effects.
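
For readers new to the correspondence, here is a minimal OCaml sketch of its two endpoints (my own illustration, not the paper's code; the de Bruijn representation and the names eval, run, and Clo are choices made for the sketch). The first program is an ordinary call-by-name evaluator; the second is the Krivine-style first-order machine that closure conversion, CPS transformation, and defunctionalization of continuations lead to, with the argument stack playing the role of the defunctionalized continuation.

    (* Lambda terms with de Bruijn indices. *)
    type term = Var of int | Lam of term | App of term * term

    (* Higher-order, call-by-name evaluator: arguments are passed as thunks. *)
    type value = Fun of (thunk -> value)
    and thunk = unit -> value

    let rec eval (t : term) (env : thunk list) : value =
      match t with
      | Var n -> (List.nth env n) ()                      (* force the thunk *)
      | Lam b -> Fun (fun th -> eval b (th :: env))
      | App (t0, t1) ->
          let (Fun f) = eval t0 env in
          f (fun () -> eval t1 env)                       (* delay the argument *)

    (* Derived first-order machine: thunks have become closures, and the
       defunctionalized continuation is a stack of unevaluated arguments. *)
    type closure = Clo of term * closure list

    let rec run (Clo (t, env)) (stack : closure list) : closure =
      match t, stack with
      | Var n, _          -> run (List.nth env n) stack
      | Lam b, c :: rest  -> run (Clo (b, c :: env)) rest
      | Lam _, []         -> Clo (t, env)                 (* final answer *)
      | App (t0, t1), _   -> run (Clo (t0, env)) (Clo (t1, env) :: stack)

    let id = Lam (Var 0)
    let _ = run (Clo (App (id, id), [])) []               (* yields Clo (id, []) *)

The point of the derivation is that the second program is not designed from scratch but calculated from the first by the three transformations named in the abstract.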

I was surprised not to find this paper featured in a story on LtU, as it looks very important from both an implementation and an understanding point of view.
See also a more recent paper by Dariusz Biernacki and Olivier Danvy, From Interpreter to Logic Engine by Defunctionalization, which applies similar ideas in the context of a Prolog-like language.

I mentioned it a long time

I mentioned it a long time ago. I'm sure I've brought it up at other times too, but probably with a general reference to Danvy's BRICS page, e.g. this. There are about eight papers in this vein. But yes, it's well deserving of a story, and I'm not sure why I never posted one.

Right

Actually, I should have used Google instead of the built-in LtU search, as that immediately gave me this, for example.

I found the derivation of a

I found the derivation of a virtual machine and compiler from an interpreter, as summarized here, particularly interesting. It seems most such derived virtual machines are inherently stack-based. How does the stack-based instruction language of the derived virtual machine compare to the source terms of the abstract machine for compilation purposes? Consider a bytecode representation of the abstract machine's source terms versus a bytecode representation of the virtual machine's instruction set: I'm wondering which would be more suitable for runtime optimization and compilation.

I know there's lots of literature on efficiently compiling stack languages, but I haven't found any comparisons of a stack machine instruction set against a higher-level abstract machine representation.
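
To make the two representations concrete, here is a hypothetical OCaml sketch for a tiny arithmetic fragment (the types exp and instr and the functions compile and exec are illustrative, not taken from the papers): tree-shaped source terms, which an abstract machine can interpret directly, versus the flat instruction list that a derived, stack-based virtual machine executes.

    type exp = Lit of int | Add of exp * exp

    (* The derived VM's instruction set: everything goes through an operand stack. *)
    type instr = Push of int | AddTop

    (* Compilation flattens the tree into postorder: operands first, operator last. *)
    let rec compile (e : exp) : instr list =
      match e with
      | Lit n -> [Push n]
      | Add (a, b) -> compile a @ compile b @ [AddTop]

    let rec exec (code : instr list) (stack : int list) : int =
      match code, stack with
      | [], [result]                     -> result
      | Push n :: rest, _                -> exec rest (n :: stack)
      | AddTop :: rest, x :: y :: stack' -> exec rest ((x + y) :: stack')
      | _                                -> failwith "ill-formed code"

    let _ = exec (compile (Add (Lit 1, Add (Lit 2, Lit 3)))) []   (* = 6 *)

The tree form keeps nesting explicit, which is convenient for rewriting and type-directed optimization; the flat form trades that structure for a linear, easily serialized encoding whose evaluation order is already fixed, which is what makes it attractive as a bytecode.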

Also, the page linked above points to a paper, Deriving Compilers and Virtual Machines for a Multi-Level Language, which extends the VM derivation with multi-stage capabilities. Very interesting work.

It's wired in

I believe the reason for ending up with a stack language so often is the way environments are usually represented and manipulated in such interpreters. Compare this with the work of Atsushi Ohori, in particular "Register Allocation by Proof Transformation", for how one might end up with a register-based machine instead.
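
One way to see the stack appear "for free" (a sketch of my own, not Ohori's development and not the paper's code): defunctionalize the continuations of even a tiny CPS evaluator and the resulting frame type has exactly one continuation per constructor, i.e. it is isomorphic to a list, so the derived machine comes with a control/operand stack built in.

    type exp = Lit of int | Add of exp * exp

    (* Defunctionalized continuations: each frame records the work remaining
       and carries exactly one smaller continuation -- a stack of frames. *)
    type cont =
      | Done
      | AddRight of exp * cont        (* still have to evaluate the right operand *)
      | AddLeft  of int * cont        (* the left operand's value is waiting *)

    let rec eval (e : exp) (k : cont) : int =
      match e with
      | Lit n -> apply k n
      | Add (a, b) -> eval a (AddRight (b, k))

    and apply (k : cont) (v : int) : int =
      match k with
      | Done -> v
      | AddRight (b, k') -> eval b (AddLeft (v, k'))
      | AddLeft (u, k')  -> apply k' (u + v)

    let _ = eval (Add (Lit 1, Add (Lit 2, Lit 3))) Done   (* = 6 *)

In the lambda-calculus evaluators the frames additionally carry environments, which is where the environment representation mentioned above feeds into the shape of the resulting machine.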

Interesting links! I'll

Interesting links! I'll definitely enjoy ploughing through those. I'm constantly amazed at the sophisticated and useful applications of expressive type systems, from proof-carrying code, to correctness-preserving optimizations, to type-based SSA, to register allocation.

Unfortunately, I still haven't found anything discussing compilation of a typed abstract syntax tree versus compilation for a stack-based virtual machine. I take your point that the stack falls naturally out of the way environments are manipulated. I have a vague compilation model in mind for my AST, so I'll just have to go through the stack-machine literature and see how it compares.

[Edit: what do other languages, e.g. OCaml, SML, Haskell, use for their IR? I know OCaml's bytecode is stack-based, but I don't recall whether it was a fully derived virtual machine or just an abstract machine.]