Tagless Staged Interpreters for Simpler Typed Languages

Finally Tagless, Partially Evaluated: Tagless Staged Interpreters for Simpler Typed Languages.
Jacques Carette, Oleg Kiselyov, and Chung-chieh Shan.

We have built the first family of tagless interpretations for a higher-order typed object language in a typed metalanguage (Haskell or ML) that require no dependent types, generalized algebraic data types, or postprocessing to eliminate tags. The statically type-preserving interpretations include an evaluator, a compiler (or staged evaluator), a partial evaluator, and call-by-name and call-by-value CPS transformers.
Our main idea is to encode HOAS using cogen functions rather than data constructors. In other words, we represent object terms not in an initial algebra but using the coalgebraic structure of the lambda calculus. Our representation also simulates inductive maps from types to types, which are required for typed partial evaluation and CPS transformations.

Oleg explains:

It seems to be common wisdom that embedding a typed object language (e.g., a DSL) into a typed meta-language, so that all and only typed object terms can be represented, requires dependent types, GADTs, or other such advanced type systems. In fact, this problem of writing (tagless) type-preserving typed interpreters has been the motivation for most of the papers on GADTs. We show that regardless of the merits and conveniences of GADTs, type-preserving typed interpretation can be achieved with no GADTs whatsoever, using the very simple type systems of ML or Haskell 98. We also show that the same approach lets us perform statically type-preserving partial evaluation and call-by-value or call-by-name CPS transformations. The latter transformations, too, are often claimed to be impossible in Haskell 98 or ML, requiring instead quite advanced type systems or language features.

The complete (Meta)OCaml and Haskell code accompanying the paper is
available (see readme).

One of the features of our approach is writing the DSL code in a form that can be interpreted in multiple ways. Recently we have become aware that the very same approach underlies `abstract categorial grammars' (ACG) in linguistics. Chung-chieh Shan has written an extensive article on this correspondence. That post itself can be interpreted in several ways: the file can be read as plain text, or it can be loaded as-is into a Haskell or OCaml interpreter.

It should be noted that the linguistic terms `tectogrammatics' and `phenogrammatics' were coined by none other than Haskell Curry, in his famous 1961 paper `Some Logical Aspects of Grammatical Structure'. The summary of the ESSLLI workshop describes further connections to linear lambda calculus.

The paper has been accepted for APLAS; the authors would indeed appreciate any comments.


Can that be extended to non-domain specific lang.

So this is for type-safe DSLs? Does it impose constraints on the DSL itself? Like syntax, constructs, interpretation... I see that it is
(* In CPS, as it was originally designed. It doesn't seem to matter though *)

Finally, can the same approach be used for a non-domain-specific language, removing the need for an advanced type system, or is it the equivalent of adding an indirection on top of the host language's type system?

DSLs vs non-DSLs

I don't know of a hard line to draw between DSLs and non-DSLs. The prototypical DSL we use in our paper is essentially the lambda calculus. The approach may or may not "remove the need for an advanced type system" in the host language, depending on how advanced the type system of the object language is -- in general, the host language's type system only needs to be a little more advanced than the object language's, because the "tagless final" way doesn't add much indirection on top of the host language's type system. For example, if the object language is the simply typed lambda calculus, then the host language can be just Hindley-Milner, as the sketch below illustrates.
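
To make that concrete, here is a minimal sketch in the spirit of the paper's Symantics class (cut down to four methods; this is not the full class from the paper): the simply typed lambda calculus embedded as a constructor class, with a tagless evaluator that needs nothing beyond Hindley-Milner inference plus Haskell 98 type classes.

class Symantics repr where
  int :: Int -> repr Int
  add :: repr Int -> repr Int -> repr Int
  lam :: (repr a -> repr b) -> repr (a -> b)
  app :: repr (a -> b) -> repr a -> repr b

-- The evaluator: no tags and no run-time pattern matching.
newtype R a = R { unR :: a }
instance Symantics R where
  int             = R
  add (R x) (R y) = R (x + y)
  lam f           = R (unR . f . R)
  app (R f) (R x) = R (f x)

-- An object term, typechecked by the host's Hindley-Milner inference
test :: Symantics repr => repr Int
test = app (lam (\x -> add x x)) (int 21)   -- unR test == 42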

Can this approach be

Can this approach be meta-circular, i.e., encode an object language whose type system is equivalent in power to the host language's? Or does the host language's type system need to be strictly more powerful? I suspect the latter.

Tagless typed self-interpretation

naasking wrote:
Can this approach be meta-circular, i.e., encode an object language whose type system is equivalent in power to the host language's?
That is indeed a very interesting question! It appears the answer is yes and no. We saw that the availability of type constructors (of the kind * -> *) was crucial for the abstraction over interpreters. The generator of abstract interpreters needs to encode that type constructor, which requires higher kinds (i.e., constructors of type constructors), and so on. It seems we can't get by without kind polymorphism. OTOH, if we assume a sort of let-abstraction, things work out, embarrassingly simply. The full paper (referenced from Chung-chieh Shan's blog) has a special section on self-interpretation. Hopefully it shows a useful view of polymorphic let in modern functional programming languages.

Self-interpretation->staging->dynamic linking?

Assuming the appropriate self-interpretation, could this approach expose the host language's compiler itself as a runtime module, and compile dynamic expressions at runtime à la staged computation? That would be quite a useful advancement, as it could express dynamic linking within any statically typed host language of sufficient power via a JITed bytecode (or directly from the source code). I think this would require that all language expressions be first-class, i.e., first-class modules.

The problem to be solved?

With my understanding of the original problem (static validation of manipulation of the syntax of statically typed object languages), this approach seems to have a serious problem: the representation of an object program is not a first-class value. For instance, you couldn't read a program from a file at run time. In OCaml, a program is a functor, and functors are "purely compile-time" things. In Haskell 98, a program is represented with a type beginning with a type-class quantifier, and such quantifiers are only allowed at the very beginnings of the types of top-level values (or something like that, right?). As I understand it, it's not even possible to define a type synonym or other Haskell entity referenced by a single identifier that says what it is, type-wise, to be a program.

Am I missing an encoding trick for one or both languages that would permit run-time parsing of this kind of program, or is this somehow outside the scope of the work?

The parsing/typechecking problem

Adam Chlipala wrote:

With my understanding of the original problem (static validation of
manipulation of the syntax of statically-typed object languages), this
approach seems to have a serious problem: The representation of an
object program is not a first-class value.

The last sentence of Section 1.0 (the paragraph before Section 1.1)
discusses that very concern. We were seriously short on space and had
to abbreviate as much as we could (and then some more).

To answer your question: the problem of reading a term of the object
language from a file or a string and parsing it (doing typechecking
to make sure the resulting term is well-typed in the object language)
is essentially a solved problem, at least in Haskell. We refer to
`typing dynamic typing'. More precisely, the problem of
parsing/typechecking into typed object terms was discussed on the
Haskell mailing list about a year ago:

http://www.haskell.org/pipermail/haskell-cafe/2006-October/019160.html
http://www.haskell.org/pipermail/haskell-cafe/2006-October/019161.html
http://www.haskell.org/pipermail/haskell-cafe/2006-November/019193.html

In those examples, typed object terms are represented via GADTs, but
that is immaterial, and the Symantics encoding could be used just as
well.

That typechecking solution is indeed not portable to SML because
functors and structures are `purely compile-time things'. However,
there are ML dialects (I think MoscowML and AliceML) where structures
are first-class.

It should be noted that most of the papers on embedding a typed
language into a typed metalanguage (e.g., the works of Hongwei Xi,
Simon Peyton Jones, Morten Rhiger, etc.) consider just the
representation problem rather than the typechecking problem. The only
exception is the paper by Pasalic, Taha, and Sheard (ICFP 2002), which
deals with typechecking as well.

The type of a parser?

Thanks for the quick and thorough reply! I think I didn't express my question properly. The following should elucidate it:

What is the type of the AST that the parser returns? Doesn't it have to begin with a type-class quantifier, and aren't those restricted in a similar way to normal type-variable quantifiers in Haskell 98? You would want a parser like:

parse :: String -> (Symantics a => something(a))

...but isn't there a restriction on type class quantifications that forces it to be

parse :: Symantics a => String -> something(a)

? That is, the Symantics you want to use must be decided at parsing time, rather than leaving that choice to later, when it may be resolved in multiple different ways.

My question really isn't specific to parsing or type-checking. It has to do with typing functions that manipulate ASTs, such that their results can be used with multiple Symantics interpretations later on.

The type of a parser?

Adam Chlipala wrote:

You would want a parser like:

parse :: String -> (Symantics a => something(a))

...but isn't there a restriction on type class
quantifications that forces it to be

parse :: Symantics a => String -> something(a)

? That is, the Symantics you want to use must be decided at
parsing time, rather than leaving that choice to later, when it may be
resolved in multiple different ways.

A very good question! One should note that the argument of the
Symantics type class has the kind * -> *. In other words,
Symantics is a constructor class. That is crucial to the abstraction
over interpreters. The type of the interpreted term
(repr a) can be thought of as `made' of two parts:
repr depends on the interpreter and a
depends on the object term. It is this partitioning that lets us
typecheck a term separately from its interpreter (and typecheck the
interpreter separately from its terms).

So, the type of parse will be something of the form

parse :: Symantics repr => String -> (forall a. repr a -> w) -> w

Or, to be precise, we can define the following existential, meant to
be the result of the parsing:

data DynTerm repr = forall a. Show a => DynTerm (repr a)

where the constraint Show lets us see the eventual result of the
interpretation.

We abstract over the type of the object term (which we don't know and
will determine as we parse); yet we do not commit to any particular
interpretation. We can interpret the same DynTerm using a variety of
interpreters/compilers. The end of the file incope.hs in the updated
tagless-final.tar.gz archive demonstrates exactly that multitude of
interpretations of the same term, using the interpreters R
(the familiar interpreter), L (size computation), C (byte-compiler),
and P (partial evaluator).
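
To see this in action, here is a minimal self-contained sketch (not the paper's incope.hs: the Symantics class is cut down to two methods, and the L size measure is made up for the example) in which the same existentially wrapped term is run under two different interpreters.

{-# LANGUAGE ExistentialQuantification #-}

class Symantics repr where
  int :: Int -> repr Int
  add :: repr Int -> repr Int -> repr Int

-- Existential wrapper: the term's type is hidden, the interpreter is not.
data DynTerm repr = forall a. Show a => DynTerm (repr a)

newtype R a = R { unR :: a }        -- the familiar evaluator
instance Symantics R where
  int = R
  add (R x) (R y) = R (x + y)

newtype L a = L { unL :: Int }      -- size computation
instance Symantics L where
  int _ = L 1
  add (L x) (L y) = L (x + y + 1)

-- What a parser might return: polymorphic in repr, so the choice of
-- interpreter is left to the consumer.
parsed :: Symantics repr => DynTerm repr
parsed = DynTerm (add (int 1) (int 2))

main :: IO ()
main = do
  case (parsed :: DynTerm R) of DynTerm t -> print (unR t)   -- 3
  case (parsed :: DynTerm L) of DynTerm t -> print (unL t)   -- 3 (size)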

Explicit "forall"?

Isn't the explicit "forall" a GHC extension? Can you do this in Haskell 98? Since you already said you don't know of an analogous technique for dynamic representation of terms in vanilla OCaml, does this mean that, while your technique works in generic Haskell 98 and OCaml, it can be used for dynamically built terms only with extensions to those languages? (I think most readers would assume that a technique for manipulating programs works for more than just a fixed set of programs entered in the source code of the program manipulator. I interpreted your comment on this in the paper as saying that you thought it was possible but didn't have space to go into it, not that the technique precluded it.)

Explicit "forall" is a

Explicit "forall" is a common extension (Hugs supports it as well, and I believe YHC is planning to) that's pretty certain to make it into Haskell', but yeah, it's not Haskell 98. Same goes for dynamics, though they're a far more common extension because they only really require extending the runtime rather than the type system.

Interpreter or Compiler?

Here's perhaps another way to look at the issue raised by Adam.
It looks like the paper presents a compiler, not an interpreter.
The compiler looks like the standard higher-order compiler that
corresponds to the typical denotational semantics of a functional
language. On what basis do the authors argue that this is an
interpreter and not a compiler? After all, writing a
compiler that produces tagless code is not an open problem.

staging as dynamic linking

naasking wrote: ``could this approach expose the host language's
compiler itself as a runtime module, and compile dynamic expressions
at runtime ala staged computation?''

I'm not sure that I fully understand the question. I should say that
MetaOCaml already does all this. When you compile staged code to
native code via metaocamlopt, you link the result against the 3.5MB
library metanative.a -- which contains the whole OCaml compiler. If
you list the contents of that archive, you'll see the names lexer.o,
parser.o, ctype.o (the typechecker), cmmgen.o (generator to C--),
spill.o, etc. Running the staged code is implemented by invoking
the compiler, at run time, to compile the code to assembly, make a
shared object, and link it into the running program. Likewise,
`offshoring' in the bytecode version compiles the OCaml code value to
C code, invokes gcc to make a shared object, and links it back
in. Staging ensures that compilation will produce no typing errors
and that dynamic linking will not encounter any unresolved references.

I'll try to clarify

I'll try to clarify: this paper is about embedding a language in a statically typed host language. If we're self-interpreting, the host language is the one being embedded. This is similar to staging, though perhaps not as convenient, correct?

Using an algebraic representation of the term language, as is usually done, one uses an "eval" function to evaluate a constructed expression. I can similarly imagine a "compile" function which directly compiles the expression tree to remove all the interpretive overhead; for instance, .NET compiles such expression trees in LINQ.
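
For contrast, here is a minimal sketch of the algebraic (initial) representation being described, with the usual eval that dispatches on constructor tags at run time (toy constructor names, not from any particular library):

data Exp = IntE Int
         | AddE Exp Exp

-- The interpretive overhead: each step dispatches on a constructor tag.
eval :: Exp -> Int
eval (IntE n)   = n
eval (AddE a b) = eval a + eval b

-- eval (AddE (IntE 1) (IntE 2))  ==  3

A "compile" function in this setting would traverse the same tree once and emit code; the tagless-final analogue is choosing a compiling interpretation such as the paper's C.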

So imagine some small bytecode language whose VM provides a basic JIT, but no dynamic loading facilities beyond a basic file read/write interface. Dynamic code loading can then be provided from within the language as a module which translates this bytecode into native terms using your Symantics module(s) [1].

I think that even in your tagless approach, there is still some small overhead in the constructed term, as each host-language value is actually built by invoking a function. I was wondering whether there can be a similar "compile" function for this paper's term representation. Or perhaps the partial-evaluation Symantics module takes care of inlining all such small functions, so the overhead is nonexistent? Although if I'm assuming the partial-eval module, the language is already multistaged, so perhaps there's a simpler way to accomplish this.

On an unrelated note, I wasn't aware that MetaOCaml used C--. I hadn't seen much C-- activity lately, and thought that perhaps it wasn't being actively developed any longer.

[1] The point is to move as much as possible into the safer host language, rather than providing VM facilities which are invariably written in unsafe languages like C/C++.

C--

On an unrelated note, I wasn't aware that MetaOCaml used C--. I hadn't seen much C-- activity lately, and thought that perhaps it wasn't being actively developed any longer.

There was a status report at the end of 2006 which indicated some uncertainty about the future. I don't know what's happened since then, although there are mentions of it in places like the GHC wiki's page about its new code generator.

Final clarification

Final clarification: I'm assuming here that the language is not multistaged, which means that your C Symantics module is not expressible [1]. I'm suggesting the C module be provided by the runtime, which builds a representation for the machine code emitter. Essentially, this adds multistaging capabilities without extending the language and complicating the type system, by reifying the JIT/compiler as a module.

[1] I'm currently reading Implementing Multi-stage Languages Using ASTs, Gensym, and Reflection, and multistaging adds considerable complexity to a language.

Dynamic loading and staging

Sorry for the late reply: I was out of the country, attending APLAS in fact. The conference turned out to be quite interesting, perhaps more interesting than ICFP. It was the only conference I have attended with two talks on CBN-CBV duality. The conference web site has pictures of all the talks and all the posters. The slides of Ken's talk (besides
those captured in the pictures) are also available.

To answer your questions: indeed, staging is very complex. It is
possible nevertheless to represent staging (including stage
distinctions and the manipulation of `open' code) in an unstaged
language. The price to pay is the number of administrative
redexes. One may consider genuine staging to be the elimination of
those redexes. It is also constructively possible (that is, we have
the code) to `load' the untyped bytecode, typecheck it once, and then
interpret it several times. Typechecking may of course report an
error; the result of typechecking (`compiled' code) is assuredly
well-typed, and its evaluation will never give any pattern-match
error, because there is no pattern matching or any tags for that
matter. This is essentially `Typing Dynamic Typing'. I was surprised
to see how many people actually wanted to see that result in the
Symantics framework. Perhaps it's better to discuss these issues in a
separate thread...

Administrative redexes -> constant propagation

To answer your questions: indeed, staging is very complex. It is possible nevertheless to represent staging (including stage distinctions and the manipulation of `open' code) in an unstaged language. The price to pay is the number of administrative redexes.

Which I believe can be removed by traditional compiler optimizations when JITting the resulting expression (constant propagation, etc.); coupled with the optimizations already performed by partial evaluation, I believe this reduces the need for some sophisticated optimization passes.

Assuming the language is capable of self-interpretation, multistaging becomes a source-to-source transformation using your Symantics interface.

How much can be done in the language?

Assuming the language is capable of self-interpretation, multistaging becomes a source-to-source transformation using your Symantics interface.

This leads me to wonder: how much of the compiler pipeline can be embedded in the host language via a series of Symantics modules? How many of a compiler's IRs can be represented and plugged together, and which pipeline stage must escape the host language to unsafely "cast" the constructed data to a function/data pointer? The more powerful the host language, the closer one could get to a typed equivalent of the host machine's assembly. I'm sure a typed assembly language written to a byte array is beyond a language like OCaml, but I wonder how far off it is.

Updated link

Finally Tagless, Partially Evaluated: Tagless Staged Interpreters for Simpler Typed Languages.

[Edit: it seems this version does not extensively discuss self-interpretation or abstraction over evaluation style, i.e., CPS]

Check the Journal version

There just wasn't enough room to discuss all of that in the APLAS version. The journal version has extensive discussion of evaluation styles and a few other goodies.

A preliminary version (i.e., the one we submitted) also had a discussion of self-interpretation. Unfortunately, we don't know how to get it to type in an `elegant' way; what we'd implemented worked, but required the user to manually insert let bindings where one might otherwise expect functions to work. The problem is that a self-interpreter seems to need more polymorphism than is available in either Haskell or OCaml. In the end, we simply cut it out of the final version entirely.

You can download the preprint version of the JFP paper (the final version is typeset much more nicely by CUP, but they have not posted it online yet; I expect that to happen very soon).

Thanks for the link. There

Thanks for the link. There seem to be quite a few versions floating around. Thankfully, I had saved one with the self-interpretation section intact, but it's still shorter than the JFP version you just linked to, so I'm looking forward to going through this new one to see what else was added.

The problem is that a self-interpreter seems to need more polymorphism than is available in either Haskell or OCaml. In the end, we simply cut it out of the final version entirely.

Any thoughts on specifically what kind of additional polymorphism is required?

Required polymorphism

The issue is that if you have polymorphism in your object language, you need rank-2 polymorphism in your meta-language. But for self-interpretation, the object and meta-language have to have the same features, which means that you need rank-2 polymorphism in your object language, which requires rank-3, then rank-4, etc.

Another solution is to use let-polymorphism to bind each part of the language; this basically lets type resolution happen 'in context' instead of globally, thereby requiring less explicit polymorphism. One can see this as having a "context with a hole": you plug your program into that hole and *then* type the result. Using 'let' to create such a 'context with a hole' is sufficient to get the needed polymorphism, as the sketch below illustrates.
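
Here is a minimal self-contained sketch of both points (the two-method Symantics and the toy R and L interpreters are made up for the example; this is not the paper's code):

{-# LANGUAGE RankNTypes #-}

class Symantics repr where
  int :: Int -> repr Int
  add :: repr Int -> repr Int -> repr Int

newtype R a = R { unR :: a }      -- evaluator
instance Symantics R where
  int = R
  add (R x) (R y) = R (x + y)

newtype L a = L { unL :: Int }    -- size computation
instance Symantics L where
  int _ = L 1
  add (L x) (L y) = L (x + y + 1)

-- Interpreting a term received as a function argument under two
-- different interpreters forces a rank-2 type on that argument:
useTwice :: (forall repr. Symantics repr => repr Int) -> (Int, Int)
useTwice t = (unR t, unL t)
-- useTwice (add (int 1) (int 2))  ==  (3, 3)

-- A let binding is the `context with a hole': the bound term stays
-- polymorphic and can be used at both interpreter types without a
-- rank-2 argument. (The local signature merely sidesteps Haskell's
-- monomorphism restriction.)
example :: (Int, Int)
example = let t :: Symantics repr => repr Int
              t = add (int 1) (int 2)
          in (unR t, unL t)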

Translated to C#

I implemented the final interpreters in C#. I haven't quite gotten the partial evaluator to work yet, unfortunately. The problem lies with recursion: while I've figured out why it's not working in my code, I haven't quite figured out how to translate the way it's supposed to work as described in these papers. I also went a step further and implemented a JavaScript backend. See the post for details.

New Link

Finally Tagless, Partially Evaluated: Tagless Staged Interpreters for Simpler Typed Languages. Jacques Carette, Oleg Kiselyov, and Chung-chieh Shan. [also here] [and here]

The OP link now displays a 404. Ehud, can you fix that?

2 different (related) papers

The links are to two different versions: a shorter paper that appeared in the proceedings of APLAS, and a much longer version with lots more material (but not everything we had!) in JFP. We need to get back to the partial-evaluation ideas, which are in the code but not really fully explained even in the longer JFP paper.

One would think different

One would think different titles would be appropriate for different papers...

Pretty common

One would think different titles would be appropriate for different papers...

To be fair to Jacques, Oleg and Chung-chieh (Ken), long-time LtUers all, this practice is pretty common with academic papers.

Papers with the same name are often published in successive versions, sometimes with major revisions.

Version numbers might be nice (like software), but having to come up with similar but different names for each one is probably too much to ask.

Also extended journal

Also, extended journal versions of conference papers will usually have the same names, although journal versions naturally cover more content.

Simplified tagless-final type-checker

I have recently had the opportunity to revisit the code that de-serializes an expression from an on-the-wire format into the tagless-final representation. The de-serialization process involves typechecking. The new code uses de Bruijn indices, which simplify the code significantly. De Bruijn indices are used only in the intermediate representation and are not visible to the user; the (untyped) source code uses concrete, symbolic variable names.

The previous version of the code relied on the type representation and generalized cast provided by Haskell's Data.Typeable. In this version, that dependence is eliminated. The code implements its own version of Data.Typeable, in pure, above-board Haskell with no dependence on GHC primitives. The language of types is treated as an embedded DSL, represented again in the tagless-final style. As a `side effect', the code shows how to compare typed terms in the tagless-final style.

http://okmij.org/ftp/tagless-final/course/course.html#type-checking
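
For a flavor of a de Bruijn-indexed tagless-final representation (a minimal sketch with made-up names, cut down to a few methods; see the linked course code for the real thing): the representation type is indexed by the environment type as well as the term type, and variables are projections out of a nested-pair environment.

-- Tagless-final with de Bruijn indices: repr is indexed by the
-- environment h (a nested pair of variable values) and the type a.
class SymDB repr where
  vz  :: repr (a, h) a                    -- variable with index 0
  vs  :: repr h a -> repr (b, h) a        -- weakening: index + 1
  lam :: repr (a, h) b -> repr h (a -> b)
  app :: repr h (a -> b) -> repr h a -> repr h b
  int :: Int -> repr h Int

-- The evaluator: a term denotes a function from its environment.
newtype RDB h a = RDB { unRDB :: h -> a }
instance SymDB RDB where
  vz      = RDB fst
  vs v    = RDB (unRDB v . snd)
  lam e   = RDB (\h a -> unRDB e (a, h))
  app f x = RDB (\h -> unRDB f h (unRDB x h))
  int n   = RDB (const n)

-- (\x -> x) applied to 7; a closed term, so the environment is ()
example :: Int
example = unRDB (app (lam vz) (int 7)) ()

An ill-scoped or ill-typed index simply fails to typecheck in the host language, which is what makes this a convenient intermediate form for a type-checking de-serializer.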