Macros/Syntax vs. AST manipulation

The Lisp family has macros that allow you to generate new code by manipulating programs, with S-expressions as the representation. For many uses, this is what makes them hard to work with: manipulating programs in such a low-level form is difficult. Looking at Stratego/XT, the authors clearly indicate that performing program manipulation directly on ATerms (Stratego's underlying, low-level term representation, analogous to S-expression data) is difficult and undesirable; Stratego has mechanisms for dealing with program manipulation in a more natural way.

My reading of Stratego is that the authors allow the direct use of program fragments, written in the language being manipulated, as a short form for the underlying ATerms.

What this boils down to is this: What is the best representation of a program to use, when the program must be manipulated/transformed? Are different representations appropriate for runtime manipulation (a program manipulating itself) vs. build-time manipulation?

Lisp chooses S-expressions. Java programs can be transformed at compile time or at runtime by bytecode manipulation -- aspect-oriented programming can be used to solve problems similar to those macros are generally applied to. C++ performs build-time transformation via its template mechanisms. Scala provides flexible, extensible syntax that relies on efficient compilation to deliver transformation-like behavior. Boo (a .NET language) provides, as part of its standard library/compiler, the ability to modify the AST of the program; there is no inconvenient "detour" through syntax.

This leads me to these thoughts:

1. To the extent it can, the type system of a language should provide as much transformational power (templates/type parameters) as possible. The transformational power of a type system, it seems to me, is just as important as type safety/correctness.
2. Syntax extensions can result in highly readable code, and as such provide oft-needed flexibility. A good test here is building syntax for array-language operators: does the code "look" as natural as that for atomic values? Does it perform well? (A small sketch follows this list.)
3. Programs increasingly operate in an environment of fluid metadata, and must be able to adapt. Therefore, transformation at runtime is highly desirable.
4. Completeness dictates that transformations should be able to construct other transformations. This necessitates runtime transformations.
5. Losing the transformative and corrective powers of type systems seems undesirable when constructing transformations in general -- are type-safe transformations at runtime any different from those performed by compilers?
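
As a small sketch of the test in point 2 (in Scheme; v+ is a made-up name, and vector-map is assumed to be the multi-vector version from R6RS/R7RS): an element-wise operator that reads like its scalar counterpart.

;; Element-wise vector addition that "looks" like scalar addition at the
;; call site: (v+ a b) sits next to (+ x y) quite naturally.
(define (v+ . vs)
  (apply vector-map + vs))

(v+ '#(1 2 3) '#(10 20 30))   ; => #(11 22 33)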

Modern VMs perform just-in-time compilation, selectively compiling and optimizing programs. When new code is dynamically generated, that code is folded into the same JIT framework as "normal" code. When existing code is modified, a VM will "de-optimize", then allow further optimization to occur when needed. If a program's type system is modified, is it possible to perform similar work? In general, it seems that a comprehensive model for the evolution of a program's full AST plus data (types/logic/data) is needed. Lisp does this without checks and balances: is a given transformation of a program acceptable?

Perhaps there is a kind of bitemporal typed lambda system out there somewhere; something that understands what the type system used to be, what we think it is now, and allows both to be combined together in computation. :)


S-expressions

For many uses, this is what makes them hard to work with: manipulating programs in such a low-level form is difficult.

In Lisp, S-expressions are how you specify programs as a programmer, so it's only natural that macros work at this level as well. In other words, they are not "low-level" compared to the level at which Lisp programmers work. I'd also argue that "manipulating programs in such a low-level form is difficult" is wrong. The difficulties in writing sophisticated Lisp macros are generally not due to the S-expression representation but are more related to issues of hygiene, or the lack thereof. Hygiene is an issue when working in any system in which macros can be composed in fairly arbitrary ways.
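
A minimal sketch of what hygiene buys you, in portable syntax-rules (swap! is the usual textbook example, nothing novel):

;; The macro introduces a binding named tmp, but hygiene renames it behind
;; the scenes, so it can neither capture nor be captured by a user variable
;; that happens to be called tmp.
(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (set! a b)
       (set! b tmp)))))

(let ((tmp 1) (x 2))
  (swap! tmp x)
  (list tmp x))   ; => (2 1)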

Agreed

I take issue with the exact same statement. S-Expressions are nothing more than syntax, and may have arbitrary (i.e. high-level or low-level) semantics.

I largely agree with Per. Lack of hygiene is asking for trouble, and creates difficulty for the programmer. A purely hygienic system totally avoids variable capture, which almost always avoids problems... many macros don't need to capture variables, even if they bind new variables.

In my limited experience, the issue with Scheme's syntax-rules is that it has a normal-order evaluation semantics and no way of forcing the evaluation of a given sub-expression. This leads to the requirement that macros be CPS'ed in order to be composable.
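
For the curious, a tiny sketch of the CPS idiom being referred to (reverse-k and quote-it are invented names): since a syntax-rules macro cannot expand a sub-form and then use the result, the helper instead takes a "continuation" macro and expands into a call to it.

;; (reverse-k (k arg ...) forms acc) reverses forms onto acc, then expands
;; into (k arg ... reversed-forms).
(define-syntax reverse-k
  (syntax-rules ()
    ((_ (k arg ...) () (acc ...))
     (k arg ... (acc ...)))
    ((_ k (x rest ...) (acc ...))
     (reverse-k k (rest ...) (x acc ...)))))

;; A continuation macro that simply quotes whatever it is handed.
(define-syntax quote-it
  (syntax-rules ()
    ((_ lst) 'lst)))

(reverse-k (quote-it) (1 2 3) ())   ; => (3 2 1)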

Syntax-rules isn't the end

At the time the Scheme report R5RS was written, the research on macros and hygiene hadn't come to a conclusion. Now several Scheme implementations support the syntax-case system, which makes it easy to write hygienic macros, but at the same time also allows arbitrary manipulation of the syntax.
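
For readers who haven't seen it, a small sketch of the flavour (this is the standard pattern/template style and should work in the usual psyntax-based implementations; while is just an example macro):

;; The transformer is an ordinary procedure over syntax objects. The
;; pattern/template part looks like syntax-rules, but arbitrary Scheme code
;; may run in between, and hygiene holds by default (datum->syntax is the
;; escape hatch when capture is actually wanted).
(define-syntax while
  (lambda (stx)
    (syntax-case stx ()
      ((_ test body ...)
       #'(let loop ()
           (when test
             body ...
             (loop)))))))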

A complete list of papers on macros is available at http://library.readscheme.org/page3.html at the impressive readscheme.org

End times...

You seem to be suggesting that syntax-case is "the end", and that research on macros and hygiene has now come to an end... Is that a generally accepted idea, that syntax-case is, in some sense, an optimal solution to the problem, and now it's solved? I'm not being facetious, I honestly don't know much about recent developments in macrology. I guess I haven't seen much "new" since syntax-case, although there still seems to be a lot of really interesting work on macro/module interactions.

Not yet

Never say never. I like syntax-case and in particular the version made by Flatt for PLT Scheme. There are other solutions though such as syntactic closures or explicit renaming.

One thing I know is that syntax-rules macros aren't expressive enough -- for one, they lose too much source location information in the process of macro expansion, which makes it difficult to write precise, custom-made error messages.

The interaction between modules and macros is indeed very tricky to get right, and I am looking forward to seeing what R6RS will suggest. Presumably they will use ideas from (among others) Dybvig's and Flatt's papers.

A particularly interesting one is "Composable and Compilable Macros: You Want it When?" by Flatt.

Syntax-case is still applicative order

You'll find no disagreement from me, and I don't consider myself an expert macrologist, but my comments about syntax-rules apply to syntax-case as well. :)

Syntax-case is strict

... my comments about syntax-rules apply to syntax-case as well.

which were:

In my limited experience, the issue with Scheme's syntax-rules is that it has a normal-order evaluation semantics and no way of forcing the evaluation of a given sub-expression. This leads to the requirement that macros be CPS'ed in order to be composable.

I can't get this to make sense.

In the syntax-case macro system, macro transformers are normal Scheme functions. That means that they use the normal Scheme semantics, so syntax-case macros do not use normal-order semantics.

Expansion of a subexpression can be done with local-expand (I can't remember whether it is called the same in psyntax - but the important point is that the underlying macro model supports it).

And finally syntax-case is not purely hygienic.
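
To make the "normal Scheme functions" point concrete, here is a hedged sketch (repeat is a made-up macro; with-syntax is standard in syntax-case systems): the transformer computes with its input at expansion time.

;; syntax->datum strips the annotations from n so we can compute with it;
;; the computed list of copies of the body is spliced back into the output.
(define-syntax repeat
  (lambda (stx)
    (syntax-case stx ()
      ((_ n expr)
       (let ((count (syntax->datum #'n)))   ; n must be a literal integer
         (with-syntax (((e ...) (let loop ((i count) (acc '()))
                                  (if (zero? i)
                                      acc
                                      (loop (- i 1) (cons #'expr acc))))))
           #'(begin e ...)))))))

;; (repeat 3 (display "hi")) expands into
;; (begin (display "hi") (display "hi") (display "hi"))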

applicative/normal-order macros

Leon P Smith wrote:
In my limited experience, the issue with Scheme's syntax-rules is that it has a normal-order evaluation semantics and no way of forcing the evaluation of a given sub-expression. This leads to the requirement that macros be CPS'ed in order to be composable.
I am assuming that you meant: "there is no way of forcing the expansion of a given sub-expression".

I will assume that if you want to force an expansion of a sub-expression, then you want to look inside the output of the expansion for something interesting. I will try to explain why that may not be a good idea:

Applicative-order evaluation for macros yields macros that are non-portable across implementations. For example, suppose you use two or more implementations that support syntax-rules. The first system supports letrec natively and provides no macro for it. Thus, the expansion of (letrec ([x 5]) x) will be the same expression. The second system does not support letrec natively but expands it to another form: (letrec-values (((x) (#%datum . 5))) x), because letrec-values is its native form. A third system may expand it into (let ([x (void)]) (let ([t 5]) (set! x t) x)) (or maybe it will even expand the let forms into direct applications of lambda expressions, a let-values form, or a call to call-with-values). Writing a portable macro across these implementations is just not realistic.

Aziz,,,

grammatical transformations and macros

The language machine takes rather an extreme position on this, but one which works pretty well. It operates by doing substitutions - it recognises patterns and substitutes replacements. To that extent it operates in much the same way as a macro processor.

But the patterns it recognises and the patterns it substitutes can include nonterminal symbols and variable bindings, and rules can be constructed that operate as 'backend' rules analysing material produced by 'frontend' rules. There is no explicit parse tree, but the 'ghost of a parse tree' determines the scope rules of variable references from within fragments of transformed representations.

So you can construct and operate on whatever internal representation suits your purposes as a means of communicating between 'frontend' and 'backend' rules.

As for type information, you can use associative arrays to associate information with symbols. One way of using this is to make the type information enter into the analysis, and again, it's up to you to devise the representation that suits you.

Stratego is something I ought to look into. But I have tried very hard to keep the grammar of the lmn metalanguage as simple and un-rebarbative as possible.

Syntax-objects

In the Scheme world a lot of research has been put into the question of how macros and modules interact. One solution has been to drop S-expressions in favor of syntax-objects, which are "S-expressions with annotations". To make the manipulation of syntax-objects convenient, a custom pattern matcher on syntax-objects is used, which enables the programmer to use normal S-exp syntax most of the time.

Look for papers on syntax-case, such as "Syntactic Abstraction in Scheme" (Dybvig et al.), which contains the low-level details, and "Writing Hygienic Macros in Scheme With syntax-case" (Dybvig), which contains a high-level explanation of how to use them.

Or take a look at how Slate macros work

Slate has Smalltalk syntax and is a prototype-based, multiple-dispatch language.

If you mark a message send as a macro call, then the message will be sent to the parsed AST form of the input parameters at macro-expansion time. There are labelled quotations, etc.

slate.tunes.org
Macros in Slate

Program manipulation with concrete and abstract syntax

Wow! That's a lot of material you cover in your posting. Books have been, could have been, should be, and will be written on these topics for years to come.

Let me fence myself into one tiny corner. The topic, Macros/Syntax vs. AST manipulation, can be seen as pitting two complementary program representations against each other: the syntactical representation of a program (either in text format or as a concrete syntax tree [CST]), and the AST.

(I also see another very interesting line of debate: full program transformation vs. "macro-like" meta-programming, but I will not go there now.)

The first thing we should probably acknowledge is that both have their uses. The CST is a faithful representation of what the programmer actually wrote, usually down to the indentation (whitespace) and comments. This information is necessary for doing some forms of program manipulation, most notably refactoring, but sometimes also reverse engineering. This is not to say that refactoring and reverse engineering should/must be done (exclusively) on the CST, however.

The AST, harking back to McCarthy, was intended as the essential representation for a program. It glosses over lexical elements, comments and formatting, containing only the syntactical essence. From a "pure" AST, recreating the original program faithfully is impossible: the result is guaranteed to be semantically equivalent, but not syntactically identical. ASTs are the representation of choice in compiler front-ends.

The upside of the AST approach is that the AST is precise (if it is not, you have done something wrong in your design, or your language semantics is very convoluted). The immediate downsides are that the AST usually has no concrete syntax of its own (it could be serialized to XML, or to ATerms), and that it is rather verbose, because of its preciseness:

Consider the Java statement int foo;

This does not say anything about modifiers to the variable foo, such as final, static, etc., because the language semantics says that the absence of these markers means "non-final" and "non-static". Both by convention and for practical purposes, such information is always embedded explicitly into ASTs, making the representation uniform. I.e., an AST node for the variable declaration would say that it is "non-static" and "non-final". This allows us to have exactly one node type for variable declarations, which is extremely handy when writing the transformation code, and results in more readable and compact code.

The price to pay for this level of uniformity is verboseness, and this is exactly what concrete syntax attacks. In Stratego, concrete syntax allows you to write |[ int foo; ]| and have the Stratego compiler expand it inline into the full AST representation. Used in this way, concrete syntax is merely a convenience for the meta-programmer. It does not allow Stratego to do anything it could not already do on the AST.
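
An S-expression analogy of the same trade-off (the node names below are invented for illustration, not Stratego's actual Java signature): the fully uniform AST term versus a quasiquoted template that plays the role of the concrete-syntax shorthand.

;; Fully explicit AST term for "int foo;", with the defaults spelled out.
(define full-ast
  '(var-dec (mods (static #f) (final #f))
            (type int)
            (name foo)))

;; A template with a hole: the kind of shorthand a concrete-syntax facility
;; expands for you, here written by hand.
(define (int-dec name)
  `(var-dec (mods (static #f) (final #f))
            (type int)
            (name ,name)))

(int-dec 'foo)   ; => the same term as full-ast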

From experience, we know that this is extremely useful when generating new AST nodes, but it is not always that useful when matching patterns against existing code. Think of patterns as ASTs with holes, where we can line up the pattern against a tree and see if the non-holey parts match. If they do, we can pick out the sub-ASTs visible through the holes.
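
A minimal sketch of "patterns as ASTs with holes" over plain S-expressions (the (? x) hole notation and the node names are made up; matching yields the bindings seen through the holes):

;; Returns an alist of hole bindings on success, #f on mismatch.
(define (match-pattern pat term bindings)
  (cond ((and (pair? pat) (eq? (car pat) '?))    ; a hole: (? name)
         (cons (cons (cadr pat) term) bindings))
        ((and (pair? pat) (pair? term))
         (let ((b (match-pattern (car pat) (car term) bindings)))
           (and b (match-pattern (cdr pat) (cdr term) b))))
        ((equal? pat term) bindings)
        (else #f)))

(match-pattern '(var-dec (type int) (name (? n)))
               '(var-dec (type int) (name foo))
               '())
;; => ((n . foo))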

Writing such templates on the AST is perfectly precise, but sometimes it turns out that writing equivalent patterns in concrete-syntax form is tricky, and in Stratego, we do a lot of pattern matching. Therefore, concrete syntax cannot be said to be a panacea, but it is an extremely handy tool in the kit.

(Disclaimer: I've spent a year working with the Stratego team in Utrecht, and also implemented an aspect-language extension to Stratego [AspectStratego].)

Thanks in particular for prov

Thanks in particular for providing the CST/AST terminology -- I hadn't picked up on those words yet.

"Patterns with holes" is what we mostly see out there in the Lisp world, and perhaps within Stratego as well. Given that state of the art, most people think that macros are quite arcane. Some of that is due to lack of familiarity, and some of it is due to inherent complexity.

We use macros/transformations when we want to inject functionality into code that has no idea it is being transformed. If the code is aware that it will be transformed, conventional mechanisms (function refs, type parameters, etc) can be used to accomplish the "customization" that is required.

Aspect systems for Java replace "patterns with holes" with a series of specific scenarios -- pointcuts, join points, and so forth. All of these can be seen as "patterns with holes", but it's significantly easier when we give them names, as done with aspects. Vocabulary is important; concepts given agreed names are the benefit of pattern languages. Stratego provides for the composition of transformations; a pattern language of transformations is equally important.

Practical code usually has to be pretty smart about dealing with other code. For example, Java's reflection API is very commonly used to allow one library of code to adapt itself to other code. There are many libraries that use reflection to figure out what classes "look like" and do persistence on those classes' behalf. These libraries take many forms -- some provide libraries that can perform the persistence, others generate classes that can do the task, and still others generate descriptions (such as SQL or XML) that can accomplish the task with the aid of other tools.

Should a reflection-style API or usage be any different than performing transformation tasks at compile time? In the Lisp world this is unified, but perhaps this unification is easy because of the untyped nature of references.

One of the key things brought to the table by static typing is the "view", where the compiler can transform one kind of entity to another. Views are one of the most important parts of "do what I mean".

What we're all really looking for is the language that gives us the most "do what I mean" for the characters we type, and couples it with "I get what you mean".

Object-orientation is useful primarily for its ability to allow complex namespaces to be constructed via ordering and scoping, from which we invoke functionality and/or access data.

Type systems to date provide a measure of control over this localized namespace assembly, and give us a little "magic" assignment to help the process along (super, self, outer objects, etc). The type system and declarations act as a set of rules that interact to create (hopefully) desired behavior. This is precisely a code transformation process. Languages today mostly lock up this set of rules in a running program, and do not allow them to be modified. Part of this is rooted in the difficulties of dealing with change in the ruleset.

That problem has been studied in rule engines, most of which do permit the rules to be changed "on the fly", and perhaps there's something to be learned there. I've often wondered if compilers might be better implemented as rule-based systems instead of deterministic transformations, where the order of transformations is not known, but is "worked out" during compilation by various means.

p.s. What's up with the Gentoo Java support? ;)

Again, you touch on a lot of

Again, you touch on a lot of difficult and extremely juicy topics. I'll only pick up some, to keep my reply reasonably short.

One part here is static typing vs. dynamic typing, in both your subject language (the language you are transforming) and the transformation language. In Lisp, the subject and transformation languages are the same, and they are dynamically typed. In, say, MetaML and MetaOCaml, the subject and object languages are still the same, but they are statically typed.

Both of these approaches (static vs. dynamic again) have their uses, and I don't want to devolve into this debate, save to say that doing program transformation on a statically typed language allows you to do a lot more transformation at compile time (offline), because of the additional information provided by the types.

Your question -- should a reflection-style API or usage be any different than performing transformation tasks at compile time -- is a very good one. What I have observed in my experience with reflection APIs is that the information you get through these APIs is not always complete enough and is often rather low-level. Take Java as an example. Using java.lang.reflect, we cannot get the AST for a method. We must reconstruct it from the bytecodes. What if the bytecodes were generated by Jython? What do we reconstruct then? Even for Java, how can we reconstruct a foreach loop reliably? We cannot.

(Adding the AST as an extra section in the .class file is certainly possible, in the same way as debugging info and annotations are added already, see JSR-175, for example).

The reflection API gives us access to our subject program, but not in its original subject language. The Java->bytecode transformation is not an isomorphism: we do not know exactly where we came from, and bytecodes are at a significantly lower abstraction level. These issues alone make it impossible to use the exact same approaches for transformation at runtime (online) as we do at compile-time (offline).

Arguably, we could pass our software around in .ast files instead of .class files. In a way, this is what Lisp does, and we see that this gives us significantly better expressiveness in our transformations.

(As for Java on Gentoo, I have unfortunately not had the time to work on it significantly since before the summer, but I can assure you that we'll finally unleash Java 1.5 and a lot of new J2EE and server-oriented Java packages in the coming weeks.)

Typesafe transformations - Template Haskell vs MetaML

Template Haskell allows compile-time transformations of the AST, with the caveat that any transform can fail a typecheck. I think the Template Haskell paper also describes MetaML as allowing compile-time transformation, but only if provably type safe beforehand.

So it seems that both typesafe ahead of time and checked type safety per transformation are valid models.

Compile-time transformation in MetaML and MetaOCaml

MetaML/MetaOCaml do not directly allow you to transform pieces of code. MetaML/MetaOCaml introduce several explicit annotations for multi-stage programming, similar to binding-time annotations in partial-evaluation.

These operators give you control over the time at which parts of the code are evaluated, but do not actually allow you to modify quoted code. In fact, similar to binding-time annotations in partial evaluation, if you drop the staging annotations from a MetaML/MetaOCaml program, you get a working, but usually very slow, program.

Static type checking of a multi-stage programming language is therefore not extraordinarily difficult: on top of the type system of OCaml, they basically have to check whether the staging annotations are used correctly.

Static type checking of code generated by a meta-program written in a language that allows you to construct (and maybe even modify) arbitrary abstract syntax trees is very difficult. At GPCE'05 there were two papers on this subject: "Statically Safe Program Generation with SafeGen" by Shan Shan Huang, David Zook, and Yannis Smaragdakis, and "A Type System for Reflective Program Generators" by Dirk Draheim, Christof Lutteroth, and Gerald Weber. The first system allows you to implement a code generator that generates Java programs, taking other Java code as input; SafeGen can then prove that the generator produces only type-correct Java programs. SafeGen uses a theorem prover and is sound. The second system uses a more conventional type checker, but is not sound.

Composition in CST vs AST

What about composition of transformations? It seems to me that working on the AST loses the ability to compose macros, or at least requires a very different system to that present in Scheme.

Macros vs Transformations

Since Stratego was mentioned in the original post, I feel allowed to use it as an example here, but the idea of composable AST transformations is in principle language-independent.

One of the design goals for Stratego was indeed composable transformations, and in my opinion, it tackles this problem very well.

I have also programmed Scheme, and developed DSLs in it, so I see where you are coming from. Scheme is a lovely language. Your intuition that the systems are rather different is correct. In my opinion, the most apparent difference is that in Scheme, the macros (the meta-program) are written inside Scheme, and are part of the program (the subject program).

A macro in Scheme cannot (at least not to my knowledge) 'reify' arbitrary functions; that is, it cannot get the CST for an arbitrary function or other macro. It gets s-exps as arguments and works on these, treating them as code, data, or both. As a side effect, the macro may change global variables, etc., but the primary effect of a macro is to expand its computation at the invocation site. That is, in-place macro expansion is the primary mode of operation.

With Stratego, the meta-program is entirely outside the subject program. It has access to the entire AST (or CST; the Stratego language actually works equally well on CSTs, but this is a lot more cumbersome and therefore never done), and can modify any part of the program. A transformation is expressed with rewrite rules, controlled by strategies. Strategies are composable by using control-flow combinators (the left-choice operator, the composition operator, etc.) to obtain new strategies. These strategies are in general never invoked directly from the subject program: there is no canonical way to directly call a Stratego transformation from the subject program.

That being said, you can come up with whatever scheme you want to apply the transformations. The most common approach is to run a Stratego transformation over your program as part of the compilation process, before handing code off to the compiler. The transformation acts as a preprocessor, and may pick up on whatever hooks you have put into your subject program.

An example of this is found in CodeBoost, a high-level C++ optimizer written in Stratego. Here, we use Stratego as sort of a macro expander/function generator that generates specialized map functions.

The specializer strategy will walk through the C++ program, looking for invocations of the function map. When it finds one, it will use other strategies to infer the types of the arguments to map. After that, it will use yet other strategies to generate a specialized map function with a unique name. The invocation will then be rewritten to call the specialized function. (The generated map may later be inlined.)

This is an example of strategy composition: we have strategies for C++ type inference (semantic analysis), traversals on C++ ASTs, and generation of specialized new maps.
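
To give a feel for what "rewrite rules controlled by composable strategies" means, here is a tiny language-independent sketch over S-expression terms (this is not Stratego code; the combinator names only mimic Stratego's seq, left-choice and bottomup):

;; A rule/strategy is a function: term -> rewritten term, or #f on failure.
(define (seq s1 s2)            ; Stratego's  s1 ; s2
  (lambda (t) (let ((r (s1 t))) (and r (s2 r)))))

(define (left-choice s1 s2)    ; Stratego's  s1 <+ s2
  (lambda (t) (or (s1 t) (s2 t))))

(define (try s) (left-choice s (lambda (t) t)))

(define (all s)                ; apply s to every direct subterm
  (lambda (t)
    (if (pair? t)
        (let loop ((xs t) (acc '()))
          (if (null? xs)
              (reverse acc)
              (let ((r (s (car xs))))
                (and r (loop (cdr xs) (cons r acc))))))
        t)))

(define (bottomup s)           ; bottomup(s) = all(bottomup(s)); s
  (lambda (t) ((seq (all (bottomup s)) s) t)))

;; One rewrite rule: constant-fold an addition of two literals.
(define (fold-add t)
  (and (pair? t) (eq? (car t) '+)
       (number? (cadr t)) (number? (caddr t))
       (+ (cadr t) (caddr t))))

((bottomup (try fold-add)) '(* (+ 1 2) (+ 3 4)))   ; => (* 3 7)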

Type-checking..?

I don't know very much at all about term rewriting, so I'm curious what people do to typecheck rewriting programs, and what safety properties that typechecking guarantees. Can you tell me anything about this?

Typechecking in Stratego

I can tell you how we do it in Stratego, but this is certainly not the only term rewriting language.

Stratego works on terms, and terms are constructed from signatures. Think of signatures as data type declarations, or type declarations.

A signature consists of a set of constructors. A constructor is of the form C(t1, ..., tn), where C is the name of the constructor and t1, ..., tn are the types of the subterms allowed at each position.

For example,

If: Expr * Block * Block

would declare that If is a term that must take three arguments. The first must be an Expr term, the last two must be Block terms. According to this definition, If(Expr(...),Block(...),Block(...)) would be a valid term.

In current versions of Stratego, only the arity is checked inside a transformation. Therefore, you can create the term If(Expr(...),Expr(...),Expr(...)) without being caught.

This is not as stupid as it looks, because a lot of transformations go from one language (say a high-level AST) to another (say a compiler IL), and this happens by rewriting parts of the AST in multiple rewrite steps. It is okay that intermediate terms are invalid, as long as the final result of the transformation is a valid term in the output language (say, the IL).

The way we handle this in Stratego is to have format checkers (these are automatically generated from the signatures) applied after transformations. A format checker will verify that the output term from a transformation is correct.

Typically, a transformation system, say a compiler, is composed from multiple individual transformations in a pipeline. By putting format checkers between each stage in the pipeline (the stages are usually quite small), we can always ensure type correctness, and catch errors at a reasonable granularity (though other techniques, such as unit testing, are more useful for catching and avoiding errors).
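
A hedged sketch of the idea in S-expression form (the constructor names follow the If example above, with the arities of Expr and Block made up; a real, generated format checker also checks the sorts, not just what is shown here):

;; Signature as data: constructor name -> arity.
(define signature '((If . 3) (Expr . 1) (Block . 1)))

;; A term is well-formed if every constructor is known, has the right
;; number of subterms, and all subterms are themselves well-formed.
(define (well-formed? t)
  (if (pair? t)
      (let ((entry (assq (car t) signature)))
        (and entry
             (= (cdr entry) (length (cdr t)))
             (all-well-formed? (cdr t))))
      #t))                                  ; leaves (literals, names) pass

(define (all-well-formed? ts)
  (or (null? ts)
      (and (well-formed? (car ts))
           (all-well-formed? (cdr ts)))))

;; Between two pipeline stages:
;;   (if (well-formed? term) term (error "stage produced an ill-formed term"))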

Runtime transformation

3. Programs increasingly operate in an environment of fluid metadata, and must be able to adapt. Therefore, transformation at runtime is highly desirable.

Do you really think so? I've always had the impression that runtime transformations are way more perilous than they are interesting. Or perhaps you are thinking of some kind of type-safe run-time transformations?

I guess it all depends on how

I guess it all depends on how you define "perilous". If you define it as "can't guarantee type-correctness post-transformation", then some space-age thinking is needed to ensure transformations stay consistent with type.

If you define it as "customer kicks my ass because they don't want to have to bring the server down to make metamodel changes", then that's something else again. ;)

Put like this...


If you define it as "customer kicks my ass because they don't want to have to bring the server down to make metamodel changes", then that's something else again. ;)

Of course, in that case :)

Still, you might be interested in the Kell project (and its pre-prototype implementation, Chalk), a distributed language of reconfigurable components. In particular, the notion of replacing a bit of a program with another is a primitive notion of the language.

A very tiny correction, if I am allowed to say so :-)

C++ performs build-time transformation via its template mechanisms.

The C++ template mechanism works by substitution, not by evaluating expressions. A C++ template class/procedure cannot do any kind of transformation like LISP or meta-languages can.

Correction correction ;>

C++ most certainly does evaluate expressions at template evaluation time. However, you are correct that its ability to evaluate expressions is severely limited compared to languages which support metaprogramming more naturally. For instance, the infamous compile-time factorial metafunction would not be possible if the template engine performed no evaluations. Also, I believe that it has been proven that the template engine is Turing-complete. This may come as a bit of a surprise, but consider that branching can be achieved via template specialization, which is yet another form of evaluation.

C++ Templates are Turing complete

Todd Veldhuizen has a sketch of a proof for the Turing-completeness of the C++ template engine.

(Also, it may be useful to point out that term rewriting is equational reasoning. That is, it is purely about substitution. Term rewriting has been proven equivalent to lambda calculus, in computational power, and is therefore also Turing complete;)

Lambda calculus

It's surprising something so trivial makes it into a paper. And he's obviously not a Lisp hacker, or he'd have written a lambda calculus interpreter. :)

Here's my normal-order lambda calculus evaluator that works by term rewriting. It didn't take any more "brain cycles" to write than the equivalent ML or Haskell version, although you do spend time battling with compiler error messages.

// Term definitions.

template<char Name>
struct Var { };

template<typename Param, typename Body>
struct Lambda { };

template<typename Left, typename Right>
struct Apply { };

// Structural equality.

template<typename First, typename Second>
struct Eq
{
    enum { Res = false };
};

template<char Name>
struct Eq< Var<Name>, Var<Name> >
{
    enum { Res = true };
};

template<typename FirstParam, typename FirstBody, typename SecondParam, typename SecondBody>
struct Eq< Lambda<FirstParam, FirstBody>, Lambda<SecondParam, SecondBody> >
{
    enum { Res = Eq<FirstParam, SecondParam>::Res && Eq<FirstBody, SecondBody>::Res };
};

template<typename FirstLeft, typename FirstRight, typename SecondLeft, typename SecondRight>
struct Eq< Apply<FirstLeft, FirstRight>, Apply<SecondLeft, SecondRight> >
{
    enum { Res = Eq<FirstLeft, SecondLeft>::Res && Eq<FirstRight, SecondRight>::Res };
};

// Substitution.

template<typename Term, typename ReplaceTerm, typename ReplaceVar>
struct Substitute { };

template<typename Var, typename ReplaceTerm>
struct Substitute< Var, ReplaceTerm, Var >
{
    typedef ReplaceTerm Res;
};

template<typename Param, typename Body, typename ReplaceTerm>
struct Substitute< Lambda<Param, Body>, ReplaceTerm, Param >
{
    typedef Lambda<Param, Body> Res;
};

template<typename Param, typename Body, typename ReplaceTerm, typename ReplaceVar>
struct Substitute< Lambda< Param, Body >, ReplaceTerm, ReplaceVar >
{
    typedef typename Substitute<Body, ReplaceTerm, ReplaceVar>::Res SubBody;
    typedef Lambda<Param, SubBody> Res;
};

template<typename Left, typename Right, typename ReplaceTerm, typename ReplaceVar>
struct Substitute< Apply<Left, Right>, ReplaceTerm, ReplaceVar >
{
    typedef typename Substitute<Left, ReplaceTerm, ReplaceVar>::Res SubLeft;
    typedef typename Substitute<Right, ReplaceTerm, ReplaceVar>::Res SubRight;
    typedef Apply<SubLeft, SubRight> Res;
};

// Reduction.

template<typename Term>
struct Reduce
{
    typedef Term Res;
};

template<typename Param, typename Body>
struct Reduce< Lambda<Param, Body> >
{
    typedef typename Reduce<Body>::Res ReducedBody;
    typedef Lambda<Param, ReducedBody> Res;
};

template<typename Param, typename Body, typename Right>
struct Reduce< Apply< Lambda<Param, Body>, Right > >
{
    typedef typename Substitute<Body, Right, Param>::Res Res;
};

template<typename Left, typename Right>
struct Reduce< Apply<Left, Right> >
{
    typedef typename Reduce<Left>::Res ReducedLeft;
    typedef typename Reduce<Right>::Res ReducedRight;
    typedef Apply<ReducedLeft, ReducedRight> Res;
};

// Evaluation.

template<typename Term, bool IsDone = false>
struct Eval { };

template<typename Term>
struct Eval<Term, true>
{
    typedef Term Res;
};

template<typename Term>
struct Eval<Term, false>
{
    typedef typename Reduce<Term>::Res Reduced;
    typedef typename Eval< Reduced, Eq<Term, Reduced>::Res >::Res Res;
};

The usual bug(s)

Try evaluating the following term:

Lambda<Var<'y'>,
       Apply<Lambda<Var<'x'>,
                    Lambda<Var<'y'>, Var<'x'> > >,
             Var<'y'> > >

The alpha conversion blues

You got me there, Vesa. Darn alpha conversion! That's what I get for not using de Bruijn indices. In any case hopefully I got my point across: lambda calculus interpreters are intrinsically cooler than Turing machines in demonstrating the computational completeness of template metaprogramming. :)

By the way, I pretty much learned everything non-trivial I know about template metaprogramming from those old COTDs and TOTDs of yours on flipCode. It's interesting to see that you've left games and went into PLT. I hope it's treating you well.

It can also do pattern matching on static expressions.

For example:

#include <iostream>   // for std::cout
#include <cstdio>     // for getchar
using namespace std;

template < int N > struct factorial {
    static int eval() {
        return N * factorial< N - 1 >::eval();
    }
};

template < > struct factorial < 0 > {
    static int eval() {
        return 1;
    }
};

int main()
{
    cout << factorial< 5 >::eval();
    getchar();
    return 0;
}

but static expression evaluation predates templates, so it is not part of the template engine. It just happened that the compiler needed to evaluate the static expressions passed to a template in order to keep the substitutions going.

In other words, the Turing-completeness of the C++ template engine happened accidentally, whereas LISP was designed around the principle of code processing itself.