## Nesting of quasi-quotation

I'm confused by the issue of producing programs, via quasi-quotation, that themselves use quasi-quotation. Let's use <...> for quotation and ~ for splicing. Consider the expression <...<...~e...>...>, where e is some term that produces code. Should that evaluate to the program ...<...e...>... with e spliced in, or to the program ...<...~e...>... that contains a splicing sub-expression? Given the first choice, how would you achieve the behavior of the second? And given the second choice, how would you quote a program with a splice outside of a quote? How does it all fit together into a coherent and expressive whole? What about indicating at the splice point which quotation the splice belongs to? Or introducing an escape construct \ so that you can express quoting a splice as <...<...\~e...>...>?

### This is a problem that has

This is a problem that has been solved multiple times and for which solutions are documented in many places, I'm sure. But the only place I recall it being documented is Adam Megacz's Multi-Level Languages are Generalized Arrows [ltu], with regard to multi-level terms (section 3.5).

(I really hate quasiquotation as a basis for staged computation.)

### Thanks! I'm still looking

Thanks! I'm still looking into it; it's a bit hard to understand with all the abstractions. My understanding at the moment is that they desugar quasiquotation into function calls, with (approximately) one function for each language construct, much like do notation or computation expressions in F# are desugared into function calls. But given all the category theory, there has to be a lot more to it than that. Perhaps you can give me an advance hint of the implications for nesting?

What do you dislike about quasiquotation for staged computation, and what's the alternative? Are you specifically talking about Lisp like quasiquotation (which I agree is pretty bad) or also about MetaOCaml like quasiquotation that is well scoped?

### I dislike quasiquotation

I dislike quasiquotation because it is painful to read, it is difficult to disentangle or abstract, it mixes semantic and syntactic layers, it doesn't integrate nicely with user-defined syntax, and it doesn't work uniformly for both compile-time and runtime staging (e.g. in a code distribution scenario).

One alternative is latent optimization of first-class functions. This requires keeping some extra meta-data or uncompiled forms available, but that is the compiler's responsibility. This might be expressed as let f' = specialize f in ... where f is a first-class function, and f' refers to an optimized variation thereof (i.e. constant propagation, branch elimination, rewriting, and other optimizations). This separates the performance gains from the staging model, at which point staging can be modeled a number of ad-hoc ways (based on which effects are desired): applicatives, monads, comonads, dynamic arrows, currying, etc..

Compile-time staging would then be supported by having the 'main' behavior be of an explicitly staged type, perhaps one modeling a link-time environment.

### With structured editing it

With structured editing it does integrate nicely with user-defined syntax ;-)

Why doesn't it work for both compile-time and run-time staging? It's just syntactic sugar for building your own AST, after all; it doesn't have much to do with compile time or run time.

Automatically specializing a closure to its environment is a nice goal but IMO pretty much unattainable. It has all the same problems as automatic partial evaluation, which hasn't worked out well in practice. In essence, quasiquotation lets the programmer specify binding-time annotations, instead of relying on some automatic inference process.

Yea, having the first stage be at compile time rather than at run time is obviously a good idea :)

### How does structured editing

How does structured editing help?

The reason quasiquotation doesn't work uniformly for compile-time and run-time staging is precisely because it operates at the level of syntax and ASTs rather than semantics-layer abstractions. There are ways to make syntax work at run-time, but the extra integration steps make it non-uniform, creating some discontinuities (i.e. the clear difference between compile-time and run-time).

Specializing a closure to its environment? I think you misunderstood. In the earlier example, f' would be specialized based only on information already contained within f (e.g. through currying or composition), and is independent of the environment. Programmers are plenty able to specify staged computations via monads or other abstractions. The trouble is that they can't achieve near-native performance through these abstractions.

### With structured editing you

With structured editing you just have a quasiquote node and beneath that everything works as normal. On second thought, I don't understand what's so hard about quasiquotation even with string based syntax. Can you explain?

I do not understand what you mean in your second paragraph, can you give a concrete code example?

Re 3rd paragraph: that's a subset of what I said? When you partially apply a function to a value, what you get is a closure with that value stored in the closure's environment.

I'm having a lot of trouble parsing all the buzzwords though, so maybe I'm completely misunderstanding what you mean. Probably a more concrete description would be easier for me.

### I have not argued that

I have not argued that quasi-quotation is hard with string based syntax. I claimed it doesn't integrate nicely with user-defined syntax. Integration isn't just an issue of adding quote-and-splice support to each user-defined syntax; there is also the issue of figuring out what it means to have ASTs for ad-hoc user-defined syntaxes spliced together in an ad-hoc manner.

Regarding the second paragraph: this is fundamental stuff; I have 'no idea' which concrete code would help.

• Semantics-layer abstractions typically include functions, records, and types, whereas syntax-layer abstractions typically include macros, templates, quotation, or user-defined syntax. Some languages, such as Kernel, the pattern calculi, or term rewriting systems, unify these abstraction mechanisms (usually at some cost elsewhere).
• Staging mechanisms are naturally coupled to abstraction mechanisms due to the late-binding of resources or values common in staged systems. Some behavior and data is provided elsewhen or elsewhere of other code and data.
• By 'uniformly for compile-time and run-time' I mean a model for staging that does not make or require a distinction between run-time and compile-time, i.e. using the same mechanisms and having the same level of expressiveness and power and modularity.
• The value of a uniform staging model may not be obvious unless you're deeply into both live programming and staged programming. Uniform staging provides a sort of elegant regression, a 'turtles all the way down' consistency and simplicity. A compiler in a live staged model can fuse stages based on stability (i.e. profile and distinguish slow-changing data/code vs. fast-changing data/code).
• Syntactic abstraction mechanisms tend to operate poorly across communication, modules, components, and semantic abstraction boundaries (functions, records, etc.). My hypothesis is that any model for uniform staging must be based on semantic abstractions (such as monads, applicatives, comonads, dynamic arrows, GADTs, etc.).
• The common complaint against semantic staging is that, unlike macros and syntactic abstraction, it generally never achieves native performance due to late-binding of values and deep layers of abstraction. Though with JIT / tracing compilers it can come closer than before.

Hopefully this gives you some idea of what I'm saying.

Regarding the third paragraph: even if you interpret it as a 'subset of what you said', it's a tractable subset. We aren't specializing an abstract closure against its environment. We needn't infer much at all. We're waiting until runtime, then propagating some runtime values as constants. This sort of latent optimization has much in common with tracing/hotspot compilation (and indeed, can be lazily implemented with those). What were you imagining?

I don't believe I've used any buzzwords, or at least not in their role as buzzwords.

### there is also the issue of

there is also the issue of figuring out what it means to have ASTs for ad-hoc user-defined syntaxes spliced together in an ad-hoc manner

Well, the answer is simple. You have an expression with a hole, and you plug another expression into that hole. That is all you need if you choose the language constructs right, by which I mean such that every expression type has a fixed number of children. If some expressions can have a varying number of children, then you will also need unquote-splicing. For example, if you curry function calls, every function call node has exactly two children: the function and the argument. If you have function calls with a varying number of arguments, as in C and Lisp, then you need unquote-splicing to splice in a number of function arguments. For this reason, and other reasons related to structured editing, I think it's best to design such a language so that each expression type has a fixed number of children. This is not hard: the same thing you do with curried functions works for all constructs, i.e. a nested series of nodes with a fixed number of children instead of one node with a variable number of children. The latter also works, but it's slightly uglier.
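This hole-plugging idea can be sketched in Python (node names are my own invention), with fixed-arity nodes as in a curried language:

```python
from dataclasses import dataclass

# Every node has a fixed number of children: App always has exactly
# two, so no unquote-splicing is needed to fill a hole.
@dataclass
class Var:  name: str
@dataclass
class App:  fn: object; arg: object
@dataclass
class Hole: pass

def plug(tree, expr):
    """Return a copy of tree with every Hole replaced by expr."""
    if isinstance(tree, Hole):
        return expr
    if isinstance(tree, App):
        return App(plug(tree.fn, expr), plug(tree.arg, expr))
    return tree

# The template <f _ x> as curried applications: ((f _) x)
template = App(App(Var("f"), Hole()), Var("x"))
assert plug(template, Var("e")) == App(App(Var("f"), Var("e")), Var("x"))
```

Because every node has a known arity, plugging is plain structural recursion; no variable-length child lists ever need to be spliced.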

Hopefully this gives you some idea of what I'm saying.

Yes, it helps, but I'm still not sure why quasiquotation doesn't satisfy these requirements. I very much agree that the same mechanisms should work at compile time and run time. Quasiquotation is simply syntactic sugar for building an AST. That works the same at compile time and run time. New syntax is an orthogonal concern, although it is a place where quasiquotation is often useful to compile the new syntax extension down to existing language constructs. To do this you need to generate an AST, and for this quasiquotation is useful. In Lisp these two concerns are intertwined (the syntactic sugar and generating the AST), and in Lisp quasiquotation is unstructured. I certainly agree that that is not good. I'm talking about structured quasiquotation as in MetaOCaml, where you don't have arbitrary unstructured AST manipulation.

We're waiting until runtime, then propagating some runtime values as constants. This sort of latent optimization has much in common with tracing/hotspot compilation (and indeed, can be lazily implemented with those). What were you imagining?

I was imagining the same thing. At run time you have a closure with a lexical environment where all values in that lexical environment are known. For example:

let x = readNumber () in
specialize (fun y -> y * (x + x))

Now if you call specialize on that closure, because x is a known constant in the closure's environment, you can constant-propagate x + x. The problem is that 'just constant propagate' assumes a partial evaluation mechanism, and as far as I know there is no satisfactory automatic partial evaluation mechanism. Either you propagate too much and it may run very slowly or even cause the partial evaluator to loop, or you don't propagate enough and you don't get the performance you want. Explicit, programmer-specified annotations provide predictable performance, have simple semantics, and have well-understood implementations.

let x = readNumber () in
!<fun y -> y * ~(x + x)>
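The contrast between the two snippets can be mimicked in plain Python (names are my own), where the staging decision is simply when x + x gets evaluated:

```python
def read_number():
    return 21                      # stand-in for readNumber ()

x = read_number()

# Unstaged: x + x is recomputed on every call of the closure.
f_plain = lambda y: y * (x + x)

# Manually staged: x + x is evaluated once, at code-construction time,
# mirroring the ~(x + x) splice in the quoted function body.
def stage(x):
    c = x + x                      # evaluated now, in the current stage
    return lambda y: y * c         # residual next-stage code

f_staged = stage(x)
assert f_plain(2) == f_staged(2) == 84
```

The splice is exactly the binding-time annotation: it marks x + x as belonging to the earlier stage, with no inference required.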

'Buzzwords' was the wrong word; they only sound like buzzwords to somebody who does not understand them. 'Jargon' may be the correct term?

### Well, the answer is simple.

Well, the answer is simple. You have an expression with a hole, you plug another expression in that hole. That is all you need

That works well enough if certain conditions are met: that the expression you plug into the hole is meaningful in its new context, and that you have some easy means to identify holes. But in the case of user-defined syntax, neither of these conditions generally holds true. And this is even before we concern ourselves with preservation of meaning.

There is a reason we need type-safety if we perform substitutions involving more than one type (including user-defined types); similarly, we need syntax-safety (or typed syntax) if we perform substitutions involving more than one syntax (including user-defined syntax). To avoid duplicating semantic efforts, I think it's better to ignore the idea of 'AST' entirely unless we're talking about the specialized subject of parsing the language.

Quasiquotation is simply syntactic sugar for building an AST. That works the same at compile time and run time.

Building an AST does not integrate nicely with the run-time types, values, and semantic abstractions. Unless you have a very, very loose definition of AST, or a very restrictive language.

problem is that 'just constant propagate' is assuming a partial evaluation mechanism. The trouble with that is that there is as far as I know no satisfactory automatic partial evaluation mechanism.

You're focusing too much on just constant propagation. There is also composition (e.g. specialize (f . g . h)) and latent optimizations involving rewriting, equality saturation, tracing, memoization, caching. Many of these would be difficult to express using quasi-quotation, much less validate for preservation of meaning.

Optimization is a challenging problem, and I think it's better if we avoid entwining it too deeply with our code beyond annotating a few recommendations. I am not opposed to use of annotations to guide an optimizer, though I think quasi-quotation is a poor fit for that role. Specialize has simple enough semantics (formally equivalent to id) and is trivial to implement in a formally valid manner (replace with id) and ultimately is itself one such annotation to guide the compiler/optimizer. specialize is identity with an attitude/annotation.

As far as 'satisfactory' goes, I can't speak for you but I'm often satisfied with the set of heuristic decisions a compiler makes for me in achieving near-native performance. I'm just a little unsatisfied that these same benefits are difficult to achieve in most languages for runtime data and behavior.

### That works well enough if

That works well enough if certain conditions are met: that the expression you plug into the hole is meaningful in its new context, and that you have some easy means to identify holes. But in the case of user-defined syntax, neither of these conditions generally holds true. And this is even before we concern ourselves with preservation of meaning.

The condition that an expression is meaningful in its context is checked by the type system; MetaOCaml, for example, has typed expressions. In the example I showed above, the ~(x + x) is only allowed because it sits in a hole where an integer is expected.

You need to identify holes in any case to be able to put in nested subexpressions even for ordinary programming, unquote is just one extra type of subexpression. This is no problem.

Building an AST does not integrate nicely with the run-time types, values, and semantic abstractions. Unless you have a very, very loose definition of AST, or a very restrictive language.

Why? Please be concrete. Why does an AST not integrate nicely with run-time types? Why does an AST not integrate nicely with values? Why does an AST not integrate nicely with semantic abstractions? (and what are semantic abstractions? can you give an example of a semantic abstraction?)

You're focusing too much on just constant propagation. There is also composition (e.g. specialize (f . g . h)) and latent optimizations involving rewriting, equality saturation, tracing, memoization, caching. Many of these would be difficult to express using quasi-quotation, much less validate for preservation of meaning.

Why? Those optimizations will be applied by the compiler when you compile the code that you built with quasiquotation. And why are those things any easier to express with specialize?

Specialize has simple enough semantics (formally equivalent to id) and is trivial to implement in a formally valid manner (replace with id) and ultimately is itself one such annotation to guide the compiler/optimizer. specialize is identity with an attitude/annotation.

Yea, it has simple enough semantics in that sense, but I meant predictable performance. Predictable performance is very important; you don't want to depend on a fragile "sufficiently smart specializer". For example, GHC's optimizer is very nice, but you can't rely on it: small changes in the code make virtually unpredictable differences in performance. If somebody has successfully written a piece of code just right for the optimizer, it can often look misleadingly simple (although just as often it looks atrocious), but arriving at that code is much harder than doing the same in a language with predictable performance (e.g. C++). Therefore, for such a specializer to supplant quasiquotation-style staging, it needs performance semantics. The difficult and open problem (and, if you ask me, intractable one) is coming up with those semantics and an implementation.

### Why does an AST not

Why does an AST not integrate nicely with semantic abstractions?

The basic issue is that you can perform operations on syntax or an AST that you cannot perform on the denoted semantic objects. To achieve the benefits of syntax-level processing, developers are under pressure to develop and integrate more code at the syntactic level. This developmental pressure can interfere with semantic abstraction (and generally will, unless there is some careful design to avoid the issue).

If syntactic abstraction mechanisms are less modular or less open than the semantic abstraction mechanisms, this pressure may also contribute to non-extensible or monolithic code.

Conversely, ASTs are of the wrong type for integration with most semantic objects. E.g. a function typically takes a value as an argument, not an AST expression denoting said value. Evaluation is often pervasive with such integration, which tends to hinder the benefits associated with the syntactic abstraction.

Those optimizations will be applied by the compiler when you compile the code that you built with quasiquotation.

Indeed. I've not said that quasiquotation prevents optimization. I've said the opposite: one reason people favor syntactic abstraction is to achieve these optimizations.

What specialize does is break the dependence for performance on syntactic abstraction. Developers are then free to model staging using normal abstractions - monads, comonads, currying, applicatives, and so on. The resulting optimization is not superior, but developers don't face any new choices or pressures.

And why are those things any easier to express with specialize?

You don't express those optimizations with specialize. You leave it to the compiler, as you seem to be suggesting now even for quasi-quotation.

Compare: earlier you were 'expressing' constant-propagation with quasi-quotation, rather than leaving it to the compiler.

I meant in the sense of predictable performance. Predictable performance is very important

You argue inconsistent things. At most one of these can be true:

• "Those optimizations will be applied by the compiler when you compile the code that you built with quasiquotation."
• The latency for staged computation using quasi-quotation is more predictable than the latency for staged computation using specialize.

You might argue that quasi-quotation is only up to construction of the AST. But that doesn't result in an apples-to-apples comparison with regards to this predictable performance concern.

I agree that predictable performance is important. But it is often sufficient to focus on predictable lower and upper-bounds, and optimizers can work within that window.

You need to identify holes in any case to be able to put in nested subexpressions even for ordinary programming

Not all user-defined syntax is for 'ordinary programming'. It's often domain specific, problem-specific, highly specialized. As a concrete example, consider quasi-quoting into a user-defined syntax for regular expressions. (MetaOCaml isn't a good source of counter-examples here; like most quasi-quote models, it favors a homogeneous language and syntax. Though I did find one paper on 'implicit' heterogeneity through a postprocess 'offshore translation' step, it only works if the target language is similar to OCaml.)

### It's not inconsistent. You

It's not inconsistent. I'm not talking about the time it takes to optimize; I'm talking about the performance of the code resulting from the optimizations. You wanted those optimizations; I already said that I think it is a bad idea to rely on them. I merely showed that they can be integrated with quasiquotation, no problem. With staged computation you can express exactly what you want to evaluate when. For example, when staging an interpreter you can eliminate the interpretive overhead. With specialize you just have to hope that the optimizations behind it are smart enough to do the same. Unless you're coming up with a tremendous breakthrough in compiler optimization, more likely than not it won't be smart enough.

There's no problem with regular expressions. Regular expressions are just expression trees consisting of various node types like a|b, a*, a+, which have holes that you plug other regular expressions into.
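For instance, with made-up node types, a regex template with a hole can be plugged and then rendered to Python's re syntax to check the result:

```python
import re
from dataclasses import dataclass

@dataclass
class Lit:  ch: str                       # a single literal character
@dataclass
class Alt:  left: object; right: object   # a|b
@dataclass
class Star: body: object                  # a*
@dataclass
class Hole: pass

def plug(rx, sub):
    """Replace every Hole in the regex tree with sub."""
    if isinstance(rx, Hole): return sub
    if isinstance(rx, Alt):  return Alt(plug(rx.left, sub), plug(rx.right, sub))
    if isinstance(rx, Star): return Star(plug(rx.body, sub))
    return rx

def render(rx):
    """Lower the tree to Python's concrete regex syntax."""
    if isinstance(rx, Lit):  return re.escape(rx.ch)
    if isinstance(rx, Alt):  return "(?:%s|%s)" % (render(rx.left), render(rx.right))
    if isinstance(rx, Star): return "(?:%s)*" % render(rx.body)
    raise ValueError("unfilled hole")

# the template (_|b)* with Lit("a") spliced into the hole
pattern = render(plug(Star(Alt(Hole(), Lit("b"))), Lit("a")))
assert re.fullmatch(pattern, "abba")
```

Each node type has a fixed number of children, so plugging a hole never needs splicing of variable-length argument lists.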

[If you have the ability to break open closures, you can express specialize with quasiquotation:

let specialize (Closure (f, env)) = !<f env>

Here a closure of type a -> b is a function f : env -> a -> b coupled with an env. Then, when the resulting AST gets compiled, f and env are in there as constants, so the optimizations can do their work.]
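The bracketed idea can be sketched in Python with a hypothetical Closure record; here specialize just applies the code to its environment (the identity-like semantics), whereas an optimizing implementation would bake env in as constants:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Closure:
    code: Callable   # a function env -> (a -> b)
    env: object      # the captured environment

def specialize(c):
    # Semantically just application; an optimizing implementation
    # would compile c.code with c.env folded in as constants.
    return c.code(c.env)

# fun y -> y * (x + x) with x = 21 captured in the environment
clo = Closure(code=lambda x: (lambda y: y * (x + x)), env=21)
f = specialize(clo)
assert f(2) == 84
```

Because specialize is observationally equivalent to ordinary application, it can be replaced by it (or by a constant-folding compiler) without changing program meaning.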

### We can model staged

We can model staged computation using monads, e.g. of forms similar to a -> IO (b -> IO c). Or we can use co-monads, or dynamic arrows, or applicatives, or a number of other semantic staging abstractions. The challenge of "expressing exactly what we want to evaluate when" is relatively trivial, can be addressed in dozens of ways, and bears little further mention.

The remaining issue - how to achieve near-native performance from staged code - is the relevant one under discussion. For reasons already mentioned, I prefer this performance issue be separated from the staging abstraction.

I merely showed that [those optimizations] can be integrated with quasiquotation no problem.

You showed that one relatively trivial optimization can be integrated with quasiquotation no problem. And you combined it with a hand-wavy claim that constant propagation is inherently difficult or unpredictable for compilers.

With specialize you just have to hope that the optimizations behind it are smart enough to do the same.

Not really. You can rely on annotations if you don't want a smart compiler.

I'm not talking about time to optimize, I'm talking about the performance of the code resulting from the optimizations.

I meant full-path latency: time to construct the staged behavior + time to compile the staged behavior + time to execute the staged behavior. With runtime staging (e.g. for a graphics pipeline), all aspects are important. But even if you talk only about the last element, you won't be getting more predictable results once you start depending on optimizations from when you 'compile the code that you built with quasiquotation'. Even with your clarification, your argument is no more consistent.

There's no problem with regular expressions.

You considered how to inject a regular expression from a general purpose language. You haven't started on the issue of how to perform quasi-quotation from the user-defined language for regular expressions.

### We can model staged

We can model staged computation using monads, e.g. of forms similar to a -> IO (b -> IO c). Or we can use co-monads, or dynamic arrows, or applicatives, or a number of other semantic staging abstractions.

I'm sure that with monads/applicatives/etc. you can do some kind of computation that could be reasonably named "staged computation", but I'm specifically asking about the staged computation that you can do with MetaOCaml and similar systems. If that's also what you mean, then please substantiate the assertion.

You showed that one relatively trivial optimization can be integrated with quasiquotation no problem.

?? I showed that all the optimizations that you named can be done in the same way by applying them when the AST produced by quasiquotation gets compiled.

And you combined it with a hand-wavy claim that constant propagation is inherently difficult or unpredictable for compilers.

No, I said that constant propagation is not powerful enough to do what staging with quasiquotation (like in MetaOCaml) does, and full automatic partial evaluation is difficult and unpredictable. This is not a controversial claim.

Not really. You can rely on annotations if you don't want a smart compiler.

Please specify such a system of annotations and what specialize will do with them. Without this, we are comparing quasiquotation to an unknown system.

you won't be getting more predictable results once you start depending on optimizations from when you 'compile the code that you built with quasiquotation'.

Quasiquotation obviously doesn't magically fix the problem of unpredictable compiler optimizations, so why expect it to perform magic? I have already repeated several times that I think it's a bad idea to depend on them, and that explicit staging is better. Since YOU wanted those optimizations, I just mentioned how they can be easily integrated into staging based on quasiquotation. Again, if you want predictability in this combined system, you need to do the staging yourself with quasiquotation instead of hoping that compiler optimizations will fix performance.

You considered how to inject a regular expression from a general purpose language. You haven't started on the issue of how to perform quasi-quotation from the user-defined language for regular expressions.

I don't understand what this means, can you give a CONCRETE example of what you want to do that's not possible with quasiquotation but should be possible? With code?

### "Staged computation" simply

"Staged computation" simply refers to a condition where one computation generates another that has a strong separation from the former. MetaOCaml doesn't do anything special with regards to staged computation, i.e. in terms of what can be expressed.

If you focus on the specific mechanisms for staging - e.g. quasiquotation, which is a form of syntactic abstraction - then of course we'll be able to argue differences or similarities with other mechanisms. E.g. syntactic abstraction mechanisms are very similar to GADTs, but quite different from a monad stack.

You seem to have a very different understanding of 'integrated with quasiquotation' than I do, if you happen to include postprocessing that is completely decoupled from quasiquotation.

Please specify such a system of annotations and what specialize will do with them.

There are many languages with performance annotations (for parallelism, for register allocation, for inlining, for unboxing, etc.). Haskell supports annotations for rewrite rules. specialize only does what a normal optimizer might do with such annotations, but that can be a lot.

Without this, we are comparing quasiquotation to an unknown system.

One purpose of specialize is so I can get performance WITHOUT binding to any particular staging mechanism. This allows developers to model staging in a number of ad-hoc ways. So, 'unknown' (aka 'abstract', or separation of concerns) is one of the features of the alternative.

Quasiquotation obviously doesn't magically fix the problem of unpredictable compiler optimizations, so why expect it to perform magic?

I don't. I expect quasiquotation to fail for the majority of optimizations you might desire, such as rewrite rules or dead-sink elimination. And by 'fail' I mean you won't be able to express such things using quasiquotation without turning your codebase into a mess.

There is a small class of optimizations that can be easily expressed using quasiquotation, but ultimately you're depending on the following compilation and all its hidden optimizations to offer the real performance gains. And with regards to this, specialize does not compare unfavorably.

I have already repeated several times that I think it's a bad idea to depend on them, and that explicit staging is better.

Explicit staging includes all those monads/applicatives/etc. I've mentioned, and certainly doesn't guarantee predictable performance. Every single time you've repeated "explicit staging is better", I don't exactly disagree (I favor explicit staging; specialize does not hinder explicit staging), but I have received the impression that you're mistakenly conflating staging with a particular mechanism and context for staging (such as MetaOCaml). This is why I repeatedly mention that other things are staging, too, and perhaps seem to be ignoring your assertions. Do you plan to continue repeating yourself?

In any case, I do acknowledge that quasiquotation and other syntactic abstraction mechanisms (e.g. templates, macros, user-defined syntax) have some advantages for precise explicit control of inlining and constant propagation, at least in-the-small. I happen to favor user-defined syntax in that role.

Again, if you want predictability in this combined system you need to do the staging yourself with quasiquotation

Explicit staging can help with predictability by providing some control. But again, performance predictability and control can be achieved by a number of means and are ultimately more a function of programming model - e.g. pipelined dataflow vs. constraint-logic - than of staging. Quasiquotation is just one mechanism for staging, and isn't necessary for predictability.

### It's ambiguous

It's ambiguous. To disambiguate splicing within nested quotations, the splice would need to indicate the level of nesting it refers to. You could do this by indicating a "number of outward hops" at the splice site, or by naming the quotes and having the splice refer to a quote by name, or by reducing expressiveness and having the splice implicitly refer to, e.g., the innermost quote.
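The "outward hops" option can be made concrete with a small sketch (representation and names are mine): a splice is evaluated when its hop count escapes every enclosing quote, and otherwise survives as residual syntax for an inner quote to handle later:

```python
from dataclasses import dataclass

@dataclass
class Quote:  body: object
@dataclass
class Splice: val: object; hops: int = 1   # how many quotes to escape

def run_quote(q):
    """Evaluate the outermost quote: splices whose hop count escapes
    all enclosing quotes are filled in now; the rest stay as syntax."""
    def walk(t, depth):
        if isinstance(t, Splice):
            return t.val if t.hops == depth else Splice(t.val, t.hops)
        if isinstance(t, Quote):
            return Quote(walk(t.body, depth + 1))
        if isinstance(t, tuple):
            return tuple(walk(x, depth) for x in t)
        return t
    return walk(q.body, 1)

# hops=2 escapes both quotes, so E is spliced into the inner program now
nested = Quote(("f", Quote(("g", Splice("E", hops=2)))))
assert run_quote(nested) == ("f", Quote(("g", "E")))

# hops=1 belongs to the inner quote, so the splice survives as syntax
nested1 = Quote(("f", Quote(("g", Splice("E", hops=1)))))
assert run_quote(nested1) == ("f", Quote(("g", Splice("E", 1))))
```

Both readings from the original question are then expressible by choosing the hop count, instead of by an escape character.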

### Indeed

And for a nice discussion of this issue from the Lisp point of view, see Bawden's Quasiquotation in Lisp, particularly sections 3.2 and 3.3, and Appendix A.

As Tim says, you basically just have to have some rule. The problem is that the implications of various rules aren't terribly obvious. An advantage of doing it the Lisp way is that it's at least sort-of well understood.

### Scheme vs CL

Re: "the Lisp way" is more than one way, unfortunately. It's amazing how difficult a problem is nested quasiquotation. R5RS Scheme and Common LISP handle it slightly differently, which I learned after an argument with R. Kent Dybvig (who was clearly right!). Having a CL background, I was arguing that my code, based on the appendix in CLtL2 with corrections vetted by Steele, was correct, and learned that the semantics in R5RS differed. This Gauche blog entry describes a similar discrepancy.

### sample quasiquote old spec

I'd like to understand your question better, in case it affects my own code later. I plan to include another version of quasiquote read macros and expansion in a future Lisp, based on old code in an interpreter of mine from the early 90's, which now I only barely recall. Let's see, yes I do have those old sources on this laptop. When my reader sees a backquote followed by an expression, instead of returning (quasiquote expr), it instead reads expr and then expands that expression using quasiquote as defined below. [Edit: note under recursion, this causes bottom-up quasiquote expansion from leaves to root of an expression, so there are no nested quasiquotes, unless you play games with list surgery.]

(Note the reader turns ,expr into (unquote expr), and turns ,@expr into (unquote-splicing expr), which helps explain why code below looks for those two symbols.)

(define (quasiquote skel)
  (cond ((null? skel) nil)
        ((atom? skel) (list 'quote skel))
        ((eq? (car skel) 'unquote) (cadr skel))
        ((and (pair? (car skel))
              (eq? (caar skel) 'unquote-splicing))
         (list 'append (cadr (car skel)) (quasiquote (cdr skel))))
        (else
         (combine-skels (quasiquote (car skel))
                        (quasiquote (cdr skel))))))

(define (combine-skels left right)
  (cond ((and (quoted? left) (quoted? right))
         (list 'quote (cons (cadr left) (cadr right))))
        ((null? right) (list 'list left))
        ((and (pair? right) (eq? (car right) 'list))
         (cons 'list (cons left (cdr right))))
        (else (list 'cons left right))))

(define (quoted? x)
  (and (pair? x) (eq? (car x) 'quote)))

That's exactly how it appears in a source code comment to explain what the C++ code does. (Indentation could be better.) The actual expansion occurs in equivalent C++ code that does what the interpreter would do if it executed code above. Except it assumes the definition of each symbol is the expected version.
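For readers who want to poke at the behavior, here is a rough Python transcription of what that expander is meant to do (my own port; it uses nested Python lists for s-expressions, so dotted pairs, and the quoted-constant folding clause that only they trigger, are omitted):

```python
def qq(skel):
    """Expand a quasiquote skeleton into code that rebuilds the tree.
    S-expressions are nested Python lists; atoms are strings/numbers."""
    if skel == []:
        return []
    if not isinstance(skel, list):                # atom: quote it
        return ['quote', skel]
    if skel[0] == 'unquote':                      # ,expr: leave a hole
        return skel[1]
    head, tail = skel[0], skel[1:]
    if isinstance(head, list) and head and head[0] == 'unquote-splicing':
        return ['append', head[1], qq(tail)]      # ,@expr: splice a list in
    return combine(qq(head), qq(tail))

def combine(left, right):
    """Merge the expanded car and cdr into one tree-building expression."""
    if right == []:
        return ['list', left]
    if isinstance(right, list) and right and right[0] == 'list':
        return ['list', left] + right[1:]         # extend an existing (list ...)
    return ['cons', left, right]

# `(a ,x b) expands to (list 'a x 'b):
print(qq(['a', ['unquote', 'x'], 'b']))
# -> ['list', ['quote', 'a'], 'x', ['quote', 'b']]
```

Note how the original template is gone after expansion: all that remains is a `list`/`cons`/`append` constructor expression with holes where the unquotes were.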

This spec for quasiquote comes from a book handed to me by Mark Richer around 1989, which I no longer recall. When I Google for some of those symbols, I see similar quasiquote code attributed to AIP (artificial intelligence programming) for Cromemco. So that's the best I can do for a citation on the code above. How was I to know I'd want it 25 years later?

(I started dabbling in Lisp when Richer said, "You'll like it: code and data have the same form." Then he suggested quasiquote support in macros and showed me the spec as above, which I copied and later used in a kind of Scheme clone, after manual conversion into equivalent C++.)

I only used such quasiquoted expressions inside macros, which were evaluated twice — the first time with call-by-name for symbol substitution without evaluation, and a second time afterward to evaluate the resulting edited AST with substituted symbols and expressions in place.

In short, I thought the main purpose of quasiquoted expressions was as a tool in AST-based macro expansion. In contrast to C, which does text substitution in the preprocessor, the old style of macro in Lisp was expression substitution in the abstract syntax tree, so it was tree editing to C's text editing. If you think of C macros as a poor man's system for generics, based on text as templates, then Lisp macros are a similar kind of generics using AST as template, which gets edited by quasiquote expansion.

Your question seems to imply a subtle violation of that model, by involving more than one layer of "templates" when top level quasiquoted code deals with another level of quasiquoted code. It just seems like an editing conflict in a way much like an abstraction that bleeds through interface layers. Why does the situation you describe need to occur? (If my asking is rude, just ignore the question.)

### Why nesting occurs

It sounds like you're asking why nesting occurs? (It's possible I'm misunderstanding the referent of "the situation you describe".)

The usual case is macros that expand to the definitions of more macros. I'm not sure whether that was allowed in your system. The Bawden paper I linked above has a simple motivating example in Section 3.2.

### There are some hard cases of quasiquote.

First there is the metamacro case where macros expand into macro definitions. Second there is the case where a quasiquoted expression is used as an argument to another quasiquote.

You have to be careful, especially if your lisp allows variable captures, or you'll wind up evaluating something with respect to the wrong environment.

### It came up in two different

It came up in two different situations. The first was writing a quine with quasiquote. The second was binding-time annotations for partial evaluation with more than one stage.

As I see it, quasiquotation is just syntactic sugar for building the AST yourself by constructing each node by hand (I think this is also what you're saying but I'm not entirely sure). The AST of a language should be able to represent all its language constructs, and you should be able to generate all those ASTs with quasiquotation. Quasiquotation itself is a language construct, and you should be able to generate code that uses quasiquotation with quasiquotation.

It's true that quasiquotation in Lisp is used mostly in macros, but it can also be used in other instances where you are generating code. An example Matt Helige mentions above is macro-generating macros, but it applies more generally to any kind of code that generates code that generates code (besides macro expansion, other destinations for generated code include evaluating/compiling it at run time, or writing it out to a file).

I don't think bottom-up expansion of quasiquotation is the right thing to do, because that way you cannot reliably generate code that does quasiquotation. For example, the expression (quasiquote (quasiquote foo)) should return '(quasiquote foo), not '(quote foo) as it does in your method, if I understand it correctly. This is basically the same reason why macros should be expanded from the root down rather than from the leaves up: you don't want any macros to run inside quoted code, including the macro (quasiquote ...).

My question is about how unquote should interact with nested quasiquotation, particularly in fully structured languages where quasiquotation is guaranteed to produce well-scoped code. In those kinds of languages, expressions inside a quote can still have some effects; for example, let brings variables into scope. In contrast, in Lisp, code inside a quasiquote can't have any effect, since it's an amorphous tree of atoms (in this way Lisp sexprs are basically halfway between representing code as strings and representing code fully structurally; sexprs are a half-structured representation).

For example, you would be able to write <let x = EXPR in ~(foo x)>, whereas in Lisp `(let ((x EXPR)) ,(foo x)) is not well scoped, because the x is not brought into scope by a let inside a quasiquote. In Lisp the equivalent would be (let ((x (gensym))) `(let ((,x EXPR)) ,(foo x))).

Note that my question is about code that quotes quasiquotation, not about code like (quasiquote ... (unquote ... (quasiquote ...) ...) ...), since it's clear what that should do. In Lisp syntax the question is basically: what should (quasiquote (quasiquote (unquote e))) do? Should it expand to (list 'quasiquote '(unquote e)), or should it generate (list 'quasiquote e)? If you choose one of these options, how would you express the other? And if the answer is that we should do something else entirely, then what should we do?
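For what it's worth, the standard Lisp answer (both R5RS and CL, modulo the discrepancies noted upthread) is the second choice: the inner unquote is preserved, and you write a doubled unquote to splice at the outer level. A minimal level-counting expander in Python makes both behaviors concrete (the representation and names are my own, not any particular Lisp's):

```python
def expand(form, level=1):
    """Expand the body of a (quasiquote form). `level` counts enclosing
    quasiquotes; an unquote only splices when it returns us to level 0."""
    if not isinstance(form, list) or not form:
        return ['quote', form]
    if form[0] == 'unquote':
        if level == 1:
            return form[1]          # escapes all the way out: evaluated
        # still inside a quote: rebuild the unquote form itself as data
        return ['list', ['quote', 'unquote'], expand(form[1], level - 1)]
    if form[0] == 'quasiquote':     # nested quasiquote: go one level deeper
        return ['list', ['quote', 'quasiquote'], expand(form[1], level + 1)]
    return ['list'] + [expand(x, level) for x in form]

# `(`(,e)): the inner unquote is at level 2, so it is preserved as data.
print(expand(['quasiquote', ['unquote', 'e']]))
# `(`(,,e)): the doubled unquote reaches level 0, so e gets evaluated
# and its value is spliced into the generated template.
print(expand(['quasiquote', ['unquote', ['unquote', 'e']]]))
```

So under the standard rule, your first behavior is recovered by doubling the escape, which answers the "given the second choice" direction; the first choice as a default would leave no way to keep a splice in the generated program.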

### thanks for further explanation

I like your characterization of the issue as one of specifying and controlling holes in which we can substitute expressions. (The nature of holes is more interesting than constant templates.) I agree quasiquotation seems mainly syntactic sugar for AST building; I did try to say something similar.

I don't know the original author's name, which makes it hard to write third person remarks about the author's intent in code I posted. I think all I did was tweak it to use Scheme style names. As a joke, let's pretend it was written by comedian Red Skelton, since "skel" appears in symbol names (likely referring to a tree of pairs as a skeleton). I had to figure out what Red's code does, then use it as a read macro in a way making macro expansion work; but it's been twenty years since I last thought about it.

Red's quasiquote-expanding code basically builds a tree when evaluated, so unquoted holes can pick up expression substitutions. When used as a read macro, this translates an original template into a tree constructor, which loses the original information in the form of a template. We might call this "template erasure" in analogy to "type erasure", if we want to note loss of info known earlier.

> The AST of a language should be able to represent all its language constructs, and you should be able to generate all those ASTs with quasiquotation.

Yes, Red's code loses the original template when expanded at read time, so there never is an AST representing the original—just an expanded version ready for evaluation as an AST. This is inferior to a scheme capturing original trees verbatim, to preserve options in transformation. So I agree with your sentiment.

To define models a developer can grasp easily, it helps to specify both what and when things happen. An explanation of Red's quasiquote expansion requires parts about timing of read macro expansion, then later evaluation in stages to build a new tree with holes filled by substitution. A completely different model for quasiquote expansion can be defined another way as you suggest, but it would be nice to keep both what and when in a story.

Thanks for the discussion elsewhere here, from which I'm learning more about what folks want and expect. [Edit: although LtU's markdown now does nothing with tt tags, you can still use code tags to get a teletype font to highlight code snippets.]

### Closing the Stage

I think this is what you want. Link is broken there, so here's an updated one. See Oleg's site on applying these approaches to Template Haskell to derive a tagless-final library of statically type-safe code generation combinators, an important marker on the path to MetaHaskell!

### Very interesting! In the

Very interesting! In the first link it appears that they do not handle nesting? (because they limit the system to one stage) And in the second link as well? (because the embedded domain specific language does not have quasiquotation itself) Do you have ideas on how to extend it with nesting?

Even so, it is very interesting work and I'll study it further. Perhaps we can get syntactic sugar akin to F# computation expressions or idiom brackets for that, so that we can write ordinary syntax and get the tagless-final form of it by quoting it. That would insert the correct lam, var, etc. This would be useful for a lot of things besides quasiquotation. For example, it would let us write CPS in direct style by applying the CPS transform to tagless-final terms, and the same goes for effectful code in direct style by inserting the correct monad functions (f x would become ap f x, etc.).

How exactly should code be expanded? I'll give it a try.

<x>       --> var x
<\x -> e> --> lam (\x -> <e>)
<f x>     --> app <f> <x>
<~e>      --> e
<<e>>     --> quote <e> ??


(Or maybe <~e> should expand to something like lift e?) Then if we choose var = return and app = ap, we get direct-style monad notation. And an escaped unquote <\~ e>, where \ is the escape character, should maybe expand to unquote <e>. Here nesting should correspond to composing effects. Perhaps monads with additional functions for quote and unquote correspond to composable effects. With monad transformers, quote = runXYZ (e.g. runMaybe, runState, depending on the monad) and unquote = lift?

### Nesting of quasi-quotation in MetaOCaml

Since the code looks like MetaOCaml, I guess I should say how this issue is handled in MetaOCaml.

MetaOCaml associates a so-called level with each expression. Normal expressions, which are meant to be executed when the program is run, are at level 0. Brackets <...> increment the level and escapes ~ decrement it. Splicing only occurs when evaluating an escape in a level-1 expression.

You can see how this is implemented if you install MetaOCaml and look at the end of the file typing/trx.ml. Here is the relevant snippet, which compiles code within brackets (at some level n > 0):

let rec trx_bracket :
  (expression -> expression) ->          (* 0-level traversal *)
  int -> (expression -> expression) = fun trx_exp n exp ->
  let new_desc = match exp.exp_desc with
  ... lots of code snipped ...

  | Texp_bracket e ->                    (* nested bracket *)
      texp_apply (texp_ident "Trx.build_bracket")
                 [texp_loc exp.exp_loc; trx_bracket trx_exp (n+1) e]
  | Texp_escape e ->
      if n = 1 then (trx_exp e).exp_desc (* switch to 0 level *)
      else
        texp_apply (texp_ident "Trx.build_escape")
                   [texp_loc exp.exp_loc; trx_bracket trx_exp (n-1) e]

In the example from the original article, <... <... ~e ... > ...>, the expression e was at level 1 (an escape from doubly nested brackets). Only level-0 expressions are evaluated. Therefore, in the example, e will not be evaluated and no splicing will occur. To splice something within doubly nested quotations, double the escapes.
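The level discipline of `trx_bracket` can be mimicked in a few lines. Here is a toy Python model (my own representation, not MetaOCaml's): brackets increment the level, escapes decrement it, and only an escape seen at level 1 switches back to level 0 and gets evaluated.

```python
# A term is one of:
#   ('quote', t)   -- brackets  .< t >.  (level + 1)
#   ('splice', t)  -- an escape .~ t     (level - 1)
#   anything else  -- a literal or variable name
def build(term, level, env):
    """Rebuild a term quoted at level > 0; only an escape seen at level 1
    (i.e. one that returns us to level 0) is evaluated and spliced in."""
    if isinstance(term, tuple) and term[0] == 'quote':
        return ('quote', build(term[1], level + 1, env))
    if isinstance(term, tuple) and term[0] == 'splice':
        if level == 1:
            return run(term[1], env)     # switch to level 0: evaluate
        return ('splice', build(term[1], level - 1, env))
    return term

def run(term, env):
    """Evaluate a level-0 term: a bracket builds code, a name is looked up."""
    if isinstance(term, tuple) and term[0] == 'quote':
        return build(term[1], 1, env)
    return env.get(term, term)

env = {'e': ('quote', 1)}                # e is bound to the code value .<1>.
single = ('quote', ('quote', ('splice', 'e')))              # < < ~e > >
double = ('quote', ('quote', ('splice', ('splice', 'e'))))  # < < ~~e > >
# The single escape sits at level 2, so e is NOT evaluated; the doubled
# escape reaches level 0, so e IS evaluated and its code value spliced in.
```

Running `single` leaves the inner splice intact, while `double` splices in the code bound to e, mirroring the trivial.ml examples below.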

For illustration, here are a few examples from the regression test suite of MetaOCaml (file metalib/test/trivial.ml):

.<.<.~(.<1>.)>.>.;;
(*
- : ('cl, ('cl0, int) code) code = .<.<.~(.<1>.)>.>.
*)
.!.<.<.~(.<1>.)>.>.;;
(*
- : ('cl, int) code = .<1>.
*)
.<.<.~.~(.<.<1>.>.)>.>.;;
(*
- : ('cl, ('cl0, int) code) code = .<.<.~(.<1>.)>.>.
*)
Incidentally, cross-stage persistent values allow multiple stages without nesting brackets. The rough idea is that < < e > > can be represented as let x = < e > in < x >.