How to Write Seemingly Unhygienic and Referentially Opaque Macros with Syntax-rules

By Oleg Kiselyov
This paper details how folklore notions of hygiene and referential transparency of R5RS macros are defeated by a systematic attack. We demonstrate syntax-rules that seem to capture user identifiers and allow their own identifiers to be captured by the closest lexical bindings. In other words, we have written R5RS macros that accomplish what is commonly believed to be impossible.


Trying to figure this out

I'd been thinking I should grok the essence of what this paper is doing before commenting. I think I've got it now. Not finding the paper itself helpful, I tried to find the Petrofsky material cited as the starting point. That's posts from 2001–2002 on comp.lang.scheme (quite a blast from the past); here are links to what I (eventually) found in Google's archives, of which the November 2001 post was most helpful to me (I hunted down the other two mainly because it seemed advisable to be thorough while about the business).

(The Kiselyov paper attributes the posts it's interested in to "A. Petrofsky"; in fact, their author apparently signed their usenet posts using something different for the "A" each time.)

My understanding of the core trick here (yes, trick; not meant pejoratively):

We want to capture occurrences of a symbol in an operand to a Scheme "hygienic" macro.

Scheme's hygienic-macro facility is meant to be able to define new binding constructs, such as variants of let; and in order to do that, one has to be able to pass a symbol into the macro and bind that symbol in another operand. This is considered okay, hygiene-wise, because the symbol is specified in the same scope where it is being "captured" (as with let, where the symbols to be bound are passed in to let, and occurrences of them in the body of the let are then bound, also in the dynamic environment of the call to the macro). What you aren't supposed to be able to do, hygienically, is capture a symbol whose name is determined internally by the macro; as long as the symbol is specified in the dynamic environment where the macro is called, there is, ostensibly, no problem.
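A minimal sketch of the sanctioned case described above (the macro name `with-alias` is invented for illustration): the symbol to bind is supplied by the caller, so binding it in the body is hygienic.

```scheme
;; A hedged sketch (the name `with-alias` is invented): the caller
;; supplies `name`, so binding it in `body` is hygienic -- the capture
;; is requested at the call site, not hard-wired inside the macro.
(define-syntax with-alias
  (syntax-rules ()
    ((_ name value body)
     (let ((name value)) body))))

(display (with-alias x 21 (* x 2))) ; prints 42: the caller's `x` is bound
(newline)
```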

However, as Petrofsky points out, when we are trying to capture some particular symbol in an operand of a macro, the symbol actually does get passed into the macro: it occurs in the operand where we're trying to capture it. (If it doesn't occur in that operand, we don't need to worry about capturing it there anyway.) "All" we need is to find an example of that symbol in the operand, and use that to specify the binding. The Scheme hygienic-macro facility goes about achieving its hygiene by alpha-renaming symbols in the operand, and so, if we can correctly identify an alpha-renamed occurrence of the correct symbol in the operand, we can then bind that.

With some tedious recursion we can search through the structure of the operand looking for such an occurrence; what we need, to make the whole thing work, is a way to determine, in the base case of the recursion, whether or not a particular symbol, extracted from the operand, is an alpha-renamed occurrence of the symbol we want to capture.

We do this using a feature of Scheme's syntax-rules. syntax-rules defines all the different patterns by which a macro may be called, and what each one expands to. Before the list of pattern-expansion pairs is a list of keywords, which is usually empty. It is there so that the macro syntax can contain keywords that are required to occur in their literal form. But that means the macro facility has to match an operand against such a keyword without alpha-renaming. So we can set up a subsidiary macro that does a pattern match, requiring its first operand to be the particular symbol we want to capture, taken as a keyword, and accepting its second operand as a symbol being passed in.

Whatever particular symbol we have found in the operand, we then pass it to this subsidiary macro twice; the first occurrence gets matched as a keyword, without alpha-renaming, while the second gets alpha-renamed and thus tells the subsidiary macro just what the alpha-renamed identity is that ought to be bound. And the subsidiary macro can then use the alpha-renamed symbol to do whatever effectively-unhygienic thing it was that we set out to do.
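The base case of that recursion can be sketched with a tiny subsidiary macro (names here are invented for illustration): listing `it` in the literals list makes the first operand match without alpha-renaming, exactly the keyword-matching behavior described above.

```scheme
;; A hedged sketch of the base-case test (names invented). Because `it`
;; appears in the literals list, the first rule fires only when the
;; candidate symbol is an occurrence of `it`; any other symbol falls
;; through to the second rule.
(define-syntax same-as-it?
  (syntax-rules (it)
    ((_ it kt kf) kt)   ; candidate matched the literal `it`
    ((_ x  kt kf) kf))) ; anything else

(display (same-as-it? it "found it" "not it"))    ; prints: found it
(newline)
(display (same-as-it? other "found it" "not it")) ; prints: not it
(newline)
```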

Very nice explainer. Thanks!


Thanks for the great explanation

Oleg's papers are usually great if you can decrypt them. I skimmed the paper the other day when this was posted and couldn't figure it out, but your explanation is easy to follow. Plus this is further evidence for my claim that macro systems like this that re-evaluate a code block multiple times in different contexts (e.g. once where some symbol is a keyword and another where it isn't) are doing it wrong.

But out of curiosity, is there a proposed fix to this state of affairs in scheme? (I guess it also exists in Racket?)

R6RS and Racket (and Kernel)

I find the official documentation on these macro systems very hard to read, too; but don't see any evidence that this anomaly in the behavior of R5RS macro pattern-matching has been changed either in the R6RS or in the definition of Racket.

My own approach to syntax extension in Kernel is all about doing things explicitly instead of implicitly. Certainly the Scheme macro-hygiene algorithm is virulently indirect; rampant alpha-renaming is used to create an illusion that macro definitions follow the same lexical-scoping structure as the runtime part of the language, and it seems inevitable that such a complicated illusion would be flawed, so that the complications would leak into the observable language.

Why not to fix

My explanation of Petrofsky, which folks have kindly praised, only mentions alpha-renaming of symbols where the macro is called. I'd wondered whether to mention further complications, but they'd interfere with explaining the trick, and the explanation isn't really "wrong"; it just leaves some of them out. But those further complications may help explain why the Scheme folks might be aware of Petrofsky's trick yet still choose not to try to "fix" the hygienic macro facility to prevent it.

I do not, atm, know how to explain the whole of the macro-hygiene device so it's easy to understand. I'm not certain it's possible to do so. Perhaps one should formulate a koan: the macro-hygiene device that can be clearly explained is not the true macro-hygiene device. I did, briefly, understand it myself, during part of the writing of my dissertation; I had undertaken to compare the complexity of different syntax-extension devices for Scheme, by writing a vanilla Scheme evaluator and then adding the different devices to see how much each of them increased the size of the evaluator. (Kernel-style fexprs were far the smallest addition, hygienic macros far the largest.) Even then, though, I left out Scheme's pattern-matching, so still wouldn't have known quite how the hygiene device interacts with the pattern-matching.

Basically, the hygienic macro device is awash in alpha-renaming. The sort of alpha-renaming I mentioned above, that gets bypassed by "keyword" pattern-matching, is just part of what's going on. Parameters to a lambda get alpha-renamed in its body. Unbound symbols in a macro get... temporarily (I think)... alpha-renamed during transcription, involving some deep voodoo. The binding of things in syntactic environments is itself rather alpha-renaming-like, and interplays with all of this. With all these things going on, the "keyword" pattern-matching happens at a somewhat different point in the process than the "macro parameter" pattern-matching, and that differential is what makes the Petrofsky device work, but neither one of them is really acting on the raw, unprocessed syntax as typed in by the programmer. In particular, if the macro is trying to capture a symbol whose syntactic binding at the macro definition is different from its syntactic binding at the macro call — I think — the "keyword" pattern-matching will fail to recognize its symbol occurrences in the operand after all, undermining Petrofsky's trick.

What all this means, for the prospect of "fixing" the macro facility to disallow Petrofsky's trick, is that it's not easy to say with confidence that there's "really" a dire problem here, and rather mind-bending to try to work out what would be required to "fix" it. So if they really haven't done anything to try to "fix" it, that's likely why not.

I think the short version of

I think the short version of John's (accurate) explanation is that the syntax-rules keyword list is unhygienic. This isn't really "fixable" since that's the intended behavior. The real fix is to think more broadly about what a hygienic macro is, as in Michael Adams' POPL paper.

Or don't use macros

Thanks for the link. That's a nice paper. I still think we'd be better off generalizing syntax beyond macros. Macros can be recovered as syntax objects whose only semantics are macro-free expressions, but syntax objects in general could have all kinds of other semantics.

Systems with additional

Systems with additional semantics (see for example David Fisher's work on Ziggurat) have the same hygiene problems that macros do. Hygiene is lexical scope for meta-programs, and is needed anywhere metaprogramming occurs.

lexical scope for meta-programs

Hygiene is lexical scope for meta-programs, and is needed anywhere metaprogramming occurs.

As a concatenative programming aficionado, I object!

No need for lexical scopes or the concept of hygiene if you simply don't use local definition of symbols. But compile-time evaluation, metaprogramming, staging, etc. are still useful.

Sure, but "lexical scopes

Sure, but "lexical scopes for meta-programs" is much easier to understand than what's going on in that paper.

Thanks for the link. Ziggurat was posted here a few years ago and I skimmed it a bit at the time. I may take a closer look.

That paper is about what

That paper is about what "lexical scoping for meta-programming" _means_. That the slogan is simpler is unsurprising.

The paper is about what

The paper is about what "lexical scoping for meta-programming" means _in the context of a scheme macro system_. Thinking of syntax extension in terms of "macro expansion" is the source of pretty much all of the complexity.

Re: Or don't use macros

I don't like macros; I think they only exist because the underlying language has weak support for generic programming.

Type-classes can be used to write meta-programs as logic (the precise kind of logic depends on instance resolution rules).

This makes sense: as programs are proofs and types are theories, type-classes are proof strategies.

signs of weakness

Fwiw, I agree with part of this. I see macros as a sign of a weak language. My reasoning to get there is kind of different, though.

To me, macros are among the simplest of a large class of complicating features that are introduced to augment the power of a core language, but result in an impedance mismatch between the augmentation and the core. I'd expect the impedance mismatch to limit the abstractive radius (per the smoothness conjecture); the way to avoid that limitation is to avoid that augmentation strategy in favor of tinkering directly with the core language. However, I'm inclined to count elaborate static type systems as another example of the non-smooth augmentation strategy.

A point I'm working on developing is that programming languages are still written in text because text is closer to the human capacity for language, which is immensely powerful because it's rooted in our sapience, which —when allowed to operate in its own element— can run rings around formal systems (which are limited by Gödel's Theorems; yes, this is subtle stuff). Programming, and in particular abstraction in programming, are limited in comparison to mathematics because while mathematicians engage in dialog with other sapient mathematicians, programmers try to engage in dialog with the non-sapient computer, which does not and cannot work at a sapient level. To achieve the sort of unfettered abstractive power enjoyed by mathematicians, a programming language should minimize/simplify dialog with the machine, which (on consideration) is exactly what classic interpreted Lisp does: simple evaluator, simple data structure, absence of complicated types, bignums (no fussing with limits of complicated numeric representations), even garbage collection, all serve to cut down on wrangling with the computer, leaving the programmer free to concentrate on the abstract content.

I'm not a big fan of the Curry-Howard correspondence. I see it as a red herring. Formal reasoning is ultimately incapable of providing high-level insight; that's the take-away lesson of Gödel's Theorems; so the more tightly we tether our programming languages to formal reasoning, the more abstractively limited they'll be. Whereas computation itself is open to the full range of insights achievable by sapience, a take-away lesson from the Church-Turing thesis.

Formal reasoning Vs approximate mathematics.

A point I'm working on developing is that programming languages are still written in text because text is closer to the human capacity for language

I agree with this point, and I see it as part of the reason that visual programming has never really taken off.

I disagree about mathematics; I see the "woolyness" of mathematics as a big problem. Mathematicians assume so much implicit knowledge it is almost impossible to pick up a paper from an unfamiliar branch of mathematics and understand it. It would only take a new dark age of a generation or two to eradicate much modern mathematical knowledge. Ironically, earlier works (Euclid etc.) are more accessible despite being very old. The problems we have formalising mathematics probably stem from there being multiple overlapping systems, and mathematicians make category errors slipping from one system to another without realising it, because humans aren't good at the details.

As you can guess I disagree with your assessment of formal reasoning. I think formal reasoning tells us a lot about the human mind, and the kind of mistakes human mathematicians make due to "cognitive illusions". I think formal verification of proofs is necessary and that formal reasoning is mathematics, with everything else sort of being an approximation to this.


Alas, my most pertinent thoughts on this, I'm realizing, are contained in unposted drafts that I'm still trying to complete and get out the door onto my blog. Though it's evidently not irrelevant here, I wouldn't try to burden LtU with the high volume of text and relatively low density of pertinence. Hopefully I'll get more of it out the door and remember to note it here.

A brief clarification might be useful. I'm not talking about "woolyness"; it's more to do with top-down versus bottom-up view of structures. Here's a way to think of it. You say

formal reasoning tells us a lot about the human mind, and the kind of mistakes human mathematicians make

I don't disagree with that (as far as it goes). But I would add that Gödel's Theorems can give us insight into formalism, and the kind of mistakes formal reasoning makes.

Godel isn't just about formal systems

If you have a Turing machine that outputs only true sentences, then there will be true sentences it doesn't output, by a variant of Godel. Unless you're expressing a belief that the Church-Turing hypothesis doesn't apply to "sapient" beings.

depends upon what the meaning of the word 'is' is

From context, it sounds as if you're using the term "formal system" more narrowly than I meant it here. For this purpose I would allow a Turing machine as a formal system.

If you have a Turing machine that outputs only true sentences, then there will be true sentences it doesn't output, by a variant of Godel.

That is the particular variant of Gödel I explored on my blog a while back.

We're all formal systems

If a Turing machine is a formal system, then why aren't we?

Godel applies or you are wrong

The way I read it, the only way for Godel not to apply is if your informal reasoning is so sloppy that it's simply incorrect. Sapient beings can be illogical, irrational, and incorrect; none of these things says anything about mathematics. This is one of the things I think Einstein was implying when he said "As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality," where a certain mathematical law is like a formal system.


By trying to say just a little, it seems, I'm leading myself to present incoherently fragmented bits of my currently-most-actively-developing draft blog-post. With limited resources, presumably it'd be a better investment to try to complete that draft than fail to get my point across here. (And yet, it's hard to sit back and watch while folks misunderstand where I'm going with this...)

The absence of exceptions to Gödel is what makes it a worthy challenge. Sliding past something that has exceptions isn't nearly as interesting. I've been puzzling over what Gödel implies about the relationship between formal reasoning and truth since I first realized what Gödel was (technically) saying; only within the past week or two have I come to appreciate that the key to this situation is not to let the other team dictate your strategic analysis. So, don't expect formal reasoning to define the realm of alternatives to formal reasoning.


Fwiw, I've got two drafts in progress, and posted earlier this year on the information-processing difference between sapience and non-sapience. A draft touching on, amongst other things, why programming languages have poorer abstraction than mathematics appears to be well short of ready to go. A draft centrally about sapience and the limits of formal reasoning looks very nearly ready to go. My post earlier this year was Sapience and non-sapience.

Syntax extension is to allow custom notations

It's not really about meta-programming. I think it's widely accepted that any time you can use a macro or a function, you should use a function. I agree that when meta-programming facilities are lacking, people might misuse syntax extension facilities to fill that void, but I don't think meta-programming facilities will eliminate the need for syntax extension.

Syntax extension is bad, okay.

I think syntax extension is bad, because it makes it hard to reason about code without knowing the current definition scope. I think a good language should have a small lexicon of well known syntax. This can include overloading, but I want symbols to have consistent meanings across contexts, so '+' can be overloadable but always means some kind of addition.

So I see the use of meta-programming as procedural construction of boilerplate code. Effectively it's manipulation of the abstract syntax tree.

Generics are like macros with a type-system, so I guess my problem is with macros that treat code as a text cut and paste exercise, not with the manipulation of code to build 'code generators'.


I don't like macros, i think they only exist because the underlying language has weak support for generic programming.


A language with a more advanced type system wouldn't require macros.


Folks who like types dislike macros, and folks (well, I'm a sample of one, anyway) who dislike types also dislike macros. Yet macros are historically ubiquitous. Interesting.


John -

Macros are from an age when every byte and every clock cycle was precious.

Serious question: Now that compilers can have arbitrarily complex and inefficient approaches to providing the same capabilities, why would we hold on to macros? Especially since moving the "macro" functionality later in the compiler cycle provides a dramatically richer set of information that can be used to reduce errors, increase readability, provide better feedback to the developer, etc.

Fiddler on the roof

Macros are a legacy feature, of course. There's more to it than that, though. For one thing, programmers can —or think they can— conceptualize what macros are doing. It's simple transcription, or, again, it seems to be.

Hygienic macros are an attempt to maintain the illusion of that simplicity while actually doing something insanely mind-bending behind the scenes, a gambit that in my experience never works, but people keep trying. The superficial pretense of simplicity is always flawed in ways that seem "small" to the feature implementor but, for the feature user (in this case, the programmer), make the difference between true smoothness, a la the smoothness conjecture, and the miserable frustration caused by things that look like they ought to be smooth and then ball up in a snarl under real-world tensions.

The disconnectedness of macros from whatever runtime model a given programming language uses, while it creates problems per the smoothness conjecture, also creates at least a seeming of generality, of being a familiar device applicable to all languages. That seeming advantage, however illusory, isn't something that more modern alternatives can readily compete with because all of them are more integrated with the programming language in which they occur — and while that's a huge advantage for smoothness, it guarantees that no one such alternative will apply as impartially across disparate language models as macros do.

In assembling a broad panorama of Lisp history for my dissertation, I found (btw) the interplay between macros and fexprs rather fascinating. Fexprs were naturally suggested by the structure of the very-early Lisp interpreter; but that interpreter, owing to lack of prior experience with such a creature, muffed the handling of environments and so had dynamic scope. Fexprs, it turns out, only come into their own when well-merged with static scope. So macros were introduced, very early on, as an alternative to fexprs since fexprs weren't working out right.

Static scope was brought in as a non-default device but thereby came across as a complication, and as Lisp broke up into dialects in the mid-1960s, emphasis on compilation squeezed out the static-scoping device. Static scope started coming in as the default with Scheme in the mid-1970s, but that took a while to catch on; meanwhile fexprs continued to fizzle in dynamically scoped Lisp and were finally axed in 1980, just before they might (conceivably) have begun to thrive in a new generation of statically scoped Lisp.

A massive intellectual investment was then made, during the 1980s, in finding a way to containerize the essential kludginess of macros, i.e., develop hygienic macros. And then, just to put the icing on the cake, in 1998 Mitch Wand published a paper called "The Theory of Fexprs is Trivial". Frankly, Wand's paper is about reflection and has nothing really to do with fexprs as they exist in any Lisp dialect; but by this time the community had a massive emotional investment in macros, and fexprs were a popular target for ridicule (because that's what you do with an old paradigm in order to purge it from the community to give the new paradigm a clear field).

So now, to this day, some people take it as "proven" that the theory of fexprs is trivial; and that misapprehension, which makes the momentum of macros much more difficult to overcome, was itself formed by repeated blows from the macro legacy over nearly forty years.

macros still

We still need macros and the like because type systems and language cores simply aren't expressive enough to allow the concision required for visual verification of correctness. A trivial example is a table layout representing a repeated application of some template expression, the columns of each row filling in the holes in the template.
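The table-of-rows idea can be sketched in syntax-rules (macro and row names invented for illustration): each row fills the holes of one template, and the ellipsis stamps out a definition per row.

```scheme
;; A hedged sketch (names invented): each (name op) row fills the holes
;; in the template (define (name a b) (op a b)); the ellipsis repeats
;; the template once per row.
(define-syntax define-ops
  (syntax-rules ()
    ((_ (name op) ...)
     (begin (define (name a b) (op a b)) ...))))

;; The "table": one row per derived operation.
(define-ops
  (plus  +)
  (minus -)
  (times *))

(display (plus 2 3))  (newline) ; prints 5
(display (times 4 5)) (newline) ; prints 20
```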

In C you need this more than in an FPL with HOFs, which can reduce the boilerplate in some circumstances. On the other hand, in Felix the syntax extension to support EBNF for regular definitions is clearly a major advance over the two other unreadable forms: combinators (stupid things always come in prefix which is unreadable) or string regexps (escapes and the like destroying the readability of all but the simplest regular expression).

As a demonstration, the regexps used in ALL languages except Felix for the Shootout were wrong. ALL of them. Except Felix. They all copied the PCRE syntax used in the C example, which was wrong. The test was simple: regular expressions, even for simple cases, are unreadable by humans. Combinator forms are worse because they're even more verbose. Perl kills it because its substitution technology simulates regular definitions. But Felix kills Perl because it has a real EBNF grammar which reduces to combinators which reduce to library calls (which actually rebuild regexps to submit to Google RE2!).

The point of syntax extensions is to make the syntax comprehensible to humans with experience in a particular domain, and the major part of that is to remove boilerplate. Look at Jane St Ocaml libraries to see the extensive use of ppx syntax transformers.

There is a language which does this best, in which the "macros" are more or less seamlessly integrated with the rest of the language. And of course .. that language is C++.

The simple fact here is that so called "advanced" languages like Haskell and Ocaml are in fact extremely primitive. They cannot do the most basic polyadic operations. Haskell, Ocaml, and Felix ALL have generic maps, Haskell by cheating in the compiler, Ocaml with a ppx, and Felix also by cheating in the compiler. Yet the way to define these generics is fairly well known. In fact every programmer can routinely define a map for any inductive datatype.

Simple direct expression of the algorithm is better.

This can be done with generics and typeclasses, but it's a bit ugly in current languages; see the HList paper for the database example. A language that was designed to allow this kind of generic manipulation would be able to do the examples you propose without macros.

Syntax extensions are bad because you cannot read them, and you cannot see the code that will actually be run.

Generic abstractions are better because we can always see the actual algorithm expressed directly and simply, and further, it can be applied generically meaning you only need (for example) one definition of quicksort for any application.
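For instance, a single plain definition of quicksort can be made generic by passing the ordering in as an argument, with no macros involved (a sketch; `keep` is a small filter helper defined here to keep the example self-contained):

```scheme
;; A sketch of one generic quicksort: the ordering lt? is an ordinary
;; argument, so the same definition sorts numbers, strings, etc.
(define (keep pred xs)                ; tiny filter helper
  (cond ((null? xs) '())
        ((pred (car xs)) (cons (car xs) (keep pred (cdr xs))))
        (else (keep pred (cdr xs)))))

(define (quicksort lt? xs)
  (if (null? xs)
      '()
      (let ((pivot (car xs)) (rest (cdr xs)))
        (append
         (quicksort lt? (keep (lambda (x) (lt? x pivot)) rest))
         (list pivot)
         (quicksort lt? (keep (lambda (x) (not (lt? x pivot))) rest))))))

(display (quicksort < '(3 1 4 1 5 9 2 6))) (newline) ; prints (1 1 2 3 4 5 6 9)
(display (quicksort string<? '("pear" "fig" "apple"))) (newline)
```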


A basic difficulty in this sort of situation is that human readers understand symbols in an entirely different way than the computer does. Human readers are sapient, primarily perceiving a big picture to which the computer is inherently oblivious as it just follows an algorithm. Seems to me that one should be especially wary of any situation where computer processing of symbols gets hard to explain, such as an elaborate discussion of hygiene, because it suggests a large discrepancy between human and computer processing of symbols. (I'd like to think Kernel avoids this through its use of first-class environments to let the human programmer think explicitly about how the computer processes symbols.)

Working with Kernel, it emerged that unintentional hygiene errors usually result from quotation. The general principle — keeping in mind that quotation is easy to implement using operatives (Kernel-style fexprs) — seems to be that an operative shouldn't obfuscate the line between interpreted and uninterpreted syntax; though I never did precisely formulate that principle. Abstractly, that way of getting in trouble with Kernel seems much the same thing as what these macros do with the keyword list.

A curious variant strategy, that I unearthed in writing my dissertation, is "single-phase macros"; which I mention here because they offer yet another angle on the phenomenon of quoted symbols causing bad hygiene. (This is the first time I've found single-phase macros relevant to something outside the dissertation; though I thought when first noticing them that, even though I officially "don't like" macros, single-phase macros could be fun to play around with. :-)  Single-phase macros use a

constructor of single-phase macros called $macro, using a similar call syntax to $lambda. It takes three operands: a formal parameter tree, a list of symbols we'll call meta-names, and a template expression. When the macro is called, a local environment is constructed by extending the static environment of the macro (where $macro was called). Symbols in the formal parameter tree are locally bound to the corresponding parts of the operand list of the call to the macro; and meta-names are bound to unique symbols that are newly generated for each call to the macro [i.e., gensyms]. A new expression is constructed by replacing each symbol in the template with its value in the local environment; and finally, this new expression is evaluated in the dynamic environment of the call to the macro, producing the result of the call.

Here is a hygienic single-phase version of [short-circuit or]:

($define! $or?
   ($macro (x y) (temp)
      ($let ((temp x))
         ($if temp temp y))))

Symbols $let and $if can't be captured by the dynamic environment of the macro call because they are locally looked up and replaced during macro expansion[...]. The $let in the expanded expression can’t capture free variables in the operands x and y because its bound variable temp is unique to the particular call to $or?.

Single-phase macros appear remarkably stable. Unlike Kernel operatives, single-phase macros can't implement quotation, so it would have to be provided as a separate primitive in the language — and afaics, single-phase macros are only capable of violating hygiene if combined with some primitive that enables quotation.

Clojure’s syntax-quote

Clojure’s syntax-quote provides two features, one of which is similar to half of your “single-phase macros” design. That is, within a syntax-quote, symbols ending in # are replaced with gensyms. Matching symbols are unified across the full extent of the quote:

`(let [temp# x] (if temp# temp# y))
[temp__2__auto__ cljs.user/x]

Building this into quote, instead of into a macro-definition form, decoupled the templates, which is critical for writing larger macros that compose fragments of code, which single-phase macros don't look like they can do. Of course, the tradeoff is that you now need unquote to get at external symbols, such as the macro parameters.

My example highlights another feature of syntax-quote: auto-namespacing of unadorned symbols. Here, if x or y were defined, they would have been resolved in their respective namespaces, which lets you refer to external methods freely without quoting or unquoting in the common case, such as cljs.core/let, which is a macro that provides destructuring over the primitive let* special form.

composing fragments of code


I'm not sure what you mean by "critical for writing larger macros that compose fragments of code". Composition is not a problem, because single-phase macros are, well, single-phase. That is, there is no distinction between compile-time and run-time bindings, no distinction between compile-time environment and run-time environment. $macro is itself a first-class object, and so are $let, $if, and $or?. When $or? is called, it looks up $let and $if in its static environment, thus retrieving the first-class objects $let and $if and embedding them in the form which is then evaluated in the dynamic environment where $or? is called; and because $let and $if are not symbols, they self-evaluate with no danger of the dynamic environment capturing any symbol from the body of $or?. All of those symbols are already gone before the dynamic environment is consulted. If another macro is defined, with the definition of $or? within its static scope, it can call $or? without difficulty; it's even perfectly possible for a single-phase macro to recurse, just as an ordinary $lambda-based procedure could.

3D syntax

OK, I think I get you now.

Most simple macros have exactly one expression in their body: a code template. So much so that it's not uncommon for early Clojure programmers to assume that quoting is a construct only available in macros, and that the quoting form is part of the syntax of the macro-defining form! However, as macros grow, they often turn into a bunch of functions that use code templates internally, with a small macro entry point that calls the root template.
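To make that shape concrete, here's a hedged Clojure sketch (all names are invented for the example) of a macro that has grown into helper functions composing code templates, with a thin macro entry point:

```clojure
;; helper: returns a code fragment that checks one binding for nil
(defn emit-check [sym]
  `(when (nil? ~sym)
     (throw (ex-info "nil binding" {:name '~sym}))))

;; helper: composes the checks into the root template
(defn emit-body [bindings body]
  `(let [~@bindings]
     ~@(map emit-check (take-nth 2 bindings))
     ~@body))

;; thin macro entry point that calls the root template
(defmacro checked-let [bindings & body]
  (emit-body bindings body))
```

Each helper is an ordinary function returning data, so the fragments compose freely; only checked-let itself is a macro.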

My concern was that your macro defining form does not allow for code between the binders and the template. However, I had forgotten the broader context of fexprs, in which case I guess I can think of your macro form as basically just a first-class syntax template and syntax-quote as an implicit invocation of such a template:

($define! $quote-with-temps
  ($vau (temps form) env
    (eval (list (list '$macro '() temps form))
          env)))

;; some x and y available here
($quote-with-temps (temp)
  ($let ((temp x)) ($if temp temp y)))

It's just a small recursive walk beyond that to remove the explicit temps list and discover it from the symbols ending in #.
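That walk is small indeed; here is a rough Clojure sketch (the function name is invented) of discovering the #-suffixed temporaries in a form:

```clojure
;; collect every symbol ending in # from a possibly nested form
(defn auto-temps [form]
  (->> (tree-seq coll? seq form)
       (filter symbol?)
       (filter #(.endsWith (name %) "#"))
       distinct))

;; (auto-temps '($let ((temp# x)) ($if temp# temp# y)))
;; => (temp#)
```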

Have I understood correctly?

it looks up $let and $if in its static environment, thus retrieving the first-class objects $let and $if and embedding them in the form [...] those symbols are already gone before the dynamic environment is consulted

I believe the Racket folks call this 3D-Syntax. Clojure supports unquoting (~) first-class objects into syntax-quote (`) too:

user=> (inc 5)
6
user=> (eval (list 'inc 5))
6
user=> (eval (list `~inc 5))
6
user=> inc
#object[clojure.core$inc 0x41709512 "clojure.core$inc@41709512"]
user=> 'inc
inc
user=> `inc
clojure.core/inc
user=> `~inc
#object[clojure.core$inc 0x41709512 "clojure.core$inc@41709512"]

Additionally, both Racket and Clojure will allow serializable objects to pass from macro output into compiled code. I believe Clojure simply requires that the objects be JVM-serializable.

I have two big problems with leaning on this technique:

  1. It weakens the syntactic-flavor of metaprogramming in a lisp
  2. It interacts badly with live programming

To the first point, I'd much rather work with `clojure.core/inc, which is a namespace qualified symbol, and 'x__41__auto__ or some other gensym'ed symbol, than with something like #object[java.lang.Object 0x7ac0e420 "java.lang.Object@7ac0e420"], which is just the memory address of some first-class object. Yes, the symbolic constructs introduce hygiene risks, but the objects approach hurts usability. Critical features such as pretty printing, copy/paste, macroexpand, etc all become more difficult to build and use, and potentially less useful if they exist at all.

To the second point, binding to an object directly creates a sort of "hyper-static environment" developer experience. If you support runtime code reloading, you don't know which macros/fexprs have captured pointers to specific instances of function definitions all across your program.

Clojure also has first-class "vars", which are both syntactic and callable. While this doesn't help with gensyms for locals, it helps a ton for global definitions, which are the bulk of your program:

user=> #'inc
#'clojure.core/inc
user=> `#'inc
(var clojure.core/inc)
user=> (var inc)
#'clojure.core/inc
user=> (eval `#'inc)
#'clojure.core/inc
user=> (#'inc 5)
6
user=> (eval (list #'inc 5))
6

The object that gets serialized into your compiled program is the Var object, which indirects through the global name. So if at runtime you redefine clojure.core/inc, then every call site is updated. If I had used the inc function object directly, stale versions would stick around.
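The difference is easy to demonstrate at a REPL (assuming a fresh user namespace):

```clojure
user=> (defn f [] :old)
#'user/f
user=> (def direct f)     ; captures the function object itself
#'user/direct
user=> (def indirect #'f) ; captures the var, which indirects by name
#'user/indirect
user=> (defn f [] :new)   ; redefine f at "runtime"
#'user/f
user=> (direct)
:old
user=> (indirect)
:new
```

The call through the var sees the redefinition; the captured function object does not.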

I'll avoid an unrelated rant on ways to recover live programming UX in hyper-static environments (primarily: checkpointing. See Forth's "marker").

unquoted symbols

I guess I can think of your macro form as basically just a first-class syntax template [...]

Have I understood correctly?

I'm unsure.

The tricky thing about single-phase macros is that, although they're evidently template-based, there are no quoted symbols involved. Consider

($define! $or?
   ($macro (x y) (temp)
      ($let ((temp x))
         ($if temp temp y))))

Here the template is

($let ((temp x))
   ($if temp temp y))

There are seven symbol instances here, and all seven are evaluated in a local environment. If this were being written using $vau (and supposing Kernel had syntactic sugar for quasiquotation, and a primitive gensym), it would be

($define! $or?
   ($vau (x y) e
      ($let ((temp (gensym)))
         (eval `(,$let ((,temp ,x))
                   (,$if ,temp ,temp ,y))
               e))))
Every single symbol in the template is unquoted. There is, in fact, no way with single-phase macros to not unquote all the symbols. Every single symbol in that template has to be either a parameter, or a meta-name, or bound in the static environment, because if a symbol in the template were locally unbound, that would be an undefined-symbol error.

It weakens the syntactic-flavor of metaprogramming in a lisp

I perceive you're interested in macros as a compile-time optimization strategy, for which one would certainly want a syntactic flavor. I can't personally relate to your objection, as I don't think of metaprogramming as all that syntactic.

Of course, single-phase macros share with Kernel a wholesale collapse of the compile-time/run-time distinction, so the primary motive for quasi-quotation is somewhat undermined.

[doh; forgot to call eval in the $vau-based version of $or?; fixed.]

I do get it

there are no quoted symbols involved

Yeah, I got that. That's why I mentioned 3D syntax. Maybe "$quote-with-temps" is a misleading name. The result is neither "quoted" nor are the temps _symbols_. So let's call it "$make-list-with-temp-vars". I get that ($make-list-with-temp-vars (x) (f x)) would return something like (list <proc:f> <var:x>) instead of '(f x).

I perceive you're interested in macros as a compile-time optimization strategy

You've perceived incorrectly. I'm interested in macros and their ilk as a means for expressivity. That said, while I used to, like you, be a believer in eliminating any phase distinction, I'm no longer so sure. Now I'm more interested in constructs that are phase-agnostic. That is, phase distinctions are useful, but all phases should be programmed using the same tools (e.g. the type and term languages are the same), and moving expressions between phases should preserve behavior as much as possible (i.e. don't do what Java method overloading does in the presence of dynamic dispatch with subtyping).

I've come to this position because I care about modularity of reasoning across time (read-time, compile-time, boot-time, HTTP-request-time, etc., which is more fine-grained than compile/run). However, I also care about performance. More importantly, I care about being able to express performance-critical information about my program. Macros are a weak tool for that job, other than the fact that they can be made to have the same syntax as function calls, which gives a sort of syntactic compatibility as an escape hatch when you have to rewrite a macro as a function or a function as a macro.

I don't think of metaprogramming as all that syntactic

This is surprising to me, but maybe that's because I care a lot about observability and debuggability. If you can't pretty-print it, you can't debug it. Even if you have non-syntactic objects, you need some kind of unparse operation so that you can actually understand what you're looking at.

while I used to, like you,

while I used to, like you, be a believer in eliminating any phase distinction, I'm no longer so sure.

For me, also, things got subtler than that some time back. Amongst interesting ideas I've wanted to explore (and not explored) is to use something like Kernel for the task of generating object code, effectively turning a Kernel-like interpreter into a compiler (hopefully, a profoundly eloquent compiler).

I care a lot about observability and debuggability. If you can't pretty-print it, you can't debug it.

I substantially agree with these things. Past experience suggests to me, though, that how things get displayed can make an astounding difference.

In my experiments with Kernel interpretation (which for various reasons were disrupted and never completed to become public), each first-class value had a history stamp and possibly a name. The history stamp had (iirc) two parts: a source-code location (specifying the source file and starting/ending line-and-column), and a [grammatical] aspect (indicating whether the object was actually read from the file or was the result of evaluation). The name was simply the first symbolic name, if any, to which the object was bound in any environment (the procedure for binding a symbol to a value would suggest the symbol to the value, which would adopt the suggestion if it was nameable and didn't already have a name). Diagnostic messages displayed that information when describing a first-class value.

I was hopeful that this might produce extremely lucid/effective diagnostics, and if it wasn't as successful as I'd hoped, of course I would have experimented further; but to find out would have required an accumulation of practical experience with such an interpreter, and the project got disrupted before that could happen.