Macro systems

When I first learned Scheme, I had heard about how powerful the macro system was, and that the full power of the language was available during macro expansion.

What I find interesting is an assumption I had before I learned how the macro system actually works: I had assumed that one could call functions during macro expansion that were defined in the source program.

I am trying to find out whether such macro systems actually exist, where the AST is evaluated as needed while it is being transformed. One simple approach I can imagine is that when a macro uses a function defined in the source, that function is evaluated/compiled on demand. If it contains macros that haven't yet been expanded into something meaningful, the compiler simply fails.

I think this kind of system, where macros are intertwined with a program interpreter, may be hard to reason about formally, but it could be a very expressive tool.
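To make the idea concrete, here is a toy sketch in Scheme. Everything in it is made up for illustration (the list representation of programs, the ct-apply form); it only demonstrates "evaluate source functions on demand during transformation", not any real system:

#!r6rs
(import (rnrs) (rnrs eval))

;; Toy expander: walks top-level forms in order.  Each function
;; definition is evaluated immediately into a compile-time environment;
;; a (ct-apply f arg ...) form is replaced, during "expansion", by the
;; result of calling the already-defined f on its literal arguments.
;; (To keep the sketch short, the functions can't call each other.)
(define (expand-program forms)
  (let loop ((forms forms) (env '()) (out '()))
    (if (null? forms)
        (reverse out)
        (let ((form (car forms)))
          (cond
            ;; (define (name arg ...) body ...): evaluate it now
            ((and (pair? form) (eq? (car form) 'define))
             (let ((name (car (cadr form)))
                   (proc (eval `(lambda ,(cdr (cadr form)) ,@(cddr form))
                               (environment '(rnrs)))))
               (loop (cdr forms)
                     (cons (cons name proc) env)
                     (cons form out))))
            ;; (ct-apply f arg ...): fold the call during expansion,
            ;; failing if f has not been defined yet
            ((and (pair? form) (eq? (car form) 'ct-apply))
             (let ((entry (assq (cadr form) env)))
               (unless entry
                 (error 'expand-program "not yet defined" (cadr form)))
               (loop (cdr forms) env
                     (cons (apply (cdr entry) (cddr form)) out))))
            (else
             (loop (cdr forms) env (cons form out))))))))

;; (expand-program '((define (double x) (* 2 x)) (ct-apply double 21)))
;; => ((define (double x) (* 2 x)) 42)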

Has anyone encountered such a macro system?

I've never worked with a

I've never worked with a language that interleaves syntax expansion and execution in a manner where each affects the other. I'd imagine it rather difficult to analyze or optimize a system where the mechanism to read the source and know its abstract structure is to execute it with potential side-effects.

That said, something like that was mentioned recently... I think TeX was named.

I think it's a bit like scattering an 'eval(...)' statement liberally throughout your code. Expressiveness isn't the right word for the properties of such a system. Flexible? Yes, it is that. Dangerous, too.

Agreed: flexible and dangerous

Hi David, thanks for capturing what I was trying to say a bit more elegantly. I was concerned that I might not have been able to make myself understood.

There are two separate approaches that I have been thinking about:
1) interleaves syntax expansion and execution, during run-time
2) interleaves syntax expansion and execution, during compile-time.

I believe (but could be wrong) some examples of number 1, besides TeX, include: XMF, Katahdin, XLR.

I am more interested in number 2, and don't know any examples. I don't know of any language where code can be executed at compile-time to modify the program itself.

In the implementation of such a system I actually imagine a lot of "parse(...)" statements which take an AST as input, and produce an AST as output.

In the system 2 I describe, where the interleaved expansion/execution occurs during compile-time, I can imagine two separate approaches:
1) the macro system mutates a single shared AST during macro expansion
2) the macro system takes the program AST as input and produces a new AST.

The second approach, again, is somewhat safer and conceptually easier to manage than the first. However, it is more limited in the black magic that can be performed.
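A quick sketch of the contrast, using a hypothetical record type for function nodes (none of this is from an actual implementation):

#!r6rs
(import (rnrs))

;; hypothetical AST node for this sketch
(define-record-type function
  (fields name (mutable body)))

(define trace-stmt '(println name)) ; statement to be inserted

;; approach 1: mutate the single shared AST in place
(define (add-trace! node)
  (when (function? node)
    (function-body-set! node (cons trace-stmt (function-body node)))))

;; approach 2: produce a new AST, leaving the input untouched
(define (add-trace node)
  (if (function? node)
      (make-function (function-name node)
                     (cons trace-stmt (function-body node)))
      node))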

I am more interested in

I am more interested in number 2, and don't know any examples.

Because it is an impossible requirement. Once you need to execute a function defined in source that has only just been parsed, it is promoted to runtime. The execution model is then no different from running an interactive session, which is our everyday model of a system that compiles and executes incrementally and fills the symbol table on the fly. It's as simple as that.

Not as simple as that

There are four stages to my compiler:

  1. Parse-time: Source -> Abstract Syntax Tree
  2. Preprocess-time: Evaluation of the function "preprocess(ast : AST) : AST" in the source
  3. Compile-time: Compilation of the AST returned from the function "preprocess".
  4. Run-time: Execution of the compiled code starting at the function "main(args : List) : Int"

So while the preprocess stage inherently has its own "runtime", it is not really the actual runtime stage of the language.
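A minimal runnable sketch of these four stages, with stub bodies (every name here is hypothetical, not the actual compiler):

#!r6rs
(import (rnrs))

(define (parse source)        ; 1. Parse-time: source -> AST
  source)                     ;    stub: treat the source as its own AST
(define (preprocess ast)      ; 2. Preprocess-time: AST -> AST
  ast)                        ;    stub: identity transform
(define (compile-ast ast)     ; 3. Compile-time: AST -> executable
  (lambda (args) 0))          ;    stub: a "compiled" main
(define (build-and-run source args)
  ((compile-ast (preprocess (parse source))) args)) ; 4. Run-time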

Well, it is the runtime of

Well, it is the runtime of the language once you presume that a function executed in step 2 is defined in source that has yet to be processed by the compilation chain you've depicted. The cycle is then unavoidable, and the simplest means of achieving the goal is to unroll it, i.e. iterate through steps 1-4 an indefinite number of times. This doesn't change even if there is a feedback loop in the parser, as in Katahdin, or if the preprocessor code gets modified, etc.

When I give someone an

When I give someone an executable, and they run it, I consider what happens from that point on to be run-time. Everything that happens in the compiler is something else (e.g. parse-time, expansion-time, compile-time, whatever-else-time).

The approach I use in my compiler during pre-processing is to walk the AST starting at the "preprocess" entry point, interpreting the nodes as it goes, performing no semantic verification, type-checking, or optimization ahead of time.

The semantics of the language are slightly different between what is available at preprocess-time and at run-time. The compile pass does semantic verification, optimization, and type-checking.

Impossible?

I am more interested in number 2, and don't know any examples.

Because it is an impossible requirement.

You say impossible, I say Metalua. Converge can also sort-of do this, but Metalua is currently more flexible in that particular regard - although it can be a bit difficult to tame the flexibility.

Compile-time staging

I don't think that Metalua is a counterexample, since if I understand what it is up to, it is doing something fairly analogous to CL & Scheme, where the compile-time is staged.

I've had a hard time getting my head around the docs for Converge, but if I understand correctly, it is indeed an example of 2. The kind of embedding of syntax in this language, and indeed in Katahdin, is much better behaved than in TeX, since it is possible to reliably determine a top level, which consists of everything but the regions of embedded code, and which determines their syntax.

If we imagine the general class of possible such languages with recursively embedded code, then these embedded regions may have further embedded regions, and if they may be dynamically created at run/compile-time, then there is no guarantee that this hierarchy of hereditarily embedded code is finite. Similarly, the determination of the grammar might not be guaranteed to terminate.

These opportunities for nontermination are obviously undesirable, but expanding macros in Lisp at compile time carries this risk already, and there is a sense in which the whole process is well behaved: the interleaving of parsing and compile-time computation to determine the grammar is recursive, with well-defined boundaries between the two. I'll change my claim about static syntax to say there should be a well-behaved separation between syntax and evaluation of code.

Chris clarified things

Chris clarified things already. "Runtime" means for him that the whole chain of code transformations has been closed once and for all and the program finally gets started, whereas I consider runtime as, well, the time when the program is executed (even piece-wise), no matter whether this happens at compile time or later.

XMF stages parsing & execution

According to Tony Clark, who should know, XMF has an LL(k) grammar.

It is possible in Scheme

R5RS does not allow "proper" metaprogramming, but most practical implementations actually do - see (define-macro ...) in Bigloo, for example.

There is no need for compiling/evaluating on demand, since Scheme (and most other Lisp flavours) can be compiled incrementally, expression by expression.

And, yes, it is quite easy to define a macro which will eval its argument in place.
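For instance, here is a minimal sketch using the non-standard define-macro of implementations like Bigloo (the macro name ct-eval is made up):

;; ct-eval evaluates its argument during macro expansion and splices
;; the resulting value back into the program as a constant
(define-macro (ct-eval expr)
  `(quote ,(eval expr)))

;; (ct-eval (+ 1 2 3)) expands to (quote 6) before compilation proceeds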

This project is built around such a macro system, in a good old Common Lisp-ish incremental style with a little twist.

Of course, Common Lisp itself, Arc, and many other Lisps feature similar macro metaprogramming.

The problem that I run into

The problem that I run into with "define-macro" is that if I want to do hard-core meta-programming, I end up rewriting portions of the SRFI-1 list library as macros. Of course, I am not an expert in Scheme, so perhaps there are elegant solutions to the problem.

To give a more concrete example, I once found myself wanting a "fold" function at macro-time. I already had a "fold" implementation in my application, but I found myself having to re-implement "fold" in terms of "define-macro". When you say it is easy to define a macro that will eval its argument in place, what could a Scheme/Lisp programmer do to avoid having multiple implementations of "fold" in their code (i.e. one for macros and one for non-macros)?

Clearly in the case of fold, duplicating the code is not a big deal, but the problem grows as one does more and more sophisticated meta-programming and requires more and more expressiveness.

Alternatives

I'm not clear on which macro systems you're referring to. In Scheme implementations, "define-macro" is usually an informal copy of the Lisp macro system, which certainly supports full use of Scheme at macro expansion time, and you can use the same functions without duplicating them.

The standard Scheme macro system in R5RS, syntax-rules, is a more restrictive macro system, which uses pattern matching and rewriting and doesn't let you run full Scheme (unless you take advantage of its Turing completeness and compile your Scheme code to syntax-rules.)
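For example, the classic swap! macro stays within those limits; the expansion is pure pattern matching and template filling, with no computation performed at expansion time:

;; swap! exchanges the values of two variables; the macro itself
;; cannot compute anything, it only rewrites the call site
(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))
       (set! a b)
       (set! b tmp)))))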

However, R6RS specifies a version of the syntax-case macro system, which allows you to import and use the same code (full Scheme) at both macro expansion time and at runtime. In fact, it's quite sophisticated about handling the phase issues that this involves, in conjunction with its library/module system. R6RS certainly lets you use something like SRFI-1 at both expansion time and runtime.

The only real "restriction" is that macros ultimately need to operate on syntax, not ordinary values, so while you can do things like use a standard fold over lists of syntactic values, you can't simply use, say, arithmetic operators on syntactic values without converting them to ordinary Scheme values first. This means that macro code to sum a literal list at expansion time might look like this:

(fold-left + 0 (map syntax->datum (syntax (arg ...))))

...where fold-left, map, and + are standard R6RS procedures (not macro-specific), but the use of syntax->datum is needed to convert syntactic values to ordinary Scheme values. However, this sort of conversion can be abstracted away in functions and macros themselves, so it's possible, e.g., to automatically lift ordinary Scheme functions to operate on syntactic values.
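Put together, a complete (made-up) macro along those lines might look like this; the fold runs while the program is being expanded:

#!r6rs
(import (rnrs))

;; sum-literals adds up its literal arguments at expansion time,
;; using the ordinary fold-left/map/+ procedures on the syntax
(define-syntax sum-literals
  (lambda (stx)
    (syntax-case stx ()
      ((k arg ...)
       (datum->syntax (syntax k)
         (fold-left + 0 (map syntax->datum (syntax (arg ...)))))))))

(display (sum-literals 1 2 3)) ; prints 6, computed during expansion
(newline)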

What I wanted!

"The standard Scheme macro system in R5RS, syntax-rules, is a more restrictive macro system, which uses pattern matching and rewriting and doesn't let you run full Scheme (unless you take advantage of its Turing completeness and compile your Scheme code to syntax-rules.)"

This is exactly what I was looking for, thanks!

Not a typical macro system

Not a typical macro system as in manipulating an AST, but if you are interested in similar effects (like modifying the parent environment), then Tcl's upvar and uplevel might be of interest. (They sidestep the compile-time/run-time division.)

Forth has had this since its beginnings

I realize that most readers of LtU are in a "higher plane", but Forth implements both approaches mentioned by cdiggins.

1) interleaves syntax expansion and execution, during run-time
Forth starts in immediate mode, where expressions are evaluated as they are parsed. But when a defining word is evaluated (most importantly ":"), the parser switches to compiling mode, in which most words have their run-time semantics compiled; immediate words, however, are evaluated as they are encountered. Most of what normal folk would think of as syntax (e.g., IF ... THEN, DO ... LOOP) is actually just immediate words that affect the state of the compiler. New "syntax" can be created on the fly. For example, many older implementations didn't have a built-in CASE ... ENDCASE statement, but a few lines of code afforded it.

2) interleaves syntax expansion and execution, during compile-time.
Forth has an EVALUATE word that evaluates a string (at run-time) using the same mechanism as in 1).

Great point

I overlooked Forth. Thanks for bringing it up!

Look at

Matthew Flatt's Composable and Compilable Macros [oops, fixed link]

More on syntax-case

I'll second that reference, and add some detail about the R6RS syntax-case macro system, which exploits Flatt's paper.

The following quotes from the topic post are true statements about the syntax-case macro system:

  • "the full power of the language [is available in] the macro system."
  • "one [can] call functions during macro expansion that were defined in the source program."
  • "the AST is evaluated as needed while it is being transformed."
  • "interleaves syntax expansion and execution, during [expansion time]"
  • "the macros system takes the program AST as input and produces a new AST."

The same is also true of typical implementations of the Lisp-derived define-macro system, but syntax-case has the benefit of being hygienic by default, so you don't need to worry about things like unwanted variable capture and shadowing by macros.

The general model of both systems is that macros are functions which take an AST as input and produce an AST as output, and can use arbitrary Scheme code to transform the AST.
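(A tiny illustration of that model, with a made-up macro name:)

;; reverse-args is a function from AST to AST: it rewrites the syntax
;; (reverse-args op a b c) into the syntax (op c b a)
(define-syntax reverse-args
  (lambda (ast)
    (syntax-case ast ()
      ((_ op arg ...)
       (with-syntax (((rarg ...) (reverse (syntax (arg ...)))))
         (syntax (op rarg ...)))))))

;; (reverse-args - 1 10) expands to (- 10 1), which evaluates to 9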

One of the most obvious differences between the two systems is that define-macro represents the AST using ordinary Lisp lists, whereas syntax-case makes use of an abstract syntax data type, which allows it to more easily manage the hygiene issues.

Another big advantage of syntax-case is that, in conjunction with the library system, it handles phase issues robustly - macros that depend on ordinary code that depends on macros, etc. These phase issues are addressed in Flatt's paper. In practice, those kinds of issues can quickly become unmanageable with define-macro.

Thank you for lending some

Thank you for lending some clarity. I had trouble relating Flatt's paper to the problem at hand.

REPLs, staging & compilation

The simple fact that, in a read-eval-print loop, one input is run before the next is parsed does not show that the language fails to separate the parse and execute phases of processing the input. If each separate input, be it a short piece of code typed at a REPL prompt or a script passed to the interpreter, can be processed in separate stages, then the language supports staging. And if the language cannot support staging, then there is not generally going to be a useful sense in which it can be said to support compilation, though it may support some limited source transformations, such as partial evaluation or compilation of a useful subset of the language.

My understanding is that mainstream dialects of Forth do support staging.

That seems too strong

And if the language cannot support staging, then there is not generally going to be a useful sense in which that language can be said to support compilation...

Allowing a macro to invoke functions and reference variables that are bound in its lexical context isn't any different from allowing later initializing expressions to use values computed by earlier initializers.

And as you point out, it is possible to imagine a language design in which the compiler is obliged to perform compile-time evaluation of certain well-defined classes of expression. In fact, that class could be very rich. As long as the family of expressions is pure and is constrained to a causal dependency order that conforms to some well-defined ordering of lexical binding across compilation units, there is no semantically observable distinction between compile-time evaluation and initialization-time evaluation.

The causal/lexical ordering constraint, by the way, is why BitC has the somewhat peculiar constraints on initialization-time expression environments that it does. If we got it right (and I hope that we did), the compiler will be able to fully pre-evaluate all terminating initializers, no matter their complexity.

Sorry, am I missing

Sorry, am I missing something? Doesn't the eval-when form in Common Lisp provide this feature? It allows you to define functions which can be used in macros in the same file. You can also declare whether you want these functions to exist in the compiled file or not.

Agreed

You're right - there's been some confusion in this thread. Both Common Lisp's macros and Scheme's syntax-case seem to meet the stated requirements. It's possible that the original question was a reaction to Scheme's more restrictive syntax-rules system, or else that there's some other requirement that hasn't been made clear (at least to me).

Sample code

I don't know much about eval-when, but it sounds to me like there is still a need to add special declarations to functions to indicate in which stage/phase they are available (e.g. macro-time or run-time). I'm proposing something a bit different.

Here is some sample code:

module simple1
{
  function f(x, y) {
    writeln(x);
    writeln(y);
  }

  function main() {
    f(1, 2);
    f(3, 4);
  }

  function addTrace(x) {
    if (defined(x.element) && x.element == "func")
      // standard library function that inserts
      // a trace statement into a code block
      insertTrace(x.body, x.name);
    else
      foreach (var o in x)
        addTrace(o);
  }

  function preprocess() {
    // ast is a special global variable available only at compile-time
    addTrace(ast);
  }
}

When you run the program the output is:

main
f
1
2
f
3
4

What one should notice is that there are no special declarations to distinguish between compile-time and run-time semantics. At compile-time the code is interpreted starting at preprocess. Any function not encountered is simply ignored.

Is this possible using eval-when or syntax-case?

[Edit] Fixed the code to match what actually happens in my compiler. For those who want to see it in action, a Windows-only version of the compiler (with C++ source) can be found here.

Anything is possible at zombo.com^Wsyntax-case

it sounds to me like there is still a need to add special declarations to functions to indicate in which stage/phase they are available (e.g. macro-time or run-time).

Only one thing is really needed here: some sort of hook for the macro system to integrate with the host program. In your example, the 'preprocess' function serves this purpose, (apparently) acting as a whole-module macro transformer.

Nothing stops you from using a similar approach with other typical macro systems. Here's an example using syntax-case, with 'add-trace' and 'insert-trace' implemented as ordinary functions, with no special declarations:

#!r6rs
(import (rnrs))

;; general helper functions, to avoid non-standard dependencies

(define (ast->list ast)
  (syntax-case ast () ((_ exp ...) (syntax (exp ...)))))

(define (println x) (display x) (newline)) ; who says r6rs isn't minimal?

;; insert-trace implementation was missing from original example
(define (insert-trace ast)
  (syntax-case ast ()
    ((define (name arg ...) body ...)
     #'(define (name arg ...)
         (println 'name)
         body ...))
    ((other ...) (syntax (other ...)))))

;; the rest of the code corresponds to the original example

(define (add-trace ast)
  (cons #'begin (map insert-trace (ast->list ast))))

(define-syntax preprocess add-trace)

(preprocess

 (define (f x y)
   (println x)
   (println y))

 (define (main)
   (f 1 2)
   (f 3 4)))

(main)

I've tried this in Ikarus Scheme, which I used for this example because it's not fussy about phase issues (more on this below). It gives the same output as the original example.

The uses of 'syntax-case' in the code just perform pattern matching on ASTs, i.e. it's not phase-related - you could call the same functions at runtime if you had some reason to. The 'add-trace' function uses cons and map instead of pattern matching, just to illustrate that manipulation of syntax using standard functions is also possible.

There's only one line in this example that involves "special declarations" about macro-time vs. runtime, namely the definition (define-syntax preprocess add-trace). However, in the original example, 'preprocess' is similarly "special". Both versions of 'preprocess' serve the same purpose: to mediate between the requirements of macro expansion, and "ordinary" AST-transforming functions. This pattern could be abstracted out into a general macro for this style of preprocessing, but I've made it self-contained above.

The main difference between the approach used in this example and typical macro systems, is that the latter don't limit you to a single "hook" like 'preprocess'. To see why, consider how tracing would be implemented if you only wanted it on one of the two calls to 'f', e.g. using code such as (trace (f 1 2)) in the first line of 'main'. This is where having a "special declaration" for macro transformers starts to become important - without that, you'll pay some sort of price, with the details depending on what alternative mechanism you're using.
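(As a minimal sketch of that localized style, reusing the println helper from the example above, a rewrite-only transformer suffices:)

;; trace prints the operator name before evaluating the wrapped call;
;; pure rewriting, so plain syntax-rules is enough here
(define-syntax trace
  (syntax-rules ()
    ((_ (op arg ...))
     (begin (println 'op) (op arg ...)))))

;; e.g. (trace (f 1 2)) prints "f" and then calls (f 1 2)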

There's one other relevant issue that should be mentioned, relating to macro expansion phases. As an example, the above code won't run unmodified in PLT Scheme (v4.0.x, anyway), because PLT is stricter about phases. It will complain that 'add-trace' is undefined, because it's an ordinary function that's not available by default in the expansion-time environment. The way to make it available is to be explicit about dependencies between phases, and move add-trace, insert-trace etc. to a separate module. This module can then be imported for use at expansion time, or runtime, or both. There's still no textual code duplication, and this doesn't place any other limitations on the use of "ordinary" functions at expansion time. These issues are discussed in Flatt's paper.

As Ikarus demonstrates, it's not essential to be this strict about phasing, at least in this example. In simple cases, it is usually easier if you don't have to worry about phases. However, for complex multi-module systems, in which macros depend on functions which depend on macros and so on, across module boundaries, or where macros generate other macros, there are benefits to being explicit about phasing, even if only to make it easier for the programmer to reason about it all.

The papers about syntax-case, and many of the papers they reference, are worth studying. Readscheme.org is on extended vacation, but here's the archive.org cache of readscheme's macro page.

macros walk code with two left feet

When macros are used for codewalking they tend to step on other, independently developed macros. Codewalking macros are not, in general, composable.

While macros work well for surface syntax manipulations, languages that aim to be extensible should supply different facilities for semantics manipulation and program transformations and discourage codewalking by macros. Examples of such facilities include hyperspaces and aspect-oriented programming, program and local post-parse AST->AST transforms, support for annotations to drive these transforms, libraries that supply typechecks and optimizations, etc.

Useful when you have two left shoes

Hygienic macro systems make codewalking macros quite a bit safer - at least, certain classes of stepping on other things become less easy to do by accident. It also depends what you're doing - e.g. if you're designing a DSL, you may not care that much about composability with other macros. I think that the PLT Scheme universe of syntaxes may have a few codewalkerish systems, although I can't cite examples offhand.

I didn't intend to push that approach in general, though, but I also didn't want to just say "you don't want to do that". Instead, I wanted to make the point that, based on the evidence so far, the things cdiggins is asking about are not fundamentally different from what other macro systems already do, in terms of the way that transformers have to be integrated into programs.

I did try to plant a seed in the direction of more localized macro use, in the point about tracing individual function calls.

Thank you very much Anton

Thank you very much Anton for taking the time to describe to me the details of the macro system, and implementing a system that does precisely what I wanted in Scheme.

Aside

What's the matter with Readscheme? It's a great resource and I'm sad to see it go... Do they need new hosting, or is this really just a temporary thing?

Re: Aside

Jim Bender, who runs & hosts it, posted this explanation back in December, about a problem with an upgrade to high bandwidth service. It's a bit overdue to come back now, but I've been assuming that was just due to human bandwidth limitations.

Good news

Thanks!

Non-communication Side Effects

Based on responses above, it seems like cdiggins is looking for something much weaker (where macros are intertwined with a program interpreter) than I initially imagined.

Interleaving execution with parsing, in its stronger form, would mean that neither the syntax (e.g. the set of macros, syntax rules, EBNF, whatever) nor the AST can be determined entirely by examining static source files, due to communication dependencies. For example, one might grab a URL that has an EBNF form for which part of the page is to be processed, or one might explicitly feed a subgraph or string to a remote expert system for processing/optimization/annotation or even transformation into an AST.

If this is weakened to non-communicating effects, the system becomes a great deal safer, and improved uniformity can allow useful syntax highlighting and refactoring browsers.

I think communication dependencies aren't best applied in syntax support (better to be able to parse without the compiler firing off messages), but I do see a great deal of use for them elsewhere (e.g. requesting a schema from a database at compile-time in order to locally optimize and verify local SQL query definitions, or grabbing JPEG files in order to preprocess them into a database of TGAs).

Most of the systems mentioned above seem to achieve the lesser form of compile-time operations and resource resolution, though some might achieve both. E.g. Forth's immediate words achieve staged execution, but unless immediate words can define new words in response to external conditions, it doesn't really qualify as making the syntax itself dynamically interleaved with execution. I mentioned TeX above, but even it is likely weaker than what I imagine as possible.

There are many sweet spots between syntactic flexibility and semantic chaos.

And speaking of 'sweet spots', I believe Forth and Scheme have selected one that greatly reduces the role of syntax. They provide no allowance for syntactic ambiguities or context dependence, and thus simplify reasoning and processing, at a moderate cost in terseness that might have been achieved by allowing rarely realized ambiguities and supporting context dependence. This trade certainly hits one of those sweet spots, but I've seen many find the lack of flexibility somewhat frustrating at times. A fine example of context dependence reducing verbosity is type inference; verbosity can be reduced further if such implied relationships can be shifted into DSLs and task sub-languages.

In ETL, I'm reaching another

In ETL, I'm reaching another sweet spot.

An ETL grammar specifies a mapping from the source to an AST (in terms of objects, properties, and values). The grammar applies to the entire source, but it can be specified per source (using a document type declaration). The compiler works with the AST; there just needs to be a new compiler plugin that understands the new AST nodes, or the new syntax construct can sometimes be expressed in terms of old AST nodes (the tutorial in the documentation has several examples of such definitions).

So basic services like syntax highlighting can be provided even without implementing a macro processor. The current version of the framework can generate XML that is just a dump of parser events, and this XML document can be used to generate HTML like this one (the left side is the AST, the right side is the syntax-highlighted source; both are clickable, but scrolling is not yet implemented). I even managed to upgrade old grammars to the new syntax using a simple XSLT.

procedural macro paper

I recommend the D-Expressions: Lisp Power, Dylan Style paper by Jonathan Bachrach (MIT) and Keith Playford. It discusses two solutions (static compiler plugins, and compile-time loading with eval-when) for procedural macros in Dylan, a compiled language.