Fortifying Macros

Fortifying Macros. Ryan Culpepper, Matthias Felleisen, ICFP 2010.

Existing macro systems force programmers to make a choice between clarity of specification and robustness. If they choose clarity, they must forgo validating significant parts of the specification and thus produce low-quality language extensions. If they choose robustness, they must write in a style that mingles the implementation with the specification and therefore obscures the latter. This paper introduces a new language for writing macros. With the new macro system, programmers naturally write robust language extensions using easy-to-understand specifications. The system translates these specifications into validators that detect misuses—including violations of context-sensitive constraints—and automatically synthesize appropriate feedback, eliminating the need for ad hoc validation code.

Presents syntax-parse, which seems like a truly great advancement over existing macro writing facilities.
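
As a rough flavor of what this looks like, here is a minimal sketch using Racket's syntax/parse library; the macro and syntax-class names (mylet, binding) are made up for illustration.

#lang racket
(require (for-syntax racket/base syntax/parse))

;; A reusable description of one piece of syntax: a binding pair such as (x 5).
(begin-for-syntax
  (define-syntax-class binding
    #:description "binding pair"
    (pattern (var:id rhs:expr))))

;; mylet is specified by its grammar. A misuse such as (mylet ([x]) x) is
;; reported in terms of "binding pair", not in terms of the lambda it expands to.
(define-syntax (mylet stx)
  (syntax-parse stx
    [(_ (b:binding ...) body:expr ...+)
     #'((lambda (b.var ...) body ...) b.rhs ...)]))

(mylet ([x 1] [y 2]) (+ x y))   ; => 3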

Programmers define modular,

Programmers define modular, reusable specifications of syntax and use them to validate uses of macros. The specifications consist of grammars extended with context-sensitive constraints. When a macro is used improperly, the macro system uses the specifications to synthesize an error message at the proper level of abstraction.

Is this the "truly great advancement"? Defining grammars to maintain macros?

If you know how to specify your language and extend it using grammar rules consistently, you don't need the tangled mess that macros and macro systems are.

A month of implementation can easily save hours of debugging

If you know how to specify your language and extend it using grammar rules consistently, you don't need the tangled mess that macros and macro systems are.

Right. You can write your own compiler.

I don't understand why you

I don't understand why you guys attempt to make an arcane science out of transformations which are essentially refactoring level and when dealing with appropriate tools just a variant of scripting. No need for developing a fully formed backend which can or shall be presupposed.

Because you haven't ever used any interesting macros

Here are a few interesting macros, implemented in Racket, and all with papers published about them:

  • A typechecker that works on a modular basis
  • A first-class module system
  • A Java-like class system (the CLOS-like system doesn't have a paper)
  • A parser and lexer generator
  • An optimizing pattern matcher
  • The system described in the article under discussion

Now, which one of those is "refactoring level" and "just a variant of scripting"?

Yes, very clever and

Yes, very clever and certainly worth tons of PhDs. It would be even more impressive of course to grow a language up to a ring tone downloader which can't be implemented without lots of language transformations.

[meta]

Are you contributing something useful to the discussion? I don't think your comments in this thread have been up to the standards described here: http://lambda-the-ultimate.org/policies

Weird

I don't think it is good form to try to use the policies document as a rubric, cf. #4 and the fact that you mentioned a bunch of "papers" above without their actual titles, hyperlinks, or even DOI numbers.

I am just saying.

I also don't think the things this paper argues over, clarity and robustness, are easily quantifiable, and discussion of them can easily devolve into worthless debate. That is why there are so many approaches to this problem. Peter van Roy has great advice on this in the thread HOPL III and the History of Haskell: projects with a vision should interact only occasionally.

From an engineering perspective, I think Kay wants a system like Nemerle but presumably using something like .NET Code Contracts (design-by-contract) to specify violations. If so, then the right reply here would be to address the level at which the error is shown to the programmer, which is something Felleisen and Culpepper discuss: detecting the true cause of an error, rather than letting it propagate to the code that dislikes some combination of parameters.

I don't think the policies

I don't think the policies are intended to stifle debate. Surely that's not the goal. However, speaking for myself here, I haven't been able to get to the bottom of Kay's objection. Is it related to the original post, or simply a general objection to macros? I am not sure what the argument was, anyway, apart from the fact that it was "Kay contra macros". Clarification would be good.

debate

I'm certainly not suggesting that debate on macros should be stifled, nor that it's out of bounds to object to macros. But I think comparing people's research to ringtone downloaders is not the kind of discourse that LtU wants.

Yes, it is a bit like "Kay

Yes, it is a bit like "Kay contra macros" but in the presence of suggested augmentation of macros by grammars which introduce a safety belt which naturally leads to techniques which are simpler, well factored and more sound than source code embedded source code transformers/evaluators and also around for about 50 years. So instead of "fixing" macros by adding more sophistication to them they can be swept into the dustbin of history, so to speak.

Charles Stewart is of course right that this hints to compilers but stripped down compilers or compiler components which do not offer much beyond surface analysis ( Lex/Yacc, ... ) and source-to-source transformation ( Stratego/XT, ...). They lack type checkers, flow graph transformers, byte code verifiers and optimizers etc. which might be added on behalf but are not essential for the sort of "scrap your boilerplate" kind of tasks.

Omnipotence and stepping to the upper limit of the power hierarchy and other archaisms are not intended.

Huh?

I honestly have no idea what you're talking about, or what points you are trying to make. Perhaps if you could write in a less opaque style we could have a discussion.

Source transformations vs. source transformations

I don't think whole-program source-to-source transformation offers the radical improvement you are after. My impression is that many of the issues in this domain are the same sort of issue as those arising with macros, and so research on whole-program transformation has a big overlap with research on macros.

Do the whole-program folk and the macro folk cite each other much?

No they don't

Look at the works cited list in Fortifying Macros. There is no comparison to how something like this might be accomplished in something like ASF+SDF or Stratego/XT. [Edit: At an extreme, pluggable types fit in here, too.]

I was going to bring this up. By the way, the reason I understood Kay's comments about ring tone downloaders (and of course found them funny) is that I remember Kay's posting history on this subject. He tends not to say much, but when he does it is very clear what set of ideas he supports (concrete syntax, scannerless parsing, meta-environments, syntax definition for algebraic signatures, unparsing definitions, etc.). If you search LtU for concrete syntax, for example, you'll really only find either the Stratego/XT folks posting about it or Kay. The only reason I know this is that I was searching LtU a year or so ago for concrete syntax and scannerless parsing, since basically none of my books/journals on compilers seemed to mention them.

The cool thing about this Fortifying Macros paper, though, is that it is a step towards fixing such overlaps. The Related Work section in this paper is a bit light, but I also don't know if this paper was the right place to mention such things.

Great, we haven't had a macro debate-fest in a while!

Is this the "truly great advancement"?

I find your tone quite uncalled for. In case you didn't notice, I qualified my statement with "over existing macro writing facilities".

Defining grammars to maintain macros?

The paper goes beyond what's usually understood by "grammar" to allow you to define domain-specific, semantic constraints (such as that the names bound by a LET must be distinct), and then provide useful feedback to the programmer in case those constraints are violated. The paper lays out clearly how that's an advancement over existing systems, and I don't see how such issues are tackled by "grammars" in general.
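
For concreteness, here is a hedged sketch of how such a constraint can be attached with syntax-parse's #:fail-when directive, again using a made-up mylet and with the duplicate search written inline so the sketch stays self-contained.

#lang racket
(require (for-syntax racket/base syntax/parse))

(define-syntax (mylet stx)
  (syntax-parse stx
    [(_ ((var:id rhs:expr) ...) body:expr ...+)
     ;; context-sensitive constraint: the bound names must be distinct
     #:fail-when (let loop ([ids (syntax->list #'(var ...))])
                   (cond [(null? ids) #f]
                         [(member (car ids) (cdr ids) bound-identifier=?) (car ids)]
                         [else (loop (cdr ids))]))
                 "duplicate variable name"
     #'((lambda (var ...) body ...) rhs ...)]))

;; (mylet ([x 1] [x 2]) x) is rejected with a "duplicate variable name" error
;; attributed to mylet, not to the lambda it would have expanded into.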

the tangled mess that macros and macro systems are

Again, this arrogance is unneeded. If you want to talk shop I'm all ears, but only if it's constructive.

Meta- and Macro-

When I was toying with my own little language, I wondered whether I needed 'meta-programming' or 'macro-programming' (and eventually focused on the former).

Sometimes, macros are used for crude syntax manipulations. The C preprocessor does that and is surprisingly useful in spite of its lack of intelligence.

Other times, some very complex transformations (like the 'optimising pattern matcher' cited above) could benefit from being usable at run time. Pluggable object systems could likewise be made 'meta-' instead of 'macro-', for instance for a debugger.

I wonder who has studied the idea of unifying meta- and macro- constructs. Any papers?

Another interesting aspect is using the same structures either for direct execution or for macro-expansion. HDLs do that with some combinatorial constructs. A FOR loop is executed sequentially by a simulator and is flattened (i.e. macro-expanded) to logic circuits by a silicon compiler.
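
For a rough Racket analogue of that idea (repeat is a made-up name for this sketch), the same surface form can either be flattened at expansion time, when the count is a literal, or left as an ordinary run-time loop otherwise:

#lang racket
(require (for-syntax racket/base racket/list))

(define-syntax (repeat stx)
  (syntax-case stx ()
    [(_ n body)
     (exact-nonnegative-integer? (syntax-e #'n))
     ;; count known statically: splice in n copies of body (cf. a silicon compiler)
     (with-syntax ([(copy ...) (make-list (syntax-e #'n) #'body)])
       #'(begin copy ...))]
    [(_ n body)
     ;; count known only at run time: expand to a loop (cf. a simulator)
     #'(for ([i (in-range n)]) body)]))

(repeat 3 (displayln "unrolled"))   ; expands to three inline calls
(define k 2)
(repeat k (displayln "looped"))     ; expands to a run-time for loop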

Maybe with FEXPRs

Disclaimer: I'm going to use fexpr to mean both fexprs and fsubrs. What I actually have in mind are the operatives from John Shutt's Kernel programming language, which take, besides the operand tree, an environment parameter that is first class. To anybody unfamiliar with Kernel operatives I would recommend starting with the "Decomposing lambda ..." paper, which can be found here.

In theory it should be possible (in some cases) to use fexprs and partial evaluation to get the behaviour you describe. The idea is that you use fexprs instead of macros, and then you use a partial evaluator that can (sometimes) transform code that uses certain fexprs in the same way that macros are expanded.

For example, if you have the $cond operative in Kernel (which is like the cond syntax-form/macro in Scheme), a partial evaluator could in principle change this code:


($cond
  ((positive? x) x)
  ((negative? x) (- 0 x))
  (#t 0))

into this:

([$if] ([positive?] x) x
  ([$if] ([negative?] x) ([-] 0 x)
    0))

Here the bracketed names ([symbol]) stand for the actual procedures/applicatives rather than the symbols themselves.

I say in theory because I don't know of any published papers on this matter. I believe John is working on at least the theoretical background for this in his PhD dissertation. I would also think that Thomas Lord may be working on something akin to this from what I have read, although I could be wrong. Myself, I'm trying to work out something like this for a functional subset of the Kernel programming language, but I'm still a couple of months away from even considering publishing (I have something like a proof of concept, though).

The main difficulties with this approach are (in my opinion) two:

  1. Binding stability. In order to use this approach, it should be possible to determine the binding of the symbols appearing in an expression in the environment in which it is evaluated. Allowing indiscriminate mutation of environments undermines this to the point of making the approach infeasible. The Kernel programming language has mechanisms to control the mutation of bindings, but it is still too early to know whether these mechanisms are enough for this purpose. (It is important to note, however, that their justification isn't to allow partial evaluation/compilation; rather, they are justified by principles of design in the Kernel report.) A toy illustration of this issue appears in the sketch after this list.

  2. Termination condition. This is more difficult. As with any partial evaluation, we need some termination condition. In this case I think online partial evaluation is the only sane way, so we need a good abstract model of all the data structures used in evaluation (including trees, environments, ...). The main problem is identifying the fexprs that behave like macros in order to expand those. Macros are required to terminate in finite time, but general fexprs are not. One way to simplify this is to ask the programmer to annotate fexprs that are used as macros, so that the partial evaluator always expands them blindly. Other, more sophisticated ways are of course possible; for example, the partial evaluator could keep a list of the arguments of successive calls to a fexpr and keep going only when there is a reduction of some kind that would eventually reach a base case. The fact that fexprs can generate new fexprs means that the number of created fexprs should also be taken into account. This is an open problem even for regular online partial evaluators, but there are a number of approaches that give good results in simple cases of fexprs.
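
Here is a toy illustration of the binding-stability issue from point 1. This is a deliberately crude model (a mutable hash standing in for an environment) and not Kernel; it only shows why a specializer that inlines what a symbol denotes is unsound if the binding can later change.

#lang racket
;; Toy model: an "environment" as a mutable hash from symbols to values.
(define env (make-hash))
(hash-set! env 'positive? positive?)

;; A "specializer" that replaces a symbol with the value it denotes right now.
(define (specialize sym) (hash-ref env sym))

(define fast-test (specialize 'positive?))   ; captures the current binding

(hash-set! env 'positive? negative?)         ; later mutation of the environment

(fast-test 5)                    ; => #t, the binding captured at specialization time
((hash-ref env 'positive?) 5)    ; => #f, what direct evaluation would now compute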

Well, this post is getting too long. I hope in a couple of months I can write an article and link it here. In the meantime, if anybody is interested in this approach I would love to hear opinions/ideas.

Interested

I'm definitely interested in this approach. It's an idea that's been kicked around for ages, but that as far as I know has not been fully investigated. Please do share any results, positive or negative.

Great paper!

I'm only partway through, but I can already say that this is the most interesting work on macros (at least, on Scheme-type macros) that I've seen in years. One of the big problems with macros is that even though you can define major language features with them (e.g. an OO system built out of macros in Scheme is a typical application of complex macros), the error reporting will not look like the error reporting of a language that has (in this case) OO built in. This drastically reduces the utility of macros. The system described in this paper (if I understand it correctly) fixes this problem, so I think it's legitimate to call this a big advance, almost as big as going from non-hygienic to hygienic macros.
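
To make the error-reporting problem concrete, here is a hedged sketch with a made-up my-let written with plain syntax-rules; the exact message text varies by implementation, but the shape of the problem is the same.

#lang racket
;; The pattern accepts anything in binder position, so a misuse slips through
;; to the expansion and the error is reported in terms of lambda.
(define-syntax-rule (my-let ([x e] ...) body)
  ((lambda (x ...) body) e ...))

;; (my-let ([5 1]) 'ok)
;; matches the pattern, expands to ((lambda (5) 'ok) 1), and is rejected with
;; something like "lambda: not an identifier ... at: 5"; nothing in the
;; message mentions my-let, the form the user actually wrote. A syntax-parse
;; version that declares the binder as var:id rejects the misuse up front,
;; phrased in terms of my-let.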

Part-way through misses the best part

And I quote (emphasis mine):

7.1 Case: loop
The loop macro [Shivers 2005] allows programmers to express a wide range of iteration constructs via loop clauses. The loop macro is an ideal case study because the existing implementation performs almost no error-checking, and its author makes the following claim:

It is frequently the case with robust, industrial-strength software systems for error-handling code to dominate the line counts; the loop package is no different. Adding the code to provide careful syntax checking and clear error messages is tedious but straightforward implementation work.
Olin Shivers, 2005