Extending the Scope of Syntactic Abstraction

Extending the Scope of Syntactic Abstraction by Oscar Waddell and R. Kent Dybvig, POPL '99. (Also: Waddell's thesis with the same title.)

The benefits of module systems and lexically scoped syntactic abstraction (macro) facilities are well-established in the literature. This paper presents a system that seamlessly integrates modules and lexically scoped macros. The system is fully static, permits mutually recursive modules, and supports separate compilation. We show that more dynamic module facilities are easily implemented at the source level in the extended language supported by the system.

This paper is probably known to many LtUers, but it's never been posted, and I find it very enjoyable.

It introduces two very simple forms, (module name exports-list body) for naming and enclosing a body of code containing definitions and expressions, and (import module-name) for importing the definitions from such a module into the current scope.

Module names are lexically scoped just like variables, and modules can appear wherever definitions can occur, so one can define modules (say) inside a lambda. Furthermore, modules and import forms may also be generated by macros.
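For example, here is a small sketch following the two forms as described above (it should run as-is in Chez Scheme, whose module system follows this design):

(module counter (reset! next)
  (define count 0)
  (define (reset!) (set! count 0))
  (define (next) (set! count (+ count 1)) count))

(define (two)
  (import counter)   ; import is just another definition form,
  (reset!)           ; so it can appear inside a lambda body
  (next)
  (next))            ; (two) => 2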

They show how more advanced features (such as qualified references ("module::var"), anonymous modules, selective importing and renaming, and mutually recursive modules, among others) can be built on top of this simple base using a hygienic macro system, cleverly and without much fuss.

Side note: such a "syntactic" module system can be contrasted with R6RS's "static" library system. There is currently some discussion to this effect in the Scheme community.

Another take on this

It's nice that these features are available 'without much fuss'. If you start from a more complicated base (dependent types, first-class types, first-class fields), then you get these features very easily too. AFAIK, AUTOMATH had the necessary features to do all of this (with its telescopes aka dependent records) in 1968. Axiom has had that for a (mere) 20 years.

Built on a different foundation, these features have also been available in systems for algebraic specification for a rather long time.

My main point is that the various research communities (programming languages, specification languages, mechanized mathematics languages [CAS, proof assistants, ATP, model-checkers, SAT solvers, etc.]) should really talk to each other more! The number of times that I have seen a 20+ year gap between one community re-discovering/re-inventing what another had known all along is really frightening.

Think of the brains

I think there are two main obstacles to such cross-fertilization:

The first is that topics such as module systems ("done right") are hugely complex and need a lot of enthusiasm, brain power, and time to master. Waddell's thesis, for example, is probably at the limit of what most of us can keep in our heads at one time (which I assume is needed for good design).

The second is that different communities have different goals. The goal of the authors of the paper seems to be to create a module system that is well designed, theoretically tractable, integrated with syntactic abstraction, portable to a large number of Scheme systems (with different semantics and histories), and can be implemented efficiently. Other communities have completely different goals, such as high mathematical purity, and may consider, say, macros or separate compilation less interesting.

In both cases, it's hard to communicate effectively: either because learning about the other's system and results is too hard, or because the other's goals differ so much from one's own that the other's results are difficult to apply.

P.S. Motto for a research laboratory: "What we work on today, others will first think of tomorrow." (Alan Perlis) ;)

I agree

I completely agree that the two obstacles you list to cross-fertilization are important. But look at the benefits of surmounting them! I like to think of Design Patterns, as well as the use of category theory in functional programming, as two brilliant examples of successful cross-fertilization. What I'm saying is that I wish we had a reward system in place for research that properly supported 'thinking big' (to balance out your first barrier). If there were more rewards for 'integrative thinking' (instead of mostly for finding one's specialty and sticking to it), the different communities could share their solutions faster, which again would help.

But with macros?

The hard part about any Scheme module system is handling macros, particularly procedural macros. I'm skeptical that AUTOMATH and Axiom handle them as well as systems like Waddell's (or PLT Scheme, or Scheme48).

Agreed, but

You are right that Scheme handles macros 'best' amongst most (all?) languages. But the feature sets of AUTOMATH and Axiom were chosen so that people did not feel they needed to resort to macros to get their work done. It's a different set of goals (as has already been mentioned).

Need for macros

My point was merely that Waddell's contribution over earlier module systems, including those in AUTOMATH et al, is proper handling of macros.

As for the need for macros, we'll have to agree to disagree. I don't think there are any feature sets that could obviate the need for syntactic abstraction, regardless of the goals of the language.

Syntactic abstraction

We completely agree that syntactic abstraction is needed. It's just that macros are not the only way to get there. Agda (and Maude) provide nice ways to define mixfix operators (and priorities) which seem to work quite well for defining new syntax and associated semantics.

Binding...

IMO, the real payoff of macros is that they give you the ability to mess with binding structure. This is something we can't do solely by defining mixfix operators.

Concretely, suppose I write a library in Agda which provides datatype constructors forming a cartesian closed category. Then it's natural to want to define terms of this library using lambda-notation, but mixfix operators aren't enough to do this. We need something like a macro which can run the bracket abstraction algorithm to turn those lambda-terms into a bunch of combinators wired together.
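To make the bracket-abstraction step concrete, here is a minimal sketch in Scheme rather than Agda (the term representation is an assumption for illustration only: symbols for variables, two-element lists for applications, and (lam x body) for abstractions):

(define (free? x t)                        ; does variable x occur in term t?
  (cond ((symbol? t) (eq? x t))
        ((pair? t) (or (free? x (car t)) (free? x (cadr t))))
        (else #f)))

(define (abstract x t)                     ; [x]t over lambda-free terms
  (cond ((eq? x t) 'I)
        ((not (free? x t)) (list 'K t))
        (else (list (list 'S (abstract x (car t)))
                    (abstract x (cadr t))))))

(define (compile-term t)                   ; eliminate all (lam ...) forms
  (cond ((symbol? t) t)
        ((eq? (car t) 'lam) (abstract (cadr t) (compile-term (caddr t))))
        (else (list (compile-term (car t)) (compile-term (cadr t))))))

;; (compile-term '(lam x (lam y x)))  =>  ((S (K K)) I)

The point is that this translation consumes the binder and its body as data, which is exactly what a macro (and not a mixfix operator) gets to do.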

Is binding special?

I don't see that this particular example requires macros much more than any other syntactic translations require macros - you could bake this conversion to combinators into the language. The question is, after you've done that, will you still want other syntactic translations?

Binding is special

How, given only combinators, can you combine a variable name with an open term referencing that variable?

In general, how, without macros, could you write a binding form that binds two variables, given one that only binds one?

True

If you want to disguise binding in a form that looks like a normal function call, then you must have some mechanism that circumvents the normal evaluation of parameters. (It doesn't necessarily need to be macros, though, if that implies syntax to syntax transformation). My point was just that by making the lambda in a language more abstract, you open up custom interpretations of bindings like neelk was talking about.

?

If you want to disguise binding in a form that looks like a normal function call, then you must have some mechanism that circumvents the normal evaluation of parameters.

I don't know what you mean by "disguise binding" here. What I want is to abstract over binding.

Also, I don't have any idea what you mean by circumventing the evaluation of parameters. Are you suggesting something fexpr-esque?

I just meant macros are able

I just meant macros are able to rewrite (a b c d) without first evaluating a, b, c, and d. You're right, you can't abstract over binding in the scheme I described.
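For instance (a standard illustration, not anything specific to this thread), a two-argument `or' cannot be an ordinary procedure precisely because its second operand must not be evaluated up front:

(define-syntax or2
  (syntax-rules ()
    ((_ a b) (let ((t a)) (if t t b)))))

(or2 #t (error "never evaluated"))   ; => #t, because the macro rewrites
                                     ;    the form before any evaluation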

Good point

Yes, binding is special. Being able to make new binding operators is very useful. And, yes, that's the extra oomph that "hygienic macros" gives you.

I guess I just wish that people saw that new binding forms do not need to be 'macros'. 'Macro' has the connotation of automatic delta-expansion, which I rarely want. I want a definitional extension mechanism that allows me full control over delta-expansion. Maybe I missed that feature in Scheme macros?

Delta-expansion

I'm confused as to where delta-expansion comes in. My understanding would be that in a delta-expanded program, every expression of pair type would be a syntactic application of the `cons' procedure. No macro system that I know of does that, so I assume you're thinking of something else, but I don't know what it would be. Or perhaps you're thinking of some particular macro system that does this?

Can you explain further?

Delta expansion

I should have skipped the fancy terminology and simply said "automatic expansion of definitions". The macro systems I know automatically expand, i.e., perform inlining. I want a syntax extension mechanism that does not (necessarily) do that.

Try Scheme

Scheme macros don't automatically inline (if you're referring to inlining functions). For example,

(define-syntax-rule (lambda2 x y body) (lambda (x) (lambda (y) body)))
(define K (lambda2 x y x))

Nothing is automatically inlined here.
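(define-syntax-rule is PLT shorthand; for readers without it, the portable spelling is the same macro written with standard syntax-rules, and still involves no automatic inlining of K:)

(define-syntax lambda2
  (syntax-rules ()
    ((_ x y body) (lambda (x) (lambda (y) body)))))

(define K (lambda2 x y x))   ; the macro expands once, here; K itself is
                             ; an ordinary procedure and is never inlined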

Final Publication

The official publication of this paper is in the ACM Digital Library, though the full text is behind a paywall.

yes, enjoyable. But...

That's a fantastically neat paper in almost every respect. Nevertheless, here is an example of a quote from it that irks me and I'd like to briefly describe why, put it in some historic context, and point out in broad terms how it can be fixed:

Our design derives from the philosophy that a programming language should be based on a small core language augmented by a powerful syntactic abstraction facility. The core language should have simple constructs and a straightforward semantics, and the syntactic abstraction facility should permit the definition of new language constructs whose meanings can be understood in terms of their static translation into the core language.

Taken on its own, in isolation, that quote concisely expresses one of the essences of Scheme since its earliest beginnings. For example, in the oldest dialects, you can reduce core control constructs to procedure application plus a conditional operator (so (proc operand ...) and (cond branch ...)). With those control constructs, existing alongside LAMBDA of course, higher-level constructs like AND and OR for control or LET for a mix of control and binding can be built by straightforward translation down to just the core.
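For concreteness, here are the classic translations, expressed in the modern syntax-rules notation purely as an illustration (the old reports gave them in prose):

;; (let ((x e) ...) body)  ==>  ((lambda (x ...) body) e ...)
(define-syntax my-let
  (syntax-rules ()
    ((_ ((var init) ...) body1 body2 ...)
     ((lambda (var ...) body1 body2 ...) init ...))))

;; (and) => #t, (and e) => e, (and e1 e2 ...) => (if e1 (and e2 ...) #f)
(define-syntax my-and
  (syntax-rules ()
    ((_) #t)
    ((_ e) e)
    ((_ e1 e2 ...) (if e1 (my-and e2 ...) #f))))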

In fact, the earliest papers describing Scheme worked on the basis of just such a translation, though only reliably for a fixed, finite set of explicitly enumerated syntactic constructs. E.g., you could *use* LET or AND in the official language definition, but if you wanted to add a new construct, NAND, you could only add that in ad hoc, implementation-specific ways.

In the first Revised Report on Scheme (R1RS), for the first time there was a defined way to extend Scheme syntax. The definition had potential, though it is a bit fuzzy on some critical details and, since I have to guess, my guess is that Steele's implementation of it was ad hoc and quirky.

Scheme fled the nest. Other implementations were born. A variety of superficially incompatible approaches to syntactic abstraction proliferated. It would not be until R5RS that the language officially included *some* mechanism for syntactic abstraction -- an extremely simplified and weakened form of the mechanism Waddell and Dybvig describe in this paper.

Now, in the quote above, it is said: "the syntactic abstraction facility should permit the definition of new language constructs whose meanings can be understood in terms of their static translation into the core language." What about that? Well, it is (all too often) taken to mean: "the syntactic abstraction facility should permit only the definition of new language constructs whose meanings can be understood in terms of their static translation into the core language."

Well, so what, right?

One cost in R5 is the addition to the language of a complex evaluation rule to handle the kind of "global alpha renaming" that enables even its weakened form of program-defined syntax. Another cost in R5 is the incompleteness of the formal semantics (for no formal account is offered of syntactic definitions). Another cost is the loss of any clear account of the semantics of a dynamic environment (one in which a programmer redefines things in a running program). Another cost is the huge leap in complexity of a minimal implementation from R4 to R5.

As if these problems weren't bad enough, the problem becomes compounded in R6RS. The macro system of R5RS in its striving for static semantics and hygiene fails to permit even the deliberate capture of identifiers (or introduction of deliberately captured identifiers). As the paper at hand shows, that's an important feature and so R6RS introduces a still weakened form of the syntax extension described by Waddell and Dybvig in this paper, yet with two bugs: It isn't powerful enough to define the code in this paper. Also, the specification given in R6RS is formally unimplementable (and this is deep, not a mere typo).

This paper lays out and then makes lovely use of a syntactic extension system that can be fully statically reduced to a core language containing no syntax definitions -- it's beautiful in that way. But to get there, it has to still further complicate the evaluation rules for "the expander" compared to R5 and even compared to R6.

Are we to conclude that there is no such thing as a fully static meta-language for syntax extensions sufficient to implement the next big feature after modules? And the one after that? Must the essential definition of the underlying language in which we express these hacks constantly expand? Will we never have a good story for how this language is supposed to operate in dynamic programming environments?

No, the correction to the whole mess is as simple as going back to R1RS and doing what it tried to, but cleaned up a bit - namely to add Scheme FEXPRs that, in addition to accepting their arguments unevaluated, take an "environment" parameter or parameters characterizing the caller's dynamic context. E.g.:

((fexpr (env x) ...) exp)
is equivalent to:
((lambda (env x) ...) <a reification of the env> (quote exp))
and then, if you can statically tell that in a form (op operand ...) the operator op is statically bound to a fexpr with a known body, you can immediately start reducing using little or nothing more than the optimization rules described in the RABBIT thesis. All of the alpha-renaming tricks used to implement modern syntax-rules and syntax-case just "fall out for free" from that approach.
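To illustrate the intended usage only (nothing here is standard Scheme: `fexpr' is the proposed form, and the environment-taking `eval' is assumed), an `unless'-like form might be written:

(define my-unless                     ; hypothetical: fexpr is the proposed form
  (fexpr (env test body)              ; test and body arrive as quoted data
    (if (eval test env)               ; eval here is assumed to evaluate an
        #f                            ; expression in a reified environment,
        (eval body env))))            ; which is not what standard eval does

(my-unless (> 1 2) (display "only evaluated when the test is false"))

Per the equivalence above, when the operator of a call is statically known to be such a fexpr, the call can be reduced away at compile time; otherwise it pays its own way at run time.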

Scheme needs but one mechanism for syntax abstraction. One which permits but does not demand syntax extensions that can be statically eliminated. Schemeish fexprs (which accept a reified environment in addition to their unevaluated arguments).

The Kernel guy, at least, has the right idea.

One cost in R5 is the

One cost in R5 is the addition to the language of a complex evaluation rule to handle the kind of "global alpha renaming" that enables even its weakened form of program-defined syntax.

Do you mean something like R6RS's expansion process? (I'm pointing to R6RS because I'm more familiar with it.)

R6RS introduces a still weakened form of the syntax extension described by Waddell and Dybvig in this paper

How is R6RS weaker than the syntax-case described in this paper? I thought R6RS's syntax-case was based on it.

Are we to conclude that there is no such thing as a fully static meta-language for syntax extensions sufficient to implement the next big feature after modules? And the one after that? Must the essential definition of the underlying language in which we express these hacks constantly expand?

Excellent point.

No, the correction to the whole mess is as simple as going back to R1RS and doing what it tried to, but cleaned up a bit - namely to add Scheme FEXPRs that, in addition to accepting their arguments unevaluated, take an "environment" parameter or parameters characterizing the caller's dynamic context.

With this mechanism, could the module system from the paper be implemented with high fidelity? What about hygiene, for example?

high fidelity static hygiene from fexprs

One cost in R5 is the addition to the language of a complex evaluation rule to handle the kind of "global alpha renaming" that enables even its weakened form of program-defined syntax.

Do you mean something like R6RS's expansion process? (I'm pointing to R6RS because I'm more familiar with it.)

Sure. The full syntax-case system of the R6 library uses an expansion / alpha-renaming process that is in some sense a generalization of the one in R5 and core R6's syntax-rules. Syntax-rules uses a specialization of the general system insofar as it doesn't permit intentional variable capture to be introduced in an expansion.

How is R6RS weaker than the syntax-case described in this paper? I thought R6RS's syntax-case was based on it.

I was mistaken. My recollection was that "set!" macros had been omitted but this was incorrect (as I discovered when double-checking before answering you).

With this mechanism, could the module system from the paper be implemented with high fidelity? What about hygiene, for example?

Yes, it absolutely could be implemented. In fact, given these Scheme-ish fexprs, building a library for syntactic closures (a la syntax-case) and defining a simpler, dumbed-down version (a la syntax-rules) would be very good things to do. The fexpr semantics I described would translate those static macro systems into a bunch of lambda forms, the key aspect of which is that the forms needed to do the equivalent of expansion would all be statically apparent and, by virtue of their structure, statically eliminable. If you wrote code using only these static syntax systems (including modules), it's all defined (and compiled as) uses of the very low-level fexpr mechanism, but once compiled, your code would contain no fexprs. It's all constant folding, a kind of safe beta-reduction, and alpha-renaming - but just over lambdas.

Indeed, you could then *define* top levels in which the static syntax systems are available but fexprs have been hidden and are unavailable - and you would have a statically compiled, feature-generous sublanguage that could be R5, R6 core, or R6 core with the R6 standard library. Conversely, you could define top-level environments that admit fexprs (and take a performance hit for some cases of their non-static use -- "exotic features must pay their own way", as Dybvig describes a motto of Chez Scheme) - and do some pretty fun and interesting stuff in that environment. Meanwhile, you have a simple core and a straight story about the semantics of everything defined in that core language, including a sane semantics for an interactive programming environment.

The fexpr semantics I

The fexpr semantics I described would translate those static macro systems into a bunch of lambda forms, the key aspect of which is that the forms needed to do the equivalent of expansion would all be statically apparent and, by virtue of their structure: statically eliminable.

That certainly sounds interesting.

Could you give just a sketch of how one would go about defining module and import from the paper using "Schemeish fexprs"?

yes, but...

I'm happy to, but can you either wait a bit or make clearer to me what you are specifically skeptical of (or both)?

I'm writing a slightly detailed account in hopes of influencing the Scheme standardization project - so it is helpful for me to understand where skepticism lies. And if I understand where *your* skepticism lies I can more concisely give you a quick sketch here.

I'm not skeptical, I just

I'm not skeptical, I just can't figure out how one would do the kind of namespace manipulation required, using fexprs that can then be eliminated by a hopefully not too complex analysis.

P.S. Please post the report to LtU when it's finished. The topic is very interesting.

Not really

1. The R6RS is perfectly implementable. I believe Ypsilon is currently the most conformant implementation, passing all currently-available conformance tests.

2. It's perfectly possible to implement the Waddell-Dybvig module system as a macro. See http://docs.plt-scheme.org/reference/package.html for an example.

3. Fexprs are a terrible idea, and most of the claims you've made about them are false. You can't, for example, implement either the Macros-that-work algorithm or the Dybvig algorithm without global control over the syntactic environment. You also can't implement many things that are useful applications of modern Scheme macro systems using fexprs. See just about any of the papers here: http://www.ccs.neu.edu/scheme/pubs/.

4. I wish people would stop citing a report which does not contain a precise specification of anything at all as evidence that fexprs are truly a viable option.

R6RS un-implementability

1. The R6RS is perfectly implementable. I believe Ypsilon is currently the most conformant implementation, passing all currently-available conformance tests.

Here is my reasoning as to why it is not implementable:

The R6RS syntactic closure system in the R6RS standard library is clearly Turing complete. That is, the expansion process must necessarily permit the execution of arbitrary computations.

Because it is Turing complete, the arbitrary code of the expansion process can contain subforms whose evaluation is (a) not terminating; (b) not provably not terminating in any particular axiom system your compiler happens to choose.
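A minimal sketch of point (a), not from the report (any procedural macro system suffices): a syntax-case transformer that never returns, so that expanding any use of it never terminates.

(define-syntax diverge
  (lambda (stx)                ; the transformer is arbitrary Scheme code...
    (let loop () (loop))))     ; ...and this one simply never returns

;; Any occurrence of (diverge) makes expansion of the enclosing
;; library or top-level program run forever.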

It is therefore possible to write an R6 form in which if certain sub-forms of a macro definition ever terminated (if evaluated by the expander) a syntax error would be produced - yet because those subforms never terminate, no syntax error will ever be produced.

It is additionally possible to write R6 programs in which no runs of the program ever require the complete evaluation of those non-terminating sub-forms - although it is again possible, given any axiom system for an expansion system, to write programs that the expander can not prove will not need the expansion of those non-terminating sub-forms.

R6 defines a syntactically correct program as one which never produces a syntax error in its macro expansions.

R6 requires that implementations do not begin execution of any module or top-level program until it is proved that the expansion contains no syntax errors.

Therefore, for any implementation, there exist syntactically correct R6 programs which that implementation will fail to execute.

No R6 implementation can be both complete (execute every syntactically correct program correctly) and consistent (execute only syntactically correct programs correctly).

Similarly, no implementation of R6's EVAL procedure can be complete and consistent for its specification recapitulates the "no syntax errors in expansions before evaluation begins" rule.

Incidentally, if you remove the restriction that an implementation may not begin execution until a module or top-level program is proved to be free of syntax errors in its expansion, and instead perform expansion lazily, and remove the restriction that correct (EVAL-free) programs may not generate run-time syntax errors, then it would be possible to have a correct and complete implementation of R6. What you would give up is any remaining pretense that syntactic abstraction is strictly a pre-processing affair. Indeed, you could now go full circle and define fexprs and classic lisp defmacro using R6 macros. That you can also go the other way (defining the more complicated R6-style of macro in terms of Scheme-ish fexprs, and in such a way that they can be statically eliminated when used as intended) is pretty much my main point.

Fexprs are a terrible idea, and most of the claims you've made about them are false. You can't, for example, implement either the Macros-that-work algorithm or the Dybvig algorithm without global control over the syntactic environment. You also can't implement many things that are useful applications of modern Scheme macro systems using fexprs. See just about any of the papers here: http://www.ccs.neu.edu/scheme/pubs/.

All of the results developed for static hygienic systems are derivable from their redefinition in terms of Scheme-ish fexprs, thanks to the simple semantics of lexical scope and lambda.

Perhaps it would help (or perhaps not) to offer an analogy to continuations. Some uses of continuations really mess up static optimization and static analysis of code. Other uses of continuations are easy to optimize and analyze. Chez has a great attitude about continuations: Let exotic features "pay their own way". Statically reducible uses of continuations are handled well by the optimizer. Other uses are brute-forced a bit.

That's a good way to deal with the implicit reification of continuations.

Schemish-fexprs amount to the implicit reification of environments and source forms. Some uses are statically reducible, others are not. Let the non-reducible uses "pay their own way" but, meanwhile, all of the work done on static hygienic macro systems is a model for an interesting class of reducible uses. Perhaps it's a case of "Lambda: the ultimate syntactic abstraction".

Missing the definition

First, syntactic closures refers to a particular Scheme macro system, which is not the one specified in the R6RS, nor the one discussed in Waddell's thesis. I'm not sure what you mean by the term; it seems like you mean "hygienic macro system".

Second, your whole discussion on R6RS implementability is predicated on confusion about the meaning of execution. Execution of a library or top-level program occurs *after* expansion, not concurrently with it. Therefore, there is no issue with the undecidability of expansion. I will note that this was already explained to you at [1]. Also, your claim about the specification of `eval' in R6RS is incorrect. You can see this from the spec [2], where no mention is made of whether expansion even must occur first.

Third, this claim is still false:

All of the results developed for static hygienic systems are derivable from their redefinition in terms of Scheme-ish fexprs, thanks to the simple semantics of lexical scope and lambda.

If anyone ever provides a semantics of "Scheme-ish fexprs", it will be provably false. The "simple semantics of lexical scope and lambda" do not enter into it. If you doubt this, produce an implementation of `syntax-rules' in terms of "Scheme-ish fexprs".

[1] https://groups.google.com/group/scheme-reports-wg1/msg/3a4e6ede6b1854ac
[2] http://www.r6rs.org/final/html/r6rs-lib/r6rs-lib-Z-H-17.html

not missing a thing

It's amazing how swiftly the misplaced bitterness that broke out on the WG1 list spills out here. But to try to dance around that and briefly respond:

Second, your whole discussion on R6RS implementability is predicated on confusion about the meaning of execution. Execution of a library or top-level program occurs *after* expansion, not concurrently with it.

That is what R6 mandates, yes, and that is the R6 mistake that is central to my point. R6 mandates that execution (semantically post expansion) may not begin until after the implementation is satisfied that expansion will produce no syntax errors. R6 specifies a language in which, for every implementation, there exist macro definitions which will never produce a syntax error, yet whose expansion phase will not terminate in that particular implementation. Among such programs are those which can be fully executed if we add a (consistent) axiom "the expansion of this program produces no syntax errors". Thus, for every implementation of R6, there exist programs which should be executed, but which the given implementation will fail to execute since it lacks the requisite axiom. R6 is bitten by a Goedel/Turing incompleteness/halting-problem bug in its ham-fisted attempt to ensure that its Turing-complete macro system can be statically analyzed for halting properties. Every correct R6 implementation (one that never mis-executes a program) has some correct R6 programs that a different R6 implementation will correctly execute but that it itself can not. Given R6 as specified, choose just one of correctness or completeness of an implementation.

I'll say it again differently: if you give me an R6 implementation then, not easily but definitively, I can show you a program for which we can (a) prove that a *different* implementation would correctly execute the program; (b) prove that your implementation will not execute the program at all. And then you hand me back the better implementation I gave you and we can do the same thing again: I'll show you a program that that implementation won't execute, but that a better implementation can correctly execute. Along the way I will have to prove various independence results using something like Goedel's or Turing's techniques, use these to add a new axiom to your implementation, and thus derive an improved implementation -- but it can be done.

Said again differently: it is *impossible* to write a complete meta-circular interpreter for R6 unless (at the very least) you assume a halting problem oracle as a primitive capability.

You have no problem writing implementations of R6 that are consistent in the sense of never mis-executing a program - but in every case there will be correct programs that your implementation fails to execute at all, but that another implementation can execute.

Yes, Mr. Hsu commented on this topic but he misunderstood the issue. The mere discussion of the issue on the WG1 list has inflamed such passions that the topic has been simply dropped cold for now. But do not mistake the absence of a specific reply to Mr. Hsu on that list for an indication of his correctness.

Also, your claim about the specification of `eval' in R6RS is incorrect. You can see this from the spec [2], where no mention is made of whether expansion even must occur first.

The relevant passage from R6 reads:

If the first argument to eval is determined not to be a syntactically correct expression, then eval must raise an exception with condition type &syntax. Specifically, if the first argument to eval is a definition or a splicing begin form containing a definition, it must raise an exception with condition type &syntax.

For every given implementation, it is undecidable whether or not certain expressions mandate the raising of such an exception and yet, in cases where independence can be shown, eval should proceed anyway. It is the clear intent of that passage that eval should first expand, and then evaluate - in that sense it recapitulates the Goedel/Turing trap of the standard library macro system.

If anyone ever provides a semantics of "Scheme-ish fexprs", it will be provably false. The "simple semantics of lexical scope and lambda" do not enter into it. If you doubt this, produce an implementation of `syntax-rules' in terms of "Scheme-ish fexprs".

I'm not sure how a semantics can be false. Do you mean inconsistent? Do you mean "trivial" in Dybvig's sense? In any event...

Yes, I've waded hip-deep into writing a specification of Schemeish fexprs and showing how syntax-rules can then be defined. There seems to be no special "hardness" to the problem, just tedium. I'm amazed, frankly, that people (not just you) would find it so surprising, but that is the way of the world, I guess. It's just sad that the counter-factual yet passionate responses that the mere suggestion has received have, at least as things stand, banished any consideration of the topic from the WG1 consensus process.

Very many things

Although I'm not on WG1, my guess is that people there are frustrated, as I am, that you insist on assuming that no one has ever thought about macro systems intelligently, nor has any good reasons for the way the development of macro systems in Scheme over the last 20 years has gone.

Your claim about R6RS is similarly foolish, since the R6RS mandates that programs/libraries be fully expanded before execution begins. Therefore, there is no undecidability problem. You may not like that mandate, but it's a part of the R6RS. The fact that there are some programs that could start executing earlier than some macro expansion but for effects (such as non-termination) in later expansion is neither here nor there, since the R6RS rules out such evaluation strategies.

For `eval', you will note that the R6RS fails (probably erroneously) to specify that such an exception must be raised before any evaluation of the expression.

Finally, by "it" I was referring to your claim that

All of the results developed for static hygienic systems are derivable from their redefinition in terms of Scheme-ish fexprs, thanks to the simple semantics of lexical scope and lambda.

which can only be provably false given a semantics for "Scheme-ish fexprs".

please avoid ad hominem

Now, I know that the tone here is not explicitly disrespectful and that we should assume no foul is intended. There are no hurt feelings. I really don't intend a flame-war. But, logically speaking, I am accused here of being a complete idiot, and I'd like to address that:

Although I'm not on WG1, my guess is that people there are frustrated, as I am, that you insist on assuming that no one has ever thought about macro systems intelligently, nor has any good reasons for the way the development of macro systems in Scheme over the last 20 years has gone.

I think that that is a pretty unreasonable reading of what I've written. As some small examples, I would point out that I described the topic paper in these threads as quite lovely and that I've remarked that all of the theories that have developed around static macro systems are directly and usefully applicable to a partial theory of Schemeish fexprs. I've also pointed out that a dual of Schemeish fexprs is present (if a bit imprecisely) in R1RS and that the Kernel author is certainly to be credited for more elaborately developing the idea. I've elsewhere pointed out the presence of Schemeish fexprs in SCM and their duals in other implementations and pointed out how they relate to, for example, the kinds of optimization you see in the Larceny family of compilers. This notion that I am devaluing the work on static macro systems or claiming that nobody has previously thought of Schemeish fexprs is entirely baseless. That claim puts me, personally, in a bad light purportedly for things I've said but without regard to the facts of what I've said - which is why I consider it an ad hominem attack.

Your claim about R6RS is similarly foolish, since the R6RS mandates that programs/libraries be fully expanded before execution begins. Therefore, there is no undecidability problem.

Thanks mainly to Goedel, if you show me a meta-circular expander which purports to be complete, I can show you how to construct a set of syntax definitions which are correct R6, yet which your expander will fail to expand. And I will show you that a complete expansion is correct. And show you how to construct a second, different expander which handles that case. And, then we can loop forever that way: you give me back my improved expander and I'll show you another case it can't handle. You can take my improved expander and make whatever correctness preserving changes you like to it and I'll still give you back a program it can't handle. This is the pitfall of defining a turing complete macro language but then insisting by fiat that it is statically analyzable. Your fiat implies that no implementation can be complete, not even an idealized implementation without resource limitations, ergo, R6 can not be completely implemented.

The fact that there are some programs that could start executing earlier than some macro expansion but for effects (such as non-termination) in later expansion is neither here nor there, since the R6RS rules out such evaluation strategies.

I'm sure that I don't see how non-termination is an effect but for what it's worth, for any correct implementation of R6RS, there exist valid programs that terminate in finite time which that implementation can not execute (because full expansion is impossible for that implementation).

For `eval', you will note that the R6RS fails (probably erroneously) to specify that such an exception must be raised before any evaluation of the expression.

You will please note that R6 fails to aspire to any formal account of EVAL but that the plain English reading of the narrative specification clearly implies such an exception must be raised before any evaluation - which implies that no implementation of EVAL can be complete, for reasons already mentioned.

Finally, regarding your "finally": I have nothing which is either concise or kind to say other than the concise response that you are changing your tune and that LtU user misimoni has a more reasonable approach to this dialog.

ad hominem avoidance

This notion that I am devaluing the work on static macro systems or claiming that nobody has previously thought of Schemeish fexprs is entirely baseless.

What devalues the contributions of those that have worked on existing macro systems is failing to spend any time figuring out what they have done or why they did what they did and rejected the solutions you propose.

Thanks mainly to Goedel, if you show me a meta-circular expander which purports to be complete, I can show you how to construct a set of syntax definitions which are correct R6, yet which your expander will fail to expand. And I will show you that a complete expansion is correct. And show you how to construct a second, different expander which handles that case. And, then we can loop forever that way: you give me back my improved expander and I'll show you another case it can't handle. You can take my improved expander and make whatever correctness preserving changes you like to it and I'll still give you back a program it can't handle. This is the pitfall of defining a turing complete macro language but then insisting by fiat that it is statically analyzable. Your fiat implies that no implementation can be complete, not even an idealized implementation without resource limitations, ergo, R6 can not be completely implemented.

First, metacircularity is not required to be an R6RS implementation. Second, the R6RS does not require the macros to be statically analyzable. It requires the macros to be run before execution begins. It's fine for a macro to fail to terminate, and thus execution to never begin. If you disagree, please provide a set of syntax definitions that meets the conditions you describe for any R6RS implementation (PLT would be most convenient for me, but any will do).

The fact that there are some programs that could start executing earlier than some macro expansion but for effects (such as non-termination) in later expansion is neither here nor there, since the R6RS rules out such evaluation strategies.

I'm sure that I don't see how non-termination is an effect but for what it's worth, for any correct implementation of R6RS, there exist valid programs that terminate in finite time which that implementation can not execute (because full expansion is impossible for that implementation).

Non-termination is widely considered an effect. Also, those programs don't terminate in finite time - they loop for ever during expansion. It is illegal to execute them at all.

I don't think the passage on `eval' requires what you think it does, but that level of language lawyering is probably beside the point. Plenty of implementations of `eval' are complete, for the reasons I describe above. They are, in fact, required to not terminate if the expansion infinite loops.

Last but not least, I see now that my "finally" was perhaps easily misinterpreted. What I meant was that I firmly believe your statement to be false, but it obviously can't be proved to be false without a semantics. If you have such a semantics, I'd be interested to see it. And recall that people have done many more interesting things with macros than `syntax-rules'.

failure to avoid

You wrote "What devalues the contributions of those that have worked on existing macro systems is failing to spend any time figuring out what they have done or why they did what they did and rejected the solutions you propose."

If you were to say instead something like "I wonder if you have spent any time reviewing the history of .... because I think that ...." then I would be able to respond to you in a friendly way and might even choose to do so. But if you wish to rephrase yourself that way, please use my email address not LtU because in this LtU thread, at least from my admittedly biased perspective, you seem to be switching the subject from the topics relevant to the paper under consideration to questions about my character. I would rather not cooperate any further than this on the topic of what a scoundrel I might or might not be, in this forum.

Thanks.

-t

p.s.: The documented and orally transmitted history of the reports is quite fascinating, in its rich detail, at least in my experience examining it, over the decades.

In the end, all is fexprs.

Some years ago, I began to be very annoyed about lisp/scheme macros. The problem is that, unlike functions, they do not exist at execution time. They can't be, e.g., returned from functions, stored in variables, applied, etc. They represented a division of Lisp syntax into two kinds: one well handled, first-class, and with a solid theoretic grounding, and one handled in many varying ways, first-order, and with many completely different theoretic groundings of varying degrees of solidity.

Understand, it was the distinction that got to me most, and not any particular problem with a particular form of macrology (though of course such problems also existed). Differences in properties between functions and macros were and are a glaring inconsistency in a language family whose strength was and is all about consistencies.

So I've worked, intermittently, on a lisp that has only one kind of function - first-class as well as first-order. First-class macros turn out to be functions, of a sort. And a call to them necessarily has all the information you need to call any function, so you can compile all calls the same way. So you can "apply" without knowing or caring whether the thing you're applying is an ordinary function or a function that does tricky things with its argument expressions. It's gone through a lot of versions. It turned out that it allowed a lot more grace in handling function closures as reified environments, and that lexically scoped syntactic abstraction (as in this paper) is a beautiful thing, whether achieved by macros qua macros, or by an expanded universe of function definitions. Ultimately, it has what I realized are pretty close to being fexpr semantics.

So I went back and read the "fexprs are a bad idea" papers (the most well-known of which can be found at http://www.nhplace.com/kent/Papers/Special-Forms.html) and what I found was that they were simply done wrong at the time. The way they were implemented at that time and on those systems, they were in fact a bad idea.

The problem that arose was that their implementation of fexprs failed to encapsulate an environment reference with each argument. This led to ambiguity in environment references in composed fexprs. When a fexpr was used as an argument in another fexpr, it became impossible to identify just one "calling environment" for expanding the arguments, and that implementation wanted to use the same environment to evaluate all arguments. Therefore when fexprs were composed, there was no way to figure out which environment to use for argument evaluation, and in some cases no correct single choice existed.

This problem is just a version of the same kind of namespace problem that was later addressed by promises, delayed evaluation, lazy semantics, lexical environments, environment reification, and alpha-renaming in hygienic macros. A big difference between then and now is that, thanks to the years of work people have put in on macrology, we now have the tools to do fexprs without causing semantic problems. John Shutt and Peter Michaux have both noticed this too - and been more useful to the community than me because they have been more productive in terms of papers and code.

However, the resulting language is clearly not scheme. Scheme requires evaluation of arguments and procedure before the procedure is called, and fexpr semantics mean the procedure must control the evaluation of arguments.

I dropped the WG1 mailing list when it became clear, firstly, that the lisp I was interested in working on could not possibly be any kind of scheme, and secondly that they intended WG1 to be a crippled version (excuse me I mean "subset") of the WG2 language rather than a cleaner, simpler language that derived more semantic power by removing WG2 restrictions. I considered it carefully, then decided that I was likely to contribute more heat than light and that participation would probably just depress me.

Two examples of restrictions whose removal would simplify scheme and make it more powerful are the R6 restriction against lexically scoped module imports (whose advantages are demonstrated by the paper above), and the restriction that macroexpansions be fully decided before execution, which Tom Lord and Sam Tobin-Hochstadt have been "debating" above.

I'll try to explain this in simpler terms. There are syntactically correct R6RS programs, whose macroexpansions can be expanded as far as any particular evaluation needs in finite time, whose generated code for any particular evaluation would run in finite time, which cannot be run on any R6RS system that conforms with the requirement that macroexpansions must succeed or raise an error before the execution of the macroexpanded code begins. That's bad enough. But it's actually worse than that. The sets of syntactically correct programs that particular implementations reject are nonidentical, which means that not only are there undecidable programs, but there is no clear division between decidable and undecidable programs. You can't even decide which programs are decidable and undecidable consistently. Which is which depends on what implementation you are using.

In order to run any syntactically correct program that can run in finite time, an implementation would have to interleave macroexpansion "on demand" with execution. But R6 contains a restriction which forbids doing exactly that.

Increasingly, since scheme seems to have abdicated its place as the lisp language that derives more semantic power from simple constructs with fewer restrictions, there's a niche for another lisp to take up that role. And I'm pretty sure that when another lisp fills that niche, it will be a lisp with fexpr semantics, because they are more consistent (semantically, lispy languages are more consistent when there is only one kind of callable entity, and it is first class) and because the barriers to implementing them well have largely fallen.

oh, brother (Ray)

For the purposes of LtU I want to record the following statements not as some rigorous claims about PLT but rather as a first-hand account of some Scheme WG1 experience from a member of that working group. That is, I'm a member of the Scheme WG1 team and I'm just subjectively reporting on my experience. For LtU, here is some history to remember. For Ray, here is some possible reason to not give up on Scheme quite that quickly or at least in that way. Heat of the moment stuff. As Tom Lehrer once said in a completely unrelated context: "Not only am I prepared to retract these comments, I'm prepared to deny that I ever made them!" He quipped that, of course, speaking into a microphone with tape recorders running...

Ray, don't give up on WG1. There are three possible outcomes: (a) things fizzle out, and WG1 diverges and/or evaluates to naught; (b) WG1 gets its act together; (c) WG1 produces crap. I think that (b) is still a distinct possibility. In the event of (a) or (c), I think you, me, and some others should just quickly bury it in a better solution, to which I think we should apply the name "Scheme (really)".

Here is, to the best of my understanding Ray, what happened so far on the WG1 list re Schemeish fexprs:

I engaged in discussion of the topic with a few WG1 members who seemed to be interested.

Another member resigned, reportedly citing that discussion with indignation as his motivation for resigning.

The chair of WG1 informed me that, from the start, I was a probationary member of WG1 and, in light of the resignation of some other member, I would have to defend myself to the chair or else be expelled from WG1 by the chair. I don't actually recognize the chair's authority to expel me in such a way but on the other hand I don't think any good can come from challenging the question of his authority on that matter -- and so for now, per order of the chair, I shall not discuss Schemeish fexprs in the WG1 consensus process. The kind way to put it is that the chair has requested suspension of the topic on the wg1 list and threatened expulsion of this member otherwise. This member defers, to a degree, on the chair's prerogative for now.

Yes, it's a bit absurd, but it is what it is.

Nevertheless, discussion continues off list. And perhaps more importantly, off-list I continue to write down a more formal account of the very obvious definition of Schemeish fexprs. Shutt did a lot of the tedious heavy lifting already here.... it's all just grunt work to say it well from a Scheme perspective.

As far as I am concerned, there are three possible outcomes:

We overcome the politics and fix core Scheme in WG1, and my discussion and ultimate finalization of Schemeish fexprs prevails.

Or... WG1 fails on that point and I put forward an alternative Scheme standard which ultimately achieves authority over WG1 by acclamation.

Or... both I and WG1 fail by attrition to produce anything, in which case Ray carries on, alongside Shutt et al.

We have in WG1 a bright chair who is, nonetheless, at these early stages, incompetent at organizing. Give it time. We can debug this. The chair can learn and, meanwhile, progress happens anyway. Give WG1 a chance. I think it still has a shot, in spite of early bad indications.

whatever the wg1 situation

i for one am curious to learn more about your fexpr adventures.

Sigh. I had not been aware....

I had not been aware of the happenings on the WG1 list; as I said, I dropped it pretty early.

It sounds, however, like I was right on both of the main reasons why I dropped it. Firstly, given what happened to you (and my firmly held opinions on related topics), I really would probably have contributed more heat than light. Secondly, it sounds like the participation really would probably have caused me to become depressed.

I have a limited capacity for carrying on a technical argument while remaining psychologically healthy, especially with people who don't understand or acknowledge rigorous arguments. It's a handicap I have to acknowledge. I can remain rigorous, professional, and polite for a very very long time, eventually even perhaps producing something beautiful, while steadily getting angrier and more depressed, in ways that tend to be damaging in my personal life. Or I can abandon the professionalism and politeness, vent my anger and depression into the debate, and make what can best be termed a "negative contribution" to the process.

Coding is much better for me than debating. A computer doesn't need convincing and doesn't have an ego to get bruised.

the art of negotiation

It sounds, however, like I was right on both of the main reasons why I dropped it. Firstly, given what happened to you (and my firmly held opinions on related topics), I really would probably have contributed more heat than light. Secondly, it sounds like the participation really would probably have caused me to become depressed.

It's a skill. It's an art. You have to be really careful and extraordinarily diplomatic to be productive in any kind of ad hoc committee environment like this. All it takes is years of practice and lots of patience.

Hopefully, 10 years from now, I'll start to be getting good at it. :-)