Lambda the Ultimate

ll1-discuss on macros
started 11/26/2002; 7:41:56 AM - last post 12/5/2002; 5:39:22 AM
Ehud Lamm - ll1-discuss on macros
11/26/2002; 7:41:56 AM
ll1-discuss on macros
John Wiseman provides a summary of the debate.

One point, which is related to recent discussions on the PLT mailing list, is that in many systems it is hard to debug and trace macro expansions. This makes macro errors nastier than most other errors, and it may be at the root of programmers' distrust of macros.


Posted to general by Ehud Lamm on 11/26/02; 7:48:24 AM

Ehud Lamm - Re: ll1-discuss on macros
11/26/2002; 8:02:34 AM
So it seems Todd doesn't like macros all that much, and really wants languages to be extended (database integration, constraints, parsing and other "disruptive" ideas). Isn't that kind of a double standard? Macros let us create these extensions (e.g., SchemeQL)!

Luke Gorrie - Re: ll1-discuss on macros
11/26/2002; 9:39:44 AM
What I've found interesting in the LL1 macros threads is the suggestion that you can do pretty much anything macros can do just by using closures (or perhaps lazy evaluation), even to the point of similar syntactic sugar. In that sense the main difference between a macro and a function would be much like a compiler and an interpreter, with each able to express the same thing in basically the same way.

If I remember some of the examples right: with-foo could use a closure for the body and wrap it however it wants; define-xxx only needs to be a macro when the define-yyy's it wants to call are macros (because they expect the literal name of what to define and won't take a variable); control structures like 'if' could also use closures. Maybe it's the static parts of Lisp and Scheme - like existing macros and special forms that demand literals for arguments - that make macros so important for them? I find that an interesting thought to ponder.
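
As a rough sketch of that idea in Scheme (with-foo, call-with-foo, setup-foo!, teardown-foo! and do-stuff are all made-up names for illustration):

(define (call-with-foo thunk)        ; the closure version: the body is a thunk
  (setup-foo!)                       ; hypothetical setup
  (let ((result (thunk)))
    (teardown-foo!)                  ; hypothetical teardown
    result))

(call-with-foo (lambda () (do-stuff)))

(define-syntax with-foo              ; the macro version just hides the lambda
  (syntax-rules ()
    ((_ body ...)
     (call-with-foo (lambda () body ...)))))

(with-foo (do-stuff))

All the wrapping logic lives in an ordinary function either way; the macro only buys nicer surface syntax.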

...not to suggest that the difference between a compiler and an interpreter isn't a big one (?). My favourite part of the bits I read was one of Guy Steele's responses, pointing out another type of "expressiveness": http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02120.html

Patrick Logan - Re: ll1-discuss on macros
11/26/2002; 10:44:42 AM
A follow-up to Guy Steele's message addresses Luke's suggestion above that closures suffice...

http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02122.html

A lot of syntax extensions in Scheme are done to hide the lambda keyword. In Smalltalk there is a lower syntactic tax for a closure, e.g.:

Smalltalk => [:x | <body>]

Scheme => (lambda (x) <body>)

Smalltalk also has keyword arguments. A lot of syntax extensions in Scheme exist to provide "English-like" forms; Smalltalk's keyword arguments can be more English-like than Scheme's s-expressions.

Common Lisp has keyword arguments; some Schemes do too.

Lazy evaluation is just an expression wrapped in a (possibly implicit) closure. So...

(delay (+ 1 2))

...is just syntax that wraps the (+ 1 2) in a closure, to delay evaluation and ensure it is evaluated only once...

(let* ((tag (unique-token))   ; unique-token: some value eq? to nothing else
       (result tag))
  (lambda ()                  ; the closure that delays and memoizes
    (when (eq? result tag)
      (set! result (+ 1 2)))
    result))

So with closures you don't need lazy evaluation either, but it comes in handy. Sometimes syntax extensions come in handy.
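
Spelled out as a sketch, with my-delay and my-force as illustrative names (not the built-in delay/force, which also guard against re-entrant forcing), and a freshly allocated list as the unique token:

(define-syntax my-delay
  (syntax-rules ()
    ((_ e)
     (let* ((tag (list 'unevaluated))   ; eq? to nothing but itself
            (result tag))
       (lambda ()                       ; the memoizing closure
         (when (eq? result tag)
           (set! result e))
         result)))))

(define (my-force promise) (promise))   ; forcing is just calling

(define p (my-delay (begin (display "computing") (newline) (+ 1 2))))
(my-force p)   ; prints "computing", returns 3
(my-force p)   ; returns 3 without recomputing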

Luke Gorrie - Re: ll1-discuss on macros
11/26/2002; 10:50:47 AM
You can really cut down on the "syntactic overhead" of lisp closures with pretty lambdas in Emacs.

John Wiseman - Re: ll1-discuss on macros
11/26/2002; 11:06:17 AM
Steele also had a couple of other examples of macros doing things that would be difficult to do with closures alone: basically anything that relies on the "text" of the code handed to the macro, like helpful trace macros or symbolic differentiation (yikes!):

http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg02088.html
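
A minimal sketch of that "text of the code" point, using nothing beyond syntax-rules: a show macro can print the source expression as well as its value, which no function can, since a function only ever sees the value (show is just an illustrative name):

(define-syntax show
  (syntax-rules ()
    ((_ expr)
     (let ((v expr))
       (write 'expr)       ; the macro has the source text...
       (display " => ")
       (write v)           ; ...and the value
       (newline)
       v))))

(show (* 6 7))   ; prints: (* 6 7) => 42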

Ehud Lamm - Re: ll1-discuss on macros
11/26/2002; 11:43:05 AM
Right. Lightweight lambdas can certainly reduce the need for macros (indeed, even awkward encodings of higher-order functions can sometimes be used instead of macros in languages without first-class functions). But I agree with Matthias Felleisen's basic point:
Macros exist for the very same reason that lambda exists (or Lambda): 
to abstract over patterns. 
lambda abstracts over values. 
Lambda abstracts over types. 
Syntax abstracts over syntax. 
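
A standard illustration of syntax abstracting over syntax: short-circuiting or can't be an ordinary function, since a function evaluates all its arguments first, but it's a small pattern for a macro (my-or is just an illustrative name):

(define-syntax my-or
  (syntax-rules ()
    ((_) #f)
    ((_ e) e)
    ((_ e1 e2 ...)
     (let ((t e1))                 ; evaluate e1 exactly once
       (if t t (my-or e2 ...))))))

(my-or #f (+ 1 2) (error "never evaluated"))   ; => 3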

Noel Welsh - Re: ll1-discuss on macros
11/27/2002; 9:38:24 AM
One of the things that people on the ll1 discussion sometimes gloss over is that convenience is a big deal. All Turing-complete languages are equal in the abstract. It's the concrete usability of the language that is important and this is where macros are useful.

Arguments that a language doesn't need macros because it has certain syntactic features (e.g. keyword arguments, light-weight block notation) also miss the point. Macro-enabled languages can (and do) implement those features.

Many of the examples have been of very small macros that don't show the full power of the technique. Better examples IMHO include pattern matching, scsh and aspect-oriented programming.
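
To hint at the flavour of the pattern-matching example (real matching libraries are far richer), here is a toy matcher for two-element lists; match-pair is a made-up name:

(define-syntax match-pair
  (syntax-rules (else)
    ((_ e ((x y) body) (else alt))
     (let ((v e))
       (if (and (pair? v) (pair? (cdr v)) (null? (cddr v)))
           (let ((x (car v)) (y (cadr v))) body)   ; bind the pattern variables
           alt)))))

(match-pair '(1 2)
  ((a b) (+ a b))
  (else 'no-match))   ; => 3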

Patrick Logan - Re: ll1-discuss on macros
11/27/2002; 9:43:36 AM
I agree with Noel absolutely. It's the non-trivial macros that make the feature worthwhile.

Another significant macro I'll add to Noel's list is modules. The Chez Scheme module system (also available in SISC and any other Scheme that syntax-case has been ported to) raises modules to another level of capability above languages like Modula, Ada, Java, C#, etc., which build just one simplistic notion of modules into their core parsers.
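
Roughly, in Chez's surface syntax (a sketch from memory; consult the Chez documentation for the real details), modules are definitions like any other, so they can be nested and imported locally:

(module counter (next reset)        ; export next and reset, hide n
  (define n 0)
  (define (next) (set! n (+ n 1)) n)
  (define (reset) (set! n 0)))

(import counter)
(next)   ; => 1
(next)   ; => 2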

Franck Arnaud - Re: ll1-discuss on macros
11/27/2002; 11:34:58 AM
That macros are useful is another way of saying that metaprogramming (programs that make programs) is useful, and most people can agree with that. The question is how to do it.

I find it a bit suspect that there is a need for a distinct macro language: why can't you write macros in the host language, thus having a "quoting" mechanism rather than a macro system? Or conversely, if the macro language is so good, how can you justify not writing compilers nearly entirely using macros? After all, compilation is a form of metaprogramming.

Seeing the way most macro systems evolve -- starting as a convenience tool for small shortcuts added to a macro-less host language, and ending up by accident with a Turing-complete macro system -- seems to hint that there is something dodgy somewhere with the concept as usually understood. Is there any macro-centric language design (where the macro system was designed first and the core language added afterwards, only to support things that cannot be done with macros)?

I'm also uneasy about what happens in a strongly typed context: the type of a macro seems an elusive concept, which may seriously harm one's ability to reason about programs.

I also wish supporters of macro systems used fewer crap examples; e.g., it's difficult not to throw away a paper that proudly announces that its new macro system lets you implement something completely disgusting like "printf" in a language where there are plenty of cleanly typed, better ways to do it.

So it seems Todd doesn't like macros all that much, and really wants languages to be extended (database integration, constraints, parsing and other "disruptive" ideas). Isn't that kind of a double standard?

Well, metaprogramming can be done with domain-specific languages, not necessarily language extensions, so maybe the double standard lies with those who advocate both macros and domain-specific languages not implemented with macros.

Patrick Logan - Re: ll1-discuss on macros
11/27/2002; 12:35:42 PM
Scheme, and Lisp in general, does provide syntax extensions in the language itself. Look at syntax-case: it is essentially a quoting/unquoting mechanism with expressive pattern matching.
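
For example, a syntax-case transformer is ordinary Scheme code that pattern-matches the input syntax and quotes the output with #'(...); swap! here is just an illustrative name:

(define-syntax swap!
  (lambda (stx)
    (syntax-case stx ()
      ((_ a b)
       #'(let ((tmp a))      ; hygiene keeps tmp from capturing anything
           (set! a b)
           (set! b tmp))))))

(define x 1)
(define y 2)
(swap! x y)   ; now x = 2 and y = 1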

Ehud Lamm - Re: ll1-discuss on macros
11/27/2002; 1:16:12 PM
metaprogramming can be done with domain-specific languages

I am not sure I understand what you are trying to describe.

Franck Arnaud - Re: ll1-discuss on macros
11/27/2002; 1:47:51 PM
I am not sure I understand what you are trying to describe.

Compare the ordinary 'yacc' packages (a DSL) with the same thing done in C++ templates (code generated at compile time), or with the parser combinator packages in Haskell (which, incidentally, do not seem to require macros despite not being a separate DSL).

Ehud Lamm - Re: ll1-discuss on macros
11/27/2002; 2:57:34 PM
Oh I see. You are talking about code generators. You might enjoy a previous LtU discussion about DSLs and program generators.

Franck Arnaud - Re: ll1-discuss on macros
11/28/2002; 4:33:59 AM
This thread is about code generators! Macros are code generators. What I have trouble understanding is why people make such a clear distinction between the two. I fear the distinction may be harmful: people concentrate on the syntactic devices that allow mixing meta-program and program (which seems to be the main difference; code generators usually keep them separate) and neglect the semantics, so that macro languages end up not being good languages to write compilers in -- which is what they should be.

Frank Atanassow - Re: ll1-discuss on macros
11/29/2002; 6:08:50 AM
I don't entirely agree with Ehud or Matthias Felleisen that macro abstraction is comparable to lambda-abstraction.

In fact, there are two distinct uses for LISP-style macros: syntactic extension and partial evaluation.

Syntactic extension just changes the concrete syntax. It lets you write things in a different order, abbreviate things and stick in keywords to clarify the role of syntactic components.

In ML and Haskell, you can define new infix operators. Macros in this sense are like a more general mechanism with more control over precedence, plus the ability to define what are sometimes called `distfix' (distributed fixity) operators. Syntactic extension does not increase expressive power.

Partial evaluation, OTOH, does. It lets you subvert the usual evaluation policy and evaluate some parts of a program statically and use the results in the dynamic stage. The last bit is important; I might do some arithmetic computation in a macro, for example to make sure I've passed it enough arguments, but the result of that computation is not used dynamically, so that is not an example of partial evaluation in this sense.
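
A tiny sketch of the contrast in syntax-case terms (ct-add is an illustrative name, and this version only works when both arguments are literal numbers): the addition happens entirely at expansion time, and its result is what the dynamic stage uses:

(define-syntax ct-add
  (lambda (stx)
    (syntax-case stx ()
      ((k a b)
       ;; compute at expansion time, emit the result as a literal
       (datum->syntax #'k
                      (+ (syntax->datum #'a)
                         (syntax->datum #'b)))))))

(ct-add 1 2)   ; the compiler only ever sees the literal 3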

Syntactic extension is just that: syntactic. It only affects the concrete syntax. You can maintain the same abstract syntax and semantics, and syntactic extension just provides more ways of mapping source texts to abstract syntax trees.

Partial evaluation is a semantic feature, as becomes evident in a typed language, where you must introduce a new type corresponding to "code" (of some type) which gets evaluated in a later stage. "Code" is basically a new sort of value in the semantic domain, which corresponds to the abstract syntax; you can create it, manipulate it, deconstruct it, etc. (See, for example, MetaML or MetaOcaml.)

There is a gray line between syntactic extension and partial evaluation which has to do with the role of variables and application (actually, redexes in general, plus effects and recursion). I can use LISP-style macros to duplicate an application, for example:

(defmacro twice (e) `(+ ,e ,e))
(twice (* 1 2))

Now the question is: should the computation be shared or duplicated? In LISP/Scheme it is duplicated, because this expands to

(+ (* 1 2) (* 1 2))

but a reasonable thing to do here is to stick in a let-expression to share the computation, i.e. to expand to:

(let ((e (* 1 2))) (+ e e))

(Every good macro programmer does this manually in most circumstances anyway.) In a lazy language you get the same problem, only when a variable is duplicated (because its binding will get evaluated twice). Similarly if you discard computations.
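
For instance, here is a sketch of the sharing version of twice in Scheme's syntax-rules, where hygiene supplies the fresh name automatically (in Common Lisp one would reach for gensym):

(define-syntax twice
  (syntax-rules ()
    ((_ e)
     (let ((t e))     ; hygiene: this t cannot capture a user's t
       (+ t t)))))

(twice (* 1 2))   ; (* 1 2) is evaluated once => 6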

Ideally, one would like to separate syntactic extension from partial evaluation, because that improves orthogonality, separation of concerns, optimization, etc. But subtleties like this are why it is difficult to do in practice.

Luke Gorrie - Re: ll1-discuss on macros
11/29/2002; 6:44:37 PM
<This space intentionally left blank>

No way I can see to delete the message, which I'd posted on the wrong page!

Ehud Lamm - Re: ll1-discuss on macros
11/29/2002; 11:27:13 PM
Luke, trying to read your messages causes a null pointer exception...

Noel Welsh - Re: ll1-discuss on macros
12/5/2002; 5:39:22 AM
Or conversely, if the macro language is so good, how can you justify not writing compilers nearly entirely using macros?

http://citeseer.nj.nec.com/peytonjones96compiling.html

Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of "compilation by transformation" as its war-cry, trying to express as much as possible of the compilation process in the form of program transformations.