When to create syntax?

As opposed to procedural abstraction, higher-order functions, etc. Nice answer from Patrick Logan (who was once a member of the LtU team but seems to have forgotten about us...)


Macros vs (Lazy) Procedures

I'm interested in what the differences are between macros and procedures which take their arguments as unevaluated expressions. For instance, in Tcl you can avoid eager evaluation by enclosing the argument in braces {}, in which case the procedure gets passed the unevaluated string. So, I would write the when-ready example as:

proc when-ready {body} {
    wait-until-ready $::timeout  ;# timeout assumed to be a global variable
    uplevel #0 $body
}
when-ready {
    puts "I'm glad this is finally ready!"
    do-something
}

when-ready is just an ordinary procedure. The downside in Tcl is that it falls to callers of a function to know whether it expects an unevaluated script/string or an evaluated result (not to mention the games with uplevel to fix scope). However, IIRC, 3-Lisp allows you to do similar things with its reflective procedures, which also get passed an unevaluated expression (+ continuation and environment), but at the level of abstract syntax rather than strings. Here, the caller doesn't have to be aware of the difference between eager or lazy evaluation of the arguments, at least not syntactically (it may make a big difference to semantics, which perhaps is an argument in favour of Tcl's approach, which forces callers to be aware of this).

At this level, then, it would seem that lazy passing of unevaluated expressions (in some form in which the expressions can be explicitly manipulated) fulfills a similar role to that of macros for syntactic extension. But again, IIRC, 3-Lisp then also had macros.

So, I'm slightly puzzled as to what the pros and cons of each approach are. Is a macro just a "lazy procedure" which is evaluated at compile-time rather than runtime? If you had these kinds of lazy procedures, could you implement macros with just a partial evaluator? Are there principled reasons to choose one approach or the other, or should a language provide both?
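As in Tcl, the caller has to know to wrap the body, so the delay is visible at the call site. A minimal sketch of such a "lazy procedure" in Python (chosen purely for illustration; the names when_ready and wait_until_ready are hypothetical, mirroring the Tcl example):

```python
# The body is passed as an explicit thunk (a zero-argument callable),
# so its evaluation is delayed until the procedure chooses to force it.

def wait_until_ready(timeout):
    pass  # stand-in for a real readiness check

def when_ready(body, timeout=5):
    wait_until_ready(timeout)
    return body()  # force the delayed body now

print(when_ready(lambda: "I'm glad this is finally ready!"))
```

Note that nothing here manipulates the body as syntax; the thunk is opaque, which is exactly where this picture and macros part ways.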

I did once start reading about 3-Lisp descendants like Black, which seemed to explore the use of partial evaluators with reflective procedures. However, there seemed to be a lot of conceptual baggage to do with the metaphysics of infinite reflective towers and so on. For me, this seemed to obscure a simpler, more practical relationship with macros and syntactic abstraction, but perhaps I just need to spend more time fully understanding the concepts.

"Lazy passing of unevaluated

"Lazy passing of unevaluated expressions" is certainly similar to macros, and the connection with partial evaluation is good. The only real hole in that picture is that naive passing of unevaluated expressions loses the original expression context, a shortcoming that macro systems have worked very hard to overcome. So the typical modern macro system (Scheme's syntax-case, for example) provides a much richer notion of the contexts of both macro definition and macro expansion. Since modeling this context typically relies on heavy interaction with compile-time data structures or semantic structures (symbol tables, binding environments, etc.), capturing the same semantics via run-time passing of unevaluated code is more problematic.
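The context loss shows up even in a toy setting. A hedged Python sketch (run_later and the variable names are made up): code passed as text and evaluated inside the callee picks up the callee's bindings, not the ones the caller meant:

```python
def run_later(code_text):
    x = "callee's x"          # shadows whatever the caller had in mind
    return eval(code_text)    # evaluated in run_later's own scope

def caller():
    x = "caller's x"          # what the author of the code text intended
    return run_later("x")

print(caller())  # "callee's x": the original expression context is lost
```

Hygienic macro systems exist precisely to make the analogous capture impossible.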

But the idea of phased computation is really powerful, including macro expansion on one side and staged evaluation (dynamic eval) on the other, with a permeable boundary between static and dynamic phases (perhaps based on partial evaluation). This is IMHO something of a holy grail.

You might enjoy Macroexpansion Reflective Tower, if you haven't already seen it.

Thanks for the link

The only real hole in that picture is that naive passing of unevaluated expressions loses the original expression context, a shortcoming that macro systems have worked very hard to overcome.

I thought this would probably be the main difference. As I alluded to in my post, you have to play tricks with [uplevel] in Tcl which is not a satisfactory experience. I've been meaning to really get to grips with Scheme as a day-to-day language for a while now, so that will give me a chance to get a handle on using a macro system and how it compares to other approaches.

Thanks for the link, I hadn't come across that particular paper before.

Oh man returning of lambda forms

Oh man, returning lambda forms from functions in Emacs Lisp is painful, because there are only dynamic environments, and so no lexical closures. To fake a closure, instead of using (lambda (x) x) (which actually evaluates to itself when not being applied), you have to use quasiquotation.
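A rough analogue of that trick in Python rather than Emacs Lisp: instead of capturing the variable, splice its current value into a quoted code template, the way `(lambda (x) (+ x ,n)) splices a value under quasiquote. Here eval on an f-string stands in for quasiquotation, and the splice only works for simple literal values like numbers:

```python
def make_adder(n):
    # No capture of n: its current *value* is pasted into the code text.
    return eval(f"lambda x: x + {n}")

add3 = make_adder(3)
print(add3(4))  # 7
```

A real lexical closure (`lambda x: x + n`) would make this machinery unnecessary, which is the poster's complaint.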

Scala's implicit closures

In my limited understanding and knowledge, I think that one of the best approaches to passing unevaluated expressions to functions is Scala's automatic closures, which let you build new control structures while avoiding the need for the caller to explicitly state that an expression must be evaluated lazily.

Just want to mention

that that code looks a lot more like the Lisp code:


(define (when-ready body)
  (wait-until-ready time-out)
  (if (not (ready?))
      (eval (cons 'begin body) (interaction-environment))))

(when-ready
  '((display "I am glad this is finally ready!")
    (newline)
    (do-something)))

than the code in the article.

Yes

Although I don't see the need for the (if (not (ready?)) ...) condition. But the point I was making was that doing things this way is just as easy as with macros. In fact, it's slightly less code and to my eyes is easier to read. The problem, as pointed out, was scope. The Tcl version evaluates its script in the global scope. I could have made it evaluate in the dynamic enclosing scope by using [uplevel 1 ..]. That is usually the Right Thing, at least for Tcl.

Going further, in the same blog post as the topic of this discussion, there is a link to this post describing further uses of macros where "[p]rocedures would not work...". Well, as far as I can see procedures would work for both of those examples. I'm pretty certain both could be easily written in Tcl (it would be easier if I could understand the lisp code that implements the vector example, but it looks like Perl to me). The only big issue is getting the scope right. Tcl solves this with uplevel, which seems a much simpler solution than a full-blown macro system.
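Tcl's [uplevel 1] can be approximated in Python with frame introspection. A CPython-specific sketch (sys._getframe is an implementation detail, and the names are hypothetical):

```python
import sys

def when_ready(body_text):
    # Rough analogue of [uplevel 1 $body]: evaluate the body using the
    # caller's bindings rather than when_ready's own local scope.
    frame = sys._getframe(1)
    return eval(body_text, frame.f_globals, frame.f_locals)

def demo():
    message = "finally ready"
    return when_ready("message.upper()")

print(demo())  # FINALLY READY
```

The point stands that this is a library-level fix for scope, much lighter than a full macro system, though it only covers reading the caller's bindings, not hygienically extending them.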

Well, I wasn't really contesting your point.

Just thinking that it might be good to point out that you can do it that way in Lisp too. But as you pointed out, you get problems with scope.

And by the way, you probably can write a macro in Lisp that lets you write macros in the Tcl style without having any scope problems.

Another example

along the same lines as Patrick's comment:
CLIM Conventions for macros:

For a macro named "with-environment", the function is generally named "invoke-with-environment".

Example:

(defun invoke-with-foo (arg cont)
  (funcall cont)
  arg)

(defmacro with-foo (arg &body body)
  `(invoke-with-foo ,arg
      (lambda () ,@body)))
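The same convention can be mimicked in a language without macros by calling the invoke- half directly and writing the lambda by hand. A small Python sketch using the hypothetical names from the Lisp example above:

```python
def invoke_with_foo(arg, cont):
    # Run the body, then return the original argument,
    # matching the Lisp invoke-with-foo above.
    cont()
    return arg

# Without a with-foo macro, the caller wraps the body explicitly:
result = invoke_with_foo(42, lambda: print("body runs first"))
print(result)  # 42
```

The macro's only job in the CLIM convention is to hide that explicit lambda; all the real work lives in the function.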

More than Performance Implications

Most of the talk about lazy versus strict evaluation has been related to performance issues. For that sort of thing, wouldn't it make sense to try to move that off to a different level, specified separately from the algorithm (as RDBMS folks often suggest). After all, if I'm trying to get an algorithm correct, I don't really give a damn about performance, and if I'm working on performance, I would like to be able to muck about with transformations that guarantee that the algorithm won't be changed. Is this sort of thing possible to do?

Are there other, non-performance-related, reasons to prefer strict over lazy evaluation? I certainly find lazy easier to understand, since it's much more consistent. It's always irritated me that in LISP and Scheme there is no way, looking at an arbitrary form, to tell what the order of evaluation is. You have to examine enough context to know whether the form is a standard function call or a macro. Haskell and Smalltalk seem more elegant for not having this problem. (And in Smalltalk, it even eliminates syntax for things like "if".)

It can change correctness

Changing evaluation order in the presence of effects can change the result. And almost all practical PLs have effects (even if only in form of non-termination).

So I guess the possibility for separating evaluation order from the code proper is minimal (it is not an aspect; or maybe aspects are not really separable either... :-) ).
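A tiny Python illustration of the point: once effects are involved, whether an argument is evaluated strictly or left as an unforced thunk is observable, so evaluation strategy cannot be factored out of the program's meaning:

```python
log = []

def noisy():
    log.append("evaluated")
    return 1

def first_strict(a, b):
    return a            # but b was still evaluated at the call site

def first_lazy(a, b):
    return a()          # b is a thunk and is never forced

first_strict(0, noisy())
assert log == ["evaluated"]   # the effect happened anyway

log.clear()
first_lazy(lambda: 0, lambda: noisy())
assert log == []              # the effect never happened
```

Replace the side effect with non-termination and the two strategies even differ on whether the program produces an answer at all.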

To each his own

Well, to each his own. For me, strictness is so intuitive that I can only understand lazy code by either pretending it's strict (which breaks down quite soon) or by mentally wrapping everything in sight with a thunk (which makes the code insanely complicated).

But I don't understand what this has to do with order of evaluation. Lisp prescribes left-to-right order for function calls, but Scheme explicitly states that no ordering can be assumed (the semantics even have a randomizer to prevent you from reasoning about evaluation order), though both are explicitly strict.

I agree that sometimes it'd be nice to tell a macro from a function call right off, but if you want that, use a naming convention.

not quite right...

You have to examine enough context to know whether the form is a standard function call or a macro. Haskell and Smalltalk seem more elegant for not having this problem. (And in Smalltalk, it even eliminates syntax for things like "if".)

Even in a lazy language, you still need to know the difference between macros and function calls in order to reason about the code. No matter the evaluation strategy and order (eager/lazy, left-to-right/unspecified), macros still introduce a phase distinction. (You might want to look at Composable and Compilable Macros, although there are lots of other good papers on the subject.)

Of course, lazy evaluation obviates the need for macros in many cases, but they're certainly still useful, and if you have them, you still have the same problem...

Implicit lambdas in Scala

A neat feature of Scala is "Automatic Closure Construction".

If a function parameter expects a closure and you're not passing one, the compiler will automatically wrap the argument in a closure.

Algol 68 had that

It was called "proceduring", and was used to replace Algol 60's call-by-name arguments. However, it was removed in Algol 68 Revised, because of the possibility of loops in the automatic coercions.

Ha!

it was removed in Algol 68 Revised, because of the possibility of loops in the automatic coercions

Sounds amusing. I can imagine how this can happen. Still it would be amusing to see the reaction the developers had at the time. Is this reaction recorded in writing somewhere, I wonder?

Pico has proceduring too

Not the editor, but the Scheme front-end (or top-end), which is itself very interesting, I think. You specify which formal parameters are to be procedures (if you want, it's not a requirement), and matching actual parameters are automatically wrapped in thunks, just as in Algol 60.