Mildly Extended MixFix

This started as a response to Fortifying Macros, but then I decided it wanted to be a new post.

As some of you know, BitC's design goals include indirect support for the processes that produce robust and secure software. For example, we consider impact on code audit when evaluating language features and ideas. Because of audit in particular, I've been very resistant to adopting a macro system. I've also had real reservations about MixFix, but we ended up adopting it. In the process, two mildly clever ideas emerged that may interest people here. Neither is implemented yet, but I expect the first one to go in later today.

The first pertains to macros (thus this post). In most mixfix systems, all "holes" are equal. In ours, we decided to have two hole markers. One behaves in the usual way. The other converts its argument into a thunk. Thus, the declaration:

infix precedence _and#_

declares and to be a quasi-keyword whose first argument is eagerly evaluated and whose second argument should be wrapped in a nullary lambda. The corresponding signature for _and#_ is:

_and#_: fn (bool, fn () -> bool) -> bool

Particularly when this type of specification is combined with the automatic block-insertion behaviour of our layout implementation, a surprising number of constructs that would otherwise require macros can be fabricated. I suspect that the mechanism will ultimately be abused, but it goes a long way toward reducing the size of the core language, and it offers a weak form of "poor man's lazy evaluation".
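
To make the mechanism concrete, here is a minimal sketch with Scala standing in for BitC (ThunkedAnd, and, and expensive are hypothetical names): the thunked operand arrives as a nullary function, so a user-defined short-circuit "and" needs no macro.

// Sketch only: a Scala analogue of _and#_.
object ThunkedAnd {
  // Mirrors the signature _and#_: fn (bool, fn () -> bool) -> bool
  def and(lhs: Boolean, rhs: () => Boolean): Boolean =
    if (lhs) rhs() else false

  def main(args: Array[String]): Unit = {
    def expensive(): Boolean = { println("rhs evaluated"); true }
    println(and(false, () => expensive())) // false; "rhs evaluated" never prints
    println(and(true, () => expensive()))  // prints "rhs evaluated", then true
  }
}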

The second idea is named syntax tables. A syntax table is simply a named set of mixfix rules. While they are not first-class values, naming them allows them to be imported across module boundaries and/or explicitly placed in effect. This means that one can do something like:

syntax SpecializedSyntax is
  ...mixfix rules here...

def f(x) = 
  let y = 5 in
    using syntax SpecializedSyntax
      arbitrary code

Unless it is specified as extending the default syntax table, a new syntax table is essentially bare (a few core constructs are implemented using internal mixfix rules; those are always present). This means that you don't get any mixfix syntax in such a table, so you are free to completely rearrange and/or redefine operators and precedence according to requirements. At the same time, major shifts in syntax are visibly signalled to the potential auditor, and confusion that might result from strange ad hoc arrangements of precedence is precluded.

It also means that mixfix doesn't "leak" across module boundaries without an auditor-observable syntax indicating that a change has gone into effect.

Finally, the ability to wipe the slate and rebuild the entire precedence arrangement means that the poor, struggling syntax developer isn't constrained to shoehorn their expression language into the existing precedence hierarchy.

Most of my concerns about mixfix from an audit perspective had to do with the fact that unqualified import could engender fairly substantial surprise. Naming the syntax tables and incorporating an explicit introduction syntax alleviated most of my concerns.

These may both turn out to be dumb ideas, but they were at least thought-provoking to me.


In most mixfix systems, all

In most mixfix systems, all "holes" are equal. In ours, we decided to have two hole markers. One behaves in the usual way. The other converts its argument into a thunk. [...] and it offers a weak form of "poor man's lazy evaluation".

The extreme would be languages like OBJ-3, which give the power of strictness analysis to the programmer, not the compiler.

Finally, the ability to wipe the slate and rebuild the entire precedence arrangement means that the poor, struggling syntax developer isn't constrained to shoehorn their expression language into the existing precedence hierarchy.

Is this such a big deal? Generalized left-corner parsing, scannerless generalized LR parsing (SGLR), and parsing expression grammars all allow relatively easy ways to embed DSLs. One issue the Fortifying Macros paper touches upon is the ability to produce good error messages.

Regarding your syntax tables, you may want to look at how Maude works, since Maude is backed by a very, very strong module system where you can re-export a module with new syntax, and you can also define views on modules to extract only the parts of the module you really care about.

Not a big deal - which is the point

Is this such a big deal?

No, it's not. It's neither a significant implementation change nor a conceptual leap of any sort. Which is more or less the point - we tend to look here (on LtU) at fairly extreme design positions, and I was pleasantly surprised at how much power was embodied in this relatively weak approach.

As to syntax tables, yes, I do think that's significant in a mild sort of way. It's not particularly new -- as Brandon points out below, Coq has something similar -- but I think it's significant from a usability perspective. There really is a problem with current mixfix systems where existing precedence cannot be rebound and/or re-arranged. The ability to just chuck them and start over is actually pretty useful.

But no, none of it's magic in any sense.

What? Precedence is not enough for disambiguation

You can still yield multiple parse trees if two operators have the same precedence. For example, if two distinct operators share a precedence level and no relative associativity is declared between them, a OP1 b OP2 c admits both (a OP1 b) OP2 c and a OP1 (b OP2 c). Beyond just associativity, you need a way to group expressions so that the operands have clear precedence in comparison to the operator.

If your argument is that the programmer shouldn't be able to specify precedence among operators (anticipating a potential objection, not introducing a strawman here), then I am not sure what the point of syntax tables would be in the first place.

As I said, every Maude module inherently supports what you are talking about vis-à-vis syntax tables, but it is probably better thought out. It is not so much about the ability to just chuck out expressions as about the ability to very easily test the ambiguity of a syntax definition in a Maude module via the parse command. The parse command inherently understands whether the example can yield multiple parse trees or not. That is "auditor-observable".

Usability Principle: Any language that supports syntax definition needs to make it braindead easy to unit test that definition.

Maude allows mixfix definition through the use of underscores (referred to as underbars in Maude). It does not share the problems "with current mixfix systems where existing precedence cannot be rebound and/or re-arranged". See Chapter 3 of All About Maude, or Chapter 3 of the Maude Manual (free).

Which is more or less the point - we tend to look here (on LtU) at fairly extreme design positions, and I was pleasantly surprised at how much power was embodied in this relatively weak approach.

I was mentioning OBJ-3 to give you a compass and map on which to view your pursuit. I am agnostic on what you actually choose, at least for the moment. By all means, experiment.

Talking past each other

I think we are talking past each other. The problem I was talking about concerns the design of new syntax. What I was trying to say in regards to precedence is that few (any?) current mixfix systems provide a way to drop or change a pre-existing mixfix declaration (e.g. one provided in the prelude). So in many languages providing mixfix, it is possible to get into a situation where you want to place a new operator "between" two existing operators, but the pre-existing precedence levels don't permit you to do so.

So the ability to start with a clean syntax table and put into it only what you want is useful. As has been mentioned elsewhere here, it isn't new - Coq has such a mechanism.

I'm not seeing where Chapter 3 of the Maude specification addresses this concern, but I wonder if that pointer wasn't another case of us talking past each other. I do appreciate the pointer to the language reference. It's something I should look at more closely.

The ability to specify clean precedence for a new expression syntax is entirely orthogonal to the audit problem. The issue for audit is simply to ensure that whatever expression syntax is currently in effect is visibly called out in the input source code.

Coq

The scope control and syntax extension features you are considering sound very much like those in Coq.

The Coq notation system identifies a bit of syntax (which will be considered a single production by Camlp5) and an expression pattern. For example, the declaration

"{ A } + { B }" := (sumbool A B) : type_scope.

allows the nicer "{a=b}+{a<>b}" for the type of booleans associated with propositional facts (incidentally, = and <> are also infix notations, which hide an implicit type parameter). No choice or computation is allowed in the interpretation, just a new surface syntax for the given expression. The notation will automatically be used for printing inferred types or expressions.

Coq has pretty good features for looking up these notations, browsing the currently active and the available syntax extensions, and printing code without using notations when requested (and incidentally for printing code with inferred type arguments shown).

Notations are organized into "interpretation scopes", which can be invoked for a single expression like (1 + 2 * 3)%Z, or activated for the rest of a section or module. More implicit features you might not like also allow notations to declare that some of their subexpressions should be parsed in particular notation scopes, and for modules to export a scope in such a way that an unrestricted "Require Import" of that module will open the syntax in addition to importing all the symbols from the module (on the other hand, you probably don't allow unrestricted imports in code that needs careful auditing anyway).

Coq not

Coq probably is not the best guide here. I find it way too easy to make its parser crash with internal errors through some rather harmless notation definitions.

Scala has something similar

The thunk idea reminds me a lot of how Scala handles lazy arguments. If you pass an argument to a method that expects a function that returns that argument type, it automatically wraps the argument in a function that evaluates to the original expression. It always seemed like a really nice solution for letting users define control-flow structures without needing a complex macro system.

So, I don't think your idea is dumb. I like that you're trying to consider readability at the same time as allowing some syntactic extension. Most people seem to go all the way in one direction or the other (i.e. Lisp or Go).

Implicit cast ++ungood

I had read about this, but I consider this sort of implicit casting behaviour undesirable. In fact, it was probably my "ick" reaction to what Scala was doing that prompted me to think of thunking holes.

The introduction of the thunk in BitC mixfix is unconditional. It's not driven by a type mismatch.

Not quite

If you call a method that expects a function of type () => X with an expression of type X then you get a type mismatch.

scala> def foo(f : () => Int) = f()
foo: (f: () => Int)Int

scala> foo(3)
<console>:6: error: type mismatch;
 found   : Int(3)
 required: () => Int
       foo(3)
           ^

If you call a method that expects a call-by-name parameter then you do get an implicit thunk.


scala> def bar(x : => Int) = x
bar: (x: => Int)Int

scala> bar(3)
res1: Int = 3

The call-by-name type syntax does look a bit like a function type, but it's not used as a function; it's used as a call-by-name expression.

Some side effects can show the thunking behavior

scala> def baz(x : => Int) = {
     |    println("method starting")
     |    x
     | }
baz: (x: => Int)Int

scala> baz({
     |   println("thunk called")
     |   3
     | })
method starting
thunk called
res2: Int = 3

Oops

Oops, my mistake. This is what I get for only having a book/blog knowledge of Scala.

Separate infix and thunking

I would rather have separate infix and thunking constructs.

A good thing about mixfix (e.g. Agda's mixfix) is that the operators can be seen as regular function declarations: _and_ is just a specific kind of name for a function, and the fact that (a and b) desugars to (_and_ a b) is simple and clean.
If you add an optional thunking behavior, things are not so simple. Even with control of mixfix rule scopes, the reader will often encounter uses of unknown (or temporarily forgotten) mixfix operators. In that case, not being sure of the evaluation rules can be a pain.

It is not necessarily difficult to add a light syntax for thunking. For example, I was thinking of \( _ + 1 ) as syntactic sugar for fun x -> x + 1. In that setting, \( e ) (where the hole _ does not occur in e) would be the degenerate case ( fun x -> e ). In essence, you have a concise sugar for explicit thunking.
You would then write a and \(not b) instead of a and (not b).
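
For what it's worth, Scala already has rough analogues of both pieces, which may serve as a sketch of how this feels in practice (ExplicitThunks, inc, and and are hypothetical names):

// Sketch: Scala analogues of the proposed \( ... ) sugar.
object ExplicitThunks {
  // Placeholder syntax, roughly like \( _ + 1 ):
  val inc: Int => Int = _ + 1 // desugars to x => x + 1

  // The degenerate, hole-free case is just an explicit thunk:
  def and(lhs: Boolean, rhs: () => Boolean): Boolean = lhs && rhs()

  def main(args: Array[String]): Unit = {
    val b = false
    println(inc(41))             // 42
    println(and(true, () => !b)) // "a and \(not b)", spelled explicitly
  }
}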

Still regular declarations...

A good thing about mixfix (eg. Agda's mixfix) is that they can be seen as regular function declarations...

I think that's still true for mixfix with thunking. _and#_ is just a procedure name, but one whose second argument is thunked (indicated by the #_).

My general sense is that thunking is a feature best used lightly, but sometimes used helpfully. Not sure if we'll keep this or not, but it's been interesting to play with.

Sorry, but that seems like a red herring

Because of audit in particular, I've been very resistant to adopting a macro system.

From where I sit, the WHILE loop is an example of a macro. How does having WHILE in the language negatively impact auditing? In fact, WHILE is just one simple example that shows that macros can greatly enhance the auditing process - without WHILE, you'd have to audit on the level of GOTOs.
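
To make the example concrete: whether provided as a macro or (as in the thunking approach above) as a plain function, WHILE is a small definition. A Scala sketch using by-name parameters, with UserWhile and whileLoop as hypothetical names:

// Sketch: WHILE as an ordinary function, with by-name parameters
// playing the role of thunked operands -- no macro machinery.
object UserWhile {
  @annotation.tailrec
  def whileLoop(cond: => Boolean)(body: => Unit): Unit =
    if (cond) { body; whileLoop(cond)(body) }

  def main(args: Array[String]): Unit = {
    var i = 0
    whileLoop(i < 3) { println(i); i += 1 } // prints 0, 1, 2
  }
}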

Furthermore, when I look at Java code, it often looks like the expanded results of macros, and is very bad for auditing. Of course, higher-order functions remove some of the drudgery, but it is my experience that well-written macros can greatly enhance code clarity, with WHILE being just one small example.

One example not trend

The while macro is an example of a well-motivated macro that is easy to build and easy to use. Unfortunately, that makes it an exceptional case. The majority of macros, in my experience, are either clever or unsafe or both. Clever, in this context, is a Very Bad Thing.

So I think you're trotting out the exception that proves the rule here.

CLOS, then

What about Common Lisp's defclass, defmethod, etc?

These macros provide the user interface to the CL object system. Each of them expands to various helper routines that manipulate the object system (as described in The Art of the Metaobject Protocol). Without these more or less cosmetic macros, the system would be effectively unusable and unauditable. Note how each of them presents a convenient mini-language that is "parsed" at compile-time - using only functions for the same tasks would lead to contortions.

Implementing object systems is of course not a typical application programming task. But still, the manipulation of complex structures is, and the CLOS macros show that it's possible and economical to implement "user interfaces" for such manipulations with macros. And I don't see how they are clever or unsafe.

I think that macros are one of the greatest tools in any language for bootstrapping it from a small and simple core. The macros should be hygienic and the macro expansion should have a strong phase separation, to make it hard to write macros that are unsafe. SRFI 72 and PLT Scheme's macro systems are great examples of such systems.

Manuel: I'm aware that

Manuel: I'm aware that macros can be used well. Heck. I'm the guy who implemented the memoizing defmacro for Franz Lisp in 1981. There's no question that there are good uses for them.

But at the end of the day, a properly written hygienic macro is bloody hard to write, and I've seen an awful lot of bad ones. So it's something we're leaving out of BitC - at least for now.

Pet peeve

OK, Jonathan. ;) It's just that the claim that macros lead to auditing problems is one of my pet peeves.

Can you point me

... at a set of auditing tools that have done a good job providing support for macro audit? There may well be tools out there that I don't know enough about.

The men who stare at lines of code

Hm... by auditing I understand the scrutinizing of the source. For something more dynamic, you might want to look at the paper Debugging Hygienic Macros, which shows the tools the PLTers have developed for debugging and studying macros.

Tools these days

These days there are a lot of tools involved, because human scrutiny isn't very reliable. Thanks for the paper pointer. I look forward to reading that.

Hard to write

a properly written hygienic macro is bloody hard to write, and I've seen an awful lot of bad ones

How exactly does this distinguish macros from higher-order functions, or complex data structures, or really anything we might want in software at all? Note that this is the point Guy Steele makes in the "Macros Make Me Mad" discussion.

It's what Jonathan Bachrach wrote

... in that thread here:

The vast majority of Lisp macros that I ever wrote could
(and therefore should) have been done without using macros
had there been available the proper lambda stuff (lexical
scoping, "funargs", and so on). Macros would still have
been very useful and important but they would have been used
far less often...

So there are two points about this. The first is that every language has to make choices about where to draw lines on what not to include. For the moment, this is mine. It may change.

The second is that, in my experience, programmers who understand staged compilation are rare, and programmers who understand enough to grok the problems of macro hygiene are rarer still. Sometimes it's better to decide that certain transformations should be done outside the language using some other tool.

I'm not sure I've drawn the right line in BitC. There are places in the Coyotos kernel, for example, where a proper macro system would have saved me a bunch of hassle overcoming deficiencies in C. Experience will tell.

The one thing I am certain about is that "experimental" features are very hard to retire once you deploy them. It's better to make "macro" a reserved word now and come back to this question later than it is to rush a poorly thought out macro system into deployment.

Jack

I find that Bachrach quote very strange in this context, since the major languages with macros (CL and Scheme, now Racket/Clojure as well) have all had all of the pieces of lexical scope right for more than 20 years. And yet their users write macros all the time.

No one is saying that you've designed a bad language because it doesn't have macros. Far be it from me to tell someone what's missing from their language - I've been on the receiving end of that too many times. But those of us who use macros every day to program at a higher level of abstraction (abstraction being the most valuable audit tool) would appreciate you not deriding them unnecessarily.

Don't think I did that

I don't think I've derided macros. In fact, I've been quite consistent in saying that they are powerful and valuable when properly used. What I have questioned is whether most users, in practice, use them properly.

The difficulty with having this discussion on LtU is that the LtU audience is, by its nature, an elite crop of programmers. There are things that I'd happily trust you to get right that I would not trust most of my former colleagues at Microsoft to get right. Not because they are dummies (they aren't), but because (a) real macros aren't part of their experience, (b) they aren't accustomed to thinking in this particular type of layered processing, (c) the available debugging and audit tools for macros in production are weak. It's far better, in my opinion, to encourage those users to build little languages than to get them tangled up in mastering macros.

Now as I have more or less said, this is an issue I want to come back to in the language design. I'm very prepared to believe that my concerns about misuse of macros can be resolved, and I think that trying to do so is an interesting problem for later. I don't want to commit myself to something that I'll get wrong if I do it now, and I want to have time to really think through the mis-uses I've seen and figure out what mitigations are possible at the design level.

Thanks for the link to that

Thanks for the link to that thread -- both that thread in particular and those mailing-list archives contain some really rich, provocative discussion that I hadn't seen before.

No, it's what Guy Steele wrote when trolling Todd Proebsting:

in that same thread: "Classes make me mad", "Procedures make me mad", "Algebraic expressions make me mad", and the moral of the story here:

Okay, now that I've sent out my little parodies,
let's discuss them a bit.

My point is not that Todd should lighten up on macros;
on the contrary, if anything, I would say he expects
too little of those who define functions. All too many
programmers *are* really lousy at writing documentation.
A library of procedures or methods can constitute just
as big an extension to a language as a handful of macros.

Where the line is drawn will depend to some extent not
only on the skill level of the programmer but also on
the customs and expectations of the community within
which the programmer functions. If macros are part of
the customary toolset of a community of programmers
and there are guidelines and expectations about how they
are and are not to be used, and about the level of
documentation needed, then they need not be a problem.
The same is true of classes, procedures, and algebraic
expressions---and there was a time and place in the
history of computers when each of these was regarded with
similar suspicion by some group of (non-stupid) programmers.

Many macros, particularly in the C world, indeed have not
been documented with the level of care and rigorous review
that has been devoted to, say, certain packages of Lisp macros
over the years, IMHO.

--Guy Steele

Interestingly, Guy has an "embarrassingly" short list of advice for auditor-observable macros:

- Don't include free references to "local" bindings. Unfortunately, I see this a lot from people who are not experienced macro writers.
- Don't evaluate "value" parameters more than once. Do all evaluations "in order", unless there is a very good reason to do it otherwise.
- Don't expand a "body" in more than one place, unless there's a very good reason to do otherwise.
- If your language doesn't do "hygiene", do it by hand.
- Macros aren't functions; don't use them where a function would do. I hope your language and its compiler can do inlining without you having to resort to macros.
- Stay close to the syntax of the "built-in" forms of the rest of your language. Nobody likes having to learn lots of new syntax.

and Eric Kidd has more heuristics.