Nanopass Compiler Framework

While not directly related to PL theory, I consider compiler construction a close relative. This set of papers describes Indiana University's compiler coursework. The key idea in these papers is to build the compiler from very small source-to-source transformations, keeping the program executable between passes for pedagogy and verification. Scheme is used as both source and target language because its s-expression syntax requires no 'marshaling' between human-readable syntax and AST structures. Later stages of the compiler embed annotations directly in the source as quoted, unevaluated lists.

There is also a Summer Scheme Workshop (with complete compiler code) demonstrating some of these ideas. If anyone finds code for the Nanopass or Micropass compiler, please let me know. I'm interested in studying them as well.

A compiler structured as a small number of monolithic passes is difficult to understand and difficult to maintain. ... An attractive alternative is to structure a compiler as a collection of many fine-grained passes, each of which performs a single task. This structure aligns the implementation of a compiler with its logical organization, simplifying development, testing, and debugging.

[1] "Compiler Construction Using Scheme" by Hilsdale, Ashley, Dybvig, and Friedman.
[2] "Compiling: A High-Level Introduction Using Scheme" by Haynes.
[3] "A Nanopass Framework for Compiler Education" by Sarkar, Waddell, and Dybvig.

Ancillary:
[4] "Destination-Driven Code Generation" by Dybvig, Hieb, and Butler.
[5] Summer Scheme Workshop: Compiling Scheme.

Looks interesting. Thanks.

The Scheme Compiler Workshop

The compiler code from the Scheme compiler workshop is worth a read, as is the paper "Destination-Driven Code Generation", which explains the code-generation strategy used.

The compiler produces SPARC machine code, and since not everyone has a SPARC, a SPARC emulator written in Scheme is included!

Nine years later

Nine years later, various nanopass implementations and examples are now freely available via GitHub and elsewhere.

Nanopass in Prolog

I intend to write the compiler for my language project as a nanopass compiler. Has anybody done any work on nanopass compilers in Prolog? The abstract syntax is easy to represent, as are the intermediate-language grammars and AST transformations. It seems like you would not need any framework at all.

Compiler = Bookkeeping + Term Rewriting

Abstractly, writing a compiler is bookkeeping plus term rewriting. The idea of using multi-pass compilers is straightforward and has been invented and reinvented countless times. Roughly, fast old compilers did a lot of bookkeeping using mutable state for performance and resource reasons. Nowadays, people push more in the direction of pure term rewriting (on the AST or intermediate representation) for the sake of correctness.

Abstractly, it's a sliding scale. Academically, you can do without the bookkeeping and go for pure term rewriting. ASF+SDF is such a system, I gather, but you pay in performance.

So, a multi-pass compiler is completely standard, and I don't think people even bothered to write about it. But the question in Prolog is, of course, how you're going to do the bookkeeping. In the end, Moore's law has ended and we have roughly hit a clock-speed barrier on microprocessors, which implies that compiler writers (whose intention is to compile large programs) are more or less stuck doing the bookkeeping with fast imperative data structures.

Nanopass compilers extend the idea of multipass compilers with:

(1) the intermediate-language grammars are formally specified and enforced;
(2) each pass needs to contain traversal code only for forms that undergo meaningful transformation; and
(3) the intermediate code is represented more efficiently as records, although all interaction with the programmer is still via the s-expression syntax.

So, nanopass compilers extend the idea with robustness and ease of implementation; this isn't new either. GHC could probably be called a nanopass compiler internally too, since I expect that the type structure on the AST, or intermediate representation, ensures the result of a pass is well-formed, and (I hope) they have some higher-order functions that save you from expressing passes by explicitly stating how to treat each node of the AST.

You'd need a Prolog with typing to express the grammar correctness of passes. Moreover, you'd probably need some higher-order extension to Prolog to implement short declarative passes. The unification system of Prolog might be exploited to implement some new ideas on passes or type inference, but I doubt it'll pay off for fairly standard languages. And then there is the bookkeeping: sure, academically you can use lists, but you'll pay in performance, so you'll likely need an extended Prolog.
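That said, plain Prolog's =../2 and maplist/3 can already fake some of this. Here is a minimal sketch, where rewrite/2, default_traverse/2, and rule/2 are names made up for the example: a generic traversal rebuilds any term, so a pass states only the forms it changes.

% A sketch with made-up names. rule(In, Out) lists the only forms this
% pass cares about; here "let X = E in B" becomes an applied lambda.
rule(let(X, E, B), app(lam(X, B), E)).

% rewrite(In, Out): apply a pass-specific rule where one matches,
% otherwise fall through to the generic traversal.
rewrite(In, Out) :-
    rule(In, Mid), !,
    default_traverse(Mid, Out).
rewrite(In, Out) :-
    default_traverse(In, Out).

% default_traverse(In, Out): rebuild any term, rewriting each argument.
default_traverse(In, Out) :-
    In =.. [F|Args0],
    maplist(rewrite, Args0, Args1),
    Out =.. [F|Args1].

A pass is then just its rule/2 clauses plus one shared traversal, so you only write cases for the forms that actually change.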

I am rewriting my own compiler in C++ now (I don't recommend that to anyone). I'll do nanopasses too, but without the type safety on the AST, since I sometimes abuse the fact that I can switch between grammars.

Grammar Specification

The biggest difference between multi-pass and nano-pass is the number of parses. In nano-pass, each pass does exactly one thing (for example, closure conversion). A multi-pass compiler might have half a dozen passes; a nano-pass compiler, ten times that or more.

The problem in C++ is specifying the grammars for the intermediate languages. How are you going to do that? I started working in C++ but rapidly found myself re-inventing Prolog to specify the grammars.

In Prolog you don't need typing or higher-order anything. Here's an example:

% Grammar A: the lambda calculus. Here var/1 is a user-defined predicate
% that recognises variable names (e.g. the facts var(x). var(y).); since
% it clashes with the built-in var/1, you would rename it to run this.
grammar_A(var(X)) :- var(X).
grammar_A(lam(X, Y)) :- var(X), grammar_A(Y).
grammar_A(app(X, Y)) :- grammar_A(X), grammar_A(Y).

In Prolog you can exploit the fact that types are predicates, so 'var(X)' checks that 'X' is a var. Generally you can have predicates like 'int(X)' etc., but you can also wrap values in structures to indicate they have been type checked, for example:

check(X, int(X)) :- int(X).

Now you can take a term and check that it conforms to grammar_A, and you can extend the grammar:

grammar_B(X) :- grammar_A(X).
grammar_B(let(X, Y, Z)) :- var(X), grammar_B(Y), grammar_B(Z).

So we can easily define the grammars for the intermediate languages, and we can define the necessary transformations simply too.
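For instance, here is a sketch of a let-removal pass between these two grammars (desugar/2 is a made-up name), lowering a grammar_B term into grammar_A by rewriting each let, read as "let X = E in B", into an immediately applied lambda:

% desugar(B, A): a sketch (made-up name) of a grammar_B -> grammar_A
% pass, rewriting let(X, E, B) as app(lam(X, B), E).
desugar(var(X), var(X)).
desugar(lam(X, B0), lam(X, B1)) :-
    desugar(B0, B1).
desugar(app(F0, A0), app(F1, A1)) :-
    desugar(F0, F1),
    desugar(A0, A1).
desugar(let(X, E0, B0), app(lam(X, B1), E1)) :-
    desugar(E0, E1),
    desugar(B0, B1).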

Hm

I doubt this can work without open recursion. Is (lam x . let y = x in y) accepted by your grammar B?

Grammar Checking With Type Inference

It's checking the grammar, not type checking the program. However, here's a full type inferencer that works :-)

First, some helper predicates (not really part of the checker) that you may already have defined:

% member(Key, Env, MaybeVal): look a Key up in an environment of
% def(Key, Val) cells, giving just(Val) if present and nothing otherwise.
member(Key, cons(def(K, _), Tail), Val) :-
    dif(Key, K),
    member(Key, Tail, Val).
member(Key, cons(def(Key, Val), _), just(Val)).
member(Key, nil, nothing).

% append(Xs, Ys, Zs): list concatenation over cons/nil terms.
append(cons(H, T1), L, cons(H, T2)) :-
    append(T1, L, T2).
append(nil, L, L).

% unifyall(Ts, T): unify every element of the list Ts with T.
unifyall(cons(T, Tail), T) :-
    unifyall(Tail, T).
unifyall(nil, _).

% unifyeach(Ts, Tdef, Cxt): unify each element of Ts with a fresh copy
% of Tdef = pair(Context, Type), collecting the copied contexts in Cxt.
% The fresh copies (via duplicate_term/2) are what make let-bound
% definitions polymorphic.
unifyeach(cons(Tapp, Tail), Tdef, Cxt) :-
    duplicate_term(Tdef, pair(C1, Tapp)),
    unifyeach(Tail, Tdef, Csub),
    append(Csub, C1, Cxt).
unifyeach(nil, _, nil).

% split(X, Cxt, Uses, Rest): partition context Cxt into the types Uses
% at which X occurs and the remaining assumptions Rest.
split(X, cons(def(Y, T), Tail1), Tail2, cons(def(Y, T), Tail3)) :-
    dif(X, Y),
    split(X, Tail1, Tail2, Tail3).
split(X, cons(def(X, T), Tail1), cons(T, Tail2), Tail3) :-
    split(X, Tail1, Tail2, Tail3).
split(_, nil, nil, nil).

And now the inference:

nat(zero).
nat(succ(X)) :-
    nat(X).

bool(true).
bool(false).

% expr(Term, Context, Type): infer Type for Term, collecting the
% assumptions made about its free variables in Context.
expr(var(X), cons(def(X, Type), nil), Type).
expr(nat(X), nil, nat) :-
    nat(X).
expr(bool(X), nil, bool) :-
    bool(X).
expr(pair(X, Y), C3, prod(A, B)) :-
    expr(X, C1, A),
    expr(Y, C2, B),
    append(C1, C2, C3).
expr(app(Fun, Arg), Cxt, B) :-
    expr(Fun, C1, arrow(A, B)),
    expr(Arg, C2, A),
    append(C1, C2, Cxt).
expr(lam(Var, Body), Cxt, arrow(A, B)) :-
    expr(Body, C1, B),
    split(Var, C1, Def, Cxt),
    unifyall(Def, A).
% let(Var, Body, Rhs) reads "let Var = Body in Rhs": each use of Var
% in Rhs is unified with a fresh copy of Body's context and type.
expr(let(Var, Body, Rhs), Cxt, T2) :-
    expr(Body, C1, T1),
    expr(Rhs, C2, T2),
    split(Var, C2, Def, C3),
    unifyeach(Def, pair(C1, T1), C4),
    append(C3, C4, Cxt).

And finally a test:

:- expr(lam(x, let(y, var(x), var(y))), C, T).

yes(C1 = nil, T8 = arrow(T9, T9)).

So that not only checks the grammar but infers the types, type checking the program as well: the answer is the empty context and the type arrow(T9, T9), i.e. a -> a, as expected for this term.

Checking the grammar alone, without the type inference, would simply be:

grammar(var(X)) :- var(X).
grammar(nat(X)) :- nat(X).
grammar(bool(X)) :- bool(X).
grammar(pair(X, Y)) :- grammar(X), grammar(Y).
grammar(app(Fun, Arg)) :- grammar(Fun), grammar(Arg).
grammar(lam(Var, Body)) :- var(Var), grammar(Body).
grammar(let(Var, Body, Rhs)) :- var(Var), grammar(Body), grammar(Rhs).

This is obviously correct, and it is exactly the same as the inference code with the returned context and type removed.

Not what I meant

Marco correctly pointed out that an important requirement discussed in the paper is for it to be easy (and not boilerplate-heavy) to add intermediate passes. You sketched a way to extend an existing grammar A into a grammar B for an intermediate representation (with lets).

I am unconvinced that this sketch actually works (without having to duplicate the whole grammar, which is a solution Marco explicitly ruled out), because you haven't expressed open recursion. I think the example program I gave, which morally should be part of the let-extended grammar, is not recognized by the parser you sketched.

Your new, larger example does not answer this question, because it contains a single pass and no transformation into an intermediate representation with a slightly different grammar (one differing only in a small number of rules compared to the size of the full grammar).

Boilerplate Heavy

Well, it's a lot better than trying to do this in C++. In any case, I understand what you mean about the recursion problem; sorry I missed the point. I still think it's solvable. I'll have a look and update later.

I also think you would have trouble expressing the grammar, including type checking, with less boilerplate. I don't think the Scheme nano-pass compilers incorporate type checking into the grammar definitions.

Language Grammar Extension

So here's my first attempt at solving the grammar extension problem. First define some primitive types:

nat(zero).
nat(succ(X)) :-
    nat(X).

bool(true).
bool(false).

% As noted above, var/1 clashes with the built-in predicate of the same
% name in most Prologs, so rename it (say, to v/1) to actually run this.
var(x).
var(y).

Define a language called "a":

% Each language is a set of feature declarations: a grammar clause is
% enabled for language L when L declares the feature it is guarded by.
def(language_a, use_a).
grammar(L, var(X)) :-
    def(L, use_a),
    var(X).
grammar(L, nat(X)) :-
    def(L, use_a),
    nat(X).
grammar(L, bool(X)) :-
    def(L, use_a),
    bool(X).
grammar(L, pair(X, Y)) :-
    def(L, use_a),
    grammar(L, X),
    grammar(L, Y).
grammar(L, app(Fun, Arg)) :-
    def(L, use_a),
    grammar(L, Fun),
    grammar(L, Arg).
grammar(L, lam(Var, Body)) :-
    def(L, use_a),
    var(Var),
    grammar(L, Body).

Define a language called "b" that extends language "a":

def(language_b, use_a).
def(language_b, use_b).
grammar(L, let(Var, Body, Rhs)) :-
    def(L, use_b),
    var(Var),
    grammar(L, Body),
    grammar(L, Rhs).

Is this expression valid in language "a"?

:- grammar(language_a, lam(x, let(y, var(x), var(y)))).

no.

Not accepted in language "a". Try "b":

:- grammar(language_b, lam(x, let(y, var(x), var(y)))).

yes.

It is accepted in language "b". I think this is quite light on boilerplate, and it would allow compact formal definitions of intermediate languages.
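This also gives you point (1) of the nanopass list almost for free: since each grammar is a predicate, a pass can be checked against the grammars of its source and target languages. A sketch, with made-up names remove_lets/2 and checked_pass/2:

% remove_lets(In, Out): a sketch (made-up name) lowering a language "b"
% term to language "a" by rewriting let(X, E, B), i.e. "let X = E in B",
% as app(lam(X, B), E).
remove_lets(var(X), var(X)).
remove_lets(nat(X), nat(X)).
remove_lets(bool(X), bool(X)).
remove_lets(pair(X0, Y0), pair(X1, Y1)) :-
    remove_lets(X0, X1),
    remove_lets(Y0, Y1).
remove_lets(app(F0, A0), app(F1, A1)) :-
    remove_lets(F0, F1),
    remove_lets(A0, A1).
remove_lets(lam(V, B0), lam(V, B1)) :-
    remove_lets(B0, B1).
remove_lets(let(V, E0, B0), app(lam(V, B1), E1)) :-
    remove_lets(E0, E1),
    remove_lets(B0, B1).

% checked_pass(In, Out): enforce the grammars on both sides of the pass.
checked_pass(In, Out) :-
    grammar(language_b, In),
    remove_lets(In, Out),
    grammar(language_a, Out).

On the running example, checked_pass(lam(x, let(y, var(x), var(y))), Out) gives Out = lam(x, app(lam(y, var(y)), var(x))), and both grammar checks succeed.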

Fusion

Do you know of any work on automatically fusing the passes in a nanopass compiler?

There is a lot of work on that subject + Deforestation

Back when people first started writing compilers there was a lot of work on this, i.e., the fusion of multiple passes, but I doubt you'll still find it. People came to recognize that fusion could also be seen, more algorithmically, as an abstract program transformation. Haskell lifted/generalized that work into "deforestation", the general practice of avoiding the intermediate data structures built between transformations. I don't know whether deforestation actually pays off for writing compilers pragmatically.

(Even 'deforestation' probably boils down pragmatically to a pass in which the composition of two functions is partially evaluated. I am not sure it is something academically noteworthy. In Haskell, Clean, and possibly some MLs, it's just something you can do more easily than in C.)

XStream

One work that I find interesting and is not well-known is

"Streaming XML Transformations Using Term Rewriting" by Alain Frisch and Keisuke Nakano (2007).

This paper describes XStream, a Turing-complete programming language which allows the programmer to write XML transformations in a functional tree-processing style and have them evaluated in a streaming way: the output is produced incrementally while the input is still being parsed. The programmer does not need to care explicitly about buffering. We introduce the language, describe some techniques used in the implementation and present some performance results.
The idea behind XStream is pretty simple. The high-level description of the semantics of an XStream script that we mentioned earlier consists in parsing the input document into a term x (that is, computing a substitution which maps the variable x onto a term which encodes the input document) and then reducing the term main(x) until reaching a normal form. The code produced by XStream does the same, but incrementally. It maintains a current term which is initially main(x) and reduces it periodically. Initially, no information is known about x, but parsing the input incrementally discovers the structure of x, thus triggering more reduction steps. When parsing is over, the content of x is fully known and the reduction can proceed until a final normal form. In parallel to this incremental parsing and evaluation, parts of the final result are extracted from the root of the current term, sent to the output, and discarded from memory.

Not going to

I am not going to represent the grammar (as multiple types following the CFG). I played with it, but due to type considerations, in most languages your implementation blows up. Sure, it's academically safer, but I am not willing to pay the cost: I want to switch between grammars internally, and there is pragmatically no payoff for the correctness. You get some correctness for free, but this is simply not a feature worth checking; if your passes are expressed concisely, the extra type check is mostly redundant.

I also don't see how your Prolog code describes the grammar of an AST.

Bullshit

The biggest difference between multi-pass and nano-pass is the number of parses.

The authors are completely clear about what they mean by a "nanopass" compiler. Of course, there are multipass compilers which do a small, or a large, number of passes. "Nano" doesn't have anything to do with the number of passes but with the three extra constraints expressed by the authors: syntax correctness, ease of implementation, and record structure.

a collection of many small passes

See: http://www.cs.indiana.edu/~dyb/pubs/nano-jfp.pdf

From the beginning of the introduction:

Production compilers often exhibit a monolithic structure in which each pass performs several analyses, transformations, and optimizations. An attractive alternative, particularly in an educational setting, is to structure a compiler as a collection of many small passes, each of which performs a small part of the compilation process. This “micropass” structure aligns the implementation of a compiler with its logical organization, yielding a more readable and maintainable compiler. Bugs that arise are more easily isolated to a particular task, and adding new functionality is easier since new code need not be grafted onto existing passes nor wedged between two logical passes that would have been combined in a monolithic structure.

This is the introduction to the paper from the people who I believe did the first work on nano-pass compilers and coined the term, so I think it's authoritative.

RTFM

In the paper you reference, it is clearly stated what they mean by nanopasses, and I quoted that verbatim.

The idea of using multiple small correct passes is very old. Even on PDP-11s some compilers were written that way; it is generally understood to be an academically nice manner of writing a slow compiler.

The authors succinctly describe what they mean by nanopasses. It's a multipass compiler addressing extra safety concerns.

I am pretty sure they could not have gotten away with just calling an educational multipass compiler "nanopass". Using multiple small transforms is an old idea, everybody would prefer to write their compiler that way (but usually cannot, because of performance), and there is therefore nothing new to claim, apart from the extra safety they address in their implementation.

What's your point?

I described a nano-pass compiler in the same way they do in their paper's introduction. I don't get what you are trying to say. Unless you are complaining about my spelling passes as parses? It is clear from their introduction that the key point of nano-pass is many small passes (which is clearly more passes than a monolithic or few-large-passes compiler has).

They did nothing new. It's just multipass.

What's wrong with sticking to terminology? They clearly described what they mean by a "nanopass", and they needed to, because some people would find the introduction of new terminology for this very old idea slightly offensive.

You, and they, cannot sell a multipass compiler as nanopass, and the authors understood that and therefore didn't. Multipass compilers are very well understood, and professional compiler writers wouldn't, or won't, let you get away with claiming that new terminology such as "nanopass" (as in: a large number of small correct passes) is anything new, interesting, or even a good idea. In fact, it's the oldest trick in the book.

As I said, multiple small passes are generally recognized as a manner of writing a robust but slow compiler. Something for beginners or academics. Nice, but in production compilers you invariably end up fusing multiple passes together, and everybody knows it.

They did something nice but (honestly) nothing new. You will find similar ideas in compiler compilers, for instance CDL3.

The authors get that they cannot claim anything new here (unless they did something extra), and there is no new terminology to introduce. Therefore they defined clearly what they mean by nanopasses: small passes which address safety concerns for educational purposes.

Like them, I would stick to standard terminology here for obvious reasons.

That's Okay

If you want to take issue with the original authors and claim that nano-pass was not actually anything new or different, that's fine with me. However, many compilers, like GCC, are multi-pass, and they have large, complex passes.

If I want to specifically build a compiler with lots of small passes, yes, it technically is multi-pass, but that does not tell you much about it; it could be the same internally as GCC.

What name would you use for a compiler where you want lots of passes, each doing one simple, distinct thing? Multi-pass clearly is not specific enough.

Authors probably well aware

No need. The authors were probably well aware that claiming anything new in that respect would have gotten them scoffed at. If you read the paper you'll see they tip-toe around the subject.

Why would you need new terminology? If you simply state that it does a lot of passes, people will know enough.