Active Libraries and Universal Languages

The ideas in this dissertation had their early origins in the years I spent working on the Blitz++
library for numerical arrays. Rather than relying on the compiler to optimize arrays, the library
performed these optimizations itself using template techniques (‘expression templates’ and ‘template
metaprograms’). The fact that this could be done made me wonder about the general problem of
domain-specific code optimization. The literature suggested a widespread debate: where ought
domain-specific optimizations to be performed? In compilers? Compiler plug-ins? Metalevel code?
Preprocessors? Libraries? The C++ experience indicates that with a sufficiently powerful language
and compiler, libraries can define their own optimizations, and we can package abstractions and
optimizations together as a coherent library. Template metaprogramming is, let's be frank, a rather
miserable programming environment; its popularity suggests a real need in this area. The definition
of Active Libraries helped turn these vague thoughts into a concrete goal: to realize
compilers and languages that support such libraries in earnest.
This dissertation proposes one possible direction for this.

Or, more briefly:

Roughly speaking, this thesis addresses the question: How might we provide DSLs that are fast
and safe?
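
To make the ‘expression templates’ mentioned above concrete, here is a minimal sketch of the technique in C++. This is illustrative code of my own, not Blitz++'s implementation: arithmetic on arrays builds a lightweight tree describing the expression, and a single fused loop runs only when the tree is assigned to a destination.

    #include <cstddef>
    #include <vector>

    struct Array;

    // A node recording the *intention* to add two operands
    // elementwise; no arithmetic happens until assignment.
    template <typename L, typename R>
    struct Add {
        const L& l;
        const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
    };

    struct Array {
        std::vector<double> data;
        explicit Array(std::size_t n) : data(n) {}
        double  operator[](std::size_t i) const { return data[i]; }
        double& operator[](std::size_t i)       { return data[i]; }

        // Assigning an expression tree runs one fused loop: no
        // temporary arrays are materialized for a + b + c.
        template <typename Expr>
        Array& operator=(const Expr& e) {
            for (std::size_t i = 0; i < data.size(); ++i) data[i] = e[i];
            return *this;
        }
    };

    Add<Array, Array> operator+(const Array& a, const Array& b) { return {a, b}; }

    template <typename L, typename R>
    Add<Add<L, R>, Array> operator+(const Add<L, R>& e, const Array& b) { return {e, b}; }

With these definitions, x = a + b + c compiles to a single loop over the elements. The optimization lives entirely in the library, which is exactly the point of the quoted passage.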


For me, the answer is simple.

A compiler should be programmable in the language it compiles, i.e., it should let you define new constructs and code transformations using the same programming tools the compiler provides for run-time code. Some call it meta-programming, some call it templates, some call it macros, but in the end it is programming, after all.
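
C++'s constexpr functions are one limited realization of this idea: ordinary code that the compiler itself can execute during compilation. A small sketch (my example, not from the comment above):

    // An ordinary function that the compiler can also run at
    // compile time: "programming the compiler" with run-time tools.
    constexpr long factorial(int n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    // Evaluated entirely inside the compiler; the object code just
    // contains the constant 3628800.
    static_assert(factorial(10) == 3628800, "computed during compilation");

    int main() {
        int k = 5;                           // run-time value:
        return factorial(k) == 120 ? 0 : 1;  // the same code runs at run time
    }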

Lazy Evaluation

One of the neat things about lazy evaluation is that you can set up what you _intend_ to compute, and only then ask for the resulting value. Before the value is actually calculated, "something" can rearrange the graph of pending computation so that it runs faster, uses less memory, or both.
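
A small C++ sketch of that pattern (hypothetical code of mine, not from the comment): build a graph of pending computation first, let a rewrite pass improve it, and only then demand the value.

    #include <cstdio>
    #include <memory>

    // A deferred computation graph: constructing it does no arithmetic.
    struct Expr {
        enum Kind { Const, Add, Mul } kind;
        double value;                   // used when kind == Const
        std::shared_ptr<Expr> lhs, rhs; // used for Add / Mul
    };
    using P = std::shared_ptr<Expr>;

    P cst(double v) { return std::make_shared<Expr>(Expr{Expr::Const, v, nullptr, nullptr}); }
    P add(P a, P b) { return std::make_shared<Expr>(Expr{Expr::Add, 0, a, b}); }
    P mul(P a, P b) { return std::make_shared<Expr>(Expr{Expr::Mul, 0, a, b}); }

    // The "something" that rearranges the graph before it runs:
    // here just x*0 => 0 and x+0 => x.
    P simplify(P e) {
        if (e->kind == Expr::Const) return e;
        P a = simplify(e->lhs), b = simplify(e->rhs);
        auto zero = [](const P& x) { return x->kind == Expr::Const && x->value == 0; };
        if (e->kind == Expr::Mul && (zero(a) || zero(b))) return cst(0);
        if (e->kind == Expr::Add && zero(a)) return b;
        if (e->kind == Expr::Add && zero(b)) return a;
        return e->kind == Expr::Add ? add(a, b) : mul(a, b);
    }

    double eval(const P& e) {
        if (e->kind == Expr::Const) return e->value;
        double a = eval(e->lhs), b = eval(e->rhs);
        return e->kind == Expr::Add ? a + b : a * b;
    }

    int main() {
        P intent = add(mul(cst(42), cst(0)), cst(7)); // what we intend to compute
        std::printf("%g\n", eval(simplify(intent)));  // rewritten, then demanded: prints 7
    }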

Confession

I don't see how this is distinct from multi-stage programming. In particular, see "Gaussian Elimination: a case study in efficient genericity with MetaOCaml", which I think has been mentioned here before, but which I could not find on Lambda.
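
For readers who have not seen MetaOCaml: in multi-stage programming the programmer explicitly annotates which computations happen at code-generation time. A rough C++ analogue (my sketch, using templates as the first stage; requires C++17) is the classic staged power function:

    // Two-stage power: the exponent N is a first-stage (compile-time)
    // value, so power<5>(x) unfolds into straight-line multiplications
    // with no run-time loop, much like the staged power in MetaOCaml.
    template <int N>
    double power(double x) {
        if constexpr (N == 0) return 1.0;
        else                  return x * power<N - 1>(x);
    }

    int main() {
        return power<5>(2.0) == 32.0 ? 0 : 1;  // generated code: x*x*x*x*x
    }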

Summary

I just read the dissertation and I can try to summarize it for you. The basic idea is to do away with metaprogramming and explicit staging by instead requiring the compiler to perform a certain fixed set of optimizations. Some might be specified by annotations such as an "inline" keyword; others could be required for all programs. Using such a compiler, he shows how to create new abstractions via ordinary user-defined functions, which are guaranteed to be removed at compile time (or else you get a compile-time error). Thus, in a sense, you are extending the language and compiler via a user-defined library.
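
C++'s constexpr machinery gives a rough feel for this "optimize or fail" contract; the following is only an analogy of mine, not the dissertation's mechanism. In a context that demands a constant, the abstraction must be evaluated away at compile time or the program is rejected:

    // A user-defined abstraction that must disappear at compile time.
    constexpr int metres_to_mm(int m) { return m * 1000; }

    int main() {
        // Array bounds require compile-time constants: if the call
        // could not be folded away, this would be a compile error.
        char buf[metres_to_mm(2)];
        static_assert(metres_to_mm(3) == 3000, "removed by the compiler");
        return sizeof(buf) == 2000 ? 0 : 1;
    }

The dissertation's mechanism is more general than this C++ analogy, but the contract is the same: the abstraction either vanishes or compilation fails.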

For example, he shows that static typechecking arises as a consequence of applying these optimizations to dynamically typed objects. You can add new static typechecks to a language by writing a library that performs dynamic typechecking and instructs the compiler to fail if the typechecking code cannot be optimized away.
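
A loose C++ analogy of that point (my sketch, not the dissertation's formulation; requires C++14): write the typecheck as ordinary dynamic code, then demand compile-time evaluation, so a check that cannot be optimized away becomes a compile-time error, i.e., a static typecheck.

    // A dynamically tagged value with an ordinary run-time typecheck.
    struct Tagged {
        enum Tag { Int, Str } tag;
        int i;
        const char* s;
    };

    constexpr int as_int(Tagged v) {
        if (v.tag != Tagged::Int) throw "type error";  // dynamic check
        return v.i;
    }

    int main() {
        // Forcing compile-time evaluation turns the dynamic check into
        // a static one: a failing check makes the line not compile.
        constexpr Tagged ok{Tagged::Int, 7, nullptr};
        constexpr int n = as_int(ok);       // check folds away: fine
        // constexpr Tagged bad{Tagged::Str, 0, "hi"};
        // constexpr int m = as_int(bad);   // rejected at compile time
        return n == 7 ? 0 : 1;
    }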

These are just the high-level parts; most of the dissertation is about compiler design and theorem proving, which may be of independent interest. Theorem proving is used to show that certain abstractions will always be removed by the compiler.

The dissertation is also impressively well-written, so if you are writing a dissertation soon you might want to take a look at it.

Thanks

Thanks for a thoughtful review.
I wonder whether there is some synergy between this dissertation and Linguistic Reuse.