Meta-Programming

A Temporal Logic Language for Context Awareness in Pointcuts

A paper by Charlotte Herzeel, Kris Gybels, and Pascal Costanza, presented at ILC 2007.

Some program concerns cannot be cleanly modularized, and their implementation leads to code that is both hard to understand and maintain. In this paper we consider extending an e-commerce application, written in CLOS, with two such crosscutting concerns. Though most of the time Common Lisp's macro facilities and CLOS method combinations can be used to modularize crosscuts, we discuss the use of a more declarative solution when crosscuts depend on the execution history. For this purpose we give an overview of HALO, a novel pointcut language based on logic meta programming and temporal logic, which allows one to reason about program execution and (past) program state.

Charlotte Herzeel presented this at ILC with a flashy web-shop application built on Edi Weitz's standard tools of the web trade. The example takes the basic web-shop and uses aspects to add a lot of funky rules about promotions and discounts based on stock levels, what's selling well, what the user has been browsing, etc., and makes sure the discounts are honoured at the checkout.

The focus on adding new and interesting functionality with aspects is refreshing. So is the basis in a powerful language like Lisp. The weakness of most aspect-oriented programming examples in my eyes is that they appear to be working around artificial problems caused by stubbornly using a base language that's ill-suited to the task. That only plays well with the audiences who are also stubbornly using those base languages for ill-suited tasks :-)

A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language

A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language, by Adam Chlipala.

I present a certified compiler from simply-typed lambda calculus to assembly language. The compiler is certified in the sense that it comes with a machine-checked proof of semantics preservation, performed with the Coq proof assistant. The compiler and the terms of its several intermediate languages are given dependent types that guarantee that only well-typed programs are representable. Thus, type preservation for each compiler pass follows without any significant "proofs" of the usual kind. Semantics preservation is proved based on denotational semantics assigned to the intermediate languages. I demonstrate how working with a type-preserving compiler enables type-directed proof search to discharge automatically large parts of the proof obligations.

Software/proof source code and documentation

Slides are available from a talk I gave at the Projet Gallium seminar at INRIA Rocquencourt, in OpenOffice and PDF formats.

I found this while attempting to follow up on this post. The approach taken here (dependently-typed ASTs and denotational semantics), together with the observation that, carried to its logical conclusion, it yields typed intermediate and assembly languages, proof-carrying code, etc., suggests to me a connection to this work. It also seems to offer a denotational counterpoint to Leroy's coinductive big-step operational semantics, also done in Coq. Fascinating stuff.

Type-Level Computation Using Narrowing in Omega

Haven't seen this paper by Tim Sheard mentioned on LtU before. As in previous papers, Sheard tries to put the Curry-Howard correspondence to work in terms that are understandable to those who don't have a PhD in type theory.

Abstract:
Omega is an experimental system that combines features of both a programming language and a logical reasoning system. Omega is a language with an infinite hierarchy of computational levels. Terms at one level are classified (or typed) by terms at the next higher level. In this paper we report on using two different computational mechanisms. At the value level, computation is performed by reduction, and is largely unconstrained. At all higher levels, computation is performed by narrowing.

Meta-Compilation of Language Abstractions

Meta-Compilation of Language Abstractions, a dissertation by Pinku Surana.

High-level programming languages are currently transformed into efficient low-level code using optimizations that are encoded directly into the compiler. Libraries, which are semantically rich user-level abstractions, are largely ignored by the compiler. Consequently, library writers often provide a complex, low-level interface to which programmers "manually compile" their high-level ideas. If library writers provide a high-level interface, it generally comes at the cost of performance. Ideally, library writers should provide a high-level interface and a means to compile it efficiently.

This dissertation demonstrates that a compiler can be dynamically extended to support user-level abstractions. The Sausage meta-compilation system is an extensible source-to-source compiler for a subset of the Scheme programming language. Since the source language lacks nearly all the abstractions found in popular languages, new abstractions are implemented by a library and a compiler extension. In fact, Sausage implements all its general-purpose optimizations for functional languages as compiler extensions. A meta-compiler, therefore, is merely a shell that coordinates the execution of many external extensions to compile a single module. Sausage demonstrates that a compiler designed to be extended can evolve and adapt to new domains without a loss of efficiency.

A very interesting and detailed paper, which touches on many perennial LtU subjects, and once again shifts the line between user programs and the compiler. If you're tempted to say "this sounds like X...", then read Chapter 2, which gives a comprehensive comparison to alternative approaches, including static type inference, traditional macro systems, templates, partial evaluation, and multi-stage languages such as MetaML and MetaOCaml.

Some carefully selected quotes which I think summarize the summary quite well:

The Sausage system provides a framework for programmers to write new domain-specific optimizations and inject them into the main compiler.
...
This dissertation takes the rather extreme view that anything beyond Core Scheme is a compiler extension.

Pinku Surana will be presenting this work at a meeting of LispNYC on Feb 13th in New York City. An announcement with details of the meeting can be found here.

Ott--a tool for writing definitions of programming languages and calculi.

Ott—a tool for writing definitions of programming languages and calculi.

Ott is a tool for writing definitions of programming languages and calculi. It takes as input a definition of a language syntax and semantics, in a concise and readable ASCII notation that is close to what one would write in informal mathematics. It generates LaTeX to build a typeset version of the definition, and Coq, HOL, and Isabelle versions of the definition. Additionally, it can be run as a filter, taking a LaTeX/Coq/Isabelle/HOL source file with embedded (symbolic) terms of the defined language, parsing them and replacing them by target-system terms. For a simple example, here is an Ott source file for an untyped call-by-value lambda calculus (test10.ott), and the generated LaTeX (compiled to pdf and ps), Coq, Isabelle, and HOL definitions.

Most simply, the tool can be used to aid completely informal LaTeX mathematics. Here it permits the definition, and terms within proofs and exposition, to be written in a clear, editable, ASCII notation, without LaTeX noise. It generates good-quality typeset output. By parsing (and so sort-checking) this input, it quickly catches a range of simple errors, e.g. inconsistent use of judgement forms or metavariable naming conventions.

That same input can be used to generate formal definitions, for Coq, HOL, and Isabelle. It should thereby enable a smooth transition between use of informal and formal mathematics. Additionally, the tool can automatically generate definitions of functions for free variables, single and multiple substitutions, subgrammar checks (e.g. for value subgrammars), and binding auxiliary functions. (At present only a fully concrete representation of binding is supported, without quotienting by alpha equivalence.)

The distribution includes several examples, in varying levels of completeness: untyped and simply typed lambda-calculus, a calculus with ML polymorphism, the POPLmark Fsub with and without records, an ML module system taken from (Leroy, JFP 1996) and equipped with an operational semantics, and LJ, a lightweight Java fragment.

Peter Sewell and his team continue to bridge the gap between the informal and formal worlds of programming language semantics.

Charming Python: Decorators make magic easy

Python made metaprogramming possible, but each Python version has added slightly different -- and not quite compatible -- wrinkles to the way you accomplish metaprogramming tricks. Playing with first-class function objects has long been around, as have techniques for peeking and poking at magic attributes. With version 2.2, Python grew a custom metaclass mechanism that went a long way, but at the cost of melting users' brains. More recently, with version 2.4, Python has grown "decorators," which are the newest -- and by far the most user-friendly -- way to perform most metaprogramming.

While metaprogramming is inherently a bit confusing, I think this article could have been a little clearer. Still, it's a nice high-level introduction to decorators.
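To make the mechanism concrete, here is a minimal sketch of my own (the trace decorator below is a toy example, not taken from the article): a decorator is just a function that receives a function and returns a replacement for it, and the @ syntax added in Python 2.4 is sugar for that reassignment.

    import functools

    def trace(func):
        """Wrap func so that every call is printed before it runs."""
        @functools.wraps(func)              # keep the original function's name and docstring
        def wrapper(*args, **kwargs):
            print("calling %s%r" % (func.__name__, args))
            return func(*args, **kwargs)
        return wrapper

    @trace                                  # equivalent to: fib = trace(fib)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(3)                                  # prints one line per recursive call, returns 2

Fancier uses (memoization, access checks, and so on) are all variations on this one pattern.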

A Stepper for Scheme Macros

A Stepper for Scheme Macros. Ryan Culpepper, Matthias Felleisen.

In this paper, we present a macro debugger with full support for modern Scheme macros. To construct the debugger, we have extended the macro expander so that it issues a series of expansion events. A parser turns these event streams into derivations in a natural semantics for macro expansion. From these derivations, the debugger extracts a reduction-sequence (stepping) view of the expansion. A programmer can specify with simple policies which parts of a derivation to omit and which parts to show. Last but not least, the debugger includes a syntax browser that graphically displays the various pieces of information that the expander attaches to syntactic tokens.

Another paper from the Scheme workshop.

Apart from being a nice exercise for macro lovers, a good macro debugger is essential for lowering the barrier to macro programming. Since macro programming is an important technique for building DSELs and for language-oriented programming in general, having better macro debugging facilities is a Good Thing.

Is "post OO" just over?

While studying the conference program of the upcoming OOPSLA 2006, I discovered under the category "essay" an author who has something quite critical to say about AOP:

Aspect-oriented programming is discussed as a promising new technology. Like object-oriented programming, it is beginning to pervade all areas of software engineering. With its growing popularity, practitioners and academics alike are beginning to wonder whether they should start looking into it, or otherwise risk having missed an important development. The author of this essay finds that much of aspect-oriented programming's success seems to be based on the conception that it improves both modularity and the structure of code, while in fact, it actually works against the primary purposes of the two, namely independent development and understandability of programs. Not seeing any way of fixing this situation, he thinks the success of aspect-oriented programming to be paradoxical.

This is not just another internet rant about the latest PL hype: the author, Friedrich Steimann, has done interesting work on AOP before. In particular, his latest paper on typed AOP:

AOP and the antinomy of the liar

but also his earlier, award-winning critical review of AOP:

Domain models are NOT aspect free

Ruby metaprogramming techniques

Back when Python was all the rage, we often discussed metaprogramming tricks in Python. Well, it seems the metaprogramming action has moved to Ruby, just like everything else... ;-)
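(For anyone who missed those old threads, the kind of trick in question looks roughly like this; a hand-rolled Python sketch, with the Record class and its fields made up for illustration. Generating methods at runtime like this is the moral equivalent of Ruby's define_method.)

    class Record(object):
        pass

    # Generate accessor methods at runtime instead of writing them out by hand.
    for field in ("name", "price", "stock"):
        def getter(self, _field=field):     # the default argument captures the loop variable
            return self.__dict__.get(_field)
        setattr(Record, "get_" + field, getter)

    r = Record()
    r.price = 42
    print(r.get_price())                    # -> 42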

Reflective Program Generation with Patterns

Reflective Program Generation with Patterns. Manuel Fähndrich, Michael Carbin, James R. Larus. October 2006.

Runtime reflection facilities, as present in Java and .NET, are powerful mechanisms for inspecting existing code and metadata, as well as generating new code and metadata on the fly. Such power does come at a high price though. The runtime reflection support in Java and .NET imposes a cost on all programs, whether they use reflection or not, simply by the necessity of keeping all metadata around and the inability to optimize code because of future possible code changes. A second---often overlooked---cost is the difficulty of writing correct reflection code to inspect or emit new metadata and code and the risk that the emitted code is not well-formed. In this paper we examine a subclass of problems that can be addressed using a simpler mechanism than runtime reflection, which we call compile-time reflection. We argue for a high-level construct called a transform that allows programmers to write inspection and generation code in a pattern matching and template style, avoiding at the same time the complexities of reflection APIs and providing the benefits of staged compilation in that the generated code and metadata is known to be well-formed and type safe ahead of time.

Macros, multi-stage programming, etc. are the appropriate buzzwords.

LtU readers will probably be interested in the STM example (see sec. 7.1).
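For readers who haven't fought with reflection APIs, here is a rough Python sketch of the runtime-reflection style the paper argues against (my own illustration, not from the paper; Python's reflection plays the role that Java's and .NET's do there): the class is inspected and rewritten on the fly, with no ahead-of-time guarantee that the generated code is well-formed.

    def add_logging(cls):
        # Runtime reflection: walk the class's members and replace them on the fly.
        for name, attr in list(vars(cls).items()):
            if callable(attr) and not name.startswith("_"):
                def wrapper(self, *args, _orig=attr, _name=name, **kwargs):
                    print("entering", _name)
                    return _orig(self, *args, **kwargs)
                setattr(cls, name, wrapper)
        return cls

    @add_logging
    class Account(object):
        def deposit(self, amount):
            return amount

    print(Account().deposit(10))            # prints "entering deposit", then 10

A transform, as the paper describes it, would let you express the same rewrite as a pattern plus a template, checked for well-formedness before the program ever runs.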
