Meta-Programming

Language Virtualization for Heterogeneous Parallel Computing

In Language Virtualization for Heterogeneous Parallel Computing, Hassan Chafi, Zach DeVito, Adriaan Moors, Tiark Rompf, Arvind Sujeeth, Pat Hanrahan, Martin Odersky, and Kunle Olukotun describe an approach to parallel DSLs that is a hybrid between external and internal DSLs.

As heterogeneous parallel systems become dominant, application developers are being forced to turn to an incompatible mix of low level programming models (e.g. OpenMP, MPI, CUDA, OpenCL). However, these models do little to shield developers from the difficult problems of parallelization, data decomposition and machine-specific details. Ordinary programmers are having a difficult time using these programming models effectively. To provide a programming model that addresses the productivity and performance requirements for the average programmer, we explore a domain-specific approach to heterogeneous parallel programming.

We propose language virtualization as a new principle that enables the construction of highly efficient parallel domain specific languages that are embedded in a common host language. We define criteria for language virtualization and present techniques to achieve them. We present two concrete case studies of domain-specific languages that are implemented using our virtualization approach.

While the motivation of the paper is parallelization, the proposed design looks like LINQ expression trees dialed to 11.
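
The core trick is easy to see in miniature: "virtualized" DSL operations don't compute values, they build an intermediate representation that a backend can later optimize, parallelize, or compile for an accelerator. Here is a minimal sketch of that deep-embedding idea in OCaml; the paper's actual infrastructure is Scala-based, and everything below (names included) is illustrative rather than the authors' API.

    (* Deep embedding in miniature: arithmetic "runs" by building an
       expression tree. A backend then interprets, optimizes, or
       compiles the tree; here the backend is a trivial interpreter. *)
    type exp =
      | Const of float
      | Var   of string
      | Add   of exp * exp
      | Mul   of exp * exp

    let ( +! ) a b = Add (a, b)
    let ( *! ) a b = Mul (a, b)

    (* Client code reads like ordinary arithmetic... *)
    let program = (Var "x" *! Const 2.0) +! Var "y"

    (* ...but produces an IR rather than a number. *)
    let rec eval env = function
      | Const c    -> c
      | Var v      -> List.assoc v env
      | Add (a, b) -> eval env a +. eval env b
      | Mul (a, b) -> eval env a *. eval env b

    let () =
      Printf.printf "%g\n" (eval [ ("x", 3.0); ("y", 1.0) ] program)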

Semi-implicit batched remote code execution as staging

Oleg Kiselyov has just posted another amazing work: Semi-implicit batched remote code execution as staging.

Batching several remote-procedure or remote-object operations into one request decreases the number of network client/server round-trips, reduces the communication overhead and indeed significantly improves performance of distributed applications. The benefits are offset by the cost of restructuring the code to incite large batches; and by the increase in the difficulty of reasoning about the code and predicting its performance, let alone establishing correctness. The overall research goal is to reduce the downside.

We describe a semi-implicit batching mechanism that makes the points of remote server communication explicit yet conceals the proxies, saving the trouble of writing them. The changes to the client code are minimal: mainly, adding calls to force. The type-checker will not let the programmer forget to call force. The remote batch server is simple and generic, with no need to optimize for specific clients.

Our mechanism batches both independent and data-dependent remote calls. Our mechanism is compositional, letting the programmer build nested applications and conditional (and, potentially, iterative) statements using composition, application and naming. Writing a remote program is exactly like writing a typed local program, which is type-checked locally, and can even be executed locally (for debugging).

The key insights are treating remote execution as a form of staging (meta-programming), generalizing mere remote function calls to remote applicative and conditional expressions, and introducing an embedded domain-specific language, Chourai, for such expressions. A batch of dependent remote function calls can then be regarded as a complex applicative expression in the A-normal form. Another key insight is that emulating call-by-value via call-by-need surprisingly makes sense.

Here's an example piece of Chourai code, for deleting albums whose rating is below 5 among the first n albums of an album database (called "large") hosted by the server. get_album, next_album, and similar functions constitute the "RPC" interface to the server.

    let delete_low_rating n =
      let rec loop album i =
        let t = guard (app2 lt (app get_rating album) (int 5))
                      (fun () -> app delete_album album) in
        if i >= n then force t else
        loop (app next_album album) (succ i)
      in loop (app get_album (string "large")) 0;;

Amazingly, delete_low_rating 4 requires just one round-trip to the server!
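
The reason a single round-trip suffices: nothing crosses the network until force. Each "remote call" merely extends a request tree on the client, and force ships the whole tree at once. Here is a toy OCaml model of that batching-as-staging principle; it is a sketch only, not Oleg's actual Chourai implementation (the server, names, and representation are all made up here).

    (* Toy model of batching-as-staging: "remote calls" build a
       request tree; force sends the entire tree in one round-trip.
       Illustrative only; not the real Chourai. *)
    type remote =
      | Lit  of int
      | Call of string * remote list   (* deferred remote application *)

    let app f x = Call (f, [ x ])
    let app2 f x y = Call (f, [ x; y ])

    (* Stand-in for the network: count round-trips, then let the
       "server" evaluate the whole batch at once. *)
    let round_trips = ref 0

    let force (r : remote) : int =
      incr round_trips;
      let rec eval = function
        | Lit n -> n
        | Call ("succ", [ x ])   -> eval x + 1
        | Call ("add", [ x; y ]) -> eval x + eval y
        | Call (f, _)            -> failwith ("unknown remote op: " ^ f)
      in
      eval r

    let () =
      (* Three dependent remote operations, one round-trip. *)
      let batch = app2 "add" (app "succ" (Lit 1)) (Lit 3) in
      Printf.printf "result = %d, round-trips = %d\n" (force batch) !round_trips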

Directly Reflective Meta-Programming

I was recently pointed to the following fascinating paper:

Directly Reflective Meta-Programming, Aaron Stump (HOSC 22(2), June 2009).

Existing meta-programming languages operate on encodings of programs as data. This paper presents a new meta-programming language, based on an untyped lambda calculus, in which structurally reflective programming is supported directly, without any encoding. The language features call-by-value and call-by-name lambda abstractions, as well as novel reflective features enabling the intensional manipulation of arbitrary program terms. The language is scope safe, in the sense that variables can neither be captured nor escape their scopes. The expressiveness of the language is demonstrated by showing how to implement quotation and evaluation operations, as proposed by Wand. The language's utility for meta-programming is further demonstrated through additional representative examples. A prototype implementation is described and evaluated.

Meta-programming without any encoding at all. The only minor drawback I can see is that the language is untyped, and designing a type system for it would be extremely challenging.

Omega - Language of the Future

When I discovered Tim Sheard's Languages of the Future, I realized that PLs do indeed have a future (beyond asymptotically approaching CLOS and/or adding whimsical new rules to your type checker). Compared to languages like Lisp, pre-generics Java, and Python, the "futuristic" languages like Haskell and O'Caml seemed to mainly offer additional static verification, and some other neat patterns, but the "bang for the buck" seemed somewhat low, especially since these languages have their own costs (they are much more complex, they rule out many "perfectly fine" programs).

Ωmega seems like a true revolution to me: it shows what can be done with a really fancy type system, and this seems like the first astounding advancement over existing languages, from Python to Haskell. Its mantra is that it's possible to reap many (all?) benefits of dependent programming, without having to suffer its problems, by adding two much more humble notions to the widely understood, ordinary functional programming setting: GADTs + Extensible Kinds.

Sheard and coworkers show that these two simple extensions allow the succinct expression of many dependently-typed and related examples from the literature. Fine examples are implementations of AVL and red-black trees that are always balanced by construction: it's simply impossible to create an unbalanced tree, and this is checked by the type system. It seems somewhat obvious that sooner rather than later all code will be written in this (or a similar) way.
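
The flavor of "balanced by construction" can be conveyed even in plain OCaml, whose GADTs cover a fragment of what Ωmega offers. The sketch below is my own toy, far simpler than the AVL and red-black encodings the paper presents: the tree type is indexed by its height, so only perfectly balanced trees type-check.

    (* Type-level Peano naturals, used only as height indices. *)
    type zero = Z
    type 'n succ = S of 'n

    (* Both children of a node share the same height index 'n, so
       every value of this type is perfectly balanced by
       construction; unbalanced trees are ill-typed. *)
    type ('a, 'n) tree =
      | Leaf : ('a, zero) tree
      | Node : ('a, 'n) tree * 'a * ('a, 'n) tree -> ('a, 'n succ) tree

    let ok : (int, zero succ succ) tree =
      Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf))

    (* Rejected by the type checker, because the two children
       have different heights:

       let bad = Node (Leaf, 1, Node (Leaf, 2, Leaf))
    *)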

How to understand this stuff: my route was through the generics of Java and C# (especially Featherweight Generic Java, FGJω, and A. Kennedy's generics papers). Once you understand basic notions like type constructors, variance, and kinds, you know everything you need to understand why GADTs + Extensible Kinds = Dependent Programming (and also esoteric stuff like polytypic values have polykinded types, for that matter).

It is my belief that you must understand Ωmega now! Even if you're never going to use it, or something like it, you'll still learn a whole lot about programming. Compared to Ωmega, other languages are puny. ;P

Fortifying Macros

Fortifying Macros. Ryan Culpepper, Matthias Felleisen, ICFP 2010.

Existing macro systems force programmers to make a choice between clarity of specification and robustness. If they choose clarity, they must forgo validating significant parts of the specification and thus produce low-quality language extensions. If they choose robustness, they must write in a style that mingles the implementation with the specification and therefore obscures the latter. This paper introduces a new language for writing macros. With the new macro system, programmers naturally write robust language extensions using easy-to-understand specifications. The system translates these specifications into validators that detect misuses—including violations of context-sensitive constraints—and automatically synthesize appropriate feedback, eliminating the need for ad hoc validation code.

Presents syntax-parse, which seems like a truly great advancement over existing macro writing facilities.
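
The system itself is Racket-specific, but the gist — a specification that doubles as a validator and synthesizes its own error messages — can be caricatured in a few lines of OCaml. This is a loose analogue of the idea only; none of it is Racket's actual API.

    (* A loose caricature of syntax-parse: a "syntax class" pairs a
       recognizer with a description, so error feedback is derived
       from the specification instead of ad hoc checking code. *)
    type 'a syntax_class = {
      desc  : string;
      parse : string -> 'a option;
    }

    let integer : int syntax_class =
      { desc = "exact integer"; parse = int_of_string_opt }

    let identifier : string syntax_class =
      let is_letter c = ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z') in
      { desc = "identifier";
        parse = (fun s -> if s <> "" && String.for_all is_letter s
                          then Some s else None) }

    (* One generic validator serves every syntax class. *)
    let expect (c : 'a syntax_class) (input : string) : 'a =
      match c.parse input with
      | Some v -> v
      | None   -> failwith (Printf.sprintf "expected %s, got %S" c.desc input)

    let () =
      Printf.printf "%d\n" (expect integer "42");
      print_endline (expect identifier "foo")
      (* expect integer "4x" fails with: expected exact integer, got "4x" *)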

The Future of C#

One of the future additions to C# announced by Anders Hejlsberg in this entertaining video from 2008 is Compiler as a Service. By that he means the ability to eval code strings (and I'm guessing that this will also be integrated with C#'s built-in AST objects).

He shows this off at around minute 59, to great effect and to great excitement from the audience. It feels like an inflection point. There probably won't be another REPL-less language from now on.

I predict that after that, they'll add hygienic macros and quasisyntax.

A Lambda Calculus for Real Analysis

A Lambda Calculus for Real Analysis

Abstract Stone Duality is a revolutionary paradigm for general topology that describes computable continuous functions directly, without using set theory, infinitary lattice theory or a prior theory of discrete computation. Every expression in the calculus denotes both a continuous function and a program, and the reasoning looks remarkably like a sanitised form of that in classical topology. This is an introduction to ASD for the general mathematician, with application to elementary real analysis.

This language is applied to the Intermediate Value Theorem: the solution of equations for continuous functions on the real line. As is well known from both numerical and constructive considerations, the equation cannot be solved if the function "hovers" near 0, whilst tangential solutions will never be found.

In ASD, both of these failures and the general method of finding solutions of the equation when they exist are explained by the new concept of overtness. The zeroes are captured, not as a set, but by higher-type modal operators. Unlike the Brouwer degree, these are defined and (Scott) continuous across singularities of a parametric equation.

Expressing topology in terms of continuous functions rather than sets of points leads to treatments of open and closed concepts that are very closely lattice- (or de Morgan-) dual, without the double negations that are found in intuitionistic approaches. In this, the dual of compactness is overtness. Whereas meets and joins in locale theory are asymmetrically finite and infinite, they have overt and compact indices in ASD.

Overtness replaces metrical properties such as total boundedness, and cardinality conditions such as having a countable dense subset. It is also related to locatedness in constructive analysis and recursive enumerability in recursion theory.

Paul Taylor is deadly serious about the intersection of logic, mathematics, and computation. I came across this after beating my head against Probability Theory: The Logic of Science and Axiomatic Theory of Economics over the weekend, realizing that my math just wasn't up to the tasks, and doing a Google search for "constructive real analysis." "Real analysis" because it was obvious that that was what both of the aforementioned texts were relying on; "constructive" because I'd really like to develop proofs in Coq/extract working code from them. This paper was on the second page of results. Paul's name was familiar (and not just because I share it with him); he translated Jean-Yves Girard's regrettably out-of-print Proofs and Types to English and maintains a very popular set of tools for typesetting commutative diagrams using LaTeX.

EASTL -- Electronic Arts Standard Template Library

The gaming studio Electronic Arts maintains its own version of the Standard Template Library. Despite the fact that this is old news, I checked the LtU Archives and the new site, and there is no mention of EASTL anywhere. There are quite a few good blog posts about EASTL on the Internet, as well as the following paper, EASTL -- Electronic Arts Standard Template Library by Paul Pedriana:

Gaming platforms and game designs place requirements on game software which differ from requirements of other platforms. Most significantly, game software requires large amounts of memory but has a limited amount to work with. Gaming software is also faced with other limitations such as weaker processor caches, weaker CPUs, and non-default memory alignment requirements. A result of this is that game software needs to be careful with its use of memory and the CPU. The C++ standard library's containers, iterators, and algorithms are potentially useful for a variety of game programming needs. However, weaknesses and omissions of the standard library prevent it from being ideal for high performance game software. Foremost among these weaknesses is the allocator model. An extended and partially redesigned replacement (EASTL) for the C++ standard library was implemented at Electronic Arts in order to resolve these weaknesses in a portable and consistent way. This paper describes game software development issues, perceived weaknesses of the current C++ standard, and the design of EASTL as a partial solution for these weaknesses.

This paper is a good introduction to a unique set of requirements video game development studios face, and complements Manuel Simoni's recent story about The AI Systems of Left 4 Dead. This paper could be a useful inroad for those seeking to apply newer object-functional programming languages and ideas to game development.

Implicit Phasing for R6RS Libraries

Abdulaziz Ghuloum and R. Kent Dybvig, Implicit Phasing for R6RS Libraries, Proc. ICFP 2007.

The forthcoming Revised Report on Scheme differs from previous reports in that the language it describes is structured as a set of libraries. It also provides a syntax for defining new portable libraries. The same library may export both procedure and hygienic macro definitions, which allows procedures and syntax to be freely intermixed, hidden, and exported.

This paper describes the design and implementation of a portable version of R6RS libraries that expands libraries into a core language compatible with existing R5RS implementations. Our implementation is characterized by its use of inference to determine when the bindings of an imported library are needed, e.g., run time or compile time, relieving programmers of the burden of declaring usage requirements explicitly.

R6RS leaves it up to implementations whether import statements need to explicitly state the meta-level into which bindings should be placed. The authors argue for letting the implementation automatically figure out the required meta-levels, and present an implementation-oriented description of how to do so, which they have implemented portably. The paper also includes a well-written and detailed introduction to the issues involved, and since they want the community to adopt their solution, they seem to have worked extra hard to produce a convincing paper ;).

(via vieiro)

π: a pattern language

π - not to be confused with the π-calculus - is a pattern-based language being developed by the Software Technology group at Technische Universität Darmstadt. Quoting from the project website:

There is only one language construct in π: the pattern. Patterns are, simply speaking, EBNF-expressions with an associated meaning; a pattern can be easiest understood as a function with a syntactically complex (context-free) "signature". The non-terminal symbols in the signature are then the parameters of the pattern. A π-program is a sequence of instruction symbols (technically, sentences), each being a sequence of (Unicode) characters. The sentences are then evaluated (executed) in the respective order.

The basic idea here seems similar to the OMeta language, previously mentioned on LtU here, but based on EBNF instead of Parsing Expression Grammars (PEGs).

Pattern definitions in π have the form

declare_pattern name ≔ syntax ⇒ type ➞ meaning;

Here's a trivial example of defining a pattern:

declare_pattern
   integer_potentiation ≔ integer:i %W- "^" %W- integer:j ⇒ integer ➞
   {
      int result = i;
      for (int k = 1; k <= j-1; k++)
         result *= i;
      return result;
   };

The resulting pattern can then be used directly in expressions, such as print(42^6);.
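
For contrast with general-purpose languages, here is roughly the same pattern rendered as an ordinary OCaml function: the "syntax" shrinks to an ad hoc string split and the "meaning" is the same loop, which is exactly the boilerplate π's grammar-based patterns are meant to absorb. This sketch is mine, not part of π.

    (* The integer_potentiation pattern by hand: parse "i^j" and
       return i multiplied by itself j-1 more times, mirroring the
       loop in the pi example above. *)
    let potentiation (s : string) : int =
      match String.split_on_char '^' s with
      | [ a; b ] ->
          let i = int_of_string (String.trim a)
          and j = int_of_string (String.trim b) in
          let result = ref i in
          for _ = 1 to j - 1 do
            result := !result * i
          done;
          !result
      | _ -> failwith "expected: integer ^ integer"

    let () = Printf.printf "%d\n" (potentiation "42^6")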

More information about the language, as well as the implementation, can be found at http://www.pi-programming.org. There's an OOPSLA09 paper on π as well, but I haven't been able to find an open access version of it yet.

[Update: the π team has made their OOPSLA article available here]
