Meta-Programming

Relating FFTW and Split-Radix

Relating FFTW and Split-Radix. Proc. of ICESS'04, the First International Conference on Embedded Software and Systems, December 9-10, 2004, Hangzhou (Zhejiang), China.

This ongoing effort to reproduce an efficient FFT implementation using staging and abstract interpretation attempts to answer the question: "How can we get the raw performance of hardware without giving up the expressivity and clarity of software?"

Here's how Oleg describes the contribution of this paper:

One may think that generating truly optimal power-of-two FFT is straightforward: we generate the naive radix-2 FFT code, and then optimize it, removing trivial multiplications (x*1), trivial additions (x+0), etc. That was the approach demonstrated previously.

And the bottom line is: the improved code is better than the original, but still worse than the best (FFTW) code. Here's why: if we consider an expression, (sin(pi/4) * x + cos(pi/4) * y), we may think that it can be optimized to sin(pi/4)*(x+y) thus saving one multiplication. Alas, computed sin(pi/4) and cos(pi/4) are not _exactly_ equal. Therefore, no optimizer is able to detect that they are the common factor. The previous paper BTW did handle the case of sin(pi/4) -- and still fell short of FFTW. To achieve optimum, more complex trigonometric transformations have to be applied.

This paper shows that to achieve the best quality of the generated code, we have to use domain knowledge. In case of FFT, we use the fact that it is a linear transform whose factors are roots of unity. This gives us an abstract representation of terms amenable to exact trigonometry. We can essentially symbolically manipulate the terms, and then concretize our abstract representation into the code. In the end, we generated FFT code that exactly matches the number of FP operations of FFTW. Somewhat unexpectedly, when we changed the function that generates the code for general complex multiplication, we obtained `split-radix FFT' -- another well-known FFT technique.

Oleg points out that the point isn't that they managed to reproduce the FFTW results. The crucial point is that we know exactly which identities (i.e., axioms) contributed to the optimum. The search was principled rather than heuristic, and the code is generated in a single pass. There are no manipulations on the code: it is generated just right.
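To make both points a bit more concrete, here is a small sketch in Haskell (the paper itself works in MetaOCaml, so the names and types below are purely illustrative, not the authors' code). The main action first checks that the floating-point constants computed for sin(pi/4) and cos(pi/4) need not be bit-for-bit equal, which is exactly why a post-hoc optimizer cannot spot the common factor; the rest keeps twiddle factors abstract, as exact roots of unity, so the identity holds by construction and floating point only enters when code is finally emitted.

    -- Illustrative sketch only; not the paper's MetaOCaml code.
    -- A twiddle factor e^(2*pi*i*k/n) is represented exactly by the pair (k, n),
    -- so trigonometric identities can be applied symbolically; floating point
    -- enters only at concretization time.
    data Root = Root Integer Integer          -- Root k n  means  e^(2*pi*i*k/n)
      deriving (Eq, Show)

    mkRoot :: Integer -> Integer -> Root      -- normalize, so equal angles compare equal
    mkRoot k n = Root (k' `div` g) (n `div` g)
      where k' = k `mod` n
            g  = max 1 (gcd k' n)

    mulRoot :: Root -> Root -> Root           -- exact product of two roots of unity
    mulRoot (Root a m) (Root b n) = mkRoot (a * n + b * m) (m * n)

    concretize :: Root -> (Double, Double)    -- (cos, sin); the only inexact step
    concretize (Root k n) = (cos theta, sin theta)
      where theta = 2 * pi * fromIntegral k / fromIntegral n

    main :: IO ()
    main = do
      -- On many platforms the two computed constants differ in the last bit
      -- (this is libm-dependent), so a generic optimizer cannot treat them as
      -- a common factor.
      print (sin (pi / 4) == (cos (pi / 4) :: Double))
      -- The abstract representation has no such problem: the angle pi/4 is
      -- just the exact root with k = 1, n = 8.
      print (mkRoot 1 8 == mkRoot 2 16)       -- True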

MetaKlaim

Gianluigi Ferrari, Eugenio Moggi and Rosario Pugliese

MetaKlaim - a Type Safe Multi-stage Language for Global Computing

This paper describes the design and the semantics of MetaKlaim, a higher-order distributed process calculus equipped with staging mechanisms. MetaKlaim integrates MetaML (an extension of SML for multi-stage programming) and Klaim (a Kernel Language for Agents Interaction and Mobility) to permit the interleaving of meta-programming activities (like assembly and linking of code fragments), dynamic checking of security policies at administrative boundaries, and “traditional” computational activities on a wide-area network (like remote communication and code mobility). MetaKlaim exploits a powerful type system (including polymorphic types à la System F) to deal with highly parameterized mobile components and to dynamically enforce security policies: types are metadata which are extracted from code at run-time and used to express trustiness guarantees. The dynamic type checking ensures that the trustiness guarantees of wide-area network applications are maintained whenever computations interoperate with potentially untrusted components.

Omega

Ωmega is a new programming language by Tim Sheard which is descended from Haskell and adds new facilities for defining static type constraints, such as allowing "users to write functions at the level of types, and then use those functions in the type of functions at value level". It also has "equality qualified types". See also Programming with Static Invariants in Omega and the manual for more information. Mentioned previously (in passing) on LtU.
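For readers who haven't seen the style, here is a rough Haskell approximation of "functions at the level of types" appearing in the type of a value-level function (this is GHC with type families and GADTs, not Ωmega's own syntax, and only an analogy to what the language provides):

    {-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

    data Nat = Z | S Nat

    -- A function at the level of types: addition on type-level naturals.
    type family Plus (m :: Nat) (n :: Nat) :: Nat where
      Plus 'Z     n = n
      Plus ('S m) n = 'S (Plus m n)

    -- A length-indexed vector; the index is a static invariant.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- The type-level function Plus appears in the type of the value-level
    -- function append, so the length of the result is checked statically.
    append :: Vec m a -> Vec n a -> Vec (Plus m n) a
    append Nil         ys = ys
    append (Cons x xs) ys = Cons x (append xs ys)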

Metaphor

Metaphor is a strongly-typed, multi-stage, object-oriented programming language. Metaphor is based on a subset of C# and is extended with multi-stage programming constructs in the style of MetaML or MetaOCaml. Metaphor is implemented as a compiler on the .NET CLR.

Metaphor features a static type system for object-oriented reflection operations (i.e. run-time type analysis). This allows the reflection system to be safely incorporated into the language’s staging constructs and thus allows the generation of code based on the structure of types.
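Metaphor itself is C#-based, but the flavour of MetaML-style staging it builds on can be suggested with the classic staged power function, written here with Haskell's Template Haskell (purely an analogy; this is not Metaphor code and does not show its typed reflection):

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH

    -- The exponent is known at code-generation time, so the generated code
    -- contains only multiplications, with the recursion unrolled away.
    power :: Int -> Q Exp -> Q Exp
    power 0 _ = [| 1 |]
    power n x = [| $x * $(power (n - 1) x) |]

    -- Splicing  $(power 3 [| y |])  at a later stage produces code equivalent
    -- to  y * (y * (y * 1)).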

Pugs, Practicing the Theories.

A lot of language theory goes past here on Lambda the Ultimate, but we rarely see that theory directly impacting commercial programmers.

I'm a great fan of theoretical concepts like arrows, but at the same time I'm a self-employed programmer interested in solving my clients' problems.

Pugs is notable in that it profitably uses recent developments such as GADTs and Template Haskell for an implementation of Perl6.
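For those wondering what GADTs buy an interpreter writer, a minimal (and entirely generic, not Pugs-specific) example: the type index on the abstract syntax lets the evaluator return properly typed values without run-time tag checks.

    {-# LANGUAGE GADTs #-}

    -- A tiny typed expression language; the index on Expr records the type
    -- of value an expression evaluates to.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int  -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

    -- The evaluator needs no error cases: ill-typed terms cannot be built.
    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e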

I recently became a regular on the #perl6 irc channel and soon after joined the list of committers.

In just a few days I've seen a lot. I've seen enthusiastic members of the Perl community learning Haskell. I've seen myself learning Perl. I've also seen how everyday Perl programmers work with abstractions like monad transformers. I've seen how some structures are easy to extend for programmers new to both the Pugs codebase and Haskell.

The Pugs project was started 64 days ago by Autrijus Tang, as an exercise while reading TaPL. Pugs already includes network and threading primitives. New tests and code are added at an amazing rate, as evidenced by the smoke tests.

I don't know if I'll end up using Perl after Pugs is written, but I am learning how to practice the theory of programming language design and implementation.

A New Approach to Abstract Syntax with Variable Binding

Pitts and Gabbay, A New Approach to Abstract Syntax with Variable Binding, FAC 2001.

In the lambda calculus, the particular choice of variable names - even free variables - is irrelevant. Names serve two purposes:

  • as mnemonics to help human readers understand their role, and
  • to distinguish variables that are meant to be different and unify variables that are meant to be the same.

In a theory of binders, only the latter purpose is relevant. This is why it's so annoying to have to deal with capture-avoiding substitution, the Barendregt variable convention, gensym, and the like. These are all ways of getting around the fact that we've committed to a particular name when we really shouldn't have cared what the name was.

There are several standard ways to deal with this. Generating fresh names with gensym is awkward and forces the implementer to reimplement binding and substitution from scratch for each new object language, not to mention its violation of referential transparency. de Bruijn indices remove the variable names but are hard to read and understand. Higher-order abstract syntax implements binders in the object language as functions in the metalanguage, in order to reuse the built-in mechanisms of binding and substitution; but by mixing higher-order values into algebraic datatypes, it makes structural induction hard to recover.
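To see the trade-offs side by side, here are the three representations sketched as Haskell datatypes (the paper's own development is set-theoretic, so treat this purely as orientation):

    -- 1. Named variables: readable, but substitution must avoid capture.
    data Named = NVar String | NApp Named Named | NLam String Named

    -- 2. de Bruijn indices: capture is impossible, but terms are hard to read.
    data DB = DBVar Int | DBApp DB DB | DBLam DB

    -- 3. Higher-order abstract syntax: binders are metalanguage functions,
    --    so substitution comes for free, but structural induction suffers.
    data HOAS = HApp HOAS HOAS | HLam (HOAS -> HOAS)

    -- The identity function \x -> x in each representation:
    --   NLam "x" (NVar "x")     DBLam (DBVar 0)     HLam (\x -> x)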

This paper introduces a theory of fresh names that restores algebraic reasoning, referential transparency, and structural induction to algebraic datatypes with a HOAS-like notation for introducing binders into an abstract syntax. This is the set-theoretical basis for the authors' work on FreshML and FreshO'Caml, which we've discussed a little bit on LtU in the past.

Educational Pearl: Automata as Macros

Shriram Krishnamurthi's classic macro automaton example, well known from his LL1 presentation, is now written up as an educational pearl for JFP.

As you may recall, the example shows how to use Scheme macros to create an automata DSL, and concentrates on the importance of proper tail calls for achieving the desired semantics.
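The pearl itself is about Scheme macros, but the target of the expansion is easy to suggest in any language with proper tail calls: each state becomes a function and each transition a tail call, so the recognizer runs in constant stack space. A hedged sketch, written directly in Haskell rather than generated by a macro:

    -- Hand-written analogue of what the macro expands to, for the classic
    -- "c (a|d)* r" automaton (car, cdr, cadr, caddr, ...).
    recognise :: String -> Bool
    recognise = start
      where
        -- initial state: expect a 'c'
        start ('c':rest) = more rest
        start _          = False
        -- middle state: any run of 'a'/'d', then a final 'r'
        more ('a':rest)  = more rest
        more ('d':rest)  = more rest
        more ('r':rest)  = end rest
        more _           = False
        -- accepting state: succeed only at end of input
        end []           = True
        end _            = False

    -- recognise "cadr" == True,  recognise "cr" == True,  recognise "ca" == False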

Adobe Releases Adam and Eve

It was with some shock that I read (via Slashdot) about Adobe's open-source presentation engine (and layout system). In my experience, very few companies put much stock in writing generic systems that leverage the body of computer science.

So, I was quite pleased to read Sean Parent's comments:

The most ambitious library, Adam, stems from the intuition that the logic behind a simple human interface can be distilled to a function:


f(x) -> x'

The code providing this functionality accounts for a third of Adobe's code base and nearly half of the bugs found during development. Obviously these functions are not so simple, at least not as currently expressed. The apparent complexity is due to a high degree of interconnected variables. Event handling code alone does not have enough context for solving such systems.

The next jaw-dropping tidbit came a few lines later:

Still, I am convinced that writing correct, high performance, and feature rich systems can be orders of magnitude simpler than it currently is. By my estimate, 70% of Adobe's current code base could be better represented declaratively. The remaining imperative logic could be largely generic - reused both within and across Adobe's applications. Realizing even a fraction of this potential would open up a world of opportunities. I strongly suspect the proportions are similar throughout the industry.

The website provides an overview of exactly what has been released:

Adam is a modeling engine and declarative language for describing constraints and relationships on a collection of values, typically the parameters to an application command. When bound to a human interface (HI), Adam provides the logic that controls the HI behavior. Adam is similar in concept to a spreadsheet or a forms manager. Values are set and dependent values are recalculated. Adam provides facilities to resolve interrelated dependencies and to track those dependencies, beyond what a spreadsheet provides.

Eve consists of a declarative language and layout engine for constructing an HI. The layout engine in Eve takes into account a rich description of UI elements to achieve a high quality layout - rivaling what can be achieved with manual placement. A single HI description in Eve suffices for multiple OS platforms and languages. This document describes Eve2, the latest version of Eve. Eve2 was developed to work with Adam and to incorporate many improvements that have been requested since Eve1 was written.

It is important to note that Adam and Eve do not constitute a traditional application framework. They are component libraries which can be incorporated into a number of environments. They can be used together, or independently, but must be combined with other facilities to construct an application. Nearly all of the components which comprise Adam and Eve can also be used independently and are documented as part of ASL.
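To give a feel for the "spreadsheet for command parameters" model described above, here is a deliberately tiny sketch in Haskell (nothing to do with Adobe's actual API or the Adam language; just the idea of dependent values being recomputed from the ones that were set directly):

    import qualified Data.Map as M

    type Name  = String
    type Sheet = M.Map Name Formula

    -- A cell is either set directly or defined in terms of other cells.
    data Formula = Const Double
                 | Ref Name
                 | Plus Formula Formula

    -- Recompute a value on demand from whatever is currently in the sheet.
    -- (No cycle detection or dependency tracking; a real engine such as Adam
    -- does considerably more.)
    value :: Sheet -> Formula -> Double
    value _ (Const x)  = x
    value s (Ref n)    = maybe 0 (value s) (M.lookup n s)
    value s (Plus a b) = value s a + value s b

    -- "width" and "margin" are set by the user; "total" is a dependent value.
    example :: Sheet
    example = M.fromList
      [ ("width",  Const 400)
      , ("margin", Const 20)
      , ("total",  Plus (Ref "width") (Plus (Ref "margin") (Ref "margin")))
      ]

    -- value example (Ref "total")  ==>  440.0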

Sadly, my experience with Adobe's products extends only to their Acrobat Reader software, so I can't claim from personal experience that they have achieved their aim of writing stable software.

However, the fact that top-level architects at Adobe are cognizant of declarative paradigms and generic programming gives me new hope in the industry.

Embedded Interpreters

This is a tutorial on using type-indexed embedding projection pairs when writing interpreters in statically-typed functional languages. The method allows (higher-order) values in the interpreting language to be embedded in the interpreted language and values from the interpreted language may be projected back into the interpreting one. This is particularly useful when adding command-line interfaces or scripting languages to applications written in functional languages. We first describe the basic idea and show how it may be extended to languages with recursive types and applied to elementary meta-programming. We then show how the method combines with Filinski's continuation-based monadic reflection operations to define an extensional version of the call-by-value monadic translation and hence to allow values to be mapped bidirectionally between the levels of an interpreter for a functional language parameterized by an arbitrary monad. Finally, we show how SML functions may be embedded into, and projected from, an interpreter for an asynchronous pi-calculus via an extensional variant of a standard translation from lambda into pi.

Another paper from Nick Benton.

Like the previous one, this paper is dense and detailed, but this time there are some useful practical techniques that may come in handy the next time you build a DSL in a functional language.
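The core trick is compact enough to sketch. Below is a hedged Haskell rendering of a type-indexed embedding/projection pair (the paper works in SML and goes much further, covering recursive types, monadic reflection, and the pi-calculus): a universal value type for the interpreted language, and for each host type a pair of maps into and out of it, built by induction on the structure of the type.

    -- Illustrative only; names and representation are not Benton's.
    data U = UInt Int
           | UFun (U -> U)

    data EP a = EP { embed :: a -> U, project :: U -> a }

    projInt :: U -> Int
    projInt (UInt n) = n
    projInt _        = error "type mismatch: expected an integer"

    int :: EP Int
    int = EP UInt projInt

    -- The interesting case: a host function is embedded by wrapping it so that
    -- arguments get projected and results get embedded; projection goes the
    -- other way around.
    (-->) :: EP a -> EP b -> EP (a -> b)
    ea --> eb = EP emb prj
      where
        emb f        = UFun (embed eb . f . project ea)
        prj (UFun g) = project eb . g . embed ea
        prj _        = error "type mismatch: expected a function"

    -- Exposing a host function to the interpreted language, and pulling an
    -- interpreted function back out:
    --   embed   (int --> int) succ  :: U
    --   project (int --> int) u     :: Int -> Int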

Code Generation with Python, Cog, and Nant

We've been using C# for a couple of years now, and are getting tired of the verbosity. Especially tired of copy/pasting and changing a couple of identifiers, and I imagine many other people are, too. After seeing some of the macro capabilities of Lisp, we got jealous. After some googling and browsing, I ran across Ned Batchelder's python-based code generation tool, Cog.

A nice description of using code generation in real life. Might help explain the idea to programmers unfamiliar with the technique.
