Haskell is not not ML

Haskell is not not ML. Ben Rudiak-Gould, Alan Mycroft, and Simon Peyton Jones. European Symposium on Programming 2006 (ESOP'06).

We present a typed calculus IL ("intermediate language") which supports the embedding of ML-like (strict, eager) and Haskell-like (non-strict, lazy) languages, without favoring either. IL's type system includes negation (continuations), but not implication (function arrow). Within IL we find that lifted sums and products can be represented as the double negation of their unlifted counterparts. We exhibit a compilation function from IL to AM --- an abstract von Neumann machine --- which maps values of ordinary and doubly negated types to heap structures resembling those found in practical implementations of languages in the ML and Haskell families. Finally, we show that a small variation in the design of AM allows us to treat any ML value as a Haskell value at runtime without cost, and project a Haskell value onto an ML type with only the cost of a Haskell deepSeq. This suggests that IL and AM may be useful as a compilation and execution model for a new language which combines the best features of strict and non-strict functional programming.

The authors start from the claim that most of the differences between SML and Haskell are independent of evaluation order. Is it possible, they wonder, to design a hybrid language which in some way abstracts over possible evaluation orders?

This paper leaves the language design for future work and concentrates on the implementation costs. The results seem positive, so one hopes this project will mature and end the civil war between lazy and eager functional programming...
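For concreteness, the "cost of a Haskell deepSeq" mentioned in the abstract is a full structural traversal: projecting a lazy value onto an ML-style strict type means forcing every cell and element. Here is a minimal sketch of that idea (not code from the paper; the type `List` and the helper `deepSeqList` are made-up illustrations, playing the role that a generic deepSeq would):

```haskell
-- Illustrative only: a home-grown list type and a hand-rolled deepSeq
-- for it, showing that the projection walks the whole structure.
data List a = Nil | Cons a (List a)

-- Force the spine and every element of the list, then return b.
deepSeqList :: List a -> b -> b
deepSeqList Nil         b = b
deepSeqList (Cons x xs) b = x `seq` deepSeqList xs b

main :: IO ()
main = do
  let xs = Cons 1 (Cons 2 (Cons 3 Nil)) :: List Int
  -- Every cons cell and element is evaluated before "done" is printed;
  -- that traversal is the one-time cost the abstract refers to.
  putStrLn (deepSeqList xs "done")
```

A real implementation would use something like GHC's `Control.DeepSeq` type class instead of a per-type helper, but the cost model is the same: linear in the size of the structure being projected.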

More information on this project is likely to appear here.

This paper isn't easy. Feel free to use LtU for help.

Fortunately for me, the time

Fortunately for me, the time I spent trying to understand Wadler's dual calculus helped me get this one.

Side question: This is one of many papers which use "&" and "∨" for conjunction and disjunction instead of "∧" and "∨". Is there a reason for that?

A guess...

Perhaps to avoid confusion between the conjunction sign (∧) and a capital lambda (Λ)?


This isn't directly related to the paper, but is anyone researching the physics of programming languages? Lazy evaluation makes me think about reversible computers and the energy used by computation. Theoretically, you can make the energy lost to heat really small, dependent only on the number of bits in your output. The catch, of course, is that while the computation is in progress, you'd need to keep track of a potentially huge number of intermediate states in order to reverse the machine and get your energy back. That seems analogous to the problem of laziness-induced space leaks in pure normal-order languages. Is there anything from thermodynamics that we can use to create better programming languages?

If nothing else, it seems like this paper might lead to something closer to Dynamic Eager Haskell than we currently have.


I just came across a very fun paper called "Thermodynamics and Garbage Collection" (.html or .ps.gz) by Henry Baker this weekend. Makes me want to go back and relearn freshman physics.

Amr Sabry's or Thorsten Altenkirch's quantum-computing papers might have some starting points on reversible computers.

Thermodynamics and Garbage Collection

I was going to mention that one... Check it out.


Can someone tell me how the SL->AM and LL->AM pair from this paper differ significantly from translating SML to SML and (some subset of) Haskell to SML in the "straightforward" way, using something like SMLofNJ.Susp? The SMLofNJ.Susp structure looks like:

structure Susp : sig
    type 'a susp
    val delay : (unit -> 'a) -> 'a susp
    val force : 'a susp -> 'a
  end

and does what you'd expect from the types.
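For intuition about what delay/force do operationally, here is a toy model of a memoizing suspension in Haskell using an IORef. This is only a sketch of the SMLofNJ.Susp behavior (SML's delay takes a pure unit -> 'a; modeling the thunk as an IO action here is a simplification so we can observe how many times it runs), not anything from the paper's abstract machine:

```haskell
import Data.IORef

-- A suspension is either an unevaluated thunk or a cached value.
data Cell a = Thunk (IO a) | Value a
newtype Susp a = Susp (IORef (Cell a))

-- delay: wrap an action without running it.
delay :: IO a -> IO (Susp a)
delay act = Susp <$> newIORef (Thunk act)

-- force: run the thunk at most once, caching its result.
force :: Susp a -> IO a
force (Susp r) = do
  cell <- readIORef r
  case cell of
    Value v   -> return v          -- already forced: no recomputation
    Thunk act -> do
      v <- act
      writeIORef r (Value v)       -- overwrite the thunk with its value
      return v

main :: IO ()
main = do
  count <- newIORef (0 :: Int)
  s <- delay (modifyIORef count (+ 1) >> return 42)
  a <- force s
  b <- force s                     -- memoized: the action runs only once
  n <- readIORef count
  print (a, b, n)                  -- (42,42,1)
```

The thunk-overwriting in force is essentially the update step a lazy runtime performs on a heap-allocated thunk, which is why the "straightforward" Susp encoding gets you surprisingly far.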

reminds me of call-by-push-value

The IL described in section 2, and the translation from strict and non-strict languages into the IL, are strongly reminiscent of Paul Blain Levy's call-by-push-value. Both the IL and the CBPV calculus classify expressions so as to disallow certain expressions where you might expect them: for example, the components of a tuple must have non-0 type in IL, while in CBPV they must be values. Unfortunately, at the end of a long day I'm not up to the task of sorting out the similarities and differences.

Last year, motivated by Wadler's dual calculus, which brings together strict and non-strict evaluation, I developed my toy language Ambidexter. The current interpreter is worth a look. Rather than "abstract over possible evaluation orders", it puts the two on an equal footing and permits conversions back and forth.