Programmable Concurrency in a Pure and Lazy Language

Programmable Concurrency in a Pure and Lazy Language, Peng Li's 2008 PhD dissertation, is a bit more implementation-focused than is common on LtU. Still, the paper touches on a wide range of concurrency mechanisms, so it may have value to any language designer considering ways to tackle the concurrency beast.

First, this dissertation presents a Haskell solution based on concurrency monads. Unlike most previous work in the field, this approach provides clean interfaces to both multithreaded programming and event-driven programming in the same application, and it does not require native support for continuations from compilers or runtime systems. The dissertation then investigates a generic solution for supporting lightweight concurrency in Haskell, compares several possible concurrency configurations, and summarizes the lessons learned.

The paper's summary explains what I like most about it:

the project ... solves a systems problem using a language-based approach. Systems programmers, Haskell implementors and programming language designers may each find their own interests in this dissertation.

Even if concurrency isn't your thing, section 6.3 describes the author's findings on the pros and cons of both purity and laziness in a systems programming context.


Lazy vs Strict

As expected, laziness is a rather tricky business. Has anyone here on LtU given some thought to the trade-offs between lazy and strict by default (or know of some work on this)?

I think there isn't much controversy about the usefulness of laziness: it kept Haskell pure, for one, and Bird et al. remark that lazy evaluation can lead to better algorithmic complexity. Efficient laziness probably needs support from the runtime system, too. I think that Scala has laziness annotations for function arguments. Does anyone have experience using this feature?

Google is your friend...

The query "lazy vs strict" produced the following threads:

Practical Advantages of Lazy Evaluation is a recent, highly relevant thread.

Combining lazy and eager evaluation of terms

Applicative vs Functional (applicative order is, for the most part, another term for strict order)

Question about laziness and algorithmic runtime analysis seems to address one of your questions

Why Functional Programming Matters is an excellent introduction to the topic.

I'd do more, but I have a strict limit of five links. Any more might be flagged as spam, and besides, I'm lazy....

Scala programming lazily

Strictly speaking (ha!), Scala has a call-by-name argument annotation rather than the call-by-need that people generally mean when they say "lazy." Its big advantage is syntactic; you can get call-by-name in other strictly evaluated functional languages, with a slightly heavier syntactic burden, by using functions of type Unit -> t. That said, it's some pretty sweet syntactic sugar, since it lets you create control structures that look like native syntax. It's the kind of thing that isn't theoretically a huge deal, but you sure miss it when you don't have it.
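A minimal sketch of the point above (the names `unless` and `unlessThunk` are made up for illustration, not standard library code):

```scala
object ByNameDemo {
  // `body: => Unit` is Scala's call-by-name annotation: the argument is
  // passed unevaluated and re-evaluated each time the method mentions it.
  def unless(cond: Boolean)(body: => Unit): Unit =
    if (!cond) body

  // The same thing in a plain strict language: wrap the body in an
  // explicit thunk (here `() => Unit`) and call it by hand.
  def unlessThunk(cond: Boolean)(body: () => Unit): Unit =
    if (!cond) body()

  def main(args: Array[String]): Unit = {
    // Reads like a native control structure:
    unless(1 > 2) { println("1 is not greater than 2") }
    // Same behavior, heavier syntax:
    unlessThunk(1 > 2)(() => println("1 is not greater than 2"))
  }
}
```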

Scala also has an optional true-blue lazy annotation (`lazy val`) on immutable variables. It brings some (but by no means all) of the advantages of Haskell-style laziness for some limited use cases.
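For contrast, a small sketch of that annotation: unlike a by-name argument, a `lazy val` is genuine call-by-need, evaluated at most once on first access and cached afterwards (the counter here is just instrumentation for the example):

```scala
object LazyValDemo {
  var evaluations = 0 // counts how often the body below actually runs

  // The right-hand side runs at most once, on first access,
  // and the result is cached for every later access.
  lazy val answer: Int = { evaluations += 1; 42 }

  def main(args: Array[String]): Unit = {
    println(evaluations) // 0: nothing evaluated yet
    println(answer)      // 42: the body runs now
    println(answer)      // 42 again, from the cache
    println(evaluations) // 1: evaluated exactly once
  }
}
```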

In general, I think they're quite useful, but the mix of non-strict and strict evaluation in an impure language isn't all sunshine. Ordering of side effects is hard enough to figure out in a strictly evaluated environment. Every couple of weeks somebody on the mailing list is confused because they mapped an impure function over a lazy list and didn't get the effects they expected, when they expected them.
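That mailing-list confusion is easy to reproduce with a lazy view (Scala 2.13 collections; the example is illustrative):

```scala
object LazyEffectsDemo {
  def main(args: Array[String]): Unit = {
    // `.view` makes the map lazy: the impure function is NOT called here.
    val doubled = List(1, 2, 3).view.map { x =>
      println(s"touching $x") // side effect buried in a lazy map
      x * 2
    }
    println("mapped, but nothing printed yet")
    // The effects fire only when the view is forced:
    println(doubled.toList)
  }
}
```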

re: Better Algorithm Complexity

From the thesis:

Besides the complexity of the runtime system, laziness is a generally challenging aspect of Haskell programming, because laziness makes it difficult for the programmer to reason about evaluation order and memory usage. In my experience of developing the concurrency library, my program occasionally run into stack overflow errors and it often takes a long time to find out that the problem is on the order of evaluation for large, purely functional data structures. When the concurrency library manages more than 100,000 threads, its internal data structures, such as task queues, need to be carefully engineered to make sure that operations on such large structures do not cause memory leaks. When the program complexity scales up, it is generally difficult to get everything right even for an experienced Haskell programmer — the strictness annotations and order of evaluation must be all correct, or there will be memory leaks.

This is the general experience of people who program a lot in lazy languages. Whether it is a fundamental problem is unknown. An uncharitable observation is that sufficiently large programs seem to work only with added strictness annotations.

That might be one reason monads are so popular at the moment: they often 'linearize' the computation, resulting in more predictable behavior.

I guess one should be able to prove that for every eager statement of an algorithm there exists an equivalent (bisimilar) lazy algorithm, and vice versa. It might sometimes just be hard to find the equivalent lazy algorithm, but that might also be down to inexperience.

On better algorithm complexity

From the article:

This demonstrates that lazy evaluation is strictly more efficient - in asymptotic terms - than eager evaluation, and that there is no general complexity-preserving translation of lazy programs into an eager functional language.

Yeah, well, except for an interpreter.

Variable lookup in such an

Variable lookup in such an interpreter wouldn't be constant-time though, would it? The premise assumes you don't have a handy array.


Wrong premise, I guess. Basic idea: Haskell programs are run on a strict machine; that doesn't make them faster.

I will scan the paper, but my best guess is that if you need O(log N) lookup on variables in the eager evaluation, you would need the same for the lazy evaluation. Maybe they play a trick at the mathematical level, on the lazy term language, which superficially gives them O(1).

[Scanned the abstracts. The claim is that there are lazy counterparts of algorithms which (with memoization) have better complexity than certain impure eager algorithms (I guess without memoization), translated only under severe restrictions. Yeah, well, duh... If I tie your hands in a tennis match, I am also certain to win the game.]

[Without having read the whole article: I guess their basic statement is "Purity can be exploited to implement memoization easily, and will therefore often result in fast and concise algorithms." I am not sure whether memoization here means sharing of nodes, or the classical optimization of memoizing small values of small functions often found in pure languages.]
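If memoization-via-sharing is what is meant, the classic illustration is a self-referential lazy sequence: each element is a thunk that is computed once and then shared, turning an exponentially recursive recurrence into a linear-time one. A sketch using Scala 2.13's `LazyList`:

```scala
object SharingDemo {
  // Each element is computed at most once and shared by every later
  // reference, so fibs(n) costs O(n) additions rather than the
  // exponential cost of the naive doubly recursive definition.
  val fibs: LazyList[BigInt] =
    BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

  def main(args: Array[String]): Unit = {
    println(fibs.take(11).toList) // 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
  }
}
```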

Red Herring

I don't think this is a problem. For a particular program, lookups of its fixed set of variable names by different methods can only contribute a constant factor to its execution time. O(log n) complexity is only a problem as n goes to infinity, which it doesn't if the set of variables is held fixed.


It's not looking up variables in an environment that's the problem; it's implementing the thunks without fast arrays or mutation.

Evaluation is disappointing

I think that the concurrency library elements are very interesting, and I think that the concept of programmable concurrency is validated. But as a systems person I find the evaluation method described in section 5.4 disappointing:

  • Given that network and web performance are evaluated, why aren't standard benchmarks used? Usually, the failure to use such benchmarks is driven by deficiencies in the compared implementation that make a realistic evaluation impossible. The problem here is that we cannot tell whether those deficiencies are driven by architecture issues, design issues, implementation issues, or language issues.
  • Concerning FIFO performance, why is a measurement that reveals a performance difference resulting primarily from the I/O model interesting? Is the I/O model difference uniquely enabled by Haskell, or is this merely a distraction?
  • There is a long history in the systems community of naive benchmarks on web services: one implements a simple web server that does the easy part of HTTP and only handles files, and then compares this to Apache, which has a rich architecture for connection re-use, scripts, and so forth. Whether those other features are exercised or not, the requirements to support them dramatically alter the architecture of the web server, and this has implications for performance. It is well known that "simple" web servers implemented in C outperform Apache by large factors on such naive benchmarks.
  • There is a similarly long history of naive benchmarks in networking.

Penn is the place where SwitchWare was done, and there is a lot of expertise around on the use and evaluation of functional techniques in systems contexts. Perhaps it was consulted, but I don't see evidence of that in the evaluation section of the dissertation. What these results do say is that further work is warranted to perform a comprehensive evaluation, and that is very good. The question is: will such an evaluation actually be done? Perhaps Peng will undertake more detailed evaluation in future publications. That would be truly wonderful.

In fairness to Peng, I wonder if I am not becoming an old fart. The depth and maturity of evaluation I am looking for is beyond the reasonable scope of a reasonable PhD process for a single student. This is the endemic problem of systems research, and in this respect Peng is merely "caught in the gears" just as every other systems student is. The only alternative seems to be holding your students too long so they can finish, which has become my own abiding sin as an adviser. I keep looking for a middle ground, but I haven't found one.