Composable memory transactions

Composable memory transactions. Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy. Submitted to PPoPP 2005.

Writing concurrent programs is notoriously difficult, and is of increasing practical importance. A particular source of concern is that even correctly-implemented concurrency abstractions cannot be composed together to form larger abstractions. In this paper we present a new concurrency model, based on transactional memory, that offers far richer composition. All the usual benefits of transactional memory are present (e.g. freedom from deadlock), but in addition we describe new modular forms of blocking and choice that have been inaccessible in earlier work.

The work is in the context of Concurrent Haskell (of course). The authors work from the assumption that a purely functional (they use the term declarative) language is a perfect setting for transactional memory. The reason is that mutable state is rare and explicit.
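The "modular forms of blocking and choice" the abstract mentions are the `retry` and `orElse` combinators. A minimal sketch, using the `Control.Concurrent.STM` API that GHC eventually shipped from this work (the function names here are my own, not the paper's):

```haskell
import Control.Concurrent.STM

-- A one-place buffer: taking from it blocks (via retry) while it is empty.
takeItem :: TVar (Maybe a) -> STM a
takeItem v = do
  m <- readTVar v
  case m of
    Nothing -> retry                        -- block until the buffer is refilled
    Just x  -> writeTVar v Nothing >> return x

-- Composable choice: try the first buffer, fall back to the second.
-- Neither takeItem needed to know it would be composed this way.
takeEither :: TVar (Maybe a) -> TVar (Maybe a) -> STM a
takeEither a b = takeItem a `orElse` takeItem b
```

The point of the paper is exactly that `takeEither` is impossible to write against lock-based abstractions: blocking and alternatives compose only because `retry` and `orElse` are first-class.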



By the way, and related to the other discussions going on, notice that the type system is used to distinguish actions that modify the transactional store, IO actions, and pure code.
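A minimal sketch of that three-way separation, using GHC's `Control.Concurrent.STM` (the function names are hypothetical):

```haskell
import Control.Concurrent.STM

counter :: TVar Int -> STM ()   -- may touch transactional state only
counter v = modifyTVar' v (+1)

bump :: TVar Int -> IO ()       -- IO actions may run transactions...
bump v = atomically (counter v) -- ...but only through atomically

double :: Int -> Int            -- pure code: no effects of either kind
double = (* 2)
```

The types enforce the discipline: an `STM` action cannot perform arbitrary IO (so it is safe to re-run on rollback), and pure code cannot touch either kind of state.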


This was cool. I like the way they manage to restrict direct side-effects in transactions using the type system. I've appreciated qualities like that in embedded DSLs but it's better that they don't need to invent a new sub-language for it.

I would like to see some bigger example programs. What can you do with these new primitives?


It's A4-size ps, unconvertible to PDF according to ghostscript. Why can't people just post sources? :(

If anyone manages to convert it to PDF, drop me a line. I emailed the author and got an out of office reply.


ghostscript managed to handle it just fine here.

When will academia wake up and abandon .ps?

I wonder how many in academia realize just how many readers simply ignore their work because it is so inaccessible -- by virtue of being provided only in postscript form, instead of the much more widely and conveniently available pdf form?

Anyway, here is a PDF of that paper. I made it using Babinszki's "'Net Distillery".

I prefer ps to pdf

Actually the only way I can print pdf is to convert it to ps (on Linux). It is done either explicitly or implicitly by the printing machinery.

Especially if I want to print it as a booklet: utilities for suitable rearranging of pages exist for ps (psbook, psnup, psselect).

I've encountered both ps and pdf files that were problematic. But this article posed no problems at all; the ps is good: understood both by ghostscript and by my printer.

When will everyone wake up and abandon .pdf?

How many in academia realize the number of readers who ignore their work because it is provided only in PDF form instead of the more convenient, indexable and faster-loading HTML?


I prefer PDF, probably because I've seen few good-looking HTML papers so far.


HTML papers. Oh, that would be wonderful...

But I dislike PDF far less than PS.


Is there a chance someone will want to discuss the paper itself, rather than the file format?! (And yeah, I know PS is a language...)


I'm reading the PDF version the author sent me...

jocaml/polyphonic c#

I notice that the author makes reference to Polyphonic C#, but doesn't address the performance implications of the transactional memory model.

Does anyone know of any attempts to port the join calculus to Haskell?

Reference to Polyphonic C#

I am somewhat puzzled by the remark in section 7.2: "this kind of synchronization, which is hard to build with STM, is extremely easy to build with a chord in Benton et al.'s Polyphonic C#."
It looks almost obvious that STM is powerful enough to build an interpreter for the join calculus (and therefore for Polyphonic C#?), so STM supports every construct available in Polyphonic C#.
Or is this need for interpretation exactly what "hard to build" means?
Or maybe the authors mean that swaps cannot be composed by mere sequencing? Perhaps what's needed is something like an andThen combinator, which would run two transactions in sequence but not roll back the first if the second fails?
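Note that such an andThen cannot live inside STM itself: within one transaction, plain `>>=` already sequences actions, and rollback is all-or-nothing. What the comment describes sounds like two separately committed transactions, which can only be sequenced at the IO level. A sketch (the name `andThen` is the commenter's, not a library function):

```haskell
import Control.Concurrent.STM

-- Run t1 as its own atomic transaction, commit it, then run the
-- continuation as a second transaction. A retry or failure in the
-- second transaction does NOT undo the already-committed first one.
andThen :: STM a -> (a -> STM b) -> IO b
andThen t1 k = do
  x <- atomically t1     -- commits independently
  atomically (k x)       -- rollback here cannot reach t1's effects
```

Of course, this gives up atomicity across the pair, which is presumably why the paper keeps composition inside a single transaction instead.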

An unrelated idea: is it possible to generalize TVars to input/output channels? I realize that consistency checking might become much more complicated (impossible?), but then again this would remove the need for the IO monad completely.
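Within the program, channels are already buildable from TVars alone; a simplified sketch (hypothetical names, and a naive list-based queue for brevity):

```haskell
import Control.Concurrent.STM

-- An unbounded channel built from an ordinary TVar.
newtype Chan' a = Chan' (TVar [a])

newChan' :: STM (Chan' a)
newChan' = Chan' <$> newTVar []

writeChan' :: Chan' a -> a -> STM ()
writeChan' (Chan' v) x = modifyTVar' v (++ [x])

readChan' :: Chan' a -> STM a
readChan' (Chan' v) = do
  xs <- readTVar v
  case xs of
    []       -> retry                    -- block until something is written
    (y : ys) -> writeTVar v ys >> return y
```

Real input/output devices are a different matter: an external effect cannot be rolled back on retry, which is presumably why the IO monad has to stay at the boundary.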

side effects inside transactions

This would seem to solve the PCLSRing problem in system calls. Norm Hardy pointed out to me that EROS and KeyKOS are what Jonathan Shapiro calls "atomic-style" kernels: they never context-switch away from a process while it's inside a system call, so they have to deal with PCLSRing on every context switch.

However, page faults (even inside system calls) are a common reason for context switches. So when we hit a page fault inside a system call, we need to roll back the side effects of all the code executed so far in the system call --- except that we must not back out the side effect of paging in the faulting page, or we will never make any progress!

I wonder if the exception mechanism given in this paper is sufficient for that sort of thing; the desired semantics for a page fault are more like "retry". Otherwise, Haskell's strict type-checking could make this difficult.

The original PCLSR mechanism, according to Bawden's paper, was less restrictive: it didn't insist that backing out undo every side effect, only that the remaining effects, when composed with the effects of retrying, add up to the originally-requested effects. But KeyKOS insisted that there be no user-visible side effects.