The Landscape of Parallel Computing Research: A View from Berkeley

(via Tim Bray)

Since real world applications are naturally parallel and hardware is naturally parallel, what we need is a programming model, system software, and a supporting architecture that are naturally parallel. Researchers have the rare opportunity to re-invent these cornerstones of computing, provided they simplify the efficient programming of highly parallel systems.

An interesting survey by a large and multidisciplinary group of Berkeley researchers. I think it is useful background reading for programming language researchers, even if the discussion of programming models is not up to the standards of LtU...

And yes, STM is mentioned (albeit briefly)...

What is a "Dwarf"

On page 14, after Figure 4, there is the following statement about the finite state machine abstraction:

Although 12 of the 13 Dwarfs possess some form of parallelism, finite state machines (FSMs) look to be a challenge, which is why we made them the last dwarf. Perhaps FSMs will prove to be embarrassingly sequential just as MapReduce is embarrassingly parallel. If it is still important and does not yield to innovation in parallelism, that will be disappointing, but perhaps the right long-term solution is to change the algorithmic approach. In the era of multicore and manycore, popular algorithms from the sequential computing era may fade in popularity. For example, if Huffman decoding proves to be embarrassingly sequential, perhaps we should use a different compression algorithm that is amenable to parallelism.
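
To make the paper's worry concrete: an FSM's next state depends on the state produced by the previous transition, so the transitions over a single input stream form a serial dependence chain. A minimal sketch in Go (my own toy example, not from the paper):

    // Why an FSM is hard to parallelize: each transition needs the state
    // produced by the previous one, so steps over one input stream cannot
    // run side by side. Hypothetical toy example, not from the paper.
    package main

    import "fmt"

    func main() {
        // A 2-state machine over binary input: transition[state][symbol] -> next state.
        transition := [2][2]int{
            {0, 1}, // from state 0: on 0 stay, on 1 go to state 1
            {1, 0}, // from state 1: on 1 go back to state 0
        }

        input := []int{1, 0, 1, 1, 0}
        state := 0
        for _, sym := range input {
            // This step cannot begin until the previous step has produced `state`.
            state = transition[state][sym]
        }
        fmt.Println("final state:", state)
    }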

A sequential process with no parallel decomposition can still make effective use of N cores by pipelining: assign each of the N steps of the process to its own core. There is, of course, an N-step delay to "fill up the pipeline," so to speak, but after that you get one output for each input.
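
Here is a minimal sketch of that pipelining idea in Go (my own illustration, not from the paper): each step of a strictly sequential 3-step process runs on its own goroutine, so different items occupy different stages at the same time even though each individual item passes through the steps in order.

    // Pipeline sketch: a per-item sequential process (step1 -> step2 -> step3)
    // can still keep several cores busy, because different items occupy
    // different stages concurrently. Stage functions are placeholders.
    package main

    import "fmt"

    // makeStage wraps one sequential step as a pipeline stage on its own goroutine.
    func makeStage(step func(int) int, in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            for v := range in {
                out <- step(v)
            }
        }()
        return out
    }

    func main() {
        // Source: feed the inputs into the first stage.
        in := make(chan int)
        go func() {
            defer close(in)
            for i := 1; i <= 5; i++ {
                in <- i
            }
        }()

        // Three sequential steps, each running concurrently with the others.
        s1 := makeStage(func(v int) int { return v + 1 }, in)
        s2 := makeStage(func(v int) int { return v * 2 }, s1)
        s3 := makeStage(func(v int) int { return v - 3 }, s2)

        // After the pipeline fills (3-item latency), one result emerges per input.
        for v := range s3 {
            fmt.Println(v)
        }
    }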

Overall, the paragraph seems to me to be misleading and confusing. Finite state machines are a way to decompose processes, not really a "dwarf".

Back to the future

The current excitement about concurrency and parallelism (as exemplified by this proposed line of research at UCB) sounds a lot like the hype that surrounded occam and the transputer 20 years ago. IIRC one of the primary rationales for the transputer was that single-core processors were felt to be reaching performance limits. There was also much talk at the time about matching the natural concurrency of certain problem domains. Now, granted, INMOS was ahead of its time in many ways. But it does make me wonder if, just as the transputer eventually fell out of favor, the current fad for multicore systems will fade once the next big performance breakthrough is made in the single-core arena. It'll be interesting to see what happens.