Implicit parallel functional programming

This discussion over on the Haskell Mailing List might interest those readers following the debates on concurrency issues in the discussion group...

TCO for free?

They mentioned emerging multicore systems. The Sony/IBM/Toshiba Cell architecture has not been disclosed yet, but this patent seems relevant.
[...] Previous cell ID 2330 provides the identity of a previous cell in a group of cells requiring sequential execution, e.g., streaming data. [...]
Is it just me, or does this cell look like an activation record? Will this hardware free us from the stack? TCO for free? :-)

Locally, the stack is still present, but it seems to be very short-lived. I might be wrong... I cannot wait for the official presentation.

As seen on Slashdot, Cell Arch Explained.

http://www.blachford.info/computer/Cells/Cell0.html

It's interesting stuff. This guy says he's extrapolating from the patent description, so real details won't be known until the specs are released.

If what he describes is true, then Cell is a halfway point between NVIDIA GPUs and Intel's general-purpose CPUs.

As Bjorn Lisper said in the thread linked above, each CPU has a fast small local memory, and you distribute your computations to each CPU.

The software cells sound a lot like the Itanium's Explicitly Parallel Instruction Computing. This would be perfect for declarative parallelism such as the Nested Data Parallelism of the Nepal project.
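To make that concrete, here is the textbook nested-data-parallel example, sparse matrix-vector multiply, sketched in plain Haskell. Ordinary lists stand in for Nepal's parallel arrays, and the names are mine; a real NDP implementation would flatten the nested structure for the hardware:

    -- A sparse row is a list of (column index, value) pairs.
    type SparseRow    = [(Int, Double)]
    type SparseMatrix = [SparseRow]

    -- The outer comprehension is a parallel map over rows; the inner sum is
    -- a parallel reduction within a row. Flattening turns this irregular
    -- nesting into flat data parallelism the machine can execute.
    sparseMatVec :: SparseMatrix -> [Double] -> [Double]
    sparseMatVec m v = [ sum [ x * (v !! i) | (i, x) <- row ] | row <- m ]

    main :: IO ()
    main = print (sparseMatVec [[(0,2.0),(2,1.0)],[(1,3.0)]] [1.0,2.0,3.0])

The irregular row lengths are exactly what defeats flat data-parallel models and makes this a good fit for NDP.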

I think this guy is confused about how it will affect MS's OS dominance; MS has been moving to the .NET virtual machine in preparation for just such a dramatic shift in CPU dominance.

I don't like the idea of in-CPU hardware digital restrictions management (DRM), but then, who does other than Sony?

--Shae Erisson - ScannedInAvian.com

It is interesting to see this come up again

Implicit parallelism is one of the classic arguments for functional programming. This goes back to the era of Backus's "Can Programming Be Liberated..." paper. But it turned out to be one of those oft-cited but unfortunately not-anywhere-near-as-easy-as-we'd-like-to-think goals, like "a Sufficiently Smart Compiler."

MapReduce

Implicit parallelism is one of the classic arguments for functional programming.

Applications like Google's MapReduce show that the argument can still be pertinent for specific cases, even if the general case is too hard.
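Stripped of the distribution machinery, the pattern is a few lines of Haskell. The names and types below are illustrative, not Google's actual interface; the point is that both phases are pure functions, so a runtime is free to spread them across machines:

    import Data.List (foldl')
    import qualified Data.Map as Map

    mapReduce :: Ord k
              => (a -> [(k, v)])   -- per-record map step
              -> (k -> [v] -> r)   -- per-key reduce step
              -> [a]               -- input records
              -> Map.Map k r
    mapReduce mapper reducer inputs =
        Map.mapWithKey reducer grouped
      where
        -- The "shuffle" phase: group intermediate values by key.
        grouped = foldl' insert Map.empty (concatMap mapper inputs)
        insert m (k, v) = Map.insertWith (++) k [v] m

    -- Example: word count.
    main :: IO ()
    main = print (mapReduce (\s -> [(w, 1 :: Int) | w <- words s])
                            (\_ vs -> sum vs)
                            ["to be or not", "to be"])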

Fifth Generation project

The advocates of functional programming, logic programming, dataflow and whatnot would do well to read up on the previous wave of implicit parallelization in the (roughly) 1980-1995 timeframe, sometimes known as the Fifth Generation project.

As a veteran private of that war, my takeaway message was this: it's the algorithm that matters. A garden variety vectorized scientific computation can exploit huge parallelism, while an ordinary sequential computation (the average compiler, perhaps?) will give you little to work with, regardless of language.
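The point shows up even on a single multicore box. With GHC's Control.Parallel.Strategies (from the parallel package), the vectorizable case parallelizes in one line, while nothing comparable exists for an inherently sequential program; the workload below is just a stand-in for real numeric work:

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Independent work per element, so parMap can farm the list out across
    -- cores (build with -threaded, run with +RTS -N).
    expensive :: Double -> Double
    expensive x = sum [sin (x * fromIntegral k) | k <- [1 .. 10000 :: Int]]

    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [1 .. 100 :: Double]))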

The sea change will, then, really be to re-express the jobs we do for reasonably efficient, correct, maintainable, scalable parallel computation in application domains that resist this. What language constructs will help us do that?

Going for concurrency

The sea change will, then, really be to re-express the jobs we do for reasonably efficient, correct, maintainable, scalable parallel computation in application domains that resist this. What language constructs will help us do that?

In my view, this is part of a bigger change that is needed: concurrency-oriented programming. If people start programming with message-passing concurrency using concurrent components, which is important for reasons of simplicity, security, and robustness, then the stage is set for parallelism as well!
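For what that style looks like in Haskell today, here is a minimal message-passing sketch using GHC's Control.Concurrent; the worker setup is my own toy example. Components share nothing and talk only over channels, which is exactly the structure a parallel runtime can exploit:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Monad (forM_, replicateM_)

    main :: IO ()
    main = do
        requests  <- newChan
        responses <- newChan
        -- A worker component: reads a request, sends back a result.
        _ <- forkIO $ replicateM_ 3 $ do
            n <- readChan requests
            writeChan responses (n * n :: Int)
        forM_ [1, 2, 3] (writeChan requests)
        replicateM_ 3 (readChan responses >>= print)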

Taking advantage of current parallelism?

Are there any languages which take advantage of the parallel hardware that we already have in our current processors? Things like SSE, running the integer and floating point units concurrently, Hyperthreading, etc?

Some implementations do, at least

Some implementations do, at least. For instance, you can advise MSVC to automatically generate SSE instructions where it thinks they will help. In my experience you are better off not using that switch, though.