The Resurgence of Parallelism

Peter J. Denning and Jack B. Dennis, The Resurgence of Parallelism, Communications of the ACM, Vol. 53 No. 6, Pages 30-32, DOI: 10.1145/1743546.1743560

"Multi-core chips are a new paradigm!" "We are entering the age of parallelism!" These are today's faddish rallying cries for new lines of research and commercial development. ... The parallel architecture research of the 1960s and 1970s solved many problems that are being encountered today. Our objective in this column is to recall the most important of these results and urge their resurrection.

A brief but timely reminder that we should avoid reinventing the wheel. Denning and Dennis give a nice capsule summary of the history of parallel computing research, and highlight some of the key ideas that came out of earlier research on parallel computing. This isn't a technically deep article. But it gives a quick overview of the field, and tries to identify some of the things that actually are research challenges rather than problems for which the solutions have seemingly been forgotten.


Alan Kay

I have been dismayed by all the brouhaha surrounding the 'new' multi-core paradigm for quite some time myself. Kay was never more correct than when he equated the state of computing with pop culture. Then throw in the seemingly endless money supply of large-company marketing, and the industry bloggers, pundits, and consultants seeking to build their brands, and it's obvious historical perspective doesn't stand a chance.

Well, the closed nature of the ACM and IEEE archives doesn't help either.

I wonder how well this

I wonder how well this perception actually meshes with other modern work, such as Cliff Click's lock-free hash table. Persistent data structures combined with functional programming, eliminating shared-memory writes, or perhaps using transactional memory, are all very well, but not necessarily the best approach if you're trying to get the most out of your parallel hardware.
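For concreteness, here is a minimal sketch of the compare-and-swap retry idiom that lock-free structures like Click's hash table are built on. This is not Click's algorithm (his table is in Java and far subtler); it is just the shared underlying technique, shown on a trivial stack, with a safe pop deliberately omitted because it requires solving memory reclamation (the ABA problem).

    // Minimal sketch of the CAS retry loop underlying lock-free structures.
    // NOT Cliff Click's algorithm -- just the shared idiom, on a trivial stack.
    #include <atomic>

    template <typename T>
    class LockFreeStack {
        struct Node {
            T value;
            Node* next;
        };
        std::atomic<Node*> head{nullptr};

    public:
        void push(T value) {
            Node* node = new Node{value, head.load(std::memory_order_relaxed)};
            // Retry until our CAS wins; writers never block each other.
            // On failure, compare_exchange_weak reloads node->next with the
            // current head, so the next attempt links against fresh state.
            while (!head.compare_exchange_weak(node->next, node,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) {
            }
        }
        // pop() is omitted: doing it safely requires a memory-reclamation
        // scheme (hazard pointers, epochs) to avoid the ABA problem.
    };

The point of the idiom is that contention costs a retry rather than a blocked thread, which is why such structures scale where lock-based ones stall.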

scope

By focusing only on MIMD, the authors leave out entire classes of machines and methods. Though the article does not claim to canvass the whole subject, neither does it acknowledge this scope or give a nod to systolic architectures, message-passing machines, SIMD machines, or the most successful examples in history: GPUs and loosely coupled clusters of home machines.

(put reply in wrong spot)

(put reply in wrong spot)

Trolling?

The problems of parallel computing have obviously not been solved and it's weird to suggest that they have.

Furthermore, most current work is descended from those much earlier days, which would suggest there wasn't much of a point to the article: the community does seem to be aware of these results. (The biggest leaps not from this era, as far as I can tell, are various newer models like STM or work-stealing nested tasks, and algorithmic ideas like randomization, opportunistic computing, reducers, etc., that were, at best, still gearing up at that time.) It's almost like the authors are trolling!**

Given the apparent failure of previous attempts at mainstream software***, perhaps the continuance of these ideas is not a good thing. I've certainly given up on functional programming and threads -- I'm writing task-parallel C++ for my core code (though I wish I had SharC-style qualifiers for it) and creating a parallel attribute grammar**** + OpenGL DSL to write my library code. (A sketch of the task-parallel style appears after the footnotes.)

**Obviously they have made foundational contributions to CS... but perhaps they are forgetting that we learn from their attempts and build on them rather than just adopt them.

***The challenges in hardware don't seem to be as great -- the switch to multicore provides breathing room relative to sequential innovation -- and often seem to be about incorporating new technology like photonics, worrying about power, or appeasing the software people, such as by putting the GPU near the CPU. There likely are lessons here, as multicore just didn't matter much until the power wall became real.

**** Parallel attribute grammars are a great example of something that has been researched before but still has a lot of warts to work through: the book hasn't been closed on them (unless you consider them to be a failed direction).
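To make "task parallel C++" concrete, here is a hedged sketch using only the standard library. It is not the commenter's actual setup, which presumably uses a dedicated runtime; std::async with std::launch::async may spawn a thread per task, which real runtimes avoid, and the grain-size cutoff below is an arbitrary illustrative value.

    // A minimal sketch of task parallelism in standard C++ using std::async.
    // Real task-parallel runtimes (TBB, Cilk-style work stealing) schedule
    // tasks onto a fixed thread pool; std::launch::async may instead spawn
    // a thread per task. The 100000 cutoff is an arbitrary grain size.
    #include <cstddef>
    #include <future>
    #include <numeric>
    #include <vector>

    long parallel_sum(const std::vector<long>& v, std::size_t lo, std::size_t hi) {
        if (hi - lo < 100000)  // small ranges: just run sequentially
            return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
        std::size_t mid = lo + (hi - lo) / 2;
        // Fork: sum the left half as a task that may run on another core...
        auto left = std::async(std::launch::async, parallel_sum,
                               std::cref(v), lo, mid);
        // ...while this thread sums the right half, then join.
        long right = parallel_sum(v, mid, hi);
        return left.get() + right;
    }

The divide-and-conquer shape is exactly what a work-stealing scheduler would exploit; the sketch just substitutes the standard library's blunter machinery.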

It seems that until recently

It seems that until recently CPU speed was increasing very rapidly. With the recent move toward multi-core CPUs rather than increased clock speed, parallelism is more relevant than it previously was.

In light of this, solutions that were developed at a time when computers were more CPU-constrained are of greater value today than they have been.

Adin Falkoff, APL and Parallel Processes

To toss in my little bit of APL trivia, I read just last night:

"The first use of the Language [APL] to describe a complete computing system was begun in early 1962 when Falkoff discussed with Dr. W.C. Carter his work in the standardization of the instruction set for the machines that were to become the IBM Systems/360 family. Falkoff agreed to undertake a formal description of the machine language, largely as a vehicle for demonstrating how parallel processes could be rigorously represented." p. 663

The quote is from the APL Session (Chairman: JAN Lee; Speaker: Kenneth E. Iverson; Discussant: Frederick Brooks), under the paper "The Evolution of APL" by Adin D. Falkoff and Kenneth E. Iverson. ISBN 0-12-745040-8.

Chasing Men Who Stare at Arrays

Is this in any way related

Is this in any way related to this thread?

I guess...

...that this counts as earlier research on parallel computing, if maybe not what Denning & Dennis were talking about.

Catherine's link is to her initial reaction to this forum topic.

You are right. I apologize.

You are right. I apologize.