Parallel/Distributed

Parallel Programming with Control Abstraction

Parallel Programming with Control Abstraction. Lawrence A. Crowl and Thomas J. LeBlanc. ACM Transactions on Programming Languages and Systems 16(3):524-576, May 1994.

Since control abstraction separates the definition of a construct from its implementation, a construct may have several different implementations, each exploiting a different subset of the parallelism admitted by the construct. By selecting an implementation for each control construct using annotations, a programmer can vary the parallelism in a program to best exploit the underlying hardware without otherwise changing the source code. This approach produces programs that exhibit most of the potential parallelism in an algorithm, and whose performance can be tuned simply by choosing among the various implementations for the control constructs in use.
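
To make the idea concrete, here is a minimal sketch (in Python, not the paper's own notation) of a forall control construct with two interchangeable implementations. The "annotation" is simply the choice of which implementation to bind; the algorithm written against the construct never changes.

    import concurrent.futures

    def forall_sequential(indices, body):
        # One implementation: run the iterations one after another.
        for i in indices:
            body(i)

    def forall_threaded(indices, body):
        # Another implementation of the *same* construct: iterations may
        # run concurrently, exploiting the parallelism the construct admits.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            list(pool.map(body, indices))

    # The "annotation": pick an implementation here to tune performance,
    # without otherwise changing the source code below.
    forall = forall_threaded

    def scale(data, factor):
        # The algorithm is written against the abstract construct.
        forall(range(len(data)), lambda i: data.__setitem__(i, data[i] * factor))

    values = [1.0, 2.0, 3.0]
    scale(values, 10)
    print(values)  # [10.0, 20.0, 30.0] with either implementation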

There were some queries regarding parallel programming constructs, so this paper may be of interest.

Making Asynchronous Parallelism Safe for the World

(link)

Guy L. Steele, Thinking Machines Corporation, 1990

    We need a programming model that combines the advantages of the synchronous and asynchronous parallel styles. Synchronous programs are determinate (thus easier to reason about) and avoid synchronization overheads. Asynchronous programs are more flexible and handle conditions more efficiently.

    Here we propose a programming model with the benefits of both styles. We allow asynchronous threads of control but restrict shared-memory accesses and other side effects so as to prevent the behavior of the program from depending on any accidents of execution order that can arise from the indeterminacy of the asynchronous process model.
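
In rough terms (my sketch, not Steele's formalism): let the threads run asynchronously, but confine each one's side effects so that no visible result can depend on scheduling order. The simplest such discipline gives each thread its own disjoint piece of the store.

    import concurrent.futures

    inputs = list(range(8))
    results = [None] * len(inputs)

    def worker(i):
        # Each thread writes only its own slot, so every interleaving
        # of the asynchronous threads produces the same final state.
        results[i] = inputs[i] * inputs[i]

    # A racy alternative -- all threads updating one shared accumulator --
    # would let the outcome depend on accidents of execution order,
    # which is exactly what the model rules out.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        list(pool.map(worker, range(len(inputs))))

    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49] on every run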

This paper and the ones below may be of special interest in light of the current discussion of modern parallel computer architectures.

New Chip Heralds a Parallel Future

Missing no chance to stand on my soapbox about the need for easy PL retargeting, I bring you insights from Paul Murphy about our parallel-processing, Linux future.

[T]he product has seen a billion dollars in development work. Two fabs... have been custom-built to make the new processor in large volumes.... To the extent that performance information has become available, it is characterized by numbers so high that most people simply dismissed the reports....

The machine is widely referred to as a cell processor, but the cells involved are software, not hardware. Thus a cell is a kind of TCP packet on steroids, containing both data and instructions and linked back to the task of which it forms part via unique identifiers that facilitate results assembly just as the TCP sequence number does.

The basic processor itself appears to be a PowerPC derivative with high-speed built-in local communications, high-speed access to local memory, and up to eight attached processing units broadly akin to the Altivec short array processor used by Apple. The actual product consists of one to eight of these on a chip -- a true grid-on-a-chip approach in which a four-way assembly can, when fully populated, consist of four core CPUs, 32 attached processing units and 512 MB of local memory.
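
The TCP analogy suggests a simple shape for such a "software cell": a self-contained packet of code plus data, tagged with the task it belongs to and a sequence number, so results can be reassembled no matter which processing unit finishes first. A purely illustrative sketch (the names and fields are mine, not IBM's):

    from dataclasses import dataclass
    from typing import Callable
    import concurrent.futures

    @dataclass
    class Cell:
        task_id: int    # which task this cell belongs to
        seq: int        # position within the task, like a TCP sequence number
        code: Callable  # the instructions the cell carries with it
        data: tuple     # the operands it carries with it

    def execute(cell):
        # Any processing unit can run any cell; the tags travel with the result.
        return cell.seq, cell.code(*cell.data)

    cells = [Cell(task_id=1, seq=i, code=pow, data=(i, 2)) for i in range(4)]

    # Cells may complete in any order; the sequence numbers let us
    # reassemble the task's results, just as TCP reassembles a stream.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        finished = sorted(pool.map(execute, cells))

    print([value for _, value in finished])  # [0, 1, 4, 9]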

Paul follows up with a shocker.

I'd like to make two outrageous predictions on this: first that it will happen early next year, and secondly that the Linux developer community will, virtually en masse, abandon the x86 in favor of the new machine.

Abandonment is relative. As Paul notes, the new processor will emulate x86 without trouble, and in the PowerPC line we already have Linux for PowerPC, complete with a Mac OS X sandbox. From a PL standpoint, however, this development may cattle-prod language folks off their x86 back ends and into some serious compiler refactoring work. I hope so!
