Chapel: "Solving the parallel programming problem"

Chapel, the Cascade High-Productivity Language:

Chapel strives to improve the programmability of parallel computers in general, and the Cascade system in particular, by providing a higher level of expression than current parallel languages and by improving the separation between algorithmic expression and data-structure implementation details.

The design of Chapel is guided by four key areas of language technology [HIPS 2004]:

  1. Multithreaded parallel programming in the style of Multilisp, Split-C, or Cilk.
  2. Locality-aware programming in the style of HPF and ZPL.
  3. Object-oriented programming, to help manage complexity.
  4. Generic programming and type inference, to simplify the type system presented to users.
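The Cilk-style multithreaded model in item 1 is essentially fork-join task parallelism: a function "spawns" subcomputations as parallel tasks and later "syncs" to collect their results. As a rough sketch (in plain Python with the standard library, not Chapel or Cilk; the function names are illustrative only):

```python
# A minimal sketch of Cilk-style fork-join parallelism using Python's
# standard library. In Cilk, `spawn` forks a task and `sync` joins it;
# here pool.submit() plays the role of spawn and Future.result() of sync.
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """Plain serial divide-and-conquer Fibonacci."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_parallel(n, pool):
    if n < 2:
        return n
    left = pool.submit(fib, n - 1)    # "spawn" the first subproblem
    right = pool.submit(fib, n - 2)   # "spawn" the second subproblem
    return left.result() + right.result()  # "sync": wait for both

with ThreadPoolExecutor(max_workers=2) as pool:
    print(fib_parallel(20, pool))  # prints 6765
```

A real Cilk program would spawn recursively and let the runtime's work-stealing scheduler balance the load; this sketch only spawns at the top level to keep the fork-join shape visible.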

I only saw this mentioned once here, so I thought it was worth posting. Will it solve all the problems?


Looks very "modern"

But also looks very complicated and seems to lack any internal logic.

Still, in any case -- congratulations for at least trying to build something usable; this alone seems to me to be a good achievement for the field. :)


Warning: Comments of this sort will be deleted in the future.

The post is content-free, doesn't say anything constructive, and is implicitly critical of "the field" (whatever is meant by that) without any rationale. The "congratulations" in this context are meaningless (and, indeed, insulting).

I should rephrase.

The fundamental difficulty in parallel programming seems to be the fact that there exists no usable formal computational model for parallel algorithms.

The cited language doesn't seem to take any steps toward solving this difficulty, so I can't honestly say that the language is groundbreaking in any way.

Other than that, however, it seems to be a brilliant piece of work.

What do you mean "model"?

I'm not sure what you mean by "formal computational model" here. Do you mean something analogous to the lambda calculus as a model for ML/Haskell/Scheme? In that case, there are plenty of such formal models. [1]

Or do you mean a denotational model, where we describe the meanings of parallel programs with mathematical structures? If that, then you could look at work like CSP, which has a denotational model and is certainly about parallel computation.
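CSP's core idea is independent processes that interact only by sending and receiving messages on named channels. A rough sketch of that style, approximated with Python threads and a bounded queue (real CSP communication is a synchronous rendezvous, so this is only an analogy; all names here are made up):

```python
# CSP-style processes communicating over a channel, approximated with
# threads and a bounded queue. In CSP notation, `chan ! x` sends x on
# the channel and `chan ? x` receives into x.
import queue
import threading

def producer(chan):
    for i in range(3):
        chan.put(i)        # roughly "chan ! i"
    chan.put(None)         # sentinel: no more messages

def consumer(chan, out):
    while True:
        msg = chan.get()   # roughly "chan ? msg"
        if msg is None:
            break
        out.append(msg * msg)

chan = queue.Queue(maxsize=1)  # capacity 1: sender blocks until receipt
out = []
t1 = threading.Thread(target=producer, args=(chan,))
t2 = threading.Thread(target=consumer, args=(chan, out))
t1.start(); t2.start()
t1.join(); t2.join()
print(out)  # prints [0, 1, 4]
```

The point of the denotational treatment is that such a program's meaning can be given as a mathematical object (in CSP, sets of traces/failures) rather than by appeal to an implementation.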

[1] See, for example, LambdaS


I believe that the π-calculus is generally considered to provide a formal computational model for parallel computations that is analogous to the λ-calculus. CSP provides something similar, but is not quite as general.

Part of the DARPA HPCS Project

The Chapel language is one of three being developed for the DARPA High Productivity Computing Systems project, for next-generation petaflop supercomputers. The others are Fortress (previously discussed on LtU) and X10.

Chapel seems to have heavy Modula-3 influence.


I worked on the hardware side of Cray's Cascade system, and I heard about Chapel when they were first drafting the HIPS paper on it.

I'm a PL neophyte: I recently became interested in FP, such as Haskell. So, I am curious as to the community's opinion on whether Chapel (or Fortress or X10, which I haven't looked at) would be good for developing scalable parallel applications. It definitely appears aimed at massively multiprocessor systems; what about SMPs, Sun's Niagara, or the future server/desktop multicore processors from Intel, AMD, or IBM? Is there a cost-effective and efficient method for taking advantage of the parallelism available?

On a side note, I recently attended a lecture on Cilk. It sounds quite useful for an SMP; however, I don't see it scaling to Cray-like systems.

Cilk == C, Cilk-NOW ?= FP

This document talks about a functional subset of Cilk, which sounds more interesting to me than scary straight-up Cilk, based on C.

"It is important to note that Cilk-NOW provides these features only for Cilk-2 programs which are essentially functional. Cilk-NOW does not support more recent versions of Cilk (Cilk-3 and Cilk-4) that incorporate virtual shared memory, and in particular, Cilk-NOW does not provide any kind of distributed shared memory. In addition, Cilk-NOW does not provide fault tolerance for its I/O facility."

Not quite FP?

Cilk-NOW does seem interesting in that it extends Cilk's work-stealing behavior across machines in addition to processors on the same machine.
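The work-stealing idea being extended here is simple: each worker treats its own deque as a stack (push/pop at the tail), and an idle worker steals the oldest task from the head of some victim's deque. A toy sequential simulation of that policy (not a real scheduler; all names are made up for illustration):

```python
# Toy simulation of Cilk-style work stealing: busy workers pop from the
# tail of their own deque; idle workers steal from the head of a victim's.
from collections import deque
import random

def run_work_stealing(tasks, n_workers=3, seed=0):
    rng = random.Random(seed)
    deques = [deque() for _ in range(n_workers)]
    for i, t in enumerate(tasks):           # distribute initial tasks
        deques[i % n_workers].append(t)
    done = []
    while any(deques):                      # until every deque is empty
        for w in range(n_workers):
            if deques[w]:
                done.append(deques[w].pop())          # own work: pop tail
            else:
                victims = [v for v in range(n_workers) if deques[v]]
                if victims:
                    v = rng.choice(victims)
                    done.append(deques[v].popleft())  # steal head and run
    return done

finished = run_work_stealing(list(range(10)))
# ten tasks, each completed exactly once
```

Stealing from the head matters: the oldest tasks tend to represent the largest remaining subtrees of the computation, so a thief gets a big chunk of work per steal. Cilk-NOW's contribution is making the victims live on other machines.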

This document talks about a functional subset of Cilk, which sounds more interesting to me than scary straight-up Cilk, based on C.

Cilk-NOW is still, like Cilk, a strict superset of C. I believe they mean functional in a different sense:

The Cilk-NOW runtime system, as described in this paper and as currently implemented, supports the Cilk-2 language which is essentially functional in that it does not have support for a global address space or parallel I/O. More recent incarnations of Cilk for MPPs and SMPs have support for a global address space using "dag-consistent" distributed shared memory, and we are currently working on extensions for parallel I/O.

The *other* functional

Oh, weird. Dogged!