Parallel/Distributed

Links (Wadler)

Wonder what Wadler is up to?
My latest research interest is a programming language for web application development, building on my experience with XML, Java, and Haskell.

A short introduction and a set of slides are available (both PDF).

Wonder about the language name?

A quarter of a century ago, Burstall and others at Edinburgh introduced an influential programming language, Hope, named after Hope Park Square, located near the University on the Meadows. This note proposes a research effort to design a new programming language for the web, Links, named after the Bruntsfield Links, located at the other end of the Meadows and site of the world’s first public golf course.

And here's why you should be interested in this work:

Other languages for web programming include Xtatic from Pierce, Scala from Odersky, and Xen and Cω from Microsoft. Links will benefit from fruitful interactions with these efforts. However, Links differs crucially in that it adopts database ideas from Kleisli and systems principles from Erlang, taking it well beyond the capabilities of these other languages.

Composable memory transactions

Composable memory transactions. Tim Harris, Simon Marlow, Simon Peyton Jones, and Maurice Herlihy. Submitted to PPoPP 2005.

Writing concurrent programs is notoriously difficult, and is of increasing practical importance. A particular source of concern is that even correctly-implemented concurrency abstractions cannot be composed together to form larger abstractions. In this paper we present a new concurrency model, based on transactional memory, that offers far richer composition. All the usual benefits of transactional memory are present (e.g. freedom from deadlock), but in addition we describe new modular forms of blocking and choice that have been inaccessible in earlier work.

The work is in the context of Concurrent Haskell (of course). The authors work from the assumption that a purely functional (they use the term declarative) language is a perfect setting for transactional memory. The reason is that mutable state is rare and explicit.
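
To make the composition point concrete, here is a minimal sketch along the lines of the paper's bank-account example, using GHC's Control.Concurrent.STM (names below are illustrative): withdraw blocks via retry until the balance suffices, and transfer glues two transactions into one atomic action with no extra locking protocol.

    import Control.Concurrent.STM

    type Account = TVar Int

    -- 'withdraw' blocks (via retry) until the balance is sufficient;
    -- the transaction re-runs automatically when 'acc' changes.
    withdraw :: Account -> Int -> STM ()
    withdraw acc n = do
      bal <- readTVar acc
      if bal < n then retry else writeTVar acc (bal - n)

    deposit :: Account -> Int -> STM ()
    deposit acc n = readTVar acc >>= writeTVar acc . (+ n)

    -- The paper's point: two transactions compose into one
    -- atomic transaction.
    transfer :: Account -> Account -> Int -> IO ()
    transfer from to n = atomically (withdraw from n >> deposit to n)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      transfer a b 30
      atomically (readTVar b) >>= print   -- 30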

Use Continuations to Develop Complex Web Applications

An introductory article from IBM developerWorks on Cocoon, continuation-based (sometimes called "modal") web applications, and such.

If you've ever developed a non-trivial Web application, you know that development complexity is increased by the fact that Web browsers allow users to follow arbitrary navigation paths through the application. No matter where the user navigates, the onus is on you, the developer, to keep track of the possible interactions and ensure that your application works correctly. While the traditional MVC approach does allow you to handle these cases, there are other options available to help resolve application complexity. Developer and frequent developerWorks contributor Abhijit Belapurkar walks you through a continuations-based alternative that could simplify your Web application development efforts.

via comp.lang.scheme
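
Stripped of any framework, the trick looks like this: each prompt suspends the computation by capturing "the rest of the program", and the driver resumes it with the user's answer, which is what a continuation-based server does once per request. A toy Haskell sketch using the Cont monad from mtl (hypothetical names, nothing Cocoon-specific):

    import Control.Monad.Cont

    -- 'Ask' carries the captured continuation; a real server would
    -- persist it keyed by a session id instead of looping in IO.
    data Step = Done String | Ask String (String -> Step)

    prompt :: String -> Cont Step String
    prompt q = cont (Ask q)

    workflow :: Cont Step String
    workflow = do
      name <- prompt "Your name?"
      city <- prompt "Your city?"
      return (name ++ " from " ++ city)

    -- A console stand-in for the request/response cycle.
    run :: Step -> IO ()
    run (Done s)  = putStrLn ("Result: " ++ s)
    run (Ask q k) = do putStrLn q; answer <- getLine; run (k answer)

    main :: IO ()
    main = run (runCont workflow Done)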

Croquet Project Releases Initial Developer Release

(via Slashdot). The initial developer release of Croquet, the Smalltalk-based distributed computing environment, is now available.

It's different enough that I don't really know what to make of it - is this the future? Or a testbed for ideas that will find more durable expression in another context?

Erlang tutorial

Seems like a nice tutorial.

Includes the expected chapters on concurrent programming and robustness, of course.
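
For the flavor of what those chapters cover: Erlang's spawn, send (!), and receive pattern can be approximated, purely for illustration, by a Haskell thread that owns an inbox channel (the tutorial itself, of course, uses Erlang syntax).

    import Control.Concurrent
    import Control.Concurrent.Chan

    -- A "process" is a thread looping over its inbox; replies go
    -- over a channel carried inside the message, much as Erlang
    -- messages carry the sender's pid.
    data Msg = Ping (Chan String) | Stop

    server :: Chan Msg -> IO ()
    server inbox = do
      msg <- readChan inbox
      case msg of
        Ping reply -> writeChan reply "pong" >> server inbox
        Stop       -> return ()

    main :: IO ()
    main = do
      inbox <- newChan
      _ <- forkIO (server inbox)
      reply <- newChan
      writeChan inbox (Ping reply)
      readChan reply >>= putStrLn   -- "pong"
      writeChan inbox Stop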

Grid Computing & the Linda Programming Model

(via Phil Windley)

This Dr. Dobb's article describes how tuplespaces can serve as an alternative to web-service message passing APIs.

A nice explanation of the Linda programming model.
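
The model reduces to three operations on a shared tuplespace: out deposits a tuple, rd reads a matching tuple, and in withdraws one. Here is a toy, non-blocking sketch in Haskell (real Linda's in and rd block until a match appears):

    import Control.Concurrent.MVar
    import Data.List (find)

    type Tuple = [String]
    type TupleSpace = MVar [Tuple]

    newSpace :: IO TupleSpace
    newSpace = newMVar []

    -- out: deposit a tuple into the space.
    out :: TupleSpace -> Tuple -> IO ()
    out ts t = modifyMVar_ ts (return . (t :))

    -- rd: read a matching tuple without removing it.
    rd :: TupleSpace -> (Tuple -> Bool) -> IO (Maybe Tuple)
    rd ts p = withMVar ts (return . find p)

    -- in: remove and return a matching tuple ('in' is a Haskell
    -- keyword, hence the underscore).
    in_ :: TupleSpace -> (Tuple -> Bool) -> IO (Maybe Tuple)
    in_ ts p = modifyMVar ts $ \tuples ->
      case break p tuples of
        (before, t : after) -> return (before ++ after, Just t)
        (_, [])             -> return (tuples, Nothing)

    main :: IO ()
    main = do
      ts <- newSpace
      out ts ["job", "42"]
      rd ts ((== "job") . head) >>= print   -- Just ["job","42"]
      in_ ts ((== "job") . head) >>= print  -- removes it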

Subcontinuations

Subcontinuations. Robert Hieb, R. Kent Dybvig, Claude W. Anderson. Lisp and Symbolic Computation 1993.

...traditional continuations are not useful in the presence of concurrency, because the notion of the rest of the computation represented by a continuation does not in general make sense...

Subcontinuations may be used to control tree-structured concurrency by allowing nonlocal exits to arbitrary points in a process tree and allowing the capture of a subtree of a computation as a composable continuation for later use.
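
For readers who haven't met composable continuations, here is the flat (non-tree-structured) version of the idea, sketched with shift and reset from Haskell's transformers library; the paper's subcontinuations generalize this to arbitrary points in a process tree.

    import Control.Monad.Trans.Cont (Cont, evalCont, reset, shift)

    -- 'reset' delimits a subcomputation; 'shift' captures just the
    -- rest of that subcomputation as an ordinary function k, which
    -- may be called zero, one, or many times.
    example :: Int
    example = evalCont $ reset $ do
      x <- shift $ \k -> return (k 10 + k 20)
      return (x + 1)
    -- k is \v -> v + 1, so example = 11 + 21 = 32

    main :: IO ()
    main = print example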

Strangely enough this paper wasn't discussed here, even though it was mentioned in the discussion group a long time ago. Now's the time to rectify this omission.

Acute: high-level programming language design for distributed computation

This work is exploring the design space of high-level languages for distributed computation, focussing on typing, naming, and version change. We have designed, formally specified and implemented an experimental language, Acute. This extends an OCaml core to support distributed development, deployment, and execution, allowing type-safe interaction between separately-built programs. It is expressive enough to enable a wide variety of distributed infrastructure layers to be written as simple library code above the byte-string network and persistent store primitives, disentangling the language runtime from communication.

This requires a synthesis of novel and existing features:

  • type-safe interaction between programs, with marshal and unmarshal primitives;
  • dynamic loading and controlled rebinding to local resources;
  • modules and abstract types with abstraction boundaries that are respected by interaction;
  • global names, generated either freshly or based on module hashes: at the type level, as runtime names for abstract types; and at the term level, as channel names and other interaction handles;
  • versions and version constraints, integrated with type identity;
  • local concurrency and thread thunkification; and
  • second-order polymorphism with a namecase construct.
The language design deals with the interplay among these features and the core. The semantic definition tracks abstraction boundaries, global names, and hashes throughout compilation and execution, but still admits an efficient implementation strategy.
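
As a loose, in-process analogue of Acute's type-safe marshalling, Haskell's Data.Dynamic pairs a value with a runtime type representation and re-checks it on extraction; real Acute instead marshals to byte strings and compares hashes of type definitions across separately built programs. A sketch:

    import Data.Dynamic (Dynamic, fromDynamic, toDyn)
    import Data.Typeable (Typeable)

    -- Hypothetical "wire format": a Dynamic standing in for
    -- bytes plus a type representation.
    marshal :: Typeable a => a -> Dynamic
    marshal = toDyn

    unmarshal :: Typeable a => Dynamic -> Maybe a
    unmarshal = fromDynamic

    main :: IO ()
    main = do
      let wire = marshal (42 :: Int)
      print (unmarshal wire :: Maybe Int)     -- Just 42
      print (unmarshal wire :: Maybe String)  -- Nothing: wrong type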

For more info, see the Main site, from which you can view papers and sample code.

Parallel Programming with Control Abstraction

Parallel Programming with Control Abstraction. Lawrence A. Crowl and Thomas J. LeBlanc. ACM Transactions on Programming Languages and Systems 16(3): 524-576, May 1994

Since control abstraction separates the definition of a construct from its implementation, a construct may have several different implementations, each exploiting a different subset of the parallelism admitted by the construct. By selecting an implementation for each control construct using annotations, a programmer can vary the parallelism in a program to best exploit the underlying hardware without otherwise changing the source code. This approach produces programs that exhibit most of the potential parallelism in an algorithm, and whose performance can be tuned simply by choosing among the various implementations for the control constructs in use.

There were some queries regarding parallel programming constructs, so this paper may be of interest.
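
The central idea, one construct with several interchangeable implementations, can be sketched in Haskell as follows; forEach is a hypothetical construct, and picking forEachSeq or forEachPar stands in for the paper's annotations.

    import Control.Concurrent

    -- Sequential implementation: no parallelism exploited.
    forEachSeq :: (a -> IO ()) -> [a] -> IO ()
    forEachSeq = mapM_

    -- Thread-per-element implementation of the same construct;
    -- the loop body is unchanged.
    forEachPar :: (a -> IO ()) -> [a] -> IO ()
    forEachPar f xs = do
      dones <- mapM (\x -> do d <- newEmptyMVar
                              _ <- forkIO (f x >> putMVar d ())
                              return d) xs
      mapM_ takeMVar dones   -- join: wait for every iteration

    main :: IO ()
    main = do
      forEachSeq print [1 .. 3 :: Int]
      forEachPar print [1 .. 3 :: Int]   -- order may vary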

Making Asynchronous Parallelism Safe for the World

(link)

Guy L. Steele, Thinking Machines Corporation, 1990

    We need a programming model that combines the advantages of the synchronous and asynchronous parallel styles. Synchronous programs are determinate (thus easier to reason about) and avoid synchronization overheads. Asynchronous programs are more flexible and handle conditions more efficiently.

    Here we propose a programming model with the benefits of both styles. We allow asynchronous threads of control but restrict shared-memory accesses and other side effects so as to prevent the behavior of the program from depending on any accidents of execution order that can arise from the indeterminacy of the asynchronous process model.
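
Pure parallelism in Haskell is one setting with exactly this determinacy guarantee: par may evaluate work asynchronously, but with no side effects the answer cannot depend on the schedule. A sketch, assuming the parallel package:

    import Control.Parallel (par, pseq)

    -- 'par' sparks evaluation of 'a' asynchronously; 'pseq' orders
    -- evaluation of 'b' before the sum. There is nothing for the
    -- schedule to perturb.
    pfib :: Int -> Int
    pfib n
      | n < 2     = n
      | otherwise = a `par` (b `pseq` (a + b))
      where
        a = pfib (n - 1)
        b = pfib (n - 2)

    main :: IO ()
    main = print (pfib 25)   -- same answer under any schedule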

This paper and the ones below may be of special interest in light of the current discussion of modern parallel computer architectures.
