Concurrency with Scheme

My new laptop has one of those multi-core CPUs. And, as Herb Sutter suggests, I want to start doing some serious concurrent programming with Scheme.

Since I'm relatively new to Scheme, I decided to ask the LtU experts about effective/real-world ways to fully exploit those two CPU cores with Scheme. Here's what I came up with after a little research:

  • Termite is an Erlang-like distributed programming system written in Scheme. It includes a parallel map and different message-passing mechanisms. This looks pretty cool.
  • Kali Scheme (abandoned?) is a distributed implementation of Scheme
  • MultiLisp is a Scheme implementation with some special constructs for easy parallelization. Seems to me to be a little bit outdated
  • Multiple processes on multiple cores is on the wishlist of Gambit Scheme
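For a taste of the Termite style mentioned above, here is a minimal ping-pong sketch. It assumes Termite's documented primitives (`spawn` to start a process, `!` to send, `?` to receive, `self` for the current pid); exact details may vary by version, so treat it as an illustration rather than tested code:

```scheme
;; Termite-style message passing (sketch; Termite runs on Gambit).
(define (pong-server)
  (let loop ()
    (let ((msg (?)))                ; block until a message arrives
      (if (pair? msg)
          (let ((from (car msg)))
            (! from 'pong)          ; reply to the sender's pid
            (loop))))))

(define pong-pid (spawn pong-server))

(! pong-pid (list (self) 'ping))    ; include our pid so it can reply
(display (?))                       ; wait for the 'pong reply
```

Since processes share nothing and communicate only by messages, the same code works whether the two processes sit on one core, two cores, or two machines.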

So here're my questions to the LtU Scheme experts:

  • What is your preferred way to exploit multicores/multiprocessors from Scheme?
  • Do you know of any other Scheme implementations that ease concurrent programming? Any libraries?


JVM Schemes

Most Scheme implementations that run on the JVM should expose Java's underlying concurrency model.

E.g. JScheme's Special Forms, Exception Handling, and Multi-threading

JVM Lisps

The JVM is a reasonably good platform for exploiting multi-core boxen, but it's a, um, challenging platform for functional languages like Scheme.

This discussion talks about Clojure, a dialect of Lisp for the JVM with some nice concurrency support. More on topic with the original question, though, it talks a bit about the lack of tail call optimization on the JVM and the effect it has on Scheme and other functional languages.

Two major flavors of Scheme on the JVM are SISC and Kawa. Each addresses the lack of TCO with its own trade-offs. Both SISC and Kawa have multi-threading libraries [1] [2].
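In Kawa, for instance, the `future` form evaluates an expression on a fresh JVM thread and `force` waits for its value, so two cores really can work at once. A minimal sketch (assuming a pure, expensive `slow-sum` defined here for illustration):

```scheme
;; Kawa-style JVM threads (sketch): future spawns the computation
;; on its own thread; force blocks until the result is available.
(define (slow-sum n)
  (let loop ((i 0) (acc 0))
    (if (> i n) acc (loop (+ i 1) (+ acc i)))))

(define a (future (slow-sum 1000000)))  ; runs on JVM thread 1
(define b (future (slow-sum 2000000)))  ; runs on JVM thread 2
(display (+ (force a) (force b)))       ; join both and add
```

Because JVM threads map to OS threads, this is one of the few ways to get genuine multi-core parallelism from Scheme today.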

Petite Chez

Petite Chez supports exactly what you want, i.e. processes on multiple cores with shared memory.

In other implementations, you'll need to start several instances and let them communicate via the network (i.e. the same solution as ParallelPython uses).

PLT Scheme

PLT Scheme has done some interesting work with kill-safe concurrency abstractions, "custodian" (i.e. per-process) memory quotas for protecting a running program from misbehaving modules/plugins, etc. It's worth checking out.
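The custodian idea looks roughly like this sketch (hypothetical plugin workload; `custodian-limit-memory` requires a PLT build with memory accounting):

```scheme
;; PLT Scheme custodians (sketch): run untrusted work under its own
;; custodian so its threads, ports, etc. can be shut down as a unit.
(define cust (make-custodian))

;; Cap the plugin's memory at roughly 16 MB.
(custodian-limit-memory cust (* 16 1024 1024))

;; Anything created while cust is the current custodian belongs to it.
(parameterize ((current-custodian cust))
  (thread (lambda ()
            (let loop () (loop)))))  ; stand-in for misbehaving plugin code

;; Later: reclaim everything the plugin created, in one stroke.
(custodian-shutdown-all cust)
```

The point is that the host program never has to enumerate what the plugin allocated; the custodian tracks ownership for it.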

[Admin]

Please note that this topic can become too focused on the details of particular implementations for LtU. This type of discussion is better served on comp.lang.scheme. Try to restrict the discussion here to aspects that are of general interest. Thanks.

Concurrency vs. parallelism

John Reppy's book on Concurrent ML distinguishes between "concurrency" as a technique for organizing software, and "parallelism" as a technique to execute concurrent program threads at the same time in hardware. Concurrent programming is discussed as a tool for writing "applications with naturally concurrent structure," where "flexibility in the scheduling of computation is required." An example is interactive software, where concurrency can help make applications more responsive to their users. In fact, it is often acceptable to make an application slower in raw-throughput measures if it lowers response time to the user; i.e., it's often OK for software to be slow as long as the user never has to wait.

What are you after, really, concurrency or parallelism? PLT Scheme and Scheme48 don't exploit parallelism, but they do provide concurrent programming libraries modeled after Concurrent ML. And on the flipside, native OS threads provide parallelism, but are often too heavyweight for programs that want to create tons of small, short-lived threads.
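To make the CML-modeled style concrete, here is a small rendezvous sketch in PLT Scheme; channels are synchronous and double as events, so they compose with `sync`, `choice-evt`, and friends:

```scheme
;; CML-style rendezvous in PLT Scheme (sketch).
(define ch (make-channel))

;; channel-put blocks until a receiver arrives: a true rendezvous.
(thread (lambda () (channel-put ch 'hello)))

(display (sync ch))   ; receives 'hello

;; Events compose: wait on a channel OR a timeout, whichever is
;; ready first.
(display
 (sync (choice-evt
        ch
        (wrap-evt (alarm-evt (+ (current-inexact-milliseconds) 100))
                  (lambda (_) 'timed-out)))))
```

Since the channel has no second sender, the second `sync` falls through to the alarm event; building such selective waits from first-class events is the heart of the CML design.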

On that note..

.. all compliant Schemes provide some form of concurrency due to call/cc. Scheme was built to explore the actor model, which is a concurrency model. I'm pretty sure he's talking about parallelism, as he wants to exploit his dual-core-ness.
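For the record, the call/cc route gives you cooperative (green) threads, not parallelism. A self-contained sketch in standard Scheme, using a queue of thunks, a `yield` that re-enqueues the current continuation, and an escape continuation so the scheduler exits cleanly:

```scheme
;; Cooperative "threads" from call/cc alone (standard Scheme sketch).
(define ready '())          ; FIFO queue of runnable thunks
(define scheduler-k #f)     ; escape continuation out of the scheduler

(define (spawn thunk)
  (set! ready (append ready (list (lambda () (thunk) (next))))))

(define (next)
  (if (null? ready)
      (scheduler-k #f)      ; nothing runnable: leave the scheduler
      (let ((t (car ready)))
        (set! ready (cdr ready))
        (t))))

(define (yield)
  (call/cc
   (lambda (k)
     (set! ready (append ready (list (lambda () (k #f)))))
     (next))))

(define (run)
  (call/cc
   (lambda (k)
     (set! scheduler-k k)
     (next))))

(spawn (lambda () (display "a1 ") (yield) (display "a2 ")))
(spawn (lambda () (display "b1 ") (yield) (display "b2 ")))
(run)   ; prints: a1 b1 a2 b2
```

Everything still runs on one core, which is exactly why this answers the "concurrency" reading of the question but not the "parallelism" one.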

Sequential Concurrency and Concurrent Sequences

Indeed. The first (only?) implementation of Pict, a sort of "applied" pi calculus, wasn't parallel at all.

On a different note, as Rob Pike and Doug McIlroy argue, concurrency may be beneficial for simplifying the implementation of even purely sequential things. E.g. this (though nine years later he wrote this.)

A slightly different take

John Reppy's book on Concurrent ML distinguishes between "concurrency" as a technique for organizing software, and "parallelism" as a technique to execute concurrent program threads at the same time in hardware.

In Tackling the Awkward Squad (p28), Simon Peyton Jones makes the distinction similarly but with, I think, an important difference.

To him parallelism means that things execute simultaneously without any effect on the semantics. Parallelism should not introduce any non-deterministic behavior - running the code in parallel or sequentially should always produce the same outcome. The example he gives is "e1 + e2." As long as the expressions e1 and e2 are referentially transparent they can either be evaluated in parallel or sequentially before adding without affecting the result.

Concurrency, on the other hand, means multiple "threads" can have side effects that affect each other, especially I/O. Concurrent programs are inherently and by design non-deterministic. Whether concurrent threads run simultaneously or just fake it is an implementation detail.
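In MultiLisp terms (the `future` construct mentioned in the original post), the "e1 + e2" example looks like the sketch below; `future` starts evaluating its argument in parallel and `touch` waits for the value (`fib` is just a stand-in pure, expensive function):

```scheme
;; MultiLisp-style deterministic parallelism (sketch).
(define (fib n)                     ; pure and side-effect free
  (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

;; Because fib has no side effects, this always equals the
;; sequential (+ (fib 30) (fib 31)), whether or not the two
;; futures actually run in parallel.
(+ (touch (future (fib 30)))
   (touch (future (fib 31))))
```

Swap `fib` for something that mutates shared state and the determinism guarantee evaporates, which is precisely Peyton Jones's dividing line between parallelism and concurrency.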

[Edit: by either definition, it seems like the OP is looking for parallelism]

Both, actually

Sorry for the late response (flu).

Regarding the distinction between concurrency and parallelism, well, I'm interested in both of them. (Although to be fair I must say that I'm not interested at all in threading libraries: I'm too old to spend time threading ;-) )

From my point of view, Scheme is a good language for doing things in parallel. I'm thinking of a parallel version of "map" (a pmap), of a parallel evaluation of the "cond" test clauses, "let" bindings, etc. This approach (and its consequences) is detailed here: CrystalScheme
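A naive pmap is easy to sketch with PLT-style threads and channels (note that PLT threads are green threads, so this buys concurrency rather than true multi-core parallelism):

```scheme
;; Naive pmap sketch (PLT Scheme style): one thread per element,
;; one channel per result, collected in order.
(define (pmap f lst)
  (let ((channels
         (map (lambda (x)
                (let ((ch (make-channel)))
                  (thread (lambda () (channel-put ch (f x))))
                  ch))
              lst)))
    (map channel-get channels)))

(pmap (lambda (x) (* x x)) '(1 2 3 4))   ; => (1 4 9 16)
```

Results stay in input order because we read the channels in the order they were created, regardless of which thread finishes first.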

I've also found (and I'm currently digesting) material about Concurrent Scheme and PolyScheme.

My impression is that concurrent/parallel Scheme was an interesting research field in the nineties, but no implementations exist nowadays. Which is a real pity, I think, because it's now that those implementations would be most beneficial.

Thanks for your links (and Merry Xmas).