The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software

Herb Sutter has this fascinating article discussing the notion that programmers haven't really had to worry much about performance or concurrency because of Moore's Law. However, the CPU makers are running out of ways to make CPUs faster in terms of raw MHz, and will instead be reduced to opting for multi-CPU and multi-core approaches.

As a consequence he believes that concurrency will become a first-class concern for more developers (including language designers), and that there will be a resurgence of interest in what he calls 'efficient languages'.

I'm currently doing a lot more work with highly concurrent GUI applications in Java and discovering that there's a shortage of sensible guidelines out there on designing applications in the face of pervasive multi-threading, or even on re-organising standard design patterns like MVC to handle inter-thread communication.

Since the only mainstream language I know of with good concurrency features built in is Java 5, I'm left wondering how people handle these issues in other languages.

Does anyone know of any research about designing for concurrency in rich client applications? Ideally in a style that minimises the amount of locking involved?

CSP for Java

"how people handle these issues in other languages"
Libraries.

See Peter Welch's CSP libraries for Java and C++.

And the paper Synchronous Active Objects & Active Objects Provide Robust Event-driven Applications (PDF)

CSP, occam, JCSP

Nice to see that at least a few folks are aware of Peter Welch's excellent work in the area of Java concurrency! It always amuses me when questions like the grandparent show up. As Brinch Hansen pointed out a number of years ago, most "modern" programming languages implement ideas of concurrency that are decades out of date (at the time he was complaining specifically about Java threading).

Milner, Hoare, and the other researchers behind the likes of CCS, pi-calculus, and CSP have developed a mature, robust concurrency theory. The practical implications of that theory have been well-tested by actual implementation. Occam was revolutionary for its time, and still supports a better concurrency model than 90% of the languages out there. And yet, half the posts on this thread seem to involve implementors that have rediscovered ideas the CCS/CSP/occam community knew years ago. I guess it's similar to the way that LISPers are often complaining about the many languages that are gradually evolving into LISP :)

I've never been able to figure out why CCS and CSP haven't received more attention. Part of it might be that the ideas never really made it to the US (aside from the concurrency model built into Plan 9/Alef and Inferno/Limbo at Bell Labs). CSP is definitely much better known in Europe. But even there, it seems to be ignored for actual implementations. I'm not sure why. It's not as if the field has died. There is still research going on in these areas, both theoretical and practical. Welch's JCSP and CCSP have brought these ideas to "mainstream" languages (I believe that the newest edition of Doug Lea's book on Java concurrency even mentions JCSP). At the same time, Welch and Barnes are pushing ahead with occam-pi, a new version of occam that adds mobility in the style of the pi-calculus to occam's existing CSP-like concurrency model. And yet it still seems that most people have never even heard of applying CCS/CSP-style concurrency to actual implementations. NIH syndrome, maybe?

the State department has declared me stateless

I'm pretty sure the concurrency libraries in Java 5 were designed by Doug Lea. You might find some useful design patterns in his book.

Since the only mainstream language I know of with good concurrency features built in is Java 5, I'm left wondering how people handle these issues in other languages.

I'm not sure how constrained you are to particular languages, but there is certainly a lot of incredible work out there, both in research and practice, on languages and design for concurrent programs. You should definitely check out Erlang for ideas. They have accomplished amazing things with a few simple ideas. I believe they have roots in Concurrent ML, or at least some striking similarities.

I haven't yet had a chance to read CTM, but I've heard very good things about it. (The author is a regular on LtU so he could tell you more.) Another area that I've been meaning to explore is actor semantics (ACM link, sorry).

It was eye-opening to me when I was chatting with my grandfather last year, and he was telling me stories about concurrency bugs he had encountered as an engineer in the 50s. This area is not new. Concurrency has been a rich source of bugs since the first multi-user operating systems!

Does anyone know of any research about designing for concurrency in rich client applications? Ideally in a style that minimises the amount of locking involved?

One of the lessons Joe Armstrong et al have been trying to push is that it's very hard to get programs right with shared state across threads. By eliminating shared state and restricting communication between processes to message-passing, they claim, concurrent programs become easier to get right and to understand, and also rely much less on locking and semaphores.
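To see what that looks like in Java terms, here is a minimal sketch (my own illustration, not Armstrong's code) of the share-nothing style: two threads that communicate only by passing messages through a queue, never by touching shared mutable state.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        // The queue is the only thing the two threads share.
        final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    String msg;
                    // Receive messages until the producer signals completion.
                    while (!(msg = mailbox.take()).equals("stop"))
                        System.out.println("received: " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        mailbox.put("hello");  // an asynchronous send: on an unbounded
        mailbox.put("world");  // queue, put() returns immediately
        mailbox.put("stop");
        consumer.join();
    }
}

No locks appear in the user code; all of the synchronization is hidden inside the queue, which is roughly the point Armstrong et al are making.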

Yet another area where people are doing interesting work on interactive programs is functional reactive programming. This style of programming deals with interactive, event-driven GUI programs again in a way that eschews shared, mutable state.

Erlang and CML

I'm not sure how constrained you are to particular languages, but there is certainly a lot of incredible work out there, both in research and practice, on languages and design for concurrent programs. You should definitely check out Erlang for ideas. They have accomplished amazing things with a few simple ideas. I believe they have roots in Concurrent ML, or at least some striking similarities.

The difference between CML and Erlang is that CML features synchronous message passing, and Erlang has asynchronous message passing. I prefer the CML model, personally, because it's a little easier to program to.
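A rough Java 5 analogy for that difference (just an analogy, not either language's real semantics): a SynchronousQueue makes every send rendezvous with a receive, as in CML, while a LinkedBlockingQueue lets the sender fire and forget, as in Erlang.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class ChannelStyles {
    public static void main(String[] args) throws InterruptedException {
        // Erlang-style: the send deposits the message and returns immediately.
        BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();
        mailbox.put("fire and forget");

        // CML-style: the send blocks until a receiver arrives (rendezvous).
        final BlockingQueue<String> channel = new SynchronousQueue<String>();
        new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println(channel.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();
        channel.put("rendezvous");  // blocks until the take() above runs
    }
}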

Erlang VM is single processor only

Note that with Erlang you may have tens of thousands of Erlang processes, but they still all run inside one VM process. If you have multiple CPUs at your disposal, current Erlang implementations do not take advantage of them.

It would be interesting to know if any Erlang researchers have attempted to address this.

Erlang and multiple CPUs

If you have multiple CPUs at your disposal, current Erlang implementations do not take advantage of them. It would be interesting to know if any Erlang researchers have attempted to address this.

Some work has been done on this, but nothing that has made it into the Erlang VM. However, since one can run an Erlang process[1] in a separate VM[2] almost as easily as in the current VM, separate Erlang VMs can be linked together on a single machine to take advantage of multiple processors. Perhaps not ideal, but it apparently works pretty well, and also allows OS-level management of the separate VMs (e.g. assigning to specific CPUs, nice level).

[1] Erlang's idea of a process, that is (not a separate OS process)

[2] whether running on the same machine or some other machine

Non-ACM link to actors paper

Here's a publicly accessible link to the actor semantics paper you mentioned.

OT, but you were discussing concurrency with whom?

Wow! Most people's discussions about computing with grandfathers extend as far as things like how you need to press the left mouse button after you've moved the little pointy thing over the bit you're interested in and usually end with "they didn't have this sort of thing when I was a lad".

Another shared-state horror story

On the mailing list of one of the concurrency-oriented languages, E. (CTM uses Oz, and its sibling Alice just hit version 1.0.)

The authors of CTM and a few others have called shared-state concurrency "the wrong default" and proposed teaching models that are simpler to reason about in its place, namely declarative concurrency and message-passing concurrency (the model of Erlang and E). There was also an interesting post by Armstrong on "concurrency-oriented programming" on Lambda a few months back.

Why concurrency?

Personally, I hope to use multiple processors for the same single task rather than splitting my task into threads.

What interests me is nested data parallelism, and other approaches to realizing that long-time dream of FP: transparent parallelism.

Any other good approaches to use of multiple processors for the same task?

--
Shae Erisson - www.ScannedInAvian.org

Declarative concurrency: the forgotten paradigm

Nested data parallelism is a special case of declarative concurrency. This is by far the easiest paradigm of concurrent programming (see chapter 4 of CTM). It keeps programs simple. It has no race conditions or deadlocks. It has the good properties of functional programming. It can even be combined with lazy evaluation to good effect.

The only limitation is that declarative concurrency does not allow programs that expose nondeterminism. If you look closely, though, you will see that this apparently severe limitation is not a limitation at all! Exposed nondeterminism is only really needed in small parts of most programs. Restructuring your program to minimize it will actually improve your program's structure (and reduce the chance of concurrency bugs). So I can recommend declarative concurrency as a good starting point for designing concurrent programs.

It is a tragedy that almost no existing languages support declarative concurrency (I know of Oz and Alice, are there any others?). It is even more tragic that no mainstream language, AFAIK, supports declarative concurrency. This is a fundamental flaw that may take decades to fix. You can still program in a declarative concurrent style, though, which I highly recommend even though most languages make it cumbersome.

declarative concurrent style

Just a thought: a tutorial (in the style of Dan Friedman) explaining DCS using a mainstream language could be helpful.

It is a tragedy that almost n

It is a tragedy that almost no existing languages support declarative concurrency (I know of Oz and Alice, are there any others?). It is even more tragic that no mainstream language, AFAIK, supports declarative concurrency. This is a fundamental flaw that may take decades to fix. You can still program in a declarative concurrent style, though, which I highly recommend even though most languages make it cumbersome.

I recently picked up CTM, but haven't yet found time to read much of it beyond the first chapter and a quick skim through, so please excuse my ignorance. Is the lack of support for declarative concurrency in mainstream languages something which could be remedied by writing a new library, or does it require specific compiler/VM support? For instance, could someone write a few new Java classes and make concurrent Java much simpler?

Declarative concurrency in Java

Is the lack of support for declarative concurrency in mainstream languages something which could be remedied by writing a new library, or does it require specific compiler/VM support?

You could write a library, but declarative concurrency would still be cumbersome. The problem is that declarative concurrency needs low-level support on the level of individual assignments and conditional checks. You would have to insert library calls for all these operations. And what's more, declarative concurrency really shines if you can use higher-order programming to build abstractions, like in functional programming. This can be done in Java too, but it's also cumbersome ("nesting [inner classes] more than two levels invites a readability disaster" -- Arnold and Gosling).

With small changes to the compiler and VM, declarative concurrency is much easier to support in Java. There is an experimental extension of Java, called FlowJava, that does this.

Declarative concurrency in Java

Someone had asked about declarative concurrency in Java a month or so back, and the notion intrigued me.

For the sake of people who know Java and want to understand how declarative dataflow variables work, I thought of a simple class that could be used to wrap any variable to provide the same effect. (I haven't run this, but I believe it would work; it's mainly intended as an explanatory device.)

public class DataflowVariable {
    private Object value;

    public DataflowVariable() {}

    public DataflowVariable(Object value) {
        this.value = value;
    }

    public synchronized void bindValue(Object value) {
        if (this.value != null)
            throw new DeclarativeBindingException();  // unchecked exception, defined elsewhere

        this.value = value;
        notifyAll();  // wakes up all threads waiting
                      // for this value
    }

    public synchronized Object getValue() throws InterruptedException {
        while (value == null)
            wait();  // block until value is bound; wait() requires holding the
                     // lock, and the loop guards against spurious wakeups
        return value;
    }
}

Obviously this would be cumbersome, and type safety would be out the window without a WHOLE lot more work, but this should show a Java programmer how declarative dataflow variables basically work.

[added synchronized to bindValue and getValue]
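A quick sketch of how the class above might be used (assuming the amended version, where getValue() is synchronized and declares InterruptedException): one thread binds the variable exactly once, and any number of readers block until that happens.

public class DataflowDemo {
    public static void main(String[] args) {
        final DataflowVariable result = new DataflowVariable();

        // Reader: blocks inside getValue() until the variable is bound.
        new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("got: " + result.getValue());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();

        result.bindValue("forty-two");  // a second bindValue() would throw
    }
}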

Type safety

Actually, it occurred to me moments later that type safety could be added simply with the use of generics in Java 5.

I guess I haven't gotten over my disappointment with the generics implementation and am still blocking it out. ;-)
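For what it's worth, a sketch of what the generic version might look like (same caveats as before; DeclarativeBindingException is still assumed to be defined elsewhere):

public class DataflowVariable<T> {
    private T value;

    public synchronized void bindValue(T value) {
        if (this.value != null)
            throw new DeclarativeBindingException();
        this.value = value;
        notifyAll();
    }

    public synchronized T getValue() throws InterruptedException {
        while (value == null)
            wait();
        return value;
    }
}

// The compiler now rejects bindings of the wrong type:
//   DataflowVariable<Integer> n = new DataflowVariable<Integer>();
//   n.bindValue(42);      // fine (autoboxed)
//   n.bindValue("oops");  // compile-time error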

Trying to turn the safety off

I guess I haven't gotten over my disappointment with the generics implementation and am still blocking it out.
Same here, so I am really tempted to use a variant of the piL type system for my toy implementation.

All in all, I found that the type system of Java is so inadequate for this particular application that it gets in the way more than it helps.

Java limitations

All in all, I found that the type system of Java is so inadequate for this particular application that it gets in the way more than it helps.

Yeah, I'm really hoping for good things from Links.

One big problem with declarative concurrency in Java is that my implementation assumes that "value" is a Value Object, i.e. it is read-only.

I think if someone followed this exercise all the way through, the resulting programs would look nothing like normal Java and would confuse most Java programmers. ;-)

Neither dynamic nor static enough

I think if someone followed this exercise all the way through, the resulting programs would look nothing like normal Java and would confuse most Java programmers. ;-)
No smiley necessary: I showed some applications of this code to several Java programmers and they found the code highly unintuitive. Even after I de-de-Bruijned it :-)

Another readability problem is that I have to pass everything between agents as either Object arrays or Maps, wildly casting in all directions.

Java threads might be expensive

Funny, I wrote a class that is (almost) isomorphic to DataflowVariable just this 24th of January, as part of a simple join calculus implementation.

Two differences: getValue() is synchronized and instead of explicit bindValue() it has an inner class implementing a channel.

The most important difference, though, is that I use it only for unit testing - the whole system is asynchronous (running multiple agents on any number of threads greater than zero), but JUnit is not well suited for that, so I had to come up with this blocking adapter.

Duelling duals

Two differences: getValue() is synchronized and instead of explicit bindValue() it has an inner class implementing a channel.

If I understand correctly, this sounds "dual" to my class: for yours there may be multiple writes to the channel, but only one thread may read a given value.

Mine is the opposite: there can be unlimited reads, but only one value is ever bound.
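Purely as a guess at the shape of such a class (the names and details here are mine, not the original poster's): a channel where any number of writes enqueue values, and each read consumes exactly one of them.

import java.util.LinkedList;
import java.util.Queue;

// The dual of DataflowVariable: write-many/read-once-per-value
// instead of write-once/read-many.
public class Channel<T> {
    private final Queue<T> pending = new LinkedList<T>();

    public synchronized void send(T value) {  // unlimited writes
        pending.add(value);
        notify();  // wake one waiting reader
    }

    public synchronized T receive() throws InterruptedException {
        while (pending.isEmpty())
            wait();
        return pending.remove();  // each value goes to exactly one reader
    }
}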

Unit tests for asynchronous code

Yes, I have to retract that claim of isomorphism :-(

I believe nevertheless that synchronized is still needed :-)

To compensate a bit for bytes I wasted, here is a topic: how are people testing their asynchronous code?

There might be theoretical limits to negative testing (showing that something never happens) in an asynchronous context, but even positive tests are tricky enough.

Concurrent Testing

how are people testing their asynchronous code?

I haven't found an entirely satisfactory solution to this, but the best I have so far is to fake it out with a synchronous test fixture and break it down to cases (If X happens before Y, what should happen, etc.)

Unfortunately, this can be convoluted enough that for simple cases I'm inclined to skip the test and just see how it runs. ;-)
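One concrete version of this trick (a sketch of my own, not anyone's actual fixture): have the asynchronous code deliver its result into a blocking adapter, and have the test fail fast if nothing arrives within a timeout, rather than hang.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncResultAdapter<T> {
    private final BlockingQueue<T> results = new LinkedBlockingQueue<T>();

    // Called by the asynchronous code under test.
    public void deliver(T result) {
        results.add(result);
    }

    // Called from the test thread; fails fast instead of hanging forever.
    public T await(long timeoutMillis) throws InterruptedException {
        T result = results.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (result == null)
            throw new AssertionError("no result within " + timeoutMillis + "ms");
        return result;
    }
}

// In a JUnit test, roughly (systemUnderTest is hypothetical):
//   AsyncResultAdapter<String> adapter = new AsyncResultAdapter<String>();
//   systemUnderTest.start(adapter);
//   assertEquals("expected", adapter.await(1000));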

Dataflow variable vs. single-assignment variable

I was reading some more CTM today, and realized that my implementation doesn't really cut it as a demo of Oz dataflow variables in Java.

On pg. 336, a distinction is made between dataflow variables and single-assignment variables. The former, by their definition, can be assigned to multiple times, subject to unification.

So, again by their definition, my class should be called SingleAssignmentVariable.

Still, for people who want to get a feel for the basics of declarative concurrency, the example may still be of use.
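To make the distinction concrete, here is a rough sketch of how bindValue might change (my own approximation: for plain values, "unifies with" is approximated by equals(); real unification over partially bound structures is considerably more involved):

public class UnifyingDataflowVariable {
    private Object value;

    public synchronized void bindValue(Object value) {
        if (this.value == null) {
            this.value = value;  // first binding
            notifyAll();
        } else if (!this.value.equals(value)) {
            // conflicting bindings fail, as unification would
            throw new IllegalStateException("unification failure");
        }
        // else: re-binding an equal value is a no-op, which a
        // single-assignment variable would forbid
    }

    public synchronized Object getValue() throws InterruptedException {
        while (value == null)
            wait();
        return value;
    }
}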

Transients in Java

...require playing with identities, for which I didn't find a satisfactory solution. Or else you have to forget about a small-effort embedding and roll out a full-scale interpreter.

Lock-freedom

Practical Lock-Freedom is a wonderful paper I ran across recently. My first encounter with lock-free techniques came from reading about Synthesis OS. However, the Synthesis paper outlines no organizing principle or high-level abstraction (reasonable, as that was far from its focus), which gave the impression that designing a lock-free data structure was a hit-or-miss process (which I believe it was at the time). Practical Lock-Freedom, on the other hand, does a good job of living up to its name.

What about Ada?

Synchronous message passing is the default model.

Exactly

Message passing through processes has been the solution for quite a while... I've always wondered why concurrent programming was such a problem...

I guess the easiest name for the problem would be state machine implementations, but not many CS students really learn those these days.

--
kruhft

Scala Is Interesting

Scala is worth a look. In particular, it integrates with Java very cleanly and completely, and it also demonstrates an Erlang-like concurrency approach in the paper An Overview of the Scala Programming Language (PDF), Section 9, "Autonomous Components".

Thinking in Scala

On the pragprog/LoTY list, I proposed Scala for 2004, but ended up with little time to follow through. Still, I did write up Scala solutions to the exercises in many of the first several chapters of Bruce Eckel's "Thinking in Java".

This was in part a response to the appearance (that Bill Clementson referred to) of a mysterious weblog called "THECLAPP", which contained solutions to those exercises, in both Java and Common Lisp. My speculation that "THECLAPP was an acronym, with the CL standing for Common Lisp", led to my favorite comment, from the author of that mysterious weblog: "It's not an acronym. :)" -- Larry Clapp.

Scala Concurrency on Java Uninteresting

Thanks for that Isaac.

Does anyone have similar references for the .NET platform? (This is actually where my interest in Scala lies.)

Actors and Futures

I suggest taking a look at Actors and Futures. Actors unify threads and objects and simplify communication issues, and futures can transparently minimize locking. I use them in Io.
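Java 5 readers can get a feel for the futures half of this from java.util.concurrent (the library analogue, not Io's actual mechanism; as I understand it, Io's futures are transparent, whereas Java makes the get() explicit): the caller gets a handle back immediately and blocks only at the point where the value is actually demanded.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // submit() returns immediately with a Future handle.
        Future<Integer> answer = executor.submit(new Callable<Integer>() {
            public Integer call() {
                return 6 * 7;  // stand-in for an expensive computation
            }
        });

        // ... do other work here, concurrently with the computation ...

        System.out.println(answer.get());  // blocks only if not yet finished
        executor.shutdown();
    }
}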

Concurrent Languages for performance

The article focuses on the need for concurrency in order to get performance out of new CPUs with built-in parallelism. Which non-mainstream languages (other than C/C++/Java/MS products) actually support threads that can run simultaneously on multiple CPUs/CPU cores?

My understanding is that neither Haskell, OCaml, nor Erlang does this at present. How about Mozart/Oz? Alice?
Any others?

Concurrency for performance in Mozart

How about Mozart/Oz?

Good question! The way to get parallel performance in Mozart is to use its transparent distribution support (see chapter 11 of CTM or this NGC article). Put different parts of the computation on different OS processes, which are mapped by the OS to different processors. Because of transparent distribution, this can be done almost without changing the program. You "slice" the program into parts and make sure that each part runs on a different processor. An example of a tool that uses this approach is Mozart's built-in parallel search engine, which transparently parallelizes constraint problems. Another example is the parallel agent simulator used in the ICITIES project to do large-scale Web evolution simulations on a cluster using Mozart. BTW, this simulator clearly shows the performance advantages of lightweight threads with dataflow synchronization versus barrier synchronization.

We find that this works very well (see, e.g., an early comparison with Java that is still worth looking at). It does not give fine-grained parallelism, but in our experience fine-grained parallelism is not worth the trouble. At one time, Konstantin Popov modified Mozart's virtual machine to use OS threads, but this was much more trouble than it was worth. So we abandoned this approach. Another approach we tried was to lower the communication overhead between OS processes by sharing physical memory pages between processes. We had an implementation running on Solaris. But this also was more trouble than it was worth.

This work was done before hyperthreaded or multicore CPUs. I don't know whether the new CPUs will change these conclusions.

Flawed

IMHO, the author makes flawed assumptions about why clock speed growth is trailing off. Our programs will still get faster without multicore or hyperthreaded CPUs. However, taking advantage of multicore CPUs will still be important.

I wrote more about this on PerlMonks.

Extremes - more CPUs than threads?

The advertised roadmap of the Cell (aka Grid) CPU (by IBM, Sony, and Toshiba) starts with two cores and goes to sixty-four cores on a single die.

Since four-socket motherboards aren't terribly expensive right now, how can the desktop software industry be prepared to take advantage of two hundred and fifty-six CPUs in less than ten years?
Each application could spawn enough threads to use them all, but that won't scale down to four-CPU setups, and it won't scale up to workstations with more than four sockets.

I suspect that spawning ever more threads is the wide and well-traveled road to concurrency hell. I think that declarative concurrency is the little-known path to code that can profitably use as many CPUs as the computation or algorithm allows.

'Threads vs declarative' feels to me like explicit loops in Java or C vs higher-order functions like map.
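In Java terms the analogy might look like this (the map helper below is my own illustration, not a real Java 5 API): the explicit loop commits the code to one traversal order on one thread, while the higher-order version leaves the "how" to the implementation, which is free to spread the work across CPUs.

public class MapVsLoop {
    interface IntFunction { int apply(int x); }  // hypothetical helper type

    // A library version of this could partition the array across many
    // CPUs; it is sequential here only to keep the sketch runnable.
    static int[] map(int[] xs, IntFunction f) {
        int[] out = new int[xs.length];
        for (int i = 0; i < xs.length; i++)
            out[i] = f.apply(xs[i]);
        return out;
    }

    public static void main(String[] args) {
        int[] input = {1, 2, 3, 4};

        // Explicit loop: traversal order and threading are fixed by the code.
        int[] squares = new int[input.length];
        for (int i = 0; i < input.length; i++)
            squares[i] = input[i] * input[i];

        // Higher-order style: the "how" is the implementation's business.
        int[] squares2 = map(input, new IntFunction() {
            public int apply(int x) { return x * x; }
        });
    }
}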


I've used parallel arrays in Haskell just a bit, and they are far more elegant than the MPI or PVM approach. On the downside, GHC's parallel array support can only use one CPU (though SMP support is planned for the future), so I have yet to declaratively use my dual-processor desktop.

-- Shae Erisson - ScannedInAvian.com

Extremes - more CPUs than threads?

I don't know what age you're living in, but today's software uses many processes that could obviously benefit from multiple cores. You have to remember that most people run more than one application at a time (although the ones that you create are CPU-bound) and will greatly benefit from such advances (if enough onboard CPU cache is available).

I'm starting to feel old, but the best multi-processor paradigm invented in the past 30(ish) years was pipes... learn them and love them and feel the power of multiple processes...

--
kruhft

Pipes

Assuming you mean Unix-like pipes and processes, they're equivalent to asynchronous message passing concurrency, particularly in languages where the threads/processes don't share (mutable) state, such as Erlang.

There are a lot of things that pipes are suitable for, but they don't scale that well (processes are expensive).

This is misleading...

"Modern graphics cards are fully programmable and could be used as a general-purpose CPU if you're willing to put the effort into it"

Their instruction set is heavily oriented towards graphics purposes, though. I'm unsure whether you could sensibly (and sanely) use it as a general-purpose CPU.

Hence "Willing to put the effort into it"

I didn't say it would be easy, just possible. It will take some effort.

Yeah, well

I basically wonder whether it's a good idea at all ;)

transputers, multi-core dies, SIMD, Beowulf clusters...

Graphics processors are a special case of transputers/SIMD/etc. So I think NVIDIA, ATI, et al. should switch to making general-purpose, highly parallel coprocessors. If I could buy a PPU (parallel processing unit) that's as powerful and as cheap as the current cards, I'd have a lot of extra power available for declarative concurrency (see other threads) or scientific computing of the Beowulf cluster persuasion.

In addition, PPUs would (hopefully) encourage fast open source OpenGL implementations.

-- Shae Erisson - ScannedInAvian.com

isn't this where the cell pro

Isn't this where the Cell processor is going?

Hum?

"So I think nvidia, ati, et al should switch to making general-purpose highly parallel coprocessors."

Why should they stop producing GPUs?

How does this relate to open source OpenGL?

PPUs would (hopefully) encourage fast open source OpenGL implementations.

These days an OpenGL implementation simply assembles packets of data that are passed to the video card. That's a bit of an oversimplification, but the gist is true. The GPU takes care of transformation, backface culling, clipping, rasterization, etc. It seems like the real issue is that GPU manufacturers are tight-fisted about handing out their specs.

I think that the idea would b

I think that the idea would be that you implement OpenGL on the hardware as well (those PPUs), but that their spec would be open (so that people can actually program them directly, instead of via an API like DirectX or OpenGL).

Probably Not

It all depends on how mathematically similar your application is to a 3D engine...

In general, if you want to speed up an application by throwing hardware at it, you're probably better off upgrading the processor/RAM/whatever.

I do remember some serious conversations on the Beowulf cluster mailing list about using 3D cards as secondary CPUs in clusters. At the time, IIRC, it wasn't possible to use them for anything other than graphics, but recent chipsets are more general. The upside to this is that once you have the code to run the cards like that, and you have a sufficiently fast bus (like PCI Express), you can get a big speed gain just by throwing another card in.

Still, very few applications are likely to make the extra development time worthwhile. It just amazes me that this is possible.

General-purpose computation on GPUs

There's a weblog (www.gpgpu.org) on the topic of general-purpose computation on GPUs. Evidently, there are numerical applications where it does make sense to do the computation on the GPU.

Sorry, mix of terms, I meant

Sorry, mix of terms. I meant general-purpose as in "really general-purpose", not parallel number crunching ;)

There is a *lot* of very interesting and useful stuff you can do on modern GPUs, that's without question!

Depends on the OS

On Linux, at least, threads in the same process are scheduled across multiple CPUs, so any language that uses multiple OS-level threads will take advantage of multiple CPUs. Python, for example.

python has a global interpreter lock

or at least I think it still does. This means that Python code cannot execute simultaneously in multiple threads. You have to embed C in your Python to make use of multiple CPUs.

Used to be a patch

"Free Threading" Patches for Python. Don't know if it still applies though.

Don't forget Single Assignment C

sac is pretty cool.
How many threads would you like? Just tell the compiler and it'll do it for you.

sac is proprietary

I am disappointed to report that sac is proprietary, and I could not even find a package that includes source code for the compiler. It's unfortunate that the original poster neglected to mention this, because sac sounds really cool, and I wish I could use it.

Instead of sac use...

Hi!

Google for ZPL (http://www.cs.washington.edu/research/zpl/home/index.html), UPC (http://upc.nersc.gov/), or Titanium (http://www.cs.berkeley.edu/projects/titanium/). (Or NESL (http://www-2.cs.cmu.edu/~scandal/nesl.html) if you want a functional language.)

All with source code, and scalable to thousands of CPUs...

Heiko

Trestle toolkit

The Trestle toolkit, designed at the Systems Research Center (SRC) of (then) Digital Equipment Corp. (subsequently Compaq and now HP) by Greg Nelson and Mark Manasse, provided a powerful and extensible toolkit for building user interfaces for multithreaded applications. Trestle used a technique of locking levels to help programmers deal with concurrency internally. It also associated a concept of "event time" with selections, to help eliminate race conditions where the end user's input events are interpreted by the wrong window or widget. Trestle was written in Modula-3, a Java-like language jointly developed in the late 1980s by SRC and Olivetti Research. Trestle is documented in:

Trestle Reference Manual
Mark S. Manasse and Greg Nelson
DEC SRC Research Report 68, December 1991
http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-68.pdf

and

Trestle Tutorial
Mark S. Manasse and Greg Nelson
DEC SRC Research Report 69, May 1, 1992
http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-69.pdf

Join calculus

There are several implementations of the join calculus (which is closely related to the pi-calculus). Among the languages implementing the join calculus are OCaml (JoCaml), Java, and C#. For more information, please see

http://research.microsoft.com/~fournet/papers/join-tutorial.pdf

Does anyone know of any resea

Does anyone know of any research about designing for concurrency in rich client applications?

That problem was solved a long time ago in operating systems research. I only wish that people would start to realize that heavyweight communication systems (those done through the OS, using hardware at the lowest levels) are better than software-designed approaches.

In retrospect, I recall the example from OS class that showed the amount of code needed to provide thread-level locks on two variables (in C). Simply using two semaphores would have sufficed (albeit without the possibility of deadlocks), but the amount of code required for multiple software semaphores was beyond comprehension... although usable. Practicality is one thing, perfection is another; but should concurrency really be handled at the user level when the OS can handle such things perfectly well by itself?

--
kruhft

locking

Why would you go through the overhead of invoking the operating system to perform synchronization? (Particularly given that the original post was about the recent recurrence of actual hardware concurrency, and not the somewhat self-destructive practice of faking it on a uniprocessor.)

The MTA had explicit synchronization state on a per-word basis, so that any of the thousands of hardware threads could serialize with another should it be necessary. And when a thread blocked, the only resource left (temporarily) unused was its register file; other threads could still use the rest of the machine.

I have some hope that as things progress, the rich substrate of 'dead' ideas will be mined again.

Multiple assignment in Python

One of my first ideas when I started to use Python was that you could automatically create concurrent programs using the LHS "," operator (simplified):

x, y, z = f(a, b, c), g(a, b, c), h(a, b, c)

I've seen multiple versions of various concurrency operations, but this seemed the cleanest to me (at least much nicer than let* :)

--
kruhft

Not really a concurrency operator

It assigns a value (in this case, a tuple) to a pattern. Consider:

t = 1, 2, 3
x, y, z = t

ML and the like also have pattern-matching assignment, but it is more flexible and valid for more types than in Python.

In some cases, such forms could be used to automatically find code that could potentially be executed in parallel, but they don't really make that significantly easier than side-effect-free code does in general.

Note that the purpose of let* is also different, being related to lexical scope rather than concurrency. In fact, let* is the form of let with the least potential for concurrency.

Concurrency

[I think the beginning of this thread was more about parallelism (adding hardware to get answers faster) than about concurrency (language constructs that take time and communication into account).]

I haven't read Concepts, Techniques, and Models of Computer Programming yet, but I want to mention two languages with concurrency that isn't based on a shared store, so I guess each of these languages must exhibit either declarative concurrency or message-passing concurrency. Perhaps someone who has understood the distinction between those models can classify them:

- ToonTalk

- Timber

Pi => message-passing?

I don't have my copy of CTM handy, so I cannot check the letter of the definitions, but since both Timber's actions and ToonTalk's birds/nests look like pi channels, I would say they both use message-passing concurrency.

BTW, it's a pity that I didn't save the electronic copy before CTM's publication... The hard copy is, uh, quite hard to carry around.

[on edit: The semantic layers of Timber: The reactive layer is a system of concurrent objects with internal state that react to messages passed from the environment (input), and in turn send synchronous or asynchronous messages to each other, and back to the environment (output).]

[on edit: Bridging pi and ToonTalk]