"A Theory of Interprocess Communication" -- Leslie Lamport

Unless I flubbed the search (entirely possible), this LtU-relevant classic has not been registered properly here (and I think it ought to be). From 1986 and the pen of Leslie Lamport:

The Mutual Exclusion Problem: Part I -- A Theory of Interprocess Communication

The Mutual Exclusion Problem: Part II -- Statement and Solutions [same link]

A taste:

2.1 Physical Considerations. For our results to be meaningful, our formalism must accurately reflect the physical reality of concurrent processes. We therefore feel that it is important to justify the formalism on physical grounds. We do this in terms of the geometry of space-time, which lies at the foundation of all modern physics. [....]

The reader may find the introduction of special relativity a bit farfetched, since one is rarely, if ever, concerned with systems of processes moving at relativistic velocities relative to one another. However, the relativistic view of time is relevant whenever signal propagation time is not negligibly small compared with the execution time of individual operations, and this is certainly the case in most multiprocess systems.

Love 'em or hate 'em the papers are real art and, in my opinion, illustrate by example a good way of thinking about computing. A way of thinking not as much practiced as it ought to be.


The concurrency constructs

The concurrency constructs in Clojure are driven by more or less the same world view. Rich Hickey's talk on the subject is interesting.

Buridan's take on concurrency

Blockquoth Lamport:

… the relativistic view of time is relevant whenever signal propagation time is not negligibly small compared with the execution time of individual operations, and this is certainly the case in most multiprocess systems

Compare to Peter Denning's Choice Uncertainty Principle:

A half-signal input can cause the flipflop to enter a “metastable state” for an indeterminate time that may exceed the clock interval by a large amount. The flipflop eventually settles into a stable state, equally likely to be 0 or 1... Unambiguous choices between near-simultaneous signals must be made in many parts of computing systems, not just at interrupt flipflops...

We can summarize the preceding analysis as the choice uncertainty principle: “No choice between near-simultaneous events can be made unambiguously within a preset deadline.” The source of the uncertainty is the metastable state that can be induced in the chooser by conflicting forces generated when two distinct signals change at the same time.

It is a mistake to think the choice uncertainty principle is limited to hardware. Suppose your software contains a critical section guarded by semaphores. Your proof that the locks choose only one process at a time to enter the critical section implicitly assumes that only one CPU at a time can gain access to the memory location holding the lock value. If that's not so, then occasionally your critical section will fail no matter how careful your proofs. Every level of abstraction at which we prove freedom from synchronization errors always relies on a lower level at which arbitration is solved. But arbitration can never be solved absolutely.
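To make that layering concrete, here is a minimal C11 sketch (the names and structure are mine, not Denning's): the software argument that only one thread at a time enters the critical section is only as strong as the hardware's promise that concurrent atomic exchanges on the lock word get arbitrated.

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool lock_word = false;

    void enter_critical(void) {
        /* Spin until we are the thread that flips lock_word from false to true.
           The mutual-exclusion "proof" for this loop silently delegates the
           arbitration problem to whatever hardware implements atomic_exchange. */
        while (atomic_exchange_explicit(&lock_word, true, memory_order_acquire)) {
            /* busy-wait */
        }
    }

    void leave_critical(void) {
        atomic_store_explicit(&lock_word, false, memory_order_release);
    }

Denning's point is that some level below this, ultimately an analogue circuit, has to pick a winner between near-simultaneous exchanges, and that choice can never be made with certainty within a preset deadline.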

About the only place where I've seen this cited is in Concurrent Composition and Algebras of Events, Actions, and Processes by Mark Burgin and Marc L. Smith (previously mentioned on LtU about 2½ years ago):

The goal of this work is to introduce a common formalized framework for current research in this area and to eliminate shortcomings of existing models of concurrency. The main problem with the majority of these models is that they use an oversimplified model of time. Firstly, they consider only two types of time scales — linear time and branching time, while the system theory of time (Burgin, 2002) implies a necessity to have a more flexible representation of temporal concurrent processes. With the advent of multi-core processors, and the prevalence of cluster and grid computing, reasoning about true concurrency is no longer an avoidable concern. Secondly, all events and actions in these models of concurrency do not have duration, while in reality to build reliable technological systems, such as computers and networks, it is necessary to take into account duration of events and actions (Denning, 2007). Thirdly, it is generally assumed that moments of time when events occur are exactly specified and distinguishable from one another, while in reality researchers and engineers encounter an impossibility to distinguish two near-simultaneous events (Lamport, 1984; Denning, 2007). As a result, the arbitration problem is one of the most important for concurrent systems (Denning, 1985).

what does that mean?

Denning's quote there about 'never be solved absolutely' makes it sound to me like everything we currently use is probabilistically flawed; sorta like how hashes will eventually collide, but people ignore that. Is that the point? Or is there some fix that current hardware uses?

Probabilistic solution

These are analogue circuits that are used to build digital gates. There is no definite fix for the problem, so the only solution is to reduce the probability of error to an acceptable threshold. There is a reasonable discussion of the problem and solution on Wikipedia. Every digital hardware device that you use has a small probability of failure, but the actual probabilities of failure are so small that we never notice the problem.
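As a rough illustration of "reduce the probability of error to an acceptable threshold", a commonly quoted synchronizer reliability model is MTBF = exp(t_r / tau) / (T_w * f_clk * f_data). The little C sketch below plugs in numbers that are invented but plausible; real values come from a device datasheet.

    /* Back-of-the-envelope metastability MTBF estimate:
           MTBF = exp(t_r / tau) / (T_w * f_clk * f_data)
       where t_r is the settling time allowed before the output is sampled,
       tau and T_w are device constants, and f_clk, f_data are clock and
       data rates. All numbers below are made up for illustration. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double tau    = 50e-12;   /* metastability time constant, seconds (assumed) */
        double T_w    = 100e-12;  /* metastability window, seconds (assumed)        */
        double f_clk  = 100e6;    /* clock frequency, Hz                            */
        double f_data = 10e6;     /* asynchronous input event rate, Hz              */
        double t_r    = 5e-9;     /* settling time given to the synchronizer        */

        double mtbf = exp(t_r / tau) / (T_w * f_clk * f_data);
        printf("MTBF ~ %.3g seconds\n", mtbf);  /* astronomically large, never infinite */
        return 0;
    }

A little extra settling time drives the failure rate down exponentially, which is why we never notice it; but it never reaches zero, which is the "never be solved absolutely" part.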

But the two quotes are not discussing quite the same issue. The point Lamport is making is that in a system with non-negligible propagation time you cannot use a model of time that has instantaneous transitions with finite delays between them. It could be argued that the uncertainty principle referred to in the second quote is a specific instance of this problem, but there is no direct comparison: Lamport's is a much wider principle that encompasses the other and then some.

For example, if you consider a discrete system in which there is no uncertainty principle but the propagation delay is still large in comparison to the switching time, then the relativistic view would still be relevant.
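A tiny sketch of the difference in the time model (the types and names are mine, and this shows only the ordering half of Lamport's formalism): an operation execution is an interval with duration, one operation precedes another only if it finishes before the other starts, and otherwise the two are concurrent and no global order between them may be assumed.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { double start, finish; } op;  /* an operation execution */

    /* a precedes b: a finishes before b starts anywhere in the system */
    bool precedes(op a, op b) { return a.finish < b.start; }

    /* neither precedes the other: the executions overlap in time */
    bool concurrent(op a, op b) { return !precedes(a, b) && !precedes(b, a); }

    int main(void) {
        op read_x  = { 0.0, 3.0 };  /* long operation: propagation time not negligible */
        op write_x = { 1.0, 2.0 };  /* overlaps the read entirely */
        printf("concurrent: %s\n", concurrent(read_x, write_x) ? "yes" : "no");
        return 0;
    }

In a model with instantaneous transitions the question "which came first?" always has an answer; once operations have duration, some pairs simply have no order, and that is true whether or not any metastability is involved.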

how fast is your computer?

Re: comment-62391: …everything we currently use is probabilistically flawed?

Both the Wikipedia article on Metastability in Electronics and the FPGA FAQ entry on Metastability get a little too technical. Those of us seeking a more popular-sciency introduction to the subject would be better served by Computers without Clocks by Ivan E. Sutherland (of Sketchpad fame) and Jo Ebergen, published in Scientific American circa August 2002. In an asynchronous nod to the LtU readership, Sutherland and Ebergen depart momentarily from their main topic to throw in the following bit of PLT trivia:

Sun is by no means the only company investigating asynchronous circuits. A group at Philips Research in the Netherlands has developed an asynchronous error corrector for a digital compact cassette player and an asynchronous version of a popular microcontroller for portable devices. The asynchronous microcontroller has been incorporated into pagers sold by Philips. The success of the Philips research group comes from three factors. First, the team learned how to create products rapidly using a programming language called Tangram that simplifies hardware design.

(What's a cassette player?)

An HTML-ed bootleg copy of the article is hosted by Moscow State University's Institute for Studies of the Nature of Time (a.k.a. “temporology”) and a scan of the original publication can be found at USC.