Branching Time vs. Linear Time: Semantical Perspective
Sumit Nain and Moshe Vardi, Branching Time vs. Linear Time: Semantical Perspective, invited ATVA'07 paper.
In revisiting the notion of process equivalence, a fairly central part of concurrency theory, Nain and Vardi end up arguing in favor of a purely trace-based notion of equivalence and the use of linear-time logics. This in turn leads to a rejection of bisimulation as a tool for establishing process equivalences.
They take pains to point out that they are not claiming that bisimulation or CTL should be abandoned or are not useful. Rather, their emphasis is on the fact that bisimulation is not a contextual equivalence and is therefore not appropriate for establishing equivalence between (for example) a specification and its implementation, a point they return to in the paper's conclusion.
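The standard example behind this debate: the processes a.(b+c) and a.b + a.c have exactly the same traces but are not bisimilar. Below is a minimal sketch in C++ for concreteness; the labeled transition systems and the naive checker are my own illustration, not from the paper.

```cpp
// Trace equivalence vs. bisimilarity on the classic example:
// P = a.(b+c) versus Q = a.b + a.c.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

using LTS = std::map<int, std::vector<std::pair<char, int>>>; // state -> (label, successor)

// Collect all maximal traces by depth-first search (both systems are finite and acyclic).
void traces(const LTS& m, int s, std::string cur, std::set<std::string>& out) {
    auto it = m.find(s);
    if (it == m.end() || it->second.empty()) { out.insert(cur); return; }
    for (auto [a, t] : it->second) traces(m, t, cur + a, out);
}

// Naive bisimulation check: (s,t) are bisimilar iff every move of one can be
// matched by a move of the other with the same label, recursively.
bool bisim(const LTS& m1, int s, const LTS& m2, int t) {
    auto succ = [](const LTS& m, int s) {
        auto it = m.find(s);
        return it == m.end() ? std::vector<std::pair<char, int>>{} : it->second;
    };
    for (auto [a, s2] : succ(m1, s)) {            // every s-move matched by some t-move
        bool ok = false;
        for (auto [b, t2] : succ(m2, t))
            if (a == b && bisim(m1, s2, m2, t2)) { ok = true; break; }
        if (!ok) return false;
    }
    for (auto [b, t2] : succ(m2, t)) {            // ...and vice versa
        bool ok = false;
        for (auto [a, s2] : succ(m1, s))
            if (a == b && bisim(m1, s2, m2, t2)) { ok = true; break; }
        if (!ok) return false;
    }
    return true;
}

int main() {
    LTS P = {{0, {{'a', 1}}}, {1, {{'b', 2}, {'c', 3}}}};                    // a.(b+c)
    LTS Q = {{0, {{'a', 1}, {'a', 2}}}, {1, {{'b', 3}}}, {2, {{'c', 4}}}};   // a.b + a.c
    std::set<std::string> tp, tq;
    traces(P, 0, "", tp); traces(Q, 0, "", tq);
    std::cout << "same traces: " << (tp == tq) << "\n";     // 1: trace-equivalent
    std::cout << "bisimilar:   " << bisim(P, 0, Q, 0) << "\n"; // 0: not bisimilar
}
```

A linear-time view identifies P and Q; a branching-time view distinguishes them because after the a-step, P still offers both b and c while Q has already committed.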
By Allan McInnes at 2009-04-26 22:55 | Parallel/Distributed | Semantics | 26 comments | other blogs | 13643 reads
Capabilities for External Uniqueness
Philipp Haller and Martin Odersky have submitted Capabilities for External Uniqueness to OOPSLA'09.
A prototype plugin for the Scala compiler can be found on the same page.
By James Iry at 2009-04-09 16:26 | Object-Functional | Parallel/Distributed | Type Theory | 3 comments | other blogs | 10073 reads
Detecting Data Race and Atomicity Violation via Typestate-Guided Static Analysis
Detecting Data Race and Atomicity Violation via Typestate-Guided Static Analysis. Yue Yang, Anna Gringauze, Dinghao Wu, Henning Rohde. Aug. 2008
The correctness of typestate properties in a multithreaded program often depends on the assumption of certain concurrency invariants. However, standard typestate analysis and concurrency analysis are disjoint in that the former is unable to understand threading effects and the latter does not take typestate properties into consideration. We combine these two previously separate approaches and develop a novel typestate-driven concurrency analysis for detecting race conditions and atomicity violations. Our analysis is based on a reformulation of typestate systems in which state transitions of a shared variable are controlled by the locking state of that variable. By combining typestate checking with lockset analysis, we can selectively transfer the typestate to a transient state to simulate the thread interference effect, thus uncovering a new class of typestate errors directly related to low-level or high-level data races. Such a concurrency bug is more likely to be harmful, compared with those found by existing concurrency checkers, because there exists concrete evidence that it may eventually lead to a typestate error as well. We have implemented a race and atomicity checker for C/C++ programs by extending a NULL pointer dereference analysis. To support large legacy code, our approach does not require a priori annotations; instead, it automatically infers the lock/data guardianship relation and variable correlations. We have applied the toolset to check a future version of the Windows operating system, finding many concurrency errors that cannot be discovered by previous tools.

Typestates extend ordinary types by recording the state of objects, making it possible to catch safety violations that stem from operations being invoked on objects that are in the wrong state.

By Ehud Lamm at 2009-04-07 05:56 | Parallel/Distributed | Software Engineering | Type Theory | 2 comments | other blogs | 10287 reads
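To make the lockset-guided typestate idea concrete, here is a toy sketch (my own illustration, not the paper's algorithm): an unguarded write drives a shared pointer into a "transient" state, modeling interference from other threads, and a later dereference in that state is flagged as a possible race-induced error.

```cpp
// Toy typestate checker over a straight-line event trace for one shared
// pointer guarded by one lock. An assignment made while no lock is held
// moves the typestate to Transient; dereferencing in that state is reported.
#include <iostream>
#include <string>
#include <vector>

enum class State { Null, Valid, Transient };

struct Event { std::string op; };  // "lock", "unlock", "assign", "deref"

int main() {
    std::vector<Event> trace = {
        {"lock"}, {"assign"}, {"unlock"},  // properly guarded initialization
        {"assign"},                        // unguarded write: lockset is empty
        {"deref"},                         // flagged: state is now Transient
    };
    State st = State::Null;
    int locksHeld = 0;
    for (const auto& e : trace) {
        if (e.op == "lock") ++locksHeld;
        else if (e.op == "unlock") --locksHeld;
        else if (e.op == "assign")
            // Guarded writes yield a known state; unguarded ones are Transient,
            // simulating the thread-interference effect described above.
            st = (locksHeld > 0) ? State::Valid : State::Transient;
        else if (e.op == "deref") {
            if (st == State::Transient)
                std::cout << "possible racy NULL dereference\n";
            else if (st == State::Null)
                std::cout << "definite NULL dereference\n";
        }
    }
}
```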
C++ Futures
The next C++ standard is supposed to include futures in its libraries. Futures allow assignment of a value that is to be calculated at a later time: a promise stands in for the final value until it becomes available. The current thread carries on until the value is actually needed, at which point either the value is available or the thread goes into a wait state. Being a library facility, rather than a built-in language facility as in Oz or Alice ML, the use of futures is not nearly as transparent; still, one should be able to get a similar effect. In an article on Broken promises–C++0x futures, Bartosz Milewski criticizes the C++ design for its lack of composability.
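For readers who haven't met the pairing, here is a minimal C++11 sketch of a promise standing in for a value, with the consumer blocking only when the value is actually needed.

```cpp
// One thread fulfills a promise; another carries on until it needs the value.
#include <future>
#include <iostream>
#include <thread>

int main() {
    std::promise<int> p;                     // write end
    std::future<int> f = p.get_future();     // read end

    std::thread producer([&p] {
        p.set_value(42);                     // fulfill the promise
    });

    // ... the current thread can do other work here ...
    std::cout << "answer: " << f.get() << "\n";  // blocks until the value arrives
    producer.join();
}
```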
My personal opinion is that what he is really saying is that concurrency through futures (which in the simple case is a declarative form of concurrency) doesn't easily support message-based concurrency. I suppose one could look at CTM and describe how to do Erlang-style concurrency with nothing more than dataflow variables. But I think the bigger problem is that we assume there is a single approach to solving the concurrency problem. We might as well say that STM is good, but it fails to deliver declarative or message concurrency. I personally think languages, whether through libraries or built-in facilities, should provide multiple ways of dealing with concurrency and distribution. Although I don't use C++ anymore, I'm glad that they are integrating lightweight concurrency models into the language. (Surprised we don't have a category dedicated to concurrency, so I'll post this under parallel/distributed.)
By Chris Rathman at 2009-03-04 16:13 | Parallel/Distributed | 66 comments | other blogs | 39335 reads
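To see the composability gap concretely, here is a hedged sketch (C++14 for the move capture; `then` is my own helper, not part of any standard) of chaining a continuation onto a future. With only the standard interface, the chaining task must block on get(), dedicating a thread per link, which is exactly the cost Milewski objects to.

```cpp
// A hand-rolled `then`: compose a function after a future by spawning a
// task that blocks until the future is ready. Illustrates the workaround's
// cost, not a proposed fix.
#include <future>
#include <iostream>
#include <utility>

template <typename T, typename Fn>
auto then(std::future<T> f, Fn fn) -> std::future<decltype(fn(std::declval<T>()))> {
    return std::async(std::launch::async,
                      [f = std::move(f), fn]() mutable { return fn(f.get()); });
}

int main() {
    std::future<int> a = std::async(std::launch::async, [] { return 21; });
    std::future<int> b = then(std::move(a), [](int x) { return x * 2; });
    std::cout << b.get() << "\n";  // 42, but each `then` tied up a waiting thread
}
```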
Using Promises to Orchestrate Web Interactions
Phil Windley posted a few useful links about this topic following WWW2008.
Bonus question: How is this item connected to the one I posted earlier today?
Verifying Compiler Transformations for Concurrent Programs
Verifying Compiler Transformations for Concurrent Programs. Sebastian Burckhardt, Madanlal Musuvathi, Vasu Singh.
Compilers transform programs, either to optimize performance or to translate language-level constructs into hardware primitives. For concurrent programs, ensuring that a transformation preserves the semantics of the input program can be challenging. In particular, the emitted code must correctly emulate the semantics of the language-level memory model when running on hardware with a relaxed memory model. In this paper, we present a novel proof methodology for proving the soundness of compiler transformations for concurrent programs. Our methodology is based on a new formalization of memory models as dynamic rewrite rules on event streams. We implement our proof methodology in a first-of-its-kind semi-automated tool called Traver to verify or falsify compiler transformations. Using Traver, we prove or refute the soundness of several commonly used compiler transformations for various memory models. In this process, we find subtle bugs in the CLR JIT compiler and in the JSR-133 Java JIT compiler recommendations.

The goal is to reason about the effects that different memory models may have on the validity of transformations. Program execution is modeled as an event stream, with the memory model able to alter the stream by swapping or eliminating events. Each concurrent execution thread produces a separate event stream; the stream produced by the execution of the concurrent program is the (possibly altered) result of merging the streams of its components. The validity of a transformation can thus be proved relative to a specific memory model (i.e., a set of stream rewrite rules).

Traver lives here.

By Ehud Lamm at 2009-01-10 06:45 | Implementation | Parallel/Distributed | Semantics | 2 comments | other blogs | 11253 reads
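A toy rendering of the event-stream idea (my own illustration, not Traver): a memory model as a rewrite rule that may delay a store past a later load to a different address, generating the relaxed executions a transformation must remain sound under.

```cpp
// Memory model as rewrite rules over an event stream. The single rule here
// swaps adjacent st(x); ld(y) when x != y (store->load reordering, in the
// style of TSO-like models), enumerating alternative streams hardware may show.
#include <iostream>
#include <string>
#include <vector>

struct Event { std::string kind; std::string addr; };  // "st" or "ld"

// One rewrite step applied at every legal position.
std::vector<std::vector<Event>> rewrites(const std::vector<Event>& s) {
    std::vector<std::vector<Event>> out;
    for (size_t i = 0; i + 1 < s.size(); ++i)
        if (s[i].kind == "st" && s[i + 1].kind == "ld" && s[i].addr != s[i + 1].addr) {
            auto t = s;
            std::swap(t[i], t[i + 1]);
            out.push_back(t);
        }
    return out;
}

void print(const std::vector<Event>& s) {
    for (const auto& e : s) std::cout << e.kind << "(" << e.addr << ") ";
    std::cout << "\n";
}

int main() {
    // Dekker-style fragment from one thread: st(x); ld(y).
    std::vector<Event> thread1 = {{"st", "x"}, {"ld", "y"}};
    print(thread1);                        // program order: st(x) ld(y)
    for (const auto& alt : rewrites(thread1))
        print(alt);                        // ld(y) st(x): the relaxed execution
}
```

A transformation proved sound against the rule set is sound for every execution the model can produce; a counterexample stream refutes it.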
Programmable Concurrency in a Pure and Lazy Language
Programmable Concurrency in a Pure and Lazy Language, Peng Li's 2008 PhD dissertation, is a bit more implementation-focused than is common on LtU. The paper does touch on a wide range of concurrency mechanisms, so it might have value to any language designer considering ways to tackle the concurrency beast.
The paper's summary explains what I like most about it.
Even if concurrency isn't your thing, section 6.3 describes the author's findings on the pros and cons of both purity and laziness in a systems programming context.
By James Iry at 2008-12-15 03:00 | Functional | Implementation | Parallel/Distributed | 11 comments | other blogs | 18010 reads
Functional building blocks as concurrency patterns
While teaching INGI1131, my concurrent programming course, I have become even more impressed by a concurrent paradigm, namely functional programming extended with threads and ports, which I call multi-agent dataflow programming. This paradigm has many good properties.
This paradigm seems to be exactly what is needed for both small and big parallel systems (both multicore and Internet, tight and loose coupling). I am surprised that it is not used more often. What do you think? Does it deserve a bigger role?
By Peter Van Roy at 2008-11-28 12:47 | Parallel/Distributed | 79 comments | other blogs | 34975 reads
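For the curious, here is what the threads-plus-ports building block looks like in plain library code (a minimal C++ sketch, not Oz): a port is a many-to-one message queue feeding a single agent thread, which blocks, dataflow-style, until input arrives.

```cpp
// A port: many senders, one receiving agent. The agent folds its input
// stream; the fold's result is deterministic even though arrival order isn't.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

template <typename T>
class Port {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(T v) {
        { std::lock_guard<std::mutex> g(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T receive() {                 // blocks until a message arrives (dataflow wait)
        std::unique_lock<std::mutex> g(m_);
        cv_.wait(g, [this] { return !q_.empty(); });
        T v = std::move(q_.front()); q_.pop();
        return v;
    }
};

int main() {
    Port<int> port;
    std::thread agent([&] {       // the agent thread consumes the port's stream
        int sum = 0;
        for (int i = 0; i < 4; ++i) sum += port.receive();
        std::cout << "agent saw total " << sum << "\n";
    });
    std::vector<std::thread> senders;
    for (int i = 1; i <= 4; ++i)
        senders.emplace_back([&port, i] { port.send(i); });  // concurrent sends
    for (auto& t : senders) t.join();
    agent.join();                 // prints 10, regardless of arrival order
}
```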
Local Rely-Guarantee Reasoning
Local Rely-Guarantee Reasoning, Xinyu Feng. Accepted for publication at POPL 2009.
In the beginning there was Hoare logic, which taught us how to reason about sequential, imperative programs. Then, Owicki and Gries extended Hoare logic with some additional rules that enabled reasoning about some concurrent imperative programs. This was good, but there were a lot of "obviously correct" concurrent programs that it couldn't handle. So Owicki-Gries logic begat two children.

The elder child was Jones's introduction of the rely-guarantee method. The intuition here is that if you have two subprograms M1 and M2, and M1 will work in an environment with a working M2, and M2 will work in an environment with a working M1, then when you put the two together you have a working M1 and M2. This is a really powerful reasoning method, but unfortunately it's not terribly modular.

The younger child of Owicki-Gries was concurrent separation logic. The intuition behind it is that if you can divide the heap into disjoint (logical) pieces, and only let one process access each chunk at a time, then you can't have any race conditions. This is a very simple principle, and permits modular, compositional reasoning about concurrent programs -- even pointer programs. But there are programs that can't be proven in this style.

So the obvious thing to want is the ability to combine these two styles of reasoning. Unfortunately, this is hard -- there have been several logics proposed to do this, each of which does a bit better than the last. Feng's is the latest, and the best I've seen so far. (Though concurrency is not really my area.)

An interesting point is that these kinds of reasoning principles, while invented for the concurrent world, are also interesting for reasoning about modular sequential programs. This is because when you create imperative abstractions, it's helpful to be able to give up knowledge about exactly when state changes can happen. So you need the same sorts of techniques to handle this kind of conceptual nondeterminism that you need for the actual nondeterminism of parallel hardware.

By neelk at 2008-10-05 17:17 | Parallel/Distributed | Theory | 6 comments | other blogs | 9452 reads
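For reference, the rely-guarantee parallel composition rule, roughly in the form Jones gave it (stability side conditions omitted), formalizes the "M1 works alongside a working M2" intuition above: R is the interference a component tolerates, G the interference it may cause.

```latex
% Each component is verified assuming the environment's interference includes
% the other component's guarantee; the composition tolerates only R and
% guarantees the union of both components' effects.
\[
\frac{\; R \lor G_2,\; G_1 \vdash \{P_1\}\, M_1\, \{Q_1\}
      \qquad
      R \lor G_1,\; G_2 \vdash \{P_2\}\, M_2\, \{Q_2\} \;}
     {\; R,\; G_1 \lor G_2 \vdash \{P_1 \land P_2\}\; M_1 \parallel M_2\; \{Q_1 \land Q_2\} \;}
\]
```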
Intel Ct: C for Throughput Computing
Intel is working on a C++ extension called Ct, intended to address multicore programming and data parallelism more directly than previous library efforts like TBB did.
Ct is not intended to be a C++ dialect or replacement but rather a tricky integration of a runtime engine with existing C++ compiler environments.
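Not Ct's actual API, but the following plain C++ sketch illustrates the programming style Ct targets: whole-vector collective operations with no visible threads or locks, leaving the runtime free to split the work across cores.

```cpp
// Data-parallel style: element-wise map and a reduction, each expressed as a
// single collective operation over whole vectors rather than explicit threads.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> a(1000), b(1000), c(1000);
    std::iota(a.begin(), a.end(), 0.0);   // a = 0, 1, 2, ...
    std::iota(b.begin(), b.end(), 1.0);   // b = 1, 2, 3, ...

    // Element-wise a*b: one logical collective op, no visible concurrency.
    std::transform(a.begin(), a.end(), b.begin(), c.begin(),
                   [](double x, double y) { return x * y; });

    // Reduction, likewise a single collective operation.
    double dot = std::accumulate(c.begin(), c.end(), 0.0);
    std::cout << "dot = " << dot << "\n";
}
```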