
Local Reasoning for Storable Locks and Threads

Local Reasoning for Storable Locks and Threads. Alexey Gotsman; Josh Berdine; Byron Cook; Noam Rinetzky; Mooly Sagiv.

We present a resource-oriented program logic that is able to reason about concurrent heap-manipulating programs with unbounded numbers of dynamically-allocated locks and threads. The logic is inspired by concurrent separation logic, but handles these more realistic concurrency primitives. We demonstrate that the proposed logic allows for local reasoning about programs that exhibit a high degree of information hiding in their locking mechanisms. Soundness is proved using a novel thread-local fixed-point semantics.
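To make "storable locks" concrete, here is a minimal Go sketch of the kind of program the logic is meant to handle: locks allocated dynamically in the heap, one per node, with a runtime-determined number of threads each locking only the node it is handed. The example is my own illustration of the pattern, not code from the paper.

```go
package main

import (
	"fmt"
	"sync"
)

// node is a heap-allocated record that carries its own lock: the lock
// protects the node's data and lives exactly as long as the node does.
// This illustrates the "storable locks" pattern; the program itself is
// an illustrative assumption, not taken from the paper.
type node struct {
	mu   sync.Mutex
	data int
	next *node
}

// worker locks only the node it was handed, so the set of locks and
// threads in play is unbounded and decided entirely at runtime.
func worker(n *node, wg *sync.WaitGroup) {
	defer wg.Done()
	n.mu.Lock()
	n.data++
	n.mu.Unlock()
}

func main() {
	var wg sync.WaitGroup
	var head *node
	for i := 0; i < 4; i++ {
		head = &node{data: i, next: head} // allocate a fresh lock along with its node
		wg.Add(1)
		go worker(head, &wg)
	}
	wg.Wait()
	for n := head; n != nil; n = n.next {
		fmt.Println(n.data)
	}
}
```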

How to make the static represent the dynamic?

A fair number of software bugs seem to stem from the gap between static source code and dynamic runtime behavior: concurrency issues, rare this-then-that-equals-infinite-loop issues, and so on. How could a programming language be designed so that the static source more directly either reveals the runtime possibilities or constrains them enough to avoid confusion, and thus bugs? Some languages do this by choosing better defaults (Erlang, Haskell, etc.), but they obviously don't constrain away all the differences between source and runtime. Presumably no language could and still be all that useful. But how close could one usefully get?

Sure, having an IDE that generates UML interaction diagrams for all the possible interleavings of concurrent API calls might help in some sense, but tools along those lines really feel like having mistakenly allowed grape juice into the room in the first place.

Since sundry tools for miscellaneous languages already exist, I am more interested in refining or constraining core languages than in ancillary tools. Still, how could a language at least make such support possible, e.g. automatic deadlock detection for CSP, or design-by-contract that effectively constrains the runtime state?
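For a taste of what "automatic deadlock detection for CSP" can look like when it is baked into the language runtime rather than bolted on as a separate tool, here is a minimal Go sketch of my own (not tied to the paper or to any particular analysis tool). Go's channels are CSP-style, and its runtime reports a global deadlock when every goroutine is blocked on channel operations.

```go
package main

// Two goroutines wait on each other's channel: a cyclic wait with no
// runnable goroutine left. The Go runtime notices this and aborts with
// "fatal error: all goroutines are asleep - deadlock!" instead of
// hanging silently, because the CSP-style primitives make each
// goroutine's blocked/runnable state visible to the scheduler.
func main() {
	a := make(chan int)
	b := make(chan int)

	go func() {
		v := <-a // waits for a value on a before sending on b...
		b <- v
	}()

	v := <-b // ...while main waits on b before sending on a
	a <- v
}
```

This only catches global deadlocks (every goroutine blocked), so it is a far cry from a full static analysis, but it shows how choosing CSP-style primitives in the core language already buys some runtime checking for free.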