Parameterized Notions of Computation

Parameterized Notions of Computation, Robert Atkey, JFP 2008.
Once you've programmed with monads for a while, it's pretty common to start defining parameterized families of monads -- e.g., we might define a family of type constructors for IO, in which the type additionally tracks which files the computation reads from and writes to. This is a very convenient programming pattern, but its theory is honestly a little sketchy: on what basis do we conclude that the indices we define actually track what we intend them to? And furthermore, why can we believe that (say) the monadic equational laws still apply? That's the question Atkey lays out a nice solution to. He gives a nice categorical semantics for indexed, effectful computations, and then cooks up lambda calculi whose equational theories correspond to the equations his semantics justifies (a small Haskell sketch of the pattern follows this post).

The application to delimited continuations is quite nice, and the type theories can also give a little insight into the basics of how something like Hoare Type Theory works (which uses parameterized monads, with a very sophisticated language of parameters).

On a slightly tangential note, this also raises a methodological point in my mind. Over the last n years, we've seen many people identify certain type constructors whose usage is pervasive and greatly simplified by some syntactic extensions -- monads, comonads, applicative functors, arrows, and so on. It isn't credible to suggest that we have exhausted the list of interesting types, and so together they constitute a good argument for some kind of language extension mechanism, such as macros. However, all these examples also raise the bar for when a macro is a good idea, because what makes them compelling is precisely that the right syntax yields an interesting and pretty equational theory in the extended language.

By neelk at 2009-02-11 21:40 | Category Theory | Lambda Calculus | Semantics | Type Theory | 16 comments | other blogs | 15810 reads
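To make the pattern concrete, here is a minimal Haskell sketch of a parameterized monad, with the indexed state monad as the standard instance. The class and names (PMonad, preturn, pbind, IState) are this sketch's own and needn't match Atkey's notation.

{-# LANGUAGE KindSignatures #-}

-- A parameterized monad: return stays at one index, while bind
-- composes an i-to-j computation with a j-to-k one, yielding i-to-k.
class PMonad (m :: * -> * -> * -> *) where
  preturn :: a -> m i i a
  pbind   :: m i j a -> (a -> m j k b) -> m i k b

-- The indexed state monad: the indices are the types of the state
-- before and after the computation runs.
newtype IState i j a = IState { runIState :: i -> (a, j) }

instance PMonad IState where
  preturn a = IState (\s -> (a, s))
  pbind m f = IState (\s0 -> let (a, s1) = runIState m s0
                             in runIState (f a) s1)

iget :: IState s s s               -- read the state, leaving its type alone
iget = IState (\s -> (s, s))

iput :: t -> IState s t ()         -- replace the state, *changing* its type
iput t = IState (\_ -> ((), t))

For example, runIState (pbind iget (\n -> iput (show n))) 42 evaluates to ((), "42"): the indices record, in the types, that the state went from Int to String -- the same kind of bookkeeping the file-tracking IO family above would do with read/write sets instead of state types.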
Verifying Compiler Transformations for Concurrent Programs
Verifying Compiler Transformations for Concurrent Programs. Sebastian Burckhardt, Madanlal Musuvathi, Vasu Singh.
Compilers transform programs, either to optimize performance or to translate language-level constructs into hardware primitives. For concurrent programs, ensuring that a transformation preserves the semantics of the input program can be challenging. In particular, the emitted code must correctly emulate the semantics of the language-level memory model when running on hardware with a relaxed memory model. In this paper, we present a novel proof methodology for proving the soundness of compiler transformations for concurrent programs. Our methodology is based on a new formalization of memory models as dynamic rewrite rules on event streams. We implement our proof methodology in a first-of-its-kind semi-automated tool called Traver to verify or falsify compiler transformations. Using Traver, we prove or refute the soundness of several commonly used compiler transformations for various memory models. In this process, we find subtle bugs in the CLR JIT compiler and in the JSR-133 Java JIT compiler recommendations.

The goal is to reason about the effects that different memory models may have on the validity of transformations. Program execution is modeled as an event stream, with the memory model being able to alter the event stream by swapping or eliminating events. Each concurrent execution thread produces a separate event stream, and the event stream produced by the execution of the whole program is the (possibly altered) result of merging the event streams of its components. The validity of a transformation can thus be proved relative to a specific memory model, i.e., a set of stream rewrite rules (a toy sketch follows the post).

Traver lives here.

By Ehud Lamm at 2009-01-10 06:45 | Implementation | Parallel/Distributed | Semantics | 2 comments | other blogs | 11194 reads
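As a toy rendering of the "memory models as dynamic rewrite rules on event streams" idea -- the datatypes and the single rule below are assumptions of this sketch, not Traver's actual definitions:

type Addr = Int
type Val  = Int

-- A thread's execution is a stream of memory events.
data Event = Load Addr | Store Addr Val
  deriving (Eq, Show)

-- One simplified relaxed-memory rule: a later load may be swapped
-- before an earlier store to a *different* address.
swapStoreLoad :: [Event] -> [[Event]]
swapStoreLoad (Store a v : Load b : rest)
  | a /= b = [Load b : Store a v : rest]
swapStoreLoad _ = []

-- All streams reachable by applying the rule once, anywhere.
stepAnywhere :: [Event] -> [[Event]]
stepAnywhere []          = []
stepAnywhere es@(e:rest) = swapStoreLoad es ++ map (e :) (stepAnywhere rest)

For instance, stepAnywhere [Store 0 1, Load 1, Load 0] admits exactly one reordered stream, [Load 1, Store 0 1, Load 0]; a transformation is then sound relative to a model when every stream the transformed program can exhibit, after closing under such rules and merging per-thread streams, was already admitted by the source program.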
A Framework for Comparing Models of Computation
A Framework for Comparing Models of Computation by Edward A. Lee and Alberto Sangiovanni-Vincentelli, 1998.
We give a denotational framework (a "meta model") within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.

The generality of the approach looks very impressive (the core vocabulary is sketched below). Can anyone share first-hand experience with this framework? It would be great to see E compared to Oz!

By Andris Birkmanis at 2008-09-11 15:02 | Parallel/Distributed | Semantics | 5 comments | other blogs | 8699 reads
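A minimal Haskell rendering of the framework's core vocabulary -- events as value-tag pairs, signals as collections of events, synchrony as agreement on tags. The concrete encoding is this sketch's assumption, not the paper's:

import Data.List (sort)

type Tag = Int                 -- a totally ordered tag set, i.e. a timed model

-- Each event is a value-tag pair.
data Event v = Event { tag :: Tag, value :: v }
  deriving Show

-- A signal is a collection of events; processes interact through signals.
type Signal v = [Event v]

-- Synchronous events share the same tag.
synchronousEvents :: Event a -> Event b -> Bool
synchronousEvents e1 e2 = tag e1 == tag e2

-- Synchronous signals contain events with the same set of tags.
synchronousSignals :: Signal a -> Signal b -> Bool
synchronousSignals s1 s2 = sort (map tag s1) == sort (map tag s2)

Swapping the Tag type for one that is only partially ordered is what moves you from the timed models toward the untimed ones the paper compares.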
Compiler Validation through Program Analysis

Anna Zaks and Amir Pnueli (2008). CoVaC: Compiler Validation by Program Analysis of the Cross-Product. In Proc. 15th Symp. Formal Methods. From the introduction:
UpgradeJ: Incremental Typechecking for Class Upgrades

UpgradeJ: Incremental Typechecking for Class Upgrades, Gavin Bierman, Matthew Parkinson and James Noble.
There has been an energetic discussion of API evolution in the forum, so when I saw this paper I thought it might be of interest to LtU readers.

By neelk at 2008-08-31 12:55 | OOP | Semantics | Software Engineering | 2 comments | other blogs | 7089 reads
Revisiting Coroutines

Revisiting Coroutines, by Ana Lucia de Moura and Roberto Ierusalimschy:
Coroutines seem to get fairly short shrift in the literature, and they have only been discussed on LtU a couple of times. Given that coroutines have such a straightforward mapping to hardware, I hope they get more attention.

Coroutines show up in many different places. For instance, the inter-process communication (IPC) facilities of microkernels like EROS are a faithful implementation of asymmetric coroutines, with an important difference: yield and resume must each take an explicit coroutine argument naming the coroutine to yield to or resume, respectively. If the coroutine to yield to is left implicit, as it is in most treatments I've seen, then coroutines become less composable, since yield returns control to the innermost resume which, given abstract types, might be the wrong one. This problem is discussed in Section 5.6, "Avoiding Interference Between Control Actions". The paper recommends tagging coroutines to match up resume/yield pairs, but the EROS IPC system provides a more direct encoding via a "resume" capability, which is a single-use coroutine used to return control directly to a client. Each subsequent invocation of the object synthesizes a new resume capability.

Taking this to the extreme implies that yield and resume can be unified into a single "invoke" operation which accepts a coroutine argument to be used in a subsequent invoke operation. Indeed, these are "symmetric coroutines". This paper suggests that symmetric coroutines are harder to understand due to the actors/CPS-like nature of the control flow. (A small sketch of the asymmetric interface appears after the next post.)

The Disciplined Disciple Compiler

Disciple is an explicitly lazy dialect of Haskell. The wiki page has more information; unfortunately, there's no paper yet summarizing the ideas and results. Their effect system is quite interesting, and some of the ideas recently discussed here are implemented in Disciple.

via Haskell Cafe

By Daniel Yokomizo at 2008-03-20 14:19 | Functional | Semantics | Type Theory | 10 comments | other blogs | 13611 reads
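Returning to Revisiting Coroutines above, here is a minimal resumption-style sketch of asymmetric coroutines in Haskell. The encoding and names are this sketch's assumptions (the paper's treatment is in Lua); the point is that the caller holds the suspended continuation explicitly, so control returns to whoever invokes it -- the composability property the post attributes to EROS's single-use resume capabilities.

-- A suspended coroutine either yields a value along with the
-- continuation to resume it, or finishes with a result.
data Status i o r
  = Yielded o (i -> Status i o r)
  | Done r

-- A coroutine is started by supplying its first input.
type Coroutine i o r = i -> Status i o r

-- Example: a counter that yields successive values and is resumed
-- with the increment to apply next.
counter :: Int -> Coroutine Int Int r
counter n step = Yielded n (counter (n + step))

-- A driver that always resumes with increment 1. Since it names the
-- continuation it resumes, there is no ambient "innermost resume" to
-- hand control back to by accident.
drive :: Coroutine Int Int r -> [Int]
drive co = go (co 1)
  where
    go (Yielded o k) = o : go (k 1)
    go (Done _)      = []

Here take 5 (drive (counter 0)) evaluates to [0,1,2,3,4].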
Eriskay: a Programming Language Based on Game Semantics

Eriskay: a Programming Language Based on Game Semantics. John Longley and Nicholas Wolverson. GaLoP 2008.
It's always interesting to see a new programming language strongly based on some mathematical formalism, because a language gives you a concrete example to match the abstract semantic definitions to, and game semantics is something that I've been curious about for a while.

One particularly interesting feature is that the core language has a restricted model of the heap, which controls the use of higher-order store in such a way that cycles are prohibited. This is enforced with a notion called "argument safety", which essentially prohibits storing values of higher type into fields that come from "outside" the object (the hazard this avoids is sketched below). This is somewhat reminiscent of the ownership disciplines found in OO verification systems like Boogie, which enforce a tree structure on the ownership hierarchy. It would be very interesting to find out whether this resemblance is a coincidence or not. (Samson Abramsky has some lecture notes on game semantics for the very curious.)
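To see why higher-order store breeds heap cycles in the first place, here is the classic Landin's-knot construction in Haskell -- an illustration of the hazard that argument safety rules out, not Eriskay's own formalism:

import Data.IORef

-- Tie recursion through the store: the IORef ends up holding a
-- function whose closure captures that very IORef, so the heap now
-- contains a cycle through a value of higher type.
knot :: IO (Int -> IO Int)
knot = do
  ref <- newIORef (\_ -> return 0)      -- placeholder contents
  let fact n = if n <= 1
                 then return 1
                 else do f <- readIORef ref   -- read "itself" back out
                         r <- f (n - 1)
                         return (n * r)
  writeIORef ref fact                   -- create the cycle
  return fact

Running knot >>= \f -> f 5 yields 120, with the recursion flowing entirely through the store; a discipline that forbids storing higher-typed values into externally supplied fields makes such knots unrepresentable.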
Foundations for Structured Programming with GADTs

Foundations for Structured Programming with GADTs, Patricia Johann and Neil Ghani. POPL 2008.

I found this to be a really interesting paper, because the work had the opposite effect on my opinions from the one I expected. Usually, when I see a paper give a really clean semantics to an idea, I end up convinced that the idea is actually a good one. However, Johann and Ghani give such an elegant treatment of GADTs that it made me think perhaps Haskell-style GADTs should not be added as-is to new programming languages! This is because of the way they give semantics to GADTs as functors. (For readers who haven't used the feature, a standard example is sketched below.)
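The canonical GADT example, as a refresher -- this is not taken from the paper:

{-# LANGUAGE GADTs #-}

-- An expression language whose type index makes ill-typed object
-- terms unrepresentable, so the evaluator needs no runtime tag checks.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Pattern matching refines the type index in each branch.
eval :: Expr a -> a
eval (IntE n)    = n
eval (BoolE b)   = b
eval (Add e1 e2) = eval e1 + eval e2
eval (If c t e)  = if eval c then eval t else eval e

It is this interplay between term-level matching and type-level refinement that any semantics of GADTs has to account for.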
When Is A Functional Program Not A Functional Program?

When Is A Functional Program Not A Functional Program?, John Longley. ICFP 1999.

I like this paper because it shows how some of the most abstract bits of formal logic (realizability models of higher-order logic) suggest actual programs you can write. As a barely-related aside, even a brief look into this realizability stuff also taught me a good deal of humility -- for example, it seems that at higher types, what you can represent mathematically depends on the specific notion of computation you're using. So the mathematical consequences of the Church-Turing thesis are rather more subtle than we might initially expect. (Andrej Bauer discusses a related point in more detail in this blog post.)