Theory

Process calculi for transactions

Transactions are a hot topic in programming languages, especially with some exciting recent work on providing language support for STM ("Software Transactional Memory"). A new paper, A Concurrent Calculus with Atomic Transactions by Acciai, Boreale & Dal Zilio, presents an extension of CCS, which they call ATCCS ("Atomic Transactions CCS"), with support for the primary operations of STM.

I was not aware of work on modelling transactions in process calculi, but the bibliography cites six such works: five from the last three years, and a Montanari & co. paper from 1990:

  • 6. L. Bocchi, C. Laneve and G. Zavattaro. A Calculus for Long Running Transactions. In Proc. of FMOODS, LNCS 2884, 2003.

  • 8. R. Bruni, H.C. Melgratti and U. Montanari. Nested Commits for Mobile Calculi: extending Join. In Proc. of IFIP TCS, 563-576, 2004.

  • 9. R. Bruni, H.C. Melgratti and U. Montanari. Theoretical Foundations for Compensations in Flow Composition Languages. In Proc. of POPL, ACM Press, 209-220, 2005.

  • 15. V. Danos and J. Krivine. Reversible Communicating Systems. In Proc. of CONCUR, LNCS 3170, 2004.

  • 16. V. Danos and J. Krivine. Transactions in RCCS. In Proc. of CONCUR, LNCS 3653, 2005.

  • 18. R. Gorrieri, S. Marchetti and U. Montanari. ACCS: Atomic Actions for CCS. Theoretical Computer Science, 72(2-3):203-223, 1990.

I haven't read these papers, but judging from the summary in the concluding section of the ABZ paper, there seem to be some interesting ideas floating about here. It will repay a closer look, I think...

Proofs are Programs: 19th Century Logic and 21st Century Computing

Proofs are Programs: 19th Century Logic and 21st Century Computing

As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century.

This paper by Philip Wadler appeared as a Dr. Dobb's article in 2000 and has a matching Netcast interview.

This nifty paper starts with Frege's Begriffsschrift in 1879, moves on to Gentzen's sequent calculus and Church's untyped and then typed lambda calculus, and brings it all together to describe the Curry-Howard correspondence. For the grand finale, Wadler connects this to theorem provers and proof-carrying code and gives commercial examples where it all pays off.
This is an enjoyable story and a fun introduction to type theory and the Curry-Howard correspondence.
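
For readers meeting the correspondence for the first time, the slogan "propositions as types, proofs as programs" is already visible in tiny examples. Here is a Haskell rendering of my own, not taken from the paper:

    -- A proof of A /\ B -> A is a program that extracts the first component
    -- of a pair; a proof of A -> (B -> A) is the K combinator.
    andElim1 :: (a, b) -> a
    andElim1 (x, _) = x

    konst :: a -> b -> a
    konst x _ = x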

For more of Wadler's writings along these lines, check out his History of Logic and Programming Languages paper collection.

Edit: fixed the Dr. Dobb's article link.

Transactional Memory with data invariants (draft sequel to the STM-Haskell paper)

Transactional memory with data invariants
From the abstract:
This paper introduces a mechanism for asserting invariants that are maintained by a program that uses atomic memory transactions.
The idea is simple: the programmer writes always E where E is an expression that should be preserved by every atomic update for the remainder of the program's execution. We have extended STM Haskell to dynamically check always statements atomically with the user's updates: the result is that we can identify precisely which update is the first one to break an invariant.
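
For flavour, here is a minimal sketch of how such an invariant might look against GHC's STM interface, assuming an always-style primitive of type STM Bool -> STM () as described in the abstract; the names and the account example are mine, not the paper's.

    import Control.Concurrent.STM

    -- Assumed invariant primitive, per the abstract (not defined here):
    --   always :: STM Bool -> STM ()
    -- It registers a condition that is re-checked atomically on every commit.

    -- A bank account whose balance must never go negative.
    newAccount :: Int -> STM (TVar Int)
    newAccount initial = do
      acct <- newTVar initial
      always (fmap (>= 0) (readTVar acct))  -- registered once, checked thereafter
      return acct

    withdraw :: TVar Int -> Int -> STM ()
    withdraw acct n = do
      bal <- readTVar acct
      writeTVar acct (bal - n)  -- a withdrawal that overdraws the account is
                                -- the precise transaction that gets blamed

The payoff the abstract promises is blame assignment: the first update that violates a registered invariant is the one reported.
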
This seems connected to Typed Contracts for Functional Programming by Ralf Hinze, Johan Jeuring, and Andres Löh (noticed on the blog of Dominic Fox).
Maybe design-by-contract is this year's hot subject?

I haven't gotten far enough into either of these papers to have much of an opinion, but the motivational paragraph at the beginning of the Typed Contracts paper grabbed my attention instantly, and I know I want more STM in my applications, so I look forward to a few enjoyable hours.

Towards Applicative Relational Programming

This 10-page 1992 article, Towards Applicative Relational Programming by Ibrahim & van Emden, has just appeared on arXiv; it asks how functional and relational programming might best be combined.


This is fairly well-trodden ground now, with approaches such as Miller's lambda-Prolog and Saraswat's constraint-lambda calculus well established, but this paper offers a rather different approach, based on the Henkin-Monk-Tarski theory of cylindric algebras, which were devised as a means of formalising predicate logic in equational logic. I'm familiar with them in the context of modal logic, since they offer a way of handling quantifiers by means of modalities, where [] is used to express forall and <> is used to express exists.
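
Concretely, and as a standard reading rather than anything specific to this paper: each cylindrification c_x behaves like an S5 diamond for the variable x, and the universal quantifier arrives as its dual:

    \Diamond_x\,\varphi \;\equiv\; c_x\,\varphi \;\equiv\; \exists x.\,\varphi
    \qquad
    \Box_x\,\varphi \;\equiv\; \neg\, c_x\, \neg\,\varphi \;\equiv\; \forall x.\,\varphi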


The treatment is nice, and is recommended to LtUers interested in having an arsenal of techniques for bridging the declarative divide.

A Tail-Recursive Machine with Stack Inspection

A Tail-Recursive Machine with Stack Inspection. John Clements and Matthias Felleisen, TOPLAS 2004.

Security folklore holds that a security mechanism based on stack inspection is incompatible with a global tail call optimization policy... In this article, we prove this widely held belief wrong. ... Our machine is surprisingly simple and suggests that tail calls are as easy to implement in a security setting as they are in a conventional one.

I don't believe we've discussed this paper before, although it was mentioned in this thread. Tail calls have been a topic of discussion here several times. [1][2][3]
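
To get a feel for why the folklore seemed plausible and how it can be defused, here is a toy model in Haskell, entirely my own illustration rather than the paper's CM machine: each retained frame carries a security "mark", a tail call overwrites the current mark instead of pushing a new one, and permission checks walk only the bounded list of marks.

    import qualified Data.Set as Set

    type Perm = String

    data Mark = Mark
      { staticPerms  :: Set.Set Perm  -- what this frame's code is allowed to do
      , enabledPerms :: Set.Set Perm  -- what it has explicitly enabled
      }

    type Marks = [Mark]               -- youngest frame first

    -- Ordinary call: push a mark for the callee.
    call :: Mark -> Marks -> Marks
    call m ms = m : ms

    -- Tail call: replace the current mark, so the mark list never grows.
    tailCall :: Mark -> Marks -> Marks
    tailCall m (_ : ms) = m : ms
    tailCall m []       = [m]

    -- Classic stack inspection: fail at the first frame whose code lacks the
    -- permission, succeed at the first frame that explicitly enabled it.
    checkPermission :: Perm -> Marks -> Bool
    checkPermission _ [] = True       -- ran off the (trusted) bottom
    checkPermission p (Mark sp en : rest)
      | p `Set.notMember` sp = False
      | p `Set.member` en    = True
      | otherwise            = checkPermission p rest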

Rho calculus

The Rho-calculus is a calculus of pattern matching that embeds the lambda-calculus in a very simple manner, and also naturally accommodates a number of extensions of the lambda-calculus. It encodes the results of pattern matching in a manner that ensures confluence of the whole calculus.
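
Roughly, and glossing the early Cirstea-Kirchner presentations from memory: abstraction is a rewrite rule, application fires the rule, and the results of matching are collected in a structure (originally a set), so that match failure or multiple matches cannot break confluence. For a variable pattern this degenerates to ordinary beta reduction:

    \lambda x.\, t \;\triangleq\; x \to t
    \qquad
    (x \to t)\, u \;\to_\rho\; \{\, t[x := u] \,\}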

It was proposed by Horatiu Cirstea and Claude Kirchner in 1998, and Matching Power, a 2001 RTA paper, is maybe the nicest introduction to the calculus. Kirchner's research group maintains a list of papers.

Postscript: there was an LtU classic story on the rho-calculus as well...

Semantic Distance: NLP Not a Resource Sink

Following the story on Mind Mappers and other RDF comments of late, I thought this NLP slide show (PDF) should get a story. Dr. Adrian Walker offers an interesting perspective in a friendly crayon-colored format, including a critique of RDF. The source site, Internet Business Logic, has other offerings, including an online demo.

Module Mania: A Type-Safe, Separately Compiled, Extensible Interpreter

Module Mania: A Type-Safe, Separately Compiled, Extensible Interpreter

To illustrate the utility of a powerful modules language, this paper presents the embedded interpreter Lua-ML. The interpreter combines extensibility and separate compilation without compromising type safety. Its types are extended by applying a sum constructor to built-in types and to extensions, then tying a recursive knot using a two-level type; the sum constructor is written using an ML functor. The initial basis is extended by composing initialization functions from individual extensions, also using ML functors.

This is an excellent example of how the ML module language doesn't merely provide encapsulation but also strictly adds expressive power. It also demonstrates how a dynamic language (Lua) can be embedded in the statically-typed context of ML. Finally, it demonstrates that none of this need come at the expense of separate compilation or extensibility. Norman Ramsey's work is always highly recommended.
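
The two-level-type trick is easier to see in code than in prose. Here is a small sketch in Haskell rather than the paper's OCaml, with type parameters standing in for the functor arguments; the names are illustrative and not Lua-ML's own.

    -- The value type is split in two levels: structure functors whose
    -- parameter 'v' stands for the not-yet-tied value type, plus a newtype
    -- that ties the recursive knot once, over the whole sum.

    -- Built-in Lua-like values.
    data Builtin v = Nil | Number Double | Str String | Table [(v, v)]

    -- A hypothetical extension contributes its own constructors in the same style.
    data UserData v = FileHandle String | Vector [Double]

    -- The sum constructor combines built-ins with an extension ...
    data Sum f g v = L (f v) | R (g v)

    -- ... and only then is the knot tied, so extensions are mixed in before
    -- the value type becomes recursive.
    newtype Value = Value (Sum Builtin UserData Value)

    -- A table mapping a built-in string to an extension value.
    example :: Value
    example = Value (L (Table
      [ (Value (L (Str "v")), Value (R (Vector [1, 2, 3]))) ]))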

ClassicJava in PLT Redex

Classic Java

This collection is an implementation of (most of) ClassicJava, as defined in "A Programmer's Reduction Semantics for Classes and Mixins," by Matthew Flatt, Shriram Krishnamurthi, and Matthias Felleisen; in _Formal Syntax and Semantics of Java_, Springer-Verlag LNCS 1523, pp. 241-269, 1999. A tech-report version of the paper is also available at <http://www.ccs.neu.edu/scheme/pubs/#tr97-293>. The implementation is written in PLT Redex, also available through PLaneT. Please consult that package's documentation for further details.

This might be interesting to folks curious about how to formalize a real language, or about how PLT Redex works in practice.
