Theory

Interactive Tutorial of the Sequent Calculus

Interactive Tutorial of the Sequent Calculus by Edward Z. Yang.

This interactive tutorial will teach you how to use the sequent calculus, a simple set of rules which you can use to show the truth of statements in first-order logic. It is geared towards anyone with some background in writing software and a knowledge of basic Boolean logic. ...

Proving theorems is not for the mathematicians anymore: with theorem provers, it's now a job for the hacker. — Martin Rinard ...

A common complaint with formal systems like the sequent calculus is: "I clicked around and managed to prove this, but I'm not really sure what happened!" This is what Martin means by the hacker mentality: it is now possible for people to prove things even when they don't know what they're doing. The computer will ensure that, in the end, they will have gotten it right.
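
For a taste of what such rules look like, here are the two rules governing conjunction in one common formulation (in LaTeX notation; Γ is a list of hypotheses, premises sit above the line, and the conclusion sits below it):

    \[
    \frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}\ (\wedge R)
    \qquad
    \frac{\Gamma, A, B \vdash C}{\Gamma, A \wedge B \vdash C}\ (\wedge L)
    \]

Proving a sequent in the tutorial amounts to clicking such rules until every leaf of the proof tree is an axiom.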

The tool behind this nice tutorial is Logitext.

Seven Myths of Formal Methods Revisited

Software Engineering with Formal Methods: The Development of a Storm Surge Barrier Control System - Seven Myths of Formal Methods Revisited (2001), by Jan Tretmans, Klaas Wijbrans, Michel Chaudron:

Bos is the software system which controls and operates the storm surge barrier in the Nieuwe Waterweg near Rotterdam. It is a complex, safety-critical system of average size, which was developed by CMG Den Haag B.V., commissioned by Rijkswaterstaat (RWS) – the Dutch Ministry of Transport, Public Works and Water Management. It was completed in October 1998 on time and within budget.

CMG used formal methods in the development of the Bos software. This paper discusses the experiences obtained from their use. Some people claim that the use of formal methods helps in developing correct and reliable software, others claim that formal methods are useless and unworkable. Some of these claims have almost become myths. A number of these myths are described and discussed in a famous article: Seven Myths of Formal Methods [Hal90]. The experiences obtained from using formal methods for the development of Bos will be discussed on the basis of this article. We will discuss to what extent these myths are true for the Bos project.

The data for this survey were collected by means of interviews with software engineers working on the Bos project. These include the project manager, designers, implementers and testers, people who participated from the beginning in 1995 until the end in 1998 as well as engineers who only participated in the implementation phase, and engineers with and without previous, large-scale software engineering experience.

This paper concentrates on the experiences of the software engineers with formal methods. These experiences, placed in the context of the seven myths, are described in section 3. This paper does not discuss technical details about the particular formal methods used or the way they were used; see [Kar97, Kar98] for these aspects. Moreover, formal methods were only one technique used in the development of Bos. The overall engineering approach and the way different methods and techniques were combined to assure the required safety-critical quality are described in [WBG98, WB98]. Testing in Bos is described in more detail in [GWT98], while [CTW99] will give a more systematic analysis of the results of the interviews with the developers.

Discussion of formal methods and verification has come up a few times here on LtU. In line with the recent discussions on the need for more empirical data in our field, this was an interesting case study on the use of formal methods. The seven myths of formal methods are reviewed in light of a real project:

  1. Myth 1: Formal methods can guarantee that software is perfect
  2. Myth 2: Formal methods are all about program proving
  3. Myth 3: Formal methods are only useful for safety-critical systems
  4. Myth 4: Formal methods require highly trained mathematicians
  5. Myth 5: Formal methods increase the cost of development
  6. Myth 6: Formal methods are unacceptable to users
  7. Myth 7: Formal methods are not used on real, large-scale software

Extensible Programming with First-Class Cases

Extensible Programming with First-Class Cases, by Matthias Blume, Umut A. Acar, and Wonseok Chae:

We present language mechanisms for polymorphic, extensible records and their exact dual, polymorphic sums with extensible first-class cases. These features make it possible to easily extend existing code with new cases. In fact, such extensions do not require any changes to code that adheres to a particular programming style. Using that style, individual extensions can be written independently and later be composed to form larger components. These language mechanisms provide a solution to the expression problem.

We study the proposed mechanisms in the context of an implicitly typed, purely functional language PolyR. We give a type system for the language and provide rules for a 2-phase transformation: first into an explicitly typed λ-calculus with record polymorphism, and finally to efficient index-passing code. The first phase eliminates sums and cases by taking advantage of the duality with records.

We implement a version of PolyR extended with imperative features and pattern matching—we call this language MLPolyR. Programs in MLPolyR require no type annotations—the implementation employs a reconstruction algorithm to infer all types. The compiler generates machine code (currently for PowerPC) and optimizes the representation of sums by eliminating closures generated by the dual construction.

This is an elegant solution to the expression problem for languages with pattern matching. This paper was posted twice in LtU comments, but it definitely deserves its own story. Previous solutions to the expression problem, like Garrigue's use of recursion and polymorphic variants, are rather more involved, because those languages lack the support for extensible records that makes this solution so elegant.

Extensible records and first-class cases unify object-oriented and functional paradigms on a deeper level, since they enable first-class messages to be directly encoded. Add a sensible system for dynamics, and I argue you have most of the power people claim of dynamic languages without sacrificing the safety of static typing.
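
For comparison, here is how the same extensibility goal is usually reached in Haskell, which lacks extensible records: the "tagless-final" encoding, where the constructors of a datatype become a type class and extension is a new subclass. This is a sketch of an alternative technique, not the MLPolyR mechanism:

    -- The base "signature": literals and addition, as a type class.
    class ExprSym r where
      lit :: Int -> r
      add :: r -> r -> r

    -- One interpretation: evaluation to an Int.
    newtype Eval = Eval { runEval :: Int }

    instance ExprSym Eval where
      lit n                 = Eval n
      add (Eval a) (Eval b) = Eval (a + b)

    -- Extending the language with negation touches no existing code:
    class ExprSym r => NegSym r where
      neg :: r -> r

    instance NegSym Eval where
      neg (Eval a) = Eval (negate a)

    -- A term usable under any interpretation supporting both signatures;
    -- runEval example == -1.
    example :: NegSym r => r
    example = add (lit 1) (neg (lit 2))

MLPolyR's extensible sums and first-class cases reach the same open-endedness directly at the value level, without this type-class machinery.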

Foundations of Inference

Foundations of Inference, Kevin H. Knuth, John Skilling, arXiv:1008.4831v1 [math.PR]

We present a foundation for inference that unites and significantly extends the approaches of Kolmogorov and Cox. Our approach is based on quantifying finite lattices of logical statements in a way that satisfies general lattice symmetries. With other applications in mind, our derivations assume minimal symmetries, relying on neither complementarity nor continuity or differentiability. Each relevant symmetry corresponds to an axiom of quantification, and these axioms are used to derive a unique set of rules governing quantification of the lattice. These rules form the familiar probability calculus. We also derive a unique quantification of divergence and information. Taken together these results form a simple and clear foundation for the quantification of inference.

For those of us who find ourselves compelled by the view of probability as a generalization of logic that is isomorphic to (algorithmic, as if there were any other kind) information theory, here is some recent i-dotting and t-crossing. The connection to Curry-Howard or, if you prefer, Krivine's classical realizability is something I hope to explore in the near future.
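
Concretely, the "unique set of rules" that the lattice symmetries force is the familiar sum and product rules of probability (written here with an explicit context z):

    \[
    p(x \lor y \mid z) = p(x \mid z) + p(y \mid z) - p(x \land y \mid z)
    \qquad
    p(x \land y \mid z) = p(x \mid z)\, p(y \mid x \land z)
    \]

The point of the paper is that these rules are not postulated but derived, from symmetries weaker than the assumptions Kolmogorov or Cox needed.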

A Semantic Model for Graphical User Interfaces

Nick Benton and Neel Krishnaswami, ICFP'11, A Semantic Model for Graphical User Interfaces:

We give a denotational model for graphical user interface (GUI) programming using the Cartesian closed category of ultrametric spaces. [..] We capture the arbitrariness of user input [..] [with a nondeterminism] “powerspace” monad.

Algebras for the powerspace monad yield a model of intuitionistic linear logic, which we exploit in the definition of a mixed linear/non-linear domain-specific language for writing GUI programs. The non-linear part of the language is used for writing reactive stream-processing functions whilst the linear sublanguage naturally captures the generativity and usage constraints on the various linear objects in GUIs, such as the elements of a DOM or scene graph.

We have implemented this DSL as an extension to OCaml, and give examples demonstrating that programs in this style can be short and readable.

This is an application of their (more squiggly) LICS'11 submission, Ultrametric Semantics of Reactive Programs. In both cases, I find it appealing that the semantic model led to a type system and a language that would otherwise have been tricky to find.
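
For intuition, the non-linear reactive stream-processing fragment can be pictured as causal stream transformers. A minimal Haskell rendering of that shape (an illustration of the idea only, not the authors' OCaml DSL) is:

    -- A causal stream function: one input per tick, one output per tick,
    -- with the residual transformer carrying the state forward.
    newtype SF a b = SF { step :: a -> (b, SF a b) }

    -- Example: a running sum of all inputs seen so far.
    sumSF :: Int -> SF Int Int
    sumSF acc = SF $ \x -> let acc' = acc + x in (acc', sumSF acc')

    -- Run a stream function over a finite list of inputs:
    -- runSF (sumSF 0) [1,2,3] == [1,3,6].
    runSF :: SF a b -> [a] -> [b]
    runSF _  []     = []
    runSF sf (x:xs) = let (y, sf') = step sf x in y : runSF sf' xs

The linear sublanguage is what keeps such transformers from duplicating or discarding GUI objects; nothing in this untyped sketch enforces that.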

A Monadic Framework for Delimited Continuations

A Monadic Framework for Delimited Continuations (PDF), R. Kent Dybvig, Simon Peyton Jones, Amr Sabry. TR, June 2005.

Delimited continuations are more expressive than traditional abortive continuations and they apparently require a framework beyond traditional continuation-passing style (CPS). We show that this is not the case: standard CPS is sufficient to explain the common control operators for delimited continuations. We demonstrate this fact and present an implementation as a Scheme library. We then investigate a typed account of delimited continuations that makes explicit where control effects can occur. This results in a monadic framework for typed and encapsulated delimited continuations which we design and implement as a Haskell library.

A fascinating paper about delimited control. I'm very much a newbie to delimited control, but this paper has been enormously helpful - despite the title. ;)

The basic idea of the paper is to represent the execution context as a sequence containing prompts (control delimiters) and the (partial) continuations between prompts. This model is formalized with an operational semantics, which was insightful even though it's the first operational semantics I've studied.

The authors then present an implementation of the model in terms of call/cc in Scheme. The basic idea here is to always perform user code after aborting to a context near the bottom of the stack, just above a call to an underflow function - this means that even though we use undelimited call/cc, we only ever capture our (small, partial) execution context. The whole execution context (the "metacontinuation") is maintained as a sequence data structure in a global variable (basically, a list containing prompts and Scheme continuations). The underflow function destructures the metacontinuation, and executes (returns to) the partial continuations stored in it. Pushing a prompt adds a delimiter to the metacontinuation, capturing a delimited continuation splits the metacontinuation at a delimiter, and composing a continuation appends to the metacontinuation.

I haven't even gotten to the later parts of the paper yet, but this model and the Scheme implementation alone is worth a look.
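
For readers who want a feel for the monadic half before reading it, the textbook continuation-monad encoding of shift/reset is short. This is the standard sketch, not the paper's library, which additionally tracks prompts in the types:

    -- The continuation monad: a computation is told what to do with its result.
    newtype Cont r a = Cont { runCont :: (a -> r) -> r }

    instance Functor (Cont r) where
      fmap f (Cont m) = Cont $ \k -> m (k . f)

    instance Applicative (Cont r) where
      pure a = Cont ($ a)
      Cont mf <*> Cont ma = Cont $ \k -> mf (\f -> ma (k . f))

    instance Monad (Cont r) where
      Cont m >>= f = Cont $ \k -> m (\a -> runCont (f a) k)

    -- reset delimits: continuations captured inside stop here.
    reset :: Cont a a -> Cont r a
    reset m = Cont $ \k -> k (runCont m id)

    -- shift captures the continuation up to the nearest reset.
    shift :: ((a -> r) -> Cont r r) -> Cont r a
    shift f = Cont $ \k -> runCont (f k) id

    -- Classic example: the captured continuation is (10 +), applied twice,
    -- so demo evaluates to 10 + (10 + 0) = 20.
    demo :: Int
    demo = runCont (reset body) id
      where
        body = do
          x <- shift (\k -> pure (k (k 0)))
          pure (10 + x)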

(The paper seems to be a reworked version of A Monadic Framework for Subcontinuations, discussed previously.)

Levy: a Toy Call-by-Push-Value Language

Andrej Bauer's blog contains the PL Zoo project. In particular, the Levy language, a toy implementation of Paul Levy's CBPV in OCaml.

If you're curious about CBPV, this implementation might be a nice accompaniment to the book, or simply a hands-on way to check it out.

It looks like an implementation of CBPV without sum and product types, with complex values, and without effects. I guess a more hands-on way to get to grips with CBPV would be to implement any of these missing features.
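
To give a flavor of what such an implementation pushes around, CBPV keeps values and computations in separate syntactic classes. A bare-bones Haskell rendering of that split (a hypothetical sketch, not the Levy implementation's actual AST):

    -- Values are inert data; computations are the things that run.
    data Value
      = Var String             -- variables stand for values
      | Thunk Comp             -- a suspended computation is a value

    data Comp
      = Force Value            -- run a thunk
      | Return Value           -- a computation that just produces a value
      | Bind Comp String Comp  -- "M to x. N": run M, bind its result, run N
      | Lam String Comp        -- functions take values, bodies are computations
      | App Comp Value         -- application pushes a value to a computation

Call-by-value and call-by-name then arise as two different translations into this one grammar, which is what makes CBPV a useful common substrate.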

The posts are three years old, but I've only just noticed them. The PL Zoo project was briefly mentioned here.

Imperative Programs as Proofs via Game Semantics

Imperative Programs as Proofs via Game Semantics, Martin Churchill, James Laird and Guy McCusker. To appear at LICS 2011.

Game semantics extends the Curry-Howard isomorphism to a three-way correspondence: proofs, programs, strategies. But the universe of strategies goes beyond intuitionistic logics and lambda calculus, to capture stateful programs. In this paper we describe a logical counterpart to this extension, in which proofs denote such strategies. We can embed intuitionistic first-order linear logic into this system, as well as an imperative total programming language. The logic makes explicit use of the fact that in the game semantics the exponential can be expressed as a final coalgebra. We establish a full completeness theorem for our logic, showing that every bounded strategy is the denotation of a proof.

This paper increases the importance of gaining a more-than-casual understanding of game semantics for me, since it combines two of my favorite things: polarized type theories and imperative higher-order programs.

Functor is to Lens as Applicative is to Biplate: Introducing Multiplate

Functor is to Lens as Applicative is to Biplate: Introducing Multiplate is an interesting paper by Russell O'Connor which shows that certain types are isomorphic to quantification over certain type classes. This isomorphism then naturally leads to a generalization of Uniplate/Compos into Multiplate, which allows for rather generic traversals.

The BiplateType datatype defined in the Uniplate library is similar to a lens (a.k.a. a functional reference). This paper proves that Biplate generalizes lens similarly to how applicative functor generalizes functor. This paper gives an alternative definition of BiplateType using a nested data type that better captures the invariants for Biplates than Uniplate's original definition. In previous work, van Laarhoven conjectures an isomorphism between this nested data type and a data type found in the Compos library. This paper completes his proof of the isomorphism by using free theorems for types polymorphic over applicative functors. This result means that, morally speaking, Compos and Uniplate use isomorphic representations for their core data type for generic traversals of simple recursive data types. For mutually recursive data types, Compos and Uniplate provide two different extensions to this common core functionality. Compos requires the user to rewrite a mutually recursive data type as a GADT, while Uniplate's Biplate class is less powerful but only requires multiparameter type classes and does not require rewriting one's data type. This paper proposes a third extension to support mutually recursive datatypes that is as powerful as Compos, as easy to use as Biplate, and more portable than both of them. This proposal, called Multiplate, only requires rank 3 polymorphism.
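
To make the idea concrete, the paper's central gadget is a "plate": a record with one applicative-valued function per type in a mutually recursive family. A stripped-down sketch, simplified from the multiplate package's actual API:

    -- A tiny mutually recursive family: expressions and declarations.
    data Expr = Var String | Add Expr Expr | Let Decl Expr
    data Decl = String := Expr

    -- A plate: one effectful transformation per type in the family.
    data Plate f = Plate
      { expr :: Expr -> f Expr
      , decl :: Decl -> f Decl
      }

    -- Apply a plate to the immediate children of each constructor;
    -- generic traversals are built by tying this back on itself.
    multiplate :: Applicative f => Plate f -> Plate f
    multiplate p = Plate e d
      where
        e (Var x)    = pure (Var x)
        e (Add a b)  = Add <$> expr p a <*> expr p b
        e (Let dl b) = Let <$> decl p dl <*> expr p b
        d (x := a)   = (x :=) <$> expr p a

Choosing different applicative functors f then yields queries, rewrites, and monadic traversals from this single definition.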

(Full disclosure: Russell is a post-doc at McMaster, working with me and Bill Farmer. While trying to figure out how to make certain repetitive traversals of MathScheme ASTs into a generic framework, he 'stumbled upon' multiplate, and then kept pursuing the ideas, with the multiplate package and this nice bit of theory as the end result.)

Tractatus Digito-Philosophicus

Tractatus Digito-Philosophicus, part of the project Wittgenstein for programmers by Harrison Ainsworth (whose blog is very much recommended to LtUers).

This is a somewhat odd venture: a translation of Wittgenstein's Tractatus into the domain of software development.

The software intellect – its basic conceptual forms – is rooted in the early 20th century, the 1910s, 1920s, 1930s. That is where the work of Church and Turing, lambda calculus and computability, comes from. And it is also the time of the Vienna Circle, logical positivism, and Wittgenstein's early work, the ‘Tractatus Logico-Philosophicus’.

One might notice one day that software seems pointedly related to its original philosophical contemporaries. It is fundamentally a logical construction. It is like a Wittgensteinian logical proposition, but instead of describing the world, software constructs the imagination. There is a clear isomorphism. All terms related to describing map to terms related to constructing, and similarly for world and imagination. It seems a simple transformation will take Wittgenstein to software.

So an interesting project emerges: translate the Tractatus into software terms! The result is sometimes obscure, but sometimes clearer than the original, and most is (still) quite odd and intriguing (which is perhaps the main virtue anyway) . . .

(So far it is only partial and unfinished.)
