Theory

A Dialogue on Infinity

A Dialogue on Infinity, between a mathematician and a philosopher. Alexandre Borovik and David Corfield.

A new blog... From the first post:

The project concentrates on one of the principal purposes of the Exploring the Infinite Program:

To understand the nature of and the role played by conceptualizations of infinity in mathematics.

It will be shaped as a dialogue between a mathematician (AB) and a philosopher (DC) and will address one of the central paradoxes of mathematics:

why are most uses of infinity in mathematics restricted to the recycling of a small number of “canonical” and ubiquitous structures?

...To put the study of infinity on a firm basis, we first have to discuss the issue of the identity and “sameness” of mathematical objects: infinity of what?

This is pretty far out for LtU, but I suspect it will interest some more philosophically inclined readers. They will look at a number of disciplines, including computer science.

(I feel that maybe even "Theory" doesn't quite fit this, so I am also filing it under "Fun".)

How to write your next POPL paper in Coq

If this sounds like a worthy goal, or if you are simply interested in the use of proof assistants for programming language research, you won't want to miss the upcoming tutorial.

OCaml Light: A Formal Semantics For a Substantial Subset of the Objective Caml Language

OCaml Light: a formal semantics for a substantial subset of the Objective Caml language.

OCaml light is a formal semantics for a substantial subset of the Objective Caml language. It is written in Ott, and it comprises a small-step operational semantics and a syntactic, non-algorithmic type system. A type soundness theorem has been proved and mechanized using the HOL-4 proof assistant, thereby ensuring that the proof is free from errors. To ensure that the operational semantics accurately models Objective Caml, an executable version of the semantics has been created (and proved equivalent in HOL to the original, relational version) and tested on a number of small test cases.

Note that while we have tried to make the semantics accurate, we are not part of the OCaml development team - this is in no sense a normative specification of the implemented language.

From a team including Peter Sewell (Acute, HashCaml, Ott).

I continue to believe that things are heating up nicely in mechanized metatheory, which, in the multicore/multiprocessor world in which we now live, is extremely good news.
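
For readers who haven't seen one written down, here is a minimal sketch of my own (not from the OCaml Light development) of what a small-step semantics looks like when rendered as an executable OCaml function, for a toy call-by-value lambda calculus rather than anything approaching Caml:

    (* Toy illustration only: a small-step, call-by-value evaluator. The
       OCaml Light semantics has this general shape, but covers a far
       richer language and is mechanized in HOL-4. *)
    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    (* Substitution; adequate here because we only substitute closed values. *)
    let rec subst x v = function
      | Var y -> if x = y then v else Var y
      | Lam (y, b) -> if x = y then Lam (y, b) else Lam (y, subst x v b)
      | App (t1, t2) -> App (subst x v t1, subst x v t2)

    let is_value = function Lam _ -> true | _ -> false

    (* One reduction step, or None if the term is a value or stuck. *)
    let rec step = function
      | App (Lam (x, b), v) when is_value v -> Some (subst x v b)  (* beta *)
      | App (t1, t2) when not (is_value t1) ->
          (match step t1 with Some t1' -> Some (App (t1', t2)) | None -> None)
      | App (v1, t2) ->
          (match step t2 with Some t2' -> Some (App (v1, t2')) | None -> None)
      | _ -> None

    (* Iterate step until no rule applies. *)
    let rec eval t = match step t with Some t' -> eval t' | None -> t

Testing such a function against the relational semantics on small programs is exactly the kind of sanity check the authors describe.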

On the origins of Bisimulation, Coinduction, and Fixed Points

Davide Sangiorgi, On the origins of Bisimulation, Coinduction, and Fixed Points.

The origins of bisimulation and bisimilarity are examined, in the three fields where they have been independently discovered: Computer Science, Philosophical Logic (precisely, Modal Logic), Set Theory.

Bisimulation and bisimilarity are coinductive notions, and as such are intimately related to fixed points, in particular greatest fixed points. Therefore also the appearance of coinduction and fixed points is discussed, though in this case only within Computer Science. The paper ends with some historical remarks on the main fixed-point theorems (such as Knaster-Tarski) that underpin the fixed-point theory presented.

There is a wealth of interesting information in this paper. Alas, it is not very easy to read, and the exposition could be improved. So this is not for beginners or outsiders, but if you are familiar with the topic, the historical discussion will be of interest.
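
To make the fixed-point connection concrete, here is a small OCaml illustration of my own (not from the paper): on a finite labelled transition system, bisimilarity is the greatest fixed point of a refinement operator, so it can be computed by starting from the full relation on states and shrinking it until it stabilizes.

    (* Illustration: bisimilarity on a finite LTS, computed as a greatest
       fixed point by iterated refinement of the full relation. *)
    type state = int
    type label = string
    type lts = (state * label * state) list  (* transition triples *)

    let succs (t : lts) s a =
      List.filter_map (fun (p, l, q) -> if p = s && l = a then Some q else None) t

    (* (s, s') survives one refinement step if every a-move of s is matched
       by an a-move of s', and vice versa, landing in a pair still in r. *)
    let ok t labels r (s, s') =
      List.for_all (fun a ->
        List.for_all (fun q ->
          List.exists (fun q' -> List.mem (q, q') r) (succs t s' a))
          (succs t s a)
        && List.for_all (fun q' ->
             List.exists (fun q -> List.mem (q, q') r) (succs t s a))
             (succs t s' a))
        labels

    let bisimilarity t labels states =
      let full =
        List.concat_map (fun s -> List.map (fun s' -> (s, s')) states) states in
      let rec refine r =
        let r' = List.filter (ok t labels r) r in
        if List.length r' = List.length r then r else refine r'
      in
      refine full

The iteration terminates because the relation strictly shrinks at each step until it stabilizes, and what it stabilizes at is the greatest bisimulation, i.e. bisimilarity.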

Lifting Abstract Interpreters to Quantified Logical Domains

Lifting Abstract Interpreters to Quantified Logical Domains. Sumit Gulwani, Bill McCloskey, Ashish Tiwari. July 2007.

Today, abstract interpretation is capable of inferring a wide variety of quantifier-free program invariants. In this paper, we describe a general technique for building powerful quantified abstract domains that leverage existing quantifier-free domains. For example, from a domain that abstracts facts like "a[1] = 0", we automatically construct a domain that can represent universally quantified facts like "Forall i: (0 <= i < n) => a[i] = 0". The principal challenge in building such a domain is that, while most domains supply over-approximations of operations like join, meet, and variable elimination, working with the guards of quantified facts requires under-approximation...

Using our generic construction, we build a number of abstract interpreters on top of domains for linear arithmetic, uninterpreted function symbols (used to model heap accesses), and pointer reachability. Our experiments on a variety of programs using arrays and pointers (including several sorting algorithms) demonstrate the feasibility of the approach on challenging examples.
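
A toy rendering of the key point, under simplifications of my own (guards are integer intervals, bodies are facts of the form a[i] = c; the code is not from the paper): joining two quantified facts calls for an under-approximation of the meet of the guards, because shrinking the guard weakens a universally quantified fact and therefore keeps the join sound, while the bodies are joined with the usual over-approximation.

    (* Quantified facts of the form  forall i: lo <= i < hi  =>  a[i] = c. *)
    type guard = { lo : int; hi : int }  (* i ranges over [lo, hi) *)
    type body = Eq of int | Top          (* a[i] = c, or no information *)
    type fact = { g : guard; b : body }

    (* For intervals the intersection is exact, so it also serves as the
       required under-approximation of the guards' meet. *)
    let meet_guard g1 g2 = { lo = max g1.lo g2.lo; hi = min g1.hi g2.hi }

    (* Over-approximating join of the quantifier-free bodies. *)
    let join_body b1 b2 =
      match b1, b2 with Eq c1, Eq c2 when c1 = c2 -> Eq c1 | _ -> Top

    (* Join of quantified facts: under-approximate the guards,
       over-approximate the bodies. *)
    let join f1 f2 = { g = meet_guard f1.g f2.g; b = join_body f1.b f2.b }

    (* Joining  forall i: 0 <= i < 10 => a[i] = 0  with
                forall i: 0 <= i < 11 => a[i] = 0  yields
                forall i: 0 <= i < 10 => a[i] = 0. *)
    let _ = join { g = { lo = 0; hi = 10 }; b = Eq 0 }
                 { g = { lo = 0; hi = 11 }; b = Eq 0 }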

Validity Invariants and Effects

Validity Invariants and Effects, Yi Lu, John Potter and Jingling Xue. ECOOP 2007.

Object invariants describe the consistency of object states, and are crucial for reasoning about the correctness of object-oriented programs. However, reasoning about object invariants in the presence of object abstraction and encapsulation, arbitrary object aliasing and re-entrant method calls, is difficult. We present a general framework for reasoning about object invariants based on a behavioural abstraction that specifies two sets---the validity invariant, representing objects that must be valid before and after the behaviour, and the validity effect, describing objects that may be invalidated during the behaviour. The overlap of these two sets is critical because it captures precisely those objects that need to be re-validated at the end of the behaviour. When there is no overlap, no further validity checking is required. We also present a type system based on this framework. This system uses ownership types to confine dependencies for object invariants, and restricts permissible updates to track where object invariants hold even in the presence of re-entrant calls, but otherwise object referenceability and read access are not restricted.

I really liked this paper, but I think it might need a few preliminary explanations. There's a style of verification of OO programs based on "object invariants": the idea that each object has an invariant which every method maintains. Then verification is local, in the sense that you can verify each class's invariants independently. (This is the approach taken by the Boogie methodology behind Spec#, for instance.)

However, there are a couple of wrinkles. First, aliasing: every object's invariant depends on some of the objects in its fields, and you don't want random aliases letting strangers modify your representation objects out from under you. So you introduce a notion of ownership to help track who has permission to mess with each object. Second, reentrancy: suppose a method body has temporarily broken the object's invariant partway through, and then calls another method on the same object? You don't know a priori that the call is safe.
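
Here is a contrived OCaml sketch of the reentrancy hazard (my example, not the paper's):

    (* Intended invariant: lo <= hi. Partway through shift_to the invariant
       is temporarily broken, and the re-entrant call to width observes a
       possibly invalid object. *)
    class range (l : int) (h : int) = object (self)
      val mutable lo = l
      val mutable hi = h
      method width = hi - lo             (* relies on the invariant *)
      method shift_to (new_lo : int) =
        lo <- new_lo;                    (* invariant may not hold here... *)
        ignore self#width;               (* ...yet we re-enter the object *)
        hi <- new_lo + (h - l)           (* invariant restored *)
    end

A verification methodology has to either forbid the call to width at that program point or demand extra evidence that it is harmless, which is the kind of information the validity invariants and effects are meant to track.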

The type system the authors have introduced here tracks ownership and possibly-dangerous reentrant calls. The really clever part is that instead of just rejecting programs that fail these checks, it records all of the places where things might break. So instead of saying "yes" or "no", the type system says "yes" or "manually verify the following things". It's a labor-saving device for a verification methodology.

A Natural Axiomatization of Church's Thesis

A Natural Axiomatization of Church's Thesis. Nachum Dershowitz and Yuri Gurevich. July 2007.

The Abstract State Machine Thesis asserts that every classical algorithm is behaviorally equivalent to an abstract state machine. This thesis has been shown to follow from three natural postulates about algorithmic computation. Here, we prove that augmenting those postulates with an additional requirement regarding basic operations implies Church's Thesis, namely, that the only numeric functions that can be calculated by effective means are the recursive ones (which are the same, extensionally, as the Turing-computable numeric functions). In particular, this gives a natural axiomatization of Church's Thesis, as Gödel and others suggested may be possible.

While not directly dealing with programming languages, I still think this paper might be of interest, since our field (and many of our discussions) is often concerned with computability (or effective computation, if you prefer).

The idea that Church's Thesis can be proven is not new, and indeed there seems to be a cottage industry devoted to this issue (for a quick glance, search the paper for references to Gandy and work related to his).

Even if the discussion of ASMs is not your cup of tea, it seems like a good idea to keep in mind the distinctions elaborated in the conclusions section between "Thesis M" (the "Physical C-T Thesis"), the "AI Thesis", and the "standard" meaning of the C-T thesis.

Another quote (from the same section) that may prove useful in future LtU discussions: "We should point out that, nowadays, one deals daily with more flexible notions of algorithm, such as interactive and distributed computations. To capture such non-sequential processes and non-classical algorithms, additional postulates are required."

Derivatives of Regular Expressions

Derivatives of Regular Expressions, Janusz Brzozowski, Journal of the ACM 1964.

Kleene's regular expressions, which can be used for describing sequential circuits, were defined using three operators (union, concatenation and iterate) on sets of sequences. Word descriptions of problems can be more easily put in the regular expression language if the language is enriched by the inclusion of other logical operations. However, in the problem of converting the regular expression description to a state diagram, the existing methods either cannot handle expressions with additional operators, or are made quite complicated by the presence of such operators. In this paper the notion of a derivative of a regular expression is introduced and the properties of derivatives are discussed. This leads, in a very natural way, to the construction of a state diagram from a regular expression containing any number of logical operators.

This is one of my favorite papers. It describes a very cute algorithm for building deterministic finite automata directly from a regular expression. The key trick is the notion of the derivative of a regular expression with respect to a string, a non-obvious but fun idea (sketched below).

Note: This is an ACM DL link; I couldn't find the paper freely available online. :(
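
For the curious, the construction is easy to render in OCaml. Here is a sketch of the matching core (the paper goes further and builds the automaton whose states are the derivatives themselves):

    type re =
      | Empty                 (* the empty set *)
      | Eps                   (* the empty string *)
      | Chr of char
      | Seq of re * re
      | Alt of re * re
      | Star of re

    (* nullable r holds iff r accepts the empty string. *)
    let rec nullable = function
      | Empty | Chr _ -> false
      | Eps | Star _ -> true
      | Seq (r, s) -> nullable r && nullable s
      | Alt (r, s) -> nullable r || nullable s

    (* deriv a r denotes { w | a.w is in the language of r }. *)
    let rec deriv a = function
      | Empty | Eps -> Empty
      | Chr b -> if a = b then Eps else Empty
      | Alt (r, s) -> Alt (deriv a r, deriv a s)
      | Seq (r, s) ->
          let d = Seq (deriv a r, s) in
          if nullable r then Alt (d, deriv a s) else d
      | Star r as rs -> Seq (deriv a r, rs)

    (* w matches r iff the derivative of r by w accepts the empty string. *)
    let matches r w =
      nullable (String.fold_left (fun r a -> deriv a r) r w)

To get a DFA rather than just a matcher, take derivatives by each alphabet symbol and identify them up to associativity, commutativity and idempotence of union, which Brzozowski shows is enough to keep the set of states finite.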

Festschrift for John C. Reynolds's 70th Birthday

The Festschrift for John C. Reynolds's 70th Birthday is available as a special issue of Theoretical Computer Science, Vol. 375(1-3), 1 May 2007. Many interesting papers! Get it while it's hot.

Local Reasoning for Storable Locks and Threads

Local Reasoning for Storable Locks and Threads. Alexey Gotsman, Josh Berdine, Byron Cook, Noam Rinetzky, Mooly Sagiv.

We present a resource-oriented program logic that is able to reason about concurrent heap-manipulating programs with unbounded numbers of dynamically-allocated locks and threads. The logic is inspired by concurrent separation logic, but handles these more realistic concurrency primitives. We demonstrate that the proposed logic allows for local reasoning about programs that exhibit a high degree of information hiding in their locking mechanisms. Soundness is proved using a novel thread-local fixed-point semantics.
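
To see what "storable" means here, consider a hypothetical example of mine (not from the paper) of the kind of program the logic targets: the lock lives in the heap, inside the record whose field it protects, and is shared by dynamically created threads. In the logic one would attach a resource invariant to the lock governing the count field.

    (* Requires OCaml's threads library. *)
    type counter = { lock : Mutex.t; mutable count : int }

    let make () = { lock = Mutex.create (); count = 0 }

    let incr_counter c =
      Mutex.lock c.lock;          (* acquire the lock stored in the heap *)
      c.count <- c.count + 1;     (* access governed by the lock's invariant *)
      Mutex.unlock c.lock

    let () =
      let c = make () in
      let threads =
        List.init 4 (fun _ ->
          Thread.create (fun () ->
            for _ = 1 to 1000 do incr_counter c done) ()) in
      List.iter Thread.join threads;
      assert (c.count = 4000)

Concurrent separation logic as originally formulated assumes a static collection of named locks; handling locks that are themselves heap-allocated values, as above, is what this paper adds.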
