An Innocent Model of Linear Logic

An Innocent Model of Linear Logic by Paul-André Melliès was referenced by Noam in a serendipitous subthread of the "Claiming Infinities" thread.

Here's the abstract:

Since its early days, deterministic sequential game semantics has been limited to linear or polarized fragments of linear logic. Every attempt to extend the semantics to full propositional linear logic has bumped against the so-called Blass problem, which indicates (misleadingly) that a category of sequential games cannot be self-dual and cartesian at the same time. We circumvent this problem by considering (1) that sequential games are inherently positional; (2) that they admit internal positions as well as external positions. We construct in this way a sequential game model of propositional linear logic, which incorporates two variants of the innocent arena game model: the well-bracketed and the non well-bracketed ones.

The introduction goes on to refer to André Joyal's "Category Y with Conway games as objects, and winning strategies as morphisms, composed by sequential interaction," and points out that "it is a precursor of game semantics for proof theory and programming languages," and is "a self-dual category of sequential games." The foreword mentions that the paper goes on to give "a crash course on asynchronous games" and then "constructs a linear continuation monad equivalent to the identity functor, by allowing internal positions in our games, [which] circumvents the Blass problem and defines a model of linear logic."

Jacques Carette called this paper mind-blowing. My mind-blow warning light already exploded. I'm posting this paper because I know a number of LtUers are interested in these topics, and this way I can buttonhole one of them the next time I see them and ask them to explain it to me. ;)


So, one of the prettiest

So, one of the prettiest things about classical propositional logic is the duality structure of de Morgan's laws -- ~(A v B) == ~A & ~B and A == ~~A. One of the prettiest things about intuitionistic propositional logic is the computational interpretation of its propositions and proofs as the types and terms of the simply-typed lambda calculus.
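
To make the Curry-Howard half of that concrete, here's a tiny Haskell sketch (my own illustration, nothing deeper): each intuitionistic tautology becomes a type, and a proof of it is just a program of that type. The names are mine.

    -- A proof of (A & B) -> (B & A): conjunction is pairing, so the proof is swap.
    andComm :: (a, b) -> (b, a)
    andComm (x, y) = (y, x)

    -- A proof of A -> (B -> A): implication introduction is lambda-abstraction.
    k :: a -> b -> a
    k x _ = x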

So a natural question is whether there's a way of putting these two beautiful things together. The answer is that it almost works. We know, proof-theoretically, that if you add call/cc to your programming language, you also get a type theory for classical logic. However, it turns out to be very hard to give categorical models for these calculi -- if we start with a cartesian closed category (which can interpret intuitionistic propositional logic) and postulate the existence of a double-negation functor isomorphic to the identity functor (so as to capture the principle that A is the same as not-not-A), then we can show the category is degenerate -- all arrows between any two objects are equal. (That is, we get a Boolean lattice.) This sucks! While there are fancier models you can give for these calculi (which are related to Noam's thesis work, and to the CPS transform), another approach is to look at why the degeneracy happened.
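
To make the call/cc remark concrete: under Curry-Howard, the type of call/cc is Peirce's law, ((A -> B) -> A) -> A, which is classically but not intuitionistically provable. Here's a small sketch with Haskell's standard continuation monad (just an illustration of the correspondence, not anything from the paper):

    import Control.Monad.Cont

    -- callCC has type ((a -> Cont r b) -> Cont r a) -> Cont r a,
    -- which Curry-Howard reads as Peirce's law ((A -> B) -> A) -> A.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC

    -- A toy use: invoking the captured continuation k aborts the rest of the
    -- computation, so this evaluates to 42 rather than 0.
    example :: Int
    example = runCont (callCC (\k -> k 42 >> return 0)) id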

And it turns out that the reason for it is the presence of the principles of contraction and weakening. In terms of cut-elimination in the sequent calculus, the basic problem is that two different proofs of Γ |- Δ can be weakened and then cut against each other, yielding a proof of Γ, Γ |- Δ, Δ, and the duplicated copies can then be eliminated via contraction to give a proof of Γ |- Δ again. Cut-elimination can reduce this composite proof to either of the two original ones, so a cut-elimination procedure has to equate those two proofs -- which means that a denotational model that equates terms up to beta-eta will equate all classical proofs.
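
Concretely (this is my paraphrase of Lafont's example, so check Appendix B of Proofs and Types for the precise version): take any two proofs pi1 and pi2 of Γ |- Δ, weaken each with a fresh formula A on opposite sides, and cut:

$$
\dfrac{\dfrac{\dfrac{\pi_1}{\Gamma \vdash \Delta}}{\Gamma \vdash \Delta, A}\,\textit{wk}
\qquad
\dfrac{\dfrac{\pi_2}{\Gamma \vdash \Delta}}{A, \Gamma \vdash \Delta}\,\textit{wk}}
{\Gamma, \Gamma \vdash \Delta, \Delta}\,\textit{cut}
$$

followed by contractions back down to Γ |- Δ. Cut-elimination can remove the cut by keeping either pi1 or pi2, so a model validating cut-elimination must identify them -- and pi1, pi2 were arbitrary.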

So Girard proposed classical linear logic, which forbids contraction and weakening, with the idea that this would let you have an involutive negation (so that A is isomorphic to ~~A), and also have a computational interpretation. And this worked great -- with a caveat. The caveat is that the simple computational models of linear logic were of parallel programming languages, whereas the simply-typed lambda calculus is a sequential programming language. So the analogy is not quite as tight as we might want.
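
For reference, linear negation is involutive by construction: it is defined on atoms and pushed through the connectives by de Morgan duality (writing ⅋, "par", for the dual of ⊗):

$$
(A \otimes B)^\perp = A^\perp \parr B^\perp,
\qquad
(A \parr B)^\perp = A^\perp \otimes B^\perp,
\qquad
A^{\perp\perp} = A
$$

So A really is, not just isomorphic to, its double negation at the level of formulas; the hard part is finding models where this is reflected at the level of proofs.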

And so this explains why this paper is interesting -- it supports involutive negation and a sequential semantics, which gives us a way of having both shiny things at once. :)

Mix

So a cut-elimination procedure has to equate those two proofs

This is the standard story; it's in Lafont's Appendix B of Proofs and Types (cf. this reading list): cut-elimination with contraction and weakening leads to the identification of proofs. But the argument depends on certain Girardian theses about what it means to prove a sequent, ones that forbid the mix rule: if d proves Γ |- Δ and d' proves Γ' |- Δ', then mix(d,d') proves Γ, Γ' |- Δ, Δ', which one may realise in a proof theory as the juxtaposition of proofs.
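
Written as an inference rule, mix is just

$$
\dfrac{\Gamma \vdash \Delta \qquad \Gamma' \vdash \Delta'}{\Gamma, \Gamma' \vdash \Delta, \Delta'}\;\textit{mix}
$$

which makes the "juxtaposition" reading literal: the two subproofs sit side by side without interacting.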

Having mix interferes with the Andreoli-style focussing explanation of proof search, but it allows you to find representations for cut-elimination in the presence of weakening and contraction that avoid the need for degenerate identifications of proofs. It's pretty intuitive too.

Why not mix?

Having mix interferes with the Andreoli-style focussing explanation of proof search, ...

Could you elaborate on this, or give a pointer? In my understanding, juxtaposition of proofs (modelling nondeterminism during cut-elimination) is perfectly compatible with focalization. Are you thinking specifically of proof search, or is there some other Girardian thesis involved?

Designs

I mangled my point somewhat. Girard introduced the mix rule with his coherence space semantics for linear logic, but disliked it, because it didn't square with the logical intuitions he had for linear logic. In Locus Solum he says that ludics provides a correct characterisation of logic because "there is no natural way to mingle designs, and sometimes -in the presence of empty ramifications- no way at all" (from the mix entry in the glossary).

Ludics isn't exactly proof search, and the issue with mingling isn't exactly one of focus, but I guess we can at least say there is another Girardian thesis involved.

Re: Designs

To me this seems like an issue very particular to ludics. But I don't know, sometimes it's hard to get into Girard's head.

My point is this: classical cut-elimination is inherently non-deterministic. One way of dealing with non-deterministic proof reduction is to reify the non-determinism with the mix rule, i.e., a proof is actually a set of proofs, representing all the possible results of cut-elimination. Focalization avoids the need for mix by making proof reduction deterministic, basically through continuation-passing style. But if you want to go back and add non-determinism as a side-effect, there's nothing preventing you (e.g., I did this in my thesis).
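
As a rough programming analogy (mine, not anything from the literature): reifying the non-determinism amounts to collecting every possible reduct, list-monad style, whereas the focalized/CPS story fixes an evaluation order up front and can then re-introduce choice as an ordinary effect.

    import Control.Monad (MonadPlus(..))

    -- Keep both reducts of a critical pair instead of choosing between them:
    -- the "proof" is the whole collection of possible outcomes.
    bothReducts :: MonadPlus m => m a -> m a -> m a
    bothReducts = mplus

    -- With the list monad this literally juxtaposes the two results.
    outcomes :: [Int]
    outcomes = bothReducts (return 1) (return 2)   -- [1,2]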

(By the way, note that this kind of non-determinism -- where we explicitly collect all of our non-deterministic choices -- is very different from the parallelism Neel was talking about, where our choices don't matter...and the latter is important.)

Sequential vs parallel

Neelk writes:

The caveat is that the simple computational models of linear logic were of parallel programming languages, whereas the simply-typed lambda calculus is a sequential programming language.

Can you elaborate? In which sense do you view linear logic as parallel and the lambda calculus as sequential? And why do you seem to consider it a drawback to be parallel rather than sequential?

The sense in which I meant

The sense in which I meant that being classical and linear seemed to call for parallelism is the Blass problem referred to in the abstract -- namely, a game semantics with strictly interleaved moves between players doesn't give enough equations to model all the equations on proofs that cut-elimination tells us should hold. But a semantics which allows parallel moves does do the job.

On the other hand, for an intuitionistic linear logic, this sequential semantics does yield enough equations. (I note the model in this paper is sequential, too, so the adjective "simple" in my comment really meant "the first batch of models people published".[*] )

I don't regard being parallel as a bad thing: it's merely a difference from the intuitionistic case, and it's natural to tighten correspondences if you can. Of course, mathematics (as well as NVidia) often encourages us to be parallel, whether via parallel-or for PCF or via temptingly cheap GPUs. :)

[*] Or maybe this paper just isn't simple! It mentions homotopy, and legitimate work-related reasons to learn algebraic topology are relatively rare in PL, so perhaps this one should be valued.