From shift and reset to polarized linear logic

By now, shift/reset should be as popular as call/cc was ten years ago. Some think these control operators are even more important in practice than call/cc, and should be directly supported by PLs. I believe this paper by Chung-chieh Shan will be interesting to many who love logic and the Curry-Howard isomorphism.

From shift and reset to polarized linear logic

Abstract:

Griffin pointed out that, just as the pure lambda-calculus corresponds to intuitionistic logic, a lambda-calculus with first-class continuations corresponds to classical logic. We study how first-class delimited continuations, in the form of Danvy and Filinski’s shift and reset operators, can also be logically interpreted. First, we refine Danvy and Filinski’s type system for shift and reset to distinguish between pure and impure functions. This refinement not only paves the way for answer type polymorphism, which makes more terms typable, but also helps us invert the continuation-passing-style (CPS) transform: any pure lambda-term with an appropriate type is beta-eta-equivalent to the CPS transform of some shift-reset expression. We conclude that the lambda-calculus with shift and reset and the pure lambda-calculus have the same logical interpretation, namely good old intuitionistic logic. Second, we mix delimited continuations with undelimited ones. Informed by the preceding conclusion, we translate the lambda-calculus with shift and reset into a polarized variant of linear logic that integrates classical and intuitionistic reasoning. Extending previous work on the lambda-µ-calculus, this unifying intermediate language expresses computations with and without control effects, on delimited and undelimited continuations, in call-by-value and call-by-name settings.
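
For those who have never played with these operators, here is a minimal sketch of what shift and reset compute (my own illustration, not the paper's formalism), using the shift and reset exported by Control.Monad.Trans.Cont in the transformers package: reset delimits a context, and shift reifies everything up to the nearest enclosing reset as an ordinary function k.

import Control.Monad.Trans.Cont (Cont, evalCont, reset, shift)

-- reset delimits the context; shift captures the delimited context
-- "1 + [ ]" as the function k and may call it as often as it likes,
-- so the whole expression evaluates to (1 + 10) + (1 + 10) = 22.
example :: Int
example = evalCont . reset $ do
  x <- shift (\k -> pure (k 10 + k 10))
  pure (1 + x)

Note that k is an ordinary function from the hole's type to the answer type, which is why the CPS image of such a program is again a pure lambda-term; as I understand it, that is the connection the paper develops.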


Cool stuff! By the way, why

Cool stuff!

By the way, why do you use the pre tag instead of blockquote?

Spoiled by TeX tutorials :-)

I've read that people find it easier to read text with no more than 66 characters per line... Also, I've seen some people using pre on LtU, so I decided it's the new fashion. Though you may be right, it does not look ok...

Replaced pre by blockquote, so now it has about 150 characters per line on my laptop :-)

Line length

I am not sure I understand. I get the long lines when pre is used. With blockquote lines wrap where they should...

OT: It's in the eye of the beholder

I guess it depends on the resolution of the display and other settings. At 1400x1050 I get really wide and shallow articles - 150 characters per line is not an exaggeration. The current state of the web is not very friendly towards displays at such resolutions (and no, setting bigger fonts in preferences does not work on all sites, as many sites prefer to enforce absolute sizes - LtU being a rare exception). Ah, never mind...

Chung-chieh Shan's homepage

By the way, I find Chung-chieh Shan's homepage a feast... Come and have a look, if you haven't visited it before.

You might find there some previously reviewed papers and even familiar names :-)

A Type-Theoretic Foundation of Continuations and Prompts

Also relevant is this presentation by Amr Sabry et al.: A Type-Theoretic Foundation of Continuations and Prompts.

Just some slogans:

  • Dynamic scope + control => state
  • Do not sweep the top-level continuation under the rug
  • A continuation is not a function
  • A little bit of dynamic scope can help
  • Regular continuations augmented with one dynamically-scoped continuation can express all other effects, including state (see the sketch below)
  • Type-and-effect systems or monads are essential sometimes
  • Logic helps!
  • Subtractive logic is the right way to think about advanced control operators (some of them at least)

[on edit: I hadn't heard about subtractive logic before; Extending Intuitionistic Logic with Subtraction looks like a good introduction.]
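
To make the state slogan above concrete, here is a rough sketch (mine, not from the slides) of the usual Filinski-style encoding of state on top of shift and reset from Control.Monad.Trans.Cont; StateC, getC, putC, and runStateC are just illustrative names:

import Control.Monad.Trans.Cont (Cont, evalCont, reset, shift)

-- A stateful computation is a delimited-control computation whose
-- answer type is a function awaiting the current state.
type StateC s r a = Cont (s -> r) a

getC :: StateC s r s
getC = shift (\k -> pure (\s -> k s s))        -- return the state and thread it on

putC :: s -> StateC s r ()
putC s' = shift (\k -> pure (\_ -> k () s'))   -- drop the old state, resume with the new one

runStateC :: StateC s a a -> s -> a
runStateC m s0 = evalCont (reset (fmap const m)) s0   -- the final answer ignores the final state

-- runStateC (do { n <- getC; putC (n + 1); getC }) 0  ==  1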

Common misspelling

Gosh, for the last time, it's
"Peirce's Law"! :-)
--
Paul Steckler

Starter paper for shift/reset?

Googling shows the seminal paper is tied up with the ACM (pay to view, etc.). Is there another introduction worth reading?

Hmm

Googling shows the seminal paper is tied up with the ACM (pay to view, etc.). Is there another introduction worth reading?

Do you mean Abstracting Control? Or A Functional Abstraction of Typed Contexts?

Link changed

The linked paper has moved here.

Hmm

I have worked through the first three sections, although I haven't yet really dug into the fourth. I found the exposition extremely helpful on some things I'm currently working on.

However, I have a question about the conclusion of Section 3, that "the logical interpretation of shift and reset is just intuitionistic logic." What does this really mean?

To be honest, I'm not sure that I buy it. (Although that may well be my own ignorance.) To me, it feels like glib reductionism, although certainly not to the same degree as the egregious examples cited by Peter Van Roy. I can believe that it is an intuitionistic logic of some kind, but surely it's also more than plain old intuitionistic logic...

Multimodal intuitionistic logic

To be more precise, the logical interpretation of shift and reset is a multimodal intuitionistic logic. Oleg and I explain this stance a bit better, I think, in our more recent TLCA 2007 paper. But we have yet to write the paper in which we concretely use multimodal type-logical grammar to implement shift and reset.

Consider Yourself Lucky

Griffin [22] pointed out that, just as the pure λ-calculus corresponds to intuitionistic logic, a λ-calculus with first-class continuations corresponds to classical logic. We study how first-class delimited continuations [13], in the form of Danvy and Filinski’s shift and reset operators [10, 11], can also be logically interpreted.

I am not sure I even buy the first paragraph, or really understand why it would be interesting to investigate. Guess I'll read up on that reference.

[Scanned the reference; I've seen it before. I don't buy it.]

Curry-Howard correspondence

Well, I really need to respond to CC Shan's comment, but that's going to take a few days of work. I'll handle yours first as it's a lot easier.

The idea is not too difficult: to be precise, if you start with the simply-typed lambda calculus without computational effects, a given type is inhabited if and only if that type is also a theorem of intuitionistic propositional logic, which does not accept the law of the excluded middle as legitimate. If you add call/cc, which can be typed with Peirce's law, then a type is inhabited if and only if it's a theorem of classical propositional logic.

Basically, these languages are sound and complete proof systems for the respective logics. As to why this is useful, I have some intuition, but it's vague and incomplete.
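
To give a concrete (and hedged) illustration of both directions - my example, with sTerm and peirce as illustrative names: a pure term that typechecks is a proof of an intuitionistic theorem, while Peirce's law only becomes inhabited once callCC (here from Control.Monad.Trans.Cont) is available.

import Control.Monad.Trans.Cont (Cont, callCC)

-- (a -> b -> c) -> (a -> b) -> a -> c is an intuitionistic theorem,
-- and this pure term is its proof.
sTerm :: (a -> b -> c) -> (a -> b) -> a -> c
sTerm f g x = f x (g x)

-- Peirce's law, ((a -> b) -> a) -> a, has no pure inhabitant, but
-- callCC has exactly this shape once the answer type r is threaded in.
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC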

Of course, the simply-typed lambda calculus is strongly normalizing (roughly speaking, every expression terminates with an answer), and adding computational effects such as general recursion throws a monkey wrench into things.

For example, in Haskell, the simplest fixpoint operator has the following type:

-- The general fixed-point combinator: ties the recursive knot.
fix :: (a -> a) -> a
fix f = let x = f x in x

-- An inhabitant of every type - but it never produces a value.
bot :: a
bot = fix id

By simply claiming fix id as your "proof", you can prove anything you want (i.e., everything). So you have to tread carefully, and you need to prove that your "proof" always terminates before its type can really be accepted as a theorem.

Even though computational effects create problems, it's not the end of the Curry-Howard correspondence. In the simply typed lambda calculus with general recursion, for example, if your type is not a theorem of intuitionistic logic, then there are circumstances where your expression will not terminate. (Although the inverse statement is obviously not true.)
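
For instance (my example), a -> b is not an intuitionistic theorem, and any pure Haskell term of that type can indeed only fail to terminate when you ask it for a value:

-- The only pure inhabitants of a -> b never return a result.
stuck :: a -> b
stuck x = stuck x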

I hope that's enough to convince you as to why it's interesting; however, I myself would love to hear about further practical applications of all this. Somebody more knowledgeable should correct me where I'm wrong and expand on these statements.

I think

You should also read the extra comment in the conclusion of [22].

[Accepting Peirce's Law is accepting the excluded middle, as far as I understand.]

[If you add call/cc, which can be typed with Peirce's law, then a type is inhabited if and only if it's a theorem of classical propositional logic. .... sigh ]