Propositions as Types

Propositions as Types, Philip Wadler. Draft, March 2014.

The principle of Propositions as Types links logic to computation. At first sight it appears to be a simple coincidence---almost a pun---but it turns out to be remarkably robust, inspiring the design of theorem provers and programming languages, and continuing to influence the forefronts of computing. Propositions as Types has many names and many origins, and is a notion with depth, breadth, and mystery.

Philip Wadler has written a very enjoyable (Like busses: you wait two thousand years for a definition of “effectively calculable”, and then three come along at once) paper about propositions as types that is accessible to PLTlettantes.


Soliciting comments

As a PLTlettante, I welcome this.
He's soliciting comments on the draft on his blog.

Proposition *is* a type

Proposition is a type of Actor. Furthermore, Conjunction, Disjunction, Predication, etc. are subtypes of Proposition. Types for propositions play a crucial role in disallowing Gödelian "self-referential" propositions that lead to contradictions in mathematics. See LtU discussion: Mathematics self-proves its own Consistency (contra Gödel et al.)

PS. An Actor can be queried as to whether it implements a type; but it doesn't have to answer!

PPS. Nevertheless, the article linked at the beginning of this LtU topic is an excellent introduction to the Curry-Howard approach :-)

Jaśkowski's natural deduction is superior to Gentzen's for CS

For the purposes of Computer Science, Jaśkowski's natural deduction is superior to the system developed independently and concurrently by Gentzen. Someone should rework the results in the article linked at the beginning of this LtU topic using Direct Logic (which is an extension of Jaśkowski's natural deduction).

See the following: Inconsistency Robustness in Foundations

Why not Curry-Howard?

I think the Curry-Howard correspondence is beautiful and this paper gives a nice presentation and summary, but I'm not sure I agree with the implication at the end that CH shows type theory is on the right track. Many parts of mathematics have a beautiful everything-just-fits feel. The conclusion has an almost mystical tone to it.

What could be wrong with using CH as a way to identify proof and computation? The main problem I see is that it doesn't: it identifies proof with typed computation. The paper has a section on the fundamental role of the lambda calculus as a model for computation, but then shifts, without much persuasion, to the typed lambda calculus in order to move forward with CH. I think CH shows that you don't need both a type system and a logic at your core. But isn't it conceptually cleaner to separate the logical and computational aspects of lambdas by working with an untyped lambda calculus and a Curry-style type system / logic?
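For concreteness, here is a minimal sketch of that Curry-style reading in Haskell (whose source language is essentially Curry-style: terms carry no annotations and types are inferred). The names are mine, purely for illustration.

-- The same unannotated term is given many types; the term itself never changes.
identity = \x -> x          -- inferred scheme: forall a. a -> a

atInt :: Int
atInt = identity 3          -- identity used at Int -> Int

atBool :: Bool
atBool = identity True      -- identity used at Bool -> Bool

-- By contrast, \x -> x x is a perfectly good untyped term but is rejected by
-- the type checker (occurs check), so it never corresponds to a proof:
-- selfApply = \x -> x x

main :: IO ()
main = print (atInt, atBool)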

The lambda calculus is inadequate for computation

The lambda calculus is inadequate for computation. See the LtU discussion: Computation is not subsumed by deduction

Consequently there is no direct correspondence between logic (bounded nondeterminism) and computation (unbounded nondeterminism).

Modern Logic Programs superior to Curry-Howard correspondence

Modern Logic Programs with Jaśkowski-style natural deduction are superior to the Curry-Howard correspondence (lambda calculus programs with Gentzen-style natural deduction).

See LtU discussion: Inconsistency Robustness in Logic Programs

Nuprl is set up exactly like

Nuprl is set up exactly like that: an untyped lambda calculus paired with a Curry-style type system. You can give the same expression different types, and a given derivation can involve different typings of the same expression. (But I've never used Nuprl, so I'm not sure that it ends up very different in practice from Church-style systems like Coq.)

So you can write (\x. x x)

So you can write (\x. x x) and then give it types? Do you have a reference? My brief scanning led me to CTT, which seemed to have lambdas with typed domains.

I was thinking of the typing

I was thinking of the typing rules given in Appendix B of Karl Crary's PhD thesis. That's the only place I know of that presents the typing rules in the usual format; the Nuprl book uses a more informal presentation.

Thanks

I can't read that with my phone while I'm traveling this weekend, but I'll check it out.

Yikes, that's a lot of

Yikes, that's a lot of rules! Thanks for the link. It looks like this is maybe slightly different from what I had in mind, because functions are only ever defined together with a type. The presence of subtyping rules gets you something very close, though.

I agree with you that

I agree with you that studying untyped computation is a worthwhile activity.
I also agree that type systems and logic are best studied under a single model.
However, I still disagree with the conclusion.


> But isn't it conceptually cleaner to separate the logical and computational aspects of lambdas by working with an untyped lambda calculus and a curry-style type system / logic?

First, CH does not identify proofs and computation, but proofs and (typed) terms. And I guess those terms better stay typed, since typechecking is already undecidable for Curry-style System F, and for most type systems you'd want to use in proofs. Hence, even though the separation you describe makes sense, it does not seem technically possible.

CH identifies proof normalization with typed computation, so we only need one — which one? I find typed computation much easier to work with (concretely) than proof normalization — maybe because of my habits and interests, maybe because of notations, maybe because of deeper intrinsic reasons. But beta-reduction predates natural-deduction proof normalization by 30 years, and that qualifies at least as anecdotal evidence.
Notationally, reading proof normalization steps makes my head hurt until I see the corresponding proof terms.
Intrinsically, untyped computation helps in understanding typed computation.
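To make that concrete, here is a small Haskell sketch (standard material, names mine) of reading a proposition as a type and proof normalization as evaluation.

-- Conjunction as the pair type, implication as the function arrow:
-- a proof of (A ∧ B) → (B ∧ A) is a program of type (a, b) -> (b, a).
swapProof :: (a, b) -> (b, a)
swapProof p = (snd p, fst p)

-- Normalizing the proof of B ∧ A obtained by applying swapProof to a pair of
-- proofs is just beta-reducing the application:
--   swapProof (proofA, proofB)  ~~>  (proofB, proofA)
main :: IO ()
main = print (swapProof ("proof of A", "proof of B"))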

I also like to use natural-deduction-inspired principles to design language cores, even if the applications are unrelated to logic. Defining types by their introduction and elimination rules is a powerful design principle — Harper advocates it from the very beginning of his latest book.


> Many parts of mathematics have a beautiful everything-just-fits feel. The conclusion has an almost mystical tone to it.

I agree that CH might only be as beautiful as those parts of mathematics. (I hope that the beauty in those areas is described as beautifully).

First, CH does not identify

> First, CH does not identify proofs and computation, but proofs and (typed) terms. And I guess those terms better stay typed, since typechecking is already undecidable for Curry-style System F, and for most type systems you'd want to use in proofs. Hence, even though the separation you describe makes sense, it does not seem technically possible.

The role of the logic in my proposed approach is just to reason formally about functions. It doesn't need to be a decidable type discipline.

> I agree that CH might only be as beautiful as those parts of mathematics. (I hope that the beauty in those areas is described as beautifully).

Yes, and I should have mentioned that Wadler's latest paper was excellent as always before picking my pet nit.

typed calculus = (untyped calculus, logic)

> ... isn't it conceptually cleaner to separate the logical and computational aspects of lambdas by working with an untyped lambda calculus and a curry-style type system / logic?

Isn't this exactly what CH brings to the table? If we start from a typed calculus, we can erase the types, projecting down to an untyped calculus, and (adjointly) if we start from an untyped calculus we can synthesize a "Maybe logic" (alternatively: given an appropriate context, we can synthesize a logic [0]), injecting into a typed calculus. Unless I'm mistaken, CH allows one to work with whichever side of this equation one prefers, then translate the results for that minority [1] of one's colleagues who mistakenly prefer the other side.
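As a toy picture of the "unzip" direction, here is a sketch (Haskell, with data types of my own devising) in which a Church-style simply typed term erases to an untyped program tree, forgetting only the annotations:

-- Simple types, annotated (typed) terms, and bare (untyped) terms.
data Ty  = TInt | Ty :-> Ty                             deriving Show
data Tm  = Var String | Lam String Ty Tm | App Tm Tm    deriving Show
data UTm = UVar String | ULam String UTm | UApp UTm UTm deriving Show

-- Erasure keeps the program tree and drops the logic (the types).
erase :: Tm -> UTm
erase (Var x)     = UVar x
erase (Lam x _ b) = ULam x (erase b)
erase (App f a)   = UApp (erase f) (erase a)

main :: IO ()
main = print (erase (Lam "x" TInt (Var "x")))  -- ULam "x" (UVar "x")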

A concrete example: If we wish to match a regular expression, there are two basic approaches: we can interpret the expression against the input directly, or we can compile it down to an automaton, which is then run against the input. When we do the latter, we usually want to partition the transitions into classes, in order to build a finite machine that operates on an infinite domain (e.g. strings). This partitioning is already a rudimentary logic (i.e. typing), and when one then calculates "input in this partition in an initial state can wind up in these partitions in a final state" (conversely: "in order to end in this partition, we must start from these"), that is also logical manipulation — which at a higher level in PLs is often expressed via types. However, both the untyped (monotyped?) interpretation and the typed (if only implicitly, via its transition table) automaton should be equivalent.
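The direct-interpretation half of that analogy can be sketched with Brzozowski derivatives (my own illustration, not from the comment); compiling to an automaton then amounts to tabulating deriv over the partition classes described above.

data Re = Empty | Eps | Chr Char | Alt Re Re | Seq Re Re | Star Re

-- Does the expression accept the empty string?
nullable :: Re -> Bool
nullable Empty     = False
nullable Eps       = True
nullable (Chr _)   = False
nullable (Alt r s) = nullable r || nullable s
nullable (Seq r s) = nullable r && nullable s
nullable (Star _)  = True

-- Residual expression after consuming one character.
deriv :: Char -> Re -> Re
deriv _ Empty     = Empty
deriv _ Eps       = Empty
deriv c (Chr d)   = if c == d then Eps else Empty
deriv c (Alt r s) = Alt (deriv c r) (deriv c s)
deriv c (Seq r s) | nullable r = Alt (Seq (deriv c r) s) (deriv c s)
                  | otherwise  = Seq (deriv c r) s
deriv c (Star r)  = Seq (deriv c r) (Star r)

-- Direct interpretation of the expression against the input.
match :: Re -> String -> Bool
match r = nullable . foldl (flip deriv) r

main :: IO ()
main = print (match (Star (Alt (Chr 'a') (Chr 'b'))) "abba")  -- True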

[0] Which logic? Note that CH is more general than presented here by Wadler; e.g. Simmons shows that although (lambda calculus, Gentzen deduction) is one Curry-Howard pair, (combinators, Hilbert deduction) is another; we should expect to find that (X, Jaśkowski deduction) would be yet a third.
[1] Shall we call them "bitter-enders" for their insistence upon breaking eggs at the wrong end?

That doesn't work

> ... and (adjointly) if we start from an untyped calculus we can synthesize a "Maybe logic" (alternatively: given an appropriate context, we can synthesize a logic [0]), injecting into a typed calculus.

If this were true, the mysticism surrounding CH might be warranted, but I don't think there's anything so deep to be found. Having properly typed functions is essential to the correspondence. There isn't any principled way to start with untyped lambda calculus and get Gentzen deduction to fall out. You have to start with the simply typed lambda calculus or some other typed calculus.

And even then, the thing that falls out isn't very surprising (at least in retrospect). CH just tells you that you can squint your eyes and view your terms as proofs. This is a rather shallow correspondence in the sense that there isn't a semantic component; there is no notion of truth carried over to the type/program side.

maybe I'm misreading Simmons

maybe I'm misreading Simmons, "Derivation and Computation", CUP 2000, in particular page 80 (which largely parallels page 60's development of typed/untyped combinator computations; he sketches an untyped->typed recursive algorithm on p60, but leaves the equivalent for lambda calculus as exercise 4.13 (answer p274)):

4.15 THEOREM (Type erasure)
(□) t⁻ ▷▷ t⁺ ↦ (□ε) t⁻ε ▷▷ t⁺ε
converts each typed computation □ into an untyped computation □ε where these have the same arboreal code. Furthermore, given an untyped computation
(□*) t ▷▷ t*
where the subject t is the erasure of a typed term t⁻, there is a unique typed computation □ (as above) with □ε = □* (and hence t* = t⁺ε).

The parenthetic comment about adjoints does appear to have been a misunderstanding of pp. 72-74, in which there is a demonstration of a pair of adjoint functions (annotation and deletion); but in retrospect these convert between Hilbert and natural derivations, and I agree that discovering that one logic refines another tells us far more about booleans than it does about computation.

Hmmm

I agree a similar result to this one would hold for the simply typed lambda calculus. I thought you were talking about starting from an untyped calculus and synthesizing a type system or logic, but now I see you're talking about starting with a type system in mind and then trying to recover the type for a particular term. You don't have uniqueness in general type systems (id: Nat -> Nat and id: forall a. a -> a), though. And I'm not seeing the connection to CH. ... Yeah, I'm lost.

typed calculus = (untyped calculus, logic)

I'm talking about being able to zip an untyped calculus (program tree) with a logic (proof/typing tree) and get a typed calculus, or unzipping the typed calculus into an untyped calculus and a logic (your original query?), or calculating any one of the three given the other two.
[for what it's worth: (a) upon further reading, the Natural Deduction in the text is explicitly not Gentzen, and (b) as it's obvious I'm having difficulty with the material, I've backed up to chapter 3, and according to the preview, Simmons won't start dealing with non-uniqueness until chapter 6]

Here's a connection to CH: (in which it appears that 'annotation' and 'deletion' are relevant; I was confused because in splitting the untyped terms from the logic, Simmons prefers the opposite projection to what I am used to: he erases the terms to keep the types)

By Theorem 3.6 each C-derivation
(∇) Γ ⊦ t : τ
is uniquely determined by the root pair (Γ,t). In fact, this pair is an uncoupled version of the arboreal code for ∇. By Theorem 3.10 the C-derivation is uniquely determined by Γ and the H-derivation
(∇δ) Γδ ⊦ τ
and every H-derivation arises in this way. Thus, for a given context (declaration list) there is a matching between

        combinator terms t        H-derivations ∇ [*]

(although not all combinator terms can occur). Imprecisely we say that combinator terms and H-derivations are 'the same thing'.

As far as I can see at this point, Simmons is more interested in carrying things from the computational side over to the logic side, in particular:

Why should the material of Chapters 1 and 2 appear in the same book? One reason is that combinator reduction can be used to simplify H-derivations.
Suppose we have an H-derivation
(∇) Γ ⊦ τ
given by its arboreal code ∇. By annotating this we produce a C-derivation
(∇) Γ ⊦ t : τ
for some subject t. As indicated, this new derivation has essentially the same arboreal code. The only difference is that identifiers are used in place of position indexes. Erasure gives us an untyped term tε. This has no trace of the types in the original H-derivation. By analysing tε we may produce a reduction
(□) tε ▷▷ t*
to an untyped term t*. But now a use of subject reduction followed by deletion produces a new H-derivation
(∇⋅□) Γ ⊦ τ
which in some sense is simpler than ∇ ...

which I would normally interpret in the opposite sense, going from proofs to programs, to get "executing a valid program yields a valid result".

(a conclusion which is a comfort to those of us who wish to use types at compile time instead of at run time, and not such a comfort to those of us who have experienced that a well-formed program may very well go wrong in quite mysterious ways if it happens to be executed on a system where something else is capable of corrupting, say, its free list)
* should this be ∇δ?

It's not really untyped.

If you can only admit terms that type in the typed calculus, then the admissible terms of the untyped calculus are exactly those of the typed calculus. This would appear to be type inference when 'zipping' the types, and type erasure when unzipping them. What you zip to the lambda calculus is a first-order logic in the case of HM, and type inference itself maps neatly into this logic, with the exception of renaming. Nominal first-order logic and nominal unification (as used in alpha-prolog) allow HM type inference to be written in three simple lines:

% Variable: look up the type of X in the context Gamma.
type Gamma (var X) A :- member (pair X A) Gamma.
% Application: M must have an arrow type whose domain matches the type of N.
type Gamma (app M N) B :- type Gamma M (arrow A B), type Gamma N A.
% Abstraction: with x fresh for Gamma, type the body under the assumption x : A.
type Gamma (lam x.M) (arrow A B) / x#Gamma :- type (pair x A)::Gamma M B.
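For example (keeping the same notation, and assuming [] for the empty context, which may differ from actual alpha-prolog syntax), inferring the identity function would look roughly like:

?- type [] (lam x.(var x)) T.
T = arrow A A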

What I have done for my project is write the renaming as a separate process, so that the types become simple first-order logic terms and form a Prolog-like database. This separates the inference process from the checking, so that the logic terms for the whole program can be inferred (and cannot fail), and then type-checking is executing the resulting logic program with a query requesting the (top-level) type of the whole program. This works because I am using compositional principal typings. The programmer can add additional logic statements to act as constraints on the database, which gives similar functionality to dependent types and type classes.

The conclusion has an almost

> The conclusion has an almost mystical tone to it.

However much I enjoyed the conclusion, the mysticism surrounding CH tends to evaporate upon noticing that we have characteristic functions on both sides: logical propositions (whose truth is a map to Bool) naturally share a great deal of structure with computational types (whose membership is a map to Bool).

I would guess that the correspondence was fairly obvious

but at the same time its importance was much less obvious. So the credit rightly goes to those who both recognized it and, realizing its importance, explored it. Those who merely recognized it failed to leave a trace. It is a bit like San Francisco Bay: the Spanish sailed past it and knew there was a bay there for a very long time before someone actually got around to exploring it and discovered it was far more interesting than they had imagined.

Modern Logic Programs: proofs using Jaśkowski natural deduction

Modern Logic Programs produce proofs using Jaśkowski natural deduction in computations in which each computational step is logically inferred.

Consequently,
* Proofs are more natural than those using Gentzen natural deduction (featured in Wadler's paper linked at the beginning of this LtU topic).
* The proving process (with explicit assertions and goals) is more general than what can be implemented in the lambda calculus.

How far can it be taken?

Coincidentally I have been looking at this recently, so it's nice to see a good paper on the subject.

What I am interested in is whether this can be taken further. Looking at Twelf, it seems like the type expressions could be directly translated to Prolog. What is interesting is that Twelf implements an infinite tower of nested universes, but Prolog is flat. If we ignore the Russell paradox, can all the meta-reasoning in Twelf be expressed in Prolog too? It is clear that we would need a Prolog implementation supporting PTTP (Prolog Technology Theorem Proving), which has sound unification, model elimination, and iterative deepening. In fact it would be quite easy to provide meta-interpreters for different search strategies and inference.

Simpler question

Does anyone know (or know of any papers discussing) whether higher-order logic is necessary to reason about a programming language that does not have lambda abstraction (like Prolog, for example)?

Skepticism vs mysticism

The Curry-Howard correspondence is interesting. I greatly respect those who unearthed it, and don't begrudge research about it. But I've never belonged to its cult. CH to me is a result whose profundity is greatly exaggerated. I don't see it as relating logic to computation, but rather, relating two different ways of reasoning about computation. What's profound, to me, in comparing logic and computation is their change of emphasis (which seems symptomatic of something I've never quite been able to grasp). Russell's paradox is a big deal, while an analogous predicate that recurses infinitely when applied to itself is simply not a big deal. Wadler, in naming correspondences, gets around to "normalisation of proofs as evaluation of programs"; but that looks to me like an illustration of the logic/computation mismatch, because normalization of proofs is comparatively not all that big a deal, whereas evaluating programs is the whole point of the exercise.

Yes, CH underscores that types are a way of reasoning; they are, I've become increasingly convinced over the years, a fundamentally perverse way of reasoning, esoteric in much the same sense as INTERCAL etc., only more insidious. They start out seeming reasonable, and twist the mind until it's tied up in knots (reminiscent of the urban legend about boiling a frog). I find it ironic that Wadler mentions QM in the second sentence of his introduction, since I see QM as a classic example of a theory that perpetuates itself by making itself hard to think one's way out of.

Example?

Could you offer an example of a "knotted type" (or a type that, as you say, "ties [your mind] up in knots")? I'm just curious how you see this approach going wrong, in particular.

I assume that you aren't arguing that you can derive contradictions from type theory; is that right, or is that your objection?

No contradictions here

> I assume that you aren't arguing that you can derive contradictions from type theory

You are correct; I'm not arguing that. I'd be tempted to argue the other way around: that the absence of contradictions in type theory is an example of type theory obfuscating things. (Iirc someone extended the Curry-Howard correspondence to relate Felleisen's lambda-control-calculus to classical logic. But I digress.)

As for types, hm. When I blogged on this (back in 2011), I see I cited a couple of discussions on LtU, one of them about the expression problem, and gave a shout-out to Haskell's type system. (S'pose I should provide a link; here.)

Computational Trinitarianism

I found this, which seems to answer my earlier question about how to take the isomorphism between types and logic further, and throws in an isomorphism to category theory for good measure:

http://ncatlab.org/nlab/show/computational+trinitarianism

Seems a good resource for looking at the relations between logic, type theory and category theory. What seems interesting to me is that the common usage seems to be a bit muddled between theories. For example, universal quantification is part of logic, and the type-theory equivalent is the dependent product type. So when we see universally quantified types, what is going on? It would appear to be an ad hoc combination of ideas from different theories. Is that right?

First- and Second- order polymorphism

Universally quantified types correspond in logic to second-order quantification, while the usual first-order quantification is indeed a form of dependent types (as found in Logical Frameworks). There is no confusion, but programmers need second-order quantification more often than first-order, while logicians often content themselves with first-order logic (despite its well-known lack of expressiveness for modelling most mathematical theories).

System F was proposed around 1972, simultaneously and independently, by Reynolds, who was looking for a formal way to model type-polymorphic programs, and by Girard, who was working on formal systems powerful enough to express second-order arithmetic.
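A small Haskell sketch (names mine) of the second-order reading: the System F proposition ∀A. A → A is the type of the polymorphic identity, and a rank-2 argument position demands a term of that whole quantified type.

{-# LANGUAGE RankNTypes #-}

-- The proposition  forall A. A -> A  read as a type, inhabited by the identity.
polyId :: forall a. a -> a
polyId x = x

-- Second-order use: the caller must supply the quantified type, which the
-- body may then instantiate at several different types.
useTwice :: (forall a. a -> a) -> (Int, Bool)
useTwice f = (f 3, f True)

main :: IO ()
main = print (useTwice polyId)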

Type theory extends logic?

Clearly the confusion is mine, which is not unusual; that site appears only to be comparing first-order logic to dependent type theory. I found a nice description of the relationships:

VL(PL)  = { '¬', '∧' }
VL(FOL) = VL(PL) ∪ { '=', '∀₁' }, where ∀₁ quantifies over individuals
VL(SOL) = VL(FOL) ∪ { '∀₂' }, where ∀₂ quantifies over properties (of individuals)
VL(HOL) = VL(FOL) ∪ { '∀ₙ' }, where ∀ₙ quantifies over yet higher-order properties
VL(TT)  = VL(_OL) ∪ { 'λ' }, where _OL is a _-order logic (usually _ > 0)

From: http://math.stackexchange.com/questions/477559/relationship-between-propositional-logic-first-order-logic-second-order-logic

So to clarify, you are pointing out that System F corresponds to second-order logic? On that basis, what type system corresponds to first-order (or propositional) logic? Are there any type systems that correspond to higher-order logics (but are not dependent)?

Edit: It appears I am still confused, as the universal quantifier in HM types seems to correspond directly to first-order logic. I think this confusion is more about naming, as what the mathematicians call type theory appears to be specifically dependent type theory. Do mathematicians not consider HM types a 'type theory'? Is this perhaps something to do with the definition of what a theory is?

What theory contains both types and universal quantification? I know System F has been suggested, but is that a theory in the mathematical sense?

Predicate logic vs. propositional logic

I think you're thrown by not having the difference between the predicate/propositional versions of "second order logic" in your matrix.

Propositional logic = Simple types
First order logic (quantifiers over individuals) = Dependent types
Second order propositional logic (quantifiers over propositions) = System F
Second order predicate logic (SOL above; quantifiers over individuals and over predicates) = Lambda H (which I don't know much about)

Dependent types more expressive

I thought more programs could be typed with dependent types than with System F; is this wrong? Or is it a property independent of order, so that you can have simple dependent types or higher-order dependent types? Yet there seems to be a relationship between first-order quantification and dependent product types. Is there such a thing as a higher-order dependent product isomorphic to higher-order quantification?

Yes, when someone says

Yes, when someone says they're using dependent type theory, they don't mean FOL. They mean something like Coq, in which propositions can depend on values and on other propositions. I don't know of anyone that uses FOL as a type system.

isomorphism

Right, but there's an isomorphism between n-order logic and dependent types, so really they are the same thing. For example, it looks like Twelf uses the equivalent lambda-Prolog as its core language, which makes sense as pattern unification is decidable for restricted second-order terms.

I think I understand so far. What still puzzles me is: if dependent product types are isomorphic to first-order quantification, do you need quantification as well? It looks like the answer is no, since, as above, Type Theory = n-order logic plus lambdas. I think the problem is that different explanations of the isomorphism use different terminology. So although Type Theory is n-order logic plus lambdas, it uses its own 'language'.
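For what it's worth, in a dependent type theory the first-order quantifier is not a separate connective at all: it is the dependent function (Pi) type itself. A minimal Lean 4 sketch (the statement is chosen only for illustration):

-- "∀ (n : Nat), n + 0 = n" and "(n : Nat) → n + 0 = n" denote the same Pi type;
-- a proof is a function sending each n to a proof of the instance.
example : ∀ (n : Nat), n + 0 = n := fun n => Nat.add_zero n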

What is confusing me is that in Type Theory a dependent product type is the equivalent of quantification, yet System F and simple polymorphic types use universal quantification. I guess the conclusion is that System F is an extension of logic, and not a type theory at all? Maybe this is because Girard was a logician?