## An Intensional Type Theory: Motivation and Cut-Elimination

An Intensional Type Theory: Motivation and Cut-Elimination, Paul C. Gilmore.

By the theory TT is meant the higher order predicate logic with the following recursively defined types:

(1) 1 is the type of individuals and [] is the type of the truth values;
(2) [τ1, ..., τn] is the type of the predicates with arguments of the types τ1, ..., τn.
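This two-clause grammar can be sketched as a small Haskell datatype (the constructor names are mine, not the paper's); note that [] of truth values is just the n = 0 case of the predicate types:

```haskell
-- Sketch of TT's type grammar. Iota is the type 1 of individuals;
-- Pred ts is [t1, ..., tn], the type of predicates over arguments of
-- types ts. Pred [] is [], the type of truth values (0-ary predicates).
data Ty = Iota | Pred [Ty] deriving (Eq, Show)

-- Example: [1, [1]], the binary predicates taking an individual and a
-- unary predicate of individuals.
example :: Ty
example = Pred [Iota, Pred [Iota]]
```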

The theory ITT described in this paper is an intensional version of TT. The types of ITT are the same as the types of TT, but the membership of the type 1 of individuals in ITT is an extension of the membership in TT. The extension consists of allowing any higher order term, in which only variables of type 1 have a free occurrence, to be a term of type 1. This feature of ITT is motivated by a nominalist interpretation of higher order predication. In ITT both well-founded and non-well-founded recursive predicates can be defined as abstraction terms from which all the properties of the predicates can be derived without the use of non-logical axioms.

The elementary syntax, semantics, and proof theory for ITT are defined. A semantic consistency proof for ITT is provided and the completeness proof of Takahashi and Prawitz for a version of TT without cut is adapted for ITT; a consequence is the redundancy of cut.

Russell showed Frege's original foundations of mathematics to be inconsistent when he discovered Russell's paradox: does the set of all sets that do not contain themselves contain itself? The paradox arises because Frege's logic contained an unrestricted comprehension principle: any predicate whatsoever determines a set. This lets you cook up vicious cycles such as the one in Russell's paradox, and the traditional method (invented by Russell) for avoiding these cycles is to confine quantification to a predicative hierarchy: when you comprehend a predicate at level i, you create a set at level i+1, and a predicate at a given level can never mention sets at a higher level.
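Spelled out, the offending comprehension instance is the standard one (textbook notation, not taken from the Gilmore paper): comprehension lets you form the set of all non-self-membered sets, and asking whether that set belongs to itself yields a contradiction either way.

```latex
R \;=\; \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \;\Longleftrightarrow\; R \notin R
```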

This paper claims that you can allow unrestricted impredicative quantification if you keep careful track of Frege's sense-reference distinction, and distinguish between predicates and names of predicates. This (if it really works -- I haven't done more than skim the paper yet) would be an alternative to the predicative hierarchy as a way of avoiding the paradoxes.


### Great paper

I have read this paper in depth, and it is well worth it. It really draws on Montague's ideas on intensional type theory, and makes them relevant to both logic and programming.

For a functional programmer, there is one definite drawback: the whole exposition is relational rather than functional. In many places this hardly matters, but now and then it really makes a big difference.

There is however one thing that bothers me: although Gilmore is upfront that some terms may have 2 types, it seems to me that some (complicated) terms may in fact have many more than 2 types, which really would be troublesome.

However, I do believe that the clear distinction between extension and intension that Gilmore expands on is crucial, and has real applications in programming. We already have nominal and structural type theories; this is a more precise way of looking at, and of unifying, these ideas.

### algorithms for detecting equivalence among such things

Note that another tradition of sets-that-can-include-themselves generalization may be of interest to LtU readers: hypersets. You can make an analogy between a recursive type like

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)
```

and the set of leaf objects and branches of instances of itself. This gets interesting not only from the abstract mathematics-of-infinities issues that the Gilmore paper focuses on, but also from practical algorithmic considerations: efficiently detecting equivalences among the finite concrete instances of hypersets. Consider, e.g., something like

```haskell
-- the same tree shape with its right-hand argument unfolded one step
data Tree' a = Leaf' a | Branch' (Tree a) (TreeBody a)
data TreeBody a = LeafB a | BranchB (Tree a) (Tree a)
```
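The equivalence check itself can be sketched as a bisimulation over cyclic type graphs (this encoding and all names are my own illustration, not from any of the cited papers; real implementations use hash-consing or partition refinement for efficiency rather than this naive pairwise walk):

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- Minimal encoding of a cyclic type graph: a type node is a sum of
-- alternatives, each either a nullary Leaf or a Branch over two
-- successor nodes, referenced by Int id so the graph can be cyclic.
data Alt = LeafA | BranchA Int Int deriving (Eq, Show)
type Graph = M.Map Int [Alt]

-- Coinductive equivalence: two nodes match if their alternative lists
-- match pointwise; a pair already under consideration is assumed
-- equivalent, which is what makes the check terminate on cycles.
equiv :: Graph -> Int -> Int -> Bool
equiv g = go S.empty
  where
    go seen x y
      | (x, y) `S.member` seen = True
      | otherwise =
          case (M.lookup x g, M.lookup y g) of
            (Just as, Just bs)
              | length as == length bs ->
                  and (zipWith (alt (S.insert (x, y) seen)) as bs)
            _ -> False
    alt _    LeafA           LeafA           = True
    alt seen (BranchA l1 r1) (BranchA l2 r2) = go seen l1 l2 && go seen r1 r2
    alt _    _               _               = False

-- Node 0: Tree = Leaf | Branch Tree Tree.
-- Node 1: Tree with its right argument unfolded one step (via node 2).
demo :: Graph
demo = M.fromList
  [ (0, [LeafA, BranchA 0 0])
  , (1, [LeafA, BranchA 0 2])
  , (2, [LeafA, BranchA 0 0])
  ]
```

Here `equiv demo 0 1` holds: the unfolded graph is bisimilar to the original, which is exactly the sense in which the two declarations above describe the same trees.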

And see, e.g., a bullet-point summary (from the perspective of analyzing states of hardware, but that isn't completely alien to the problem of analyzing the states of software) at

www.bioinformatics.nyu.edu/BLISS/nyubissim.pdf

and an older paper on how the ideas and algorithms apply to incremental maintenance of type data in compiler-ish systems

http://www.cs.bu.edu/techreports/2000-006-hashconsing-recursive-types.ps

and my new draft preprint on how to do the incremental equivalence calculation reliably and efficiently, both for the (harder) unlabelled-edge case that the hardware people tend to care about and for the labelled-edge case that is common in recursive types in programming languages

http://www.wryttyndyffyryntly.com/preprint/hccycles-2007-03-14.pdf

(And somebody please tell me if it was inappropriate for me to stretch the topic to plug this. Stretching a LtU discussion to practicalities of compiling cyclic types doesn't seem too dangerous, but I haven't seen a lot of people making plugs like this, so I'm not sure.)

### Consistent and Complete?

Guess it can't include arithmetic then.

### Sure

Plain old predicate logic is consistent and complete too. It's when you add additional axioms to represent the natural numbers that you lose completeness. Nothing unusual here. And there are of course interesting first-order theories that are complete, like the first-order theory of real numbers.