## Type theory question...

I've been thinking (and reading) about dependent types and related topics (the Calculus of Constructions, Martin-Löf type theory, and so on).

First, I'd like to say how really clear and useful I found the second chapter of ATTAPL ("Dependent Types" by Aspinall and Hofmann). Their presentation of the CoC really clarified things for me. The extra overhead of the Prf type constructor and the distinction it draws between Π-types and ∀ terms was pretty instructive to me (I'd previously seen the CoC with only λ and ∀, as it's presented here). Obviously I've hardly gotten my feet wet, but as an introduction, I highly recommend this article.

But I'm left with a question that I hope someone here can answer. I'm confused by the standard terminology for Π- and Σ-types, "dependent products" and "dependent sums," respectively. For the life of me I can't fathom what these types have to do with anything I've normally thought of as a product or a sum. The product seems to be a function type, and the sum seems (against all common sense) to be a product! And in fact, to make matters even worse, what I generally think of as a "sum" (logical disjunction or variant types) seems to be most easily defined in terms of dependent product.

I guess I'm not alone in finding this confusing. This post on the types list is relevant, but doesn't really help me understand the logic (no pun intended) behind the standard terminology, and the observation that "the dependent product type was in fact a sum type" really highlights my lack of insight.

If we were used to classical logic we would be a bit surprised here: classically, the "existential quantifier" is thought of as a generalisation of "disjunction".

This might also be relevant (from here), but honestly I'm not even sure anymore! I really hope someone can clarify this for me, even if only from a historical perspective...

[Ehud, I'm not really sure if this kind of thing is appropriate for the front page... Any guidance?]

### I'm not really sure if this

I'm not really sure if this kind of thing is appropriate for the front page... Any guidance?

I guess the forum is more appropriate at this point.

### The reason this terminology

The reason this terminology is confused is that the forall and exists quantifiers are interpreted differently in intuitionistic and classical logic.

So, suppose we're talking about just plain old first-order logic with the quantifiers ranging over the natural numbers: in the formula forall x. P(x), the variable x ranges over the natural numbers.

Now, in intuitionistic logic, you can read forall x. P(x) as saying, "Given any natural number x, I have a method for constructing an object of type P(x)." This should make you think of functions -- a function of type nat -> foo is a method for constructing objects of type foo from natural numbers. So you can interpret universal quantification in intuitionistic logic as a generalization of functions. That's why pi-types are called "dependent function types".
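As a concrete sketch (in Lean 4, with a predicate `P` of my own choosing, not from the comment), a proof of a universal really is a dependent function:

```lean
-- Under Curry-Howard, a proof of `∀ n, P n` is a function that,
-- given any natural number `n`, constructs a proof of `P n`.
def P (n : Nat) : Prop := n + 0 = n

-- `proofOfForall` is literally a (dependent) function taking `n`
-- to a term of type `P n`.
def proofOfForall : ∀ n : Nat, P n :=
  fun n => Nat.add_zero n
```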

Now, let's think about the intuitionistic existential. exists x. P(x) means "I have an unspecified number x, along with a proof that it has property P(x)." So, the natural proof term for existential quantification is going to be a pair, consisting of the actual number that x is, and the proof term for P(x). That's why sigma-types are called "dependent product types".
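In Lean 4, say (a sketch with names of my own), the proof term really is such a pair:

```lean
-- A proof of `∃ n, n > 3` is a pair: a witness plus a proof
-- that the witness has the property.
def proofOfExists : ∃ n : Nat, n > 3 :=
  ⟨4, by decide⟩   -- (witness, proof that 4 > 3)

-- The same shape as a Σ-type: a pair whose second component's
-- type depends on the first component.
def aDependentPair : Σ n : Nat, Fin n :=
  ⟨3, 2⟩   -- here `2 : Fin 3`
```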

However, the quantifiers are understood differently in classical logic.

In classical logic, the universal quantification forall x. P(x) means that P(n) holds for every possible n. That's like saying that a universal quantification is equivalent to the infinite conjunction P(0) and P(1) and P(2) and P(3) and .... Since conjunction (ie, and) and product are linked, pi-types get called "dependent products".
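The analogy is exact for a finite domain; for instance, in Lean 4 (a standalone example of mine), a universal over `Bool` is just a two-fold conjunction:

```lean
-- "forall" over a two-element type collapses to an ordinary "and".
example (P : Bool → Prop) : (∀ b, P b) ↔ P true ∧ P false :=
  ⟨fun h => ⟨h true, h false⟩,            -- apply the function twice
   fun h b => match b with                -- rebuild the function
     | true  => h.1
     | false => h.2⟩
```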

The classical existential exists x. P(x) means that P(x) holds for some x, only you don't know which particular one. So you can understand it as a giant disjunction P(0) or P(1) or P(2) or .... Since disjunction (ie, or) and sum types are linked, sigma-types get called "dependent sums".
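Again the finite case makes the analogy exact (my own Lean 4 example, not from the comment):

```lean
-- "exists" over a two-element type collapses to an ordinary "or".
example (P : Bool → Prop) : (∃ b, P b) ↔ P true ∨ P false :=
  ⟨fun ⟨b, hb⟩ => match b, hb with
     | true,  h => Or.inl h
     | false, h => Or.inr h,
   fun h => match h with
     | Or.inl h => ⟨true, h⟩
     | Or.inr h => ⟨false, h⟩⟩
```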

And of course authors use whatever terminology they were trained to use, so there's no consistency whatsoever in the literature. For my own part, I say "pi-types" and "sigma-types", simply to avoid any possible confusion.

If your heart is truly set on using dependent sum or dependent product, then use the intuitionistic terminology, unless your language has call/cc or some other control operator that makes its Curry-Howard interpretation classical.

I hope this helped. Bad terminology is no fun. :(

### Thanks!

Yes, that helps enormously! Now I see that I was vaguely headed in the right direction, but I doubt I ever would have figured it out. Thank you so much.

### (Non-dependent) sums

And in fact, to make matters even worse, what I generally think of as a "sum" (logical disjunction or variant types) seems to be most easily defined in terms of dependent product.

I think you could define A + B as Σx:Bool. if x then A else B, but I guess that begs the question, as it requires conditionals.
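For what it's worth, that encoding goes through in Lean 4 (a sketch; `Sum'`, `inl'`, `inr'`, and `case'` are hypothetical names of mine, with `cond` playing the role of the conditional):

```lean
-- A sum built from a dependent pair over `Bool`: the first component
-- is the tag, the second has type `A` or `B` depending on the tag.
def Sum' (A B : Type) : Type := Σ b : Bool, cond b A B

def inl' {A B : Type} (a : A) : Sum' A B := ⟨true, a⟩
def inr' {A B : Type} (b : B) : Sum' A B := ⟨false, b⟩

-- "case" analysis falls out of pattern matching on the tag.
def case' {A B C : Type} (f : A → C) (g : B → C) : Sum' A B → C
  | ⟨true,  a⟩ => f a
  | ⟨false, b⟩ => g b

#eval case' (· + 1) String.length (inl' 41 : Sum' Nat String)  -- 42
```

Note how the pair representation matches the usual "tagged union" implementation of sums.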

### Category Theory

For another perspective, the categorical perspective, one analogy to support this terminology is the following.

If Δ : C → C×C is the functor that duplicates its argument, then (+) ⊣ Δ ⊣ (×), i.e. the sum type is left adjoint to Δ and the product type is right adjoint to it. For universal and existential quantification you need a bunch of fibration machinery, but ultimately you get ∃ ⊣ π* ⊣ ∀, where π* is called the weakening functor (as it interprets the operation of weakening the context). For dependent products and sums you need a bit more structure, making what is called a comprehension category, and you get Σ ⊣ (Pf)* ⊣ Π, where the (Pf)* are also called "weakening" functors, one for each f.
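Unfolding the first pair of adjunctions into their hom-set bijections (standard category theory, not something the comment spells out) makes the analogy concrete:

```latex
% Sum as left adjoint, product as right adjoint, to the diagonal
% \Delta Z = (Z, Z):
\mathrm{Hom}_{\mathcal{C}}(X + Y,\; Z)
  \;\cong\; \mathrm{Hom}_{\mathcal{C}\times\mathcal{C}}((X, Y),\; \Delta Z)
  \;=\; \mathrm{Hom}(X, Z) \times \mathrm{Hom}(Y, Z)

\mathrm{Hom}_{\mathcal{C}}(Z,\; X \times Y)
  \;\cong\; \mathrm{Hom}_{\mathcal{C}\times\mathcal{C}}(\Delta Z,\; (X, Y))
  \;=\; \mathrm{Hom}(Z, X) \times \mathrm{Hom}(Z, Y)
```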

If you're interested in this the Prospectus here gives a fairly good introduction and overview.

### Better terminology

The constructions one gets here are called indexed sum and indexed product, which have no conflict with "dependent product" and "dependent function space".

### Dependent Sum and Product

That's irrelevant; my point was that they reinforce the naming "dependent sum and product". Also, it is actually consistent for Σx:A.B (versus Σx:A.B(x)) to be A×B (and Πx:A.B to be B^A). Think of it as arithmetic: adding a copy of B for each element of A gives A×B, and multiplying a copy of B for each element of A gives B^A. Further (this essentially comes from the adjunctions), you expect a "sum" to have injections and a "case" analysis, and a "product" to have projections and a "tupling", as Σ and Π do respectively. Even further, the fact that an element of Σx:A.B is a pair is also consistent with the typical "tagged" implementation.
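Spelling out that arithmetic (my own elaboration) for the case where B does not depend on x:

```latex
% Cardinalities when B is independent of x : A
\left|\,\Sigma\, x{:}A.\,B\,\right| \;=\; \sum_{x \in A} |B| \;=\; |A| \cdot |B|
\qquad
\left|\,\Pi\, x{:}A.\,B\,\right| \;=\; \prod_{x \in A} |B| \;=\; |B|^{|A|}
```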

Overall, my opinion is that dependent sum and product are the right terms; they are also the only terms I believe I've seen used for them (or by far the more common).

### Fair enough

In fact, Martin-Loef's 1971 "An intuitionistic theory of types" used "cartesian product" to name the Pi type, and "disjoint union" to name the Sigma type.

### Types

In fact, Martin-Loef's 1971 "An intuitionistic theory of types" used "cartesian product" to name the Pi type, and "disjoint union" to name the Sigma type.

I do not think that is correct.

The disjoint union of A and B, A+B, is the interpretation of A | B; A & B is interpreted as the cartesian product, A×B.

The universal quantifier, forall(x:A)B(x), is interpreted as an [indexed] cartesian join, Pi(x:A).B(x), of a family of types. The interpretation is called either a Pi-type, a dependent function type, or a dependent product type.

The existential quantifier, exists(x:A)B(x), is interpreted as an [indexed] disjoint union, Sigma(x:A).B(x), of a family of types. The interpretation is called either a Sigma-type, a dependent product type, or a dependent sum type. In special cases, the Sigma-type can be either a cartesian join, A×B, when B does not depend on A, *or* a disjoint union when A is a two-element type and B is a family of two types.

### Sources

I have the transcript of the text in Sambin & Smith's "25 Years of Constructive Type Theory" here. Those are exactly the phrases Martin-Loef uses.

### Confusing

I have the transcript of the text in Sambin & Smith's "25 Years of Constructive Type Theory" here. Those are exactly the phrases Martin-Loef uses.

Could you provide an extended quote?

Even if he does, the usage is confusing, to say the least. How is one supposed to distinguish between what had commonly been called in math "disjoint union" and "cartesian product" (before the advent of M-L type theory) on one hand, and dependent product (function type) / dependent sum / "independent" cartesian join / "independent" disjoint union on the other, when one has only two labels for four things? How helpful is such conflation of terms?

### Distinction

I've not time right now to type up the context, but the difference between the cases is whether they are the union/product of pairs of types or of families of types.

### Terminology

difference between the cases is whether they are the union/product of pairs of types or of families of types

It's obvious that the context might (or might not) help to clarify the confusing terminology. My point is that the 'cartesian join'/'disjoint union' terminology when applied to dependent types is not helpful at all because it relies on false mathematical intuition regardless of whether or not it's been endorsed by M-L.

### Online PostScript file

Could you provide an extended quote?

### Reference

Thank you for the reference.

What I've read is quite different from Charles Stuart's claim:

In fact, Martin-Loef's 1971 "An intuitionistic theory of types" used "cartesian product" to name the Pi type, and "disjoint union" to name the Sigma type.

In fact, Martin-Lof speaks in terms of the "cartesian product of a family of types"/"disjoint union of a family of types" rather than just a "cartesian product"/"disjoint union". Later, he talks about the cartesian product/disjoint union of two types. He clearly differentiates between, say, a 'disjoint union of a family of types' and a mere disjoint union [of two types] by virtue of using different terminology. As a matter of fact, I used essentially Martin-Lof's terminology in my earlier message:

...
The existential quantifier, exists(x:A)B(x), is interpreted as an [indexed] disjoint union, Sigma (x:A).B(x) of a family of types
...

### Misled by whom?

What I've read is quite different from Charles Stuart's(sic) claim

Nonsense. Martin-Loef uses just the phrases I gave, without the qualifier "of a family of types", many times in the article, starting from section 1.3. When he talks of the cartesian product simpliciter being a type, or likewise the disjoint union being a type, he always refers to a family of types, not to the unrelated pair.

The distinction I made previously, between a family of types and a pair of types, captured this point exactly. What I wrote was not misleading.

As a matter of fact, I used essentially Martin-Lof's terminology in my earlier message:...

Equally, I had talked of indexed products and sums before I first replied to you. Don't fly off the handle with claims of being misled if you haven't bothered to follow the discussion.

### sums and products

The dependent type terminology originated, I believe, at Cornell.

The old style was:

'dependent function type' for Pi and 'dependent product type' for Sigma, because with simple types (no dependency) the former becomes just a function type and the latter a binary product.
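That degeneration is easy to see in, say, Lean 4 (a sketch with names of my own): when the family ignores its argument, the dependent types collapse to the simple ones.

```lean
-- With a constant family, `Π x : A, B` is definitionally just `A → B`,
-- and `Σ x : A, B` is a plain pair, i.e. `A × B`.
def sigmaToProd {A B : Type} : (Σ _ : A, B) → A × B :=
  fun ⟨a, b⟩ => (a, b)

def prodToSigma {A B : Type} : A × B → (Σ _ : A, B) :=
  fun ⟨a, b⟩ => ⟨a, b⟩
```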

Nowadays, the predominant usage appears to be 'dependent product' for Pi and 'dependent sum' for Sigma, which is sort of strange to those accustomed to the old terminology.