In particular, [it requires] that the type theory doesn't admit types as terms and doesn't provide any operations on types besides sum, product, and exponential.
That's not true.
For example, if your type theory exposed a function inhabited :: type -> bool which returned true if the type were inhabited and false if empty, then not every universal quantification would have such a free theorem.
That sounds possible, but I don't see a counterexample off the top of my head. Do you?
The naturality property does not depend on parametricity, BTW; rather it's the other way round, as there are some natural transformations which are not parametric. So being able to distinguish forall'd types is not necessarily enough to break the theorem.
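For concreteness, here is the naturality/free-theorem property for a polymorphic list function r :: forall a. [a] -> [a], sketched in Haskell (the helper name checkNaturality is mine, just for illustration; checking instances is of course not a proof):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The free theorem for any r :: forall a. [a] -> [a] says that r
-- commutes with map:  map f . r = r . map f.
-- This checks one instance of that equation.
checkNaturality :: Eq b => (forall a. [a] -> [a]) -> (c -> b) -> [c] -> Bool
checkNaturality r f xs = map f (r xs) == r (map f xs)
```

Any parametric r, e.g. reverse or drop 1, passes this check for every f and xs, whether or not r is invertible.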
Also, don't you think conversely that, if some feature of a type system breaks the naturality property, that that might reasonably be construed as an argument against adopting such a feature? After all, it charges a cost for a theorem one would otherwise get for free. :)
For example, forall a. [a] -> [bool] would exist but wouldn't be invertible.
What does invertibility have to do with it? In the example above r is not assumed to be invertible. Indeed, I can't think of any functions with that type which are invertible.
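To make that concrete: the free theorem for any r :: forall a. [a] -> [Bool] is r . map f = r, so r may depend only on the shape of its argument, never the elements, and no invertibility of r or f is assumed anywhere. A small sketch (helper names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Free theorem for r :: forall a. [a] -> [Bool]:  r . map f = r.
-- Neither r nor f is required to be invertible.
checkFreeTheorem :: (forall a. [a] -> [Bool]) -> (b -> c) -> [b] -> Bool
checkFreeTheorem r f xs = r (map f xs) == r xs

-- One (clearly non-invertible) inhabitant of the type:
-- mark the even positions of the list.
alternate :: [a] -> [Bool]
alternate = zipWith (\i _ -> even i) [(0 :: Int) ..]
```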
In my view, the reorganization of type theory into category theory has been very harmful to the subject. It has reinforced the separation of types and terms, leaving few researchers to ponder systems combining the two or to even consider the possibility that such systems might be valid and useful.
Hm, that's a fascinating point of view. It's a bit like saying microscopes have reinforced the separation of man and bacteria.
Did you know that there are perfectly good categorical models of dependently typed languages? Or that the fact that one can easily describe consistent and general categorical models of dependent types is often touted as one of the success stories of category-theoretic model theory?
Why? Category theory's focus on isomorphism (instead of equality) doesn't scale.
Doesn't it? First, the 'focus' on isos is not part of the letter, but rather the spirit, of category theory. Second, one should actually expect up-to-iso equivalence to scale better than up-to-equality equivalence, because it demands less, so there are more models, which translates back into more implementations. For example, demanding a type iso to a particular stack ADT lets you use anything which behaves like a stack, rather than that particular stack implementation. To me, this sounds like exactly the sort of thing you want for 'large-scale' programming and specification.
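That last point can be sketched in Haskell (the Stack class and both implementations below are hypothetical, purely for illustration): a specification that demands only something behaving like a stack is satisfied by any observably equivalent representation.

```haskell
-- Specify behaviour, not representation: anything satisfying
-- this interface will do.
class Stack s where
  empty :: s a
  push  :: a -> s a -> s a
  pop   :: s a -> Maybe (a, s a)

-- One implementation: a plain list.
newtype ListStack a = ListStack [a]

instance Stack ListStack where
  empty                    = ListStack []
  push x (ListStack xs)    = ListStack (x : xs)
  pop (ListStack [])       = Nothing
  pop (ListStack (x : xs)) = Just (x, ListStack xs)

-- Another, indistinguishable through the interface: a list that
-- also caches its length.
data SizedStack a = SizedStack !Int [a]

instance Stack SizedStack where
  empty                       = SizedStack 0 []
  push x (SizedStack n xs)    = SizedStack (n + 1) (x : xs)
  pop (SizedStack _ [])       = Nothing
  pop (SizedStack n (x : xs)) = Just (x, SizedStack (n - 1) xs)
```

Code written against Stack works with either, which is the kind of substitutability a specification up to iso buys you.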
John Baez's writings on things such as "the category of all categories" and "the category of all categories of all ..." illustrate the near-hopeless complexity of isomorphisms-of-categories perfectly.
Unlike set theory where one has an infinitely high tower of infinities?
As a result, simple notions like "the type of the type of all inhabited subtypes of the type of natural numbers" seem incomprehensibly difficult in category theory and hence in type theory.
You call that simple? :) (No, don't explain: I know what it means.)
Category theory brought useful advances to topology. No surprise here, because topology is precisely the study of isomorphisms between geometric objects.
Well, the study of certain equivalences (usually not isos) between geometric objects.
Anyway, maybe the `essence' of type theory and/or computation is also precisely the study of equivalences between computational objects? I'm not saying that it is but, if it can be characterized that way, maybe that fact would only become apparent after one tries to regard it that way.
It's not such a strange idea, after all. Church's thesis says that the lambda calculus, an archetypal equational language, is sort of the essence of computation. And Dana Scott's adaptation of topological notions forms the heart of domain theory.
Type theory, as it applies to any sort of actual programming system, just needs to answer the question, "which values are allowed in this context?" and there is no benefit to dragging isomorphism into this picture.
That is only the very tip of an enormous iceberg.
For most type systems, one can lay out the typing rules which answer your question in only a page or two. But it doesn't stop there. Then you develop the theory, to find out what properties characterize the type system: subject reduction, termination, confluence, etc. Which types behave the same way, that is, which types are interderivable, which types are isomorphic, is a natural question to ask; for example, it's a measure of how redundant the type system is.
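For instance, (a, b) -> c and a -> b -> c are interderivable; the Prelude's curry and uncurry witness the isomorphism (rewritten here as curry' and uncurry' so the sketch is self-contained):

```haskell
-- curry and uncurry (as in the Prelude) witness the iso
--   ((a, b) -> c)  ~  (a -> b -> c);
-- they are mutually inverse:
--   uncurry' (curry' f) = f  and  curry' (uncurry' g) = g.
curry' :: ((a, b) -> c) -> (a -> b -> c)
curry' f a b = f (a, b)

uncurry' :: (a -> b -> c) -> ((a, b) -> c)
uncurry' g (a, b) = g a b
```

In that sense a type system offering both forms is (harmlessly) redundant: each can be derived from the other.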
The state of type theory today seems to me like a set theory where most refuse to acknowledge the possibility that sets might themselves contain sets, or the existence of functions from sets to sets, from values to sets, or from sets to values.
That's an interesting analogy.
Sure, the current approach brings some simplifications to reasoning about types, but only at the expense of a wealth of new tools for expressing, refining, and reasoning about types.
Blame Russell. He should never have made that remark to Frege. :)
But seriously, it is easy to complain about all the heavy machinery one needs for categorical type theory, but much harder to provide an alternative which solves the same problems. The work of people like Baez is actually an attempt to organize and reduce that machinery, although I guess you don't see that.
