The Lambda Cube & Some Programming Languages

In the lambda cube, the three axes are terms-depending-on-types, types-depending-on-types, and types-depending-on-terms. To check my intuition:

Isn't sub-typing, in the manner of Java, C++ and C#, an example of plain terms-depending-on-types?

Are prototype-based languages, such as Javascript, plain terms-depending-on-terms?

Are multi-parameter typeclasses, C++ templates and "generics" examples of types-depending-on-types? Are associated types (for example, the iterator for a certain collection) "higher-order" or are they essentially the same thing?

Phase distinctions

In the lambda cube the three axes represent static relationships that are formalized through a type system; your first examples seem more dynamic and not so easily integrable into the cube's systems:

- sub-typing in Java, C++ and C#: if you are referring to dynamic, late-bound dispatch, then I guess we could say it's terms-depending-on-types mixed with terms-depending-on-terms: the behavior depends on which method definition is chosen, and that choice depends on the runtime type.

- Typeclasses, templates and generics are all instances of static types-depending-on-types, with resolution of class instances and instantiation of templates and generics happening at type-checking time. However, typeclasses and templates also give you terms-depending-on-types because of specializations, while generics by definition give you the same behavior for every type (or so I hope; at least parametric polymorphism does).

Answers

Isn't sub-typing, in the manner of Java, C++ and C#, an example of plain terms-depending-on-types?

No, subtyping is an entirely different dimension. The canonical example of "terms-depending-on-types" would be parametric polymorphism, as e.g. found in ML or as exemplified by generic methods in some OO languages.
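As a concrete illustration (a sketch in TypeScript; the names are illustrative, not from the thread), a generic function is a single term whose meaning is parameterised by a type:

```typescript
// Parametric polymorphism ("terms-depending-on-types"): one term whose
// behavior is uniform in the type parameter T, like System F's big-lambda.
function identity<T>(x: T): T {
  return x;
}

// The same term, instantiated at different types:
const n: number = identity<number>(42);
const s: string = identity<string>("hello");
```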

Are prototype-based languages, such as Javascript, plain terms-depending-on-terms?

Javascript is untyped, so it's outside the classification of the lambda cube. A better example would be, say, Pascal. Prototypes don't have any relevance to this.

Are multi-parameter typeclasses, C++ templates and "generics" examples of types-depending-on-types? Are associated types (for example, the iterator for a certain collection) "higher-order" or are they essentially the same thing?

The primary examples of type constructors ("types-depending-on-types") are parameterised type definitions as found in ML or Haskell.
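For example (a TypeScript sketch with illustrative names), a parameterised type definition acts as a function from types to types:

```typescript
// A type constructor ("types-depending-on-types"): Pair maps an argument
// type T to a new type, much like 'a pair in ML or a Pair a in Haskell.
type Pair<T> = { fst: T; snd: T };

// Applying the constructor to different types yields different types:
const p: Pair<number> = { fst: 1, snd: 2 };
const q: Pair<string> = { fst: "a", snd: "b" };
```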

Type classes again are a different dimension, sometimes called "qualified types".
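One way to see why qualified types are a separate dimension is the standard dictionary-passing reading of type classes: a constraint like `Eq a => ...` becomes an ordinary extra argument. A hedged TypeScript sketch (the names `Eq` and `elem` are just echoes of the Haskell originals):

```typescript
// Dictionary-passing view of a type class: the qualifier "Eq a" is reified
// as a record of operations that is passed explicitly at each use site.
interface Eq<A> {
  eq: (x: A, y: A) => boolean;
}

// Roughly "elem :: Eq a => a -> [a] -> Bool", with the constraint made
// into a plain first argument.
function elem<A>(dict: Eq<A>, x: A, xs: A[]): boolean {
  return xs.some(y => dict.eq(x, y));
}

// An instance is just a dictionary value:
const eqNumber: Eq<number> = { eq: (x, y) => x === y };
```

Under this reading, the compiler's job in Haskell is merely to construct and insert such dictionaries automatically.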

C++ templates are essentially untyped macros, but ignoring that fact (and a couple of others) you could view class templates as type constructors ("types-depending-on-types") and function templates as polymorphic functions ("terms-depending-on-types"). They can even be higher-order to a limited extent, by using "template template arguments".

"Generics" a la Java or C# provide a somewhat more principled implementation of similar concepts. They are not higher-order. Scala has fully higher-order generics and member types.

Associated types as in C++ or Haskell are slightly different, and are rather interpreted as type-indexed families of types - which means that their definitions are not uniform for different arguments. In fact, the same could already be said about C++ templates, since template specialisation might render them non-uniform - but it is largely moot trying to give a proper type-theoretic categorisation to C++.

As for dependent types ("types-depending-on-terms"), they hardly show up in today's mainstream languages. They can be found in the languages of provers like Coq, Twelf, or Agda. If you wanted, you could view C-style array types with their bounds as a very simple instance of a dependent type.
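A faint echo of this can be sketched in TypeScript: fixed-length tuple types put a specific number into a type, much like C's `int[3]`. The length must be a static literal, though, so this is far from dependent typing proper:

```typescript
// A tuple type fixes its length statically, so the "value" 3 appears in
// the type -- a very weak cousin of C's int[3], not a true dependency.
type Vec3 = [number, number, number];

const v: Vec3 = [1, 2, 3];
// const bad: Vec3 = [1, 2];  // rejected: wrong length
```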

Sub-typing "is evil"

Isn't sub-typing, in the manner of Java, C++ and C#, an example of plain terms-depending-on-types?

No, subtyping is an entirely different dimension. The canonical example of "terms-depending-on-types" would be parametric polymorphism, as e.g. found in ML or as exemplified by generic methods in some OO languages.

How do we understand sub-typing in the context of the lambda calculus? Is this why sub-typing (to paraphrase Wadler) "is evil"?

Perhaps (very) convenient

Perhaps (very) convenient sugar? E.g., arithmetic has structure, but outside of the problems it caused Church, most of it is noise. Perhaps refinement types could be similarly interpreted.

Btw, any reference on the Wadler statement?

Reference

He said it at the BayFP meeting, late last year some time. It had the flavor of a characteristic remark.

limited dependent typing

As for dependent types ("types-depending-on-terms"), they hardly show up in today's mainstream languages.

You can view the template value parameters in C++ as a small fragment of dependent typing in a mainstream language.

Subtyping

[Oops, wrong Reply button. Meant as reply to this post.]

How do we understand sub-typing in the context of the lambda calculus?

Well, subtyping is well-understood in the LC as just that, subtyping. Pierce's "Types and Programming Languages" discusses it in great detail.

But from your original question and Matthieu's reply I gather that by subtyping you actually mean late binding. However, this has nothing really to do with subtyping - not even with typing. It exists in untyped languages, too, e.g. prototype-based OOPLs. Late binding is just an ad-hoc form of higher-order argument passing. That some languages tie it to classes and subclassing (which isn't the same as subtyping, btw) is just an artefact of these languages.
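To make the last point concrete, here is a hedged TypeScript sketch (names are illustrative): late binding is just passing functions as data, and it needs neither classes nor subclassing:

```typescript
// Late binding as higher-order argument passing: an "object" is a record
// of functions, and the call site runs whatever function the record holds.
interface Shape {
  area: () => number;
}

function describe(s: Shape): number {
  // Which code runs is decided by the value passed in, not by any
  // class hierarchy -- this is all that "late binding" requires.
  return s.area();
}

const square: Shape = { area: () => 4 * 4 };
const circle: Shape = { area: () => Math.PI * 2 * 2 };
```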

Is this why sub-typing (to paraphrase Wadler) "is evil"?

Subtyping is often deemed evil because the meta-theory of a type system including it becomes much more complicated. Worse, it tends to destroy certain useful meta-theoretical properties. As a result, complete type inference is practically intractable for the simplest systems already, and in more expressive ones even decidability of mere type checking can quickly go overboard. From a practical perspective, types become less accurate and implementation more expensive, because subtyping polymorphism applies everywhere implicitly.

I'm curious with respect to

I'm curious with respect to true subtype relationships. Perhaps you mean it "is well-understood in the LC as just that, subtyping" similar to what I wrote as sugar -- a useful ordering (or replacement, etc.) relationship that requires a bit of footwork and therefore adds noise.

It's not about dependencies, but (behavioral) equality -- a lateral movement on an axis.

Curious about what?

[Oh my, used the wrong comment field again. I should stay in bed where I belong.]

I agree that subtyping mainly adds convenience. You can almost always translate it away using explicit coercions or injections. Still, I don't understand what you mean. What is "true" subtyping? And what "dependencies" are you referring to?

The lambda cube seems to be

The lambda cube seems to be about expressing [term, type] x 'depends on' x [term, type]. How does supporting the swappability of terms that follow a subtyping relationship fit?

It expands our ability to make abstractions beyond what is provided by, say, let-polymorphism, but, as you noted, not in a deep way. This would suggest it pushes us further along the terms-depending-on-types axis. It is not on the types-on-types or types-on-terms axes because the notion of swappability is a 'hard-coded' relationship in the language (though perhaps tying it to subclassing muddies this up; I haven't thought about it).

Just a simple philosophy exercise :)

The lambda cube seems to be

The lambda cube seems to be about expressing [term, type] x 'depends on' x [term, type]. How does supporting the swappability of terms that follow a subtyping relationship fit?

The lambda cube is not a complete picture. As Andreas' responses have suggested, there are many more axes you could consider. Trying to shoehorn everything into the lambda cube is not particularly beneficial.

more than convenience (?)

I agree that subtyping mainly adds convenience.

On the caller's side, I see subtyping as a convenience, saving the explicit use of coercion functions, but when combined with, say, parametric polymorphism to give bounded quantification, I get the feeling it is more than just a convenience.
I'm not much of an expert on type systems, but thinking about System F-sub, I was wondering: can System F-sub be translated into System F plus existential types (or some other combination of features that avoids a subtyping relationship)?

Yes

You translate bounded quantification into unbounded quantification plus abstraction over a corresponding coercion function:

forall a < t. u ==> forall a. (a -> t) -> u
exists a < t. u ==> exists a. (a -> t) * u
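The same translation can be mimicked in a language with bounded generics. A hedged TypeScript sketch (illustrative names), where the bound `A extends T` is replaced by an unbounded parameter plus an explicit coercion argument:

```typescript
interface T { size: number }

// Bounded quantification: forall a < T. a -> number
function sizeBounded<A extends T>(x: A): number {
  return x.size;
}

// Translated form: forall a. (a -> T) -> a -> number. The coercion that
// subtyping applied implicitly is now an ordinary argument.
function sizeCoerced<A>(coerce: (a: A) => T, x: A): number {
  return coerce(x).size;
}

// The caller supplies the coercion explicitly:
const r = sizeCoerced((str: string) => ({ size: str.length }), "hello");
```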

Thanks, that's even simpler

Thanks, that's even simpler than what I was thinking about. I retract my previous statement then; subtyping in this case really is only a convenience.