Symmetry in type systems

Type systems exhibit a certain symmetry that causes almost all features to be duplicated. For example: sums & products, covariance & contravariance, unions & intersections, universals & existentials, extensible records & extensible variants, etc. In each case you can transform one into the other with callbacks (continuation passing).
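To make the sums & products case concrete, here is a minimal OCaml sketch of that transformation; the names `sum_cps`, `inl`, and `inr` are mine, for illustration. A CPS-encoded sum does not *store* a left or a right; it *consumes* a pair of callbacks, one per case, and a pair of callbacks is a product:

```ocaml
(* A sum, CPS-encoded: it takes one continuation per case. *)
type ('a, 'b, 'r) sum_cps = ('a -> 'r) -> ('b -> 'r) -> 'r

let inl (x : 'a) : ('a, 'b, 'r) sum_cps = fun on_a _on_b -> on_a x
let inr (y : 'b) : ('a, 'b, 'r) sum_cps = fun _on_a on_b -> on_b y

(* Consuming the sum means supplying the product of continuations. *)
let to_string (s : (int, string, string) sum_cps) : string =
  s (fun n -> "int: " ^ string_of_int n)
    (fun msg -> "string: " ^ msg)

let () =
  print_endline (to_string (inl 42));      (* int: 42 *)
  print_endline (to_string (inr "hello"))  (* string: hello *)
```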

Even a function type like int -> string is made up of two parts. If we see a type annotation E(v : int -> string), that is saying two things: (1) the value v, when given an int, will produce a string; and (2) the continuation E will only call v with an int, and will be happy to get a string in return. The type annotation is not just constraining what the value can be, but also what the continuation can be.
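A tiny OCaml example of both halves of that contract (the bindings are mine, for illustration):

```ocaml
(* (1) constrains the value: v must turn an int into a string. *)
let v : int -> string = string_of_int

(* (2) constrains the continuation: the context may only apply v
   to an int, and receives a string back. *)
let () = print_endline (v 3)

(* A continuation that breaks its half of the contract is rejected: *)
(* let _ = v "three" *)
```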

What should I read to understand this two-sidedness better? Some questions I have are: in physics, when a problem has a symmetry, that symmetry can often be eliminated to make the problem simpler. Can the symmetry in type systems be eliminated? For example, you can imagine a purely CPS language where you have a type K(T) that represents a continuation that takes a T. Then you have K(A + B) = K(A) x K(B), and similar equations for the other types. For function types you have A -> B = K(A x K(B)). K essentially flips the constraints around: the constraints that were laid on the value are now laid on the continuation, and vice versa. Has this been explored? What should I read?
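Both equations can be written out as programs. Below is a sketch of the hypothetical K(T) as an OCaml type 'a k over an abstract answer type; the names answer, k, split, join, and to_cps are all mine, for illustration:

```ocaml
type answer                      (* opaque result of "the rest of the program" *)
type 'a k = 'a -> answer         (* K(T): a continuation that takes a T *)

type ('a, 'b) sum = L of 'a | R of 'b

(* K(A + B) = K(A) x K(B): a consumer of "A or B" is exactly
   a pair of consumers, one for each case. *)
let split (c : ('a, 'b) sum k) : 'a k * 'b k =
  ((fun a -> c (L a)), (fun b -> c (R b)))

let join ((ka, kb) : 'a k * 'b k) : ('a, 'b) sum k =
  function L a -> ka a | R b -> kb b

(* A -> B = K(A x K(B)): a function is a consumer of an argument
   paired with a continuation expecting the result. *)
let to_cps (f : 'a -> 'b) : ('a * 'b k) k =
  fun (a, return) -> return (f a)
```

Note that the reverse direction of the last equation, ('a * 'b k) k -> 'a -> 'b, cannot be written while answer stays abstract: extracting the result from under K is essentially double-negation elimination, and needs a control operator like call/cc. That asymmetry seems to be where the symmetry stops being free in a direct-style language.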

Another question is: does it make sense to separate the two pieces of a type? For example, does it make sense to have a function type that only says "when given an int, v produces a string" but does not guarantee that the continuation will only call it with an int (perhaps in combination with other features like intersection and union)? Does this two-sidedness arise in other languages, like logic languages, where it's not so clear which parts are inputs and which are outputs?