Type dispatch on continuations is isomorphic to type dispatch on calls. Why, then, is it considered "unsound"?

Why is doing type dispatch on method calls more acceptable to (static) type theorists than doing type dispatch on continuation (return) calls?

Function calls and function returns are completely isomorphic, as anyone can prove by transforming a program into continuation-passing style. What possible justification, then, can there be for considering dispatch on calls "sound" and dispatch on returns "unsound"?
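For concreteness, here is a minimal sketch of that transformation in OCaml (the names are mine, purely for illustration):

    (* Direct style: the return point is implicit. *)
    let square x = x * x

    (* Continuation-passing style: the return point is reified as an
       explicit call to the continuation k; "returning" a value and
       "calling" a continuation are the same act. *)
    let square_cps x k = k (x * x)

    let () =
      Printf.printf "%d\n" (square 5);                 (* prints 25 *)
      square_cps 5 (fun r -> Printf.printf "%d\n" r)   (* prints 25 *)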

The more I think about this the more fundamental a question it seems to become.

The essence of sound static typing is that, at any point in the program, we know exactly what type everything is, yes? And, for the purists, without resorting to nasty labels like type tags. We can achieve that in the presence of polymorphic function calls by choosing the point in the program at which to continue based on the type(s) we are returning.

There is no difference between this technique and choosing the point in the program to call based on the argument types, because after all every function return is just a function call to a continuation. In both cases we know exactly what type everything is at every point in the program, simply because we never go to a point in the program where the types we have in hand would mismatch the types expected there.

Syntactically some code may look like a ladder of type checks, but each clause can be treated as a separate continuation address to pass to the function whose result it is checking.
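As a sketch of that reading, in OCaml (with invented names): a match ladder over a sum type, and the same dispatch rephrased as a choice between two continuation arguments:

    type value = Int of int | Bool of bool

    (* Syntactically, a ladder of type checks... *)
    let describe v =
      match v with
      | Int n  -> Printf.sprintf "an int: %d" n
      | Bool b -> Printf.sprintf "a bool: %b" b

    (* ...but each clause is just a continuation that the producer of
       the value could have been handed directly; control never reaches
       a clause whose expected type mismatches the value in hand. *)
    let describe_k v ~on_int ~on_bool =
      match v with
      | Int n  -> on_int n
      | Bool b -> on_bool b

    let () =
      print_endline (describe (Int 3));
      print_endline
        (describe_k (Bool true)
           ~on_int:(fun n -> Printf.sprintf "an int: %d" n)
           ~on_bool:(fun b -> Printf.sprintf "a bool: %b" b))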


There seems to be a misunderstanding

Firstly, functions that have multiple return points (they take a product of continuations) are equivalent to functions that return a sum value; you can convert back and forth between them. In practice I'm not aware of any language that uses type dispatch to choose a continuation; what are you thinking of? For example, switch (foo.Type) { Int: ...; Bool: ... } would be a value dispatch on a sum type that happens to be a runtime representation of type information.
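To make that conversion concrete, a sketch in OCaml (type and function names invented for illustration):

    type ('a, 'b) sum = L of 'a | R of 'b

    (* A producer that returns a sum value... *)
    let classify (n : int) : (int, string) sum =
      if n >= 0 then L n else R "negative"

    (* ...is interconvertible with one that takes a product of
       continuations, one return point per case... *)
    let classify_k (n : int) (on_int : int -> 'r) (on_err : string -> 'r) : 'r =
      match classify n with
      | L v -> on_int v
      | R e -> on_err e

    (* ...and back: recover the sum by passing the injections L and R
       as the two continuations. *)
    let classify' (n : int) : (int, string) sum =
      classify_k n (fun v -> L v) (fun e -> R e)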

Type soundness means that "well-typed programs do not go wrong". This is a "subjective" statement in the sense that it depends on what you mean by "go" and by "wrong". You need to define which programs are well-typed, how programs compute, and which kinds of computation are considered "wrong"; usually "wrong" means getting "stuck", reaching a point where the behavior is undefined without being in a final state of returning a (well-typed) value. But in most practical programming languages, for example, looping endlessly without doing anything, failing on a division by zero, or an array index out of bounds is not considered "going wrong".
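For instance (my own OCaml illustration), both of the following are well-typed, and neither gets "stuck" in the formal sense, so a typical soundness statement tolerates both:

    (* Well-typed, diverges: usually not counted as "going wrong". *)
    let rec loop () : int = loop ()

    (* Well-typed, raises the defined exception Division_by_zero when
       called: also not "stuck", since the behavior is fully defined. *)
    let crash () : int = 1 / 0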

You can certainly model static type systems for languages that use dynamic types and/or various type dispatch mechanisms, and reason about whether they are sound or not. "Dynamic types" usually means that tags reflecting values' static types are attached to them, and that when a value is needed, a runtime check is done on this tag to decide what to do (note that ML languages without dynamic typing also make use of these runtime mechanisms when manipulating values of algebraic data types; at runtime there is no difference, it is only a question of intent, of whether the information models something we think of as a "type" or not).
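A sketch of that parenthetical in OCaml (illustrative names): a "dynamic" value can be modeled as an ordinary variant whose constructor plays the role of the runtime tag, and the runtime check is ordinary pattern matching:

    type dyn = Int of int | Bool of bool | Str of string

    (* Dispatch on the tag; the catch-all clause is the "wrong type" case. *)
    let add_one (d : dyn) : dyn =
      match d with
      | Int n -> Int (n + 1)
      | _ -> failwith "add_one: expected an Int"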

A dynamic language that raises a "Method not found" exception when you use an object of, morally, "the wrong type" will be considered sound under a type system where this is counted as "not going wrong", because the exception was designed to catch worse trouble like segfaults (a reasonable thing to do if you're designing an implementation for this language). On the contrary, if you want to provide an analysis that forbids such "method not found" or "null" errors, you must provide a type system that is still sound when these are considered "going wrong".