Practical Type Inference Based on Success Typings

We show that it is possible to reconstruct a significant portion of the type information which is implicit in a program, automatically annotate function interfaces, and detect definite type clashes without fundamental changes to the philosophy of the language or imposing a type system which unnecessarily rejects perfectly reasonable programs. To do so, we introduce the notion of success typings of functions. Unlike most static type systems, success typings incorporate subtyping and never disallow a use of a function that will not result in a type clash during runtime. Unlike most soft typing systems that have previously been proposed, success typings allow for compositional, bottom-up type inference which appears to scale well in practice.
A recent paper that uses a subset of Erlang for its examples, continuing the line of work on uncovering type errors in dynamically typed Erlang. One such tool, Dialyzer, is now part of the Erlang distribution.


Success typing

Success typing attempts to find the most restrictive type such that all calls to the function that do not conform to the inferred type fail. The authors use union types when several argument or return types fit the bill. External functions can be typed simply with (any())* -> any().
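A toy Python sketch of this union-based inference (my own encoding of types as sets of tags, not the paper's constraint-based algorithm), applied to the and/2 example that comes up later in the thread. Unioning the clauses' argument types position-wise widens the domain all the way to (any(), any()):

```python
# Toy sketch: types as finite sets of tags, with ANY as the top of a tiny lattice.
# This is an illustrative encoding, not the paper's actual inference algorithm.
ANY = frozenset({"true", "false", "int", "atom"})  # stand-in for any()
TRUE = frozenset({"true"})
FALSE = frozenset({"false"})
BOOL = TRUE | FALSE

# Clause-by-clause argument domains of the and/2 example:
#   and(true, true) ; and(false, _) ; and(_, false)
clauses = [(TRUE, TRUE), (FALSE, ANY), (ANY, FALSE)]

def success_domain(clauses):
    """Union the argument types position-wise across clauses, as
    success-typing inference does for multi-clause functions."""
    arg1 = frozenset().union(*(c[0] for c in clauses))
    arg2 = frozenset().union(*(c[1] for c in clauses))
    return arg1, arg2

d1, d2 = success_domain(clauses)
print(d1 == ANY and d2 == ANY)  # True: the union widens to (any(), any())
```

The widening is what keeps the typing an overapproximation: no call that could succeed is ever rejected, at the cost of precision.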

Very interesting paper, thanks James.

Combining strong and success typing

So success typings overapproximate the type as a set of allowed values (and underapproximate its complement).
"Failure" typings do the opposite. Both have their merits.

Would it be useful to infer both typings in the same language, and thus provide three degrees of certainty: (1) an expression is definitely mistyped, (2) definitely well-typed, or (3) uncertain?

As I understand it, success typings cater for 1 and 3, while "failure" typings cater for 2 and 3.
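A toy sketch of this three-way classification (the sets below are hypothetical stand-ins, not from the paper): an overapproximation of the calls that can succeed rules things definitely out, while an underapproximation rules things definitely in, and everything between the two is uncertain:

```python
# Hypothetical call sets for some two-argument function.
# SUCCESS overapproximates the calls that can succeed;
# SAFE underapproximates them (SAFE is a subset of SUCCESS).
SUCCESS = {("true", "true"), ("false", "true"), ("true", "false"),
           ("false", "false"), ("false", "int")}
SAFE = {("true", "true"), ("false", "false")}

def classify(call):
    if call not in SUCCESS:
        return "definitely mistyped"    # outside the overapproximation
    if call in SAFE:
        return "definitely well-typed"  # inside the underapproximation
    return "uncertain"                  # in the gap between the two

print(classify(("int", "int")))    # definitely mistyped
print(classify(("true", "true")))  # definitely well-typed
print(classify(("false", "int")))  # uncertain
```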

BTW, what would be the accepted term for "failure" typing? I am not sure it exactly corresponds to strong typing.

Duality between narrowing versus widening type inference algos

This success typing work is nicely simple and elegant. It also lets me ask the following question:
I have noted that some type inference algorithms "grow" types (widen?) while others refine types (narrow?). This looked similar to solving primal versus dual problems, say with a simplex algorithm. So the question is: does this comparison make sense, and are there "primal-dual" algorithms that do type inference both from the "bottom" and the "top" of the type hierarchies at the same time?

[later edit]
On consideration, the comparison to primal-dual makes no sense, as it demands a change of basis. Type inference, whether starting from the bottom or the top of the type lattice, is still "using the same coordinates". Still, I hope the rest of the question makes sense.

I thought a similar thing

I thought a similar thing but along the lines of duality between filters and ideals. Not knowing much about the simplex algorithm I can't make a more useful contribution.


This topic seems particularly interesting to me; would anyone happen to know of other good or similar papers out there?

Intersection Types?

I'm not sure if intersection types are exactly what I'm talking about here, but they'd seem to provide an answer to the and example in the paper:

and(true , true)  -> true;
and(false, _   )  -> false;
and(_    , false) -> false.

Could be typed as:

and : ((bool(), bool()) -> bool()) & ((bool(), any()) -> bool()) & ((any(), bool()) -> bool())

However, I can think of an example (using Ruby-esque syntax) that would be problematic:

def foo(a)
  if a.fuzzy?
    a.fuzz
  end
end

The object a may only have the fuzz method if fuzzy? is true. Although I don't know Erlang all that well, I imagine a similar example could be contrived.