Constraint-Based Type Inference and Parametric Polymorphism

An old paper ('94) by Ole Agesen, abstract:

Constraint-based analysis is a technique for inferring implementation types. Traditionally it has been described using mathematical formalisms. We explain it in a different and more intuitive way as a flow problem. The intuition is facilitated by a direct correspondence between run-time and analysis-time concepts.

Precise analysis of polymorphism is hard; several algorithms have been developed to cope with it. Focusing on parametric polymorphism and using the flow perspective, we analyze and compare these algorithms, for the first time directly characterizing when they succeed and fail.

Our study of the algorithms led us to two conclusions. First, designing an algorithm that is either efficient or precise is easy, but designing an algorithm that is efficient and precise is hard. Second, to achieve efficiency and precision simultaneously, the analysis effort must be actively guided towards the areas of the program with the highest pay-off. We define a general class of algorithms that do this: the adaptive algorithms. The two most powerful of the five algorithms we study fall in this class.


Why has type inference in OO languages failed?

So I'm going through the literature and I can't really find what went wrong: most of our languages support at best local type inference and only those that forgo subtype polymorphism (like OCaml or Haskell) can do better than that?

I hate it when there is a long trail of literature but no negative-results papers :p; a failure to be self-critical is a huge failure in our field. Perhaps there should be a statute of limitations on ideas: if it isn't useful 10 years after publication, then it fails by default and others should be allowed to try again.

Because subtyping is a pig

Subtyping gives rise to inequality constraints instead of equality constraints. With inequalities there is no unification; without unification the complexity of types (and their bounds) explodes and pollutes everything, and with that both the complexity of the analysis and the incomprehensibility of user-facing error messages become impractical at scale.

Nobody is disallowed from trying again. Just don't assume that anybody feels obliged to care if you haven't even bothered doing your homework. ;) No, one cannot forego the boring old-fashioned way of doing research by reading up on a field first, and no, one cannot simply assume that everybody was doing it all wrong anyway and they just were too full of themselves to see or admit it.

In the framework you are

In the framework you are comfortable with, subtyping is a pig for sure. But you can build a type system and do type inference (gasp) without unification.

I'm all for doing my homework! I'm not complaining about reading the papers, but the fact that they all end "done and finished", then the field just completely dies in 1997 or so and the researchers moved on to do other things. How does one bridge gaps between two histories when other researchers have felt this is unnecessary? It is like the oblivious crap going on between self-adjusting computation, FRP, and constraint programming.

The big problem is that many of the conclusions between here and now are unpublished; consider:

The reason Scala does not have Hindley/Milner type inference is that it is very difficult to combine with features such as overloading (the ad-hoc variant, not type classes), record selection, and subtyping. I’m not saying impossible — there exist a number of extensions that incorporate these features; in fact I have been guilty of some of them myself. I’m just saying it’s very difficult to make this work well in practice, where one needs to have small type expressions, and good error messages. It’s not a shut case either — many researchers are working on pushing the boundaries here (look for instance at Remy’s MLF). But right now it is a tradeoff of better type inferencing vs better support for these features. You can make the tradeoff both ways. The fact that we wanted to integrate with Java tipped the scales in favor of subtyping and away from Hindley/Milner.

Probably because our field isn't so interested in aggregating results that are not all positive, so all big conclusions are made in smoky rooms.

let there be types

I'm reminded of the classic Lisp quote about a diamond versus a ball of mud. Type systems are generally imposed by centralized fiat, whereas in a thriving software community source code wants to accumulate by anarchic sprawl.

What the heck is a "type"

What the heck is a "type" anyways? Is it a type system if it can detect type errors and drive code completion, but lacks a formal notion of a type?

I like imposing my will via centralized fiat. Too much flexibility is not for everyone, or for most of us.

Hm, 'type system' usually

Hm, 'type system' usually refers to the formal systems, perhaps because they're generally the most extensive type systems. (I had mostly formal systems in mind when venting my own skepticism on types.)

By policy, LISP has never really catered to mere mortals. — Larry Wall

The object of a conservative static analysis

Speaking as generally as possible, "type" only has meaning within a type system, and a type system is any static analysis which, when it and the code disagree on something, assumes that it is right and the code is wrong.

So in answer to your question, yes.

Uhm, in my system the code

Uhm, in my system the code is right and the type is expanded to accommodate the code. That is until an incompatibility is detected (defined as two traits that can't be extended together, like bool and int), or a term has been frozen in a signature because it was assigned from an encapsulated object behind the signature.

Types are more interesting to me as a basis for providing semantic feedback in general, not just type errors. We over-focus on errors because functional programming languages have crappy support for code completion in spite of their advanced type systems.

I think we agree...

To me that sounds like "The code is correct, right up until my analysis can explain why it's wrong." In the event of an incompatibility, does your system reject the code, or does it issue a warning and run anyway?

I agree that types are interesting for more than just errors, but I think their defining characteristic (in contrast with other forms of analysis) is what happens when the system detects something it can't properly reason about (inconsistency, inability to infer, etc.).

Type checking is fully

Type checking is fully incremental; this is part of my live programming work, after all, and the code needs to run even if it has type errors :). When an incompatibility is detected, an error is displayed under the offending line in the source AND the last offending trait is not admitted into the term's trait extension set. This arguably leads to inconsistencies, as different orders lead to different messages, but it prevents error cascades from occurring.

Right and wrong, pfah. There's only consistency & inconsistency

Static analysis is useful. But there are problems I have no idea how to apply it to. So I'm not satisfied with it being the only thing available in a language's type system.

As for whether the code or the program is "right", the issue is merely that the static type analysis and program code are inconsistent with each other. Whether you handle that by refusing to start the program, or by some other means, is not implicit in the inconsistency.

It's about how you treat inconsistency

I'm expressing the view that the key difference between type systems and other static analyses is that type systems take the perspective that an inconsistency is an error. Do you disagree?

As for static analysis being insufficient, I thought that the prevailing view was that type checking had to be done during compilation, and thus anything which required runtime information (like Python's types) was not considered part of the type system, at least in the PL theory sense. If that is the case, then type checking is just a particular kind of static analysis, isn't it?

Composition, Modularity

the key difference between type systems and other static analyses is that type systems take the perspective that an inconsistency is an error. Do you disagree?

There are many forms of "white box" static analysis that require access to implementation details, such as abstract interpretation. Analysis in the presence of macros, templates, etc. also involves a white box.

Type systems support a "black box" analysis. One can know the type of a composite by knowing the types of the components and the composition operators, without knowing the details for any of them. The implementation details may be hidden, so long as they are compatible with the expected type. Types enable modular separation and development of components.

IMO, the compositional nature of types is the key difference between type systems and other static analysis.

Also, type systems aren't strictly about analysis or error detection. They can be used to infer code, to fill gaps and select between potential subprograms. I understand type systems generally as a modular model for staged computation.
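As a small, concrete illustration of types selecting between potential subprograms (an everyday C++ analogue; a minimal sketch, not the full staged story):

#include <iostream>
#include <string>

// Overload resolution picks a subprogram from the argument types alone,
// without consulting the implementation of either candidate.
void describe(int)                { std::cout << "an int\n"; }
void describe(const std::string&) { std::cout << "a string\n"; }

int main() {
  describe(42);                     // the int overload is selected
  describe(std::string("hello"));   // the string overload is selected
}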

What's your point?

Probably because our field isn't so interested in aggregating results that are not all positive, so all big conclusions are made in smoky rooms.

What's your point? Nobody knows how to do scalable inference for these features, but they all should be writing papers about what they don't know? Note that there is a difference between a negative result and the lack of a positive one.

There are quite a few negative results on type inference in literature, e.g. undecidability of interesting systems (such as higher-order polymorphism). Or quasi-negative ones, e.g. disheartening complexity bounds (especially in the area of subtyping!). But in addition, every field also has its "folklore" knowledge that everybody understands already, so nobody found it worthwhile to write down. I don't think that has much to do with being "not all positive".

But you can build a type system and do type inference (gasp) without unification.

Yes, sure, but it will likely have all the same problems, e.g. not scaling beyond local inference.

Yes, sure, but it will

What's your point? Nobody knows how to do scalable inference for these features, but they all should be writing papers about what they don't know? Note that there is a difference between a negative result and the lack of a positive one.

This is the consensus I think, but I don't immediately feel I'm allowed to say it. Consensus is something I should be able to establish quickly. Martin's quote should help.

Yes, sure, but it will likely have all the same problems, e.g. not scaling beyond local inference.

It already scales beyond local type inference. Unless you are saying my implementation is wrong....I just need to write this up.

Hindley-Milner : System F as ? : F<:

I think subtyping research just slowed to a crawl because the alternative was so promising.

In the short term Hindley-Milner was powerful and pragmatic, Haskell and typeclasses were taking off and in the long term Dependent Types looked like a potential panacea. It looked like a clean march across the axes of the lambda cube would lead to victory.

Subtyping, especially with intersection and union types, just seemed to offer problem after problem: no principal types, no unification, huge inferred type expressions, and undecidable problems everywhere you looked. Then you have the OO movement giving subtyping a bad name in the eyes of researchers.

There was also no broadly convincing counterpart to Hindley-Milner to use as a stop-gap. Full type inference seemed mandatory back then, and subtyping just makes that very tricky.

Fast-forward 15 years and full type inference doesn't seem as vital anymore. Haskell is more powerful but not as elegant. Dependent Types are still 10 years away. I think subtyping with intersections and unions could be in for a revival.

Actually...

I've been trying to publish a paper about this for a few years now, but I haven't been able to spend time on it with any type-theory experts. Would you mind having an email conversation in which I could explain the core idea to you and you could maybe give some tips or outlines on how to properly present it in a paper?

Type inference in C++11

C++11 tacks on some forms of type inference. The auto keyword infers the type of a variable from its initializing expression, and the decltype(EXPR) construct denotes the type of the enclosed expression. Not being a language guru, I'm kind of curious about the exact meaning of "failed" and how it relates to the comprehensiveness, or lack thereof, of those capabilities.
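For concreteness, here is a minimal sketch of the two features (illustrative only):

#include <vector>

int main() {
  std::vector<int> v{1, 2, 3};
  auto it = v.begin();               // auto: the iterator's type comes from the initializer
  decltype(v.size()) n = v.size();   // decltype: the type of the enclosed expression
  return static_cast<int>(n) + *it;  // just to use both variables
}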

Don't believe the hype

FWIW, nobody in the types community would call what these C++ features do "type inference" -- it's nothing more than boring bottom-up type derivation. The only thing in C++ worth calling type inference is its "template argument deduction", and that's still comparatively lame.
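For comparison, a minimal sketch of what template argument deduction itself does (illustrative names):

#include <vector>

template<class T>
T first(const std::vector<T>& v) { return v.front(); }

int main() {
  std::vector<int> xs{1, 2, 3};
  return first(xs);   // T is deduced as int from the argument; nothing more is inferred
}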

What's a good example of an

What's a good example of an important use case in a language with successful non-boring type inference that is impossible or cumbersome in a language with only bottom-up type derivation?

Does it involve anonymous type creation expressions (as opposed to named type declarations)? If so, I recognize that the only similar thing in C++ is variadic templates (classes and functions), also a recent C++ feature (with a complex syntax).

Canonical example

You have real "inference" when information about the type of an expression or variable is e.g. inferred from its uses. That requires far more than bottom-up analysis.

A simple canonical example (and litmus test) would be inferring the types for function parameters. That becomes pretty important for a more functional style of programming with lots of small lambdas or local functions.

Wrt generics, more "interesting" type inference can not only infer the instantiation types for uses of generics (important if you program with many small generic abstractions), it can infer the definitions of generics. That's the key feature of ML-style Damas/Milner inference, for example, where any value/function definition that could be generic in fact is generic, by default.

Inference of arguments to

Inference of arguments to template/generic functions was already present in C++98, and those arguments could be, for instance, generic functions themselves. However, if one wanted to express something about the signature of one of those arguments, that sub-expression needs to be captured in the signature of the receiving function. So if I am understanding you correctly, the interesting use case in, say, ML, might be something where a lot of different parts of the signature were used independently, and perhaps implicitly, within the function - doing that would be cumbersome and ugly in C++...or did you mean something different by "infer the definitions of generics"?

Here's a very simple

Here's a very simple example:

fun map f l = case l of
    nil => nil
  | x::xs => f x :: map f xs

This is a recursive function that applies f to each element of a list l, returning a list of the results. There isn't a single type annotation written here. Nevertheless, the type system knows that the type of map is (α -> β) -> α list -> β list, i.e., a generic, higher-order function that takes a function from an arbitrary type α to an arbitrary type β, and a list of α's, and returns a list of β's. You can use it e.g. like

fun scale n l = map (fn x => n*x) l         (* scale : int -> int list -> int list *)
fun dup l = map (fn x => (x, x)) l          (* dup : α list -> (α × α) list *)
fun lift l = map map l                      (* lift : (α -> β) list -> (α list -> β list) list *)

Note again that there isn't a single type annotation (although you are free to add them, of course), and the latter two functions are by themselves generic.

In C++11, these functions would have to be written something like (untested sketch):

#include <forward_list>
#include <functional>
#include <utility>

// (Caveat: deduction of A and B from a bare lambda doesn't actually work; the calls
// below would also need explicit template arguments or std::function wrappers.
// Untested sketch, as said.)

template<class A, class B>
std::forward_list<B> map(std::function<B(A)> f, std::forward_list<A> l) {
  // implementation elided...
}

std::forward_list<int> scale(int n, std::forward_list<int> l) {
  return map([&](int x) { return n*x; }, l);
}

template<class A>
std::forward_list<std::pair<A, A>> dup(std::forward_list<A> l) {
  return map([&](A x) { return std::make_pair(x, x); }, l);
}

template<class A, class B>
std::forward_list<std::function<std::forward_list<B>(std::forward_list<A>)>> lift(std::forward_list<std::function<B(A)>> l) {
  return map([&](std::function<B(A)> f) { return [f](std::forward_list<A> xs) { return map(f, xs); }; }, l);
}

That should already make the benefit of type inference pretty obvious (as well as the fact that C++ type and declaration syntax is a disgrace). And by functional programming standards, these are still fairly trivial types.

That should already make the

That should already make the benefit of type inference pretty obvious (as well as the fact that C++ type and declaration syntax is a disgrace). And by functional programming standards, these are still fairly trivial types.

C++ template syntax is known to be ugly, and I believe your implementation with forward_list accidentally reverses the order of the ML version...

But the only thing I see here that is really germane to the topical question of type inference is that your translation of scale requires 'int' instead of making the value type generic. But that was not necessary in C++. Alternatives would have been either to make it Container::value_type and assume that multiplying such values returns the same type, or to make the scale factor a new type parameter and possibly use static_cast<> to coerce it back to Container::value_type.

I'll split up the code so

I'll split up the code so that it's easy to see.

Here is what the code would look like if you had type inference:

map(f, l) {
  // implementation elided...
}

scale(n, l) {
  return map([&](x) { return n*x; }, l);
}

dup(l) {
  return map([&](x) { return make_pair(x, x); }, l);
}

lift(l) {
  return map([&](f) { return [f](xs) { return map(f, xs); }; }, l);
}

(isn't it surprising how relatively nice this code looks even though it's C++ syntax?)

Here is the part that would have been inferred by type inference:

// map:
template<class A, class B>
std::forward_list<B>   std::function<B(A)>   std::forward_list<A>

// scale:
std::forward_list<int>   int   std::forward_list<int>   int

// dup:
template<class A>
std::forward_list<std::pair<A, A>>   std::forward_list<A>   A

// lift:
template<class A, class B>
std::forward_list<std::function<std::forward_list<B>(std::forward_list<A>)>>
std::forward_list<std::function<B(A)>>
std::function<B(A)>   std::forward_list<A>

That's what type inference is about in languages like ML.

Broken HTML

Thanks for that. The HTML appears to be broken, though.

I tried to fix it. Still

I tried to fix it. Still broken?

Better :)

Seems to be fixed now, thanks!

Still kind of wide. Can you

Still kind of wide. Can you break up the last line of your last pre?

done


C++11 limitations for this *are not* in type inference

That's what type inference is about in languages like ML

In C++11 the declarations could be, if one wanted:


// (note: a deduced "auto" return type on an ordinary function is strictly a C++14
// feature; plain C++11 would need a trailing "-> decltype(...)" return type)
template < class Container, class Func >
auto map(Container c, Func f);

template < class Container, class ScaleFactor >
auto scale(Container c, ScaleFactor s);

In any version of C++ it is possible to define a custom version of the multiplication operator over 'ScaleFactor' and int - some people, though presumably not LtU readers in this thread, feel that is a mis-feature, by the by. And in any version it is predefined over all the built-in numeric types: int, long, float, double, etc.

So the same generic C++ code could be used for a function that took strings and returned the same list with each string duplicated n times, if that's what one wanted to do and defined multiplication over strings and ints to mean that.
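A minimal sketch of the kind of overload meant here (illustrative code, assuming one really wanted string "multiplication"):

#include <string>

// Hypothetical overload: s * n repeats the string n times.
std::string operator*(const std::string& s, int n) {
  std::string result;
  for (int i = 0; i < n; ++i) result += s;
  return result;
}

// With this in scope, a suitably written generic scale could turn
// {"ab", "c"} and 3 into {"ababab", "ccc"}.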

Type checking of generics

I'm not sure what exactly you are arguing, but note that if you made the generic types as general as you propose, then you would never be able to type-check the generic definitions by themselves. Now, C++ templates, being glorified macros, don't really do that anyway (with all the problems that ensue), but the other languages we are talking about do.

So, sure, you can evade type inference by evading type-checking itself, but that's hardly disproving the benefits of inference.

follow-ups

I'm not sure what exactly you are arguing,

If you look back at the thread, I clearly participated to ask a question of the form - "Hey C++11 can do a few varieties of type inference and C++ has object oriented features? What sort of type inference are you interested in that it lacks?". You followed up my question with what purported to be examples from ML that C++ couldn't replicate. But in fact, your cases do not seem to be ones that C++ can't replicate. I suspect there may be such cases, so my original question remains open. Also, since there are a lot of implementations of C++ compilers, what had been achieved (or not achieved) there may be of interest to the topic initiator.

but note that if you made the generic types as general as you propose, then you would never be able to type-check the generic definitions by themselves. Now, C++ templates, being glorified macros, don't really do that anyway (with all the problems that ensues), but the other languages we are talking about do.

Above you say something that I will consider as another possibly interesting answer to my original question, but I'm not too sure.

The relevant questions are "What is the type checking that is desirable here?" and also "Is there a related desirable form of type checking that is possible in Haskell or ML but not in C++?" As far as I can see, the author of a particular piece of code in either language has a set of choices that fall somewhere along the spectrum between "Most generic, but least type constrained" and "Most type constrained, but least generic". It's not clear to me that ML or Haskell is giving some particularly good choice that isn't achievable in C++, but perhaps that is so, and I'm just not seeing it. So maybe you can elaborate on what you think that particular sweet spot looks like.

What is also true is that there is a proposed mechanism for C++ called "concepts" that did not make it into the C++11 standard which would have provided additional mechanisms to constrain the type range of template parameters with predicates. So it is judged that such a feature would be useful, but it didn't make it into the standard and is currently vaporware. But saying it is useful is different from either saying it is required to achieve a given effect or saying that such is provided by either ML or Haskell.
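(For reference, the kind of predicate meant looks roughly like the following in the syntax that much later landed in C++20; a hedged sketch, not the proposal that was dropped before C++11:)

#include <concepts>

// A predicate constraining the template parameter: T must support n * x,
// yielding something convertible back to T.
template<typename T>
concept Scalable = requires(T x, int n) {
  { n * x } -> std::convertible_to<T>;
};

// Callers of scale_once are checked against the predicate, which gives earlier,
// shorter errors at the call site. (Even C++20 concepts do not check the template
// body against the concept; that stronger definition checking was part of earlier
// proposals but was dropped.)
template<Scalable T>
T scale_once(T x, int n) { return n * x; }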

So, sure, you can evade type inference by evading type-checking itself, but that's hardly disproving the benefits of inference.

"Does it compile" is, in a sense (and modulo implementation errors), the de-facto form of static type checking in all static languages - even if the language had a proof system for interesting semantic assertions, we could still consider that part of the compilation process and we would still have doubt about the coverage of the set of assertions that had been proved. Does C++ lack (without concepts) a way to restrict what compiles to the scope of a given module? No, in fact that *could* be done readily with namespaces if that is judged to be important - that's just a different mechanism.

In summary, my tentative conclusion is that your remarks of the form "C++ can't do...." are probably not correct but it is probably true for some X we can identify that "X is nice and C++ can't do X as easily" - and I would still like to understand what that X actually is.

C++ doesn't do the following

C++ doesn't do the following combination: (A) inference of types of functions (B) the type correctness of the function is checked at definition time.

As you propose you can make everything a template with a template argument for each parameter, which kinda gives you (A) but then you lose (B). If you want (B) you have to explicitly write the types down, so you lose (A).
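A tiny sketch of the trade-off (illustrative only):

// (A) without (B): every parameter is a template parameter, so nothing is checked
// at definition time; typical compilers accept this even though no type can satisfy it.
template<class T>
void frob(T x) { x.member_that_surely_does_not_exist(); }

// The complaint only appears at an instantiation:
// frob(42);   // error only here: int has no such member

// (B) without (A): write the types down and the definition is checked immediately,
// but nothing is inferred.
void frob_int(int x) { /* x.foo(); would be rejected right here */ }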

Functional parameters can be partially parameterized

C++ doesn't do the following combination: (A) inference of types of functions (B) the type correctness of the function is checked at definition time.

As you propose you can make everything a template with a template argument for each parameter, which kinda gives you (A) but then you lose (B). If you want (B) you have to explicitly write the types down, so you lose (A)

Template declarations like the following are fine:


template < class Container, class A, class B >
vector < B > map(Container c, B (*func)(A));

Here, instead of allowing the template to take any old function that compiles, I've required it to accept only a function argument whose type takes one argument of some unknown type A and returns some unknown type B, and I've stated that I'm going to return a vector parameterized by B. The idea, of course, is that the definition of map applies func to the elements of Container c and appends the results to a vector of B's, which I then return. (Return value optimization is standard, so a dynamically growing array turns out to be more time- and space-efficient than a linked list for most purposes.) I'm not sure if that answer is relevant to what you meant, though, because I'm not clear on which aspects of type correctness for the function you wanted to check. Perhaps if you gave an example from a language that does implement what you want, it would be clearer to me.
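For completeness, a hedged sketch of the definition just described (my own illustrative code, not claiming it is the best way to write it):

#include <vector>

template<class Container, class A, class B>
std::vector<B> map(Container c, B (*func)(A)) {
  std::vector<B> result;
  for (const auto& elem : c)        // apply func to each element of c...
    result.push_back(func(elem));   // ...appending the outputs to a vector of B's
  return result;                    // returned by value; RVO keeps this cheap
}

// e.g., given  int square(int x) { return x*x; }  and a std::vector<int> v,
// map(v, &square) deduces Container, A, and B from the arguments.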

Exactly, and ML gives you

Exactly, and ML gives you that stronger type without you having to explicitly write down that type signature.

Incomplete example

Exactly, and ML gives you that stronger type without you having to explicitly write down that type signature.

My code snippet was only for the purpose of showing that C++ allows finer control over the template parameterization of a function argument to a generic - i.e., it doesn't have to be just "a function". But the code snippet didn't say anything about why I was controlling that. In the particular case of map, the compilation process itself would check that the types worked out, with or without that extra notation - it wasn't useful in that case. So it sounds like you are saying that I had to do that extra work to get something that comes for free in ML but not in C++, and that is incorrect. It comes for free in C++ also. We have to dig a little more to figure out what the extra thing is that comes for free in ML.

You are saying contradictory

You are saying contradictory things. On the one hand you say that if you explicitly add the function type to the parameter you get extra safety at definition time (e.g. if you used the argument as a non function then you'd get an error message), but on the other hand you're saying that you get that for free.

misunderstandings

You are saying contradictory things. On the one hand you say that if you explicitly add the function type to the parameter you get extra safety at definition time (e.g. if you used the argument as a non function then you'd get an error message), but on the other hand you're saying that you get that for free.

No, I didn't say the first thing. However, I can see where my prior response may have been confusing on pragmatic grounds. I was replying to your claim above that

C++ doesn't do the following combination: (A) inference of types of functions (B) the type correctness of the function is checked at definition time.

As you propose you can make everything a template with a template argument for each parameter, which kinda gives you (A) but then you lose (B). If you want (B) you have to explicitly write the types down, so you lose (A)

I would have been clearer if I had written instead "C++ checks the type correctness of the function modulo its visible usage in the template definition. If you want other checking beyond that then it can be added in various ways. Which other checking do you have in mind?" In my next post I could then have said "See, here is one way of adding some other constraint on the type signature of the function, which applies whether or not it figures in the usage within the template."

Backing up now, I'm unsure whether the original misunderstanding was a) simply your error about whether C++ type checks template instantiations, or whether it was b) my obliviousness to the form of type checking in ML that you had in mind above - i.e. something different than just the instantiation of the template type checks according to the regular type checking of the language. In case of the latter, perhaps you can give an example where the difference in languages in that respect seems to significantly matter.

Take for instance the

Take for instance the following function:

let f arr = arr.(0)

The arr.(0) is array indexing notation. The compiler infers this type:

'a array -> 'a

Which means that it takes an array of A and returns an A. Note how the compiler infers the type of arr from the context in which it is used. Because the compiler was able to infer a type, we know for sure that the function f has no type errors internally.

C++11 does that

The type inference was there in C++98, but the new features auto and lambda make it closer to your example, as shown below.

#include <iostream>
#include <string>
#include <vector>
using namespace std;

// note: a deduced return type on an ordinary function is strictly a C++14 feature;
// plain C++11 would need a trailing "-> decltype(...)" here
template < class Arr >
auto indexIntoFunc(Arr arr) {
  // the parameter only serves to fix Arr; the returned lambda takes its own arr
  return [](Arr arr) { return arr[0]; };
}

int main(int argc, char** argv) {

  int iarr[] = {1,2,3};
  double darr[] = {4.5, 5.5, 6.5};
  vector < string > sarr{"one","two","three"};

  auto f1 = indexIntoFunc(iarr);   // Arr deduced as int*
  auto f2 = indexIntoFunc(darr);   // Arr deduced as double*
  auto f3 = indexIntoFunc(sarr);   // Arr deduced as vector<string>

  int iarr2[]  = {4, 5, 6};
  double darr2[] = {7.9, 8.9};
  vector < string > sarr2{"four","five"};

  cout << "Here is the value of f1 on iarr2: " << f1(iarr2) << endl;
  // prints 4
  cout << "Here is the value of f2 on darr2: " << f2(darr2) << endl;
  // prints 7.9
  cout << "Here is the value of f3 on sarr2: " << f3(sarr2) << endl;
  // prints "four"

  // cout << "Here is the value of f1 on darr2: " << f1(darr2) << endl;
  // above doesn't compile - g++ 4.8.1 says error: no match for call to ‘(indexIntoFunc(A*) [with Arr = int*]::__lambda0) (double [2])’
}

I'm not entirely sure why

I'm not entirely sure why you needed the lambda there, but anyway.

The problem is that indexIntoFunc is not statically checked at definition time. You could have done anything with arr in the body of that function and you wouldn't have gotten an error message from the definition alone. This may not seem such a big deal in a small example, but this feature of C++ is exactly what causes template error messages to blow up to galactic proportions, and it makes it hard to see whether the error is in the caller or in the callee.
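To make that concrete (a small sketch, not code from the thread):

#include <vector>

template<class Arr>
auto badIndexIntoFunc(Arr arr) {
  // Nothing complains here, even though the body is nonsense for most Arr:
  return [](Arr a) { return a.definitely_not_a_member(); };
}

// The complaint only appears once a caller instantiates the template, and the
// message points into the lambda body rather than at the call:
// auto f = badIndexIntoFunc(std::vector<int>{1, 2, 3});   // error surfaces here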

I'm not entirely sure why

I'm not entirely sure why you needed the lambda there, but anyway.

I was trying to make something as close to your ML example as I could. C++11 doesn't allow its generics to be defined within a function body, so putting the template definition outside the body where it was used was unavoidable; using a lambda expression in that body simply eliminated one extra template definition in that outlying location.

The problem is that indexIntoFunc is not statically checked at definition time. You could have done anything with arr in the body of that function and you wouldn't have gotten an error message from the definition alone. This may not seem such a big deal in a small example, but this feature of C++ is exactly what causes template error messages to blow up to galactic proportions, and it makes it hard to see whether the error is in the caller or in the callee.

One kind of thing that makes the error messages long is that parameterized class and function names themselves are very long. That is partly due to C++'s balky template syntax and partly due to the style of coding. If one really tried to use as many type parameters in ML, and passed them in as lambdas (there are no parameterized classes, so we can't even do that in ML), then compilation error messages in ML would also become long (but not as long).

A second kind of thing, which you are talking about, is that many levels are sometimes being recursively instantiated at the same time, so the error message will be of the form "In trying to do X1, we tried to X2, which required X3, and such and such an error was found there." So, for the first kind of reason, the names of X1, X2. and X3 can be long, and then there are those levels.

There are benefits and drawbacks to this. Openness of customization is one benefit. Allowing the compiler to skip unused definitions is another. But if a given developer can't understand (or spends too much effort trying to understand) the error messages his tool of choice is giving him, then I would agree that those benefits don't matter.

I don't personally experience that problem - one quickly learns how to read the output from a given tool. For instance, the part of the error message that is normally most relevant will be of the form "In object O, Ti doesn't define something, or is of type Tj when it should be Tk." It doesn't really take much effort then to look at the definition of Ti and see that, in fact, it is screwed up and needs an edit.

I do miss the speed that pre-compiled headers would give when they are part of the language spec, but they are often not implemented well or at all by most of the C++ tools, which are still catching up to required rather than optional elements of the current standards.

There is a close analogy

There is a close analogy between dynamic typing vs static typing and C++ templates vs ML/Haskell/Agda/etc polymorphism. C++ templates are in this respect like a restricted form of compile time dynamically typed macros. Templates, like dynamic typing, are checked when you call/instantiate a function/template. ML style polymorphism, like static typing, is checked at definition time. In a dynamically typed language if X1 calls X2 calls X3, and X3 contains a type error, you get a big error message with a call stack. From this error message you can't easily tell what the problem is. It could be that X1 passed a wrong kind of value to X2, which caused X2 to pass a wrong kind of value to X3, which caused a type error. Or the error could be in X2, or in X3. You have the same situation with templates. With static typing and with ML/Haskell style polymorphism, each component's type determines what kind of inputs/outputs it expects. So if you have an error, the error messages are smaller and more localized. More importantly, you do not have to look at the implementation of a component to determine whether something will type check, you only have to look at its type. Just like with dynamic vs static typing, it's a trade-off.

Template type parameters and auto = Type inference

I haven't played too much with the newest & wackiest C++ features, but it sounds like you've discovered the fully-type-inferred style. For every function declaration, you use a fresh template type parameter for each parameter:

template<typename A, typename B, typename C /* , ...one per parameter */>
auto foo(A a, B b, C c /* , ... */) { /* ... */ }

I think you should try this on a nontrivial project and then return with your experiences of "type inference in C++".

Principal types

Jules' reply pretty much sums up what I tried to explain above. But to add an extra point to what you said, namely:

The relevant questions are "What is the type checking that is desirable here?" and also "Is there a related desirable form of type checking that is possible in Haskell or ML but not in C++?" As far as I can see, the author of a particular piece of code in either language has a set of choices that fall somewhere along the spectrum between "Most generic, but least type constrained" and "Most type constrained, but least generic".

Maybe that's where the misunderstanding lies. In ML or Haskell, you don't need to choose. Type inference will automatically find the unique most general (i.e., least constrained) type that still allows the generic definition to fully type-check (its so-called principal type). After that, the definition is provably type-correct for any possible instantiation compatible with that type. There can never be any errors at instantiation time. In fact, nothing but the inferred type is needed to type-check instantiations of the generic (in particular, not the generic's definition). This sort of modularity really is the essence of polymorphic type checking. C++ does not have it (short of concepts), let alone the ability to infer it.

Sweet spot

In ML or Haskell, you don't need to choose. Type inference will automatically find the unique most general (i.e., least constrained) type that still allows the generic definition to fully type-check (its so-called principal type).

The choice I was talking about was the choice of what to parameterize. That choice is similar across languages.

The distinction I think you are driving at is one about how open vs. closed the definition of the generic function is at definition time. Templates give more flexibility in terms of keeping that definition open, at the expense of potentially requiring more work to corral unintended/accidentally colliding instantiations.

Type inference will automatically find the unique most general (i.e., least constrained) type that still allows the generic definition to fully type-check (its so-called principal type). After that, the definition is provably type-correct for any possible instantiation compatible with that type. There can never be any errors at instantiation time. In fact, nothing but the inferred type is needed to type-check instantiations of the generic (in particular, not the generic's definition). This sort of modularity really is the essence of polymorphic type checking. C++ does not have it (short of concepts), let alone the ability to infer it.

I read this as an oversell. On the one hand "instantiation time" is made to sound like some sort of risky thing, but it is just another phase of static compilation. On the other hand, "provably correct" is really just a synonym for "consequences closed at definition time (whether or not they were understood by the programmer)".

That particular set of tradeoffs is along a different dimension than the one I had in mind, but it's still interesting to discuss whether or not it is a sweet spot on that open vs. closed definition dimension. I kind of lean towards liking the more open version better, but I'm open to persuasion via more concrete argument or examples.

Instantiation time checking

Instantiation time checking is no better than what you get with macros. It's not "risky", but it is utterly unhelpful and unmodular. When have you last waded through a page-long template instantiation error message that forced you to understand implementation details of third-party code 5 levels down that you should have no business ever caring about? When did you last curse the ludicrous build times of your C++ project? Both of these are consequences of C++ not being able to type-check templates separately and modularly. I don't know about you but I'd happily trade them for what you call "openness" immediately.

In any case, this subthread has drifted away from your original question about the nature of type inference. I think my earlier examples have made it sufficiently clear that it is a very long shot from the basic convenience features in C++11 to full-blown type inference. To summarise again: C++ can infer neither the types of functions, nor the parameters and constraints of generic definitions, nor the instantiation types for generics if they are not apparent from the arguments. To name just a few.

Whether you find that useful is up to you. For those of us who program extensively in both C++ and various languages with type inference, it is a game changer. Its enabling power can hardly be overstated. The level of abstraction that you achieve e.g. in functional programming would be largely impractical without substantial amounts of type inference.

various disagreements

Instantiation time checking is no better than what you get for macros. It's not "risky", but it is utterly unhelpful and unmodular.

While macros lead to instantiation time checking as well, C++ templates are not similar to any particular macro language I'm aware of, so that claim is oddly put. For instance, stepping through template code in a source level debugger that properly supports it is like stepping through regular code - not like squinting at a macro.

When have you last waded through a page-long template instantiation error message that forced you to understand implementation details of third-party code 5 levels down that you should have no business ever caring about? When did you last curse the ludicrous build times of your C++ project? Both of these are consequences of C++ not being able to type-check templates separately and modularly.

Basing a large portion - and the most widely used portions - of the C++ std library on templates turned out to be a very successful decision. That pretty much gives the lie to a claim of the form "An open approach to generics is bad because that makes it impossible to create user friendly libraries". So perhaps your idea is better expressed as "Open generics require more skill from the library writer in order to create user friendly libraries." I agree with that claim, but find that libraries based on open generics are also more powerful and easier to adapt to my specific needs. Library creation is already a job for the skilled. So I'm not in agreement with your view that it's a bad approach to language design.

In any case, this subthread has drifted away from your original question about the nature of type inference. I think my earlier examples have made it sufficiently clear that it is a very long shot from the basic convenience features in C++11 to full-blown type inference. To summarise again: C++ can infer neither the types of functions, nor the parameters and constraints of generic definitions, nor the instantiation types for generics if they are not apparent from the arguments. To name just a few.

I don't see this at all. I'm still patiently waiting for a nice use case of something involving type inference that can be easily done in ML or Haskell and can't be easily done in C++ *because* of limitations of type inference. So far nobody has offered any such examples in this thread.

Whether you find that useful is up to you. For those of us who program extensively in both C++ and various languages with type inference, it is a game changer. Its enabling power can hardly be overstated. The level of abstraction that you achieve e.g. in functional programming would be largely impractical without substantial amounts of type inference.

The topic here isn't supposed to be about whether C++ in toto is a nice language or not. But when you put forward the argument that C++ is bad *because*, in your opinion, it is weak in type inference and fails at handling large projects - now I really have to laugh. Many of the largest projects, both old and new code, are mostly based in C++. The only other current language which might be more popular for new code is Java (which has less than C++ in the way of type inference). Neither ML nor Haskell is popular in that way - though both of them are obviously much prettier to look at.

User-friendly?

While macros lead to instantiation time checking as well, C++ templates are not similar to any particular macro language I'm aware of, so that claim is oddly put. For instance, stepping through template code in a source level debugger that properly supports it is like stepping through regular code - not like squinting at a macro.

How does the ability to step through compiled code help you one iota with the compile-time problems I pointed out?

Basing a large portion - and the most widely used portions - of the C++ std library on templates turned out to be a very successful decision. That pretty much gives the lie to a claim of the form "An open approach to generics is bad because that makes it impossible to create user friendly libraries".

I'm sorry, but template libraries (including std) do not qualify as user-friendly in any conceivable universe I can imagine. If you seriously think so then I don't think anything anybody can say here is going to convince you.

I don't see this at all. I'm still patiently waiting for a nice use case of something involving type inference that can be easily done in ML or Haskell and can't be easily done in C++ *because* of limitations of type inference. So far nobody has offered any such examples in this thread.

I already showed you examples that demonstrate the technical claims I made. In reply you then shifted your argument to questioning the practical significance of those examples, and argued that you can work around them this or that way, but that's a completely different discussion.

If you still claim otherwise, please show us how you write the equivalent definitions in C++ (A) without any type annotations but (B) with full definition-site checking. Claiming that that isn't relevant does not prove that you can.

But when you put forward the argument that C++ is bad *because*, in your opinion, it is weak in type inference and fails at handling large projects - now I really have to laugh.

Sorry, it's not my opinion, it's a technical fact. But apparently, I keep failing in explaining the obvious.

Many of the largest projects, both old and new code, are mostly based in C++. The only other current language which might be more popular for new code is Java (which has less than C++ in the way of type inference). Neither ML nor Haskell is popular in that way

Please, like with many technologies in our industry, prevalence has little correlation with technical merits (or popularity, for that matter). It's primarily a social, commercial, and political process.

Compared to what?

While macros lead to instantiation time checking as well, C++ templates are not similar to any particular macro language I'm aware of, so that claim is oddly put. For instance, stepping through template code in a source level debugger that properly supports it is like stepping through regular code - not like squinting at a macro.

How does the ability to step through compiled code help you one iota with the compile-time problems I pointed out?

You said "Instantiation time checking is no better than what you get for macros. It's not "risky", but it is utterly unhelpful and unmodular." If you are only talking about instantiation time versus definition-reading time, then why bring up macros at all and say templates are no better? It seemed obvious to me that you were saying templates inherit all the problems of macros, and so I pointed out that this is not true - instantiation time is a point of similarity, and perhaps there are others, but templates are not within the macro category, so labeling them as such doesn't license further unreasoned inferences.


Basing a large portion - and the most widely used portions - of the C++ std library on templates turned out to be a very successful decision. That pretty much gives the lie to a claim of the form "An open approach to generics is bad because that makes it impossible to create user friendly libraries".

I'm sorry, but template libraries (including std) do not qualify as user-friendly in any conceivable universe I can imagine. If you seriously think so then I don't think anything anybody can say here is going to convince you.

Compared to other large C++ class libraries that don't make use of templates but deliver similar functionality (though usually a proper subset), the C++ std library is user friendly. You mentioned that you do some substantial amount of C++ programming, so I'm curious what you choose to (or wish you could choose to) use instead of the standard library and why you find it superior in terms of overall usability+power. I don't know whether your example will convince me, but providing it would at least shine some light on what you think and the type of tradeoffs you are more comfortable with.


To summarise again: C++ can infer neither the types of functions, nor the parameters and constraints of generic definitions, nor the instantiation types for generics if they are not apparent from the arguments. To name just a few.

I'm still patiently waiting for a nice use case of something involving type inference that can be easily done in ML or Haskell and can't be easily done in C++ *because* of limitations of type inference. So far nobody has offered any such examples in this thread.

I already showed you examples that demonstrate the technical claims I made. In reply you then shifted your argument to questioning the practical significance of those examples, and argued that you can work around them this or that way, but that's a completely different discussion.

Disagree. I asked for examples of things that were hard to do in C++...you then gave some ML examples, and I replied that they were easy to replicate in C++; I only gave signatures in my reply, but I could give full code if you wish. Your reply to that was to claim that the code I pointed to didn't do the same kind of type checking. I replied that it did do type checking and asked what you meant. So far as I can tell, your claim about not doing the same type checking was then reduced to "the type checking happens at a later phase in the compilation process - at instantiation time." That claim doesn't match what you are saying in the quotation above, which I was replying to, about the relative limitations of C++ type inference. I'm not aware of any examples from you that support those technical claims. In fact you seem to agree that you have redefined what you claim:

If you still claim otherwise, please show us how you write the equivalent definitions in C++ (A) without any type annotations but (B) with full definition-site checking. Claiming that that isn't relevant does not prove that you can.

In other words, instead of backing something that would match what most anyone would understand from your words, you've redefined something which is considered to be an important *feature* of C++ templates - that definitions aren't checked or needed until instantiation - into something which you are unwilling to count as any type checking at all. Your reason for not counting it is not because it doesn't do the job of static type checking - in fact it does - but rather because, in your judgment, the technical benefits of this feature are less than the disadvantages in user-friendliness of error messages.

I don't want to go back and forth about language usage, but that type of semantic game seems pretty silly.


Many of the largest projects, both old and new code, are mostly based in C++. The only other current language which might be more popular for new code is Java (which has less than C++ in the way of type inference). Neither ML nor Haskell is popular in that way

Please, like with many technologies in our industry, prevalence has little correlation with technical merits (or popularity, for that matter). It's primarily a social, commercial, and political process.

I'm responding to your claim that all the languages which are commonly used for large projects are unsuitable for such - because they lack important features (i.e. type inference capabilities) or they are too user unfriendly, while some other mature languages generally known to developers but not commonly used for such are in fact suitable. Doesn't that strike you as an incredible claim that *you* are making?? It's not theoretical but rather anti-empirical. Without presenting any interesting theory to back it up, you are saying that the empirical evidence that the features and user friendliness of these languages do make them well suited to large projects should be disregarded in favor of these hand-wavey remarks you are throwing out there.

Hm

I would like to point out that while Andreas' tone is maybe a bit too harsh, I would tend to agree with the points he's making and am not convinced by your presentation of templates as type inference either. You should not consider (from the fact that he is the most fervent to raise his voice in the debate) that this is a one-person-only opinion.

I think the examples you have provided so far show that C++ templates allow you to write very few type annotations (which is not the same as "none at all"), at the cost of also enforcing nothing at all at the definition site (as opposed to, say, your example enforcing map's argument to take only one argument). On the contrary, the result of what we usually consider "type inference" allows you to deduce everything there is to know about in which situations it is statically allowed to use this piece of code -- by looking only at the inferred type information, not at the *implementation* of the piece of code.

In other words, instead of backing something that would match what most anyone would understand from your words, you've redefined something which is considered to be an important *feature* of C++ templates - that definitions aren't checked or needed until instantiation - into something which you are unwilling to count as any type checking at all. Your reason for not counting it is not because it doesn't do the job of static type checking - in fact it does - but rather because, in your judgment, the technical benefits of this feature are less than the disadvantages in user-friendliness of error messages.

I'm not sure who is considering instantiation-site checking as an important "feature" of C++ templates -- I only ever considered it as a design flaw. One that allowed C++ templates to remain very simple and be used in interesting/crazy unexpected situations, but still a major source of difficulties when maintaining template-using code.

(If you cannot know whether a template definition is correct before using it in some places, you're no better than in dynamic languages where you have to run code on unit-tests to make sure you haven't done a silly typing mistake. That hasn't made either dynamically-typed languages or C++ templates absolute failures, but it understandably doesn't count as a success from the point of view of *static type inference*.)

You mentioned "concepts" as a proposal to get some static checking at definition-site of templates, and I'm sure most people here would see some form of concepts getting adopted as a major improvement to C++ templates.

You may want to think, in the context of this discussion, that a more faithful correspondence with type inference in C++ (that would be closer to what Andreas expects) would be code examples where *both* template parameters and the corresponding concepts they are expected to satisfy are inferred. You can assume that all primitive operations used in the template definitions come with a specified vocabulary of concepts restricting their uses, just as ML's standard library has hard-coded the fact that (+) expects integers, and Haskell's that its input type a must satisfy the type constraint/predicate Num a.

Hm...err

I would like to point out that while Andreas' tone is maybe a bit too harsh, I would tend to agree with the points he's making and am not convinced by your presentation of templates as type inference either. You should not consider (from the fact that he is the most fervent to raise his voice in the debate) that this is a one-person-only opinion.

I take no issue with Andreas' tone. I just don't agree with his arguments or how he characterized some things I wrote. I quoted him at length to show where those claims and characterizations went wrong. If you think you can fill in some gaps in his argument or be more clear in disputing anything I said, then please just jump right in.

I think the examples you have provided so far show that C++ templates allow you to write very few type annotations (which is not the same as "none at all"), at the cost of also enforcing nothing at all at the definition site (as opposed to, say, your example enforcing map's argument to take only one argument). On the contrary, the result of what we usually consider "type inference" allows you to deduce everything there is to know about in which situations it is statically allowed to use this piece of code -- by looking only at the inferred type information, not at the *implementation* of the piece of code.

The only examples I provided were simple C++ parallels to ML code that others provided while making the claim that they were showing something unavailable in C++. I showed similar code that can be used in a similar way. Andreas and you want to argue both that a) there is a crucial point of dissimilarity because the code isn't checked until later in the compilation process, and b) that this difference is a big negative for C++. My POV is that on the whole it is a positive - it gives greater expressive capability, which I have found useful as a developer. I also find the linguistic case for claiming that isn't type inference or type checking to be somewhat odd and misleading - it is type inference and checking that is done during the process of static compilation, and the difference in when it is done is a design feature, not a limitation.

In other words, instead of backing something that would match what most anyone would understand from your words, you've redefined something which is considered to be an important *feature* of C++ templates - that definitions aren't checked or needed until instantiation - into something which you are unwilling to count as any type checking at all. Your reason for not counting it is not because it doesn't do the job of static type checking - in fact it does - but rather because, in your judgment, the technical benefits of this feature are less than the disadvantages in user-friendliness of error messages.

I'm not sure who is considering instantiation-site checking as an important "feature" of C++ templates -- I only ever considered it as a design flaw. One that allowed C++ templates to remain very simple and be used in interesting/crazy unexpected situations, but still a major source of difficulties when maintaining template-using code.

It is a positive design feature; the template code, by design, is incomplete until all of its parts have been specified; that is one idea of what generic programming is all about: allowing incremental, partial specification of algorithms and their implementation.

(If you cannot know whether a template definition is correct before using it in some places, you're no better than in dynamic languages where you have to run code on unit-tests to make sure you haven't done a silly typing mistake.

That claim is nonsensical to me - you are a lot better off, because the type checking is done statically, at compile time. The template definitions themselves are *incompletely typed* by design. They cannot be correct until more is specified, and the C++ language standard has been explicitly written to state that it is not an error for a header to contain a template that cannot be made correct for any type, as long as that template is never instantiated. That is an explicit, by-design feature of the language standard.

If you are only talking about

If you are only talking about instantiation time versus definition-reading time, then why bring up macros at all and say templates are no better?

With respect to type checking, templates are not really different from macros. With respect to code generation, templates are no different from macros. With respect to execution, they are no different from macros. The fact that existing compilers may provide better debugging support for code generated from templates than from macros is pretty much an implementation artifact. There is no principled reason why they couldn't do something similar for macros (and they don't always do it for templates either).

The main difference between templates and macros is how they are invoked, e.g. sometimes implicitly, with overloading and specialization resolution, etc.

Compared to other large C++ class libraries that don't make use of templates but deliver similar functionality (though usually a proper subset), the C++ std library is user friendly.

That's not a comparison I particularly care about, though. The interesting comparison is to generic libraries in languages with type-checked generics. With what C++ offers, you cannot do better. (I'm at Google these days, where the STL isn't used anyway, but FWIW, I never found the reasons particularly convincing.)

Disagree. I asked for examples of things that were hard to do in C++...you then gave some ML examples, and I replied that they were easy to replicate in C++

OK, fair enough, I only remembered the "what's an example" part of your question, and not the "of an important use case". Nevertheless, I maintain that your code does not faithfully replicate the examples, because it allows virtually no definition-site checking.

As for a use case that's "important", I consider the examples I gave representative of something important, but it won't show from toy examples. If you look at, say, real Haskell programs, though, then they typically consist of tons of small functions and lambdas like that, and the majority of named definitions is actually generic. Having to annotate everything would make that style of code completely impractical, and blow up code size by potentially several factors.

The reason that it isn't perceived as hurting so much in C++ is that it's a low-level language, and you don't use a lot of higher-order abstractions anyway -- in large part because you cannot without proper closures and garbage collection. But also in part because the noise of type annotations quickly becomes prohibitive, so you don't even try.
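
To make that style concrete, here is a rough sketch (the helper names are made up for illustration, not taken from any post above) of the kind of small, fully generic, annotation-free definitions meant here; the inferred types are shown only as comments:

pairUp  xs ys = zipWith (,) xs ys           -- pairUp  :: [a] -> [b] -> [(a, b)]
mapSnd  f     = map (\(a, b) -> (a, f b))   -- mapSnd  :: (b -> c) -> [(a, b)] -> [(a, c)]
countBy p     = length . filter p           -- countBy :: (a -> Bool) -> [a] -> Int

Spelling out each of those types by hand at every such definition is the annotation overhead being described.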

something which you are unwilling to count as any type checking at all. Your reason for not counting it is not because it doesn't do the job of static type checking - in fact it does

You can justify it in whatever way you see fit, the technical fact remains: template definitions are not type-checked. Only instantiations are. Just like with macros.

If you don't see a problem with that, well, good for you. :) Unfortunately, lots of people do.

I'm responding to your claim that all the languages which are commonly used for large projects are unsuitable for such

I never claimed that. Of course you can cope. Google does e.g. by putting massive amounts of resources into farming out compiles into data centers, caching them on various levels, developing immensely clever build tools, and other measures. Using C++ at the scale of Google is virtually impossible without these efforts. Turn-around times would be in the hours. And it's no secret that the template mechanism is a major culprit.

With respect to type

With respect to type checking, templates are not really different from macros. With respect to code generation, templates are no different from macros.

There is complex functionality built into the template resolution mechanisms that relates to overloading, default parameter types, specialization, partial specialization, name lookup, etc. Templates are incomplete types, and these things need to be resolved within the template when it is instantiated. The algorithms involved are quite complicated and subject to intense scrutiny through the standards process. In these ways, templates are unlike all other things I know of that are called "macros". Again, if you want to define the meaning of macro as "any recipe for instantiation-time synthesis of code which could be described as some equivalent computational algorithm with an intermediate stage of producing completely instantiated source code in the target language (though this doesn't happen with C++ templates)" then I won't argue too much, but merely point out that I consider the usage more harmful than helpful relative to the common interpretation of the term.

The fact that existing compilers may provide better debugging support for code generated from templates than from macros is pretty much an implementation artifact. There is no principled reason why they couldn't do something similar for macros

That depends on the particular macro system - some systems can effect transformations of code that are too far removed from the original source to make stepping in a source code debugger meaningful.


As for a use case that's "important", I consider the examples I gave representative of something important, but it won't show from toy examples. If you look at, say, real Haskell programs, though, then they typically consist of tons of small functions and lambdas like that, and the majority of named definitions is actually generic. Having to annotate everything would make that style of code completely impractical, and blow up code size by potentially several factors.

There are lots of C++ programs where the amount of template-based source code measured in lines/functions is more than, say, 3/4 of the source. I didn't get your idea about some mysterious problem requiring one to "annotate everything". I'd still like to see examples of good usage that is hard to replicate in C++ because of issues related to type inference. To my mind, the claim that the shortcoming of C++'s type inference is that definition checking only happens at instantiation time kind of suggests that we aren't going to find a use case expressed as a fixed body of source code, and that your hard-to-replicate use case would instead be one where C++ developers couldn't figure out how to make compilation fast or how to prototype their work as they went along. I'm kind of skeptical about that, and, in any case, it's a different sort of claim than the one this sub-thread started with.

Productivity

Among other things you somehow seem to be equating "useful" with "impossible to express otherwise", making you miss a big part of the picture, which is productivity (in terms of both programmer and compiler time) and reliability.

Look, just yesterday I was implementing some generic abstraction in C++. It was a fairly trivial thing, just a nested environment chain. For reasons too boring to go into, it involved a template class of the form EffectsMixin<Var, NestedEffectsBase<Var, kNoVar>, Effects>, where Effects is itself instantiated with an EffectsMixin<Var, EffectsBase<Var, kNoVar>, EffectsBase<Var, kNoVar>>. Getting this right took the better part of a day, a substantial amount of which I spent deciphering template instantiation error messages, some reaching into somebody else's ZoneSplayTree template on top of somebody else's SplayTree template. Having to add an additional template parameter alone cost me 20 minutes or more at some point, because I had to thread it through to all kinds of boring places that I only discovered from obscure instantiation errors (and that wasn't for the lack of using strategic typedefs wherever possible). And even after it already "worked" I kept getting new errors when I added a second instantiation. Whether the thing would compile for any cases other than the two I currently use, I cannot tell.

The core of the abstraction is of almost trivial size, and in retrospect, I would have saved a lot of time (and lines of code!) simply copy & pasting the implementation instead. And that's what many of my colleagues typically do, for less syntax noise and more useful type-checking. I'm a sucker for clean abstractions, so I couldn't let myself go like that. But I can't blame them either. More often than not, templates are a gigantic time sink, for both implementers and users of an abstraction. IME, the tiny anecdote above isn't the exception, it's the rule, unless you're doing something really trivial.

If all you're used to is C++ and the like, then you might not even realise how deep, deep into crazy land all this is. I've written abstractions like that plenty of times, and still it takes me ages. Meanwhile, I'm pretty positive that I would have implemented the equivalent thing in ML or Haskell or perhaps Scala over a lunch break. (Though to be fair, not all of the difference would be due to templates and lack of type inference, but also due to having to deal with allocation policies and other insanities.)

Topic change is fine

Among other things you somehow seem to be equating "useful" with "impossible to express otherwise", making you miss a big part of the picture, which is productivity (in terms of both programmer and compiler time) and reliability.

Not guilty as charged. We were talking about type inference and how the C++ version measures up. I simply asked for an example of something interesting that was significantly more awkward to do in C++.

I definitely agree that C++ is not the highest-productivity language, and I wish C++ template syntax were nicer, but I don't agree that templates are what hurts C++ productivity - I find they help, and people who don't shouldn't be writing libraries with them. In your example, it sounds like you were trying to refactor someone else's immature library code, so in a way it's not surprising you ran into extra issues.

What do I think hurts productivity the most in C++? IMO: a) the one definition rule (requiring separate header and implementation files plus lots of header guards, etc.), b) lack of built-in garbage collection (could be optional), c) lack of a standard box type (as an option, for prototyping, etc.), d) lack of any dynamic scripting capability, e) the ridiculously crappy macro language, f) lack of type introspection, g) lack of good support for serialization, ...(a long list of stuff that's not particularly interesting to language researchers). The template issues you are complaining about don't make my list.

It wasn't supposed to be

Just to clarify, that wasn't supposed to be a topic change, but rather another data point to answer your question about an example where type inference is "useful". If you haven't noticed, none of the various productivity blockers I mention above would have come up if there was proper template type checking and type inference in C++. I would qualify that as immensely useful. ;)

(And no, I did not refactor existing code. I just happened to have to write something with a somewhat non-trivial type structure, abstracting open type recursion via a functor, in case you want to know. Nothing a seasoned Haskeller would find particularly noteworthy, FWIW.)

As for the top blockers in the language, it depends on what you use it for, but I'd say 1) lack of garbage collection and memory safety, 2) the incredible amount of boilerplate required to get anything done (lack of type inference ties in to that), 3) glaring type system holes (esp null pointers and arrays), 4) the broken notion of generics, 5) the archaic model of separate compilation (closely connected to the previous point), 6) lack of algebraic datatypes and pattern matching, 7) lack of a usable notion of local functions and closures (requires GC), 8) the unreliable exception system.

Just to clarify, that

Just to clarify, that wasn't supposed to be a topic change, but rather another data point to answer your question about an example where type inference is "useful". If you haven't noticed, none of the various productivity blockers I mention above would have come up if there was proper template type checking and type inference in C++. I would qualify that as immensely useful. ;)

Wearing LtU hats we should be able to talk directly about a) whether hairy compile-time metaprogramming is a common need or more typically an example of Knuth's warning that "premature optimization is the root of all evil", and b) to the extent we think it is a common need, what we think a good language/development environment/toolchain for it would look like - i.e. definitely not like existing C++ or Haskell. I find those questions pretty interesting. The fact that you ran into a cluster f&%# trying to modify someone else's hairy dependent type constructor metaprogram, and that things would have been a little easier for you if metaprogramming in C++ was a little more modular (and imperative :) in style, is pretty orthogonal to what I think of as the original question.

As for the top blockers in the language, it depends on what you use it for, but I'd say 1) lack of garbage collection and memory safety, 2) the incredible amount of boilerplate required to get anything done (lack of type inference ties in to that), 3) glaring type system holes (esp null pointers and arrays), 4) the broken notion of generics, 5) the archaic model of separate compilation (closely connected to the previous point), 6) lack of algebraic datatypes and pattern matching, 7) lack of a usable notion of local functions and closures (requires GC), 8) the unreliable exception system.

I like much of your list - points 1) and 5) were on my initial list as well. C++11 does add a bunch of different features to help mitigate the boilerplate problem 2) - prior to C++11 I would have listed that as well, and now I'm not sure where it falls (except that we both agree 5) is still a problem). Regarding generics, though, I just don't agree. And I'm not sure what aspect of the exception system you are categorizing as unreliable - is it the lack of signatures, the minimalist default exit handler, or something else?

"Meta-programming"

As said above, I wasn't modifying somebody else's code. I was using somebody else's library template (which actually is quite well-done relative to what C++ enables) in the implementation of my own template (whose signature would not be considered particularly hairy outside C++ land either). The template problems I describe are not an anomaly, they are the norm, in my experience, as soon as you go beyond trivial uses -- but maybe I'm just too stupid to program templates?

(Also, I certainly don't want to write that stuff in a more "imperative" manner (which is the antithesis of more modular). I'm shocked, where did I give that impression?)

As for meta-programming, it's one of these fuzzy and overused buzzwords... A lot of template meta-programming in C++ is really just working around various shortcomings of the language. In my mind, a far more interesting category of tricks that some might put under the "meta-programming" label are various forms of type-level programming, where you encode domain knowledge and invariants in types. That is the kind of stuff naasking gave a simple example of. Its logical progression would be dependently typed programming. And coincidentally, for any of this stuff to be at all feasible, the type system absolutely must not require you to hold hands in a thousand and one places, but be able to infer and propagate information automatically. ;)

Basing a large portion - and

Basing a large portion - and the most widely used portions - of the C++ std library on templates turned out to be a very successful decision. That pretty much gives the lie to a claim of the form "An open approach to generics is bad because that makes it impossible to create user friendly libraries".

The C++ stdlib seems structurally simple, mainly consisting of flat abstractions containing one, maybe two generic parameters. I'm not sure you can generalize from this data to "user friendly libraries are possible". Andreas is arguing that anything more complicated than what appears in the stdlib quickly becomes incredibly difficult.

I'm still patiently waiting for a nice use case of something involving type inference that can be easily done in ML or Haskell and can't be easily done in C++ *because* of limitations of type inference.

Lightweight Static Capabilities.

The C++ stdlib seems

The C++ stdlib seems structurally simple, mainly consisting of flat abstractions containing one, maybe two generic parameters.

There are often more than two parameters. A map, for example, has at least generic key, value, comparison, and allocator parameters. But the latter two default to common choices. Defaults are one way of adding user friendliness.

I'm not sure you can generalize from this data to "user friendly libraries are possible". Andreas is arguing that anything more complicated than what appears in the stdlib quickly becomes incredibly difficult.

Huh? It is an example of "user friendly libraries are possible". When a lot of generic parameters are desired, the idiomatic way to do that in C++ is with a "trait type", a single type containing some conventional set of typedefs. The size of this set is essentially arbitrary, which allows one to pass a very large number of generic parameters without blowing up the name of the generic.

Note that throughout this thread, I'm specifically asking for interesting examples of generics which are not possible to do in a user friendly way due to limitations of C++ type inference. I'm certainly not precluding them.

Huh? It is an example of

Huh? It is an example of "user friendly libraries are possible".

You're making an existential claim, in that there exists at least one user friendly library expressible in C++. By that argument, C provides user friendly generic libraries via macros too.

Andreas' claim is stronger, more along the lines of a universal claim that all libraries requiring parametric polymorphism provide user friendly interfaces to users. More precisely, the user friendly libraries expressible in C++, UFL(C++) is a strict subset of the user friendly libraries expressible in ML/Haskell, UFL(FP), and the ratio |UFL(FP)| / |UFL(C++)| >>> 1.

Note that throughout this thread, I'm specifically asking for interesting examples of generics which are not possible to do in a user friendly way due to limitations of C++ type inference. I'm certainly not precluding them.

Did you see the link to Lightweight Static Capabilities in my previous post? I think that's a pretty compelling example, as are most EDSL libraries.

You're making an

You're making an existential claim, in that there exists at least one user friendly library expressible in C++. By that argument, C provides user friendly generic libraries via macros too.

It's a stronger claim than that, because it is the Standard C++ library, the most widely used C++ library, C++ is a relatively popular language, and the library is used for all sorts of code. That's rather different from saying there is some obscure library someplace which I think is user friendly for some purpose. But sure, the latter would work logically for proving an existential claim, which you negated.

Andreas' claim is stronger, more along the lines of a universal claim that all libraries requiring parametric polymorphism provide user friendly interfaces to users. More precisely, the user friendly libraries expressible in C++, UFL(C++) is a strict subset of the user friendly libraries expressible in ML/Haskell, UFL(FP), and the ratio |UFL(FP)| / |UFL(C++)| >>> 1.

I don't understand. The first sentence sounds trivially false (a library can be unfriendly for a lot of different reasons), while the second sentence says something entirely different from the first and, in doing so, seems to imply that I can always translate the same library between languages and maintain some sort of ordering relation on the subjective-quality of user-friendliness. That sounds either wrong, or else an existential claim that the judge is a real language bigot. :)

Did you see the link to Lightweight Static Capabilities in my previous post? I think that's a pretty compelling example, as are most EDSL libraries.

I saw it but didn't understand how it fit well. There was a link to a pdf paper discussing an implementation of capabilities. Any particular piece of code or idiom you wanted me to focus on?

while the second sentence

while the second sentence says something entirely different from the first and, in doing so, seems to imply that I can always translate the same library between languages and maintain some sort of ordering relation on the subjective-quality of user-friendliness.

It depends what you mean by "translate". And while there might be some disagreement around the edges of these sets as to whether a program is indeed user-friendly, even being excessively conservative yields a large size differential. Garbage collection alone accounts for the elimination of a lot of complexity.

I saw it but didn't understand how it fit well. There was a link to a pdf paper discussing an implementation of capabilities. Any particular piece of code or idiom you wanted me to focus on?

The technique critically relies on type inference for usability in propagating phantom types. Check the paper.

In fact, the fact that phantom types are so seldom used in C++, if at all, when they are very common in the other languages discussed should indicate something about the complexity of templates and the usefulness of type inference.

The technique critically

The technique critically relies on type inference for usability in propagating phantom types. Check the paper.

My quick glance at the paper didn't yield one particular thing I could match to "the technique". The most complicated programming example in the paper involved compile time creation and indexing of a sorted array for security purposes. If that was important in the real world, I would want to use a language with powerful metaprogramming where I could do something like take the SHA1 hash of binary chunks at compile time and sort on that as keys. Trying to cram some limited version into C++ or Haskell doesn't seem like using the right tool for the job.

I realize you were just trying to find an example of something, but in that capacity I'm going to demur on a response, because the paper didn't actually give the implementations of its examples, and the question of how an implementation I can't see translates to another language seems too open-ended for the scope of this thread.

My quick glance at the paper

My quick glance at the paper didn't yield one particular thing I could match to "the technique". The most complicated programming example in the paper involved compile time creation and indexing of a sorted array for security purposes.

No, the array is not created or sorted at compile-time. Instead, evidence that only properly sorted or sized arrays are used in proper ways is represented at compile-time by phantom types, and this evidence is transparently passed around by type inference and manipulated via combinators that properly transform the evidence in well-defined ways.

Since you found the more complicated use-case confusing, let's try a simpler example: why aren't phantom types used more often in C++?

For a more concrete example, suppose we try to represent read, write, and read/write access rights as compile-time properties, in C#:

public interface IPermission { }
public interface R : IPermission { }
public interface W : IPermission { }
public interface RW : W, R { }

Now suppose we have a file path class that represents the permission to the file in its type:

public sealed class File<TPerm>
    where TPerm : IPermission
{
  ....
}

Now we can define operations that discriminate on the static permission type, like OpenRead and OpenWrite:

public static FileStream OpenRead<TPerm>(File<TPerm> path, FileShare share = FileShare.None)
    where TPerm : R
{
    ...
}

public static FileStream OpenWrite<TPerm>(File<TPerm> path, FileMode mode = FileMode.Open, FileShare share = FileShare.None)
    where TPerm : W
{
    ...
}

FileMode and FileShare parameters are default parameters in C# by the way. This is the most basic application of the phantom types technique used in the paper I linked to. The paper takes it a step further by using type-level naturals to describe sized arrays and ensure indexing is always within the array bounds, so that's the next step assuming you can reproduce the above structure in C++ in a usable way.
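
For comparison, here is a minimal Haskell sketch of the same phantom-type idea (the CanRead/CanWrite classes and the File wrapper below are made up for illustration, not taken from the paper). The permission parameter exists only at the type level, and type inference propagates it through client code without further annotations:

{-# LANGUAGE EmptyDataDecls #-}

data R    -- read permission: a type-level tag with no values
data W    -- write permission
data RW   -- both

newtype File perm = File FilePath

class CanRead perm
instance CanRead R
instance CanRead RW

class CanWrite perm
instance CanWrite W
instance CanWrite RW

-- Only files whose phantom parameter satisfies the constraint can be opened.
openRead :: CanRead perm => File perm -> IO String
openRead (File p) = readFile p

openWrite :: CanWrite perm => File perm -> String -> IO ()
openWrite (File p) s = writeFile p s

A File RW value can be passed to both functions, a File R only to openRead, and the check costs nothing at run time.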

Multiplicative Disjunction and Principal Types

In linear logic there is an interesting distinction between 'choice-of-provider' (x + y) and 'choice-of-consumer' (x ⅋ y). The former is the traditional tagged union; a consumer for a (+) type must handle each of the possible inputs. The latter (x ⅋ y) means that only one type is selected, but that it is determined by the user - thus, the user needs only handle one possibility, but may handle more than one possibility if they wish to pass the choice onwards.

A practical application in PL for the ⅋ type would be formalizing typeful dispatch. A function can have multiple types that are partially selected by the desired output types, and partially selected by the available input types. I.e. if we have Int -> Float ⅋ Float -> Float ⅋ Int -> Int then we can infer the input type is an Int if the output type is an Int, and if the output type is Float then further choice may be left to the client (whether providing a Float or Int).

Anyhow, the absence of multiplicative disjunction seems to simplify unification - perhaps related to the monomorphism restriction, where there can only be one principal type rather than a choice of many to be selected in each context. In practice, it is not unusual in Haskell to repeat code segments to avoid the monomorphism restriction, because the type of a code segment is often determined in context (i.e. per instantiation).

I've been contemplating whether (⅋) types might be useful to model strategies in place of traditional functions for a type-driven programming and proof system. A strategy with only a simple definition and one type would correspond to a traditional function. But when we start expanding on this, composition of functions can become much more flexible and adaptive in their meanings, closer in nature to natural language.

But the only thing I see here

But the only thing I see here that is really germane to the topical question of type inference is that your translation of scale requires 'int' instead of making the value type generic. But that was not necessary in C++. Alternatives would have been either to make it Container::value_type and assume that multiplication of such returns the same type, or to make the scale a new type parameter and possibly use static_cast<> for coercing it back to Container::value_type.

For simplicity, I was using Standard ML here, which has type inference without much bells & whistles. If you want to be able to be generic over multiplication as well, then Haskell's type classes are the extension you are looking for. Basically, they are what concepts tried to be, but with full inference. In Haskell (which, amusingly, switches the meanings of : and ::):

map f l = case l of              -- map :: (a -> b) -> [a] -> [b]
    [] -> []
    x:xs -> f x : map f xs

scale n l = map (\x -> n*x) l    -- scale :: Num a => a -> [a] -> [a]

Here, scale will work for any type a that supports numeric operations, as expressed by the (inferred) constraint Num a.

Standard OO languages lead to hard problems

The central problem in all type inference algorithms can be phrased as a subtype entailment problem: is A ≤ B? The reason for this is that when you have polymorphic types, the instantiation order induces a subtype relation. That is, we want to say that

∀α. α → α ≤ int → int

because a polymorphic function of type ∀α. α → α can be used whenever a function of type int → int is needed. And if you can decide subtype constraints, you can do type inference, because the subtype entailment bridges the gap between what you know and what you need.

Now, this works out reasonably well (albeit imperfectly), for simple functional languages, because they have a property that OO languages lack: their type systems are entirely structural. This is not the same thing as the usual structural/nominal distinction: here, structural means that there are no type constructors which are subtypes of constructors with a different arity. This is best explained by counterexample: in Java or Scala, you have subtype relations like:

List[A] ≤ Object

Here, List is a type operator of arity 1 (i.e., it takes one argument), and Object is a type operator of arity 0 (i.e., it has no type parameters). This makes the subtype relation non-structural, and this really kills you.

In Complexity of Subtype Satisfiability over Posets, Niehren et al show that this problem is either PSPACE or EXPTIME-complete in the size of the entailment you have to solve (depending on some details of the exact formulation). In contrast, the corresponding subtype entailment problem in ML is linear in the sizes of the types you compare.

Now, note that if your nontrivial subtype constraints are all structural (e.g., Int ≤ Number), you're in better shape. Rehof and Henglein (The Complexity of Subtype Entailment for Simple Types) showed that this is coNP-complete.

For richer type systems, you need richer entailments, and again there's a big gap between structural and non-structural systems. Kuncak and Rinard showed (in Structural Subtyping of Non-recursive Types is Decidable) that the first-order theory of structural subtyping is decidable. In contrast, Su et al showed that the first-order theory of non-structural subtyping is undecidable (in The First-Order Theory of Subtyping Constraints).

(This is not a complete answer! There's a ton of work in this area, and there are huge things like the impact of intersection and union types that I've completely failed to mention.)

Got it, I think I understand

Got it, I think I understand this much. I still think the problem is not with subtyping, but the fundamental type systems themselves. That the problem is with how we defined "type."

If we just phrase the problem differently, most trouble about subtyping just disappears. Consider how it typically works where types restrict how terms can be used:

X := Y          // X's type is restricted by and intersected with Y's type
X.DoDogThing(); // ok if X, Y are subtypes of dog

Now, if one instead tracks usage requirements rather than computing types from what is provided, e.g.

X := Y          // usage constraints of X propagate to Y
X.DoDogThing(); // X,Y will be at least dogs

Now, we've flipped the problem of type checking into one of usage propagation, and the problems with subtyping sort of disappear because...well...there is no subtyping anymore!

The difference being?

What else are your "usage constraints" than a form of (structural?) subtype constraints?

If your usage constraints

If your usage constraints are nominal trait extensions, then the system is obviously not structural. The traits themselves do not explicitly extend other traits, though extensions to other traits can be inferred on "this."

Otherwise I'm not sure. I guess the primary difference is between "restrict and intersect" provision constraints and "expand and union" requirement constraints; they also propagate in completely opposite direction.

Maybe it boils down to, are "type constraints on usage" equivalent to "usage constraints on type"?

Yes

Maybe it boils down to, are "type constraints on usage" equivalent to "usage constraints on type"?

Yes, exactly. AFAICS, it is merely transposing the definition, it doesn't really change any of the technical implications.

It is the difference between

It is the difference between union and intersection. Are they essentially the same? Do they always come with the same implications?

We could actually run the algorithm in reverse and build types via intersection as is normal, but inference then wouldn't do much...it definitely couldn't be used to define objects, annotations would be required, traits would have to explicitly extend other traits, arguments would have to be annotated, and so on. How is that even much different from local type inference?

How do you handle function arguments?

How do you typecheck that method calls are safe? This seems to require a subtype check to me:

Γ ⊢ x ⇒ {... m : Y → R ...}      Γ ⊢ y ⇒ Y'      Y' ≤ Y 
——————————————————————————————————————————————————————————
Γ ⊢ x.m(y) ⇒ R 

Here, suppose you infer a type for x that says it has a method m that takes a Y as an argument, and also suppose that you infer the type Y' for the argument y.

Now, it seems to me that you have to check the compatibility of Y' with Y. Furthermore, since y might be a formal parameter, it seems like you can't just use Y to "update" all the definition sites of y.

First, I don't infer types,

First, I don't infer types, just usage requirements. A call z := x.m(y) for m in trait T states that: x and all of its assignees must extend T and x.m#arg := y, and z := x.m#ret. The assignment graph is propagated to all users of the unit being compiled, so the constraint is always propagated. There is a special case for encapsulated object definitions, which freeze whatever terms they are assigned to.

A type error occurs when two traits are required that are incompatible (think Int and Bool) or we try to add a new trait extension to a frozen term (so...encapsulation error).

I haven't added this yet, but function arguments should be dealt with monomorphically. Think of the function's parameters and return value as fields of the function, then the body of the function assigns to the return value while the callers of the function assign to its parameters. It doesn't feel like a big trade off to me, since C# and other typical OO languages don't support polymorphic function parameters anyways.

Only top-level procedures and traits are handled polymorphically actually. I wanted trait methods to be polymorphic also, but ran into some problems with encapsulation and overriding.

First-class polymorphism in OO languages

It doesn't feel like a big trade off to me, since C# and other typical OO languages don't support polymorphic function parameters anyways.

First-class polymorphism was added to the OCaml language when it was found necessary to handle common patterns in object-oriented programming. I think that if you take as input an object that has a method that has a polymorphic type, you "support polymorphic function parameters" in some sense. Does your language forbid that?

I would assume it is rather common in C# to have a method be parametric on some specific type parameter, on which the whole class itself is not parametrized. In OCaml it happens for example if you want to implement lists as classes: the `fold` method needs to be polymorphic on its return type. See the relevant part of the manual.
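
For a rough Haskell analogue of that situation (the ListObj type below is hypothetical; it just mirrors the shape of the OCaml example): an object whose fold method is polymorphic in its result type needs first-class (rank-2) polymorphism in the field's type.

{-# LANGUAGE RankNTypes #-}

-- A list "object" whose fold method must be polymorphic in its result
-- type b, independently of the element type a of the object itself.
data ListObj a = ListObj
  { consL :: a -> ListObj a
  , foldL :: forall b. (a -> b -> b) -> b -> b
  }

fromList :: [a] -> ListObj a
fromList xs = ListObj
  { consL = \x -> fromList (x : xs)
  , foldL = \f z -> foldr f z xs
  }

Each call site of foldL may choose a different b, which is exactly the polymorphic-parameter situation being discussed.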

Trust me, I've run into this

I think that if you take as input an object that has a method that has a polymorphic type, you "support polymorphic function parameters" in some sense. Does your language forbid that?

Traits are instantiated polymorphically, so you get most of your polymorphism simply by using them as usage requirements. It is a bit difficult to explain, perhaps you could ask it with a specific example? That would also help me with my presentation :) But let's try:

def foo(l):
  var x := l.GetL(0)
  x.DoAnimalThing()
  x.DoWildThing()

var y := new()
var z := new()

y.SetL(0, new())
z.SetL(0, new())

y.DoDogThing()
z.DoCatThing()
foo(y)
foo(z)

So foo expects l to be a list of wild animals. y is necessarily a list of dogs, z is a list of cats, they can both be passed into foo, which adds the wild trait to their elements (they already have the animal trait), so y is a list of wild dogs, z is a list of wild cats.

Trust me, I've run into the "function parameters can't be generic" limitation of C# before! I'm not saying it is not useful, I would love to support it, but I don't know how yet. I'm sure if I could figure out how to make trait methods polymorphic (the "big" trade off), then function parameters would fall as well. Besides, fold (as a top-level method, unfortunately) would work as it does in C#, which is a nice goal for now.

I'm sure the expressiveness of my type system falls somewhere beneath MLF. The amazing part is just that it works and is about as expressive as a typical modern OO type system; how many people wouldn't want full type inference in C#? Really figuring out the boundaries here is probably another 10 years of research (unless I can find that it really is equivalent to another type system, then we are done).

But actually, code completion is much more important than type checking (to me at least, but I claim also in general). To my knowledge, no one has actually designed type systems to support it.

Thanks

A call z := x.m(y) for m in trait T states that: x and all of its assignees must extend T and x.m#arg := y, and z := x.m#ret. The assignment graph is propagated to all users of the unit being compiled, so the constraint is always propagated. There is a special case for encapsulated object definitions, which freeze whatever terms they are assigned to.

This sounds a lot like dataflow analysis? That is, the assignment graph is a lattice, and the types are the summaries you use at modularity boundaries. There's been surprisingly little work on combining dataflow analysis and typechecking. Since flow analyses often have reasonable time complexity, this seems like a good approach to investigate.

What do you do about branches? If you have something like:

   if (foo?) { 
     x := bar;
   } else { 
     x := baz;
   }

Do you require x to be a union of bar and baz?

One paper I liked about mixing flow analysis and typechecking is Guha et al's Typing Local Control and State Using Flow Analysis. They were using dataflow analysis to typecheck Javascript programs that used type tests as guards to method calls. If your language has anything like instanceof tests, there might be useful ideas in it.

It is tracing data flow,

It is tracing data flow, obviously, but it is not control flow sensitive; so instruction order and branches are ignored.

Do you require x to be a union of bar and baz?

Remember, usage requirements actually flow in reverse, so "bar" and "baz" must both have the same requirements of "x" :) The type of "x" is actually uninfluenced by "bar" and "baz"... although there is one case where it is restricted in a typical way, to deal with encapsulated term definitions (x can become "frozen" because bar or baz return something whose type cannot be expanded with new usage requirements).

Dynamic downcasts are actually quite an interesting problem with an interesting solution: trait extension requirements become predicated by the downcast type. But this is too much for now!

More unicode problems with

More unicode problems with neelk posts! I suggest future upgrades use neelk posts as a sanity check. :-)

iconv

I first replaced the middle line with dashes (which it obviously is) because iconv -f utf8 -t os2latin1 doesn't accept it, and then:

Γ ⊢ x ⇒ {... m : Y → R ...}      Γ ⊢ y ⇒ Y'      Y' ≤ Y 
---------------------------------------------------------------------
Γ ⊢ x.m(y) ⇒ R

Mathjax > manual unicode usage

Note that the use of unicode characters for math will decrease now that MathJax support is enabled -- the rendering is much better.

Just in time...for a new

Just in time...for a new post :)

Is this true?

[...] only those that forgo subtype polymorphism (like OCaml or Haskell) can do better than that [...]

But type inference with qualified types (as done with Haskell's type classes) should subsume subtyping, right? Both type class constraints and subtype constraints are qualifications on polymorphic types and work in the same framework.

I'm sure there's some other case that you have in mind that I'm missing, but for example consider an infinite (or indefinite-length) sequence of multiparameter type classes with fundeps for "Tuple2 p a b | p -> a, p -> b", "Tuple2 p a b => Tuple3 p a b c | p -> c", etc. Then you can overload "first" with type "Tuple2 p a b => p -> a" etc, and it appears exactly identical to subtyping.
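
Roughly, the encoding being described looks like this in Haskell (class and instance names are made up for the sketch):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Tuple2 p a b | p -> a, p -> b where
  first  :: p -> a
  second :: p -> b

class Tuple2 p a b => Tuple3 p a b c | p -> c where
  third :: p -> c

-- Pairs support the Tuple2 interface...
instance Tuple2 (a, b) a b where
  first  (x, _) = x
  second (_, y) = y

-- ...and triples support both, so 'first' works on either,
-- much like width subtyping on records.
instance Tuple2 (a, b, c) a b where
  first  (x, _, _) = x
  second (_, y, _) = y

instance Tuple3 (a, b, c) a b c where
  third (_, _, z) = z

Here first accepts pairs and triples alike, with the fundeps recovering the result type, which is the subtyping-like behaviour being claimed.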

You need existentials

Qualified types, or other refinements of parametric polymorphism, only subsume subtyping (at least to some extent) when you have both universal and existential polymorphism. For example, existentials are necessary to build a list of 'Stringables'.

It also depends on what form of subtyping you are talking about. For example, structural (width) subtyping on records or objects requires some form of row polymorphism to replace.
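
For instance, a minimal Haskell sketch of the Stringable case (the class and wrapper are hypothetical): without the existential wrapper there is no way to put an Int and a Bool into the same list.

{-# LANGUAGE ExistentialQuantification #-}

class Stringable a where
  toString :: a -> String

-- The existential wrapper: each element remembers only that it is Stringable.
data AnyStringable = forall a. Stringable a => AnyStringable a

instance Stringable Int  where toString = show
instance Stringable Bool where toString b = if b then "true" else "false"

stringables :: [AnyStringable]
stringables = [AnyStringable (1 :: Int), AnyStringable True]

render :: [AnyStringable] -> [String]
render xs = [ toString x | AnyStringable x <- xs ]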

Not sure I follow

I understand why you'd need something like existentials for a heterogeneous list of "Stringables" like you suggest, but that doesn't seem like a necessary condition for "subtyping" (but again maybe the OP can offer a more detailed definition of what he means by "subtyping").

OTOH, I am inclined to go along with you anyway because I think that existentials are necessary to have for any useful language. Basically, closures are existentials and function application can be treated as an overloaded case as well (either on contextless functions, closures, maybe arrays, maps, etc).

Subtyping enables heterogeneous lists

Well, subtyping enables you to express a heterogeneous list of Stringables, too. So if you argue about subsuming it, you need a way to recover that expressiveness.

Is that necessary?

You might allow "forall a . (a <: Jimmy) => [a]" but not "[forall a . (a <: Jimmy) => a]" and still call it subtyping, right?

Anyway, I'm happy requiring existentials so maybe it's merely an academic question. :)

The only point I wanted to make was that subtyping fits in the framework of qualified types, which is amenable to type inference (and is, IMHO, a profitable way to think about it).

I don't see how

Nit: it would be [exists a . (a <: Jimmy) => a]. But technically that's not subtyping either way.

Subtyping is defined by subsumption. I don't see any way to exclude heterogeneity without losing soundness. Consider:

let f(a1 : Animal, a2 : Animal) = [a1, a2]
let l = f(dog, cat)

Clearly, that's allowed by any useful notion of subtyping. At the same time, l reduces to a heterogeneous list.

So if you want something that properly subsumes subtyping, you have to be able to express heterogeneity.

I think that you just stated your previous assertion again

(No offense intended.)

The "forall" usage is a Haskellism, but I do understand the use of existentials in your example (was your encoding HTML-escaped?).

How does excluding heterogeneous lists lose soundness? I thought it'd be more likely to be the other way around, but I'm probably missing your point. ;)

Whether or not it's "true subtyping", we can at least imagine a system of qualified types with "subtype constraints" that doesn't allow heterogeneous lists, right? You can easily construct it in Haskell (up to a fixed length tuple size in the previous example). Maybe you'd call it "overloading with subtype constraints" or something?

The main point that I was trying to make was that qualified types are compatible with type inference and subtyping, so hopefully that's not lost in this tangent. ;)

Not quite as easy

Note that the "forall" Haskellism only applies to data type constructors. In any other context it does not make sense, because the syntax already expresses first-class universals, which have the opposite meaning. (Fixed the HTML, thanks!)

I'm not sure I understand your question. As I said above, the example evaluates to a heterogeneous list. So for soundness, you either have to allow such lists, or you have to rule out the above program. But on what grounds would you do that? I claim that as long as you allow subtyping anywhere, you can also construct an example that creates a heterogeneous list. So it's an integral part of subtyping.

I agree with your general point about qualified types being able to express the most interesting uses of subtyping, although it generally isn't quite as simple as you assume. A bit more machinery is needed to fully subsume it. Consider e.g. the case of List(Dog) < List(Animal). When recast in terms of existentials with type class contexts, you probably would need to copy the entire list and repack all elements individually to emulate the subsumption.
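
A small Haskell sketch of that repacking cost (Animal, Dog and the wrapper are made up for illustration):

{-# LANGUAGE ExistentialQuantification #-}

class Animal a where
  speak :: a -> String

data Dog = Dog
instance Animal Dog where
  speak _ = "woof"

data AnyAnimal = forall a. Animal a => AnyAnimal a

-- There is no implicit coercion from [Dog] to [AnyAnimal]: emulating the
-- subsumption List(Dog) <= List(Animal) means wrapping every element,
-- i.e. traversing and copying the whole list.
upcastDogs :: [Dog] -> [AnyAnimal]
upcastDogs = map AnyAnimal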

Thank you for your thoughtful reply

I guess our disagreement was on "as long as you allow subtyping anywhere", since in principle there's nothing "wrong" with restricting it in some places (although maybe it'd be inconvenient and impractical to use).

I agree that "more machinery" is necessary -- at least more machinery than you get with type classes (the compiler essentially has to be able to automatically derive type class instances for all subtype constraints).

Although the type class mechanism by itself isn't sufficient, I think that it's instructive for your '[Dog] <: [Animal]' example. So I think (please correct me if you disagree) that this is similar to '[Int] <: Num n => [n]' (and here there's just a slight overloading of <: to the normal HM 'less-general-than' relation).

In that case, the evidence of the membership in Num or equally the evidence that 'Dog <: Animal' (implied by '[Dog] <: [Animal]') gets inserted alongside the value it's associated with (e.g.: dictionary passing).

So I don't think that you'd necessarily have to copy the whole list to witness the subtyping constraint.
