Making implicits less powerful?

Many people feel that Scala implicits are powerful enough to replace all uses of Haskell type classes, but might be too powerful, because they make it too easy to shoot yourself in the foot.

The problem is caused not just by implicit values, but also by implicitly invoked functions. These are necessary if you want to cover all the uses of type classes. For example, the statement "if values of type T can be compared, then values of type List<T> can be compared as well" corresponds to an implicitly invoked function that receives a Comparator<T> and returns a Comparator<List<T>>. Having implicitly invoked functions introduces an alternate model of computation into the language, allowing a simple form of logic programming. A similar thing happens in Haskell with type class constraints. The Agda designers decided against implicitly invoked functions for precisely this reason.
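
For concreteness, here is a rough Scala sketch of that example (the Comparator trait and the instance names are made up for illustration, not taken from any library):

  trait Comparator[T] {
    def compare(a: T, b: T): Int
  }

  object Comparators {
    // An implicit value: Ints can be compared.
    implicit val intComparator: Comparator[Int] =
      new Comparator[Int] { def compare(a: Int, b: Int): Int = a.compareTo(b) }

    // An implicitly invoked function: given a Comparator[T], produce a
    // Comparator[List[T]]. This is the "if T can be compared, then List[T]
    // can be compared as well" statement from the text.
    implicit def listComparator[T](implicit c: Comparator[T]): Comparator[List[T]] =
      new Comparator[List[T]] {
        def compare(a: List[T], b: List[T]): Int =
          a.zip(b).map { case (x, y) => c.compare(x, y) }
            .find(_ != 0)
            .getOrElse(a.length.compareTo(b.length))
      }
  }

  object Demo {
    import Comparators._

    def max[T](a: T, b: T)(implicit c: Comparator[T]): T =
      if (c.compare(a, b) >= 0) a else b

    // The compiler chains intComparator and listComparator on its own.
    val longer: List[Int] = max(List(1, 2, 3), List(1, 2))
  }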

I was unable to find any consensus on how to make implicits weaker, to allow useful scenarios but prevent harmful ones. Scala has adopted several such restrictions, described in the links above. I feel that the Scala model might still be too permissive. Here are some more ideas for restrictions:

1) Require explicit importing of all implicit values defined in other modules, except the ones automatically defined by the language (e.g. Comparator<Pair<Int,Int>> or the function Comparator<T> -> Comparator<List<T>>). This might be too drastic though, especially if the language requires you to say "deriving Eq" and doesn't define stuff automatically.

2) Check for all possible ambiguities in implicit resolution, not just actual ones. Haskell's rules for instance contexts (7.6.3) are one way to make that decidable. In Scala I guess it's undecidable, but I can't say for sure.

3) Allow only types whose definition is marked as "implicit" to be used as implicit arguments. For example, Comparator would be allowed to be implicit, but Int or String or function types wouldn't. This would also remove a potential ambiguity, because functions marked as "implicit" would always be treated as implicitly invoked functions, and never as implicit values (see the sketch below).
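
To illustrate restriction 3, here is the overlap it would rule out, written as a rough sketch in current Scala (the Comparator trait is made up):

  object CurrentScala {
    trait Comparator[T] { def compare(a: T, b: T): Int }

    // Today an implicit def is an implicitly invoked function...
    implicit def listComparator[T](implicit c: Comparator[T]): Comparator[List[T]] =
      new Comparator[List[T]] {
        def compare(a: List[T], b: List[T]): Int = 0 // comparison elided
      }

    // ...while an implicit val of function type is an implicit value that the
    // compiler will also use as an implicit conversion from Int to String.
    // Under restriction 3, only types marked "implicit" (e.g. Comparator) could
    // appear in implicit position, so this overlap would disappear.
    implicit val intToString: Int => String = _.toString
  }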

Has anyone here given this some thought? What are the engineering considerations?


replace implicits with IDE-based inference

Implicits allow you to use what you don't know about, which is great, but they also allow you to continue not knowing about them, which is bad.

We can do much better in an IDE that would simply fill in the gaps in the code; a win-win for discoverability and awareness. Having the type system designed for this would be even better.

I believe Scala has a feature like this already in the IDE. If not, this seems like a no brainer.

Sometimes there are no gaps

For example, I'd like the expression "a == b" to desugar to "equals(a,b)", where the function "equals" accepts a third, implicit argument of type Comparator<T>. The same goes for desugaring infix arithmetic operators to arithmetic functions, etc. I guess you could throw away infix operators, but that seems unpleasant.
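
A rough Scala sketch of that desugaring (Comparator and equals here are hypothetical names, not library definitions):

  object Desugared {
    trait Comparator[T] { def compare(a: T, b: T): Int }

    implicit val intComparator: Comparator[Int] =
      new Comparator[Int] { def compare(a: Int, b: Int): Int = a.compareTo(b) }

    // "a == b" would elaborate to equals(a, b), with the comparator filled in
    // implicitly at the call site.
    def equals[T](a: T, b: T)(implicit c: Comparator[T]): Boolean =
      c.compare(a, b) == 0

    val sameInts: Boolean = equals(2, 1 + 1)   // what "2 == 1 + 1" would mean
  }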

Overall I feel that SML eqtypes, Haskell type classes and Scala implicits are successive approximations to some future solution that will feel "right". I'm interested in figuring out what that solution might be. Explicitly passing a dictionary to each comparison operation doesn't feel "right" to me, though I guess it's a question of taste.

This is a problem with

This is a problem with Scala's attempt to optimize for EDSLs. What implicits do should be obvious in (and limited to) the clear semantics of the EDSL being implemented. The user shouldn't have to scratch their head figuring out what a == b does.

The problem is that the user might not really be aware of the EDSL they are in, and I guess many EDSLs can be used at once. There is no easy way to deal with that, and it's one of the reasons why the whole EDSL story is just a user-unfriendly mess of clever hackery.

Not just EDSLs

Languages like Java or Python also allow users to redefine equality, though they don't optimize for EDSLs. I think it's worthwhile in most languages to have an equality operation that the user can redefine for their own types.

For example, there could be a type ImmutableSet<T> defined as an interface (record of functions), so that multiple implementations can coexist in the same program. It makes sense to define an equality operation that compares the contents of two such sets, which coincides with observational equality because the sets are immutable. And since equality of functions is undecidable, the equality operation on these interface records cannot be derived automatically, and needs to be user-defined.
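
A rough Scala sketch of that scenario (ImmutableSet, Eq and the instance are made-up names for illustration):

  trait ImmutableSet[T] {
    def contains(x: T): Boolean
    def elements: List[T]
  }

  trait Eq[T] { def eqv(a: T, b: T): Boolean }

  object SetEquality {
    // Equality on the record of functions itself would require comparing
    // functions, which is undecidable; instead we compare contents, which
    // coincides with observational equality because the sets are immutable.
    implicit def setEq[T]: Eq[ImmutableSet[T]] =
      new Eq[ImmutableSet[T]] {
        def eqv(a: ImmutableSet[T], b: ImmutableSet[T]): Boolean =
          a.elements.forall(b.contains) && b.elements.forall(a.contains)
      }
  }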

There are other examples of types that need user-defined equality, like types that contain a mutable array and treat it as immutable after construction, e.g. String. And there are other examples of operations that should work on many types, like toString, compareTo, hashCode, serialize, etc. I'm not sure I'd want to work with a language where all of these required explicit dictionary passing. Hence my interest in things like implicits and type classes. (I prefer implicits to type classes for the reasons described in this post by Gabriel Gonzalez.)

Overriding a virtual method

Overriding a virtual method or overloading == is quite different from redefining it with an unseen implicit parameter.

Overriding a virtual method

Overriding a virtual method or overloading == is quite different from redefining it with an unseen implicit parameter.

I think type classes have shown that overloading is denotationally just such an implicit parameter, so not that different really. Unless by "redefining" you mean overlapping instances, in which case I agree.

Overloading in C# is dead

Overloading in C# is dead simple and not subject to outside interference (except for extension methods, but these are scoped simply also). Implicit parameters in Scala at least are much more powerful, and you could possibly change the behavior of an overload independent of the static types involved. I'm not sure about Haskell.

So the question must be asked: you can do overloading with X, but do you really want to? Anyways the more the compiler does, the more we have to think about what it does.

overloading in haskell

In Haskell it is pretty explicit.

a) You define (==) :: a -> a -> Bool explicitly. For example:

 instance (Eq a) => Eq (Tree a) where 
  Leaf a         == Leaf b          =  a == b
  (Branch l1 r1) == (Branch l2 r2)  =  (l1==l2) && (r1==r2)
  _              == _               =  False

b) You get it implicitly based on rules when you want the compiler to do the obvious thing:

 data Foo = Foo {x :: Integer, str :: String}
    deriving (Eq, Ord, Show) 

Equality

Maybe we don't agree on what we want from an equality operation? Here's a part of my wish list:

1) All equality operations defined by default should coincide with observational equality.

2) As a corollary of point 1, some types should have equality defined by default, but not all. For example, integers should have equality, but functions shouldn't have equality by default, because observational equality of functions is undecidable.

3) There should be a way to say things like "this method accepts two objects of any type T, provided that T has equality".

4) There should be a way to say things like "if type T has equality, then type List<T> has equality that is derived from equality of T".

As far as I can tell, C# seems to fail on all these points. (I'm not completely sure about point 3, it's a bit tricky in the presence of inheritance. It seems to hinge on F-bounded polymorphism and wildcards.) On the other hand, Haskell type classes succeed on all points, and I think you can also make a design based on implicits that will succeed on all points. (Scala seems to fail on points 1 and 2, but that's more due to Java legacy.)
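
For what it's worth, points 3 and 4 are straightforward to express with implicits; here is a rough Scala sketch with a made-up Eq trait:

  object EqSketch {
    trait Eq[T] { def eqv(a: T, b: T): Boolean }

    // Point 2: integers get equality; no instance is provided for function types.
    implicit val intEq: Eq[Int] =
      new Eq[Int] { def eqv(a: Int, b: Int): Boolean = a == b }

    // Point 4: if T has equality, then List[T] has equality derived from it.
    implicit def listEq[T](implicit e: Eq[T]): Eq[List[T]] =
      new Eq[List[T]] {
        def eqv(a: List[T], b: List[T]): Boolean =
          a.length == b.length && a.zip(b).forall { case (x, y) => e.eqv(x, y) }
      }

    // Point 3: "accepts two objects of any type T, provided that T has equality".
    def distinct[T](a: T, b: T)(implicit e: Eq[T]): Boolean = !e.eqv(a, b)
  }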

Depends on the meaning of List<T>

if type T has equality, then type List<T> has equality that is derived from equality of T

Of course, it's a very minor nitpick, but this is true (for the usual recursive definition of lists) if we take the least, but not if we take the greatest, fixed point. (At least, assuming that you want equality to be decidable, which your (2) suggests you do.) For example, something like

let x = 1:x in x == x

will not return.

drupal is funny

"overloading in c# is dead"
vs.
"overloading in c# is dead simple"

You can't dispatch on return type with virtual methods

In Haskell, the standard library has a function

read :: Read a => String -> a

which says that you can deserialize any type implementing the Read interface. Because it dispatches on the return type, you can't implement this with OO-style dynamic dispatch.

It's a surprisingly big win to be able to parse strings into user-defined types using the same interface as for builtins. Writing something like:

   read "([('a', 1), ('b', 2), ('c', 3)], False)"

is just vastly more understandable than the non-typeclassy alternative

  readPair (readList (readPair readChar readInt)) readBool "([('a', 1), ('b', 2), ('c', 3)], False)"

And that's just one typeclass! From a purely type-theoretic POV, I don't much like typeclasses, but I think there's no question that they are one of the important language design innovations of the last 20 years.
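
For what it's worth, implicits can express the same return-type-directed dispatch; here is a rough Scala sketch with a made-up Read type class and a deliberately toy parser:

  object ReadSketch {
    trait Read[A] { def read(s: String): A }

    implicit val readInt: Read[Int] =
      new Read[Int] { def read(s: String): Int = s.trim.toInt }

    // Toy format: comma-separated elements, no nesting or escaping.
    implicit def readList[A](implicit r: Read[A]): Read[List[A]] =
      new Read[List[A]] {
        def read(s: String): List[A] =
          if (s.trim.isEmpty) Nil else s.split(',').toList.map(r.read)
      }

    def read[A](s: String)(implicit r: Read[A]): A = r.read(s)

    // The instance is chosen from the expected type at the call site,
    // just like Haskell's read.
    val xs: List[Int] = read("1, 2, 3")
  }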

Modular Type Classes

I've been reading Modular Type Classes, an older article attempting to combine modules and type classes (it was even briefly discussed on LtU).

In it, type classes are elaborated as modules (similar to how they are encoded as dicts/vtables in GHC), and can both be implicitly built/combined and implicitly selected at method calls. However, the rules are somewhat more limited than in Haskell, and modules must explicitly be made eligible for automatic selection. I haven't finished it yet, but it looks promising.

Proof search is unavoidable

I reject the argument that type-classes are bad "because they introduce another computation mechanism (logic programming)". Recursive instance search is a restricted form of proof search, which indeed has non-trivial computational power. But why should we reject this as a programming system feature? On the contrary, I believe that we will have *more* forms of proof search in the future (than the mere type inference we have in ML), as we get better tools to express program specifications and to satisfy them automatically.

We should work as hard as possible to make sure that expressive proof search, when it is used for code inference, does not make code inference itself unreliable (by asking for a strong coherence property: there is no ambiguity in what gets inferred). But we shouldn't take computationally expressive proof search as a bad thing per se.

The Agda designers' argument in the "instance search" paper linked above is quite different. They don't say that they reject proof search, but that they made the choice to relegate it to the IDE. This is a perfectly reasonable choice that mirrors Sean's remark that the IDE should play a major role for implicits (by showing the inferred terms, and more generally mediating between the parts of the program written by the programmer and the parts provided by the system).
But in this case the language itself and the IDE form a single "programming system" we should design semantics for. I don't think this changes the nature of the problem or the solutions one should consider and evaluate.

I think the idea is not that

I think the idea is not that proof search is bad, but that introducing another computation mechanism for it is bad. Ideally the proof search procedure would itself be expressed within the language.

This is similar to the idea that C++ templates are bad because they introduce an additional computational mechanism (function application f(x) vs template application F<x>), but dependent types are good because they have the same language at the type level and at the value level (f x and F x). That makes the system as a whole both more powerful and simpler.

Dependent types seem problematic

Firstly, I like different paradigms for different levels. For value programming I am concerned with efficiency and how things are done. For type-level programming I don't want to be concerned with the how, just the what. So for me an imperative language with full bit-level memory access makes sense for the low level, and a logic language makes sense for the high level. A functional language is a compromise that is ideal for neither.

Dependent types break the separation between compile time and runtime. I want as small a runtime system as possible, so including the compiler in the runtime is no good.

Dependent types map to higher-order logic, so I would probably choose lambda Prolog over dependent types if I wanted everything in one language. I am not convinced that higher-order logic is necessary in Prolog, however.

Misframed

Dependent types break the separation between compile time and runtime. I want as small a runtime system as possible, so including the compiler in the runtime is no good.

Since that separation doesn't really exist, it's hard to break. Dependent types don't require including the compiler in the runtime; they require that the static, compile-time compiler emit checks that are performed at runtime.

The thing that makes us push the compiler into the runtime is the combination of some form of generics (including parametric polymorphism) with explicitly unboxed types. More precisely: these two features cannot be combined with any semblance of acceptable performance unless late compilation is performed on the target system. Up to the point where you admit explicit run-time loading features into the runtime, it's possible to do the necessary compilation at install time rather than run time. Run-time loading may or may not admit this solution, depending on what can be loaded and what static declarations the installation bundle is able and willing to provide.

Getting back to dependent types, I don't think the real issue is the emission of checks. I think the real issue is that we, as programmers, don't have a clear understanding of when those checks are emitted. That is: we don't know when static requirements were able to be discharged. This defeats our mental model of performance, and it also defeats any ability to assess whether the code is dynamically exception-free.

I'm inclined to appreciate the power that we get from dependent types, but as a systems-oriented programmer I definitely need a mechanism in the language that lets me say when static discharge failures should be treated as static errors.

What checks do dependently

What checks do dependently typed languages need to emit?

?

The thing that makes us push the compiler into the runtime is the combination of some form of generics (including parametric polymorphism) with explicitly unboxed types. More precisely: these two features cannot be combined with any semblance of acceptable performance unless late compilation is performed on the target system.

No, this isn't true. The two things that make compilation of generic and higher-order code more difficult than compiling (say) C are that (a) polymorphic types can quantify over datatypes of differing sizes, and (b) instantiating a polymorphic type variable can change the arity of a function. But it's relatively easy to set up a type system which tracks the sizes of types, and restricts quantification so that arities don't change. For instance:

κ ::= n 
P ::= α | void | P + P | unit | P × P | Ptr(P) | Delay(N)
N ::= ∀α:κ.N | P → N | Val(P)

Here, we take our kinds κ to be sizes, which for simplicity I'll just take to be a natural number of words. The positive types P are datatypes, and are of varying size, and the negative types N are the type of computations.

The idea is that sums and products will be stored in an unboxed way, so that a pair P × Q will consist of some words for P, followed by some words for Q's representation. Sums are tagged, so that a P + Q gives you a tag word for the sum plus the words for P or Q respectively.

We'll use the pointer type Ptr(P) to represent a boxed value of type P. The idea is that this will always be 1 word, since it's a pointer. The type Delay(N) is a suspended computation. It's also taken to be 1 word, since in general it has to be a closure.

The negative types are function types P → N, a computation type Val(P) which will produce a value of type P, and a polymorphic type ∀α:κ.N. Essentially, a whole negative type is an n-ary function, possibly polymorphic; it's just that having a notion of negative type lets us give rules without having a big ugly composite function type. The key point is that type polymorphism only ranges over the positive types of a given size κ, so instantiating a quantifier will never change the arity of a negative type, and you can never substitute a type of a different size for a given type variable.

The net result is that you can read off the layout information from a type, just as you can in C, even though we've got polymorphism and higher-order functions in play. Because of this, you can dynamically load and link code without having to do any troublesome recompilation.

Operationally, I've described a language halfway between ML and C: it assumes garbage collection, but still lets you control memory layout of values. Adding some linearity and regions would let you relax the gc assumption as well. (Though I kind of like the name gC for this language...!)

We can formalize this a little with the following kinding rules:

Δ ::= · | Δ, α:κ

Δ ⊢ P : κ
Δ ⊢ N

α:κ ∈ Δ
————————
Δ ⊢ α : κ

                      Δ ⊢ P : n    Δ ⊢ P' : m
————————————          ———————————————————————
Δ ⊢ unit : 0           Δ ⊢ P × P' : n + m


                      Δ ⊢ P : n    Δ ⊢ P' : n
————————————          ———————————————————————
Δ ⊢ void : n           Δ ⊢ P + P' : 1 + n


  Δ ⊢ P : n                Δ ⊢ N
———————————————        ————————————————
Δ ⊢ Ptr(P) : 1          Δ ⊢ Delay(N) : 1


Δ, α:κ ⊢ N             Δ ⊢ P : n    Δ ⊢ N
———————————            ——————————————————
Δ ⊢ ∀α:κ.N                Δ ⊢ P → N


Δ ⊢ P : n
——————————
Δ ⊢ Val(P)

Not Sure.

So are we saying that partial evaluation is all that is necessary? Obviously you can easily implement an interpreter for a dependently typed language, but what would a compiler do when you have types that depend on run-time IO values, which are then used to select implementations of functions in polymorphic code?

I'm not talking about dependent types...

shap said that he thought that you needed runtime specialization if you have a language with both polymorphism and types of different sizes. I was illustrating how to get around that, by sketching the design of a language that has polymorphism, types of different sizes, and doesn't require runtime specialization (since you can read off layout information from types, even ones with free variables).

Dependent types are orthogonal to this issue.

Still don't see it

I must be missing the point. Can you explain how your system would type a function polymorphic in the size of its argument or return type? What would a separate compilation scheme look like for a function like that? (I don't mean to be snarky, I'm generally interested in this issue and presume that I've misunderstood.)

Also, I don't think shap was really specifically demanding runtime specialization, merely as much link-time specialization as you get with C++ templates. But maybe I misunderstood him too?

I'm doing something similar

I'm doing something similar (posted before on LTU too but I can't find it now). Polymorphic functions are compiled by actually passing a representation of the type in at run time. So if you have the function id x = x, it becomes id type x = x where type is a representation of the type of x. Then at run time the type is analyzed and the appropriate action is taken. For instance if type is an unboxed pair of two ints (int,int), then the id function will do the appropriate thing for taking the pair of 2 integers in the argument and returning the pair. The same thing even works for dependent pairs and dependent functions.

Then to actually make this fast there is a staging mechanism by which the programmer can specify which parameters are run time and which parameters are compile time. If you mark the type parameter of id as compile time then for each call site with a different type a different version of id will be compiled that is specialized to that type, similar to C++ templates.

Note that all of this is only for efficiency. You can also compile polymorphic/dependent types with type erasure, but then you don't get efficient value representations because everything needs to be pointer sized.
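
A rough Scala sketch of the shape of that scheme (Scala can't actually control unboxed layout, so this only shows the dispatch on a run-time type representation; all names are made up):

  object RuntimeReps {
    // A run-time representation of types, passed as an extra argument to
    // polymorphic functions.
    sealed trait Rep
    case object IntRep extends Rep
    case class PairRep(fst: Rep, snd: Rep) extends Rep

    // The representation tells the compiled code how many words a value occupies.
    def sizeOf(r: Rep): Int = r match {
      case IntRep        => 1
      case PairRep(a, b) => sizeOf(a) + sizeOf(b)
    }

    // "id type x = x": values travel as flat word arrays, and the representation
    // says how much to copy. Specializing on a statically known representation
    // would remove the dispatch, similar to C++ template instantiation.
    def id(rep: Rep, words: Array[Long]): Array[Long] =
      words.take(sizeOf(rep))
  }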

Hmm...

This sounds like a restatement of Shap's point: you either add a layer of dynamic dispatch (unacceptable performance) or else you need a way of generating specialized code for new call sites. I believe it's possible to do better than C++, but it seemed to me that Neel was making a stronger claim.

This sounds like a

This sounds like a restatement of Shap's point: you either add a layer of dynamic dispatch (unacceptable performance) or else you need a way of generating specialized code for new call sites.

It's not the call site itself, it's the polymorphic function. Polymorphic recursion will always incur at least one dispatch, but all other polymorphic functions simply need to be compiled for a conservatively large but fixed number of unboxed sizes. Anything more than 3 or 4 words per type probably doesn't make any sense to pass unboxed due to copying overheads. So 3-4 overloads per polymorphic parameter, assuming types must be aligned on word boundaries.

This results in an exponential number of overloads being generated, i.e. N polymorphic arguments yields 4^N overloads. For simple polymorphic functions with one or two parameters, that's reasonable. Most ML functions seem to have fewer than 4 polymorphic parameters in my experience, so 256 overloads in the worst case. So it's possible in principle, but it quickly becomes expensive.

Whole-program compilers like MLton use tree shaking to eliminate the unused polymorphic overloads.

I think shap's claim can simply be amended to, "you need link-time compilation for efficient polymorphism that avoids exponential code blowup".

You can generate

You can generate specializations on demand, so you don't need to generate all 4^n overloads, only those that are actually used. I don't think it's a problem at all in practice. In C++ ALL polymorphic functions are specialized, and while compiling C++ is slow, it's not because of template specialization (but rather because of the way include files let the same code get compiled thousands of times). I heard that in MLton they actually observed a *decrease* in the size of the final binary since typically (1) the monomorphic version is smaller and (2) the average number of specializations of a function is small.

Yes you need computation at link time, but I don't see how that's a problem. I'm not sure if that was shap's point either, since he said "but as a systems-oriented programmer I definitely need a mechanism in the language that lets me say when static discharge failures should be treated as static errors." Since current dependently typed languages already treat all discharge failures as static errors, they already do exactly what he wants.

You can generate

You can generate specializations on demand, so you don't need to generate all 4^n overloads, only those that are actually used.

Right, not problematic in practice, but doing so kind of violates separate compilation, i.e. some degenerate AST must be available at link time to generate the specialized version -- see Cyclone's template-based method to avoid building an entire compiler into a linker.

Yes you need computation at link time, but I don't see how that's a problem. I'm not sure if that was shap's point either, since he said "but as a systems-oriented programmer I definitely need a mechanism in the language that lets me say when static discharge failures should be treated as static errors."

I think shap was just making a general claim that expressive type systems make it possible to express and check strong constraints, but one may not always be sure whether the constraints have been proven to be satisfied purely at the type level, as opposed to having some runtime impact.

For instance, section 3.2 of Easy as PIE:

Program equivalence, and provable equivalence
The notion of type and term equivalence that we use in PIE is full β-convertibility (also referred to as intensional equality) taking into account the definitions in the environment. It is an invariant,
due to our effect analysis system (which includes termination checking), that β-reductions performed at compile time terminate.

However, β-convertibility is often not enough. For example, the type checker cannot establish the equivalence:

Eq (leq n n) True ≡ Eq True True

because leq n n is not convertible to True, although it is provably equal to True. For such examples we plan to experiment with using proof terms of equivalence as coercions, whenever automatic convertibility fails. Special care has to be taken to prevent such coercions from having runtime overhead.

The "special care" required is the problem.

Why do we want separate

Why do we want separate compilation anyway? Two reasons:

  1. Compilation time that scales with the size of the changes rather than with the whole codebase.
  2. Modular type checking: type correctness of clients of a module only depends on the interface of the module, not on its implementation.

You can still have both of these with this scheme. Also in my experience even with C/C++, link time optimization is one of the most impactful options you can enable in your compiler. So in practice we already don't even want separate compilation in that sense anyway.

Re: dependent types: yes, it is a problem that some values are only in the program to make the type checker happy. You probably need some mechanism to mark them to be erased at run time. In any case dependent types are very different from dynamic types, in that dependent types in principle need have no overhead at all, like ordinary static types and unlike dynamic types.

Re link-time code generation

I agree with your conclusion. Delaying final code generation to link-time in at least some cases is clearly already pretty common, and I think it's a fine solution.

whither eval?

?

duplicate


Can you explain how your

Can you explain how your system would type a function polymorphic in the size of its argument or return type?

This system statically forbids such functions. Each definable function takes an argument of fixed size, and in a polymorphic type ∀α:κ.N you are only permitted to quantify over types of the same size.

So a function ∀a:1. a -> a -> Val(int) is one that takes two arguments of size 1. You can choose any type of size 1, but you can't instantiate a with anything of size 2 or 3. The thing that makes this interesting is that pointers always have a size of 1 -- you can have a pointer to a large block, but it will be size 1. As a result, you can program with the sort of boxing discipline that languages like Ocaml or Java use, but you don't have to.

Furthermore, the calling convention of this language is basically like C's calling convention, despite the fact it supports polymorphism and HOFs.

Ok

Ok, then I didn't misunderstand. But it really seems like a cop-out in response to Shap's post. It seems to me that the whole difficulty is how to build efficient generics in the presence of non-uniform representation. This is something that C++ templates do very well, but obviously the drawbacks of that approach are well known. By statically forbidding generic functions of that kind, you're winning the game only by refusing to play.

Let's try again.

In order to compile the code it must first type-check (or you will be compiling invalid or undefined behaviour). If the types depend on runtime values you cannot determine if the program fragment can be compiled until you know what the values are so you can type-check it, and you don't know that until runtime. Hence you need the compiler available at runtime.

The point of dependent types

The point of dependent types is that whether the code will type check can be checked at compile time even when the types depend on run time values. This means that types can only depend on run time values in ways that are decidable by the type checking algorithm. So you most definitely do not need the compiler at run time.

Polymorphic Recursion

Do you need to be careful about polymorphic recursion? It leads to types that are dynamic at runtime, which means that not all types can be statically checked.

Are there any other difficult corner cases where telling whether or not it is safe to compile may not be easy? Is it decidable?

As I said, types can be

As I said, types can be statically checked even if they are dynamic at run time, just like you can check that n < 5 if n can be 2 or 3 at run time. The only thing that the compiler needs to check is that whatever the types may be at run time, T1, T2 or T3, the types are fine in either case. Polymorphic recursion in dependently typed languages is not different than polymorphic recursion in e.g. Haskell.

Most dependent type systems nowadays have decidable type checking, but that of course implies that there are some theorems that you can't prove with them.

In short there is absolutely nothing difficult about compiling a dependently typed language that would require a compiler at run time or something like that.

How do you handle...

I input an integer between 1 and N from the user, and use this to index into a heterogeneous list, extracting the element selected by the user and feeding it into a function like add. After the value is input we could type check the calculation and it would pass. Presumably it would fail to type check statically because we cannot prove that every element in the heterogeneous list is 'addable'?

If most programs where types depend on runtime values cannot pass typechecking, why have types that depend on values at all? Why not just lift all statically determinable values from the value level to the type level, and have types that only depend on types (refinement types)?

Type checking works by

Type checking works by checking types in a finite number of contexts, statically. Those contexts can have type variables, though, and through polymorphic recursion might expand into infinitely many monotypes. Or with dependent types, by e.g. having a type family that's indexed over Nat.

The useful property of type checking is that by statically checking the finite set of contexts, you're guaranteed that everything will work out during evaluation.

I don't really understand your second paragraph. How are you proposing to represent a type that says "square matrix of size NxN" where N is a value just read from IO?

Yes it will fail to type

Yes, it will fail to type check because it's actually type incorrect, since it would fail at run time...? A better example would be two heterogeneous lists, one list a[i] and the other list f[i], where if a[i] has type X then f[i] has type X -> string. Then you could input an index i from the user, and do f[i](a[i]), and you're guaranteed to have a string even though the intermediate types depend on the run time value of i.

Lifting values to the type level doesn't give you the same power because then types can't depend on run time values at all. As I said, types depending on run time values is fine, as long as the compiler can check that whatever that run time value happens to be, the types are correct.

In a dependent type system,

In a dependent type system, what you check at compile time is that the function you write can handle whatever value will be passed to it at run time. This means you have to write a proof that you give to the compiler to enforce this invariant. But this is irrelevant!

If your type system does not track the sizes of values (as the type systems of Python, ML, Java, Haskell, Agda, and so on do not), then the language must either do some dynamic tests at runtime, or use a uniform (e.g., boxed) representation of values. Note that the list of languages I gave ran the gamut from dynamically typed to dependently typed!

If the type system does statically enforce a size discipline, then you can generate code that is polymorphic in type, but monomorphic in size. This is true regardless of whether the type system is dependent or not.

BTW, just tracking the size

BTW, just tracking the size is not enough if you want to have a GC. The GC needs to know where the pointers are.

Looks like neelk's comment

Looks like neelk's comment here was badly mangled in the recent conversion.

The encoding got screwed up indeed

The conversion uniformly treated old Unicode comments as if they were Latin1. Given PHP's (non)support for charsets, that's unsurprising.

iconv to the (partial) rescue

iconv -c -f utf8 -t os2latin1

Hm

Given the way proof search is used for code inference (it isn't a property of the implementation), I don't see how you could express it without giving rise to a computation mechanism. If I'm allowed to let the compiler try to fill a piece of code with type (Eq (List a)), or fail if it can't, how do I prevent people from also using it to fill a piece of code of the form AddRel (a, b, c) (building a witness that a+b = c), or fail if it can't? And is the latter actually a bad thing? Even if the proof search is implemented (and maybe tunable/refinable) in the language itself, the observed behavior is proof search, which is a computational model.

Why exactly is it bad to have two different computational models, with rewriting at the term level and a form of proof search for code inference? I do get the "having only one makes for a simpler language" argument, but then there are many other concepts that you could remove for simplicity, and decide to keep for the extra expressivity or productivity. Don't we expect that programmers in our language will have the power to express and use logic-programming abstractions for particular problem domains where it fits? Why should language designers be forbidden from making the same choice in the parts of the "programming language design" domain where it fits?

My point was that we should accept the new computational power, yet ensure (through careful study of the design's properties) that it does not distract from the intended use or actively hurt the language's good properties. I suppose the same point could apply to your other example, C++ templates: is the real problem the fact that you can express static computations, or the fact that the feature is very complex, fragile and difficult to use?

C++ templates are complex,

C++ templates are complex, fragile and difficult to use *because* they are a separate language. You have to do incredibly complex magic to replicate things that would be trivial in the main language. You see some of the same with Haskell type class trickery.

I don't have any good solution for this, but on the other hand it's not too difficult to imagine that the same language could be used for type class / implicit value computations and main language computations. A type class Foo is essentially a function from a tuple of types to a type class dictionary: getFooDictionary : TypeDescr -> Maybe FooDictionary, where TypeDescr is a representation of the types. For example:

getEqDictionary Int = return {eq n k = intEq n k}

getEqDictionary (List a) = 
   do aDict <- getEqDictionary a; 
      return {eq as bs = ... compute list equality using aDict.eq for element equality...}

Of course this is far from the whole story; this example is only to show that the idea of the computation mechanism for type classes being the same as for the main language is not preposterous.
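
The same idea transliterated into Scala terms, to emphasize that instance construction can be an ordinary function over type descriptions (TypeDescr and EqDict are made up, and the dictionaries are left untyped for brevity):

  object TermLevelResolution {
    sealed trait TypeDescr
    case object IntDescr extends TypeDescr
    case class ListDescr(elem: TypeDescr) extends TypeDescr

    case class EqDict(eq: (Any, Any) => Boolean)

    // Ordinary structural recursion over the type description, in the main
    // language, mirroring getEqDictionary above.
    def getEqDictionary(t: TypeDescr): Option[EqDict] = t match {
      case IntDescr =>
        Some(EqDict((a, b) => a == b))
      case ListDescr(e) =>
        getEqDictionary(e).map { d =>
          EqDict {
            case (xs: List[_], ys: List[_]) =>
              xs.length == ys.length &&
                xs.zip(ys).forall { case (x, y) => d.eq(x, y) }
            case _ => false
          }
        }
    }
  }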

How vs What

C++ templates are fragile because they have no simple semantics behind their operation, and because they are not really a language in their own right (well not one you would want to program in anyway). I don't think this could be compared to a well designed system with separate languages for value and type level programming.

Type class 'programming' is a kind of restricted logic programming. There already exists a language where this is used at the value level as well, and that's Prolog (and other logic languages). You end up with something simpler by taking first-order logic from the type level down to the value level than by taking lambdas up to the type level, which leads to higher-order logic and undecidable unification (although lambda Prolog, and dependent-type systems based on it like Twelf, use pattern unification, which is a decidable subset).

Is C++ spinning out of control?

I've been looking at a number of talks on C++ recently, and I can't say I really understand the language. But the proliferation of l/rvalues recently proposed, the various pointer types, the introduction of auto and decltype, and the sluggishness of template compilation leave an underwhelming impression in comparison to the old C++ I once learned...

Then again, with a lecturer background I guess I favor short, simple languages with clear semantics. Anyone here, like Keean, who would like to comment on whether Stroustrup still has it under control?

C++ under control for now.

If I compare the C++ I was programming 10 years ago to modern C++, it has got a lot better - template generics (STL), value semantics, move semantics, RAII, etc. It's not perfect, decltype and auto are a poor substitute for type inference, but as they cannot break backwards compatibility, it's the best they could do. I'm hoping 'concepts' get into C++14, as these are effectively Haskell type classes, but statically evaluated. I don't want to critique C++, as it would end up being a manifesto for my own language development.

I have never found template compilation to be sluggish, so I am not sure what is going on there?

It was something Stroustrup acknowledged in a talk

I think it was a talk at Google but I forgot the link. My gut feeling, without understanding the template system but recognizing it as 'a type driven macro system,' was that they might have delegated too much of what should be the compiler's job to the template system.

But I really can't say I understand the language that well, or anymore, I guess.

Templates vs Compilers

I am not sure about that. I think you can reflectively expose the compiler in the meta-language, so the distinction between compiler and meta-language disappears.

User-defined optimisations are getting close to what the compiler does, but I think they are actually the right idea. This way library authors, who are domain experts, can code optimisations into the libraries. I think the idea of general-purpose compiler optimisation producing the best code for all problem domains is overly general and will never work.

spin control

When change is not in response to demand from industry devs, motivation may be idiosyncratic ("I find adding xyz entertaining") or political ("Now I can achieve the next step in my master plan") or patronizing ("An average dev is now too dumb to grasp low level"). None of those are good. It's hard for me to tell whether the plan is to kill C++, so folks move to languages more under vendor control.

... and I can't say I really understand the language.

Me either; I have no plans to keep up. I used C++ as my main language from 1990 to 2010, but changes making it more high level force me to abandon it, in favor of tools where I can know exactly what is happening. I can't say going back to C is fun, but it greatly reduces surface area letting other folks mess with you.

C++ standards committee

Well. In the same talks the internal workings of the standards committee were discussed. I don't think you've got to be afraid that they'll derail because of that.

But I found the trend towards a rather baroque language with, it seems, major breaking decisions between versions rather worrying. I guess it all makes sense if you're a hardened C++ developer, or compiler writer.

No idea. Really.

oh, maybe they are above reproach

(You argue committee members are never idiosyncratic, political, or patronizing? :-) The C++ standards committee seems incorrectly named. What they're doing is steering, like C++ is a movement, or revolution, that needs guidance from senior party members who know better than the poor disenfranchised C++ workers of the world.

While it's not like I really care, people who can affect lives of others should be more susceptible to criticism of their efforts. When occupying a position of high respect, if you do something really stupid, a little mockery might do you some good, as a sort of reality check. We need a short film featuring Groucho Marx on the committee. (Hmm, that should get a lot of Youtube hits.)

Duplicate post

-

Hm

I partially agree with Keean Schupke on the C++ templates point (I originally replied to him but prefer to avoid extra nesting). I heard that D has a better-designed language for static computation that is vastly simpler than hacking C++ templates (and still gives reasonable generics); that could be an example, but I don't know much about it.

Where I disagree with Keean is on his conclusion: "we have logic programming at the type level, therefore we should also have it at the value level". I agree that would give a more unified system, but my point is precisely to defend the opinion that it is ok to have different computation models for specialized things. I think that computation-as-reduction (lambda calculi and functional programming languages) is a good way to write type-level programs, and that proof search (which is taken as the computational system of logic programming) is a good way to express code inference, and I want to have both, and not impose a single system to do both things.

(I don't claim that functional programming is fundamentally better than logic programming at the term level. I don't know, but what I'm interested in working on is a functional programming language, with some form of code inference.)

More precisely, I think a dream system could have:

- a functional programming model at the term-level (classic) and at the type-level (F-omega, Coq)
- a specialized, restricted proof search automation for code inference (with strong guarantees on non-ambiguity of inference)
- but such automation would be embedded in a more general/expressive, programmable "tactics" language to express proof search in the general sense (more flexibility, calling external solvers and what not, but no guarantees as to which proof terms get inferred)

I Agree

I was just pointing out that Prolog already unifies the type and value levels in a single language that is significantly simpler than dependent types, and that dependent types are really just higher-order logic and hence isomorphic to lambda Prolog (which is used as the core of Twelf). You can already program different search strategies as meta-interpreters in Prolog too.

I wasn't suggesting that this is what I want; personally I think it's better to have different languages with different computation models at the different levels.

qi/shen uses prolog and can do dependent types

just a random link that relates to such buzzwords :-) ... https://groups.google.com/forum/#!msg/Qilang/EkdF25yRrNM/sOuRYN2s85IJ :-)

In theory it's not bad to

In theory it's not bad to have an additional system of computation, but in practice the sublanguage tends to accrete more and more features and complexity, because there are things that you can't express, or can't conveniently express, in a limited sublanguage. Often these features more or less directly duplicate things that are already in the main language (e.g. enable_if in C++ and type functions in Haskell). A specialized sublanguage that is built on a general language is totally fine, because you can always escape to the general language when something is not (easily) expressible in the sublanguage. For instance you could do logic programming with a monad or similar, and then have sugar on top of it that gives you the convenient syntax of type classes.

On the more concrete topic: is proof *search* really what we want for type classes? Aren't most type classes defined by structural recursion on the type structure (like the Eq example I gave)? Also, we want it to be unambiguous what code the inference produces. Structural recursion on the types gives that by construction, unlike logic programming. That would suggest that functional programming may be a better model for type classes than logic programming. Genuine proof search is a different story, since we just want to find a proof and we don't care which proof we find.

Ad hoc instances

Aren't most type classes defined by structural recursion on the type structure (like the Eq example I gave)?

Eq is often defined using structural recursion ("deriving"), but sometimes you want an ad hoc implementation of Eq for your type, e.g. if you have an interface type (record of functions) for which the default implementation of Eq would be undecidable. And there are other type classes whose instances are mostly defined ad hoc instead of using structural recursion, like Num or Monad.

Open cases, but still structural recursion

The cases you're describing are still structural recursion. It's just that the cases are spread out. In the case of OverlappingInstances, you also have rules to decide which pattern is most specific.

Haskell's design disallows backtracking and AFAIK instance "search" can be recast as open functions that pattern match on type level data.

ambiguity

I guess the difference is that Jules intended his system to be unambiguous by construction. I'd love to see a simple design for Eq, Num etc. that would be unambiguous by construction...

additional systems of computation

In theory it's not bad to have an additional system of computation, but in practice the sublanguage tends to accrete more and more features and complexity, because there are things that you can't express, or can't conveniently express, in a limited sublanguage.

More or less the central theme of both my master's thesis on RAGs and my dissertation on fexprs. If you start with a lucid system of computation, and then introduce another system for some things on the margins that you can't do with the first system, you end up relying more and more on the secondary system, and lose the benefit of the lucidity of the first system. The example I used in my master's thesis (borrowed from Henning Christiansen, iirc) was of an attribute grammar for Ada — as attribute grammars augment the lucidity of CFGs with opaque semantic functions — in which a single CFG rule had several pages of semantic rules attached to it. Similarly, Lisp macros (and special forms) are outside the elegant lambda-esque function-call structure of Lisp. C++ templates are just another example.

More precisely, I think a

More precisely, I think a dream system could have:

- a functional programming model at the term-level (classic) and at the type-level (F-omega, Coq)
- a specialized, restricted proof search automation for code inference (with strong guarantees on non-ambiguity of inference)
- but such automation would be embedded in a more general/expressive, programmable "tactics" language to express proof search in the general sense (more flexibility, calling external solvers and what not, but no guarantees as to which proof terms get inferred)

I'm working along similar lines (very slowly) in Coq.

I think a lot of the debate in this area stems from whether the inferred term is computationally relevant or not.

It's generally a bad idea to, for example, infer a sorting function, since its performance may be terrible; we could mitigate this by enriching the type, but having to deal with such complicated and verbose types may outweigh the benefits of inferring the term to begin with!

Non-modularity of inference

It's useful to infer values of type (Eq (List a)), but the problem is indeed with modularity, as shap described in a different comment.

For example you could have three modules such that any two of them can coexist, but combining all three makes inference ambiguous. The simplest case I can think of is to have one module define an implicit conversion from typeclass A to typeclass B, another from B to C, and the third from C to A. That means your language suddenly needs whole program type checking, and if you have three different third-party modules that exhibit this problem, who do you blame?

We could require programmers to explicitly import all values and functions that should be used for inference within the current module. But that replaces the problem with another one, namely that inference can give different results in different modules, like silently using two different equality comparisons for the same type. That could lead to some interesting debugging sessions. I'm not sure which alternative I prefer.

In OO languages like Java, people don't have such problems, because every class explicitly says which interfaces it implements, and other modules can't affect that. Maybe we could extend that approach to support some form of inference as well, e.g. let the class List<T> say that it implements Comparable if and only if T implements Comparable. That wouldn't be as extensible as full-fledged implicits and conversions, but maybe the reduced potential for confusion makes up for it.

Prove coherence

My solution would be: whenever you build a scope where both a conversion from A to B and a conversion from B to A are derivable, the system must check that composing them (to get a conversion from A to A or from B to B) gives the identity function -- possibly by asking the programmer to provide a proof of that. If that is not the case, there is an error; to avoid it, you must explicitly hide one of the conversions before introducing the other in scope.

(Of course such proofs are not very practical today and that's why language designers go for various restrictions to avoid this. There are interesting choices being made, but "at the limit" this strong coherence guarantee is what we want to have.)

Are you sure about the "scope" part?

Even if we had your solution implemented today, it wouldn't completely solve my problem, because avoiding ambiguity is only half of it. For example, the expression "x == y", where x and y both have type Foo, could silently infer different implementations of Eq(Foo) depending on what's imported into the scope where the expression occurs, while being completely unambiguous in each scope. You might say that programmers should learn to live with that, but I think they shouldn't have to. I suspect that we can come up with restrictions that would solve that problem, and these restrictions might also remove the need for coherence proofs in the first place. In any case, it's hard for me to believe that the design space of "languages with overridable equality but without full proofs" has no sweet spots.
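
To make the scope-dependence concrete, here is what that looks like in current Scala, using the standard Ordering type class (Foo and the instance names are made up):

  case class Foo(n: Int, s: String)

  object ByNumber {
    implicit val fooOrd: Ordering[Foo] = Ordering.by((f: Foo) => f.n)
  }

  object ByName {
    implicit val fooOrd: Ordering[Foo] = Ordering.by((f: Foo) => f.s)
  }

  object ModuleA {
    import ByNumber._
    def smallest(xs: List[Foo]): Foo = xs.min   // resolves to the numeric ordering
  }

  object ModuleB {
    import ByName._
    def smallest(xs: List[Foo]): Foo = xs.min   // same expression, different instance
  }

Each call site is unambiguous, yet the two modules silently disagree about what the "smallest" Foo is.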

Unbounded proof search really IS bad

I see what you're saying, but I don't think it's that simple.

A lot of type system features have been explored in the last 20 years that introduce unbounded search in the type checker. When this occurs, or even when the type resolution algorithms become sufficiently complex, you start to find distinct implementations in different compilers. This, in turn, has the consequence that two conforming compilers may not agree about whether meaning can be assigned to a particular program. Differences in the memory available to implement the same algorithm can have a similar effect. In either case, pragmatically, a program that compiles fine for you may not compile at all for me.

In such compilers, arbitrary limits are imposed (of necessity) on the search process, and the search algorithm effectively becomes part of the language definition. Breadth-first or depth-first (to give one example) matters. A given compiler may implement alternative or enhanced searches, but there needs to be some common search algorithm on which the developer can rely for predictable program admission. This is especially true when late compilation may occur in the field, beyond the sight or recourse of the developer. I have yet to see a language definition for such a language that attempts to address the specification of the search algorithm. Mindful of the obscurity of formal notation in the eyes of most practitioners, I'm not clear on how we might go about that with any rigor in a practically effective way. This is one of the issues that troubles me in admitting dependent types into BitC, if we ultimately do so.

I'm not even convinced that we can practically reduce this problem to executing a proof checker (as opposed to re-executing the search). The concern is that proof searches do not respect the modularity or encapsulation boundaries of the source program[s], and so it is difficult to know how to emit a checkable proof for a program fragment (such as a library).

We need to explore these type systems, but their pragmatics seem troublesome.

Separate Concerns

Some languages, such as Typed Clojure, separate the tasks of typechecking code and running it... i.e. the code can run without such checking, using dynamic types or assumptions of type correctness. If you perform a quick check before running, you're only doing an extra sanity test, or supporting a few extra optimizations. Rather than waiting for a positive result (program has meaning), you're seeking a non-negative result (no obvious errors or inconsistencies).

For a critical system, where runtime failure must not occur, presumably code could be annotated or provided together with a proof-trace (generated elsewhere). That, or a simple cryptographic signature, indicating a proof was achieved by someone you trust. Such techniques would allow, for example, expensive proofs to be performed on the cloud before using the code on small devices with a bare minimum checker/compiler.

A potential advantage of this separation is that the proof strategies can be specific to programs or common usage patterns. Developers who have difficulty checking a particular program that they know is correct (via non-conventional reasoning) can go work on the typechecker a bit. Development of the de-facto common prover can happen at a different pace from development of the language, can perhaps even have its own libraries or plugins.

The disadvantage with this, of course, is that you cannot at compile-time (or later) infer code from the type description. Implicits, typeclasses, etc. are infeasible if there is no standard principal typing. But development-time (IDE) search would still work.

The Implicit Calculus: A New Foundation for Generic Programming

The following may be of interest: The Implicit Calculus: A New Foundation for Generic Programming, Bruno C. D. S. Oliveira, Tom Schrijvers, Wontae Choi, Wonchan Lee, Kwangkeun Yi, Philip Wadler. Draft paper, 2014.

Thank you

Thank you for mentioning this. May I ask you, how is this draft different from the 2012 paper with the same title [1]?

[1]: http://users.ugent.be/~tschrijv/Research/papers/pldi2012.pdf

From section 3.5:The rules

From section 3.5:

The rules for deterministic resolution presented in this paper support all the examples
described in Section 2. They are strictly more powerful than the rules presented in the
conference version of the paper [Oliveira et al. 2012]. In other words, strictly more
queries resolve with this article’s rules than with the rules of the previous paper. For
example, the query: