Overloading in a dynamic functional language

It looks like I have a little problem with overloading in my own programming language, Ela. (There is a prototype here if you are interested: http://groups.google.com/group/elalang/browse_thread/thread/ee93a6de62533b22)

What we have:
* a functional language (not a pure one but pretty oriented towards pure functional programming)
* a dynamic language

Without overloading it is not even possible to implement your own numeric type and support the standard operators for it, such as "+", "-", etc. So it seems like a very handy thing to have. However, for a dynamic functional language it is not really clear what the right approach is.

The requirement is to have overloaded functions as first-class functions (first-class values).

I have one solution, but it looks like I have actually created solid ground for side effects. This is how it works:

open Con

let foo x = "foo1"
 on Any

let _ = writen (foo 12)

let foo x = "foo2"
 on Int#
let _ = writen (foo 12)

The first function is overloaded for "any" type, the second only for integers. Overloads are dynamic and are processed as regular bindings (in the order of execution). And here is the result: when I call the "foo" function before the second overload, the first implementation is called, but after the second overload the second implementation fits best and gets called. The function invocation code is the same in both cases, as you see.
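Here is a small sketch of that mechanism in Python (not Ela); the helper names `define` and the guard lambdas are illustrative, not part of Ela. Each overload definition appends a (guard, implementation) pair to a shared registry, and a call scans newest-first, so resolution depends on execution order:

```python
# Global, mutable overload registry: the source of the side effects.
overloads = []

def define(guard, impl):
    """Register a new overload; note that this mutates shared state."""
    overloads.append((guard, impl))

def foo(x):
    # Newer overloads are tried first, modelling "fits best" above.
    for guard, impl in reversed(overloads):
        if guard(x):
            return impl(x)
    raise TypeError("no matching overload for foo")

define(lambda x: True, lambda x: "foo1")                 # on Any
r1 = foo(12)                                             # -> "foo1"
define(lambda x: isinstance(x, int), lambda x: "foo2")   # on Int#
r2 = foo(12)                                             # -> "foo2"
print(r1, r2)
```

The same call expression `foo 12` yields two different results purely because a definition was executed in between, which is exactly the side effect worried about below.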

I have a strong feeling that this is a problem. What do you think?
Also, what are other ways to do "pure" overloading, if I want to preserve regular binding semantics?



Open datatypes in a dynamic language

It seems the essence of the problem is your open set of overloads in a dynamic language. Can you live without the openness? Maybe with something like this:

let foo1 x = "foo1" on Any

let foo2 x = "foo2" on Int
  or foo1

let foo3 x = "foo3" on String
  or foo1

let foo4 x = "foo4" on Float
  or foo2 or foo3 -- Merge these other definitions into an overload set
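A possible reading of this closed-set idea, sketched in Python (the `overload` helper and guard functions are hypothetical): an overloaded function is an immutable list of (guard, implementation) clauses, and `or` merges the clauses of existing definitions into a new value without mutating them.

```python
def is_any(x):
    return True

def overload(guard, impl, *merged):
    # An overload set is a fixed list of (guard, implementation)
    # clauses built once, at definition time.
    clauses = [(guard, impl)]
    for other in merged:
        for clause in other.clauses:
            if clause not in clauses:   # merging dedupes shared clauses
                clauses.append(clause)
    # Try specific guards before the Any catch-all.
    clauses.sort(key=lambda c: c[0] is is_any)
    def dispatch(x):
        for g, f in dispatch.clauses:
            if g(x):
                return f(x)
        raise TypeError("no matching overload")
    dispatch.clauses = clauses
    return dispatch

foo1 = overload(is_any, lambda x: "foo1")
foo2 = overload(lambda x: isinstance(x, int), lambda x: "foo2", foo1)
foo3 = overload(lambda x: isinstance(x, str), lambda x: "foo3", foo1)
foo4 = overload(lambda x: isinstance(x, float), lambda x: "foo4",
                foo2, foo3)   # merge the other definitions
```

Because each set is closed at definition time, calling `foo4` always resolves the same way no matter what is defined later.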

Do you mean that if I call

Do you mean that if I call foo4 with Int it will delegate a call to foo2?

Well, you can do such a thing without overloading, by explicitly calling another function. The whole idea is to reuse existing names for your own functions; you could call these "type-indexed" functions.

Currently you can't define arithmetic operators for your own numeric type (well, you can, but you will shadow the standard operators), nor can you write, say, your own implementation of 'toInt' that converts your own type to an integer. Your proposal doesn't seem to solve that, if I understand it correctly.

Two ideas

1) Namespace management. Instead of naming the symbols foo1, foo2, etc., you can have A.foo, B.foo, etc., where A and B are different namespaces. I'm not a fan of "import polymorphism", though.

2) Do what I've suggested implicitly, so that each redefinition of a symbol rebinds that symbol to a larger overload set. As long as the overloads are part of the function value rather than a separate mechanism that's constantly shifting as the overload sets change, I think you avoid most of the confusing problems:

foo x = "Any" on Any

bar x = foo x

foo x = "Int" on Int

print (bar 10) -- prints "Any"
print (foo 10) -- prints "Int"
print (foo "hello") -- prints "Any"
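Idea (2) can be sketched in Python (hypothetical `make_overload` helper): each rebinding builds a larger overload set as a *new value*, and anything defined earlier keeps the value it captured. Python's functions look up globals late, so a default argument is used here to model the value capture:

```python
def make_overload(clauses):
    clauses = tuple(clauses)            # immutable overload set
    def dispatch(x):
        for guard, impl in clauses:
            if guard(x):
                return impl(x)
        raise TypeError("no matching overload")
    dispatch.clauses = clauses
    return dispatch

foo = make_overload([(lambda x: True, lambda x: "Any")])

# bar captures the current *value* of foo at definition time.
def bar(x, foo=foo):
    return foo(x)

# Rebinding foo creates a larger set as a new value; the old one
# bound inside bar is untouched.
foo = make_overload([(lambda x: isinstance(x, int), lambda x: "Int")]
                    + list(foo.clauses))

print(bar(10))       # "Any"
print(foo(10))       # "Int"
print(foo("hello"))  # "Any"
```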

If you did this, then imports would probably need to merge overload sets on name collisions.

I do agree that the problem you're worried about is really going to be a problem in practice. I prefer static resolution of overloads, myself.

Rebinding a symbol is a good

Rebinding a symbol is a good idea, thanks. I need to play with it. The good thing is that it can be implemented as a quick fix to the current implementation.


I don't know if there's any existing work on this idea in a dynamic setting, but why not do as type classes do, and just pass a tuple of the operations on your specific type of number?

Let sum be a function that sums a list of numbers. You'd call it like this:

sum (myAdd,myMul,myZero,myOne) listOfMyNum

You can think of the tuple as the evidence that the numbers in the list are in fact numbers.

With appropriate syntactic sugar added, you could make this much less painful. For example, you could use a named tuple, and sugar for opening a named tuple so as to automatically bind its names. Then it would look like this:

let sum numClass list = open numClass { foldl (+) zero list }

/* here's where we "overload" */
let myNum = ((+) = myAdd, (*) = myMul, zero = myZero, one = myOne)

let list = /* a bunch of myNums */

sum myNum list
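This evidence-passing style translates almost directly to Python (the record keys `add`, `mul`, `zero`, `one` are illustrative): the "type class" is just a record of operations passed explicitly alongside the data.

```python
from functools import reduce

def sum_with(num_class, xs):
    # Corresponds to: open numClass { foldl (+) zero list }
    return reduce(num_class["add"], xs, num_class["zero"])

# Here is where we "overload": build an evidence record for our
# own number type (plain ints stand in for myNum in this sketch).
my_num = {"add": lambda a, b: a + b,
          "mul": lambda a, b: a * b,
          "zero": 0,
          "one": 1}

print(sum_with(my_num, [1, 2, 3]))   # 6
```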

You can surely do this right

You can surely do this right now:

let sum (+) x y = x + y

sum myAdditionFunc 2 2

With or without tuples.

As I understand it, you propose to explicitly create a vocabulary of functions, so your sample can be desugared to:

let sum (+, *, zero, one) list = foldl (+) zero list

It is an interesting idea; however, I am not sure that in a dynamic world it will be possible to desugar such a function at compile time. There should be a way to tell the compiler that "numClass" is actually a "typeclass" (and a specific one). Something like so:

let sum (numClass:myNum) list = open numClass { foldl (+) zero list }

First, "myNum" should be discoverable at compile time (luckily this is possible in Ela). Then we should take into account the situation where there may be several definitions of "myNum". And finally, this approach is not as flexible as my current implementation.

With the current implementation you can have a function "let sum x y = x + y" written ages ago, and as long as you have the right overload for "+" it will work for any types; it is not necessary to pass any "vocabularies" with functions.

It works because each function itself is a vocabulary of possible overloads. But the problem is that this is basically a mutable vocabulary that is altered at run time, and overloading creates side effects.


What you are describing sounds an awful lot like multimethods in the CLOS tradition. As you note, open multimethods are effectively mutable objects. Even if you resolve the set of all method declarations at load time (as a phase before actually executing code), unconstrained multimethods can compromise modularity.

The key to surmounting this is probably to make the operation of creating or amending a multimethod (as a set of methods) more explicit, and then making that operation a pure function. Matt M's example does this, or you could look at the paper "Modular Statically Typed Multimethods" which has a more object-oriented/inheritance flavor.

I'm not convinced that typeclasses are the right thing to emulate, since they still have to deal with overlapping instances. Usually this is okay for a language like Haskell, since you can pick the "most specific" instance consistently and statically. The nature of your language, though, is that a new "most specific" overload can come along at any time.

Thanks for the hint. The

Thanks for the hint.

The reason why I don't really want to go with typeclasses is that, to my understanding, this is a concept designed for a statically typed language. The problem is not only in overlapping instances. I can see quite a few problems besides it - named instances, for example, and the fact that in Haskell you basically have two kinds of functions: typeclass methods and regular functions. I don't really like that, especially when you don't have to pay such a price in a dynamic world.

If you add "real" typeclasses to a dynamically typed language you will basically end up with a mixture of static and dynamic typing. All typeclass declarations would have to be static and processed at compile time - and so would typeclass instances. I would have to annotate all functions by specifying a specific typeclass for each of the arguments. And as a result, functions would be less polymorphic. Currently, a function like:

let sum x y z = x + y + z

can have a different meaning depending on the types of the arguments. It can use a single implementation of "+" for both additions, or "(+):int->int" for the first one and "(+):int->float" for the second. And you can effectively "extend" the behavior of this function by adding new overloads.

The typeclass concept would limit this - and would require you to write annotations that are not required currently.
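The per-call flexibility described above can be sketched in Python, whose operator dispatch is also dynamic (the `MyNum` class and `sum3` are illustrative): within one unannotated function body, each "+" can pick a different implementation depending on the runtime operand types.

```python
class MyNum:
    """A user-defined numeric type with its own "+" overload."""
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        o = other.v if isinstance(other, MyNum) else other
        return MyNum(self.v + o)

def sum3(x, y, z):
    # Written "ages ago", with no typeclass annotations: each "+"
    # is resolved per call from the actual operand types.
    return x + y + z

print(sum3(1, 2, 3))             # plain int "+" twice
print(sum3(MyNum(1), 2, 3.5).v)  # MyNum+int, then MyNum+float
```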

So my point is that a Haskell-like typeclass concept is probably not a perfect solution for a dynamic language. It may be possible to "rethink" typeclasses in a way that best fits a dynamic language - but I'm afraid I have failed to do so.

Even if you eliminate "interfaces" (typeclasses) and just keep typeclass instances, as proposed in the example above, you still have the problem that this concept basically limits the functions. With a function like "let sum x y z = x + y + z" you would have to specify one specific vocabulary to use - there would be no way to have the first "+" as int->int and the second as int->float.