Alternatives to ADL?

I'd like to explore the topic of argument-dependent lookup (a.k.a. Koenig lookup), and in particular I'd be interested to find out what people think is the "state of the art" when it comes to resolving unqualified identifiers.

It's my opinion that one of the primary motivations for ADL is to support operator overloading. The function of ADL is to resolve unqualified names, and operators are essentially functions with unusual unqualified names. For functions that have regular names and which are not operators, it's often not a huge burden to explicitly qualify the name, or to manually import it into the current scope. But for operators this destroys their brevity and convenience.

There are many supporters and detractors of ADL and I won't try to repeat all the arguments here. The common theme is that ADL leads to surprise. However, one limitation that interests me is that the functions which operate on the arguments must live in the same namespace as the definitions of those types. This is not such a hardship in languages like C++, where there's no enforced relationship between the namespace hierarchy and the organization of modules and source files, and where any header file can append to any namespace. But for many modern languages that's not the case: the package hierarchy is much more rigid (often with good reason).

I can imagine scenarios where the co-location of operations and types within the same module would be overly restrictive - you might want to extend the set of operations for some set of types (think of a vector math library) without re-defining those types.

So what are the alternatives? I can think of several. The simplest one is to require that unqualified name definitions be explicitly imported into the module's namespace. This has the benefit of being easy to understand, but I'm not sure how well this scales to large programs - you risk polluting your namespace with a lot of symbols.

Another solution is to allow a more limited kind of namespace extension. Some languages have syntax that allows adding methods and definitions to classes only (not namespaces or packages), usually with keywords like "extends" or "extension". This requires that the unqualified names be defined within the class, rather than outside it, which usually implies that the enclosing class "owns" the definitions or at least has special status. In a single-dispatch world, of course, these definitions are methods, which means you run into the "binary operator" problem: who owns the definition of A+B, A or B?

A third approach is to give up on static scoping entirely and use dynamic dispatch - i.e. if there's a definition of A+B anywhere in the program, it will be used, even if it's not visible at the call site. However, while dynamic multi-argument dispatch is a valuable technique, it is not without its costs. One that I'm particularly interested in is the "plugin problem": if you are Adobe Photoshop and you load some filter plugin that happens to contain a new definition of the plus operator, does it suddenly change the behavior of your whole program? Or, at the opposite extreme, does the plugin contain its own sandboxed versions of all the operators and type definitions? Neither seems desirable to me - the plugin needs at minimum to implement some interface defined outside of the plugin, but its effects need to be limited and predictable.

The benefit of static scoping is that the conceptual search space for unqualified name lookups is smaller and more predictable, and thus easier to reason about - a programmer need only look at the set of imported symbols and their transitive dependencies, instead of having to search a universe of code.

Anyway, I'm interested to know if there's anything out there that's clearly better than the techniques that I've described.


Fourth option

Use globally unique names together with an IDE that supports flexible rendering and auto-completion. E.g. the symbol 'float:+' might be rendered as '+' by the IDE (perhaps in a distinct color for all 'float:' words), and might be found by '+' or 'f+' with auto-completion. In general, the rendering rules could be configurable and contextual.


If you're willing to consider IDE assisted solutions, you can more easily get nice properties such as new imports never silently changing behavior (you either keep your bindings or get an error due to new ambiguity - both options can be useful).

I'd also recommend keeping a distinction between typeful meta-programming (e.g. Serialize or PrettyPrint) and ad hoc overloading (unrelated constructs given the same unqualified name). Types might be involved in resolving either of those, but the difference is that in the former case, the programmer is binding to a meta-program that accepts types and other meta-information as arguments whereas in the latter case the programmer is binding to a specific definition site. In the former case, if parameter types change, the typeful dispatch can change silently to serialize or print the new type. In the latter case, if types change, you should receive an error, not have the compiler go hunting for another definition that happens to match the new types. Many languages combine these two, particularly when the typeful meta-programming is to remain open to new type cases, but I think it's a mistake.

Finally, I'd add that while dynamic dispatch has its place, I don't think symbol binding is it. The mapping from names to definition sites should be static, allowing the programmer to "go to definition" and see that they've selected the symbol they intended. Even if that definition is just an abstract symbol that will be linked in later.

Agreed that the mapping from

Agreed that the mapping from names to definition should be static, however that goes back to the original question - what definitions should be "in scope"? Is there a place for something like ADL where scopes are implicitly included merely by using types defined in those scopes, or is there some better rule we can use?

Not really an option for most people

I'm much more interested in something that fits in with the existing software development ecosystem, as I'm more motivated by practicality than research. I don't want everyone to have to replace their favorite text editor.

Many languages fit the

Many languages fit the existing software development ecosystem without using conventional text editors. LabView, Processing, Kismet, and Matlab come to mind.

Anyhow, if you're going to stick with conventional 'text + filesystem' you'll also be stuck with some implicit weaknesses of that environment (e.g. regarding boiler-plate namespace control and painful package version management).

Personally, I find it increasingly awkward. This is an age of Wikis, Blogs, Twitter, and Clouds. New PL designs ought to keep up with respect to how code is edited, compiled, invoked, debugged, documented, and distributed.


In fact, the IDE could even use type information as a hint that + actually means float:+ in this context, to make it more convenient for users to type. And if there is not enough information to resolve the ambiguity in the current context, instead of making an arbitrary choice (bad!) it could automatically add a dependency on "some notion of +" to the function/class/whatever being defined; automatic dependency injection, very convenient.

The user will be able to inspect the long name used after disambiguation anyway. But is there still something we could prove about this whole thing to make sure that the semantics of the IDE's elaboration are reasonable?

That's a concept

"some notion of +" to the function/class/whatever being defined

This sounds like C++ concepts. I don't like this idea very much, for the reason you get at in your second paragraph. I think it's going to be too hard to reason about the "some notion of" concept to be a good idea. You'll end up with weird bugs in that "automatically generic" code that only crop up in specific instantiations.

Possibly look at work on multimethods

I'm thinking specifically of two things:
- The paper Modular Statically Typed Multimethods
- The Dylan Reference Manual's chapter on "sealing"

Both of these deal with how to restrict the addition of methods to a multimethod (and classes to the hierarchy...) so that existing code is not affected (where "affected" might refer to type-correctness or even generated code).

From what I recall, the formal rules for Dylan mostly match the informal intuition you might have: if your separately compiled and checked module wants to add a method, then either one of the parameter types you are selecting on, or the generic function itself, needs to be defined by that same module.