Slots as reifications of OOP method names

I've sketched some ideas about slot-based interfaces as a way to turn method names into first-class citizens of the language, with an example of the resulting framework as applied to C++:

"Objects, slots, functors, duality "

The key idea is to treat objects as polymorphic functors and regard a method call like

x.foo(arg1,...,argn)

as an invocation of x where the first parameter (the slot) is an entity indicating the particular method called upon:

x(foo,arg1,...,argn)

If we now define a slot as a generalized functor accepting an object as its first parameter and with the following semantics:

foo(x,arg1,...,argn) --> x(foo,arg1,...,argn)

then objects and slots become dual notions, and many interesting patterns arise around method decoration, smart references, etc.
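The duality can be made concrete in a few lines. Here is a minimal Python sketch of the idea (all names here, `Slot`, `Obj`, `Point`, `foo`, are invented for illustration and are not taken from the C++ framework):

```python
class Slot:
    """A reified method name: foo(x, *args) forwards to x(foo, *args)."""
    def __init__(self, name):
        self.name = name
    def __call__(self, obj, *args):
        return obj(self, *args)

class Obj:
    """An object as a polymorphic functor: x(foo, *args) dispatches on foo.name."""
    def __call__(self, slot, *args):
        return getattr(self, "do_" + slot.name)(*args)

foo = Slot("foo")

class Point(Obj):
    def __init__(self, x):
        self.x = x
    def do_foo(self, dx):
        return self.x + dx

p = Point(10)
assert p(foo, 5) == 15   # x(foo, arg): the object as a functor
assert foo(p, 5) == 15   # foo(x, arg): the slot as its dual
```

Both call forms reduce to the same dispatch, which is the duality in question.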

I wonder if these ideas have been already explored in a more general setting than the toy framework described here.

multi methods

Why treat the 'first' argument as special? This is one of the annoying limitations of OO programming. Could the idea be extended to treat all arguments equally?

message passing

I'm a fan of multiple dispatch, but there is motivation for the first argument to be special when dealing with OOP.

In short, OOP is a highly constrained emulation of independent collaborating stateful machines. When that is appropriate, so is single-dispatch, since you're "sending a message" to a particular machine. You may even want to forward the message to another machine on your behalf. In that case, you can really think of a method call as having two arguments: a recipient and a message body.

Of course, you can encode this mechanism using multi-dispatch. Here, it seems like Joaquín is talking about encoding this mechanism via the equivalence of objects and closures. There are many examples of encoding objects as closures which take a quoted symbol as a method name, but one could easily strip the quote and define such a symbol to resolve to a reified "method" object. The closure (or object) and the method identifier would both implement a "Callable" interface.
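The closure encoding mentioned above can be sketched as follows. This is a hedged Python illustration: `Method`, `make_counter`, and `inc` are invented names, and a reified method object stands in for the quoted symbol, so that both the object (a closure) and the method identifier are callable.

```python
class Method:
    """A reified method identifier; both it and the object are callable."""
    def __init__(self, body):
        self.body = body
    def __call__(self, obj, *args):
        return obj(self, *args)       # method(obj, args) --> obj(method, args)

def make_counter():
    state = {"n": 0}
    def self_(method, *args):         # the object: a closure over its state
        return method.body(state, *args)
    return self_

def _inc(state, k=1):
    state["n"] += k
    return state["n"]

inc = Method(_inc)

c = make_counter()
assert c(inc, 2) == 2    # send the reified method object to the closure
assert inc(c) == 3       # or invoke the method on the object: same result
```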

Nested Visitors

I keep having to implement nested visitors for binary operators that can have arguments of different types. Implementing unification is a classic example, but any binary operation on abstract syntax trees has the same problem. I always dislike the idea that one of the objects is special: should a purchase order object be passed to an account, or vice versa?

Agree, but...

I agree: Proper multiple dispatch is much nicer than single dispatch. I even said I was a fan, but that wasn't my point: My point was that single dispatch makes sense when emulating sending messages to collaborating machines. I don't dislike "objects", but I dislike programming in a style *oriented* by objects. When you do want to couple state and behavior to emulate communicating processes, single dispatch makes sense. For most everything else, multiple-dispatch functions unassociated with classes or instances make much more sense.

The power of the dot

Well, code completion is quite biased toward single dispatch, and, as a UX problem, no one has really figured out what completion over multiple arguments would even look like.

dispatch & syntax are independent

You can have noun.verb(args...), or any other noun-before-verb syntax without requiring any particular dispatch mechanism. Look no further than Visual Studio's Intellisense support for F#'s pipeline operator (|>). The offered completion list is derived from a search for functions (not methods!) with a matching first parameter type.

Unrelated: I'm all about better tools, but I find pervasive code completion to be very distracting.

targeting computers vs. "the force"

My point was that no one has figured out how to do code completion on multiple parameter types yet. It is more a challenge of syntax than of tooling; i.e. how can we specify the parameters we wish to dispatch on before the function doing the dispatching? It doesn't really work to just put the parameters first, as that would mess up normal programming flow (see stack-oriented languages like Forth).

Also, F#'s solution is nice, but it only works well for OO-styled functions where the first parameter is special as a receiver. Of course, I'm a firm believer that OO style is pervasive even in most FP languages, but given a language like Haskell where that isn't really true, all bets are off. (And Haskell is a very strange instance of a sophisticated typed language with poor tooling.)

Unrelated: I'm all about better tools, but I find pervasive code completion to be very distracting.

I bet you probably don't like using GPS in your car either :) Many people buy into the "use the force, turn off your targeting computer" thing, I don't.

Clarifying "pervasive"

What I really mean is that I dislike how autocomplete pops up every time you type "." in an appropriate place. For similar reasons, I dislike the red-squiggly underlining in word processors. I also prefer muted color schemes.

I find anything the computer is doing while I'm typing to be distracting. It's like trying to talk to somebody, only to have them interrupt mid sentence and be like "I know what you were going to say!" They might be right, but they might also prevent me from ever saying what I was actually going to say.

As for things like squiggly underlines and gutter markings: I wish they were only available on demand. I don't care if I spelled anything right until I get the idea out. I don't care if my code will compile, until I have the general structure or algorithm jotted down.

When I've used Visual Studio (and IntelliJ) in the past for C# (and Java), I tend to also use Vim for most of my work, only switching over to the IDE for refactoring, code analysis, specialized searches, etc. I tend to use the IDE in a manner similar to a linter. Funny enough, I only started doing this because I was not only *using* Visual Studio, but working on Xbox integration for it while at Microsoft. Running two instances at once was a pain, so I started using Vim, and found that I felt more focused in that context.

If it were up to me, I'd live in my distraction-free editor and have my bells-and-whistles code analyzer tools be one keystroke away.

Also, I realize your GPS comment was a joke, but I want to point out the distinction: Driving and navigating are largely non-creative tasks. There's no deep thought process to interrupt, other than maybe some road hypnosis.

Autocomplete HCI

There are a number of ways to do autocompletion. I personally like using the tab key for this purpose: when I press tab, give me my autocomplete options.

This is really quite orthogonal to Sean's belief that noun-verb order is somehow better for autocompletion.

have vs want

One other comment about the noun-verb order:

Completion for verbs in noun.verb is based on what you "have" and offers you choices of what you can get from that.

However, completion for verbs in verb(noun) form could be based on return type. That is, you "want" something and are offered ways to get that thing.

The thing is that the former has a non-meta key to listen for, namely the ".", whereas the latter would have to listen for an explicit "return" keyword or something like that.

You can't really complete on

You can't really complete on an expected type given the way most of us think. For that to work, you have to think from order of "what you want" to "how you can get it," but people often think from "what they have" to "what they can get!"

Thinking order is where functional and OO styles really diverge, especially in syntax. The OO programmer works left to right, starting with something they have in order to get or do something else. So they have x and can get P from x via "x.P". The functional style instead focuses on P and how to get it with x, via "P(x)". Fascinating, actually.

Huh?

You can't really complete on an expected type given the way most of us think.

IntelliJ supports completion based on type-context:

Smart Type Code Completion

Seems like that works just fine... Frankly, that better matches the way I think...

For that to work, you have to think from order of "what you want" to "how you can get it," but people often think from "what they have" to "what they can get!"

The point I was making was that there are two different thought patterns with two different completion mechanisms. There's no reason that you couldn't have a common shortcut which presents the intersection of both methods. You should have multiple ways to think about a problem in order to figure out the best solution, but you shouldn't have two different ways to invoke completion.

Thinking order is where functional and OO styles really diverge, especially in syntax. The OO programmer works left to right, starting with something to get or do something else. So they have x and can get P from x via "x.P." The functional style instead focuses on P and how you get it with x via "P(x)." Fascinating actually.

Which is why this entire first-class method thing is interesting to begin with. You don't have to choose noun.verb() over verb(noun) because you can view "." as an application operator.

In Clojure, which is what I work in most of the time lately, keywords (self-evaluating symbols) are functions of associative structures, and associative structures are functions of their keys. This way you can (person :first-name) or (:first-name person). And the -> macros let you choose the most appropriate noun/verb order for your problem. So you can (-> db (get-person 5) :first-name) or (:first-name (get-person db 5)) or whatever combination works for you. The ->> macro lets you do the same with the *last* argument.

The more interesting distinction isn't noun then verb or verb then noun. It's what vs how. In a stack language, for example, instructions are executed linearly from left to right. Here's "how" to get a result in a sequence of steps. In a functional language (with applicative order) arguments are evaluated inside out. Here's "what" I want from these inputs. The arrow macros in Clojure (or the pipeline operator in F#, etc) let me choose, on a per-expression basis, if I want to emphasize evaluation order or results. The language and the tools should (and can!) support both styles seamlessly.
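For readers who don't know Clojure, here is a rough Python analogue of those idioms (an approximation under stated assumptions: dicts stand in for associative structures, `itemgetter` plays the role of keyword lookup, and a hand-rolled `pipe` helper mimics the -> macro):

```python
from operator import itemgetter

def pipe(value, *fns):
    """Thread value through fns left to right, like Clojure's -> macro."""
    for fn in fns:
        value = fn(value)
    return value

person = {"first-name": "Ada", "last-name": "Lovelace"}

# Noun-then-verb and verb-then-noun access to the same data:
assert person["first-name"] == "Ada"
first_name = itemgetter("first-name")
assert first_name(person) == "Ada"

# Threading lets you pick the order that reads best:
db = {5: person}
assert pipe(db, itemgetter(5), first_name) == "Ada"
```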

I don't think having two

I don't think having two ways to express the same thing is really the answer. Multi-identifier auto completion is a good answer, but it skips a lot of steps and might not always be viable for anything but static member access.

Also, does Clojure even support auto completion? It's not exactly a language with a friendly syntax to grok.

does Clojure even support

does Clojure even support auto completion?

As a language, it doesn't do anything special to encourage completion (such as manifest types), but it sure avoids a lot of things it might do to discourage it (e.g. dynamic metaprogramming).

As for the current state of Clojure tools: Completion of top-levels is trivial and common in Clojure tools. Completion of locals is less common, but readily available from the analyzer API. Completion based on type information or similar isn't available yet as far as I know, but the type hint system (or the more sophisticated Typed Clojure) could conceivably enable it with some effort.

However, if I were to attempt auto-completion for Clojure, I'd also invest in abstract interpretation. That's how JavaScript completion generally works (in combination with type inference), and Clojure is much more amenable to static analysis than JavaScript.

It's not exactly a language with a friendly syntax to grok.

Not going to take the Lisp syntax flame bait... It suffices to say that I quite enjoy Clojure's syntax.

Abstract interpretation

Abstract interpretation hasn't worked so well for JavaScript, which is why we are getting Dart and TypeScript, which are probably more analogous to Typed Clojure.

Not going to take the Lisp syntax flame bait... It suffices to say that I quite enjoy Clojure's syntax.

It is difficult to grok Clojure syntax without knowing Clojure, fair enough? Great if you are an insider, bad if you are an outsider.

Dart & TypeScript

TypeScript et al are motivated by much more than just editor support.

Not saying that TypeScript doesn't facilitate better editor functionality more easily (it does). Just that abstract interpretation has in fact worked just fine for JavaScript to the caliber expected of contemporary IDEs. In fact, IntelliJ's JavaScript support is richer than Visual Studio's TypeScript support, from what I can tell... For now.

Typescript and Dart are all

TypeScript and Dart are all about tooling and not really just about type checking... it's the emerging view that type systems are not just about detecting type errors anymore; if anything, code completion is more important, and soundness be damned.

We had abstract interpretation also; it just isn't good enough to satisfy most developers. You get into an intractable global analysis very quickly, and your type feedback still isn't that great.

I for one like the "clack"

I for one like the "clack" of my Model M; it doesn't bother me because I expect it. If I didn't hear it, I would be worried. For identifiers in progress, I expect them to be "not found," and I expect completions to be there. There is stuff on the screen that I totally expect; it's not annoying, it's already part of my rhythm in writing code. It does take some getting used to, just like driving with a GPS does, but done right you can eventually get used to it.

I find Visual Studio's 500 millisecond delay for feedback to be extremely annoying. I want my clack "now," not later. Late feedback is not with you, but behind you. If the computer only recognizes a misspelled word after 500 ms, that screws up my rhythm and is annoying; no feedback is better than late feedback! Visual Studio is an example of feedback done wrong, and most IDEs follow it on the basis of some misguided user research (don't give feedback right away; it will bother the user who isn't used to it!). Victor shows us that feedback can be magical if done right.

To quote Hancock: on-demand feedback is like shooting with a bow and arrow; continuous, timely feedback is like shooting with a water hose. Some people are more comfortable with bows and arrows, but you'll hit the target faster with a water hose.

i just came here to say

word.

Stack based languages with

Stack based languages with well defined types, such as Cat or Joy, should enable a lot of type-driven autocompletion based on the types known to be in the tacit environment, along with automatic visualization of the tacit environment. Sadly, this hasn't really been explored.

I'm not sure what you mean by "normal programming flow".

Stack based languages don't feel any more or less natural to me at this point, though they were unfamiliar to start. Stack based programming does emphasize bottom-up rather than top-down, but this seems natural in its own way.

The problem with stack

The problem with stack languages is that you have to keep track of the stack in your head; making arguments implicit in some stack state is quite difficult to deal with. Perhaps you could make that easier via the IDE, but I haven't seen any decent proposals yet, nor is anyone really seriously looking at stack based languages beyond Chuck Moore.

stack mentality

The problem with stack languages is that you have to keep track of the stack in your head;

Traditionally they say that if you have a real problem like that, your definitions are too complicated and you ought to break them up.

Tradition aside: is that problem more than an aesthetic one? How do we know?

You can leave things on the

You can leave things on the stack for multiple instructions, duplicate them, and so on. You have to understand multiple lines of code to understand one line of code, whereas in a normal language you at least have the names of local variables to help in reading.

in a normal language you

in a normal language you have the names of local variables to help in reading

In a concatenative language, you tend to refactor complicated functions into a few sub-functions. Then function names help you in reading. This works pretty well, though automatic visualization of the stack would also be nice.

Keeping track of the stack

Keeping track of the stack should be easy to automate and render to screen, not much different than the sort of continuous feedback you expect from any good IDE for a typeful language.

Re: "I haven't seen any decent proposals yet" - How hard have you looked for decent proposals? As is, simply printing the stack at each step in a REPL isn't bad, and we could do a lot better for continuous feedback at edit time just by running code (and printing stacks) against a set of examples or tests. This seems very similar to more conventional language adaptations of REPLs to live programming.

Re: "nor is anyone really seriously looking at stack based languages" - I am. And I know of others. Granted, it isn't a very popular area. But I seriously consider tacit concatenative languages the most promising basis for both streaming code and generative programming, and a good fit for a code-as-material metaphor and component based programming. My own language isn't quite stack-based, but it's close in nature.

I do look you know...and I

I do look, you know... and I haven't seen much, especially in the area of IDE support for stack-based languages (well, there is colorForth). There just isn't very much work being done on concatenative languages, much less published in a way that I can really soak in. If I could see some advantage to be had, I would be all over it, but I don't see it right now.

DCI

I always dislike the idea that one of the objects is special, should a purchase order object be passed to an account, or vice-versa.

False dichotomy. Another choice is to model a purchase as an interaction or process independent of the documents and accounts that participate. Look into DCI (Data, Context, and Interaction).

I used to model visitors a lot, but at some point I changed how I approached problems and the visitors just disappeared. I think it's when I started favoring object capability model patterns, which in general don't use class information and operate only at the interface level.

DCI

I guess in many languages like Python with no real data hiding, objects behave like plain old C structs, and with duck typing you can operate on objects of different types, so DCI makes some sense. But you will end up using conditional statements to specialise behaviour, which is slower than virtual method dispatch, so multi-methods offer a better performance guarantee (note 'guarantee': I am sure a sufficiently clever compiler could optimise the conditionals to a vtable dispatch if you get the incantation just right, but I prefer a hard guarantee).

If I want strong data hiding and strong typing (with parametric polymorphism), I need modules to replace objects for data hiding (otherwise every object gets getters/setters for every property), and I need multi-methods to replace duck typing.

In the case of the visitor, this is because I am writing many processes that descend abstract syntax trees whose nodes are different subclasses. I can't see how DCI helps here (although keeping each process's state in the visitor is quite DCI-like). This is particularly ugly when doing binary operations on ASTs (nested visitors). Multi-methods seem necessary for doing this neatly.

Well, the intention is not

Well, the intention is not to cover multimethods here, but to explore the patterns that arise around, as Brandon puts it very succinctly, treating both objects and method names as closures or callable entities. For instance, within this framework it is very simple to decorate an object x to, say, log each method invocation on x; and, given the duality between objects and method names, the same logging decorator can be used with a particular method name foo to register calls of the form y(foo,...).

That said, a potential approach to covering multimethods could be to treat sets of multidispatched objects as closures themselves. For instance, restricting ourselves to binary methods we can define a binary slot as a slot dealing with product type values and presenting the following semantics:

foo(x*y,arg1,...,argn) --> (x*y)(foo,arg1,...,argn)

where the default product of objects can dispatch to its first component:

(x*y)(foo,arg1,...,argn) --> x(foo,y,arg1,...,argn)

or something else that users can later override. Just thinking out loud here.
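In the same thinking-out-loud spirit, that default product dispatch could be sketched like this in Python (a speculative sketch; `Slot`, `Product`, `Var`, and `unify` are illustrative names, not part of the framework):

```python
class Slot:
    """Reified method name: foo(x, *args) forwards to x(foo, *args)."""
    def __init__(self, name):
        self.name = name
    def __call__(self, obj, *args):
        return obj(self, *args)

class Product:
    """x*y as a callable: (x*y)(foo, args) --> x(foo, y, args) by default."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __call__(self, slot, *args):
        return self.x(slot, self.y, *args)

unify = Slot("unify")

class Var:
    """Toy unification participant; dispatches on the slot name."""
    def __call__(self, slot, other, *args):
        if slot.name == "unify":
            return ("bind", self, other)
        raise AttributeError(slot.name)

v, w = Var(), Var()
# foo(x*y) --> (x*y)(foo) --> x(foo, y): the first component handles the call
assert unify(Product(v, w)) == ("bind", v, w)
```

A user could override `Product.__call__` to implement genuine double dispatch instead of the default first-component forwarding.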

Sure

Craig Chambers' Cecil and Diesel treat all arguments as equal. Personally, I don't think the added complexity of multiple dispatch is worth the headaches.

Heads up

Hi guys,

Interesting as the discussions on IDE autocompletion are, they're not really related to the OT, which deals with the reification of method names as callable entities... I'd appreciate it if I could get some feedback on that front :-) Thank you!

Signals and Slots

Is this in any way related to signals and slots in the Qt or GTK gui libraries?

Nothing to do with slots

Is this in any way related to signals and slots in the Qt or GTK gui libraries?

I don't think so. The name "slot" here means "reified method name"; it has nothing to do with slots in signal frameworks...

Let me elaborate a bit: in the statement

x.foo(1);

x and 1 are run-time entities of the program (an object variable and a constant), but foo is merely a source-code-level identifier for a particular method x has. If we replace the statement with the "slotified"

x(foo,1);

we're treating foo as a first-class citizen (i.e. an entity of the program); moreover, if we define it to be a functor with some duality properties we can write this in an equivalent manner:

foo(x,1); // --> x(foo,1) ~ x.foo(1)

which opens up possibilities around method decoration and others explained at the article referred to in my original post.

Passing Slots

So the purpose is not to be able to pass a 'slot' to another method so it can use it generically? Because it would seem to enable that kind of thing: if the functor 'foo' were first class, it could be passed as an argument and used as a signal from some process, with a list of objects that have that slot as the receivers.

So the purpose is not to be

So the purpose is not to be able to pass a 'slot' to another method so it can use it generically?

The framework explores ideas of decoration, among which we can also consider composition of objects or methods. For instance, a sort of (compile-time) signals setup can be built if we create sets of receivers {x1,...,xm} with the following semantics:

{x1,...,xm}(foo,arg1,...,argn) --> {foo(x1,arg1,...,argn),...,foo(xm,arg1,...,argn)}

which further reduces (from the properties of foo as a slot) to

{x1(foo,arg1,...,argn),...,xm(foo,arg1,...,argn)}

At the end of the day, this is merely mapping foo(_,arg1,...,argn) over {x1,...,xm}. But, as I was saying, this is just one application of this "method names as functors" approach, not the only one.
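That mapping can be sketched concretely in Python (a hedged illustration; `Receivers`, `Widget`, and `notify` are invented names standing in for the compile-time construction in the C++ framework):

```python
class Slot:
    """Reified method name: foo(x, *args) forwards to x(foo, *args)."""
    def __init__(self, name):
        self.name = name
    def __call__(self, obj, *args):
        return obj(self, *args)

class Receivers:
    """A set of receivers as a callable that maps the slot over its members."""
    def __init__(self, *objs):
        self.objs = objs
    def __call__(self, slot, *args):
        # {x1,...,xm}(foo, args) --> {x1(foo, args), ..., xm(foo, args)}
        return [obj(slot, *args) for obj in self.objs]

notify = Slot("notify")

class Widget:
    """Toy receiver that just records the call it was sent."""
    def __init__(self, name):
        self.name = name
    def __call__(self, slot, *args):
        return (self.name, slot.name) + args

group = Receivers(Widget("a"), Widget("b"))
# notify(group, 42) --> group(notify, 42) --> each widget gets the call
assert notify(group, 42) == [("a", "notify", 42), ("b", "notify", 42)]
```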