extending functions vs. extending objects

There's a talk by Prof. Black about the origins of OO and Simula, and it makes the claim that it is easier to come up with objects/classes that are successfully malleable than it is to do so with functions; see pages 26 and 27 of the slides.

Parameterized functions instead of inheritance? When you parameterize a function, you have to plan ahead and make parameters out of every part that could possibly change; functional programmers call this "abstraction". Two problems: 1. Life is uncertain; 2. Most people think better about the concrete than the abstract.
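To make the point concrete, here is a minimal OCaml sketch (the formatter and its parameters are invented for illustration, not from the talk): every part that might vary has to be anticipated as a parameter when the function is written.

```ocaml
(* Every varying part had to be anticipated up front as a parameter;
   a new axis of variation (say, per-item quoting) means changing the
   signature and every caller. *)
let format ~brackets ~separator items =
  let l, r = brackets in
  l ^ String.concat separator items ^ r

let () =
  print_endline (format ~brackets:("[", "]") ~separator:"; " ["1"; "2"; "3"])
```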

The value of inheritance: When you inherit from a class or an object, you still have to plan ahead and make methods out of every part that could possibly change; OO programmers call this "writing short methods". Two benefits: 1. You don't have to get it right; 2. The short methods are concretions, not abstractions.
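For contrast, a rough OCaml rendering of the "short methods" idea (same invented example): each short method is an override point, whether or not anyone planned for it to vary.

```ocaml
class formatter = object (self)
  (* Each short method becomes an override point after the fact. *)
  method brackets = ("[", "]")
  method separator = "; "
  method format items =
    let l, r = self#brackets in
    l ^ String.concat self#separator items ^ r
end

(* A later subclass overrides only the part that turned out to vary;
   [format] and its callers are untouched. *)
class curly = object
  inherit formatter
  method! brackets = ("{", "}")
end

let () = print_endline ((new curly)#format ["a"; "b"])  (* prints {a; b} *)
```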

Given the problems of OO (in all its various guises), I've of late been leaning way more towards the FP side, so this was slightly interesting to read. :-)


Audrey Tang

I kinda like Audrey Tang's rule of thumb:

- Things interacting in a small number of simple ways (anything handles this)
- Things interacting in a large number of simple ways (OO handles best)
- Things interacting in a small number of complex ways (FP handles best)
- Things interacting in a large number of complex ways: here what you need is a lot of small, mostly independent, loosely coupled programs. Generally a dynamic glue between the various pieces, with those pieces written as per the above.

In her original version it was VB for the interfaces, Haskell for engines and Perl for glue.

Manycore

His point (though I think a bit too general) is a good one: "Most computing models treat computation as the expensive resource; it was so when those models were developed! Today, computation is free: it happens 'in the cracks' between data fetch and data store. Data movement is expensive, both in time and energy."

His solution was pretty interesting: having controllers that allow for messages to be sent by descriptor or name; see the Dec 13th version of the Oslo talk, or the most recent version of the Oslo talk.

And then combining that with something like distributed version control... I'm not sure how you would handle the merge problem, but the idea is fascinating.

Quite bogus claim, but nice talk

This is a strange example of an idea that, out of context, is somewhat convincing, but loses most of its ground when placed back in the context of the talk.

Here is the speaker's argument:
- Indeed, objects with inheritance can be encoded with functions and explicit open recursion,
- but functions are less modular, because you have to plan ahead for parameter changes.

This strikes me as a bogus claim. With the usual encoding of objects as a tuple/record of functions, each taking a "recursive" self parameter to model open recursion and dynamic dispatch, *you don't have to plan ahead*. You just need to add a new function/method somewhere and call it through the self parameter in any place you wish. There is none of the plumbing he claims, because the plumbing is already done through `self`.
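To make this concrete, here is a small OCaml version of that encoding (the `counter` type and its field names are invented for the example). With a nominal record you do have to add the new field to the type definition, but no method body needs replumbing, since every call already goes through `self`:

```ocaml
(* Objects as records of functions, each taking the object itself
   ("self") to model open recursion and dynamic dispatch. *)
type counter = {
  value : counter -> int;
  incr  : counter -> counter;
  show  : counter -> string;  (* added after the fact *)
}

let rec make n = {
  value = (fun _ -> n);
  incr  = (fun self -> make (self.value self + 1));
  (* [show] calls [value] through [self], so it automatically sees
     any later "override" of [value]. *)
  show  = (fun self -> string_of_int (self.value self));
}

let () =
  let c = make 0 in
  let c = c.incr c in
  print_endline (c.show c);                    (* 1 *)
  let c' = { c with value = (fun _ -> 42) } in
  print_endline (c'.show c')                   (* 42 *)
```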

And this is perfectly natural, even obvious in retrospect: if you faithfully encode objects plus inheritance in the lambda-calculus, you get something that has just the same properties, with the same behavior wrt. code evolution, and also the same drawbacks wrt. difficult static reasoning due to the multitude of hard-to-specify extension points.

You could argue that the idiomatic functional style behaves somewhat differently wrt. code evolution than the idiomatic object-oriented style. That's certainly true (new data vs. new operations, etc.; that's what JeffB is talking about in his reply). But attacking the functional translation of the object-oriented style is bogus.

That said, I was rather pleased by the talk. There is a lot of historical research, and the general idea, that it's interesting to know what people have done and in which context, is a good one. The technical content is quite fuzzy and wide-ranging, with suggestions rather than solutions, but could be considered thought-provoking.

There is a lot of whining about "complex type systems" in this talk that I felt was somewhat misdirected (all the more so when you advertise a programming style that does have more-complex-than-necessary behavior, necessarily bringing more complex static reasoning to the table). But I do not wish to start yet another static-vs-dynamic debate in this thread.

There is a funny idea that "object-oriented abstraction can exist without types" (around slide numbered 52 in the presentation and 92 in my PDF reader). He reuses the ADT (abstract data type) vs. object distinction: one common but private implementation/representation for all elements of an ADT, vs. potentially different internals for objects of the same type/interface. His idea is that ADTs need a data representation (an algebraic data type, concrete data) to benefit from the common representation, and therefore require type hiding to be effectively encapsulated. On the contrary, objects don't/can't/shouldn't expose their internals to each other, so they can present themselves as basically closures, black boxes, codata, which aren't inspectable even in the presence of type information. That's the usual remark that you can hide private state inside a closure (and you don't need existential types for that).
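A tiny OCaml sketch of that last remark (names invented): the state lives only in the closure environment of the methods, so it is hidden without any existential type.

```ocaml
(* The ref [n] is captured by the methods; nothing in the object's
   type < tick : unit; value : int > reveals the representation. *)
let make_counter () =
  let n = ref 0 in
  object
    method tick = incr n
    method value = !n
  end

let () =
  let c = make_counter () in
  c#tick; c#tick;
  Printf.printf "%d\n" c#value  (* prints 2 *)
```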

Before the final "concurrency, distribution, fault tolerance" pitch, there are a few statements that I found troubling.

Core Ideas of SIMULA
[...]
The actions and interactions of the objects created by the program model the actions and interactions of the real-world objects that they are designed to simulate.

I know that I have succeeded as a teacher when students anthropomorphize their objects

Another bit that is both weird and nice:

Algol 60 and Simula:
- Dahl recognized that the runtime structures of Algol 60 already contained the mechanisms that were necessary for simulation.
- It is the runtime behavior of a simulation system that models the real world, not the program text.

[...]

The Value of Dynamism:
Smalltalk is a “Dynamic Language”
Many features of the language and the programming environment help the programmer to interact with objects, rather than with code

Proposed definition: a “Dynamic Programming Language” is one designed to facilitate the programmer learning from the run-time behavior of the program.

Most people should

Most people should anthropomorphize their objects. Metaphorical thinking is incredibly useful in helping programmers get a handle on complexity. It can lead you in the wrong direction, of course (metaphors are never perfect), but it's a net plus rather than a net negative for most people. Exceptions are those that can think symbolically and logically, like Vulcans.

Exceptions are those that

Exceptions are those that can think symbolically and logically, like Vulcans.

"...like intelligent humans" would be more accurate. Fiction like Star Trek often does a great job of stereotyping characteristics to turn them into easily discriminated-against packages, but it's particularly egregious in cases like this, where one of humanity's distinctive strengths is turned into a sort of slur.

It wasn't meant as a slur.

It wasn't meant as a slur. There are people out there (for me, mostly in the FP crowd) who can think so well mathematically and symbolically that I can barely understand them. I like to think of Vulcans as a metaphor for those people; they seem like us, but they can think in incredibly different ways. Perhaps they would find that kind of thinking offensive, since it is so metaphorical...

I think it is a fallacy to assume that (a) most programmers are like this, or that (b) most programmers could be like this if only they studied/worked harder or saw "the light." I find that offensive; it is basically tantamount to cultural subjugation.

Examples?

I disagree with this. The useful activity is model building. Programming is building a model and understanding how the problem domain maps to it. A good model is simple, and a large catalog of elegant models is one of the most important tools in a programmer's toolbox.

So, to what extent are these models motivated by the real world? It seems to me most common that the real-world analogy is found and used to explain the idea after the simple model is understood. For example, take "trees." I really doubt that one of our ancestors looked at a tree and thought "that reminds me of relationships in our family" -- the real-world concept of a tree has too many irrelevant details. Rather, once the inheritance relation was drawn as a graph, someone thought to describe it as a tree. And while the tree analogy is useful for communication, as a tool for better understanding the model it probably doesn't ... bear much fruit.

So I'm curious, do you have any good examples of the analogy driving discovery? For me, looking back, it seems to usually be the models.

How are our models not

How are our models not steeped in metaphors? It doesn't matter whether the model is related to the real world or not; we often apply metaphors to abstract concepts simply so that we can store and access them somewhere in our minds. A tree in programming only loosely resembles a natural tree, but that is irrelevant: we are just storing the abstraction somewhere in our mind where we can conveniently access the knowledge when we need it. For the same reason, we use our own spatial memory to reason about the state of a program, so using maps and dictionaries as metaphors is very useful to us, even if the data structures only vaguely resemble their real-life counterparts.

A pure model involves abstract symbols that mean little to us outside of the model (hence, they are abstract). This is quite useful when you want to analyze a system without bias, but during design it's quite difficult for most of us to work with, as we need metaphor and analogy to communicate it to others and evolve it forward. I'm not quite sure why this would be controversial here.

When you write a program, do you think of it as a bunch of instructions evaluated by the computer, or as a living system with actors that do things? Of course, we have to translate it down to the computer, but I bet most of us do not start there.

Ok

I apparently read too much into your use of "metaphor". I'm certainly not suggesting that programming is about algebraic manipulation or should be done in formal systems.