Bob Harper of CMU is blogging about programming languages and introductory CS education

Via Mathematics and Computation (HT: Andrej Bauer):
Bob Harper of CMU has recently started a blog, called Existential Type, about programming languages. He is a leading expert in programming languages. I remember being deeply inspired the first time I heard him talk: I was an incoming graduate student at CMU and he presented what the programming languages people at CMU did. His posts are fun to read, unreserved, and very educational. Highly recommended!

Thanks! His musings on the

Thanks! His musings on the Boolean type really made me think hard about basic things for once in quite a while. A great read.

- S.

the comments were better :-)

One of the problems mentioned in the blog post, that of provenance, is valid for any 'primitive' or 'common as dirt' type, so the argument has been made about String too, even by regular non-ivory-tower folks :-) And A. Rossberg's last comment, I think, nicely tries to get at the crux of the nut there.

at least, any language that doesn't support something like Haskell's newtype is kind of a train wreck looking to keep on happening. :-(
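
For instance, here is a minimal Haskell sketch (the names are made up for illustration, not from the post) of the provenance a newtype preserves and a bare String or Bool throws away:

newtype UserId = UserId String          -- not interchangeable with other Strings
newtype Validated a = Validated a       -- evidence that a value passed some check

-- callers must supply evidence of validation rather than a bare Bool flag
greet :: Validated UserId -> String
greet (Validated (UserId name)) = "hello, " ++ name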

Not new..

I remember a C coding guideline from a long time ago which said: don't use boolean, use enum. If later you realize that things are not always 'true' or 'false' and that you need a third value, it requires less refactoring(*).

*: adding values to enums still sucks in C due to its incredibly poor 'switch' statement, but that's a different issue..

Different reason?

I may be mistaken, but the C coding guideline's reason for avoiding Booleans is different from Bob Harper's. He's saying that reducing something to a boolean often throws away information that you actually need later.

Here's an example of how Java's "instanceof" throws away useful information:

if (o instanceof Box) {
   Box b = (Box) o;
   ...
}

The "instanceof" expression reduces to a boolean, which means you still need to perform an unsafe cast in the body of the "if". C# does a bit better:

Box b = o as Box;
if (b != null) {
   ...
}

Ignoring for a moment the problem with "null" in general, C#'s "as" operator preserves the information you need.
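
A language with pattern matching goes a step further; here is a minimal sketch (with a made-up Shape type, purely illustrative) where the match both tests and binds, so no cast or null check is left over:

data Shape = Box Double Double | Circle Double

describe :: Shape -> String
describe s = case s of
  Box w h  -> "box of area " ++ show (w * h)      -- w and h are in scope; no cast needed
  Circle r -> "circle of radius " ++ show r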

Blog URL

For those who want to look at the blog itself, look here.

(Fixed. Sorry)

Corrected link: here

OO is anti-modular? Really?

I wonder what Bertrand Meyer would say about this notion...
C

It's a religious thing, I've

It's a religious thing; I've also heard that OO is responsible for global warming.

There are plenty of problems that decompose better using OO than FP, and vice versa. I usually stop reading when someone asserts that OO is somehow a form of imperative programming; the two are rather orthogonal.

Seems so...

His exposition on dynamic versus static is better. At least there he employs humor and sarcasm. Now there's a tired battle... I get why you'd want to be fundamentalist when teaching freshman FP, but I'd say it's going a bit too far to declare OO the anti-matter of the modular programming universe... Of course, there's nothing like a bit of controversy to attract readership. Funny how this is making the rounds on the web as "end of OO?". I can only guess that Bob's intention is not quite so dramatic (and wrong)...

C

OO as Imperative

I think it is reasonable to say that OO is a form of imperative programming. I believe we should use names other than "OO" to describe models that aren't imperative. Historically, OO most often refers to a model of synchronous message passing. It would be quite convenient if we all agreed this was a defining characteristic of 'OO' that distinguishes it from FBP, Actors, multi-agent programming, reactive dataflow, et cetera.

I have designed plenty of

I have designed plenty of declarative languages that are based on objects and inheritance; what other name do you suggest I use to describe my work? There are many out there who think inheritance and/or subtyping (nominal or structural) != OO, but I believe it at least implies OO, if not equals it.

Creative enough

It isn't difficult to find something other than 'OO' to describe a new system. Just take two or three major elements of your paradigm, cut the cruft, then stick them together (e.g. Constraint Imperative, Concurrent Constraint, Communicating Sequential Process, Declarative Object, Reactive Object). Presto. Now you have a handle by which people can grasp, explain, tweet, and google your programming model.

Just take a few minutes to find something that hasn't been used before, preferably with an acronym that hasn't been used before (to avoid conflicts like 'Functional Reactive Programming' vs. 'Functional Relational Programming', or 'Concept Oriented Programming' vs. 'Context Oriented Programming').

You are hardly suffering from a dearth of creativity.

Let a hundred schools of

Let a hundred schools of programming flourish?

These labels put emphasis on the school and not the language. I would prefer to treat each language separately and enumerate its attributes (e.g., OO, event, imperative). I'm not trying to create a new paradigm; I'm just mashing paradigms up together, where the paradigms happen to be extracted from a bunch of other languages. Even the word paradigm is wrong; we should talk in terms of language aspects or features.

OO is a feature of a language that supports inheritance in any form including implementation, type, prototype. Prototype is a feature of an OO language whose inheritance occurs by cloning objects. Declarative is a vague feature that hides control-flow details; logic is a more specific declarative feature where control flow is hidden behind a proof engine; the functional feature hides control flow via lambda reduction. Imperative exposes side effects and makes order relevant. And so on...

Schools of Programming

These labels put emphasis on the school and not the language.

You should label your language, too. ;)

I would prefer to treat each language separately and enumerate its attributes (e.g., OO, event, imperative). [...] OO is a feature of a language that supports inheritance

That is a reasonable position, but you would be clearer to use 'inheritance' rather than 'OO', since that is what you mean. (I believe most people understand 'OO' to mean a whole package of features. The problem is: there is no consensus on which package.)

There are programming models with inheritance that most would not call OO. Typeclasses can inherit other typeclasses. I've read about a rules-based programming system that involved prototyping (clone and tweak) of prior 'rulebooks'. Frame oriented programming probably should be included.

Declarative is a vague feature that hides control flow details

I found an operational definition for 'declarative' that I've been very satisfied with: a language is 'declarative' to the extent its expressions are commutative and idempotent.

The 'hiding' of control flow is implied: if control flow is obvious in the syntax, then expressions are not commutative.
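
A tiny illustration of the commutativity half of that test (my own example, in Haskell): the two top-level equations below can be written in either order with no change in meaning, so that fragment counts as declarative, whereas swapping the two statements in main changes the program:

radiusSquared :: Double -> Double
radiusSquared r = r * r

area :: Double -> Double               -- these two definitions commute
area r = pi * radiusSquared r

main :: IO ()                          -- these two statements do not
main = do
  putStrLn "first"
  putStrLn "second"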

the word paradigm is wrong, we should talk in terms of language aspects or features

Well, 'it depends'. Certainly, we could talk of a language in terms of its lower level features, just as we could talk of a vehicle in terms of chassis, windshield, engine, and seating. But paradigms are rather convenient ways to describe languages, just as 'car' or 'truck' or 'airplane' are convenient ways to describe vehicles. I rather favor Peter Van Roy's view on 'concept' and 'paradigm'.

Object-oriented is much more

Object-oriented is much more precise in terms of the metaphors being used than inheritance, which is simply a mechanism. People are incredibly sensitive to the number of metaphors they can learn, and creating a new one to describe every variation on objects would be disastrous. We already have too many language labels that seem like buzz/hype/jargon. Using object-oriented to describe a language that is oriented around objects is not a stretch by any means, while if you introduce a term like frame you can only communicate with a few people.

Metaphors aren't precise

If you're arguing that 'inheritance' is even more hand-wavy than 'OO', then you really should be looking for some new terms.

Communication requires that the other person understands what you're saying. Using "object oriented" tells your audience to assume Java or C# or whatever 'OO' language they're familiar with, and tweak the concept from there (based on the other things you've said). As a local example, you can glance below to see what neelk assumes to be part of the 'OO' package, and though you or I may disagree, the point remains that we already have assumptions about 'OO'.

If what you mean by 'object' is significantly different from what other people assume, you're just sowing confusion. Using 'OO' to avoid confusion is almost certainly a mistake, no matter how good your intentions. Worse: if they disagree that your objects are what they understand to be 'objects', you may even alienate your audience. For example, I'm quite willing to debate your notion that inheritance implies OO, though I am resisting. Too many discussions devolve into debates over definitions.

If you introduce a term like 'frame', you'll have the advantage that there are fewer initial assumptions and connotations about what it means. People will be confused, but in a good, honest way: they're confused because they aren't assuming they know what you're talking about well enough to skip clarification.

So terminology is incredibly

So terminology is incredibly audience sensitive; I can agree with that. But I disagree that this is about precision; rather, it's about cultural nuance.

If I'm talking to a roomful of object people (say at OOPSLA or perhaps ECOOP), then they will believe what I'm talking about relates to objects, even if it doesn't look like the objects they are used to themselves. Actually, they are used to objects that don't look like their objects; there is no resistance there. If I'm talking to an ICFP or POPL audience, I have to be more careful because, not being object people, they have rigid preconceived notions of what objects are.

A roomful of object people

I think you could get away with it there because they've already accepted 'object' to be pretty much utterly devoid of meaning, what with every paper having a different definition, so they focus on your other statements.

Stretching definitions mostly serves to weaken them. The term 'definite' means 'bounded with precision', and that's what a definition should be. Could you define 'object'? Or is it just a hand-wavy notion that means whatever you want it to mean?

Object people have a strong

Object people have a strong sense of what objects are; there are criteria for calling something an object that have to do with object thinking. You can't call a lambda an object, even if it manifests in C# as an object instance. On the other hand, you could call anything that allows you to divide the world into a set of inter-related nouns an object.

Can you define 'object'?

Can you define 'object'?

Just to let you know: I could easily divide the world into inter-related nouns with logic programming or the relational model.

I don't believe logic and

I don't believe logic and relations are far off from objects. To "be" something is definitely a predicate, while a table describes an object and its relationships. However, you don't normally program with objects in a logic programming language even though you could, while the relational model has long been known to be closely related to objects.

This debate about what an object is, is very old. Objects are really just a part of object thinking, which itself means thinking about your problems being solved by a bunch of cooperating, related objects. The objects then have identities and existences; we could think of them as little people running around to get things done. This is obviously very different from function thinking.

Pop culture shouldn't decide vocabulary

1) Milkshakes don't bring boys to your yard.

2) You don't have lovely lady humps.

3) whatever...I am being silly.

My point is that the name people most often use to refer to something is not nearly as important as internalizing a concept and then attaching a symbol to that concept.

Not Pop Culture

What is important depends on your goal: to understand a concept, or to communicate it. In the latter case, it is important to understand whether a symbol already has meaning to most people, and it is important to use that symbol for the broadly accepted meaning.

Anyhow, I don't believe this is a case of 'pop culture'. Go back and look at the main language designs by which people understand 'OO': Smalltalk, Simula, CLOS, Objective-C, C++, C#, Java. Even OOHaskell is heavily imperative (using IORefs as the basis of identity). When I crack open my copy of 'A Theory of Objects' by Martin Abadi and Luca Cardelli, guess what I find? Imperative extensions to the lambda-calculus. When Benjamin Pierce extends the lambda-calculus to support OO in TAPL, I'm sure you can guess what he adds... 'Imperative' is very widely accepted to be part of the concept of OO, and that property is what is embedded in our educational materials on the subject, in history, in research. This is not 'pop culture'.

A minor nit

The first object-oriented language, Simula, didn't have 'message passing', and it already captured the essential aspects of O-O -- later languages didn't change things all that much. IIRC, Smalltalk introduced the notion of synchronous message passing (and while you can call procedure calls 'sync. msg passing', it seems like overkill, IMHO).

But I agree with your main point.

Synchronous message passing

Procedure calls aren't synchronous message-passing... unless you have first-class procedures. The main distinction is that, with message passing, the recipient is not statically coupled to the sender.

Defining OO: Imperative is not characteristic

I think defining OO in such a way that imperative programming is characteristic is a mistake. True, much OOP is done in the imperative model, but there are many research languages developed under the OO banner that emphasize good support for functional programming (Cecil, my advisor's language, is one; so is Scala). Prominent OO designers like Josh Bloch recommend developers avoid state whenever possible, and many approaches to OO and concurrency emphasize purely functional objects.

So what IS characteristic of OO? As discussed in a prior thread, William Cook argues the defining characteristic of OO is procedural data abstraction (i.e. dynamic dispatch): each object (conceptually) carries with it the procedures necessary to use it; different objects with the same interface can be implemented differently. Of course Cook's essay was just summarizing and repackaging the best understanding of an often misunderstood topic; his definition matches the main formal models of OO (and incidentally, most of those formal models are functional).

There are many other features of OO that often come along for the ride, such as mutable state and inheritance. But building dynamic dispatch into the language (as opposed to coding it up with a bunch of lambdas, which of course is possible and is done where necessary in functional languages) is the one thing that separates languages commonly considered "OO" from those that are not.

Arguably, dynamic dispatch is also what distinguishes OO in terms of engineering. For example, if you look at object-oriented design patterns, nearly all of them involve interfaces and dispatch; relatively few require inheritance or state.

Imperative is Characteristic

Even when favoring use of 'pure' objects, you're ultimately relying upon the side-effects from method-calls, which also introduces the issue of time (before the effect, after the effect). That is, objects will still represent/proxy elements in the 'environment' even if your application contains nothing except pure objects.

Every model I've ever read for OO (including Pierce's, Cardelli's, OOHaskell, etc.) introduces state and side-effects. I am curious what your experience is that you call these models 'functional'.

Dynamic dispatch doesn't really distinguish objects. You can get that with pure, first-class functions. What distinguishes objects is behavior hiding and side-effects. Most design-patterns are for managing this hidden behavior... or executing it.

purity and dispatch

If objects are used to model I/O, then yes, you will rely on side-effects from method calls. That does not mean that objects are fundamentally imperative, any more than functions are imperative just because they are sometimes used to model I/O (or other stateful abstractions). The point is: many object-oriented programs have a substantial number of objects that are pure; objects can be used either to model state, or not--just like procedures or functions. Therefore I argue state is not characteristic.

Regarding OO models, the simplest ones are all purely functional. That includes Pierce (FJ and the other simple models in TAPL), the simpler Abadi/Cardelli models, and Fisher/Mitchell's models. Most of these models were extended with state later. But my opinion is that the simpler models do capture the essence of objects--the state extensions are mostly uninteresting once you understand the object model--and I guess that the reason they were presented first is that their developers also thought this characteristic of objects was the most important to model.

More broadly, Sean is not the only OO researcher who has studied or developed purely functional object-oriented languages; there are many of us. While there might still be some debate about the exact definition of OO in this group, I can promise you many of them would strongly object to limiting objects to imperative models.

Of course first-class functions can be used to implement dynamic dispatch, but it is still possible to characterize objects that way. An object is what you get when you package together a group of functions to abstract a piece of data or a function. This definition admittedly includes an element of intent (the "to abstract..." part) but that intent is usually easy to determine from context. From a programming language point of view, just look for a convenient syntax for a record of functions and you've basically identified OO languages.
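
To make that concrete, here is a minimal sketch (my own names, not Cook's code) of "a record of functions": two set objects with different representations behind one procedural interface:

data IntSet = IntSet
  { contains :: Int -> Bool
  , insert   :: Int -> IntSet
  }

emptyList :: IntSet                    -- backed by a list
emptyList = fromList []
  where
    fromList xs = IntSet
      { contains = (`elem` xs)
      , insert   = \n -> fromList (n : xs)
      }

emptyFun :: IntSet                     -- backed by a characteristic function
emptyFun = fromPred (const False)
  where
    fromPred p = IntSet
      { contains = p
      , insert   = \n -> fromPred (\i -> i == n || p i)
      }

-- both are used through the same interface:
--   contains (insert emptyList 3) 3  == True
--   contains (insert emptyFun  3) 3  == True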

Regarding design patterns, look at the list from GoF. Most have nothing to do with state, including factories, adapter, bridge, composite, decorator, facade, chain of responsibility, strategy, template method, and visitor. All these are useful with functional objects. Flyweight (hash-consing) is in fact fundamentally about functional programming!

Some patterns are clearly stateful: observer, state, memento, interpreter, and probably mediator. But the patterns that depend on state or are only useful in a stateful system are clearly a minority.

I'm not going to argue that most OO programs aren't stateful; clearly they are. But that doesn't mean OO has to be imperative, definitionally.

If objects are used to model

If objects are used to model I/O, then yes, you will rely on side-effects from method calls. That does not mean that objects are fundamentally imperative, any more than functions are imperative just because they are sometimes used to model I/O (or other stateful abstractions).

There is a huge difference between explicitly 'modeling' side-effects (as with a monad) and implicitly executing them as part of evaluation, just as there is a huge difference between thinking about a war and causing one. Functions are imperative (and better called 'procedures') if they are used to implicitly execute IO as in ML, Lisp, etc.

The point is: many object-oriented programs have a substantial number of objects that are pure

My point is: it only takes ONE that isn't, no matter how many layers of 'pure' indirection. And, in fact, most object-oriented programs and patterns (including those you named) rely on the fact that there are some objects that are not pure. It doesn't matter whether they internalize state, only that they interact with state in an imperative manner. Even observing a sensor or manipulating an actuator can qualify as 'stateful'.

A language is imperative if its evaluation steps through time - i.e. if there is a logical 'before' and 'after' each step, and order matters. There are non-imperative approaches to expressing side-effects, but they aren't embraced in any OO language I know about. And, rather than polluting the concept space, I would rather keep it that way - anyone attempting non-imperative objects should use a name other than 'OO'.

I can promise you many of them would strongly object to limiting objects to imperative models.

OO languages are de facto imperative. A few whining academics won't make a difference.

Why do they try? Are they trying to ride old buzzwords for funding? To publish at OO conferences? Seriously, the many little agendas to use 'OO' for EVERYTHING stretch its definition to the point of uselessness. Nobody agrees on the definition because there isn't one - no clear limits on the concept. I could play frisbee with a dog and call it OO. I think we should say 'enough!' and draw some lines so we can make sane distinctions, even if that's going to piss a few people off.

a strong definition of imperative

Ah, you are using the term imperative in the strong sense (having any imperative features) rather than in the sense of "emphasizing imperative programming more than functional programming." Sorry, I misunderstood.

I would still argue this is orthogonal to the important aspects of OO, though you are right that at least all industrially relevant OO languages are imperative in this sense. I don't want to toot my own horn too aggressively, but I'm currently developing a language, Plaid, in which all effects are explicitly described in permissions (somewhat similar to Haskell's monads, but permitting a few more convenient forms of composition) and thus there is no "before" or "after" other than what is defined by permission flow. Plaid is object-oriented in every sense that I think is meaningful, but the way it handles state would, I think, lead you to call it non-imperative. Do you really think that is disqualifying? To me object-oriented is still a reasonable term for a language that has objects, methods, an inheritance-like composition mechanism, and state that happens to be handled in a way somewhat similar to monads.

And yes, there is a definition of OO: Cook's. Based on conversations at OO conferences, I think most OO researchers (the experts in the relevant field) more or less agree with it; it also seems to categorize most languages (including Plaid) in the intuitive way, and it matches the emphasis of OO formal models. To me, therefore, that definition makes a lot of sense, and there is no need to bring imperative programming into it.

But at this point we should perhaps just agree to disagree. It looks like some aspects of this conversation have come up before, with Tim Sweeney suggesting a definition similar to Cook's.

Put four 'OO' language

Put four 'OO' language designers in a room and they'll give you four different, vehement arguments about what it means to be 'OO'. I just want to have one definition so that I know what people mean when they say 'OO'. I don't really care that it be imperative, but that is one of the only properties that is common across every 'OO' model I've seen. (Even your 'dispatch' doesn't hold up to scrutiny of multi-methods, predicate or rules-based dispatch, and other esoteric forms I've seen.)

I've not read much of Cook's work, but in his 2009 essay the very first example of objects is imperative set operations:

a = Insert(Empty, 1)
b = Insert(a, 3)
print a(3) -- results in true

Oh, and here's his definition for 'object' in OO (section 3.10):

An object is a value exporting a procedural interface to data or behavior. Objects use procedural abstraction for information hiding, not type abstraction.

If we want to go with Cook's definition, then fine, but resist cherry picking the bits you like best so you can shoehorn your language into 'OO' as though the label had intrinsic value. If everyone did that, why, we'd be right where we are today.

I'm not cherry picking from

I'm not cherry picking from Cook's paper; the example is purely functional if you read the surrounding context, which includes the definition of Insert (copied below) and states "Inserting n into a set s creates a function that tests for equality with n or membership in the functional set s." It looks like instead you found a typo; I guess either print a(3) should be false, or print b(3) was meant.

Insert(s, n) = \i. (i = n or s(i))

Some may prefer slightly different definitions, but I think something like Cook's definition is the closest you'll get to a consensus.

Typo

After reviewing the paper again, I grant it's likely a typo.

I think something like Cook's definition is the closest you'll get to a consensus.

Maybe, though the Nygaard classification and Kay's definitions for OO also have a lot of followers, and Cook's definition would receive a lot of objections from the 'OO can support multi-methods' or 'OO means inheritance' camps.

Variety and the state of the art

One aspect I find missing in most discussions of OO is the wide variety of programming styles that are possible in most of the current object-oriented programming languages. As a quick perusal of the daily WTF can demonstrate, there is a lot of bad code out there. And given its status as the dominant paradigm, there is a lot of bad OO code out there. But another consequence of keeping this dominant status for so long is that we (as an industry) have learned some lessons about what works and what doesn't. Minimize mutability, avoid class inheritance, avoid ambient authority (usually stated as "apply dependency injection"), stratify your dependencies (the "Law of Demeter"), specify behaviour via unit tests, etc., are common slogans among thinking object-oriented programmers.

I believe it would do good for programming language researchers to spend some time investigating the state of the art in modern OO development. I can do no better than recommend Nat Pryce and Steve Freeman's book, Growing Object-Oriented Software, Guided by Tests, as a thorough demonstration of the best in modern OO programming.

That is not to say, at all, that object-orientation is superior to other models of computer programming. I personally agree with Harper that functional programming should be the first style students encounter.

Great post!

You also left off Persistence Ignorance, and real-time structured design/object-oriented engineering patterns like Processor-Actuator, Watchdog, and so on that provide heuristics that substitute for first-class control theory in languages.

I personally agree with Harper that functional programming should be the first style students encounter.

The emphasis has to be on how new thoughts are learned. If the idea isn't embedded deeply enough, then students will misunderstand it and you will get a cargo cult culture. We shouldn't really focus on what's first so much as on sequencing and identifying "threshold concepts". Debating what CS1 should look like ad nauseam is the domain of empire builders and foot draggers, and we shouldn't have the patience for listening to either group drone on.

Yes, really

The trouble with OO is that it has a set of features that conspire to defeat nearly any kind of equational reasoning: OO features open recursion, the pervasive use of higher-order values, and reflection.

1) Open recursion (aka inheritance) means that you cannot define a type as a closed collection of values, which means that you cannot reason inductively about programs. (The validity of induction as a principle of reasoning relies on a closed-world assumption.) A small sketch of open recursion appears at the end of this comment.

2) When you can't do induction, the usual move is try reasoning coinductively. However, in practice you can't do this for OO programs. The reason is that objects are intrinsically higher-order, since the signature of an object is the signature of its methods, and methods take objects as arguments. As a result, an object's signature will often feature contravariant occurrences of the same object type. Consequently, predicates lifted over this signature will not be monotone, and so you can't easily justify coinduction without some semantic fanciness.

3) Now, contravariant occurrences are quite useful, and the best technique we have for reasoning about them is parametric reasoning about data abstraction (i.e., existential types). However, OO languages universally offer easy access to reflective features (like instanceof) which means that the parametricity properties valid for these languages are very weak -- you don't get many useful free theorems.

As a result, you can't really reason about OO programs without a program logic (like JML or Spec#).
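
For concreteness, here is a minimal sketch (a made-up Counter example, not from Harper's post) of point 1: open recursion encoded with an explicit self parameter, where the inherited tickTwice ends up calling the overridden tick:

data Counter = Counter { tick :: Int -> Int, tickTwice :: Int -> Int }

-- a "class": methods are written against self, so tickTwice uses whatever
-- tick the final object provides, not necessarily the one defined here
baseCounter :: Counter -> Counter
baseCounter self = Counter
  { tick      = (+ 1)
  , tickTwice = tick self . tick self
  }

-- a "subclass" overriding tick; tickTwice silently changes behaviour too
doubling :: (Counter -> Counter) -> Counter -> Counter
doubling super self = (super self) { tick = (+ 2) }

-- late binding: an object is the fixpoint of its class
new :: (Counter -> Counter) -> Counter
new cls = let self = cls self in self

-- tickTwice (new baseCounter) 0             == 2
-- tickTwice (new (doubling baseCounter)) 0  == 4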

What relationship are you

What relationship are you assuming between 'modularity' and 'equational reasoning'?

You have thought about OO in

You have thought about OO in pure mathematical terms, while OO is primarily useful to programmers as a metaphor factory. That I can base my constructions off of "objects" is very useful in helping me organize my program. The implications of human thinking for abstract mathematical thinking are significant, of course, and perhaps programmers should adopt more abstract mathematical thinking and less human thinking; this is what much of the FP community advocates. However, this is a huge mountain to climb; turns out most humans just prefer human thinking.

The modularity of objects is very strong when human thinking is considered; the object is the ultimate conceptual module.

Amen

Program proving consists of translating a program written in a fairly perspicuous notation designed for it into another notation that isn't perspicuous and isn't designed for it, and claiming that that proves something.

"Almost all theorems are true, but almost all proofs have bugs." —Paul Pedersen

OO is primarily useful to

OO is primarily useful to programmers as a metaphor factory

OOP describes the real world in the same way that the metaphor “Sean McDirmid is a diamond in the rough” describes your ability to operate as a piece of jewelry.

That in itself is a huge

That in itself is a huge debate. OOP doesn't actually have to describe the real world; it just has to describe programs as their own little worlds.

Object the Ultimate

the object is the ultimate conceptual module

I suspect humans may do better with agent-based designs, where the objects can act independently and patterns are social in nature. This still gives us modularity ('modularity' to me means the ability to break an application down into independent development units).

I do not believe that objects-in-the-small (at the language level) have proven themselves successful as a basis for modularity... at least, not when mixed with static typing and downcasting and so on. What about the composite and visitor patterns is 'modular'? We're often forced to use architectures to decouple further than the languages readily support - i.e. for dependency injection, configuration, abstract factory, extensions or plugins, persistence, asynchronous behavior.

In-the-large, we have services, protocols, URIs, REST, databases, processes, CGI. And it seems to me that this is the only level at which popular construction techniques have achieved modularity. Even here, however, we require a lot more decoupling - i.e. service registries, brokers, discovery protocols.

Actors, agents,

Actors, agents, robots... they are all objects, right?

I completely agree with you that the technical details of specific OOP languages have lots of flaws. But it is not the concept of OOP that is flawed in itself. To say objects are anti-modular is like saying lambdas are esoteric b/c of some bad experience with Haskell.

At any rate, I think you'll like my next language even if you don't think it is object oriented :)

Programming in a multi-object system

I like a lot of your work, but wouldn't call much of it 'object oriented'. ;)

To explain 'agent' to someone who doesn't already know the concept, but who does know Java, you usually need to say something like: "it's an object with its own logical thread and event loop, that makes its own observations, and whose effective 'type' can only be described in terms of the social protocols in which it participates". Someone once coined the phrase Language plus plus minus minus to describe how we can get from design to design with a few tweaks, yet admit to significant differences in system properties. Agents are objects plus plus minus minus. Just because you can attribute a property (such as modularity or autonomy or reactivity) to 'agents' does not make it reasonable to attribute that property to 'objects', nor vice versa.

And what's the relationship between 'OOP' and 'object' anyway? Perhaps we could take 'object oriented' to mean pretty much what Alan Kay laments - that we focus our attention and code upon the 'object' rather than the model of time, space, and information flow between them. There are many models with identity not traditionally understood to be 'OO' - such as CSP, pi calculus, Actors, FBP. In common, these models focus far more upon the communication.

I use agents in RDP, but I refuse to call RDP agent oriented because the focus of code in RDP is entirely upon describing relationships between agents. I.e. a new agent can join the program and marry two previously unrelated agents, without any further round-trip communication beyond establishing the relationship. By comparison, 'agent-oriented' connotes we focus code on the individual agent, e.g. with a BDI model. However, I am willing to say that RDP programs do fall under the auspice of multi-agent systems.

Should we distinguish 'object-oriented programming' from 'programming in an object system'? I think so. Robots (actuators and sensors), processes, and services certainly might be construed as 'objects', but that doesn't mean our program expression is oriented around objects.

Actor the Ultimate

I couldn't agree more that we have good encapsulation and modularity at the levels of services/actors exchanging dynamic messages. I disagree that this shows that objects are ultimate. Objects as we usually understand them are more limited than actors because they are statically bound and do not have their own threads of execution.

Human thinking is a function of human learning

The properties Neel mentions affect your programs whether you think mathematically or not. So an aspect of the claim here is that despite any utility OO may have as a metaphor factory etc, it brings baggage and limitations along with it which may ultimately not be worth accepting - at least in some contexts.

One problem in these kinds of discussions is that there are large numbers of people quite happy with their OO languages. People like Harper are concerned with how to improve the state of the art and education, and to that end, they're interested in identifying problems with the state of the art, and interested in cases where the state of the art hampers progress - cases which the average OO-happy developer may be blissfully unaware of, perhaps having never encountered them in the kinds of systems they're doing, or perhaps because of unwittingly working around them.

The problems in this case are real, but the population of people who have been taught that OO is a natural way for humans to think is large, and will naturally resist having too much made of any problems if the solutions involve having to replace rather than simply extend existing knowledge.

turns out most humans just prefer human thinking.

I wrote some manuals and did some training related to OO in the early 90s, and back then most people didn't naturally think in OO terms. Of course a selling point of OO is that it's so natural, but if you really examine this, it doesn't hold up to scrutiny. OO is learned like anything else, and in general is a somewhat arbitrary agglomeration of features that seemed like a good idea at the time.

the object is the ultimate conceptual module.

That's too vague. Why are the specific features usually associated with objects important? Why, for example, should "the object" be subject to a strict hierarchical organization, as though Linnaeus had designed a programming language? It imposes a simplistic world view that as often as not needs to be worked around in the program design. But worse, it has some demonstrably negative consequences for reasoning about programs.

Hierarchies are just one way of organizing things, and baking the particular set of features that OO typically consists of into the core of a language as the primary code extension and "modularity" mechanism is difficult to justify today. OO has been a stage of development, but we can do better, and the improvements will make a difference even to humans who "just prefer human thinking."

The notion of an object is

The notion of an object is easy for humans to understand; the way that objects are partially realized in many languages is not. I would be happier if Harper had said that "existing OOP languages are anti-modular." In any case, saying objects are anti-modular sounds like an ideologue with a particular agenda to push, which I'm sure wasn't the intention.

Objects are the ultimate conceptual module because we think in terms of objects all the time; we use objects as abstractions for complex concepts; they are used to form cognitive modules.

Very few of us think in terms of functions during our daily life, which makes the paradigm more inaccessible to the masses. Perhaps it is a consequence of objects being more concrete metaphors.

The notion of an object is

The notion of an object is easy for humans to understand

Sure, for some woolly definition of "object", but that doesn't actually buy you anything except a marketing hook: "objects are easy to understand, my language has objects, ergo..."

The problem is that as soon as you try to define what you mean by "object" beyond the most general sense, you're committing to one of many possible perspectives on objects, which is why it is almost inevitable that:

...the way that objects are partially realized in many languages is not [easy to understand]

...and that's more true the more complex a definition of "object" you come up with.

In any case, saying objects are anti-modular sounds like an ideologue with a particular agenda to push, which I'm sure wasn't the intention.

Computer scientists are not always the most gentle communicators of truths that might hurt...

Very few of us think in terms of functions during our daily life

They may not use the word "function", but in their daily lives people depend completely on processes that transform some input to some output, which is all a function is.

Besides, objects in the PL sense are usually considered to be some set of functions operating on some data, so if you don't understand functions you can't possibly understand objects.

which makes the paradigm more inaccessible to the masses.

Again, that's an assertion which I think is much more of a reflection on what people are taught and the tools they're exposed to than anything else.

Since OO vs. functions is being raised as an issue, I want to be clear that I'm not arguing that everyone should switch to current FP languages. (Perhaps they should, but that's a separate issue.) However, I think the problems with OO, particularly features like the centrally-enshrined open recursion that Neel referred to, are such that the basic approach needs to be rethought to be able to move forward and solve some known problems. From that perspective, what Harper is saying and their curriculum choices make perfect sense.

One of the things that a mathematical approach to modeling software has going for it is that it's more granular and precise, allowing these issues to be modeled without the semantic overkill that traditional "objects" entail.

This provides both a foundation and a toolset for sustainable progress. One can model OO from a functional perspective and actually gain insight into it, which can include identifying more modular, composable and precise ways to achieve the same ends. The other way around doesn't work, it's like modeling atoms with car engines.

Computer scientists are not

Computer scientists are not always the most gentle communicators of truths that might hurt...

We practice our communication skills attempting to get a computer to do something. If not that, we're documenting APIs or explaining difficult concepts to the uninitiated. We aren't offered much practice of subtlety or circumlocution. :)

An object is quite simply a

An object is quite simply a thing, a noun. Describing a program in terms of objects is then just a matter of defining a bunch of objects and telling each of them what they should be doing to help achieve the program's objective. There is some abstraction involved, e.g., a number of objects might be created dynamically, but overall the strategy is similar. The first OO languages, say Simula and Smalltalk, were very much like this, but when OO got slammed together with procedural languages like C, it sort of lost its bearings. It is very sad that we now see these Frankensteins as prototypical OO, and it has contributed a lot to the OOP community's decline (e.g., OOPSLA losing its conference status). We definitely need to get back to objects.

I agree with you on many points, but don't see how they are relevant. That there is no nice low-level object theory in which to understand computing is not a big deal; we have the lambda calculus already. Objects are more about human thinking than abstract thinking, so why is it surprising that they don't provide the right foundation for describing computing? Perhaps when we start trying to understand how the brain works, objects might be more useful.

Why is open recursion looked down on so much? It is a very powerful and useful concept that can yield a lot of modularity and extensibility. One friction I had while working on Scala was my heavy use of open recursion to break up problems and re-mix previous solutions, which nobody from the FP world seemed to like. That open recursion has theoretical challenges is hardly a reason to avoid using it if it's useful. If your proof system (or your type system modifications in Scala's case) cannot handle it, maybe it simply isn't powerful enough. Don't attack the feature unless you've found it to be unsafe.

An object is quite simply a

An object is quite simply a thing, a noun. Describing a program in terms of objects is then just a matter of defining a bunch of objects and telling each of them what they should be doing to help achieve the program's objective.

I don't have any problem with that definition of objects - it's useful as far as it goes. But we're really discussing OO, rather than a very general definition of object. For OO, you provided a stronger constraint in an earlier comment:

OO is a feature of a language that supports inheritance in any form including implementation, type, prototype.

It's when the use of objects pulls in a bunch of big hammers that you can't easily avoid using that the problems start. When such objects are pervasive and can't be escaped, that problem is compounded.

Smalltalk is about as guilty of this as any OO language, so you can't blame the more recent hybrids for the basic problem.

That open recursion has theoretical challenges is hardly a reason to avoid using it if it's useful.

Open recursion can certainly be useful, but it's overkill much of the time. The mistake is not in supporting it at all, the mistake is enshrining it at the core of a language's data definition and function invocation mechanism, so that you're encouraged to use it even when it's not necessary, and you can't easily escape it.

If your proof system (or your type system modifications in Scala's case) cannot handle it, maybe it simply isn't powerful enough.

What often seems to be discounted is that issues such as the ones Neel listed aren't just mathematical curiosities obsessed over by program-proving theorists - they also place fundamental limits on the human ability to reason about programs. In cases where you don't have to incur that cost, it's a good idea not to.

Don't attack the feature unless you've found it to be unsafe.

We're not attacking the feature so much as attacking its language-mandated overuse.

People end up using subclass inheritance as a way to do the most simple parameterizations of behavior, because that's what the languages encourage. It unnecessarily complicates program designs and makes reasoning harder.

It's the classic hammer/nail problem: if the primary code reuse and extensibility tool you have is subclass inheritance, then every problem starts looking like something that needs to be forced into hierarchical form. That might not be so bad if the underlying hierarchy implementation mechanism wasn't so unnecessarily baroque.

I agree with you on many points, but don't see how they are relevant. That there is no nice low-level object theory in which to understand computing is not a big deal; we have the lambda calculus already

It's relevant for a few reasons. First, having examined OO from a low-level theory perspective, it's turned out to have problems which have been identified quite precisely. Second, fixing those problems will require some very basic changes, and the theory and FP experience points the way to some improvements that can be made.

For example, in many cases it's possible and preferable to parameterize behavior in much more straightforward and tractable ways than open recursion; and it's also possible to build useful and powerful languages in which either not "everything is an object", or in which objects themselves don't always have to carry all the baggage of the most powerful objects.
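
As a trivial sketch of that alternative (illustrative names only), the varying behaviour can simply be passed as an ordinary argument, with no recursion through self:

-- the variable part is an explicit parameter rather than an overridable method
reportWith :: (Double -> String) -> [Double] -> String
reportWith format = unlines . map format

plain, percent :: Double -> String     -- two parameterizations of the same behaviour
plain   x = show x
percent x = show (x * 100) ++ "%"

-- usage: reportWith percent [0.1, 0.25]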

A third reason is that the lack of a satisfying and generally useful low-level theory for OO languages* might be symptomatic of the limitations of those languages. That's a handwavy objection, I admit, but when we look back from the perspective of post-OO languages, you'll see I was right. ;)

* as opposed to a post-hoc theory designed specifically to fit OO languages, and useless otherwise.

Not your grandma's mainstream PL

People end up using subclass inheritance as a way to do the most simple parameterizations of behavior, because that's what the languages encourage.

That may be true for mainstream languages from just, say, three years ago, but aren't we witnessing at this very moment a very weird clash of the Lisp/Smalltalk/Self ideals with the C family of languages? I mean, C# already has closures, eval ("Compiler-as-a-Service"), and quasisyntax ("LINQ Expression Trees"?).

I just read a tweet saying

Java 9 - continuation, value classes, big data, meta-object protocol, data integration. #eclipsecon

Modern mainstream languages no longer restrict people as much as they did just years ago, so I'm hopeful people will choose the right parameterization tools in the future.

That weird clash is providing

That weird clash is providing some of the basic components that are needed, but I don't think it's yet providing any coherent way to organize those components - e.g. something like parameterizable modules, or type classes, but without the baggage of OO classes. To get that kind of organization right now, OO classes seem to be the only real choice.

Then there's also the everything-is-an-object aspect, which makes it difficult to express precisely what you need and nothing more.

Somewhat ironically, C++ does better in these areas, being more low-level. But there don't seem to be any mainstream languages that do a good job of providing a spectrum of power in the constructs they provide, while still being higher-level than C/C++.

Then there's also the

Then there's also the everything-is-an-object aspect, which makes it difficult to express precisely what you need and nothing more.

Depending on how you interpret the above, the problem isn't everything-is-an-object, but everything-inherits-from-object. You can have OO but without inheritance hierarchy baggage.

Agreed

Agreed, thanks for the correction (and reference.)

Computer scientists are not

Computer scientists are not always the most gentle communicators of truths that might hurt...

There is truth and then there is Fox News. It is often difficult to tell when someone is communicating truth (as they see it) and when they are being ideologues.

Going further with functions

Anton said:

They may not use the word "function", but in their daily lives people depend completely on processes that transform some input to some output, which is all a function is.

I think you can actually go much further than this: people use "functional" concepts all the time in everyday natural language, even without any notion of transformation, but rather with just some notion of unique relation: "the dog's head", "my left foot", "the height of that desk", etc.

Nah

Nah, that is just sugar for dog.head, this.feet.left and desk.get_height. Which is much cooler, because you can simply invoke desk.set_height, and the desk adjusts itself.

I'm very worried when

I'm very worried when someone says something like this. That is very much an object concept, not a functional concept. As a weak rule of thumb, if nouns come first...objects, if verbs come first...functions. Of course, you'll often have lots of object-thinking in a FP program and function-thinking in an OOP program.

My sense/cents

So if a language has syntax 'obj.getValue' then it's object-oriented and if the syntax is 'getValue(obj)' then it's functional? What if a language permits either syntax?

IMO, the primary characteristic of OO is that all object values contain their methods, i.e. the implicit higher-order aspect that Neel mentioned. Variations on inheritance/open recursion are secondary to me. And already, with this limited definition, I see OO as a bad idea.

On the other hand, I do agree that object modeling, whereby you name nouns and verbs as you refine towards a design, is a good idea. I also think the OO emphasis on hiding implementations is a good idea. But while there are good ideas encouraged by OO, I don't think any of the good ideas actually require or are unique to OO.

I think there is a clear

I think there is a clear consensus that open recursion is a subtle feature. Object-oriented programmers most often recognize the (directly related) difficulties of implementation inheritance, and I find it easy to explain that we shouldn't need that to be implicitly present everywhere.

On the contrary, your idea that the bundling of objects and functions is already a "bad idea" is quite new to me. Could you elaborate a bit more on why it would be worth fighting such a presentation?

Right now, the only issue I see is the focus on unary methods (or methods with "only one parameter that really counts"). This technique has trouble with binary operations, which have been discussed to death, but those troubles are rather caused by the encapsulation techniques.

It's mainly that I prefer

It's mainly that I prefer the alternative of pervasive ADTs. There was a similar discussion here: On Understanding Data Abstraction, Revisited. I think OO is more difficult to reason about, suffers from impostors and the binary method problem, and the distinction between o.f() and f(o) results in an ad hoc encoding that complicates design, to name a few points that come to mind.

Binary methods

Agreed. Dealing with the binary method problem in Fortress -- it's ridiculous that something so straightforward and commonplace necessitates its own "problem" -- is what sealed the deal in my distaste for OOP. Even with multiple dynamic dispatch (multimethods), you're forced into exactly the kind of ad hoc encoding Matt mentioned in order to write, say, an abstract class for things that have equality.

Equality just sucks

I think the problems are just really hard here. In particular, equality has no good solutions that I've seen. For example, I don't know of any language where you (a) don't have binary method issues (like Java), and (b) can make two set representations that are equal to each other as sets (which Haskell, Racket, etc. can't do).

Can you elaborate / specify

Can you elaborate / specify requirements for this set problem?

Sets

I'd like to be able to implement sets with lists, and with ordered trees, and compare them for equality with each other based on having the same elements. In Haskell, to even make a == b typecheck in this setting, we need to construct an injection to a common type for both versions of sets, and inject before comparing.

I don't understand

What's wrong with the straightforward approach?

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class MEq a b where
    (===) :: a -> b -> Bool

instance Eq a => MEq a a where
    (===) = (==)

instance MEq Int Integer where
    x === y = fromIntegral x == y

instance MEq Integer Int where
    x === y = x == fromIntegral y

instance MEq a b => MEq [a] [b] where
    ...

The approach of injection to a common type is useful because it replaces the need to write N^2 special comparison functions with one comparison and N injection functions, but it's not a limitation.

I don't doubt that you could run into some annoyances with overlapping instances and over general inference if you tried to use this approach in earnest, but I don't see any problems with the basic functionality.

Welcome to Java

Now you've implemented Java's solution to equality, and you have a potentially non-reflexive equality relation; i.e. you have an invariant to maintain. The solution is better than in Java, but far from great.

I still don't see the

I still don't see the properties you're looking for that you think you can't achieve in Haskell. If you want to avoid manually checking that a set of comparison functions form an equivalence relation, then you need to either have all sets implement a protocol that can be used to define the comparison or have some kind of theorem proving capabilities. I don't think you're complaining about the lack of a theorem prover in Haskell -- are you?

The protocol doesn't have to be injection to a common type. For example, the protocol could be enumeration of members to a list:

class Set (s :: * -> *) where
    getMembers :: s a -> [a]

instance Set TreeSet where
    getMembers = ...

instance Set ListSet where
    getMembers = ...

-- My previous MEq; sameElements checks mutual containment of the two lists
instance (Set s, Set t, Eq a) => MEq (s a) (t a) where
    x === y = sameElements (getMembers x) (getMembers y)

It seems clear that you have to have some protocol common to sets that enables the comparison, though.

A small example

Here's a straightforward way to use typeclasses to perform equality on distinct things, provided that they share a faithful injection to some common type. The only irritating bit is that you have to declare the same injection both ways. But that's easy enough to automate.

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, ExplicitForAll #-}

class MEq a b c | a b -> c where
    dict :: a -> b -> (a -> c, b -> c)

instance Eq a => MEq a a a where
    dict _ _ = (id, id)

(===) :: forall a b c. (Eq c, MEq a b c) => a -> b -> Bool
x === y = f x == g y
    where (f, g) = dict x y

instance MEq Int Integer Integer where
    dict _ _ = (fromIntegral, fromIntegral)

instance MEq Integer Int Integer where
    dict _ _ = (fromIntegral, fromIntegral)

That said, I can't imagine that this type of polymorphism is all that useful in practice.

Comparison

Personally, I compare values that might be too big to fit in Int with ones that do on a regular basis in Racket, where = does the comparison you've described.

Also, your solution seems either to require a projection to a third type, or to have the binary method problem in that you have to define a "canonical" version.

There's a difference of philosophy here

The scheme numeric tower is really based on a notion of types as runtime attributes, while the typeclass approach is based on the static tradition. It's not hard to embed a numeric tower into Haskell with the usual union approach, but the cost in terms of efficiency is arguably not worth it. Learning to be precise with fromIntegral and realToFrac calls (the latter of which could use some optimization, unfortunately) is one of the startup costs of Haskell. However, arguably, the control they give is more useful than not, and the typeclass approach provides a sound way to extend the notion of a number in many directions.

I see the intermediate type as a constructive proof that the structures can in some sense be seen as the "same", and so a somewhat legitimate thing for a compiler to ask for. I agree that equality is hard, but that has more to do with attempting to capture intensional vs. extensional equality, especially in the context of infinite structures, than with competing approaches (ad hoc, parametric, nominal, structural) to polymorphism.

In general, I think it's good practice to limit unnecessary ad-hoc polymorphism, and the more restrictive standard Eq provides a perfectly good meaning -- not only are these things equal under some projection, but they are equal in type (the latter being checked at compile time).

Numeric towers

I think the "numeric tower" argument is a good example for the confusion between types and what Bob Harper calls classes -- it's really a tower of classes. If you want to have something similar in a typed language you would simply have a single Number type that is inhabited by the whole numeric tower. There is absolutely no reason why it should be less convenient or less efficient than what you get in Scheme (nor why it requires type classes).

Except that no typed language does it this way, and there are perhaps good reasons for that. But types like Integer in Haskell or IntInf.int in SML are basically a simple subset of what you know as the numeric tower.

Maybe I'm missing something,

Maybe I'm missing something, but can't you have the numeric tower in Haskell by defining the arithmetic type classes as operating on different numeric type params, i.e. Num a b, and then providing instances for permutations of "primitive int" and "infinite int"?

This is basically what the Scheme interpreter has to do, except now you can extend it yourself and it's no longer part of your trusted runtime. No efficiency would be lost, and you wouldn't need manual coercions like S.Clover suggests, unless you're missing an overload.

That requires existentials

Well, you also need existential types then. Or how else would you store a value of arbitrary number type?
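For what it's worth, a minimal sketch of the existential approach (names invented): an existential wrapper lets you store numbers of arbitrary type in one collection, but you can then only use whatever operations were packed up with them -- doing arithmetic across elements still pushes you toward a single widest type.

{-# LANGUAGE ExistentialQuantification #-}

-- A value of some unknown numeric type, packaged with its Num and Show dictionaries.
data SomeNum = forall a. (Num a, Show a) => SomeNum a

-- A heterogeneous collection of numbers becomes expressible:
numbers :: [SomeNum]
numbers = [SomeNum (3 :: Int), SomeNum (2 ^ 100 :: Integer), SomeNum (0.5 :: Double)]

-- We can use whatever the packed dictionaries offer...
showAll :: [SomeNum] -> [String]
showAll = map (\(SomeNum x) -> show x)

-- ...but we cannot add two SomeNums directly: their hidden types may differ,
-- so summing still means injecting everything into one widest type.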

But again, I think this is a distraction entirely. The numeric tower isn't about types, it's about distinguishing different representations of a single type.

Well, you also need

Well, you also need existential types then. Or how else would you store a value of arbitrary number type?

Ideally the overloads would only map to wider types, but there is a wrinkle I hadn't considered:

class Num a b where
  (+) :: a -> b -> a
  ...
instance Num Integer Int where
  ...

But, instance "Num Int Integer" is not possible, because NativeInt cannot hold all values of Integer, so you either need a common type which can hold all the possible values, or some sort of rewrite rule that ensures that 'a' type param is always instantiated to the larger of the two types, or you can add another type param may make this possible:

class Num a b c where
  (+) :: a -> b -> c
  ...
instance Num Integer Int Integer where
  ...

Anyway, I know it's all academic, this just piqued my curiosity since I had wondered whether it was possible in the past.

There's a required runtime

There's a required runtime element in the way it works in Scheme -- incrementing an Int can require promotion to Integer. But other than that, a cleaner way to organize what you're proposing might be:

class PromotesTo a b where
    promote :: a -> b

instance PromotesTo Int Integer where
    promote = fromIntegral

class Num a where
    (+') :: a -> a -> a

(+) :: (Num a, PromotesTo b a) => a -> b -> a
x + y = x +' (promote y)

Edit: Oops, handling promotion of the first argument isn't as trivial as I'd thought. You'd need to add a new type class to handle that overload. So this isn't that slick.
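For what it's worth, one possible shape for that extra class -- following the three-parameter route suggested upthread, with all names here invented and only a sketch -- is to pick a common wider type for the two operand types and promote both sides into it:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- Pick a common wider type c for the operand types a and b, and promote both into it.
class Widen a b c | a b -> c where
  widenL :: a -> c
  widenR :: b -> c

instance Widen Int     Int     Int     where widenL = id;           widenR = id
instance Widen Int     Integer Integer where widenL = fromIntegral; widenR = id
instance Widen Integer Int     Integer where widenL = id;           widenR = fromIntegral
instance Widen Integer Integer Integer where widenL = id;           widenR = id

-- A hypothetical mixed-type addition built on Prelude's (+):
(.+.) :: (Widen a b c, Num c) => a -> b -> c
x .+. y = widenL x + widenR y

This handles promotion of either argument at once; the obvious cost is the quadratic family of Widen instances, which is the same N^2-versus-N trade-off discussed above for comparison functions.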

Yes, I suppose to do this

I suppose to do this properly you'd need any bounded integer types to be parameterized by their value, so that overflow could be typed as a promotion. Definitely an interesting problem to make this static and user-friendly.

Either way

Ideally the overloads would only map to wider types, but there is a wrinkle I hadn't considered

Either way, it doesn't help: when you want to build something like a list of (arbitrary) numbers you still have to pick the widest type. So having a type hierarchy doesn't make much of a practical difference relative to only having the widest type.

But I don't see much of a reason to care about the hierarchy either.

styles of information hiding?

i wish the whole "which side of the expression problem do you knee-jerk come down on" thing had a more straightforward solution. even though i've read various and sundry papers about how it is all solved in one language or another, it seems like it and offshoots thereof are, of course, never going to go away.

one main thing i like about OO is i can say "thing.draw(canvas)" rather than "if(thing == ThingTypeA) { thingADrawer(thing, canvas) } else ...". the former means i can be more (not entirely of course) decoupled from the drawing implementation. the latter seems less so; it means bringing the switch statement up "higher" in the code, making it more explicit. explicit can be good, but can be bad. double-edged banana (not in the lens sense, in the plantain sense) and all that.

assuming one would like to be able to do something like "thing.draw()" in a non OO world, what would one see as the best way to do that? typeclasses? functors of some ilk? something else?

I agree with you that's

I agree with you that's desirable, and yes, I would say modules/type classes are the way to do that.
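For concreteness, here's a minimal sketch of "thing.draw(canvas)" done with a type class; Canvas, Circle, and Rect are invented stand-ins:

type Canvas = String   -- stand-in for a real drawing surface

class Drawable a where
  draw :: a -> Canvas -> IO ()

data Circle = Circle { radius :: Double }
data Rect   = Rect   { width :: Double, height :: Double }

instance Drawable Circle where
  draw (Circle r) c = putStrLn ("circle of radius " ++ show r ++ " on " ++ c)

instance Drawable Rect where
  draw (Rect w h) c = putStrLn ("rect " ++ show w ++ "x" ++ show h ++ " on " ++ c)

-- The call site stays decoupled from the concrete implementation, as with thing.draw(canvas):
render :: Drawable a => a -> Canvas -> IO ()
render thing canvas = draw thing canvas

Dispatch here is resolved per type rather than per object, so a heterogeneous list of drawables needs one more step (an existential wrapper or a record of functions) -- which is exactly where the expression-problem trade-offs show up again.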

This is a mere matter of

This is a mere matter of syntax, but one can accomplish it with a recursive record expression, where records can have first-class function values as fields alongside other kinds of fields. For example (pick your favorite language):

  makeRec (pta:Pt, ptb:Pt) = 
    rec this 
      { pt1 = pta; 
        pt2 = ptb; 
        drawOn (c:Canvas) = drawRec (c, this.pt1, this.pt2) };
  r = makeRec (Pt 0 0, Pt 100 100);
  r.drawOn (theCanvas)

Of course, one may have to define the drawOn field as "drawOn = function(c:Canvas) ..." but that is inconsequential to the example.
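For instance, a directly runnable Haskell rendering of the sketch might look like this (Pt, Canvas, and drawRec are just stand-ins):

data Pt = Pt Int Int
type Canvas = String   -- stand-in for a real canvas

drawRec :: Canvas -> Pt -> Pt -> IO ()
drawRec c (Pt x1 y1) (Pt x2 y2) =
  putStrLn ("rectangle (" ++ show x1 ++ "," ++ show y1 ++ ")-("
            ++ show x2 ++ "," ++ show y2 ++ ") on " ++ c)

data Rec = Rec { pt1 :: Pt, pt2 :: Pt, drawOn :: Canvas -> IO () }

makeRec :: Pt -> Pt -> Rec
makeRec pta ptb =
  let this = Rec { pt1 = pta
                 , pt2 = ptb
                 , drawOn = \c -> drawRec c (pt1 this) (pt2 this) }
  in this

-- r = makeRec (Pt 0 0) (Pt 100 100); drawOn r theCanvas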

IMHO, the "OO notation" is completely unexciting, and OO had better have something far greater to offer. Actually, the much maligned "open recursion" is one the few really interesting things OO does have to offer for certain types of problems (ala, the typical OO GUI framework).

- S.

I'm not sure what you mean

I'm not sure what you mean by "the verb" in "the dog's head" or any of fruehr's examples. Does it make a difference to say "the head of the dog" instead?

The map is not the...

I'm very worried when someone says something like this. That is very much an object concept, not a functional concept.

How do you justify that? Without a rationale, it seems to depend on circular reasoning. If you model the problem from an object perspective, you'll model its features as object concepts. From a functional perspective, the features are modeled as functional concepts. From a relational perspective, you have relational concepts. To say something "is" an X concept assumes you've decided to model the problem using approach X. (Edit: see scottmcl's comment about the philosophical problems with trying to be more absolute about this.)

A key point about this was raised by dmbarbour in this comment: humans use multiple models to understand things. We benefit from that because we can compare and contrast perspectives, check them against each other, and gain a deeper understanding of the subject.

But programming languages and programs are more restricted - any single program uses a very limited number of models. This can create a tendency to conflate the modeling approach being used with the thing being modeled, but that's a map/territory mixup, which shouldn't be allowed to affect our reasoning when we're comparing modeling approaches.

With objects, we create and

With objects, we create and talk about identities, taxonomies, and hierarchies. I am quite certain that when someone thinks of a "dog's head", they are thinking about dogs, they are thinking about animals, they are thinking about heads, they are thinking about body parts. All of this is definitely in the realm of objects, and I don't see any functions. They would be able to deduce that "a dog's head" is "an animal's body part." And also, how is head(dog) any clearer than dog.head?

I agree with you about conflation. The idea of an object-oriented declarative language is that you are using objects to model your solution and are not contriving them to be lambdas or closures. Let's take this conversation up in a couple of weeks when I finish writing this paper; I'll have more meat for the grinder then.

This is on target. The later

This is on target. The later writings of early "structured programming" advocate Michael Jackson are superb on these "map vs. terrain" style problems. His "Problem Frames", among others, is a fine and really thought-provoking read.

There is really nothing wrong in itself with provisional reifications of large bodies of knowledge into simple abstractions like "coordinate grid, points, lines and shapes." It's arguable that minus such simplifications, we wouldn't have written much software in the past.

Whether these reifications are via objects, or records, or ADTs, or even assembler macros that help us construct and manipulate data structures in assembler isn't unimportant, but no language "paradigm" advocate should be surprised should the complexities of reality start creeping into our programs and shaking the foundations of our simplifying abstractions.

The problem becomes more theoretical and interesting when we return to dear Ms. Grace Hopper's (no kidding) early call for widespread program code reuse. Looking for some kind of unified field theory of large scale reuse and composition of our abstractions leads to two interesting questions:

1) how "simplified" are our abstractions really "allowed" to be if large scale reuse is our goal (watch an accounting firm crash and burn trying to conduct an "OO analysis" on literally thousands of insurance product underwriting and actuarial rules);

2) are we adequately trying to exhaustively enumerate and characterize all of our abstraction mechanisms and then devise logically sound means for, and limits upon, their composition?

Interesting stuff.

- S.

Sorry to worry you

Sorry to worry you, Sean. I was aware of the issue you raise here when I made that remark, admittedly somewhat flippantly, in the hopes of raising some controversy. I was of course successful beyond my wildest dreams :) .

Still, the broader point I guess I was aiming at is that we might look to natural language (among other things) to find guidance in how to think about these things. And there might be a decent amount of linguistics (and even philosophy of language) to be done before we get too far. And there is always room for perspective and cognitive style to influence how anyone in particular looks at these things (see some of the other comments hereabouts).

One can obviously twist these things around with bad design choices: "feet" that are sub-classes of "people", but with the head, arms, torso, etc., set to null (I've seen the equivalent in code). And I'm not saying that FP is immune to bad design choices, either.

deceptive intuitions

The notion of an object is easy for humans to understand

It sure seems that way, but when we look too deeply we run into problems about identity and change, bundle vs. substance. Our alleged 'understanding' of the notion of objects seems to be rather shallow.

Further, even if we can 'understand' a world in terms of objects, that does not mean we should write our programs that way. I believe that humans tend to 'understand' multiple views of a system at once - i.e. the brain is a very flexible pattern-processing engine, and can easily apply multiple models to predict the history or future of a scenario. Developing a program as an object model tends to calcify a particular view unless you use big, encapsulation-breaking features like reflection or aspect-oriented programming. Humans are quickly disappointed that their OO model isn't as flexible as their brain's model. (Often, we fall back to those big, encapsulation-breaking features. But, as neelk notes, this causes issues for modularity.)

I rather like relational and logic programming because they enable ad-hoc views of a system without sacrificing rigor.

The main reason I include something similar to 'objects' in my work is to support the object capability model, which is the most effective, expressive, efficient, and scalable model for secure composition to which I've been exposed. But 'capability oriented programming' uses a very different set of patterns, with a very different emphasis, than does traditional OOP.

Objects are the ultimate conceptual module because we think in terms of objects all the time

There are other concepts we think of all the time, such as relationships and social structure, news and gossip, schedules and priorities, means and motivation, beliefs and intentions, rules and laws. It isn't obvious to me that 'objects' rank anywhere near 'ultimate' with respect to intuition.

Among those alternative concepts, you can find powerful, alternative metaphors for programming. A few of them are even modular.

I'll grant that functional programming is a high wall for most developers. The 'eureka moment' for recursion takes a while for beginning programmers. Given limited time to educate, it might be unrealistic to expect a few more 'eureka' moments for monads, arrows, FRP's temporal semantics, and the like. So I totally understand your desire to seek an intuitive metaphor that gives the masses of grunt programmers an effective starting point.

But I'm not at all convinced that objects are the right metaphor. I feel developers spend more time working around the object metaphor than working with it.

The main reason I include

The main reason I include something similar to 'objects' in my work is to support the object capability model, which is the most effective, expressive, efficient, and scalable model for secure composition to which I've been exposed. But 'capability oriented programming' uses a very different set of patterns, with a very different emphasis, than does traditional OOP.

Could you expand on the differences between the set of patterns applied in traditional OOP and OCap? I ask because I've recently read Mark Miller's thesis and was surprised to see how much his description of POLA and its impact on program design mirrors the literature about "best practices" in OO. Pryce and Freeman's book I cited below is a case in point.

There is certainly some

There is certainly some overlap. For example, avoiding global state is excellent for unit testing and configuration, so it is quite natural that OOP developers managed through trial-and-error to discover some capability basics. The next step is to apply it to everything else.

But you won't see downcasting, monkey patching, multi-methods, or ad-hoc object extensions. You'll see very little of the visitor and composite patterns. And you'll see a lot more revokers, membranes, facet-forwarders, and sealers/unsealers. The number of 'types' in capability oriented programming is much greater due to all the fine-grained facets, so structural typing and anonymous classes are much more convenient. (And consequently systems are bound based on interfaces, not implementation classes.)

Distinct objects in capability oriented programming are often motivated more by control over information flow and less by modeling of the program or domain. This lends a very different 'feel' to the decision process. Services tend to be very fine-grained - down on the level of 'getter' and 'putter' rather than 'queue'.

You could look into some of the 'taming' done on, say, UI libraries to get into the mindset of a capability-oriented developer.

support for induction

Typed OO languages always introduce terminology to talk about closed collections of values. Examples include the most-derived type (C++) and type (Ada); or they add things like final classes that don't have descendants, or suggest programming styles where only leaf classes are concrete and all classes higher in the hierarchy are abstract. As you point out, induction depends on this concept, so the fact that they have terminology for it isn't surprising.

None of the languages I'm familiar with do it very well but that is completely different than claiming it can't be done in any OO language.

Not so bad in practice

Neel, obviously from our work together, I agree that the problems you cite are real. However, I think you are exaggerating their effect in practice.

It's true that induction isn't generally the appropriate approach for object structures; any kind of reasoning about a set of values breaks representation independence (exceptions for classes that are final or are limited to a fixed set of cases, as is possible in Scala or Fortress).

In practice, however, relying on procedural data abstraction (existential types) together with a specification of the behavior of the methods of the object works quite well. OO reasoning does not work by breaking down data structures, or following program traces into calls. It works by understanding the semantics of an interface (usually documented in informal prose) and reasoning about clients and implementations separately in terms of that. Expert OO designers reason about programs quite well using informal techniques based on understanding interfaces, and have learned to avoid the "gotchas" (e.g. inappropriate use of inheritance) that lead to reasoning errors.

Of course, program logics are useful when you want to prove something with mathematical certainty. But we should not claim that one cannot reason (even informally) without them; the reasoning that successful OO programmers do every day is an easy counterexample.

Forms of data abstraction

[...] relying on procedural data abstraction (existential types) [...]

Now I'm confused. Are you suggesting that these are the same? I think they are not. In fact, I would argue that the difference between the two is the core difference between object-oriented and modular (ADT-based) data abstraction.

It depends on whether the

It depends on whether the existential type is open or closed; I should have been more clear about this. An open existential type is an ADT--the type is open and therefore fixed for all elements of the ADT, though it is unknown to clients.

A closed existential type is an object--many objects may have the same (closed existential) type from the outside perspective, but when you open the type of each object (typically by calling a method on that object), you find that the different objects may have different representation types.
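For what it's worth, a small Haskell sketch of the contrast (all names invented; the ADT's hiding would come from a module export list in real code):

-- ADT-style: one hidden representation shared by every value of the type,
-- so operations may inspect the representation of both arguments.
newtype IntSetADT = MkIntSetADT [Int]   -- constructor kept abstract in practice

emptyADT :: IntSetADT
emptyADT = MkIntSetADT []

insertADT :: Int -> IntSetADT -> IntSetADT
insertADT x (MkIntSetADT xs) = MkIntSetADT (x : xs)

unionADT :: IntSetADT -> IntSetADT -> IntSetADT
unionADT (MkIntSetADT xs) (MkIntSetADT ys) = MkIntSetADT (xs ++ ys)  -- sees both representations

-- Object-style: each value packages its own behavior; two values of the same
-- interface type may hide completely different representations.
data IntSetObj = IntSetObj
  { member :: Int -> Bool
  , insert :: Int -> IntSetObj
  }

fromList :: [Int] -> IntSetObj
fromList xs = IntSetObj { member = (`elem` xs)
                        , insert = \x -> fromList (x : xs) }

allEvens :: IntSetObj    -- an "infinite" set no list representation could hold
allEvens = IntSetObj { member = even
                     , insert = \x -> withElem x allEvens }

withElem :: Int -> IntSetObj -> IntSetObj
withElem x s = IntSetObj { member = \y -> y == x || member s y
                         , insert = \y -> withElem y (withElem x s) }

-- A binary operation on objects can only go through the other argument's public
-- interface (e.g. member), never its hidden representation -- the binary method
-- problem mentioned above.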

Expert OO designers reason

Expert OO designers reason about programs quite well using informal techniques based on understanding interfaces, and have learned to avoid the "gotchas" that lead to reasoning errors.

Hi Jonathan, you could say that figuring out why this sentence is true is pretty much my research agenda. :)

The thing I find really fascinating is that good OO programs are modular, but only because they make use of the features that break the overall modularity properties of the language. As a single example, good OO architectures often make systematic use of reflection -- for example, to interpose library code between an implementation and its clients (e.g., reflective delegation or proxying), or to support interface discovery (e.g., IUnknown/IDispatch in COM).

This is very surprising! Of course, reflection means classical data abstraction fails, since malicious clients can detect and branch on the implementation types. Obviously, OO library designers do not expect malicious clients, but equally they do not expect their clients to completely eschew reflection, either. So characterizing what they do mean by "well-behaved client" is a very subtle question.[*]

To put things in a maybe provocative way, I find the "bad" features of OO more interesting than the "good" ones, since finding elegant proof principles for a "bad" feature turns it "good".

[*] Here's the toughest example I know of. I want my web browser to know how much memory it has allocated, and to flush some caches whenever it is using too much memory. Note that the browser needs reflective access to its memory usage, and furthermore that its observable behavior should change depending on that. This means we *want* contextual equivalence to fail -- but what's the right generalization that we want to go through? (A good answer to this puzzle basically tells us how to mathematically characterize what an operating system is.)

Avoid memory/time observation when possible

It would seem that you need to reason about memory usage to model your example. Reasoning about the performance (time and memory) of a running program is very difficult indeed.

I think you can do without it and still provide a satisfying answer wrt. program reasoning. You should provide a specification of the browser behavior, from the point of view of the interested parties (e.g. the user), then show that flushing the caches preserves the browser state as modeled by the specification. This ought to be true, as flushing the cache should be a transparent operation.

In other terms, I think that contextual equivalence is too strong *if* you observe all the behavior of your program. By using a specification/model that allows you to leave out the details you cannot afford to take into account, you can prove equivalence wrt. the specification.

Of course, you then need to be sure that your specification is correct. Contextual equivalence in the lambda-calculus sense is very strong because it says that *all* specifications (insofar as they are based on "observation" of the program, i.e. external interaction) are preserved.

With such a specification, you won't be able to talk about the memory usage and the number of requests "saved" by the presence of the cache. If you want to explicitly reason about those, you need a finer specification. But they are usually not considered part of "correctness". In particular, programmers usually do *not* know how efficient the caches will be when programming; they're convinced that it will not increase the number of requests, and that should be easily provable, but all other results are usually obtained experimentally.

"Bad" features are often used in structured ways


To put things in a maybe provocative way, I find the "bad" features of OO more interesting than the "good" ones, since finding elegant proof principles for a "bad" feature turns it "good".

This is quite insightful, and I think there is a lot of truth to it. I would add, however, that when we find elegant proof principles for a "bad" feature, those proof principles are typically capturing already existing ways of using the supposedly "bad" feature, which are structured in a way that they do not have negative effects.

I don't have a solution to your memory problem, but another good example is casts in OO languages. Clearly an unprincipled use of casts (or instanceof) can break abstraction; their use is discouraged for that reason. However, sometimes using instanceof in Java-like languages is OK. Typically, this is when the programmer is using a class hierarchy to represent a datatype; since Java doesn't have pattern matching, it must be simulated via instanceof and cast (or visitors). As long as the intent of the hierarchy is a datatype, there's no problem with this! Of course, it would be better if Java, like Scala, Fortress, and some other advanced OO languages, had a way of expressing and distinguishing both the datatype intent and the pure OO intent.

The fact that these structured uses already exist in practice, of course, doesn't mean discovering proof principles for "bad" features has no value. On the contrary, they can both help make reasoning more rigorous, and help explain, check, and refine those structured uses.

As language designers, we should also be careful about calling such features "bad." Even if they break some of our favorite principles, we should recognize that a lot of very useful programs rely on them. In many cases if we simply throw out the "bad" feature, we throw out the useful programs as well. It is better to redesign the "bad" feature to distinguish (and prohibit) the bad uses while supporting the good ones; of course sometimes this is hard ;-).

In many cases if we simply

In many cases if we simply throw out the "bad" feature, we throw out the useful programs as well.

Are you talking about backwards compatibility?

In a new language, avoiding bad features just means developers will express the program in a different or more explicit way. You cannot say that a program 'relies' on a feature, only benefits from one.

Not backwards compatibility

No, I'm not talking about backwards compatibility, I'm talking about how supposedly "bad" features actually enable important kinds of expressiveness. (you are right I should have said "benefits from")

Take casts and subtyping, features supported by many OO languages. These features are considered "bad" by some because they add dynamic checks that can fail, and generally reduce the amount of static reasoning in the program. For example, this occurs when you use subsumption to refer to an object through a supertype interface, then later you need a downcast to get the actual type back.

Certainly it is better to avoid the dynamic check, all else being equal. But all else is rarely equal, in practice. In a language without subtyping, if you want to have code that is generic with respect to the actual type, you must use explicit parameterization for genericity. The introduction of a parameter has a higher syntactic and conceptual cost than using subtyping: for example, it is not at all unusual for an OO interface to use several other interfaces (thus leveraging subtyping), but when an ADT has multiple parameters (more than two) it starts getting really unwieldy to use. As a result of the higher cost of explicit parameters, they get used less often...and code is less generic.

As an aside, I believe the above is why OO languages are often considered to provide more reuse, and I believe the effect is real. However, this requires a longer discussion which is probably not germane to the current topic.

Parameterization is often a good thing. But choosing when to use parameterization and when to use subtyping+casts is a cost-benefit tradeoff. Eliminating the option of subtyping+casts because casts are "unsafe" takes away design options in this tradeoff space. If you have bad designers, you prevent them from shooting themselves in the foot--but if you have good designers you are limiting their effectiveness.

Of course I am oversimplifying: one can have subtyping without casts, and there may be language designs that provide the benefits of casts where needed without some of the drawbacks. The point I am trying to make is that blindly leaving a feature like casts out of a language design, simply because they are dangerous, may have unintended consequences that ultimately make a language less useful.

Straitjackets vs. loaded guns

The point I am trying to make is that blindly leaving a feature like casts out of a language design, simply because they are dangerous, may have unintended consequences that ultimately make a language less useful.

The ultimate expression of this statement is the difference in purpose between static and dynamic languages.

We could call this programming in a straitjacket. The first time I was exposed to a static functional programming language like ML, it was very frustrating because of the ceremony I had to go through to express even the simplest of behaviors. We are slaves to the type system and must think long and hard about everything. Not great for bricolage-style hacking, to be sure.

The appeal of dynamic languages is that they get rid of all the ceremony: you just express what you want very quickly. Everything is about minimizing the steps between thinking about something and (key) typing it out. However, the trade-off is that we get fewer guarantees about safety; things break very quickly and we wind up spending lots of time in the debugger trying to figure out where our type errors are. They are very much loaded guns.

Language design should be about balancing straitjackets and loaded guns. Applied to features in static languages, we have to accept some that are dirty, or at least not very clean, if we are going to achieve our usability goals. Language design isn't just about formalizing everything and making sure everything is sound. The language has to have a purpose, and programmer thinking should be considered.

Strange metaphors

Earlier a language about the reproductive behavior of salmon, and now Sean is developing one about balancing straitjackets and loaded guns? Why can't ye crazy fools choose something more traditional and intuitive, like snakes and ladders? ;)

Language design is about putting nice non-functional properties on paths-of-low-resistance and bad non-functional properties on paths-of-high-resistance. The idea is that developers should be able to achieve the nice properties and avoid the bad ones without much thought or effort. This allows them to focus most of their attention on the domain and functional properties.

Straitjackets are a relatively obtrusive way to raise 'resistance' (i.e. it syntactically seems like you can do something, but then you are forcibly reminded that your hands are tied), but I'm sure some developers confuse the jacket with loving hugs. Loaded guns are a dangerous way of lowering resistance (allowing developers to blast their way through an obstacle while dodging ricochets and shrapnel), but I'm sure cowboy coders love them. I don't believe we should be 'balancing' these things, but rather seeking gentler approaches to controlling the paths of resistance.

For example, a language designer can influence which subprograms can be written directly, which require repetitive 'design patterns' or 'boiler plate', and which require whole darn frameworks or a lot of collaboration and self-discipline among different development houses. Controlling which properties require more syntax, more documentation, or more attention can serve well in place of 'straitjackets'.

ultimate expression of this statement is the difference in purpose between static and dynamic languages

Re: static vs. dynamic dichotomy

I'm of the opinion that the static vs. dynamic distinction is mostly a source of accidental difficulty. We're often forced to work around it, using ad-hoc upgrade protocols that aren't reflected in the language and scripting-language interpreters that aren't supported in our static-language IDEs. Staging, runtime specialization, runtime upgrade, continuous compilation, live programming - a lot of very useful techniques diminish this static vs. dynamic distinction to a mere question of performance optimization (i.e. optimizing startup time).

I prefer spatial distinctions... instead of 'static' and 'dynamic', it's all about 'syntactically local' and 'remote'. Makes for a very different type system.

You should think about the

You should think about the goals of your language. Is it meant for small programs that glue together powerful components? Is it meant for writing an OS kernel? Is it meant for doing some quick text processing? Once you decide that, then you can optimize for your use case. If your programs are relatively small, dynamic type checking with lots of flexibility can make sense, or at least lots of duck typing. Sometimes you really want a loaded gun. If you are implementing control software for NASA, then you want a pretty thick straight jacket. That developers would prefer one over the other is hopefully influenced by the kind of work that they are doing.

Maybe "balancing" is not the right word, maybe "balance according to your goals" is better. Some languages try to do both, like Scala, they try to have that sense of static safety while at the same time eliminating as much ceremony as possible.

I also didn't mean to imply that this tension could be boiled down to static vs. dynamic, just that static vs. dynamic is probably the best example of this tension.

Gradient

You suggest seeking a static 'balance' at language-design time to meet needs anticipated by the language designer. The alternative is to seek a gradient such that the language can be as hard or soft as the project needs.

The default must be dynamic and flexible, and you add annotations or boiler-plate to harden the language. This minimizes overheads for small (one-liner or one-page) programs.

The annotations that harden the language must not change the program's observable behavior, otherwise you cannot 'harden' a project as it grows in scope. Ideally, you want the ability to take a project from rapid prototype to final product within the same language, and reuse some of the algorithms.

There are a lot of techniques that can help achieve the above properties:

  • pluggable types
  • dependent types
  • provide the more dynamic features (such as evaluate, reflection, network) as ocaps implicitly available to the toplevel code (avoid 'main' boilerplate) but not globally
  • toplevel semantics based on communication and concurrency (i.e. dataflows, workflows, orchestration... glue), as opposed to calculation
  • easy access to modular services or service-discovery (i.e. using plugins, remote processes, URIs)

I think it's important to design languages seeking a gradient if we are ever to have scalable languages - i.e. that work for both small and large projects.

Consistent experiences

I'm going to argue against a truly scalable language because the language needs to define a focused consistent experience to guide programmers. Scala again is an example of what can happen: the language tries to be everything to everybody, but that goal is very elusive. And anyway, you split your efforts over too large an audience.

Gradients look nice in theory, but in practice fall down in usability. If code is written in a dynamic language, it invariably makes use of the full flexibility of a dynamic type system; you couldn't go back and add annotations without completely rewriting your code. This is related to the extreme difficulty of describing many JSON sites in a reasonable static type system (like with F# type providers).

As an aside, there is very good reason to separate your rapid prototyping from your production work. Rapid prototyping is meant to explore many ideas rapidly, and it is very hard to do that if you become attached to some piece of code. You really want your code to be disposable, in the sense that "you won't feel much about throwing it away." The prototyper/tinkerer must be able to write code without its long-term lifetime in the back of their head. Besides, once you have settled on a design, you have to move to a production mindset and "do it right." Language boundaries are ideal for changes in mindset.

Gradients of the form David

Gradients of the form David describes sound good in theory and don't yet exist in practice. I think your argument has holes. You're assuming that production code will always require a different mindset, rather than just additional attention to detail that could come later. Or at least I'd like to see that part of the argument fleshed out. As someone else working on such a language, I hope your argument has holes anyway.

I'm speaking from experience

I'm speaking from experience working in a design studio. We had to go through special/extreme pains to prevent the dev teams from taking our prototypes even as product starting points, which would have been disastrous.

And that isn't even touching the fact that these languages have different audiences. What is the point of having a scalable language used by a Flash-level programmer who gets encoding some logic but doesn't begin to get types? I always wonder why many language designers see languages through the CS PhD lens; well, I guess it's because we think of ourselves as primary users.

I would still like to see more work on the gradient approach. I have a feeling about what will happen, but hey...I could be wrong. But if you build such a language, you can't call it done after a theoretical analysis. You'll definitely need to do some kind of usability study to see if the gradient approach works in practice (or what is needed for it to work in practice).

I always wonder why many

I always wonder why many language designers see languages through the CS PhD lens

I've started to separate language design from logic/analysis/etc., which, in hindsight, is obvious. The latter often informs the former in huge ways, but not necessarily directly nor immediately. When a type theorist etc. claims to be doing language design, I internally loosen the idea to a more umbrella term and enjoy the talk, and, after, go back to reading sociology etc. studies :) One fun relevant (if deceptively coarse) distinction is invention vs. innovation. A language designer need not invent and it seems that our ability to invent has far eclipsed our ability to innovate!

Bob Harper's popular mathematical view of dynamic languages as simply being (awkward) unityped static languages is a great example of this disconnect. Only logically true, and thereby posing a fundamental and insufficiently addressed problem for language designers interested in them.** (Hint: building logically more expressive type systems, in this interpretation, would not be a principled way to start poking at this basic problem in the design of whatever is your favorite static system.)

**: cognizant of this for the JavaScript space, I played with a security policy layer that only had two types and left the general code unityped ;-)

In practice

Curl language is a decent (albeit, proprietary and UI-specific) example of supporting a 'gradient'.

[Matt and I have debated a lot about how we should achieve this gradient, but we both agree we should achieve it.]

Consistent Experience

the language needs to define a focused consistent experience to guide programmers

Developers would get a more consistent experience by using the same language - same syntax and semantics and libraries and IDE - for both the soft layers and the hard layers, for both the small projects and the large ones. Polyglot programming does not create a consistent experience.

invariably makes use of the full flexibility of a dynamic type system

I did not suggest using a dynamic type-system. I would not recommend at least two features common in dynamic typing: implicit coercion of data, and internal observation of abstraction failures (e.g. via exception or 'does not understand'). Potential for the syntax to express 'undefined' behavior is essential if you're going to benefit from blocking it.

I suggested dependent typing. For small projects, dependent typing gives you the flexibility you need (the analysis isn't difficult for a smaller program). As the project scales up, developers will need to annotate the interfaces between components with relatively rigid types to simplify inference.

separate your rapid prototyping from your production work

Rapid prototyping isn't something done just once. It is also appropriate for exploring new features for maturing or late-cycle projects, or while involved in a rapid edit-test cycle. If your language doesn't support the rapid prototyping, you'll eventually have the brilliant idea to use another language - e.g. embedding Lua or JavaScript into your application. That will make a developer's experience much less consistent, but gets the job done.

I don't believe getting too attached to code is a likely problem. Due to imperfect foresight and incomplete software requirements, larger applications will eventually receive overhauls. That's the time to 'do it right', or at least account for known issues. Many developers are far too eager to start over, regardless of language.

Developers would get a more

Developers would get a more consistent experience by using the same language - same syntax and semantics and libraries and IDE - for both the soft layers and the hard layers, for both the small projects and the large ones. Polyglot programming does not create a consistent experience.

You are stating this without any evidence. The difference between Flash Studio or Expression Blend and Visual Studio is very large, and they very much emphasize different roles. To make everyone happy, you will need one big IDE and one big language, and with all the features and complexity, you will probably wind up making no one happy.

Rapid prototyping isn't something done just once. It is also appropriate for exploring new features for maturing or late-cycle projects, or while involved in a rapid edit-test cycle. If your language doesn't support the rapid prototyping, you'll eventually have the brilliant idea to use another language - e.g. embedding Lua or JavaScript into your application. That will make a developer's experience much less consistent, but gets the job done.

This isn't how it works. The prototyping that a developer does is very different from prototyping for design purposes. The developer is prototyping technology and architecture, the design of what the software is already pretty much set at that point (or should be). Again, how and why we program is very diverse.

I don't believe getting too attached to code is a likely problem. Due to imperfect foresight and incomplete software requirements, larger applications will eventually receive overhauls. That's the time to 'do it right', or at least account for known issues. Many developers are far too eager to start over, regardless of language.

Again, you assume that we only code applications. I would be horrified if any part of a concept car presented at a car show wound up in the real car I bought.

Let's think about this...

This isn't how it works. The prototyping that a developer does is very different from prototyping for design purposes. The developer is prototyping technology and architecture, the design of what the software is already pretty much set at that point (or should be). Again, how and why we program is very diverse.

Here are some shortcuts that I associate with rapid prototyping (not comprehensive or in particular order):

1. Poor commenting, variable naming, adherence to standards

2. Duplication / poor factoring or organization of the code

3. Simplifying assumptions are made, validation / error checking / corner cases are ignored

4. Abstractions are violated - at some point it's discovered that a chosen abstraction was wrong, but rather than fix it properly, "hacks" are used to shuffle data to where it's needed from where it's conveniently gathered, dynamic type checks are used to branch behavior, etc.

What are some other categories that I'm missing?

Issues 1 & 2 will probably always just require clean-up, but it's not too much of an issue to clean them up after other code has been written that builds upon the bad code. Simple refactoring tools can help here (safely renaming variables, etc.). If these were the only kinds of issues, I don't think restarting from scratch would make any sense. Further, I'm skeptical that it's a good idea to even introduce this kind of bad code.

Issues like 3 & 4 are the bigger problem, but I think that there are many things a language can do to help. Not forcing an organization of data and control flow into an object hierarchy before the problem is even understood is a good place to start :). Languages can be compared on the extent to which they let you write correct and finalized code before you understand certain other aspects, such as the architecture. How much insulating boilerplate does the language require to achieve this architecture independence? My guess is that a good gradient language will be one where the answer is 'little or none'.

So at its heart, a key issue is separation of concerns, e.g. keeping the higher level architectural decisions from affecting every piece of code you wrote before you understood the architecture. I think separating concerns is hard and requires a good understanding of the concerns being separated, so I'm skeptical that anything like aspect oriented programming can magically separate general concerns in an effective way. But I think that by choosing the right abstractions for your language/libraries, you can untangle an important set of concerns in a way that enables rapidly writing certain types of code in a non-throw away fashion.

Shortcuts with "rapid prototyping"

Of course, when I think of "hacking", I always think of Rasmus Lerdorf's quotes about programming:

  • I really don't like programming. I built this tool to program less so that I could just reuse code.
  • PHP is about as exciting as your toothbrush. You use it every day, it does the job, it is a simple tool, so what? Who would want to read about toothbrushes?
  • I was really, really bad at writing parsers. I still am really bad at writing parsers. We have things like protected properties. We have abstract methods. We have all this stuff that your computer science teacher told you you should be using. I don't care about this crap at all.
  • There are people who actually like programming. I don't understand why they like programming.
  • I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say "yeah it works but you're leaking memory everywhere. Perhaps we should fix that." I'll just restart apache every 10 requests.

-- Rasmus Lerdorf, creator of PHP

Yeah, that's why.

I think it says more about PHP than about "hacking".

If it gets the job done

If it gets the job done cheaply, it shouldn't be discounted or laughed at just because it's not elegant.

When it gets the job done

When it gets the job done cheaply but encourages easily exploitable designs that disclose all your customer's credit card information, I think it should absolutely be discounted and laughed at.

And this is why PL research

And this is why PL research has very little impact in industry. Our lofty idealism absolutely crumbles when it meets reality.

Edit: my point is that we should look at all languages in use seriously, in spite of how ridiculous they might seem to us. Why do users use and continue to use PHP despite its flaws? There is more going on here than just good marketing.

PHP is used because it is

PHP is used because it is widely deployed on cheap hosts, has 50,000 easy tutorials to get you started, and has large software packages written in it that require simple customizations using PHP (Drupal, Joomla, etc.). PHP is a broadly supported, flexible tool for building web sites.

What PHP isn't, is a good tool, and the data demonstrates this fact; elegance has nothing to do with it. I think there is value in making this distinction. PHP could possibly be made into a good tool, and research in this direction should certainly be welcomed, not derided; but PHP as it currently stands should be.

There is nothing in PHP that couldn't easily be done in just about any other language. The realities that crush lofty idealism are these:

1. academics don't have time to put in the effort to build and support industry tools.

2. industry developers don't generally read academic research when building their tools, they rely primarily on their experience.

The growth of Scala, F#, Haskell and OCaml in the web domain is the most promising bridge I've seen between these two worlds. I can only hope these influences will filter down into other widely used languages, like Ruby and PHP, like it did for C#.

Why do users use and

Why do users use and continue to use PHP despite its flaws? There is more going on here than just good marketing.

To a great extent, I think it is marketing, even if not intentional (so 'sociology', really). More importantly, I don't think that's a bad thing and, indeed, if this is the case, not understanding the process has some important implications in both the impact of current research and the direction/methodology of current research.

For one concrete scenario for the sociology of PL, I've been reading up on adoption factors for the past ~2 years now. There are so many influences that, to make progress, I've separated a socially acceptable language from a technically acceptable one. A language designer might take this basic viewpoint and try to augment our favorite language (whether it's safe, clean, fast, scalable, ...) with socially acceptable features (say there's something about strict vs. lax whitespace syntactic restrictions). However, I think 1) there are more extrinsic properties at play and 2) process, not just features, matters.

For extrinsic properties, a common critique of static approaches (irrespective of validity) is error messages: we want an example of a failure-causing input, which is reasonably independent of the language. As another property, we can view adoption as related to social niche: just as music preferences correspond to social networks (you can only have so many 'favorite' genres due to time restrictions and pick one related to those closest around you), so would I expect language usage to be impacted (an accountant or musician doing VB/excel or Max/MSP, or an academic dabbling all over but still only primarily using few). The language and its features matter, but only in the context of the external social context: e.g., is it for a social group with no niche language? (I believe that there are many useful PL features that could be derived by analyzing social functions, but that's missing the point).

If we agree that languages are social innovations, then there's precedent for focusing on the *process* of creating and diffusing them. Languages are not mathematical objects that we simply seek out and reify with executable implementations. In the publish-or-perish academic model, I would expect research languages to match interests of the academic community: there is little time for anything else. In contrast, for a language of interest to a, say, developer community, a feedback process (including reinvention by the community members!) is more traditional/successful in most fields that have been examined. To view PL as being somehow special and distinct would actually be a surprise of scientific merit :)

Why people use flawed languages, and what we should do about it, opens up a pretty big can of worms :)

We have nested a bit too far

We have nested a bit too far for LtU, but I would guess that we need to understand human thinking (psychology) more so than sociology, although the two topics are definitely strongly related. I do not think that the emergence of social preferences is mostly accidental; there are real reasons why feature A emerges and remains popular while academic feature B flounders.

Perhaps PL differs from other engineering disciplines in that we are trying to reify human thought in terms that a computer can understand. Notations are typically cultural; e.g., ways of encoding music in print, and of course there are multiple mathematical notations preferred by different communities and efficient at expressing different kinds of theorems and proofs. Perhaps we are a bit unique in that there are hundreds of ways of encoding programs.

The social aspect of a PL is interesting, though I think this is probably built around a killer app (e.g., Rails) rather than an artifact of the language itself. I'm wondering how a language with direct social features would be accepted, though, perhaps with the direct support for easy code sharing and discovery that I've mentioned in the past. We can definitely do some crazy things in this area.

Tool time

Perhaps we are a bit unique in that there are hundreds of ways of encoding programs.

Renovating a house has made me understand the importance of the saying "the right tool for the job". I'd guess that the number of different tools runs into the hundreds there, too. (FWIW.)

The fallacy of the right tool

Is it true that French is better at expressing romance while German is best for talking to your dog?* This is complete BS, obviously; languages are fairly consistent in what can be expressed well. Perhaps some languages have 100 different words for ice, but this is more an aspect of culture (we can create new ice words for English too, if we need them).

Most programming languages are fairly similar in what they are designed to express best, so they compete rather directly. They are not those kinds of tools; rather, the abstractions in the languages are the tools.

DSLs are sort of the exception, but even then the core of a DSL is fairly portable and should be considered (and often is) a more general language.

* A joke told to me by a German in the past, no offence intended.

Edit: here is a ref to the blogpost that originally pointed out this fallacy: http://schneide.wordpress.com/2009/12/21/the-fallacy-of-the-right-tool/

inConsistent Experience

You are stating [that developers would get a more consistent experience] without any evidence

Nonetheless, this is what my experience and logic tell me. I have plenty of experience with the inconsistencies and discontinuity spikes of polyglot programming.

the difference between Flash Studio or Expression Blend and Visual Studio is very large, and they very much emphasize different roles

Sure. It is only natural that languages statically designed to fill distinct roles will fail to bridge multiple roles. That doesn't make a very good argument against designing a language that can bridge different roles.

you will need one big IDE and one big language

I would expect the opposite: a small, simple language, with a few fine-grained primitives.

Consider how many responsibilities a 'class' has: namespace, object configuration, object construction, interface, ADT if nominal typing, a unit of persistence, unit of synchronization, unit of allocation and collection. Consider how many responsibilities a 'procedure call' has: send, synchronization, reply, time-step, failure handling, interrupt handling, resource management.

These coarse-grained 'packages' of responsibilities are what allow your language design to easily be statically optimized to its purpose. However, they're also a significant source of semantic noise and create problems when working across more domains... e.g. procedures are quite poor for distributed programming unless you add even more features just for that purpose (such as promise pipelining and batch-processing).

By simplifying the language, refining the responsibility-sets to much smaller packages, the language becomes far more flexible and able to effectively serve a wider range of purposes. The main cost is that you need services or libraries at a finer granularity, and your optimizer must deal effectively with a wider variety of patterns.

The prototyping that a developer does is very different from prototyping for design purposes.

Prototyping involves exploration of ideas, refinement of requirements, and discovery of design bugs. The design process is iterative, and is neither naturally nor ideally separate from prototype development. Tweak, test, tune, repeat.

you assume that we only code applications

I apologize for using the word too broadly. My statement about applications receiving overhauls applies to software in general - whether it be library, service, framework, or user application.

I would be horrified if any part of a concept car presented at a car show wound up in the real car I bought.

If it meets requirements, why would it bother you?

To put things in a maybe

To put things in a maybe provocative way, I find the "bad" features of OO more interesting than the "good" ones, since finding elegant proof principles for a "bad" feature turns it "good".

This is no secret. The "bad" features of OO are everything at the programming level, rather than the analysis & design level. For example, OOP may require passing data to resolve nonfunctional requirements, whereas OOA/D does not even concern nonfunctional requirements and so the problem is skipped at that level of abstraction.

concrete metaphors Is this

concrete metaphors

Is this not a contradiction in terms?

No more so than concrete

No more so than concrete art.

Or "abstract syntax"

I had a teacher who said that "abstract syntax" was an oxymoron (i.e., a contradiction in terms). In some sense he was right, of course, but in another sense it's just a position in a spectrum, between concrete syntax and (more abstract) semantics.

Well, maybe not "just": the whole spectrum is full of interesting phenomena, so there's more to the story than "just" that.

"An object is quite

"An object is quite simply..." I welcome folks who think along these lines to examine the failure of Carnap's (far more fine grained and much more rigorous than any "OO design") reductionist language project, and then to read, say, Quine's "Ontological Relativity" paper which is among the finer nails in the coffin for such projects.

Bad, or no, philosophy of language is one thing, but as a basis for bad computer science it's poison. [I punt on the whole issue by making absolutely no demands at all that formal languages bear any necessary relationships to what we commonly call "natural language" - pedagogical goals, language usability and the like be damned.]

- S.

Awesome! curriculum

Debates aside, this looks to me like an amazingly good curriculum. The functional/imperative split is a really clear-headed way to divide up the techniques of reasoning and proof. The synthesis of the two in a data structures and algorithms course, treating parallel as the general case, is an excellent practicum for that foundation, turning the abstractions into a genuine toolbox. Presumably the notational and stylistic conventions they establish will permeate the larger curriculum (replacing the ad hoc, inconsistent, all-over-the-map notational and rhetorical conventions that are so common).

I think they are missing (so far, but knowing CMU, either not really or soon to be fixed) a fundamentals-of-software-engineering component here. I don't just mean the usual thing of dividing students into groups and having them do some project. I mean something that looks at case studies of organizational modes in the real world and analyzes them partly from a business perspective and partly from these CS foundational perspectives (e.g., how do your choices of data structure relate to your business model? Compare Google, Facebook, and typical patterns in cellular telephony).

I wouldn't worry too much about OO foo. The controversy over that point seems really off the mark. This sounds like a curriculum on the basis of which students who do well will be able to master OO concepts and literature very easily and more thoroughly than many for whom it is presented as a fundamental foundational concept.

I am not really worried

I am not really worried about the curriculum; starting from FP is a valid approach. Nor did I start off with OOP; rather, we started with procedural/imperative. If contemporary FP language implementations didn't suck so badly at parallel, it might even be considered a pragmatic choice. If these kids want to learn how to write fast parallel code, they'll have to go procedural eventually (C++ or CUDA).

If contemporary FP language

If contemporary FP language implementations didn't suck so badly at parallel, it might even be considered a pragmatic choice. If these kids want to learn how to write fast parallel code, they'll have to go procedural eventually (C++ or CUDA).

I do not think that this is true. Functional languages usually express computation in ways that are more easily parallelizable, by allowing people to avoid patterns that place too much reliance on mutable state.

A lot of functional or mostly-functional languages do have currently dominant implementations that don't take much advantage of parallelism, but that is a problem with those implementations rather than a flaw in the language designs; it is neither a necessarily permanent condition nor a universal one.

There are some really excellent functional or mostly-functional languages that have implementations that take good advantage of parallelism - such as Clojure, for example.
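
To make that concrete, here is a minimal Scala sketch (the function and data are made up, and it assumes the scala-parallel-collections module is on the classpath). Because the mapped function is pure and the data immutable, the same expression can be evaluated sequentially or in parallel without changing its meaning:

// A pure function mapped over an immutable collection. With no shared
// mutable state, the runtime may evaluate the elements in any order, on
// any number of cores, and still produce the same result.
import scala.collection.parallel.CollectionConverters._  // .par (a separate module in Scala 2.13+)

def square(n: Int): Long = n.toLong * n                  // pure: depends only on n

val xs         = Vector.tabulate(1000000)(identity)
val sequential = xs.map(square)
val parallel   = xs.par.map(square).seq                  // same result, computed in parallel

assert(sequential == parallel)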

Ray

The fact that an

The fact that an implementation for a language design cannot take advantage of parallelism is a problem with the language design.

Clojure focuses more on concurrency (doing multiple things at once) than on parallelism (doing a computation faster). Where FP has its most influence in parallelism is in map-reduce style systems (which, ironically enough, usually don't leverage FP languages).

Not disagreeing, but we

Not disagreeing, but we don't need to leave "FP languages" to leverage procedural parallelism. For Haskell, see:

What Haskell teaches most of all is metaprogramming.

Sure! FP languages can be

Sure! FP languages can be used for parallelism, and especially excel when meta-programming is used to generate efficient code and so their traditional overhead can be ignored (this is a technique I'm experienced with, and I got many of my ideas from Conal).

But I'm not sure parallel is big enough to bring FP languages in the mainstream. Rather, concepts from FP are peeled off into frameworks for other languages (MapReduce), or munged up beyond recognition (CUDA).

Your first sentence is ambiguous.

The fact that an implementation for a language design cannot take advantage of parallelism is a problem with the language design.

I disagree with the above statement as written, but that may be only because I consider concurrency to be as useful a type of parallelism as any, or maybe because I'm otherwise taking what you wrote too darn literally. There is a possibility that we may actually be in complete agreement.

Here are three different closely related statements that I agree completely with.

If a particular implementation for a language design cannot take advantage of parallelism, that's a problem with that implementation.

If no possible implementation for a language design can ever take advantage of parallelism, that's a problem with the language design.

If the dominant implementation of a particular language design cannot take advantage of parallelism, then in addition to being a problem with the implementation, that creates a problem for the language design by making its acceptance, use, and uptake unlikely.

Are these what you meant? Or some of them?

Ray

Concurrency is not a type of

Concurrency is not a type of parallelism. Support for concurrency in a language does not imply parallel computation - e.g. we can switch between threads on a single processor. And the converse is also true: support for parallelism does not imply concurrent behavior - e.g. we can parallelize large matrix computations without introducing any observation of time or non-local sources of change.

In some ways, concurrency and parallelism are contradictory. We observe concurrency to the degree we share and coordinate resources - interaction between agents. We achieve parallelism to the extent computation resources are independent - separation of agents.
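
A rough Scala sketch of the two notions (the tasks and data are made up for illustration): the first half is concurrency without parallelism, two logical tasks multiplexed onto one thread; the second half is parallelism without concurrency, a pure computation split across cores with nothing able to observe interleaving.

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import scala.collection.parallel.CollectionConverters._   // assumes scala-parallel-collections

// Concurrency without parallelism: two tasks coordinate for a single
// thread, so they interleave but never run at the same instant.
val oneThread: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())

val pollNetwork = Future { (1 to 1000).sum }(oneThread)   // stand-in for an I/O task
val refreshUi   = Future { (1 to 1000).max }(oneThread)   // stand-in for a UI task
// (a real program would await these and shut the executor down)

// Parallelism without concurrency: independent pieces of a pure matrix-style
// computation can run on separate cores, but no piece observes another,
// so the program never sees any interleaving or non-local change.
val rows   = Vector.fill(1000)(Vector.tabulate(1000)(_.toDouble))
val scaled = rows.par.map(row => row.map(_ * 2.0))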

The definition of concurrency and parallelism

A quick search for "concurrency vs. parallelism" produces links:

A Stack Overflow thread, whose top-voted answer cites some Sun Multiprogramming Guide claiming that concurrency is a subset of parallelism. This point of view considers processes to be concurrent if they are in progress over the same time interval and parallel if they are both running at any given instant.

Another link that comes up is this haskellwiki entry that takes a position similar to yours, though possibly subtly different. I can't tell. But the last sentence of the link is "Other authors may use the terms 'parallelism' and 'concurrency' in different ways or do not distinguish them at all".

The point being that I think you should acknowledge the possible ambiguity, even if you endorse one set of definitions. And anyway, arguments over definitions aren't interesting.

Finally, regarding the substance of your post, concurrency is also sometimes supportive of parallelism: concurrent semantics often enable parallel implementations, by letting effects interleave.

The blog

Since this thread is about Bob Harper's blog, let's see what he wrote about parallelism vs concurrency. :)

Is concurrency ever useful

Is concurrency ever useful without parallelism? Even in cases like asynchronous disk I/O, the point is to let the disk work in parallel with the CPU, instead of having the CPU wait for it.

Concurrency without Parallelism

Concurrency is useful even without parallelism. Time sharing concurrency is what allowed early computers to be multi-user, multi-task and interactive. Concurrency is a way of organizing and modularizing computation. And concurrency supports local reasoning about divergence properties.

I would note that many FRP, synchronous reactive, temporal logic, and concurrent constraint implementations provide excellent examples of effectively leveraging concurrency without parallelism.

Meh

Time sharing concurrency is only useful because the users are parallel. Well, the users and the rest of the external world.

Users are concurrent. But,

Users are concurrent. But, as we are constrained to make observations within this computation we call the universe, we could not say whether God's computer exhibits parallelism...

Who told you God has only

Who told you God has only one computer? :p

Anyway, besides the silly-obvious question of what OS(es) He's running (and what for), I'm curious ... whether and how He figured out something better than either (resp. both) OO *or* (resp. *and*) FP. ;)

(just teasing; feel free to delete if inappropriate / OT)

Perl? or Lisp?

Perl? or Lisp?

Even though for some other

Even though under some other notion of time users might not be parallel, under the notion of time within our universe they are.

True. I thought about that

True. I thought about that just after I posted.

Definitions, definitions

I've noticed that people with a strong background in FP tend to see parallelism and concurrency as the two distinct things you describe, whereas more imperative programmers don't see the distinction anywhere nearly so strongly. Which isn't surprising: in an imperative setting most parallelism is also observably concurrent, and on modern multicore/networked machines most concurrency is parallel.

Well, there is such a thing as functional OO.

It is entirely reasonable to have an object oriented language where there are no mutation methods. You can just have various kinds of constructors.

For example, if I have a list and I call "cons" with that list and a new element, it returns a new list. That new list may share storage with the first list under the hood, but in the absence of mutators, you don't need to care.

You can do this with any kind of complex data modeling type, too -- not just lists. You can have "wheel" that inherits from "thing" and "tire" that inherits from "wheel", and so on, all with dozens of data attributes - and it will still work functionally.

So, yeah, I don't buy the idea that OO is anti-functional or anti-modular. I'd be willing to discuss the idea that most people are doing it wrong and most languages we've developed so far encourage doing it wrong, but there's nothing about OO or inheritance as ideas that militates against modularity or pure-functional computing.
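
A minimal Scala sketch of what that might look like (the class names are mine): an object hierarchy with inheritance and no mutators, where "cons" returns a new list that shares its tail with the old one. Since nothing can be mutated, the sharing is unobservable.

// An immutable, object-oriented list: constructors only, no mutation methods.
sealed abstract class MyList[+A] {
  def cons[B >: A](head: B): MyList[B] = Cons(head, this)  // returns a NEW list
  def isEmpty: Boolean
}

case object Empty extends MyList[Nothing] {
  def isEmpty = true
}

final case class Cons[+A](head: A, tail: MyList[A]) extends MyList[A] {
  def isEmpty = false
}

val xs = Empty.cons(3).cons(2).cons(1)   // Cons(1, Cons(2, Cons(3, Empty)))
val ys = xs.cons(0)                      // shares all of xs under the hood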

Ray

Bah, Humpty Dumpty Definitions

I don't buy the idea that OO is anti-functional or anti-modular. I'd be willing to discuss the idea that most people are doing it wrong and most languages we've developed so far encourage doing it wrong

OO is defined by how it is taught, the design patterns and idioms, how most languages do it, etc. And the ability to encapsulate access to resources (including state) is a big part of OO analysis and design. Rather than making the extraordinary claim that so many other people are doing it wrong, or that you'd somehow succeed where they did not, make the more ordinary claim: that there is something wrong with OO.

The argument that functional OO is OO based on some specific subset of features (inheritance, for example) is a dubious one. The real question, I think, is how effectively you can model the various OO design patterns. If your programming model needs new design patterns, it's a new paradigm and deserves a new name.

Almost missed that. OO is

Almost missed that.

OO is defined by how it is taught, the design patterns and idioms, how most languages do it, etc.

With which I largely agree.

Indeed, besides its unsatisfactorily naive way -- though somehow strangely "intuitive" and quite accessible -- of "devising" programs (*), the second biggest issue for OO has always been, it seems, and still is, de facto, the sheer number of variations in people's views when they attempt to define it precisely.

Compare this to Functional Programming, which comes much closer to being as much a practice as it is a theoretical activity (around functions and calculi, duh). YMMV.

And I'm from the mainstream OO culture writing this.

Now, relevant to this:

The real question, I think, is how effectively you can model the various OO design patterns.

Bingo bis repetita.

I have often wished that this type of work, for instance (link #2, Karine Arnout's thesis), had received more attention from the OO community, as I believe it to be the only right way to evaluate languages with more objectivity regarding their suitability for their claimed "paradigm" (to echo your point).

I mean, assuming one cares even a minimum about improving the situation for OO, given the problematic fog surrounding it...

Anyway, IMO, people could then have saved some valuable time otherwise spent on the fun, but not always effective, debating, with or without FP folks around. (I'm not alluding to the discussions on LtU specifically, here. It's a general remark re: ever since the first OO languages started to compete with each other.)

And time is also energy, somehow, not just money. ;)

Energy is actually precious in this universe! We have to trade matter for it, let's not forget. :)

(*) if only because it tries, first, to model its programs' input and output with vastly approximate and rather arbitrarily, artificially constructed, abstract "snapshots" of "real world objects" (in any sense, arbitrarily chosen, too) at a time T -- no matter what or who provides the input -- and OO reasons much less, or much later, about the computation itself, or about whether its models can fit in a logic, the way domains do for functions. It is an approach that easily seems rather weird from a strictly logical point of view... (especially to the FP camp, unsurprisingly). And that is even in the presence of the so-called "contracts" that OO must at least pretend to care about eventually, but does not even always actually commit to, etc...

Ditto

I wish I had actual content to add, but since I don't, I'll content myself with observing that this post completely nails my own feelings about OO as a multi-decade practitioner cum (overly) passionate convert to (impure) FP, not just for theoretical reasons, but ultimately practical ones.

belated question

Only coincidence makes me chime in. I have a question I'd be interested in seeing debated pro and con -- trying to sell it as both true and false, looking for the fuzzy, partially-true value after summing over all the cases.

Q: Is pattern-based (or type-based) dispatch in SML or Erlang style functional or object-oriented, and is it ever both in any sense?

(Some questions you can answer yes if you're willing to squint and simulate, or answer no if you insist cost and/or universality is definitive.)

I was designing and coding an immutable distributed message thingy in C (to handle a small distributed conversation you don't need to know about) and wanted to put parts of multiple stages at one endpoint right next to each other. And that's when I noticed I was imagining branch structure in an SML method, or Erlang branching based on message type. When I did several searches, picking different words to describe that dispatch style, the result was interesting and prompts my question.

Scala supports both very

Scala supports both styles very well: "extrinsic" dispatch via case pattern matching (a la SML) and "intrinsic" dispatch via methods in traits. It used to be that extrinsic dispatch was seen as the best solution to the expression problem, but multiple inheritance in traits actually solves this problem very well also. The only difference then is where you put the code (keeping in mind that you can cross-cut class hierarchies using traits). Case matching makes sense when you have a few cases, or when the dispatch is very intertwined with some local logic and is unlikely to be repeated (and therefore doesn't need to be a "method"). Use methods when you want an abstraction and it makes sense to distribute the cases into various types. I still cringe when I remember the mega-case matches in the Scala compiler, although I'm not sure whether they're "right" or "wrong."

In my opinion, case-based matching can definitely be object-oriented when you are matching on sub-type relationships, but it's really a gradient, where the OO-ness depends on the OO-ness of your abstractions.
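
A toy sketch of the contrast (my own example, not from the Scala compiler):

sealed trait Shape
final case class Circle(r: Double)          extends Shape
final case class Rect(w: Double, h: Double) extends Shape

// "Extrinsic" dispatch: the logic lives at the call site, SML-style.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

// "Intrinsic" dispatch: the logic is distributed into the types via a trait.
trait HasArea { def area: Double }
final case class Circle2(r: Double) extends HasArea {
  def area: Double = math.Pi * r * r
}
final case class Rect2(w: Double, h: Double) extends HasArea {
  def area: Double = w * h
}

Adding a new operation is a one-function change in the extrinsic style, while adding a new variant is a one-class change in the intrinsic style -- exactly the expression-problem trade-off mentioned above.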

I definitely privilege

I definitely favor giving an essentially functional interpretation (or seeing it as informing a design or implementation rationale) to this sort of thing.

But since I am no Functional Programming expert, coming only with my "tainted" strongly OO-biased background, I won't elaborate much beyond this:

I think it'll be wiser and more productive for you to think about it in FP terms than in OO ones, because I don't know of many reasons to do pattern matching other than, either:

1. checking the membership of "values" in the "type" (*) that the patterns contribute to delimit (and, from that membership inference, forking off to do something else)

or

2. pattern matching to map some "input" onto some "output", e.g., via a form of rewriting, or further function applications down the stack, etc.

(#2 above being, strictly speaking, just a specific, but pretty frequent tactic for the more general dispatch principle in #1)

In either case, it's not far-fetched, I think, to consider that you're precisely being aware of having a domain, a map, and a co-domain: all three notions at the core of FP, which will "force you" to consider more interesting and/or critical properties, such as: can I tell if it's partial or total? If rewritten, does it always terminate for the (input) domain I'm interested in? What "thing" do my patterns, when defined this or that way, actually recognize to begin with? Is it deterministic? Could it profit from laziness, or can I afford to stay eager/strict? Etc., etc.

On the other hand, I suspect that if you try to interpret it exclusively in OO terms sort of "forced upon it", you might eventually end up overlooking important aspects of the specific form of computation itself going on there (and that's what it's all about in the end), by being too busy and distracted with artificial considerations induced by the OO bias (no matter what the language is).

More concretely: you may have a hard time converging, to your own satisfaction, on the OO types and sub-types that your language (C++?) allows you to devise and combine; you'll start asking yourself which parts of the values' state should or must be stripped off, where and why to put which "contracts" (assuming that would make any sense), etc., etc.

Pattern matching and/or term rewriting are powerful design or implementation tools for many kinds of outer problems, and personally I conceptualize them fundamentally as pure (thus side-effect-free) functions/maps from one input "language" to an output one.
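
As a toy illustration of that view (my own sketch, not tied to any particular language mentioned above): a tiny input "language" of arithmetic terms and a pure rewrite over it, written so the interesting questions (totality, termination, determinism) are easy to answer.

// The input "language": arithmetic terms.
sealed trait Term
final case class Lit(n: Int)           extends Term
final case class Add(l: Term, r: Term) extends Term
final case class Mul(l: Term, r: Term) extends Term

// A pure, single-pass rewrite. The match is exhaustive over the sealed
// hierarchy (total on its domain), deterministic, and terminating, since
// every recursive call is on a strictly smaller term.
def simplify(t: Term): Term = t match {
  case Add(Lit(0), r) => simplify(r)
  case Add(l, Lit(0)) => simplify(l)
  case Mul(Lit(0), _) => Lit(0)
  case Mul(_, Lit(0)) => Lit(0)
  case Mul(Lit(1), r) => simplify(r)
  case Mul(l, Lit(1)) => simplify(l)
  case Add(l, r)      => Add(simplify(l), simplify(r))
  case Mul(l, r)      => Mul(simplify(l), simplify(r))
  case lit: Lit       => lit
}

simplify(Add(Lit(0), Mul(Lit(1), Lit(42))))   // ==> Lit(42)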

(It's a more general mental state I've been in for 3 or so years now, btw. I don't even think mostly in terms of the types and type systems of the host languages I'm using anymore, be they OO or of some other hybrid kind. They're almost boring. Instead, I try to recognize the hidden, unnoticed languages, phrases, and data flows that make up the essence of the applications or libraries I'm involved in composing. I don't think "values", "types", "program" as much as I think app/domain-specific "phrases", "languages", and interpretations/rewritings; each time crossing my fingers that I'll be able to capture them, against the users' specs and requirements, through pure, stateless linguistic transformations.)

So, anyway, giving them an OO interpretation, in today's OO languages, is just too much effort and/or too dependent on the language, and prone to handcuffing yourself with the arbitrary schemes of thought that follow, be it in a design context or an implementation one.

Now, I have no problem using common OO thinking to model the state machine inside, e.g., a VoIP softphone, or parts of a custom UI dealing with 3rd-party libraries, applied adapter patterns, or other whatnots, etc.

But I leave it, a priori/by default (as much as I can), out of the way of my use cases for pattern matching-driven techniques.

'HTH.

EDIT
(*) the abstract one, that is

(Oh, and yes: not all, but some OO languages are FP-friendly enough, to some extent anyway, so I hope it's very clear that I never claimed anywhere above that those techniques can't be applied while using, say, a "mostly OO" language. I think C++ is one of them.)

Meh. There are too many names flying around already.

I get kinda sick of buzzword bingo. Somebody makes a tiny variation on something well-known and names it something different. Meanwhile three other people have also made the same variation independently and named it three other things. And in at least a couple of those four cases, they'll have picked names other people are already using for different tiny variations. Too many names for things not different enough to need them ultimately retards progress by promoting confusion, IMO.

What I described is usually called functional Object Oriented programming. I'm cool with that. It's functional programming. It's also OO programming. Both are well-known terms being used correctly.

Ray

The confusion you describe

The confusion you describe is created when people "pick names other people are using for different tiny variations", or in other words when they fail to use a new name for new variations. You promote confusion, too, by trying to call "OO" a paradigm that does not express OO design patterns.

Of course, it's also a problem to use a new name for a minor variation - one that admits all the same design patterns as the original. If an "OO for Dummies" education applies effectively, then you can reasonably call it OO.

"Both are well-known terms being used correctly." I suppose you also call your car a "horseless carriage" with that justification. Darn those retarded kids for using new names for old concepts. Anyhow, I disagree with your claim that `OO` is being used correctly. Functional Object Oriented may be inspired by OO, but that doesn't mean it's OO.