A Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented"

I originally posted proposed definitions and a rationale about a week ago, but I've revised them based on feedback, and I think I'm ready for the PL community to review the proposal.
An object is a first-class, dynamically dispatched behavior. A behavior is a collection of named operations that can be invoked by clients where the operations may share additional hidden details. Dynamic dispatch means that different objects can implement the same operation name(s) in different ways, so the specific operation to be invoked must come from the object identified in the client's request. First class means that objects have the same capabilities as other kinds of values, including being passed to operations or returned as the result of an operation.

A language or system is object oriented if it supports the dynamic creation and use of objects. Support means that objects are easy to define and use. It is possible to encode objects in C or Haskell, but an encoding is not support.

Much Ado

No offense intended, but it seems like this exposition is a lot of informal definitions and a little bit of history. TaPL's treatment of existentials is only briefly mentioned. In fact, the dismissal of existentials because "they can be used for other things" is a total non-sequitur. Similarly, the paper making a distinction between objects and ADTs could make its point much more concisely.

The fact is, existing type theory can model "object oriented" features quite well, as compositions of more primitive ideas (like existentials + recursive types + record types). If you're looking for a definition, that's the place to go (and if you deem existing definitions insufficient, you should offer something with a logical theory that's at least as rigorous).

Yes, we need informal definitions

You are missing the point. I am not proposing a new formal definition; I am proposing standardized vocabulary that enables people to talk about objects, especially people who have not studied PL theory. The formal definitions are clear (mostly), but our informal descriptions are weak.

By the way, existentials are not needed to formalize objects, as my previous work and chapter 18 of TaPL demonstrate.

Update: several people have also asked me for definitions, because the OO vs ADT paper never really comes out and provides a simple definition.

The necessity of existentials

Or "what does bounded quantification mean?" or "what does partial application mean?"

"Merely" proposing standardized vocabulary is proposing a formal definition. That's exactly the point of common formal definitions.

Informal definitions aren't particularly valuable here, except as introductory motivations to formal definitions, IMHO. You wouldn't do physics without calculus, you shouldn't do PL semantics without the relevant logic.

Sure, you can partially reason about "objects" in terms of bounded quantification (as in Chapter 18 of TaPL) or partial function application, and in both cases you'll compile to some intermediate stage probably expressed in terms of existentials (e.g.: in what way do +(1) and \x->x+1 have equivalent representations? exists T.rec X.{data:T,fn:X->int}, instances of which are undoubtedly objects).
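To make that concrete, here's a minimal Haskell sketch of the existential closure representation (the names, and the simplification to a non-recursive record, are mine):

    {-# LANGUAGE ExistentialQuantification #-}

    -- A closure as an existential package: hidden environment plus code.
    data Closure a b = forall env. Closure env (env -> a -> b)

    apply :: Closure a b -> a -> b
    apply (Closure env f) = f env

    -- (+1) closure-converted: the captured 1 is its hidden environment.
    plusOne :: Closure Int Int
    plusOne = Closure 1 (\n x -> n + x)

    -- (\x -> x + 1) closure-converted: an empty environment.
    addOne :: Closure Int Int
    addOne = Closure () (\() x -> x + 1)

    -- Clients can't tell them apart: apply plusOne 2 == apply addOne 2 == 3.

Both values inhabit the same abstract type, and a client can do nothing but apply them; that is the sense in which their representations are equivalent.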

My point is, if anybody really cares about this subject, there's a very rich theory available to them to analyze all sorts of different aspects of programming languages.

Existentials

You are making things more complex than they need to be. (+1) and (\x->x+1) are just members of Int->Int, or Num a => a -> a if you are a Haskell programmer. I have never found anything essential in Pierce's use of existentials for objects. It is not wrong, but the existential can always be eliminated without loss of generality.

As for definitions, we will just have to agree to disagree.

Getting to my point

(+1) and (\x->x+1) are just members of Int->Int

And how is that equivalence realized? That's my point.

Why is a behavior a

Why is a behavior a collection? I would propose you flip it around - if you must have collections, have a collection of named behaviors.

But do you need collections? or names?

Actors only have one behavior, after all, though an actor might (if it so chooses) distinguish on value. I quite favor OO models where there is no extrinsic notion of methods - i.e. where any distinction is performed by the object. Cloud Haskell distinguishes messages on type, Erlang on arbitrary patterns; the E language, Smalltalk, and many more OO models don't use named methods except as syntactic sugar for programmable dispatch.

I tend to understand named methods as a compromise to avoid a richer type system. If we had dependent types, we could build rich protocols and constraints without the semantic pains of namespaces.
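For instance, here is a minimal Haskell sketch of such programmable dispatch (the message type and names are mine, purely for illustration): the object is a single function on messages, and "methods" are just the patterns it chooses to recognize.

    data Msg = Speak Int | Move Int

    -- One behavior per object; all dispatch happens inside the object itself.
    type Object = Msg -> String

    bird :: Object
    bird (Speak n) = concat (replicate n "cheep ")
    bird (Move  n) = concat (replicate n "flap ")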

It is not easy to find words to describe this concisely

The dictionary definition of behavior often allows a range of different responses to different stimuli. Therefore, the idea of a collection is already present in the concept of "behavior".

I tried to avoid "name" and use something like "identifiable" ... but I ended up needing the concept of names in order to be able to define "dynamic dispatch". You see it used there in the definition. I try to be a little fuzzy about this, to allow Erlang-style dispatch, but still concrete enough to match our intuition about multiple operations.

Names aren't necessary for

Names aren't necessary for dynamic dispatch. I agree that different behaviors for different stimuli is at least vaguely related to the notion of 'collection'.

Names

I agree that they are not necessary for the concept. I just found them necessary for writing the definitions in a concise way, without introducing many other ideas that I didn't want to get into. So it's a question of wording and presentation, not meaning.

Methods are present in practice

I think one of the (not to be neglected) goals of wcook is to capture the definition of "object-oriented" as actually used in practice, rather than what he or you would like better as an object-oriented system. That's probably the reason why he captured the idea of a collection in his definition.

Methods are only an

Methods are only an occasional part of the practice. That was half my point. Not only do I favor such OO models. They exist to be favored. A description of the OO practice should account for them.

And in my extensive discussions with @w7cook on this subject, I must say he's more interested in an "insightful" definition of OO than one that reflects practice. For example, he's quite intent on removing state from the definition. Unlike method names, state is universally part of the OO practice. (I don't mean that every object is stateful.)

"this"/"self" and open recursion

The definition seems to say that a simple record of functions qualifies as an object, especially if it's "easy" to write. Many different records with distinct functions could satisfy the same record type.

But one of the things that makes standard objects distinct from simple records of functions is "this"/"self" reference and open recursion, such that if an object has a bar behavior which calls this.foo, then the foo called may be something that bar couldn't have been "aware of" at the time it was defined. The way foo gets (re)defined could be through inheritance or prototyping or whatever.

Or perhaps that's covered under "the specific operation to be invoked must come from the object identified in the client's request"? If so, that's not clear especially with the later commentary saying that open recursion isn't necessary to the definition.

Open recursion needed for inheritance

Good point. I say that inheritance is not required for objects, therefore open recursion is not required either. Some form of basic recursion between the operations is probably needed for any useful object to be defined, but there is no requirement for the recursion to be open.

Excluding inheritance from the required features is controversial, I know. But I am convinced that inheritance, while very useful, is not absolutely essential to create a program that is recognizably object-oriented.

Inheritance <: Delegation

Language support for inheritance isn't necessary for OO. But the basic delegation pattern represented by inheritance is common to OO. It can be achieved readily enough by has-a, especially in languages with programmable dispatch methods. (Doing it by hand per method can be quite painful, but also counts.)

No need for implicit 'this'

if an object has a bar behavior which calls this.foo, then the foo called may be something that bar couldn't have been "aware of" at the time it was defined

Languages that have no implicit 'this' and instead pass the receiver explicitly (typically as the first argument) have this ability too, so I don't see how 'this' is central to OO.

Arguably those languages

Arguably those languages aren't OO, but allow a shallow encoding of OO.

trivial syntactic matter

I view this as a trivial syntactic matter, without any bearing on the question of whether a language is OO or not.

You don't lose (or gain) any expressivity by requiring senders to pass the receiver as the first argument. object.method() becomes method(object). And even in the implementation of an object's methods, nothing changes, since method() (on the implicit 'this') is just shorthand for this.method() [so it's trivially expressible as method(object) in languages with an explicit receiver.]

Not trivial

I disagree that this is a trivial syntactic difference. Scoping works completely differently. In fact, the "method" in method(object) and in object.method() aren't even in the same semantic class: the former is a variable, the latter a label. One consequence is that the former is alpha-convertible while the latter is not.

I don't think this

I don't think this distinction is universal. method in method(object) is a label in a language with first-class modules, where modules are records. Your distinction does indeed seem like a limitation of the encoding of OO, not the definition.

Hm

Hm, I don't follow. Which module system do you have in mind? I agree that modules essentially are glorified records, but I don't see how that changes the variable/label distinction, at least not in principle. (In practice, module systems try to hide it somewhat, but it still exists once you desugar enough to get a view of the underlying semantics.)

Edit: The main point I was trying to make is that with function call syntax the "method" is bound externally, while with method call syntax the method is internal to the object, at least conceptually. IMHO, that has some fundamentally different implications.

The example I was thinking

The example I was thinking of where the distinction doesn't exist was Leijen's first-class labels. Indeed, I'd say any pure definition of OO must include the possibility of OO systems with first-class messaging, and dually, any definition of functional programming must allow systems with first-class labels.

The deeper issue is that I

The deeper issue is that I have removed inheritance/delegation/open recursion from the definition. My point is that OO has two kinds of extensibility.

1. The first concerns how objects are used, and is given by dynamic dispatch. This is the same kind of flexibility given by higher-order functions (which are always dynamically dispatched).

2. The second concerns how objects are created, and is provided by inheritance. Inheritance is a general idea that can be applied to derivation from any self-referential structure (ML modules, recursive types, objects, classes, interfaces, state machines, etc).

I have said #1 is essential to the definition of OO, while #2 is not. That is, you can have objects and OO without inheritance. The Go programming language is an example.

What was wrong with the old definition?

I learned the following: An object

  1. has an identity
  2. has a state
  3. has behavior (in the form of methods)

I really fail to see the advantage of this new, alternative definition of objects.

Removing ambiguity and orthogonal issues

It's a good question. The biggest change in my definitions is to the word "object-oriented". Previously it was generally understood to require inheritance, classes, and perhaps other things, which I have removed.

As for objects, your definition might apply to modules in ML, Ada or Modula-2, and perhaps even to type classes in Haskell (since state isn't required to be mutable). It also doesn't specify how the operations are bound, so static binding might be allowed.

I guess that my point is I want a definition that is very precise about the essential characteristics, and omits things that are non-essential. I have, for example, removed mutability as an absolutely essential characteristic of objects. I did this not because mutability isn't useful (it is almost necessary in practice); I removed it because it is orthogonal to the issues that make objects unique.

William, I like your

William, I like your definition, in particular as opposed to the old one which required "identity and state". Identity and state are intertwined with mutable objects, and that's something we need to get away from. So I am really glad to have a precise definition that concentrates on behavior and is silent about identity and state.

Not OO

Well. I had this discussion before on G+, and my counterargument is that without identity and state you're not doing something that is recognizable for a large audience as OO. Maybe it's better, I don't know, but I would prefer that one wouldn't call it OO, but something else.

The thing is that identity and state give rise to OO modelling, as made concrete by UML, in which one can reason over object diagrams, state diagrams, sequence diagrams, and class hierarchies in class diagrams. (There are more, of course.) I might not like it, but that's OO for a large audience, and I don't think this definition is better than the old one I learned out of a number of UML books.

Agreed. Again, a technical

Agreed. Again, a technical definition of OO is hard to grok without some "why" context. I don't find the UML argument very strong, but design is a big part of what makes objects "objects" as opposed to just some bits in memory.

Identity and state are very important for my own definition of OO; they are also the primary distinction between OOP and FP. We definitely need saner ways to manage them, but to discount them as non-essential... we are thinking about different paradigms.

OOP from Message Passing Concurrency

I used to very strongly feel that identity and state were irrelevant to whether a language was "OO" or not. Cardelli's (immutable) sigma calculus, and the fact that utilizing state-passing in typical "OO" languages loses you no more than it otherwise does, reinforced this. I also felt that implementation inheritance (and to a lesser extent inheritance in general) was unnecessary and undesirable. I tended to lean toward delegation. Really, though, I and, I feel, most others were picking and choosing which features to include based mainly on taste. While I could justify/rationalize my choices, I didn't have some independently compelling model whose features I was enumerating.

I thought about this off and on over several years and during that time I learned about object capabilities, got in to Haskell, learned about the pi calculus, and deepened my understanding of formal programming language theory and things like category theory. Several things became clear.
1) The lambda calculus was fundamentally important within and far beyond computer science.
2) Functional programming was essentially based on the lambda calculus and had benefited enormously from that base.
3) Object-oriented programming had no similar base, and notably not even the lambda calculus served as such a base.
4) The OOPL research literature was rife with isolated "intuitive" metaphors and isolated ad-hoc formal systems (such as the sigma calculus or your nuObj calculus for that matter). I.e. they were formalizing existing practice rather than deriving it.
5) From things like object-capabilities and connections to coalgebra, it was clear that there was something to OOP. It was NOT just a pragmatic pile of ad-hockeries.

Ultimately, after writing programs in the raw pi calculus and writing other message passing code, I found myself drawn toward patterns rather similar to OOP. The literature suggested that I, at least, wasn't alone in this lack of creativity, but the blue and deep blue calculi cemented this as something more significant. I now view (that is define) OO as based on concurrent message passing calculi in the same way FP is based on the lambda calculus. The pi calculus, or more conveniently the deep blue calculus, gives rise to something resembling typical OOPLs. Other calculi, such as the join calculus, with your Funnel taking the place of the deep blue calculus, provide variants.

Taking this perspective, which seems reasonably well motivated historically though not as much in current practice, provides pretty clear answers to what the "features" of OO should be. One thing that's eminently clear is that OO is inherently stateful. With this comes a weak notion of object identity, but something like Scheme's eq? does not seem to appear fundamentally. An eq? can be constructed if desired by cooperating objects, as you yourself demonstrate with Funnel, but it can't be imposed. Providing a pervasive eq? doesn't seem incompatible, and this ambivalence about the necessity/desirability of eq? is reflected in the object-capability community. Implementation inheritance does not exist in this view except via delegation. It could be added, but there is nothing justifying it. Interface inheritance is more natural, but is more of a concern for type systems than the underlying calculus. Encapsulation is rather important. Connections between coalgebra, concurrency, and OOP seem more natural in this light. Similarly for connections between concurrent calculi/logical frameworks, security, and object-capabilities. Finally, while this may just be an artifact of how we like to formalize things, this perspective leads to something more akin to E-style lambda-based objects rather than prototype- or class-based objects.

I think Smalltalk may have been one of the earlier and more influential drivers away from this conception of object-oriented programming. The goal for Smalltalk was really late binding. From the perspective of having everything be as late-bound as possible, "everything is an object" makes sense. From the perspective of an object being a (constellation of) concurrent process(es), having 1 be a concurrent process seems absurdly extreme even if technically doable. This is consistent with the lambda calculus and functional programming where everything could be a function, but it isn't really desirable to take that view.

To conclude, I don't want to say that this is a good description of what people typically mean by "OOP", nor do I have any intent of going around and telling people that their definition of "OOP" is wrong. I do feel, though, that this is much closer to what OOP ought to have meant and perhaps should come/return to mean, and is certainly a much more actionable and principled foundation for extending (this notion of) OOP.

Object Thinking

I have my own designy definition: object-oriented programming is about thinking about your program in terms of interacting and somewhat encapsulated objects. Some of those objects are virtual computer manifestations, while some of those objects exist in the real world and interact via IO (yes, the user is an object :P).

But now, how do we support object thinking? When I was young, I rolled my own objects in C (which otherwise supports procedure thinking), added my little v-tables, whatever. But the only reason I got there is because I wanted to think about my program that way, and only then did I accidentally reinvent OOP (reinvented is too strong; I'd had exposure to Smalltalk, so I was unwittingly biased). My point is that my "what" of objects at the time emerged from why I wanted to use them.

William, I think your definition would be much stronger if you included some more "why objects"; then you could argue how the "what are objects" matches back to the why.

A language or system is

A language or system is object oriented if it supports the dynamic creation and use of objects.

Am I the only person on here who disagrees with the "or system" part? To me, object-oriented systems are totally different from object-oriented languages. Object-oriented systems like the Smalltalk environment and CLOS are living things you can directly manipulate.

As for a collection of named behaviors, I understand OO interfaces as records of operations.

Systems have other properties

Hi John. The intent of the definitions is to state required properties, but the properties are not exhaustive. Object-oriented systems may have many other properties, but they are required to use behavioral objects. I understand that we have overloaded the term "object-oriented" until it means many things to many people. I'm trying to unbundle it. You could have a "highly reflective dynamic object-oriented system" which added more capabilities. But I would argue that these additional capabilities are not part of the definition of "object-oriented". I'd say OO langs and OO systems are different because langs and systems are different, not because they have a different idea of objects. What do you think?

Just polymorphism then

This definition essentially says polymorphism is the essence of OOP. I agree with that. In particular, it leaves out concepts like encapsulation, private state, parametric polymorphism, subtyping, and inheritance.

What I dislike about the definition is the use of "may" and "can". For example, you could say "a behavior is a collection of named operations with shared state".

Avoid subjective words like "easy". Personally, I find it easy to encode objects in C. Does that mean C supports object-oriented programming for me, but not for other people?

Detail: client's -> clients

"Polymorphism"

Except that the use of the term "polymorphism" for (essentially) dynamic dispatch has always been pretty much a category mistake, and is usually avoided in PLT circles. Objects generally are no more polymorphic than integers. They just happen to have a more interesting behaviour, because they can encapsulate computations. Like any random first-class function, for that matter.

Not enough

As I commented above, I don't think that's enough. Here's some Haskell

    {- the interface -} 
    data Animal = Animal {
      speak :: Int -> String, 
      move :: Int -> String
    }

    {- One type of animal -}
    bird = Animal {
       speak = \repeats -> concat $ replicate repeats "cheep ", 
       move = \repeats -> concat $ replicate repeats "flap "
    }

    {- Another type of animal -}
    dog = Animal {
       speak = \repeats -> concat $ replicate repeats "bark ", 
       move = \repeats -> concat $ replicate repeats "run "
    }

    *Main> speak bird 2
    "cheep cheep "

    *Main> speak dog 2
    "bark bark "

This exhibits exactly what OOers tend to mean by "polymorphism" and is very easy to write. The same thing would take 3 times as many tokens in Java.

Yet, IMHO, what I've done does not qualify as object oriented programming in an object oriented language. It's just functions and records, and in Haskell records are basically just sugar for functions and algebraic data types.

Method syntax in Haskell

Even OO-like method call syntax is available in Haskell:

bird `speak` 2
dog `speak` 2

Looks like object-oriented

Looks like object-oriented programming to me. Why do you say that it isn't? Sure, things will get a little more complicated when the interfaces are recursive, as in a stream interface. Adding many of the non-required OO features, including inheritance, subtyping, encapsulated state, etc, is also nontrivial. But those look like objects to me.

Someone commented that I need to fix the definition to avoid using "easy". I could say "provides useful syntactic support for creating and using objects". In any case, should I say that Haskell is OO or not? It is pretty easy to do OO programming in Haskell, as is well known.

I didn't count it, but it looks like about the same number of tokens in Java. Why the "repeat" method is not included in the String class I will never know. In Ruby it would probably be fewer tokens than Haskell.

// StringUtils.repeat is from Apache Commons Lang.
interface Animal {
  String speak(int x);
  String move(int x);
}

class Bird implements Animal {
  public String speak(int x) { return StringUtils.repeat("cheep", x); }
  public String move(int x)  { return StringUtils.repeat("flap", x); }
}

class Dog implements Animal {
  public String speak(int x) { return StringUtils.repeat("bark", x); }
  public String move(int x)  { return StringUtils.repeat("run", x); }
}

Tokens and stuff

Mea culpa, the Java code is roughly twice as many "essential" tokens, not three times (where essential ones are mandated by the language and inessential ones are just gaps in the standard libraries).

Ruby would definitely be fewer tokens because you don't need to declare an interface or specify types.

Why isn't it OO? Because as soon as I want to do anything interesting, that simple encoding falls flat on its face. If some animal's "move" needed to refer to its "speak", then Haskell can do that, but not nearly so trivially. It definitely moves away from "support" into "can encode" territory.

Self in Haskell

See my reply below.

By the way, adding open

By the way, adding open recursion and even inheritance via "self" to Haskell is relatively easy as well:

data Animal = Animal {
  speak :: Int -> String, 
  move :: Int -> String,
  act :: Int -> String
}

base self = Animal {
   speak = \repeats -> "",
   move = \repeats -> "",
   act = \repeats -> (self  `move` repeats) ++ (self  `speak` repeats)
}

{- One type of animal -}
bird self = let super = base self in Animal {
   speak = \repeats -> concat $ replicate repeats "cheep ", 
   move = \repeats -> concat $ replicate repeats "flap ",
   act = (super `act`)
}

{- Another type of animal -}
dog self = let super = base self in Animal {
   speak = \repeats -> concat $ replicate repeats "bark ", 
   move = \repeats -> concat $ replicate repeats "run ",
   act = (super `act`)
}

fix f = f (fix f)   {- ties the knot: each object is passed to itself as 'self' -}

b = fix bird
d = fix dog

*Main> act b 2
"flap flap cheep cheep "

Now we're getting somewhere

Now this is OO programming. But once you have users writing fixpoints, you've moved out of "supports" territory; hence it's quite reasonable to say that Haskell is not an OO language.

What if instead of the

What if instead of the identifier fix I use the identifier new?
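Concretely, continuing the code upthread (new is just fix under a more suggestive name):

    new :: (a -> a) -> a
    new f = f (new f)

    b = new bird   -- reads like object instantiation
    d = new dog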

Does Haskell support OO?

William, you write "A language or system is object oriented if it supports the dynamic creation and use of objects. Support means that objects are easy to define and use. It is possible to encode objects in C or Haskell, but an encoding is not support."

So, by your definition, I would say Haskell is OO, but you specifically cite Haskell as not supporting OO. I agree with other correspondents that the culprit is the word 'easy'. You need to replace it with a more specific criterion, which won't be easy!

Grammar and Diction

I don't see a problem with using "may" or "can". The word "client" is introduced in the second sentence of the definition paragraph because it is needed later. I don't see a case where "clients" or "client's" is misused. You are right about "easy". I'll change that.

Objects vs. Functions

I have difficulty distinguishing between objects and first-class functions using William's proposed definitions. Given that Will argues 'varying behavior for different stimuli' to be sufficient for a collection, even that can be observed within a function (varying results for different inputs) - e.g. just use a message type that allows pattern matching.

IMO, the judge of objecthood is support for OO patterns. Many GoF patterns (including the most important ones) require state. And more important than GoF patterns are the object capability model patterns. Today, object capabilities are the best result from and the best justification for OOP. Object capability patterns are all about controlling state and authority - grant, delegation, attenuation, isolation, etc..

The difference between functions and objects is that objects can encapsulate authority, while functions can only borrow it from or pass it to the caller. I.e. this difference corresponds to first-class functions vs. first-class procedures. Procedures may observe, influence, and reference state resources.

(Whereas we can have "everything is an object", we cannot have "everything is a function" since there must be an ultimate 'caller' that can observe and influence the world on the function's behalf.)

Objects don't need to have "private mutable state" to be about state, and not every object needs to be mutable for state to be essential. Ignoring authority to observe and influence state in the definition of OO is, IMO, to ignore one of the most essential aspects of objects (the other essential aspect being that they're first class, and can be shared and configured at runtime).
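As a small illustration of encapsulated authority (my sketch, not part of the definitions under discussion; it uses an IORef as the state resource): clients holding the object below can exercise the authority to update the counter, but can never reach the IORef itself.

    import Data.IORef

    -- An object as a record of operations closed over a hidden state resource.
    data Counter = Counter { incr :: IO (), get :: IO Int }

    newCounter :: IO Counter
    newCounter = do
      ref <- newIORef 0    -- the encapsulated authority
      return Counter { incr = modifyIORef ref (+ 1)
                     , get  = readIORef ref
                     }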

Patterns

You are the only person I have ever heard proposing that the definition of object should be based on OO patterns. There are many reasons why tying objects to patterns is a bad idea. One is that OO existed and was used in significant projects long before OO patterns were formalized. You could argue that the patterns existed but weren't written down. However, that is a stretch. More importantly, patterns focus on difficult situations, not the basic cases. Thus they tend to emphasize the problems in using a language, not the easy standard cases.

I think it would be better if you focused on OO design as a component of the definition of objects. For example Wirfs-Brock's CRC methodology or any of the other OO design methodologies. These are better because they focus on the common case, not the outlying case as in patterns.

Also, you are moving into the question of why objects are the way they are and how to use them correctly. These are important topics, but I don't think they belong in a technical definition.

Do you seriously believe

Do you seriously believe "it's a stretch" that, for example, state and strategy patterns existed before being written down? You believe command and adapter patterns are "not basic cases"? Perhaps you should embrace a more realistic understanding of patterns. Realistically, all GoF patterns existed before being written down - the GoF was naming patterns exhibited from existing applications, not inventing patterns from whole cloth. Realistically, there are many more patterns they chose to not publish - after all, nobody wants a book about the boring, common or trivial patterns everyone can figure out.

If it makes you happy to call it "OO design", that would be acceptable to me. In essence, it's the same thing - communicating patterns for structuring multiple objects and the communications between them in order to achieve common goals. The relevant bits are that:

  • objects are defined and recognized with respect to object systems
  • object systems are defined and recognized in terms of their externally observable properties
  • the definition of object systems allows them to be distinguished from the systems they are commonly contrasted with - e.g. functional, procedural, logic, constraint, type systems, etc.

We recognize object systems in terms of software design patterns. Even Nygaard and Kay emphasize how programs are structured, designed, organized, and regarded above syntactic support for 'objects'. We can recognize OO and objects even when they are modeled in non-OO languages.

I am not attempting to define objects in terms of "how to use them correctly". Many OO patterns that naturally emerge are quite awful, or require a lot of state management that is not essential to the problem or domain. But any definition should be consistent with how OO is recognized and distinguished from other programming models today.

Your pet definition fails in that regard.

Patterns

My point is that patterns are just one part of the picture. They typically (at least originally) were both *common* and *domain-independent*, so they cannot describe many of the important object structures that arise as a result of detailed analysis of a particular application. These application-specific particulars are what I meant by "basic cases". As a result, I think patterns are not a good basis for a definition of objects or object-oriented programming. Also, something like the command pattern would be equally useful for implementing the Undo feature in a functional program. Thus they are not necessarily object-specific.

Someone else suggested that I include more about design in the proposed definition. I have been thinking about it and plan to make some revisions.

PS: By the way, I respect your opinion and that is why I'm engaged in this discussion with you. There's no reason to add in little jabs like "pet definition". It just brings down the entire tone of the conversation. OK?

application-specific

While application-specific particulars are the real "basic cases" in OO programming, they are generally not domain-specific. Rather, they are specific applications of domain-independent patterns.

In general, domain modeling with objects turns out to be a mistake - i.e. design an application with polymorphic Person and TaxReport objects and you end up with a business simulator rather than a business data processor. This happens a lot to new OO programmers because, for misguided didactic purposes, most toy examples of OOP are simulators (duck say quack, cow say moo).

For professional OO developers, in my experience and in discussions I've had, the base cases are generally not domain specific beyond simple value-objects (which would be served as well by plain-old-data as by ADTs or OO) for representing data and commands. Rather, objects are a way to reduce coupling and improve configurability internal to the application's communication structure. The common cases for leveraging OO properties are generally not domain specific.

Even developers of games and simulators will often advise domain-generic approaches to structuring objects (cf. type object). OO has not proven ideal, nor even very suitable, for domain-specific techniques. (Though this has been mitigated in languages with multimethods; multimethods mix in a bit of logic programming, and logic programming is great for domain-specific purposes.)

Nygaard's and Kay's descriptions of OO don't even mention the problem domain. They're all about the program's internal structure. And that is as it should be.

something like the command pattern would be equally useful for implementing the Undo feature in a functional program

True. We recognize systems by combinations of patterns and their prevalence, not by any specific pattern.

RE: "no reason to add in little jabs" - I apologize for how I expressed that opinion.

OO: know it when I see it

You are the only person I have ever heard proposing that the definition of object should be based on OO patterns.

I had been resisting the urge to comment on this thread because of this :) Basically, whether they're in the definition or not, OO patterns seem essential for a useful definition of objects. I agree that they need not be in the definition itself, but if the definition does not imply what is intended -- a language in which OO is practiced -- that suggests the definition is for something else or what is intended is somehow inconsistent.

Two cases popped up in the reasoning: the ability to create abstractions, and inheritance. You recognize that support for creating objects varies, but then ignored it as orthogonal and (me editorializing) controversial. I would like it supported somehow. Ideally, with a notion of strength or consistency (the 'lambda cube' is quite elegant in this). The other vivid case was inheritance. The ability to share code is problematic for many PL researchers (e.g., superficially, guaranteeing Liskov substitution), but that doesn't mean inheritance should be ignored as an irrelevant or inherently undesirable part of OO. Indeed, in surveying programmers on what they value about OO systems, I saw that inheritance ranks much higher than interfaces. (I didn't even consider polling about different forms of dispatch.)

The definition should allow us to tackle what is meant by OO, such as the ideas above. Whether that is easy or even possible is a different issue. Even if the support is not directly in the definition, pretty basic litmus tests of a grounded definition would be in analyzing these ideas.

This is the point that I've

This is the point that I've tried to make in this thread, though I haven't been able to articulate it successfully, I guess. A definition needs context.

next steps

It seems to me that people are reasonably happy with the definition of "object" (although taking out mutable state is still controversial).

But I can see that there are lots of problems with my definition of "object-oriented". This is a more difficult thing to define, because it is a style or a focus rather than a technical construct.

It also seems to me that "inheritance" is a property more of "object-oriented programming" than of objects themselves. In other words, inheritance is about how to make objects, not about the exact nature of objects themselves.

My proposed definition was minimalistic: it's OO if it "supports" objects. People have had trouble with the precise meaning of "supports". I'm not sure what to do here. One problem I have with the idea of including patterns or design is that a definition should not imply a value judgment, in that you could do bad OO programming and it would still be OO.

Do you think it's possible to define OO in an objective way that is useful and would accurately describe what it is?

Do you think it's possible

Do you think it's possible to define OO in an objective way that is useful and would accurately describe what it is?

No. Compare your definition to what I was accustomed to telling students. An object has identity, state, and behavior. A class is a blueprint for a collection of objects. All that relatively naturally translates to UML diagrams, where a box can be an object (the name of the box is its identity, the state is a valuation of its attributes, etc) or a class (describing the name, state, and behavior). From these notions, 'typical' OO features 'emerge', such as class diagrams with inheritance, an 'is-a' relation, or membership, a 'has-a' relation, etc.

Honestly, all these definitions are colloquial and formally pretty much debatable. One might wonder about: what exactly _is_ identity, what exactly _is_ inheritance, what exactly _is_ behavior. Nothing is precise. Fortunately, students hardly wonder about the precise semantics of things, and that is a good thing, since I see OOAD, design, as an inherently 'sloppy' mental exercise. To me, the essence of OOAD isn't much more than: Derive/draw a picture, a mental model, and implement it (making the design precise.)

Consequently, I don't think OO can be defined in an objective -precise- way, and I don't think there are any merits to that. Well, except for the study of an aspect of OO, such as precise type systems, for instance. I.e., in a paper an academic can dive into an aspect of 'the essence of OO' by presenting a very precise meaning for an OO language with OO types. Or study inheritance in a philosophical manner with your definition.

I wouldn't call that OO, but an academic exercise at getting certain aspects precise in order to study them. But only aspects.

(Actually, I also have a philosophical objection to your definition. You say: "An object is ... behavior." Behavior of what? Doesn't the term 'behavior' imply an observable quality of 'something,' which would make that 'something' a more fundamental notion than 'behavior' itself? Short of a 'panta rhei' answer, I think you need another term.)

No. Compare your definition

No. Compare your definition to what I was accustomed to telling students. An object has identity, state, and behavior.

I think Cardelli has studied objects more than most anyone here, and his object calculi are functional (edit: by 'functional', I mean 'pure', not the meaning 'based on functions'). He also provides imperative extensions in the same vein as extensions of the pure lambda calculus with mutation.

The definition given to your students accurately describes all modern, deployed OO languages, it simply does not encompass all possible OO languages.

Well, you can't please

Well, you can't please everybody. True for both Cook and me. But I am not too bothered if Cardelli's academic work isn't covered by my definition.

[Note it isn't my definition. Just stuff out of UML books which I find serve their purpose in explaining the OO mindset. And I don't think UML is particularly bound to a programming language. The definition also fails on Javascript, in which -some claim- it is possible to do OO programming too. ]

all possible OO languages

The definition given to your students accurately describes all modern, deployed OO languages, it simply does not encompass all possible OO languages.

The argument you're presenting seems to require equivocation, i.e. using 'OO' with two different meanings. If marco chose to stick with his definition, he could argue that all possible OO languages have identity, state, etc. because languages that lack those features are, by definition, not OO. QED.

If we're going to argue about definitions, we should try to make it clear that this is what we're doing. Definitions are judged differently than arguments.

Cardelli's Theory of Objects does not, by the way, demonstrate that his effect-free object calculi would be suitable as an OO language. The study of effects was deferred because it was important enough to study on its own, after the foundation was laid. First sentence of chapter 10 of Theory of Objects: "Object oriented languages are naturally imperative". The book introduces effects in chapter 2.

The argument you're

The argument you're presenting seems to require equivocation, i.e. using 'OO' with two different meanings.

There is no equivocation if you use a definition of OO that encompasses both imperative and pure OO languages.

The study of effects was deferred because it was important enough to study on its own, after the foundation was laid.

Exactly, so the foundations of OO do not inherently require effects. We can look at the pure object calculus and still recognize objects and object-oriented programming, like some of the OO patterns you say are essential for defining objects to begin with. That's telling.

First sentence of chapter 10 of Theory of Objects: "Object oriented languages are naturally imperative". The book introduces effects in chapter 2.

For those who are interested, here is the opening page of Chapter 10. It's not at all clear to me that this statement is intended as a definition of objects as inherently imperative, as opposed to simply a recognition of the current state of the field, ie. all cars naturally had 4 wheels, until we invented ones that had 3. All definitions are working definitions subject to refinement as needed.

I've designed languages that

I've designed languages that I've called OO that weren't imperative but still had state and identity. I would also claim that declarative + OO + state + identity are not mutually exclusive. It's even possible to have an OO language without methods (say, a data-flow language that supports object connections and object inheritance).

Only if we look at popular OO languages do we find standardization of their companion paradigms. But it's definitely not true for possible, niche, experimental OO languages that one is unlikely to use.

So, then what is meant by naturally? I'm guessing this word has something to do with natural selection (animals naturally don't have 3 legs as they wouldn't survive to reproduce). So perhaps the claim here is that in order for an OO language to survive, it must be imperative? But then that gets at the crux of the issue: we could make claims that feature A naturally implies feature B, meaning you can't have a successful language that has A and not B, even if such a language is possible to imagine, design, and construct. Since all my counterexamples are fairly niche, I can't really argue against that.

in order for an OO language

in order for an OO language to survive

This phrase seems problematic. Perhaps separate it:

  • in order for a language to survive...
  • in order for a language to be recognized as OO...

It might be that a language doesn't need to be imperative to survive, but does need to be imperative to be recognized as OO. Or perhaps the ability to alias state would be sufficient. Or maybe OO has nothing to do with properties of the language, and is really in the hands of marketing...

So perhaps the claim here

So perhaps the claim here is that in order for an OO language to survive, it must be imperative?

One could argue that every usable language is ultimately imperative because, as Jones said, a language that performs no I/O is useless. Any OO language will enable side-effects in some manner; the question is merely whether the side-effects will be referentially transparent (as in Haskell) or uncontrolled, as in all current OO languages in use.

Whoa...what? Imperative

Whoa...what? Imperative programming is basically step-by-step recipe programming; it does not necessarily imply side effects or IO, though it's often found useful for that; and you can write perfectly state-free, referentially transparent code with procedures if it floats your boat. You can write imperative code in Haskell (using state/IO monads), and you can write declarative code in C# (using functions); you can even control your side effects if you want. You can do IO without writing imperative code (data flow), and we often even have weird mixes of imperative and declarative IO (data binding).

So far OO has been paired up mostly with imperative control at the language level; anything else (data binding) is done at the library level. But contemporary OO does have fairly good support for encapsulation; it's not completely uncontrolled.

I'm curious how you'd write

I'm curious how you'd write imperative code without mutation. These seem inextricably linked, which means imperative programming implies side-effects.

Edit: could you elaborate on your meaning of dataflow I/O?

Imperative code without

Imperative code without "real" mutation is simply functions that are expressed with reassignment. My best examples here are earlier versions of GLSL/HLSL, where code is definitely not doing IO or even messing with memory beyond array reads; it is actually quite functional...yet the language designers decided it best to make the language imperative anyways, and we are only one step away from functional by going to SSA.

Dataflow I/O... think Quartz Composer: there is definitely IO going on; that's the whole point of the language. But you enable this behavior by wiring up your program without any imperative code at all. Basically, you have an event source and you wire it up to an event sink... maybe transforming it in between. And then the IO just happens; you don't need to be directly involved in the flow for IO to happen, you just make it so the flow can happen.

it is actually quite

it is actually quite functional...yet the language designers decided it best to make the language imperative anyways, and we are only one step away from functional by going to SSA.

Isn't SSA just A-normal form? I'm not sure I'd classify that as imperative programming. Once we introduce the store via which we write memory, then we're doing imperative programming. The mutation was still that final step, so it doesn't seem separable.

Re: non-imperative I/O, given your description as a dataflow program, it sounds like we're declaring a device (the display) to have a purely functional behaviour. Can all I/O be represented in this fashion, or just I/O that just happens to share this sort of character?

If the latter, then I'm not sure I'd classify the language as being capable of I/O, but rather it containing a domain-specific abstraction that happens to encompass a subset of I/O of interest.

I'm aware I haven't advanced a precise definition, and I'm trying to avoid circularity in how I define I/O so that it doesn't necessarily imply imperative programming. It just seems that a program must be capable of expressing some arbitrary interleaving of updates via its I/O abstraction. From this, we build domain-specific abstractions that restrict the interleaved updates to those we consider correct, like dataflow. I hope that makes some kind of sense.

Declarative I/O

All I/O can be modeled declaratively. cf. Dedalus or discrete event calculus. I/O doesn't need to be imperative.

Very little I/O can be modeled purely - at least, assuming we want open or extensible systems. Purity requires there be no aliasing, no dependency on environment, no implicit observers.

E.g. we declare the device (the display) to have a purely functional behavior. Can we explain how this declaration reaches the display? and when? What happens when there are concurrent declarations (from other agents or processes)? What happens when we want to change the display function for another one? These are the questions that characterize I/O.

a program must be capable of expressing some arbitrary interleaving of updates via its I/O abstraction. From this, we build domain-specific abstractions that restrict the interleaved updates to those we consider correct, like dataflow.

That's actually an abstraction inversion. I.e. you're starting with a more powerful and expressive technique, then applying layers of discipline to control it.

Sure, it might seem like the natural approach for programming CPUs, because CPUs are very eventful (and events are a major cause of accidental complexity). If we were starting with FPGAs or GPUs, it would be clearer that all this arbitrary interleave is something to avoid unless necessary.

If we start with a synchronous concurrent base, we can build stateful systems atop it. We can even still model promises, mutexes, queues, etc.. But such techniques are often unnecessary, symptoms of accidental complexity. You can reduce the amount of state to something much closer to essential.

The cost, of course, is that such declarative approaches are further from modern CPU models, and it takes a clever compiler to achieve performance.

HLSL/GLSL are definitely

HLSL/GLSL are definitely imperative languages. As shader pipelines begin to accept more CPU-style instructions (think geometry shaders), they are transitioning fairly easily to more general-purpose imperative languages, which would not have been as easy if they had originally been designed as functional.

For IO, think of your monitor: you have to raise the right electrical signals at the right time in order to get the image you want on the screen; that is very imperative. But the way in which I enable that, plugging the cable into the VGA port, is definitely declarative. So you can always enable IO by simply declaratively routing and transforming opaque wires, even if what happens inside the wires is imperative.

Obviously there are limits: at some point you might need to directly generate or receive that electrical signal on the wire, and then you are in imperative land. But that functionality could just as well be encapsulated inside a black box...so it "depends." But to say you always need to be imperative to do IO is incorrect.

Also, transcoding can make things confusing here: I can transcode imperative on top of a declarative language by simply "declaring" instructions; this is most definitely imperative programming even if the language isn't supporting me directly. This sort of thing happened in SuperGlue sometimes as I tried to make it more expressive. I can also transcode declarative code on top of imperative code; i.e., WPF databinding. Transcoding always makes the story much more nuanced.

HLSL/GLSL are definitely

HLSL/GLSL are definitely imperative languages.

I'm still not on the same page. The HLSL samples I've looked up all look like they're in SSA form, which jibes with what you said earlier. SSA form is considered functional programming by many, ie. not one step away, which I believe was your original claim.

Re: transcoding, isn't this just interpretation?

Re: any sort of I/O without imperative programming, perhaps I'm being overly skeptical, it's just that I've read a lot of papers on type and effect systems, regions, etc. and all of them seem to be focused on properly ordering effects via the application of functions. Perhaps that just biases me to think I/O is imperative, so I'll have to mull it over awhile; I'm not yet convinced that any sort of I/O could be represented in Dedalus or via dataflow without also embedding some core assumption that makes it all work out in the end (like call-by-value functions).

Effects achieved through

Effects achieved through message passing are imperative - i.e. messages effect an observable, temporally ordered sequence of discrete computations and state changes. Unless you switch away from messaging, effects in OO will be imperative.

If you switch away from messaging - e.g. instead use declarative signals or dataflows - many people would hesitate to call the result OO. However, use of state and aliasing would still allow recognizable expression of every OO pattern (including messaging, by explicitly modeling an inbox, aliasing it, and influencing its state).

General purpose programming does need to model IO, but we cannot conclude that it's necessarily imperative. OTOH, many of the alternatives for non-imperative, impure effects (such as concurrent constraint programming, or discrete event calculus) are neither widely known nor recognizable as OO.

Sean and I are closer than most to creating viable alternatives to imperative, message-based OO. Though, I'm not so fond of OO that I insist on using that descriptor.

side-effects will be referentially transparent (as in Haskell)

Side-effects are not referentially transparent in Haskell.

Exactly, so the foundations

Exactly, so the foundations of OO do not inherently require effects

You could validly say that the foundations of Cardelli's object calculus do not inherently require effects. But you're missing a step in reaching a conclusion that all "the foundations of OO" were laid prior to introducing effects.

All definitions are working definitions subject to refinement as needed.

Indeed. In some conceivable futures, we might be using "OO" to describe Snusp language. Definitions do change. But if we go around pretending to be prophets and using future definitions today, the best we can hope for is to confuse our audience and to present inconsistent arguments that might reduce to `OO is not OO`.

You could validly say that

You could validly say that the foundations of Cardelli's object calculus do not inherently require effects. But you're missing a step in reaching a conclusion that all "the foundations of OO" were laid prior to introducing effects.

I already mentioned the step, and in fact it's based on your own argument: the OO patterns expressible in pure languages are still recognizable.

the OO patterns expressible

the OO patterns expressible in pure languages are still recognizable

As phrased, this is a potentially vacuous form of argument. Compare: all the unicorns and rainbows you can express with Intercal are still recognizable.

I agree that the pure object calculus does resemble OO within the limits of what we can express. I don't find this nearly as profound as you seem to.

I'm a little mystified by

I'm a little mystified by how you can claim that OO patterns define the nature of OO, and in the next breath call it a vacuous argument.

It's not intended as a profound argument, it's merely intended to show that the character of OO is not defined by state and identity.

I said your phrasing

I said your phrasing potentially admits even vacuous arguments. That is, you said: "the OO patterns expressible in pure languages are still recognizable", but you did not argue even one OO pattern can be expressed, nor that a significant subset of patterns can be expressed, nor even that important or common patterns can be expressed.

We can say anything is true of all elements of an empty set.

you can claim that OO patterns define the nature of OO, and in the next breath call it a vacuous argument

First, I have not (in this thread) called anything a vacuous argument, much less my own claim.

Second, I have never claimed that OO patterns "define the nature of" OO. I did claim "the judge of objecthood is support for OO patterns". There are significant differences between the two. For example, my claim allows that we define OO without ever mentioning patterns so long as the definition ensures the patterns are admitted.

Third, I do not mean ability to express just a few select patterns. I mean all of them, or at least those most valuable or common to OOA&D - adapter patterns, monitor/observer patterns (which require aliasing), object capability patterns, etc.

it's merely intended to show that the character of OO is not defined by state and identity

If that is your intent, you have not succeeded.

My statement on pure OO,

My statement on pure OO, while imprecise, is not so imprecise that it admits any interpretation. The pretty clear implication is that most OO patterns are expressible, otherwise there wouldn't be much point to attempting to use patterns to distinguish OO. Only the patterns requiring global state changes are difficult to express in the presence of purity, like capability revocation patterns.

Only the patterns requiring

Only the patterns requiring global state changes are difficult to express in the presence of purity, like capability revocation patterns.

Why do you preface this sentence with the word "Only"?

And what do you mean by "global state changes"?

Why do you preface this

Why do you preface this sentence with the word "Only"?

Because in reviewing the OO design patterns during this thread, the only ones that looked truly difficult for a pure language are the ones specified in my sentence. Is there another possible interpretation of "only" in that context?

And what do you mean by "global state changes"?

I mean a pattern that exploits mutation in a way that implies global source code changes when representing the pattern in a pure language, instead of a small set of local transformations.

Capability revocation is an obvious example, as it would require explicitly passing around the whole store or an ST monad to thread the store implicitly. One or two of the behavioural patterns may also qualify, like Observer.
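For contrast, here is a minimal sketch of the classic stateful revocable-forwarder pattern, written against Haskell's IORef (the names are hypothetical, for illustration only):

import Data.IORef

-- A revocable proxy around a capability: revoking flips shared state
-- that every alias of the proxy observes on its next use.
makeRevocable :: (a -> IO b) -> IO (a -> IO b, IO ())
makeRevocable target = do
  enabled <- newIORef True
  let proxy x = do
        ok <- readIORef enabled
        if ok then target x
              else ioError (userError "capability revoked")
  return (proxy, writeIORef enabled False)

A pure rendering of the same pattern must thread that enabled flag (or the whole store) through every caller, which is exactly the kind of global change I mean.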

Is there another possible

Is there another possible interpretation of "only" in that context?

Rhetorically, the word asks the audience to dismiss or ignore a point. Use of words 'merely', 'just', 'simply', 'only', etc. are common signs of bias of perspective. Cf. Just is a dangerous word.

"the only ones that looked truly difficult"
"Only the patterns requiring global state changes are difficult"

Try the same phrases without the word 'only', or even with the opposite emphasis:

"common, possibly essential patterns requiring global state changes are difficult"

a pattern that exploits mutation in a way that implies global source code changes

We can model stateful objects in a pure system by returning an updated version of the object along with any response. This regresses the state burden back to the caller, who ends up threading an object graph through a series of operations. Not very different than threading any other store, really. An application written in this style requires a global discipline, i.e. a by-hand global code transform just to model local state across messages. Would you include this pattern?
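For concreteness, a minimal Haskell sketch of that encoding (a hypothetical counter object):

-- A pure "object": each message returns a response plus the updated self.
data Msg = Incr | Get

newtype Counter = Counter Int

step :: Counter -> Msg -> (Int, Counter)
step (Counter n) Incr = (n + 1, Counter (n + 1))
step (Counter n) Get  = (n, Counter n)

-- The caller must thread every updated object through by hand:
example :: Int
example = let c0      = Counter 0
              (_, c1) = step c0 Incr
              (_, c2) = step c1 Incr
              (v, _)  = step c2 Get
          in v  -- evaluates to 2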

If so, you might want to review just how much of Cardelli's chapter 6 relies on exactly this global discipline.

If not, I suggest instead you focus on aliasing of state: the ability to effect a change on one reference then observe that change on another. I suspect this to be closer to your intention.

One or two of the behavioural patterns may also qualify, like Observer.

Many patterns require state. More than you might suspect, since it isn't always obvious without context. For example, adapter pattern doesn't require state in the trivial case where messages correspond 1:1, but does require state in the common case that the adapter must join messages.
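To illustrate the non-trivial case, a small hypothetical sketch: an adapter that joins two fine-grained messages into one coarse call must buffer the first half, i.e. it needs state.

-- Incoming fine-grained messages and the outgoing coarse call.
data In  = SetX Double | SetY Double
data Out = MoveTo Double Double

-- The adapter's state is a pending x coordinate, if one has arrived.
-- It returns an output message (when a pair completes) plus new state.
adapt :: Maybe Double -> In -> (Maybe Out, Maybe Double)
adapt _        (SetX x) = (Nothing, Just x)
adapt (Just x) (SetY y) = (Just (MoveTo x y), Nothing)
adapt Nothing  (SetY _) = (Nothing, Nothing)  -- unpaired y is dropped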

I never was big on patterns.

I never was big on patterns. Basically, my view on patterns is that 1) "I'll bloody well invent a new pattern when I need one." and 2) "My students are better off when they can bloody well invent new patterns when they need one, instead of applying the wrong pattern badly." So I am a bit reluctant to muddle into your discussion.

But there is one thing. The whole purpose of OO, to me, is to model a program as a set of collaborating objects. UML makes that explicit by allowing one to define collaboration diagrams to specify how objects interact to implement a specific result.

OO without state? I just -pun intended- don't see it.

Of course you should invent

Of course you should invent new patterns when you need them. But the reason that patterns become patterns is that they are suitable in a wide variety of situations - i.e. even in the projects that use new patterns, you'll tend to use a lot more old patterns. The more patterns you have, the less often you'll need to invent new ones.

Patterns are the "crafting techniques" of any paradigm, experience distilled from trial and error. You can learn them through study or through invention (often re-invention), but learn them you shall - if you are to succeed with a paradigm, anyway. The "only" problem with learning them on your own is that you'll make a lot of your own errors, rather than learning from the errors of other people.

To say OO is more about the patterns rather than the material is analogous to saying carpentry is about the techniques rather than the wood. I.e. in the unlikely case we could transfer all those techniques to another material, we could still call it carpentry. Yet, if we leveraged an entirely different set of techniques for processing the material - e.g. using CNC and lasers to shape wood - it would be difficult to call it carpentry.

I believe that OO is about the development process, about the expression and techniques of human programmers, not about the product or the material. Therefore, OO is about patterns.

to model a program as a set of collaborating objects

I don't believe the connotations of "collaboration" are entailed by OO. At least, that aspect seems very weak in comparison to certain other paradigms like multi-agent systems, blackboard metaphors, and concurrent constraint programming. If collaboration was the essence or purpose of OO, I think OO would be stronger at it. OO systems tend to be built of passive objects rather than agents that actively collaborate. Rather than the "social" model implied by "collaboration", the materials and patterns of OO are more oriented towards an understanding of programs as physical objects.

Of course you should invent

Of course you should invent new patterns when you need them.

Well. I admit it's one of those things where I am probably mostly wrong, but there's something to it. No biggy, just one of those quirks I live with.

I don't believe the connotations of "collaboration" are entailed by OO. ... OO systems tend to be built of passive objects rather than agents that actively collaborate.

I think you're wrong. The overhead of implementing languages which pass messages is just too large, and Java/C/C# now dominate OO software development; therefore we now colloquially mostly 'call methods' instead of 'pass messages' (the latter being the more popular phrase from, say, the Smalltalk world and OO design). Stated differently, to an OO designer 'calling a method' is a lousy misnomer for 'passing a message.' Moreover, the fact that objects are mostly passive doesn't imply that they don't cooperate.

The message passing model, or mindset, from Smalltalk dominates the OO field. Which in turn dominates software design.

The overhead of implementing

The overhead of implementing languages which pass messages is just too large

Work-stealing implementations and buffers can pass messages very efficiently and even leverage cache locality effectively. It is not unusual to get the mean message passing overhead down to less than 10 CPU cycles per message (albeit at some cost to mean latency).

Further, a good compiler can inline the stateless actors and reduce some stateful elements (e.g. actors modeling stateful cells) to simple lookups (and queue updates for later processing, since we don't need to "receive" those updates right away). Recursive messages can be unwound to a finite depth. In my experience, we can typically eliminate the vast majority of messages at compilation or JIT, thus eliminating many latency and buffering overheads (while remaining compatible with message-passing semantics), if we wish it.

But I think you misunderstood. Even actors are passive objects by my understanding. Sure, each one is concurrent, but actors are still passive between processing messages. Any initiative or purpose for an actor must be applied from somewhere else.

the fact that objects are mostly passive doesn't imply that they don't cooperate

"Collaborate" and "cooperate" don't connote quite the same concepts. There are some meanings for 'cooperate' that might apply to some OO systems.

Here is an exercise: look at an engine for a vehicle. Certainly, it makes sense to use the word concurrent to describe the operation of different parts. Maybe even synchronous (not to be confused with sequential). But which elements are collaborating? Which are cooperating? Does it even make sense to use these words?

Work-stealing

Work-stealing implementations and buffers can pass messages very efficiently and even leverage cache locality effectively. ...

Okay. I am tempted to respond, but I think we dive too much into the specifics of OO language implementation.

But I think you misunderstood. Even actors are passive objects by my understanding. Sure, each one is concurrent, but actors are still passive between processing messages. Any initiative or purpose for an actor must be applied from somewhere else.

No, I don't think I misunderstood. UML even makes the provision to state which classes are 'active', which is part of an elaborated design. No, it certainly isn't true that OO assumes all objects/classes to be 'passive'.

"Collaborate" and "cooperate" don't connote quite the same concepts

Fine, I meant 'cooperate' as in 'collaborate'. Typo. ;-)

[ Again, I mention UML since it, to me, epitomizes the OO paradigm, or OO 'thinking'. ]

Patterns are like cooking

Patterns are like cooking recipes: one's personal taste for the target meal is likely almost as important as the accuracy and ease of use of the recipe proper.

And by "one" I mean either an individual programmer or the full dev team he/she is in, along with their culture and various kinds of legacies (current codebase, libraries, etc).

To me it is kind of intriguing to see people still struggling to give a definition of OO, formal or not, striving to get the broadest possible acceptance. It always seemed to me that OO's arbitrary choice to model programs after rather biased notions of objects, types, modules, etc. was somehow less important than the actual, higher-level claims and pride for the objecthood support:

composition, reuse, separation of concerns and contracts (so called) via official public interfaces and encapsulation, and so on.

OO as hyped up 30 or so years ago in the mainstream (just my historical theory) appealed to a broad audience already a bit aware (though maybe not deeply enough) of the evil of uncontrolled side effects, an audience that found the "receiver . message ( arguments )" syntactic scheme very appealing through a (I suspect) mostly unconscious mental association with: subject . verb ( complement )

"How cool. See how natural it is ?! That just can't possibly be wrong to program this way !" (okay, mocking a bit)

Back to patterns: some well-known and influential OO proponents have already stated, for quite a while now, that patterns can be good, and components can be too, and "good patterns" (cf. my remark about tastes above), when turned into components, are even better anyway... for what OO claims to be important and wants to fulfill.

Coincidentally, though, componentized patterns, contemplated with envy through the OO bias in designing and implementing software, also turned out NOT to be as easy (at all) to obtain as one first wished.

I share your cynicism about

I share your cynicism about OO. It certainly hasn't lived up to any of that marketing hype. OO code lacks justification for any claims of being more reusable, configurable, modular, maintainable, etc. than code developed with equal discipline in competing paradigms. It is difficult to justify any claim about OO if you can't even define it. And the "intuitions" offered by OO seem to mislead people to develop business simulators when they want something else entirely.

Well, in all honesty

I share your cynicism about OO.

Well, in all honesty, it was just my take, more as a Devil's advocate than anything else, since, whether I liked it or not, I most often had to accept being classified as an "OO programmer" if only to make a living, with few opportunities over the years, in the market jungle, to emphasize other "saner" skills or experience.

But that's fine.

It is difficult to justify any claim about OO if you can't even define it.

I got that. I sure didn't mean to say the OP is "wrong" to try. I suppose such efforts can still make sense.

If we're going to argue

If we're going to argue about definitions, we should try to make it clear that this is what we're doing. Definitions are judged differently than arguments.

Fair point.

Among the classics, there's always Meyer's "early" (definition), I suppose relevant to recall:

"Object-Oriented Design

We shall simply offer a definition of this notion: object-oriented design is the construction of software systems as structured collections of abstract data type implementations.

The following points are worth noting in this definition:

• the emphasis is on structuring a system around the objects it manipulates rather than the functions it performs on them, and on reusing whole data structures, together with the associated operations, rather than isolated procedures. Objects are described as instances of abstract data types; that is to say, data structures known from an official interface rather than through their representation.

• The basic modular unit, called the class, describes the implementation of an abstract data type (not the abstract data type itself, whose specification would not necessarily be executable).

• The word collection reflects how classes should be designed: as units which are interesting and useful on their own, independently from the systems to which they belong. Such classes may then be reused in many different systems. System construction is viewed as the assembly of existing classes, not as a top-down process starting from scratch.

• Finally, the word structured reflects the existence of important relationships between classes, particularly the multiple inheritance relation."

(Citing: SIGPLAN Notices, 1987)

I've always found Meyer's definition interesting not so much because of its intrinsic value (or flaws thereof), as debatable as others I suppose, but rather because of Meyer's strong and consistent commitment to his own proposed definition, a posteriori, both as a language designer and as a language implementor.

And isn't a consistent commitment an interesting dual of argumentation, one that shouldn't be overlooked either?

Anyway, it turns out the resulting OO language, IMO, was / is far from being among the worst that OO proponents may have seen, to say the least... but (sadly enough) widespread acceptance (use) by the industry definitely seems to be very much an independent animal to deal with.

Why do you believe patterns

Why do you believe patterns and design indicate value judgement?

The patterns themselves can be judged in terms of consequence and context, but certainly not all patterns are good ones. Anti-patterns are also quite prevalent. For paradigms, the good generally comes with the bad.

It seems to me that people

It seems to me that people are reasonably happy with the definition of "object" (although taking out mutable state is still controversial).

I suspect what's left in your definition is non-controversial, but what is missing is controversial. This is the typical challenge with reductionism :) Patterns may simply be a good test of the definition. Can I construct a full OO language, by the definition, and test whether it fails to support inheritance, factories, etc.? What does that say about the OO language and the definition of OO? Likewise, what if I can construct a non-OO language (by the definition) that can still perform the expected patterns? [ Universal Turing machines and fancy program analyses... uh oh. ]

My proposed definition was minimalistic: it's OO if it "supports" objects. People have had trouble with the precise meaning of "supports". I'm not sure what to do here. One problem I have with the idea of including patterns or design is that a definition should not imply a value judgment, in that you could do bad OO programming and it would still be OO.

The lambda cube is a great example of where you fall on a spectrum being neither good nor bad. Consider the case of inheritance. I don't follow the literature on distilling the essence of OO etc., but a relevant lesson from meta-object protocols / faceted values / proxies / mirrors is consistency: I would expect a scale of inheritance to be the ability to read/write otherwise encapsulated object environments / state. (This seemingly borders on the expression problem.) If an object can do it, so should an object extension. This really gives two ways of moving up/down the spectrum: via functionality (is there first-class / second-class X) and via consistency. There are trade-offs: making something first-class may challenge consistency elsewhere.

Do you think it's possible to define OO in an objective way that is useful and would accurately describe what it is?

It's a moving target in both the notion of utility and accuracy: PL interests change and new languages will probably keep showing up (imagine talking about OO pre and post Self, aspects, ...). That doesn't mean it's a useless endeavor, though I am curious as to what you hope to get out of this.

A typical exercise for testing kernel semantics is to express richer semantics as sugar. This case is a bit harder in that you only have a meta-semantics (well, not even). Maybe the current approach is inherently controversial?

It's culture, man

Do you think its possible to define OO in an objective way that is useful and would accurately describe what it is?

That depends what you mean by 'objective' and 'accurately describe', but I suspect that as far as your purposes are concerned, my answer would be no.

Basically, I think that OO is, more than anything else, a cultural transmission or tradition of praxis, with all that that perspective implies. Precisely what technical foundation is recognized as epitomizing this culture is variable in both time and space. Like other cultural traditions, OO is prone to branching and splitting, rejecting/revising its past while claiming to embrace it, fabricating definitions for itself that are baldly in tension with what it actually embodies, and so on and so forth. This point of view allows us to make sense of the fact that different branches of the OO culture may not even recognize each other, today's OO may insist that the OO of the past (or, for that matter, of the future) is heretical in any number of ways, it may claim to have always been a way that it has obviously only recently become, etc.

None of this should surprise us: an even cursory study of the history of religion (even restricted to a single family of religions, say Christianity or Buddhism) would be most instructive here. And as in the case of religion, I don't at all mean to say that individual claims made by particular people at particular times cannot be subject to judgment. Obviously if someone claims in the name of religion that humans walked the earth with the dinosaurs, well, we can certainly look at the evidence. But to view such contingent doctrinal theorizing as the "essence" of Christianity is the height of historical naivete. Are today's doctrinal arguments relevant to the rest of our public life? Yes. Are they related in some complex way to the entire history of the traditions from which they emerge? Yes. Do they define those traditions? Absolutely not.

So, to bring this back to earth: no, I don't think that any technical definition of OO will be sufficient. In fact, I think any such technical definition is just yet another contingent elaboration of history-up-to-now, one more layer of sedimentation. All traditions have their essentialists. (Of course, all traditions also have their relativists, so I don't claim to be able to escape the mire either.)

I don't expect this messy point of view to be pleasing to the rational, mechanistic and formal view which is so often a defining characteristic of "nerds." (And I ought to know; as a proud nerd I have a very strong streak of that myself, and often find my own point of view somewhat distasteful.) But I do think that a useful definition of OO will be first and foremost cultural/historical, and will be able to survey the tools made/used by that culture, the tools rejected/ignored, the terminology and incidental fetishes (any definition of OO should also be able to account for the rise of "agile" methodology), dare I say mode of dress and hairstyle? Such a definition would probably be completely useless for some purposes, but I do believe it would be very fruitful in general, and I believe as well that it would be more honest.

I don't know, it seems Will

I don't know, it seems Will is on the right track. There is a common theme to all OO languages, and that is a certain type of procedural abstraction resolved dynamically in some way based on the object. Class-based, untyped, prototype-based OO languages all share this element, albeit with the resolution occurring in different ways.

Parting ways

We may have to agree to disagree here. I don't mean to say that his definition is bad or useless. I think it's a perfectly reasonable and useful definition for some purposes. But as we've seen in the rest of this thread, I think it's doomed on the one hand to exclude some languages/systems/people who claim to be doing OO (and have a reasonable historical basis for that claim) and doomed on the other hand to constant questions like "But this Haskell code seems to fit your definition, so why isn't Haskell OO?" The only way to resolve that is by drawing ever finer and more arbitrary distinctions, or else blowing hard against the wind and saying "Well, the definition is king, so I guess Haskell must be OO!"

Definitions of programming paradigms are like political maps. Some of the borders may correspond to geographic, ecological or cultural features that have always been there (rivers, for example, or different spoken languages), and others may through long-term influence come to be mirrored in the geography, ecology or culture (walls or fortifications, for example, or deliberately maintained wasteland, or the emergence of new dialects). Lines on maps have power to change reality, so we should be careful where we draw them. But we also deceive ourselves if we mistake the map for the territory.

On a purely technical level, incidentally, I don't completely buy your claim that "There is a common theme to all OO languages, and that is..." What about languages that use multi-methods? What about the actor model? Must we then regard actors as objects, plain and simple? Of course lots of OO languages all look roughly the same at this particular point in history, and even more use superficially similar jargon to describe radically different models. But I think "all OO languages" is pretty broad...

I'd say multimethods fit

I'd say multimethods fit into OO too, as they dispatch on the runtime value. Is this not considered OO?

Are actors not generally considered objects? Some people certainly think so. The capability operating systems (KeyKOS, EROS, etc.) expose an object-oriented development model, but the objects are processes that communicate via IPC, i.e. actors. I don't have any statistics on the prevalence of this actors=objects view though, assuming that's relevant.

But as we've seen in the rest of this thread, I think it's doomed on the one hand to exclude some languages/systems/people who claim to be doing OO (and have a reasonable historical basis for that claim)

The broad definition is pretty inclusive, so I don't recall anything mentioned that's incorrectly excluded.

Excluding type classes is trivial simply because they are not first-class. But this system supporting first-class type classes is definitely object-oriented. I don't know, it doesn't seem nearly so vague or ill-defined in my mind.

Re: Multimethods

I'd say multimethods fit into OO too, as they dispatch on the runtime value. Is this not considered OO?

Multimethods, at a meta-level, are quite difficult to explain with OO. Where do these methods come from? How does the system peek into my "opaque" objects and know which behavior to dispatch? Who is receiving the lists of arguments? What black magic is this?

They aren't impossible to explain, fortunately. I've modeled multimethods a few different ways - with a registry of methods in a shared space (spaces being modeled with objects), and another time with a tuple space for the parameter lists. But to register the methods or establish precedence for agents still requires an explanation... generally in the form of a global discipline or an interpreter.

Seems a bit dubious to say "multimethods fit into OO". But I think it would not be controversial to say that multimethods are more than expressive enough to easily model OO systems. I.e. "OO fits into multimethods".

One approach

Slate attempted to resolve this by implementing multimethods using extended method dictionary tables that included "role" metadata (encoded as bitmasks), essentially indicating for each entry which positions the current table owner "fit".

So, logically, the method dictionary became a map from symbol+position to method.
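A rough sketch of the idea (hypothetical Haskell, with a plain map standing in for Slate's bitmask-encoded role tables):

import qualified Data.Map as Map

type Selector = String
type Position = Int

-- Each object's dictionary is logically a map from selector+position
-- to method: the object declares which argument positions it "fits".
newtype Obj = Obj { dict :: Map.Map (Selector, Position) ([Obj] -> Obj) }

-- Naive dispatch: ask each argument whether it plays a role at its
-- position for this selector, and take the first match. (A real
-- system would rank and intersect the candidates.)
send :: Selector -> [Obj] -> Maybe Obj
send sel args =
  case [ m | (i, o) <- zip [0 ..] args
           , Just m <- [Map.lookup (sel, i) (dict o)] ] of
    (m : _) -> Just (m args)
    []      -> Nothing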

Dynamic dispatch

Perhaps I'm missing something, but doesn't the characterization

... means that different objects can implement the same operation name(s) in different ways, so the specific operation to be invoked must come from the object identified in the client's request

apply just as well to static dispatch as it does to dynamic?

Dynamic Dispatch

Perhaps it is not worded as clearly as it could be. But the idea is that the operation actually comes from the *runtime value* of the object identified in the request. Since "the object" is not known until runtime, there is no way that this kind of dispatch can (in general) be performed statically. Can you suggest a rewording that would make it more clear?

(This is exactly the same as a first-class function, which is dispatched dynamically to the actual closure value at runtime. The particular code to be called when invoking a first-class function cannot be known statically)
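A trivial Haskell illustration of the same point (names hypothetical):

-- Two functions with exactly the same type but different behaviour.
double, square :: Int -> Int
double x = x + x
square x = x * x

-- 'apply' cannot know statically which code will run; the choice
-- comes from the runtime value of its first argument.
apply :: (Int -> Int) -> Int -> Int
apply f x = f x

pick :: String -> (Int -> Int)
pick "double" = double
pick _        = square

main :: IO ()
main = do
  name <- getLine              -- the function is chosen at runtime
  print (apply (pick name) 3)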

Dynamic dispatch

My feeling is that you have to talk about object references and object values in order to make this clear. By polymorphism, an object reference of type T can assume values of any subtype of T; with static binding, it is the type of the object reference that determines the implementation of a method, and with dynamic binding it is the type of the object value. It's only "exactly the same as a first-class function" in a language with mutable (function) references - the distinction is void in a language in which each reference is bound to exactly one value.

Dynamic Dispatch of Functions

Jeremy, I disagree with most of what you say here. By focusing on subtyping, you confuse the issue and miss the simple basic point: there can be two different objects or two different functions that *behave differently* even though they have exactly the same type. Thus "object polymorphism" can occur even if there are no subtype relations involved. It is the "value : type" relationship that is important here, not the "type : type" relationship. Discussions of OO have often promoted this confusion by suggesting that object polymorphism is somehow tied to inheritance. But in fact it is not. The other source of confusion is when people consider classes as types. Doing so exposes representations and blends notions of existentials and ADTs into the more pure object-oriented style, which is based only on interfaces as types.

Secondly, I assert again that dynamic dispatch in OO is analogous to calling a first-class function, even without mutable function references. This is because functions can be passed to other functions, and at that point it cannot (in general) be known statically what function will be invoked by a given call. Mutable references are not required.

Secondly, I assert again

Secondly, I assert again that dynamic dispatch in OO is analogous to calling a first-class function, even without mutable function references.

Indeed, and to formalize this somewhat: Type Inference for First-Class Messages with Match-Functions. So polymorphic sums = messages, and a pattern-matching function = dynamic dispatch in an object.
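In the spirit of that formalization, a tiny sketch: messages are a sum type, and each "object" is a function that pattern-matches on them, so two objects with the same type can respond to the same messages differently (names hypothetical):

data Shape = Area | Perimeter  -- the messages

-- Two "objects" of the same type implementing the same operation
-- names differently: dynamic dispatch via pattern matching.
circle, square :: Double -> Shape -> Double
circle r Area      = pi * r * r
circle r Perimeter = 2 * pi * r
square s Area      = s * s
square s Perimeter = 4 * s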

Concurrent objects are Actors

How about concurrent objects are Actors (as in Actor Model) :-)
* See overview publication at http://arxiv.org/abs/1008.1459
* See overview video at http://channel9.msdn.com/Shows/Going+Deep/Hewitt-Meijer-and-Szyperski-The-Actor-Model-everything-you-wanted-to-know-but-were-afraid-to-ask

Old fashioned

Since Hewitt brought this thread back.... I'd be fine with Marco's class-based definition (above). Or something like "An object is a data structure and associated routines for data processing using that data structure". That being said, I read the discussion above about dropping inheritance. I like PIE (polymorphism, inheritance, encapsulation) as the essentials of object orientation. I wouldn't consider any language without inheritance to be object oriented.

So possibly I'm missing the point.

Re: Inheritance

Inheritance seems peripheral to OOP. Inheritance is a feature of type systems or some (allegedly) convenient object construction models. Inheritance is neither a property of objects nor essential to any OOP design pattern.

Inheritance

Inheritance is neither a property of objects nor essential to any OOP design pattern.

We disagree on this one. I think the has-a vs. is-a distinction is the core OOP design pattern. Classes are in a hierarchy and objects are instances of classes.

I guess what would you consider a commonly thought of OO language that doesn't have inheritance?

Has-a and Is-a

Polymorphism supports the 'is-a' property in every relevant technical sense, and does not depend on inheritance. Encapsulation provides the 'has-a' property - the ability to hold references to hidden objects. Even if you consider those the "core OOP design patterns", it isn't clear how that serves in defense of inheritance as an essential OOP feature.

There are many OO and actors languages that either lack inheritance or marginalize it (e.g. relying on dynamic duck typing). I think of E and JavaScript off the top of my head, but I remember reading about more of them.

I do acknowledge that inheritance is correlated with OOP, and there are many variations on inheritance (e.g. traits). But there are also non-OOP systems with variations on inheritance (e.g. frame technology, and some variations on logic programming). I think inheritance is pretty much orthogonal to OOP, and is more about composing concepts declaratively.

Encapsulation provides the

Encapsulation provides the 'has-a' property - the ability to hold references to hidden objects.

What? HasA is taxonomic: it allows us to describe objects that have other objects (and is not novel to OOP at all). Encapsulation is a separate thing, and we can even apply it to IsA in certain cases (as in dreaded private inheritance!).

Inheritance is not orthogonal to OOP; it's more of an optional mechanism in support of it. Frame systems, as classically defined by Minsky, are very object oriented, even if they are also rule driven.

E and Javascript

OK those are good examples.

I wouldn't consider JavaScript an object-oriented language. I consider it functional: things like first-class functions and map as an iterator. JavaScript makes use of Microsoft's distinction between "object based", which they applied to Visual Basic 6, and "object oriented", which applied to VB.NET, C++, J++... AFAIK JavaScript's own designers consider it "object based", supporting:

a) data encapsulation
b) data binding and access mechanisms
c) automatic initialization and garbage collection of objects
d) operator overloading

but not:

e) inheritance
f) dynamic binding

where (e) and (f) are required for object oriented.
________

Polymorphism supports the 'is-a' property in every relevant technical sense, and does not depend on inheritance.

I'm not sure if I agree with that. Polymorphism is about the behavior of functions, so Haskell, for example, I think has good polymorphism. I will agree that good polymorphism can get you good dynamic binding, but what you can't do is override methods (excluding type classes, which IMHO are mostly OOP). I have a tough time even defining OOP without inheritance, so going there I'd say the key is the ability for base-class methods to be overridden.

In terms of duck typing: yeah, that allows you to override. A language that was otherwise object oriented in feel and that made use of duck typing to implement inheritance, I wouldn't have a problem calling object oriented.
____

In terms of it being essential I'm sort of waiting for your argument. The very first basic examples:

Corolla 2348769 is an instance of Corolla which is a Toyota car which is a car which is a vehicle. If you can't do that, then I don't see how you are doing object oriented programming at all.

In practice, OOP is not

In practice, OOP is not about domain modeling, but is rather about program modeling - describing and organizing a program with metaphor of a collection of physical components (this is a stack, that is a queue, there is a mailbox and a pipe and a state-machine). Even in game and simulation development, best practices would have you modeling entities in terms of what you can do with them (movable, renderable, clickable, animated, destroyable).

Unless you're writing a simulator - badly - I think you would not model your Corolla 2348769 the way you describe. It is my impression that OOP is traditionally taught poorly, with hierarchies of animals and vehicles that have nothing to do with object oriented programming in practice. It is unfortunate that people unthinkingly return to such examples when explaining OOP, despite their irrelevance.

In OOP practice, the technically relevant 'is-a' relationships are those that describe roles, interfaces, or contracts. Organizing code with hierarchical inheritance relationships is unnecessary to achieve technically relevant 'is-a' relationships. Polymorphism is essential.

Polymorphism is also an overloaded word. The meaning of polymorphism in OOP refers to "the ability of objects belonging to different types to respond to method, field, or property calls of the same name".

typeclasses

OK, good, that's an excellent observation. You are right: I'm using the definition of domain modeling, not program modeling.

And you are right that OO in practice tends to be more about program modeling. So that's a good argument for why inheritance isn't fundamental: this distinction between domain and program modeling. I'm not convinced yet, but I do see your point. But what would you consider the distinction between your functional definition "clickable" and type classes? I.e., "X has a click method" and "the click method applies to all Y" seem like just looking at the same problem from different directions. Moreover, type classes seem to capture more directly the "what you can do with them" rather than the "what they are".

I have to think about your distinction a bit more, but the feeling that you are describing type classes, not classes in the OO sense, is the first thing that comes to mind. And that's what initially worries me about accepting the program-modeling vs. domain-modeling distinction.

Typeclasses, Collections, Existentials

Typeclasses by themselves don't offer OOP polymorphism. I.e. you can have a Clickable type class, but you cannot represent a list or collection of heterogeneous Clickable instances. The additional requirement for OOP-style polymorphism is existential quantification.

While Haskell extensions can represent existentials, existentials remain inconvenient to express and use widely in Haskell programs.
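For instance, a minimal sketch using GHC's ExistentialQuantification extension (the Clickable class here is hypothetical):

{-# LANGUAGE ExistentialQuantification #-}

class Clickable a where
  click :: a -> IO ()

-- Existentially wrap any Clickable, forgetting its concrete type.
data AnyClickable = forall a. Clickable a => AnyClickable a

-- A heterogeneous collection, impossible with the bare class alone:
clickAll :: [AnyClickable] -> IO ()
clickAll = mapM_ (\(AnyClickable x) -> click x)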

Existentials through GADTs

A natural presentation of existentials in ML languages (Haskell included) was suggested in 1992 by Läufer and Odersky, An Extension of ML with First-Class Abstract Types.

As far as I know, they were not added as-is to existing ML implementations, but have made a comeback as a subset of GADTs (which combine existential types with type equality constraints). While not part of the official Haskell specification (either 98 or 2010, I believe), GADTs are commonly understood as a stable part of the GHC Haskell language, which is what people actually use, and have recently been added to OCaml as well (where existentials were previously accessible as first-class ML modules, which were heavier to use).
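In GADT syntax, the same existential wrapper reads as follows (a sketch; the Clickable class is again hypothetical):

{-# LANGUAGE GADTs #-}

class Clickable a where
  click :: a -> IO ()

-- The constructor packs the Clickable dictionary, and the type
-- variable 'a' does not escape, so this is an existential type.
data AnyClickable where
  MkClickable :: Clickable a => a -> AnyClickable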

I think this is a satisfying presentation of existential types in functional programming languages of the ML family. I make no claim as whether this suffices for OOP programming (presumably you'll want syntactic sugar to bundle together the several different aspects of OO you're interested in).

Scala has existential types written explicitly `T forSome { type X <: Foo }` without a form of datatype wrapping, which allow implicit rather than explicit conversion to the existential type. This may be an even better solution on the long term.

In my personal mental design space, "parametric polymorphism" and "subtyping" are two knobs that can be either explicit (annotated) or implicit (inferred) in your language, knowing that having both implicit tends to work badly (ML languages have implicit polymorphism and, sometimes, explicit subtyping; OO languages tend to have explicit polymorphism (generics) and implicit subtyping). I'm not sure whether existential types are a third dimension, or should be considered as a part of parametric polymorphism, or a part of subtyping.

I recall that there was a

I recall that there was a problem with encoding OOP with existentials + type classes, because it would lack some kind of extensibility, though at this moment I'm not sure exactly what it was. Does somebody know?

Can you elaborate on what you mean by implicit polymorphism? Would you consider dependently typed languages to have explicit polymorphism?

clickable

That's an interesting definition of OOP: type classes (methods) + heterogeneous data structures over a type class. What would you do with such a data structure in a strongly typed language like Haskell? You wouldn't want to do something like ls = map click cs, where cs are clickables, unless the output of click was always the same sort of thing, like IO (). I see it this way: you can't construct trees or lists of Haskell "clickables" because Haskell is semi-hostile to heterogeneity at the data level.

But if everything lined up you could do:

data Clickable = A A' | B B' | C C'  -- one constructor per concrete type

click2 :: Clickable -> IO ()  -- assuming click returns IO ()
click2 (A x) = click x
click2 (B x) = click x
click2 (C x) = click x

or just dump the type class entirely and define click directly on Clickable, etc.

In a dynamically typed language all this complexity about data structures would just go away.

And certainly, regardless of language, if the developer spent a lot of time tying clickable methods to clickable data structures I'd call that object-oriented design. But ultimately that still doesn't answer the question of why I should consider this an object-oriented language. I'm still not seeing what is wrong with the commonly used definitions.

Aliasing

Clickable is not the best example. As you mention, if the Click operation causes an update in IO, one could just as easily create a list of click-functions (all of type `IO ()`). Existential types are better motivated if you also plan to evolve the values, e.g. for `class Clickable a where click :: a -> T -> IO a` (enabling a change in a value when clicked at time T).

Note: I do not (and did not) suggest that OOP is "type classes (methods) + heterogeneous data structures". Where does encapsulation fit in that definition? What about aliasing?

I consider aliasing to be another essential feature of OOP. By 'aliasing' I mean the ability to update an object through one reference then observe the update through another reference. Aliasing is essential for design patterns that join multiple event streams or grant or attenuate authority through objects. Aliasing is weaker than object identity, but does require a pervasive model for state and effects.

(Aliasing is not explicit in many definitions of OO. But common definitions of OO date before Haskell and other languages began to make pure functional programming feasible and practical.)

New

Many people would consider the ability to dynamically create `new` objects to be essential for OOP - i.e. to create a `new Foo()` that has unique state from all the other instances of Foo. I used to be one of those people.

But I've learned that if I have aliasing without `new`, I can still securely partition and logically arrange views of static state resources. Metaphors of discovery and composition replace dynamic 'creation', and are sufficient for every OOP design pattern I've attempted (even capability security patterns). In many ways the resulting discovery-and-composition metaphors seem more natural, having nice conservation properties: we can't ever create objects from thin air in the physical world, and 'objects' of the physical world are really just logical views of sensory data anyway.

Disfavoring `new` also helps with orthogonal persistence, live coding, time-traveling debuggers, plugins and extensions, and other features that require cross-cutting access to application state. I've begun to consider `new` to be a harmful feature of many OOP languages. Today I favor aliasing of external state and stateless objects observing and influencing a stateful grid, though much work is needed to make these techniques more convenient and efficient.

Models vs. Language Features

Concurrent objects need to be modeled using axioms and denotational semantics as in the Actor Model (see post above).