Why are objects so unintuitive?

*Quick note: I don't have much formal training in PLs, so if I'm wrong, please correct me. For reference, I know Python, C++, C, Java, OCaml, Lisp, and Prolog (in that order of familiarity).

The quick one liner:
Why are objects that we use in programming so vastly different from real-world objects?

The slow multi-liner:
Human languages have the ability to talk about the state of the world, and all of them contain nouns, which, rather than describing one particular thing, indicate a set of things that share a certain property. Cars can be driven; boxes are containers; speakers produce sound. People from different cultures with different languages, even with differences in nomenclature, typically have common words that describe sets of things the same way. In addition, modern science has found precise mathematical relationships between classes of things. All this to say that there is such a thing as a "real-world object"; it is not arbitrary (and this suggests that there is a right way and a wrong way of describing things).

Most programming languages have a concept of objects, and they allow programmers to define objects and describe relationships between them. The idea of objects helped programming with type-checking, encapsulation, code reduction, etc. In fact, an argument can be made that the job of the programmer is to figure out a representation (real-world object -> code -> binary data) of a problem and a process to solve said problem.

Yet even with the number of programs being written and the problems about objects being solved, objects in programs seem to diverge rather than converge on a complete representation of real-world objects: an object from one project is different from an object from another project, and both are typically different from how a normal person thinks about the real-world object.

When the representation in code is the same as a person's understanding of a real-world object ("common sense" or common understanding), the person can process, reason about, and be productive with the code with ease (like.. integer object types). On the other hand, if the representation is not the same (unintuitive), then the person has to go through the documentation or look over each line to match up their internal representation with the representation in code, making programming difficult (any large-scale programming project).

Working Definition:
Class of Real World Objects: A set of things that has the same property. ex: a cardboard box (made from paper, rectangular, contains stuff)
Relation: A relationship between objects. A relation between object A and object B describes a constraint on some properties of A and B. ex. a ball is "in a" box (constraint on the location of the ball)
Inheritance: A particular relation between object A and object B where object A has a subset of the properties of object B. (an ISA relation) ex. a paper box "is a" paper object. a paper object "is a" physical object
Functions: a relationship between the parameter and the return value

Objects in OOPL:
Objects in object-oriented programming languages (the most straightforward implementation of objects) are defined by the name of the class, with a list of attributes and functions. Relationships between objects are implemented through inheritance, templates, functions, or attributes. Many problems come from the use of inheritance, where the base class has to contain a subset of the derived class's properties, so there is no implicit way of sharing attributes between derived classes if the attribute isn't in the parent class -- unless you restructure the inheritance tree until it no longer resembles any real objects (which requires constant refactoring with each additional class). Note that functions in the class definitions are relationships between the parameters, the self object, and the return value.
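A small Python sketch of that sharing problem (the class names here are mine, purely for illustration):

class PhysicalObject:
    def __init__(self, location):
        self.location = location

class CardboardBox(PhysicalObject):
    def __init__(self, location):
        super().__init__(location)
        self.material = "paper"   # paper-specific attribute

class Newspaper(PhysicalObject):
    def __init__(self, location):
        super().__init__(location)
        self.material = "paper"   # duplicated, because no "paper object" class
                                  # sits in the tree; adding one means
                                  # restructuring the hierarchy for every new
                                  # shared trait

print(CardboardBox("shelf").material, Newspaper("table").material)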

Objects in logic programming:
Logic programming (the most straightforward implementation of relations) defines objects in terms of relations. Atoms can be objects, and classes may be declared with single-term predicates.

brother(A, B) :- isHuman(A), isHuman(B), father(C, A), father(C, B), A \= B.
Defines the relationship "brother" for objects of class human (the A \= B guard just keeps someone from counting as his own brother).

ownsPet(A, B) :- isHuman(A), isAnimal(B), has(A, B).
Another example of a relationship. You can find who owns a pet named kitty (?- ownsPet(X, kitty).) and you can find the pets that a person John owns (?- ownsPet(john, X).). (Think about how to program this simple relationship in an OOPL -- adding an owner to class Animal and pets to class Human would require two functions and messed-up classes.)

Some logic programming languages are object oriented, and, with clear definitions of objects and relations, they come the closest to the way we think about objects. However, logic programming is (typically) done on Horn clauses and uses backtracking to solve the query (run the program). Therefore, you can't put all human knowledge into a Prolog program and still expect it to solve your query in a reasonable amount of time.

Conclusion
Object-oriented programming gives very good benefits to the programming environment (encapsulation, abstraction, code reduction, code reusability, type correctness), but common approaches limit the definition of objects, creating objects that no longer describe real-world objects and sometimes decreasing productivity in the process. Rather than thinking "what objects are there, and how do I use the relations between them to solve the problem," a programmer has to think "what objects can I implement that will give me all the benefits without any of the problems."
Logic programming approaches, on the other hand, come with a particular control process, which makes intuitive implementations of real-world objects infeasible to solve.

So here's the discussion: How do you improve object and relation definitions in a way that is "natural" and useful to programmers? Is there a provable reason why objects in programming languages are so unintuitive? Will they ever be intuitive?
[And a bonus question: what are the benefits/pitfalls of object-oriented functional languages?]


Real world objects are not PL objects.

One of the best leaps a programmer makes on the path from newb to godhood is realizing that 'objects' don't map well to real-world objects. Programs work better, designs are cleaner... code sucks less when an object in code represents the embodiment of a singular concept, a singular set of invariants, rather than a real-world object. Is that a failing of language design? A quirk of calling it object-oriented programming rather than state-oriented programming?

Probably.

But back on point, since you seem to want a language where 'objects' do map to objects.

How do you improve object and relation definitions in a way that is "natural" and useful to programmers?

How do I personally improve them?

I start with changing the type system to a structural one. As you said, many of the problems come from the use of inheritance. The real world does not fit into a nice tree, and disparate libraries certainly don't integrate well enough together to agree on what the tree should look like. This is one of the key strengths of dynamic languages, and something that languages like Scala are moving towards.

It allows objects to be a subtype of another concept more easily and more naturally. It allows object composition/multiple inheritance more easily.
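As a rough illustration of the structural idea, here is a Python sketch using typing.Protocol as a stand-in (the comment itself names Scala and dynamic languages; the classes below are invented):

from typing import Protocol

class Drivable(Protocol):
    def drive(self) -> None: ...

class Car:                       # never declares any relationship to Drivable
    def drive(self) -> None:
        print("vroom")

class GolfCart:                  # e.g. from a disparate library with its own tree
    def drive(self) -> None:
        print("putt putt")

def road_trip(vehicle: Drivable) -> None:
    vehicle.drive()              # accepted because the shape matches,
                                 # not because of a shared ancestor

road_trip(Car())
road_trip(GolfCart())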

I've a few other ideas, but I'm not really as sure that they're good ideas. Moving away from the inheritance tree is, in my current estimation, a good idea.

No multiple inheritance in Java, C#, VB.NET

I start with changing the type system to a structural one. As you said, many of the problems come from the use of inheritance. The real world does not fit into a nice tree, and disparate libraries certainly don't integrate well enough together to agree on what the tree should look like. This is one of the key strengths of dynamic languages, and something that languages like Scala are moving towards.

Agreed, and not having multiple inheritance, mixins, or traits in the mainstream languages seems to be problematic. As you stated, the real world doesn't fit into a nice tree.

You are missing the point.

As you stated, the real world doesn't fit into a nice tree.

The idea behind objects is that they are *peers*. You do *not* model object-oriented programs as a tree!!!!!!!!!!

Never.

Ever.

Unfortunately, many people get this aspect of OO completely wrong. Ask any seasoned developer what they think of Model-View-Controller and they'll say it's good. However, quite often their code has an emblematic OO code smell: "Controller Trees".

There is a very, very, very important reason why we try to normalize our behavior substitutions to fit a tree structure, however. Provided that the objects in the problem domain are peers, then we can dynamically substitute the behavior of peers to get rich, run-time behaviors.

Compare this to frame-oriented programming, the antithesis of OO. Frames model programs as lattices. As a result, there are no constraints on what structure your object hierarchies can be in. How do you intend to typecheck something like that? Moreover, how do you assign responsibilities to subsystems and divide out tasks among team members, if your code is a Spaghetti-modeled lattice?

how would you do that?

how would you move away from the inheritance tree, if inheritance is where most of the strength of OO comes from? The good thing about OO is that you can treat the derived classes the way you do the base class, while adding and modifying some aspects. Multiple inheritance helps, but are there alternatives?

I've a few other ideas, but I'm not really as sure that they're good ideas.

care to share? I'd love to hear any + all suggestions.

Also, I realized I'm making an assumption that if a person's idea of an object = real-world object = object in code, then it will be easier for the programmer to code, code will be reusable, things will go smoothly, and the sun will shine. Which may or may not be true... and is a bit related to your first point.

There is a difference

There is a difference between inheritance (including the contents of a base class) and subsumption (being able to use a subclass where a base class is expected).

They often coincide, but need not necessarily be 1:1. You said you use python. Objects there need not inherit from some interface to be used with a method, even though the method presents a sort of interface (supply these members or else!). Some type systems' traits or mix-ins do stuff like that statically.
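A tiny Python illustration of that difference (the names here are made up):

import sys

class StringBuffer:
    def __init__(self):
        self.data = ""
    def write(self, text):
        self.data += text

def log_to(sink, message):
    # 'sink' never has to inherit from any declared interface; anything
    # supplying a compatible write() satisfies this method's implicit
    # "supply these members or else!" contract
    sink.write(message + "\n")

log_to(sys.stdout, "hello")       # a real file-like object
log_to(StringBuffer(), "hello")   # an unrelated class with the same member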

care to share?

Eh, perhaps after a night's rest. For now:

I think units of measure are good (though I'm so-so on this implementation).

Seconded

Real world objects are not PL objects

That's crucial. I think that the most important task when teaching OOP is disabusing people of the idea that objects should be based on real world objects. This myth, coming from OOD, is one of the things that makes OOP as terrible as it is.

Real world objects are not even "real world objects"

Really, they're not. Before even getting into a discussion about OOD, I would take issue with the assumption that the world naturally divides itself into discrete "objects" with natural properties. Not that these distinctions are entirely arbitrary, either.

For a computing-informed perspective on this, I recommend Brian Cantwell Smith's On the Origin of Objects. I know that this is not to everyone's taste, however...

but where went those online papers?

Like "God Approximately" -

"The world simply doesn't come all chopped up into nice neat categories, to be selected among by peripatetic critters - as if objects were potted plants in God's nursery, with the categories conveniently inscribed on white plastic labels."

previously on LtU

Edit - found them

Why is it a myth?

Ehud, I think you miss the point or are simply using your experience as allowance for hand-waving. Could you please share real world examples where you failed to use OO effectively?

Sure, there are edge cases where OO poorly describes a problem domain, but for systems integration it is extremely valuable.

In my experience, OO is really good when there is a stable description of the problem domain.

Alternatively....

Ehud: if you feel it will take less time, you can instead enumerate all of the circumstances you know of where it is possible in principle to use objects effectively. :-)

I'm not just being snarky. All of the mappings we have so far from real-world problems to software realizations are either bad (in the sense of imprecise and convoluted) mappings or map to pretty bad models (in the sense that the models are either devoid of semantics or have semantics too complex for the working programmer to really understand). OO "works" because it is the least-bad and least-incomprehensible mapping that has been identified to date.

That's not a bad place to be, and it's a fine reason to use OO programming, but it shouldn't be confused for an endorsement that OO is in any sense the "right" model for programming.

OO "works" because it is the

OO "works" because it is the least-bad and least-incomprehensible mapping that has been identified to date.

Hmm, it looks to me that it took off because it had plenty of trivial and concise examples giving the illusion of simplicity and plenty of code reuse and extensibility.

Contrast with examples discussing ADTs, where extensibility is often not even discussed, and any person not steeped in all the theory could easily reach the conclusion that OO is superior.

OO "works" because there is

OO "works" because there is no consensus or objective definition regarding what it means for a paradigm to "not work" (or, for that matter, to "work"). It is a word without meaning.

That, and effort justification.

Seriously, most OO programs I've seen spend most of their code in design patterns to escape their OO shell, and the rest in non-OO algorithms. Somehow, people point at this immense struggle to escape the turing tarpit, observe marginal success, and call it 'working'. I call it 'overhead'.

(To be fair, this isn't unique to OO. People do the same thing for FP, i.e. pointing at monads as though to prove FP 'works'. I've never understood this phenomenon; I cannot distinguish type-0 computation models by ability to model other paradigms. Only local reasoning and expressiveness distinguish paradigms.)

Stable Problem Domains

In my experience, OO is really good when there is a stable description of the problem domain.

James Gosling said something to the same effect a while back. But having a "stable description of the problem domain" seems to be more of a throwback to the days of closed, monolithic systems and waterfall designs.

Actually, I'm not bashing on OO, but Java-style OO (single inheritance, single dispatch, no extension methods). A language like Scala seems to more "easily" map from a domain to a model.

Possible things of interest

I always found "Psychological Criticism of the Prototype-Based Object-Oriented Languages" to be an interesting paper.

Subtext wants to be a programming system that mimics the way that humans "naturally" think about putting together code - an automated copy-n-paste system.

For me, the bottom line is that modern, mainstream OO languages have been sold to us as a bill of goods. There's really nothing "real-world" about mainstream OO. I believe Alan Kay lamented the fact that he didn't call OO message-oriented programming.

Message-oriented programming

I believe Alan Kay lamented the fact that he didn't call OO message-oriented programming.

I may have heard Alan Kay say that once, but I'm not sure. What he has lamented is that he coined the phrase "object-oriented programming", because he said that got everyone focused on the objects, which wasn't his intent. What he meant to get across was the power of messaging. He said what's important is the relationships between objects, and that the real abstraction is in the messages, not the objects.

In a couple speeches he kept using the Japanese word "ma", which roughly translates to mean "what goes on between things". He said the best English translation (though it's still rough) is "interstitial". His original vision for OO was that each object was in effect a computer. If you layer on the idea that any computer can emulate any other computer, that gives you an idea of what he was driving at, but the "computers" were supposed to be incidental. Messages are a representation of interaction. That's what's important. The objects are mere endpoints for that interaction. I use the analogy of the internet. Computers on the internet are just "boxes". What's important is what goes on between them, the interaction between the endpoints. The internet and OOP were formed in the same environment. The objective of each was to decompose what had been monolithically designed systems into finer grained models that would scale into massive systems without breaking.

To expand the point even further I saw a message he posted on the Squeak developer list which basically said, "You don't have to follow my model for objects (member variables and associated methods). Create your own. What's important is messaging."

If you really want to see what he was driving at you need to look at Smalltalk (or Squeak, its modern version). Some of his ideas were adopted in modern OOP dev. systems, but the important ideas were lost.

My sense of what he was getting at with OOP was ultimately to create a system where people could create their own languages to model their own systems. If you look at Smalltalk it's very easy to create your own DSL. Objects and messages were a linguistic infrastructure for achieving that. That's just my read of it. I could be wrong.

Another important thing to note is that Kay did not consider Smalltalk and his ideas of OOP to be "the final answer", not by a long shot. He's used some historical analogies for it, like "a minor Greek play", or "Gothic architecture". These are very old by modern standards. The problem he says is that just about everything else that's used in what we call modern programming (even OOP as it's popularly known) is analogous to things which are much older and archaic. He uses analogies to ancient Egypt and pyramid building for them. What he's long hoped is that someone or some group will come along and "obsolete" what he's created, in other words create a new architecture that is so good it eclipses what he helped create almost 40 years ago. He's said with the exception of Lisp (which he calls the most beautiful language yet created), which predates Smalltalk, he hasn't seen that yet.

Thanks for sharing this Kay

Thanks for sharing this Kay gem, I've not seen it before.

In particular, I've never heard him say the following before:

I would say that a system that allowed other metathings to be done in the ordinary course of programming (like changing what inheritance means, or what is an instance) is a bad design. (I believe that systems should allow these things, but the design should be such that there are clear fences that have to be crossed when serious extensions are made.)

The fact Kay realized this fine point of design in the late '60s (according to him) is why he is a Turing Award Winner.

I know Lisp programmers who even today don't understand this point - their code is succinct but the API has a massively unnecessary learning curve due to unclear boundaries.

Sometimes my coworkers object to me paying extraordinary attention to detail about what the boundaries are. However, if we don't pay attention to boundaries, we may as well all be Netron Fusion COBOL programmers munging VSAM records and EDI data formats.

and it has all fallen on deaf ears?

anybody know of things which could reasonably be said to be excursions along the path of the 'ma'?

Listen to your 'ma'

Some things of 'ma':

  • process configuration languages in general (including dataflow languages)
  • protocol verification, consistency verification, etc. involving multiple processes and more than a single message+reply
  • transparent persistence
  • location transparency or translucency
  • process accounting & distributed policy management
  • resumable failures, especially in a distributed system; e.g. ability to request a renewal of an expired security certificate then immediately continue operation without restarting
  • service registry, discovery, and matchmaking based on needs, roles, and such; late binding and dependency injection
  • language-support for distribution or mobility
  • transactions in general, but well exemplified in distributed transactions (e.g. transactions in the process calculus, transactional actors model)
  • garbage-collection in general, but well exemplified in distributed garbage collection
  • versioned object graphs and pointers; support for reversal and branching, support for merging and achieving consistency
  • synchronization and coordination patterns; workflow patterns, streaming, temporal logic
  • security models (object capability model, type-based capability models, exclusive transfer of rights, etc.) and especially secrecy/privacy models (e.g. based on dataflow)
  • deadlock detection and prevention
  • graceful degradation, automatically switching object references to backup resources
  • automatic recovery/regeneration after temporary disruption or partitioning is alleviated (switching back to primary sources)
  • support for cascading failure / termination / fail-fast where desired (becomes a problem to solve once code is distributed)
  • any support for promise pipelining (passing promises between objects)

Exemplified by projects such as Orc and CeeOmega, support for 'eventual send' in E language, research into the 'kell calculus' on Oz/Mozart, at least one implementation of transactions for process calculus, and more.

My own language design is very heavily focused upon achieving useful properties in the interstitial space, involving distributed transactions, automatic distribution, regeneration after loss of a node, support for tunneling of 'context' information to support cross-object concerns like process-accounting, logging, and security... and so on.

When I first read Kay's words (years after he spoke them) I saw them as a confirmation of a conclusion I had already reached a couple years before. I favor the word 'interstitial', which I learned after reading Kay's comment; prior to reading it, I simply didn't have any single word to describe the challenges objects/actors/processes face across space and over time.

Regardless of whether Kay's words fall upon deaf ears or whether he's preaching to the choir, I would say there is still plenty going on in this area.

thanks!

thanks!

It is not a matter of deafness.

It is a matter of investing extra time up front for greater productivity later on.

The trick is to "make stone soup, not boiled frogs".

Today, one of my coworkers commented that my adding two new classes to the system to increase cohesion and decrease coupling was probably unnecessary, and he disliked the fact that it added an additional step he had to go through to complete a task. I basically responded that we should follow the Google Search Principle: hide the additional steps behind administrative tools that step us over them (and also allow us to step into the complexity when necessary). We can also provide reasonable defaults for dependency injection. For instance, either whitelist or blacklist security policies could be the default, depending on what we're doing. These defaults would be hoisted into some sort of schema authoring tool that defines the injections a framework defaults to.

Most tools simply do a poor job of adapting to complexity, but these tools are a direct reflection of what programmers ask the tools to do; I don't just want a tool that does rote tasks, I also want a tool that is self-forming, self-describing, self-reasoning, self-healing, self-*. I want a tool that programs with me, and yet also a tool that wants me to control it. Most tools control the programmer. Such goals are not easy; they are a labor of love, require lots of iteration, and I cheat in many ways to avoid big up-front design - "make stone soup, not boiled frogs".

As an aside, sometimes you just look at APIs done by big corporations and wonder what the hell they were thinking. For instance, .NET 3.0 is a train wreck from an OO perspective - I refer to it as "a mix of OO and architect's-kitchen-sink". All the metathings they introduced for dependency analysis should've been rolled up into a MetaObjectProtocol.dll. Instead, there are two duplicates of DependencyObject and DependencyProperty, each in their own assemblies (one for WCF and another for WPF). On top of that, the actual API has holes in it that allow the dependency analysis process model to be subverted (i.e., there is no DependencyPropertyUnsetValue singleton denoting "nil" in the dependency analysis subsystem, instead there is a DependencyProperty.UnsetValue() method that calls the object() constructor, and so an object instance is used to denote "nil"). And I still feel much of the guts could've been folded into the VM as a VM innovation. They also should've realized a Uri is the ultimate metathing, since it is what identifies every object in your system. Had they realized this, they would've completely re-written the Uri class and made a new one for .NET 3.0. Just my humble opinion, of course.

Re: reference to Kay's comment

Re: comment-48119…

Cross-reference: comment-62754.

is encapsulation a telephone game mistake gone wild?

so wait a second. if it isn't the objects so much as it is the messaging... but we've got this whole thing of OO where people are all like, "hey, encapsulation is a good thing!" and yet we end up with problems because of it (e.g. the OO side of the expression problem coin) vs. just having "data is the core, then algorithms around/on top of that." so while there might be some things to be said for OO (as long as we make sure to disallow inheritance :-) is it really just sort of a cosmic bad wild goose chase?

and, so, furthermore, uh, what actually is it that is so magic about messaging? is it something about abstraction and loose coupling (and possibly separate processes)? like, if it is all statically checked and doesn't allow for loosey-goosey "message not understood but lets just keep it between the two of us and not blow up the whole program" of smalltalk and ruby, then does it not have the key things Dr. Kay would advocate?

Encapsulation is a good

Encapsulation is a good thing for subsystems, ie. large systems of aggregate components. I think encapsulation in the small is not so good, as it impedes equational reasoning.

Encapsulation is not a bad

Details that are hidden can be changed. Details that are hidden can be secured. Encapsulation serves a definite purpose in modularity, composition, and security.

The mistake is not encapsulation.

A mistake is the belief that everything-is-an-object is a good thing. A mistake is the use of objects for data-types. A mistake is the use of objects for domain modeling.

what actually is it that is so magic about messaging?

The value in messaging is that it is not magical at all. It corresponds well to physical processes (albeit not quite so well as conservative logic), and yet it still provides an efficient model for computation and natural approaches to both synchronization and concurrency.

But I don't think that's what you're asking.

The direction OO took has been largely focused on modifying the objects (mixins, classes, inheritance, etc.) and treating objects as containers. A focus on messaging would have resulted in a different path. Instead of objects being given all those gaudy wrappings, the messages, queues, the relationships and links between objects would have received the features. OO would have gone a different direction - configuration of communications, dependency injection, dataflow and orchestration, automatic fallbacks, synchronization patterns, distributed transactions, and so on.

I believe that OO would have turned out much better with a focus on messaging rather than gaudy trappings for the objects. I'm not certain Alan Kay feels the same, but his statement above suggests he at the very least feels considerably more attention should be paid to the messaging and providing 'features' in the interstitial space between objects. I described several such features in the list above.

Encapsulation gives you a fiddling layer

Any time you introduce a symbol as an abstraction for some value, the symbol should only represent the value and not its internal details.

Also, I would never, ever suppress message not understood. It is part of protocol design and therefore boundaries to have a well-defined way to respond to failure. Failure is a critical event notice, and shouldn't be responded to in ad-hoc ways such as adding properties to a class, imho. If you have to add properties at run-time, chances are your design is not very stable.

If you want a decent introduction to messaging, check out "Advances in Object-Oriented Metalevel Architectures and Reflection", edited by Chris Zimmerman.

That being said, encapsulation is not always a good thing, and in some contexts premature encapsulation is possible. Continuation-Passing-Style optimizing compilers are a good example of how to break syntactic encapsulation, while preserving the semantics of the subexpression. Tail call optimization is fundamentally an encapsulation breaking procedure.

Any time you introduce a

Any time you introduce a symbol as an abstraction for some value, the symbol should only represent the value and not its internal details.

Note, this describes both parametricity and encapsulation. I think the former is essential, but the latter not so much.

At work, we use the phrase

At work, we use the phrase "Stick-built" to describe a solution we've not yet parameterized, because we've yet to understand the permutations and are still solving the problem in a mostly ad-hoc way. However, once parameterized, we don't junk the phrase encapsulation, because it still means a lot.

Encapsulation is what makes objects peers; a network of inter cooperating parts formed on the fly. The Law of Demeter captures this fairly well, and it effectively specifies **the effect of** constraining system parameters in various ways, some better than others for encapsulation purposes. [Edit: For a related discussion on this and trade-offs in class modeling, see Walter L. Hürsch's Should Superclasses be Abstract?]

Parametricity helps only in the case of inheritance, by forming type-based compiler-checkable contracts for awkward scenarios such as Template Methods. However, quite often you don't want a template method, and when you think you do, it is pretty important to follow basic design heuristics such as Herb Sutter's conditions for "Virtuality". To me, template method implies some ownership of a protected resource, and ownership implies control of the lifetime of that resource.

There are four levels of specification: defined, implementation defined, unspecified and undefined.

Visibility modifiers and security managers define what capabilities clients have access to.

However, encapsulation is not nearly as important a principle as data independence.

I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace...
- Donald Knuth, Interview by Andrew Binstock

The larger problem here is ABI compatibility and interoperability with old languages - it is what forces most premature encapsulation.

love the knuth quote

the idea that we want to extend our code w/out ever changing the source files bugs me a lot.

Nonetheless...

... the idea that we'd want to extend other people's code and services without changing their source files seems pretty straightforward.

There certainly needs to be a balance of the two - ability to modify source-code and modify a service you "own" on one hand, and the ability to extend systems without such modifications on the other.

Abstraction not Encapsulation, and not just data hiding either

The point of OO is not merely encapsulation.

Neither is it abstraction (which you can do with procedural and functional programming just as well).

Neither is it 'simply' about messaging.

So what is 'so magic about messaging'? As Alan Kay put it in The Early History of Smalltalk: doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming.

Regards,

Lance

eliminate state oriented metaphors

apologies for probably being dense, but for me at least, that could stand further explication :-) i mean e.g. where does that fall on the spectrum of say monads to dependency injection to getter/setters? (i shall go try to follow the lead of that quote online and see what i can learn.)

Data and Reality

This seems more like a data modelling/ knowledge representation pondering... The best book I've read on these kinds of things is Data and Reality. Very expensive on that link though!

Any program is only going to model the behaviour of an object that it cares about. If you think about where "real" behaviour comes from, it is the physical structure of a real-world object. A real-world object can sometimes be indistinct, and it may be that lots of people have dissimilar mental models, or even different mental models for the "same" object in different contexts. So, do we model individual quarks, leptons, and quantum gravity (hm...) and wait for emergent behaviour? No. We model observed behaviours that are relevant to the enquiry at hand. Anyway...

Logical programming languages do not make this problem go away. Who decides on the terms? The granularity? It's still just the guy writing a program to model a situation. This is what leads the "Big Knowledge Representation" guys to have massive fights over ontologies, and you can look at the progress of the semantic web to see how that's coming along.

Anyway... back down to earth... I found this "DCI" article quite interesting in at least trying to formulate a way to express the different roles the "same" object may play. Worth a gander.

Data and Reality

that book is one of the best books on information science ever written, and as such a wealth of information on how to design programmable systems in-the-large

If you note, I am the big heckler in response to the DCI article thread. That article was poorly written, and hopefully my feedback will sink into the authors' brains.

If you really think anything in that article is a new idea, then you need to crack open a good model-driven architecture object-oriented analysis text.

is the 1978 one okay?

.. that's the only one they have at my library =/.

Yes, I have the 1978 version

Yes, I have the 1978 version and am unaware of a different version.

I've heard Brian Cantwell Smith's book exhorted in various circles, and let me just say it is a mistake. It was not only a painful read, but filled with logical fallacies. One of these days I'll post an Amazon.com review ripping it to shreds.

The basic problems these days are in correctly applying theory

It is amazing how many APIs are poorly designed. Just look at .NET Framework 3.0 and WPF and WCF.

- You need a Ph.D. in distributed systems research just to understand some parts of WCF. (There are some good parts.)

- WPF has so many complex interdependencies that it basically violates every law known to man; it's a wonder MSFT ever finished building it.

Also look at J2EE. Poor "object-based" specs like these are the darlings of component vendors, because the spec always requires you to pay for some third party library to accomplish anything. B/c vendors love them so much, marketing $ get pumped into promoting these specs.

As a result of examples such as these three big failures, practitioners equate OO with these awful APIs/projects/specs.

However, OBJECTS *ARE* INTUITIVE. The hard part is understanding the problem domain, but that is what Brooks warned us about in No Silver Bullet, anyway!

Not always a good thing.

However, OBJECTS *ARE* INTUITIVE.

I agree wholeheartedly. The problem is that while objects are intuitive, they so often promote bad intuitions. It is very hard to assess how much this is a consequence of the model and how much a consequence of the developers. And of course, the "best" solution is the one that chooses the right impedance match between the two.

So you have the magnitude right, but the sign still worries me...

objects promoting bad intuition

I see two bad consequences of objects being intuitive, one cognitive and one social. The first seems ironic: given a tool easing effort to reason, many developers reason so much less they manage to retain flawed results. The same way work expands to fill available time, we also see effort contract to suit ease of tools.

People tend to excess optimism. When a problem is almost solved, folks decide they're close enough and stop trying. A bad social consequence I mentioned seems related: many folks aim to solve problems in plausibly superficial ways, so as long as it looks nearly right, this counts as "showing up" to garner social credit; failures are just bugs.

Sometimes I seem productive only because I try hard to think of all ways code can go wrong, and there are so many. Objects can simplify, with fewer ways things can go wrong. But we still have an obligation to aggressively look for remaining failure modes due to complex semantic interaction, including failures in an object model's runtime.

Perhaps elegance due to object clarity can mislead developers to ignore subtle issues, such as edge conditions in contracts.

Harmful Intuitions

I agree wholeheartedly.

Too many OO applications start their lives as domain simulators. This 'false start' is implicitly encouraged by such textbook examples as Person (Employee, Manager), Vehicle (Car, Tank, Helicopter), Animal (Cow, Duck). This 'intuition' is at the same level as wondering whether the little men living inside your television would like tacos and ice cream as a reward for all their hard work.

It doesn't help that OO is simply awful for domain modeling. You'd be better off with ye olde relational and procedural. OO makes it difficult to describe ad-hoc queries, to tweak rules, to introduce complex relationships, to support many simultaneous 'views' of a system, to integrate external data resources.

Developers eventually learn that OO is for modeling program elements, not the domain. But this only happens after a lot of black eyes, bruised knuckles, bitten arses, and learning to walk around in a clunky suit of boiler-plate mail. Developers battle with concurrency, IO, persistence, observer patterns and reactivity, reentrant callbacks, cache maintenance, etc. Boiler-plate is all those design patterns they end up using: functors, queues, futures, pipes, DI, visitors, exception tunneling, messages, et cetera.

It is unclear to me how developers come through imagining that OO actually helped. Perhaps it is some mix of cognitive dissonance, effort justification, selective memory, and blub paradox.

I'd prefer we find some good physical intuitions. My RDP intuits a sort of 'people watching each other' model, aware of the eyes upon them, capable of only local actions (like waving). But, if we must make a choice, I think it would be better to favor an initially unintuitive language than one with known problems. (But we certainly don't want counter-intuitive! By unintuitive, I mean that the language is mostly neutral for intuition.)

I'd even suggest the possibly unintuitive philosophy of making the difficult things easy and easy things possible. For example, with OO we could reject ambient authority, embrace the object capability model, and this makes a 'hello world' program and FFI integration more difficult but better supports various in-the-large robust composition properties. There are many things we can do along those same lines to support concurrency, coordination, persistence, partial-failure, graceful degradation, resilience, robustness, reactivity, real-time, modularity, distribution, security, upgrade, debugging, testing, integration.

Besides, we can build new intuitions in our languages. Invariants and local reasoning offer a justifiable basis for intuition. Ocaps quickly become intuitive, since it is easy to understand 'nobody can interact with this new element until I give them a reference'. By comparison, the OO modeling of programs is based on the unjustified claim that programs should be a concrete representation of a bundle/substance model that is volatile in our minds.

OO makes difficult to

OO makes it difficult to describe ad-hoc queries, to tweak rules, to describe complex relationships, to support many different 'views' of the system, to integrate external data resources.

That's OK, but most of the time you don't need it, just like there are no industry jobs for writing software that controls and coordinates swarms of robots, evolves the global brain, and other sophisticated SciFi and MIT/DOD technologies. General-purpose languages are used for general purposes, i.e. they shall facilitate 99% of the common tasks, which are fairly unsophisticated.

In my day job I work together with embedded systems developers who often lack software engineering 101 knowledge and practice (modularity, separation of interface and implementation, etc.). OO was the vehicle to stuff it down programmers' throats, like structured programming was before. Since this succeeded only halfway, the same is now being iterated with FP, with great promises, much snake oil, but only mild success as it seems.

I'd even suggest the possibly unintuitive philosophy of making the difficult things easy and easy things possible.

As Nietzsche said he hadn't found yet a single philosopher who lived at the heights of their own claims and demands. We have yet to see the very first non-trivial algorithm / design pattern of yours that solves an actual problem in language implementations, rather than a huge bag of ideas and an even greater bag of requirements which are tweaked and stacked up faster than anyone can follow, and which is dead expertise that finally fades into oblivion. This is not good philosophy, and it is no science, no art, and no engineering.

I agree that you don't need

I agree that you don't need domain modeling for most applications. I had thought that clear from the paragraph immediately before the one you quoted, but I guess it isn't because simulation is just one form of modeling. My point was that OO manages a double whammy - it encourages you to do the wrong thing and it is relatively poor at what it encourages you to do.

Anyhow, I'm not denigrating OO just because I want to replace it with something that helps solve problems in my day-job (which is, nominally, developing software for controlling and coordinating swarms of robots). I admit that's part of it. But I also believe that OO is worse even than its predecessor - the old mix of procedural programming and multi-process architectures where whole processes might be considered objects interacting on a shared bus or blackboard or database, or configured externally. I'd favor actors or Erlang processes or FBP to objects. At least with process per object, it's rather obvious you shouldn't be describing composite patterns and visitor patterns and so on with 'objects'.

As Nietzsche said he hadn't found yet a single philosopher who lived at the heights of their own claims and demands.

Nor will I, I am sure.

(It would be sad, indeed, if philosophers only ranted about maintaining the status quo. Those who demand systemic change will always face a barrier to living at the heights of those demands.)

But I'll raise you another quote. Ralph Waldo Emerson advises: Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day.

That is the advice I follow.

We have yet to see [...]

Are you feeling impatient?

Skepticism is, of course, warranted. OO and FP and CORBA and COM and many other paradigms and architectures have made promises then failed to deliver. You have nothing but my enthusiasm to suggest RDP can do better. People have a difficult enough time grokking FRP without having used it, so I don't expect you to grok temporal RDP just by looking at a description.

You seem frustrated that I provide a huge "bag of requirements". But I think it important to develop and share such lists for three reasons. First, it is difficult to improve languages after they are in use, and incremental design is a lot less flexible than up-front design, so it's very important to have the huge bag of requirements up front. Second, I'd like more eyes on the same design problems I'm trying to solve because, honestly, the problem is a lot bigger than I am. Third, I see a lot of people trying to improve things without any real direction. Can you tell me what it means for a paradigm to "work" in a manner that will usefully distinguish paradigms? Did you remember the corner-cases and scalability issues and partial-failure conditions and concurrency properties of real applications? A list of desirable properties, even described informally in terms of user stories, is one way to judge the worth of a paradigm. (Greenspunning, frameworks, and non-localized design patterns or self-discipline are symptoms of paradigm failure, IMO, but that's not a popular view.)

I'm quite happy to move slowly in the right direction. It would certainly be nice, IMO, if more people used compasses based on pervasive language properties, even if they don't precisely agree with me in terms of priority.

I see OO as a paradigm that moved with the speed of fad in the wrong direction. Developers spend too much time fighting the paradigm (observer pattern, visitor pattern, ORM), or subverting it (with global data, singleton pattern, stack inspection, multi-methods, etc.), or working around bad implementations (dependency injection, abstract constructor patterns, troubles with synchronous and asynchronous IO). E is as close to OO 'done right' as I've ever seen, and Gilad Bracha's Newspeak is also excellent, but even OO 'done right' scores poorly on the gamut of requirements and user stories I consider important.

Zenotic inertia

Zenotic inertia is asserting the impossibility of beginning because there is an infinite list of tasks which have to be performed before anything can be performed. It's not quite that I'm in a hurry, but I'm surely too impatient for analysis paralysis and bad infinity.

It is difficult to improve languages after they are in use, and incremental design is a lot less flexible than up-front design, so it's very important to have the huge bag of requirements up front.

It is easy to improve a small and dense code base even for a single author. In my own case I'm reworking a code base of 5-7 KLOC for a couple of years ( apart from professional activities ). I wouldn't even call the design "incremental" and progressive. It is cyclic, critical and revolutionary: big step changes caused by discoveries made while working with the material. No matter how hard you think about requirements, you never get there by musing about how it should be and whether the list is complete. It's like working out a proof for a theorem which drives the invention of techniques which are impossible to conceive without doing this particular proof. It doesn't matter if it takes 5 or 10 years or lasts forever when the stream of requirements doesn't end - it usually does and being subjective = opinionated about what goes in and what is left out is sound.

In some sense I also mistrust "language design" when it is decoupled from implementation and the invention of algorithms. Just look at the current trend to make all data structures immutable, based solely on preconceptions about what is hard and what is not so hard in code analysis. Without getting a feeling for what is actually hard to do and where hints / decisions are absolutely unavoidable, you just stick with your prejudices. Same with the relative importance of problems. You can spend a huge amount of time advancing a formalism which does not solve a problem which is complicated or even tedious to solve without it.

Codebase Inertia

easy to improve a small and dense code base even for a single author [...] cyclic, critical and revolutionary

That may well be the case while nobody else is sitting on top of that code base. Platform technologies - such as languages and protocols - are different due to the inability to fix a massive codebase built atop them while attempting critical and revolutionary changes. Alan Kay expressed the following regret:

Alan Kay: I complained at the last OOPSLA that -- whereas at PARC we changed Smalltalk constantly, treating it always as a work in progress -- when ST hit the larger world, it was pretty much taken as "something just to be learned", as though it were Pascal or Algol. Smalltalk-80 never really was mutated into the next better versions of OOP. Given the current low state of programming in general, I think this is a real mistake.

You can easily become a victim of your own success if you 'release early release often' with a platform technology. Critical, revolutionary changes are no longer feasible after a language "hits the larger world".

I've been writing code, prototyping, sketching implementations, running through a long list of user-stories. I've decided to cut several paragraphs discussing my progress (too big, off topic), and just say that I'm a little irritated at your implications that I'm a pretentious, prejudiced idealist that has spent all his time stuck in requirements analysis rather than trying and improving upon solutions. Besides, the process of trying things out has been my greatest driver for new ideas.

Prior to RDP (a new paradigm I developed earlier this year), my efforts were focused around layering and organizing aspects of existing paradigms (actors, dataflow, functional, relational). I had no plans to release an implementation until design had stabilized for a few years. But, since RDP is an original paradigm, I plan to release RDP (via Haskell library) even if I later find something better.

In some sense I also mistrust "language design" when it is decoupled from implementation and the invention of algorithms.

Language design should stretch beyond state-of-the-art (else, what's the point), but must also be grounded in implementation concerns. Our approaches to this dichotomy seem a little different. I'm happy to throw a bunch of ideas into the air then try to land them, whereas - if I'm understanding you - you'd prefer to keep at least one foot on the ground even for your revolutionary changes. Following this metaphor I think, perhaps, that my approach is more likely to have a ton of language designs crash, evaporate, or simply float away. But it also has a better chance of surpassing the mountains before me.

Just look at the current trend to make all data structures immutable, based solely on preconceptions about what is hard and what is not so hard in code analysis.

I disagree: there is no sole reason, and those reasons are not preconceived. We developers have been bitten aplenty by mutable state.

Any given datum is generally useful for influencing more than one behavior, and thus is often observed concurrently. The single-observer scenario, where linear programming applies, is a rare specialization. It has been well observed that reasoning about structure that is mutating is 'hard'. And that's even before you concern yourself with 'corruption' by buggy or malignant observers. Languages with mutable data structures often force developers to explicitly copy large data structures many times in order to process them in a sane manner and resist data corruption.

Beyond that, mutable data-structure semantics does not scale well, i.e. for applications sharded across hosts or between GPU and CPU, the mutation semantics introduce high synchronization costs.

Mutable state should be regulated, avoided when practical, and well distinguished. (I'm not anti-mutation. I just want all the mutants to register!)

Without getting a feeling for what is actually hard to do and where hints / decisions are absolutely unavoidable, you just stick with your prejudices.

It is a little unclear, but I think you're arguing against a sufficiently smart compiler. With that, I'd agree; simple and naive implementations should be feasible, and depending on 'sufficiently smart' compilers is a stupid idea.

However, languages can do much to make us more productive and intelligent by increasing expressiveness and local reasoning. To a very large extent, the same properties that help humans reason about code will also help machines. The converse is also true: the properties that an Integrated Development Environment can reason about can help the humans. Thus, a good language will tend to support a great deal of code analysis by machines by simple virtue of becoming better for humans.

I have no objection to using hints and annotations to support the implementation in achieving performance. But I would disfavor pragmas that affect language semantics, or at least consider them a bad language smell.

You can spend a huge amount of time advancing a formalism which does not solve a problem which is complicated or even tedious to solve without it.

I agree. And it's even worse if that sort of formalism gets implemented and becomes popular. OO would be a fine example of a huge amount of time and resources advancing a language design that does not solve any of the complicated, tedious problems it initially promised to solve (such as code reuse).

Or maybe too intuitive?

A while ago I read an article "How intuitive is object-oriented design?" (ACM link, free preprint available here). The article unintentionally displays some of the things I find wrong with OOD teaching. Quote:

For example, in a hotel reservation system, the class fax could inherit from the class email, since a fax object requires more handling (such as scanning and digitizing), hence has more functionality, than an email object.

When OOD is taught like that, I'm not surprised to see people mixing up real-world objects with formal ones and creating awkward inheritance hierarchies.
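Writing the quoted design out makes the confusion easy to see (a hypothetical Python sketch, not code from the article):

class Email:
    def send(self, recipient): ...

class Fax(Email):            # "a fax requires more handling, hence more functionality"
    def scan(self): ...
    def digitize(self): ...

# Every Fax now "is an" Email: code written to send emails will happily accept
# a fax, an accident of "more functionality" rather than an is-a fact about
# the real-world things being modeled.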

How do you improve object and relation definitions in a way that is "natural" and useful to programmers?

Stop calling them objects. Give them a scary name so that they don't seem intuitive anymore. ;)

There was a discussion on the Haskell mailing list recently about whether 'monoid' shouldn't be called 'appendable', as that's "less scary and more intuitive," and many felt that this is misleading and misses many of the applications, as a monoid is a more general and precise concept than 'appendable'. I agree with this. Programs are formal entities, and they do contain abstractions that do not directly map to any real-world intuitions. The less the programmer is tempted to draw on intuition, the more it will help him.
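To see why 'appendable' undersells the concept, here is a small Python sketch (the helper name fold is mine) of monoids that have nothing to do with appending:

from functools import reduce

# A monoid is just an associative binary operation plus an identity element.
def fold(op, identity, xs):
    return reduce(op, xs, identity)

print(fold(lambda a, b: a + b, 0, [1, 2, 3]))   # sum; identity is 0
print(fold(min, float("inf"), [5, 2, 9]))       # minimum; identity is +infinity

compose = lambda f, g: lambda x: f(g(x))        # composition; identity is the identity function
inc_then_double = fold(compose, lambda x: x, [lambda x: 2 * x, lambda x: x + 1])
print(inc_then_double(3))                       # prints 8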

So, ultimately, I'd say "don't adjust objects to match intuition, adjust programmer's attitude to not rely on intuition." Unfortunately, that's the exact opposite of what you are looking for.

There are copious

There are copious semi-famous examples of college textbooks on programming or general OOA/D with horrible examples. The best one is:

public class Circle : Point {
    // a Circle "is a" Point with a radius bolted on: inheritance used to
    // reuse x/y rather than to state a real is-a relationship
    //...
}

I agree with this. Programs are formal entities, and they do contain abstractions that do not directly map to any real-world intuitions. The less the programmer is tempted to draw on intuition, the more it will help him.

I think you are missing the point those of us in the minority tend to proselytize: objects in the large, functional programming in the small.

"Monoid" says nothing about what portion of your development team owns a given component. Objects do.

iiuc, CTM agrees

"objects in the large, functional programming in the small."

Quite the opposite

Functional programming in the large, objects in the small.

I don't follow

Programming in the large would be geared towards specifying interfaces between separate parts of a system that are developed in isolation. I can't see that functional programming is geared towards modularity and isolation.

But I'll admit that I may be missing your point.

It is due to concurrency

The reason for my opposite view is due to concurrency issues. When you want a large system to work well with multiple independent threads, it is quite important to make sure that you avoid sync-points. A very neat way of achieving this is by requiring that objects avoid mutating global data, hence in effect working in a functional manner. This makes it much easier to reason about large systems as a whole.

In the small, each thread/job may be functional or object-oriented; from a whole-system point of view it doesn't really matter, as long as inputs to the system are read-only.
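
A minimal sketch of that style (names are hypothetical): each job is processed by a pure function of its read-only input, so the threads never share mutable state and no sync-points are needed between them.

from concurrent.futures import ThreadPoolExecutor

def process(job):
    # pure: reads its input, returns a fresh result, mutates no global data
    return {"id": job["id"], "total": sum(job["amounts"])}

jobs = [{"id": 1, "amounts": [3, 4]}, {"id": 2, "amounts": [5]}]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process, jobs))   # no locks needed
print(results)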

To return to the topic of

To return to the topic of the OP: Aside from a subset of man-made devices, it has not been my experience that "real world objects" have well-defined interfaces...

Example?

I think the point David Barbour mentions earlier comes in here. If you're trying to model a situation, then relying on a well-defined interface is folly, since as the situation changes so may the structure of the problem domain. Centering a model around situations is generally not a good idea, unless you are building something like a fuzzy controller that uses heuristics to solve a problem that is intractable under hard real-time constraints, such as a self-balancing robot that must compensate in real time.

The ultimate form of modeling a situation, in my humble opinion, is Presentation-Abstraction-Control, an architectural pattern that for most applications takes Model-View-Controller to its logical and disastrous conclusion.

Typically the code smell is obvious to me: permutations are treated too specifically at the highest level of abstraction, requiring case statements to be written on an ad-hoc basis. Often the programmer will try to conceal the problem by pushing complexity into factories. Encapsulating the ugliness into factories actually makes the problem worse, because ultimately nothing meaningful to the problem domain ends up being encapsulated and permutations are still being treated too specifically at the highest level of abstraction! This is where the worst form of OO coupling takes place: instantiating relationships by passing concrete object references. As a result of such a coding maneuver, the constructor needs to consider several layers of STATIC configuration, as well as the context in which the factory was called, and data still on the heap or stack from prior stages. As a consequence, superclasses control which subclass leaves are called, as opposed to modeling subclasses based on specialized properties of the superclass.

It is no coincidence that the same well-known architect who pushed Presentation-Abstraction-Control as a good idea also pushed for frame-oriented programming.

Disagreed! Functional

Disagreed! Functional programming in the small, message-passing in the large (which is what I think a lot of people here mean by "objects").

Yes. However, I'm not sure

Yes.

However, I'm not sure you *always* need fire-and-forget semantics. Synchronous reactive semantics possess as much clarity, if not more, but would require an "automated truth maintenance system" to be workable as an in-the-large alternative to message-passing.

don't know what you mean by "atms"

but isn't occam/csp synchronous by default, and claims to be composable and provable (e.g. fdr tool)?

semantics/connotations are often a root problem

yup. part of all of this depends on what people really mean when they say objects or object-oriented. which apparently hasn't been resolved ever much at all or at least there continue to be enough different interpretations to cause trouble.

Is this a problem with OOD or the OOPL's we have?

don't adjust objects to match intuition, adjust programmer's attitude to not rely on intuition

People have been saying in the comments that the OOD principles don't work in the real world, and that it's best to differentiate objects in the real world from objects in code. Which I totally agree with (and is the motivation for this post). If we were talking about using C++ to make a large-scale project, I would also think in terms of how best to organize code, rather than trying to represent real things.

But is this a problem with the principle or the implementation? If you were to make a new language that reinvents OOPL, what would you do differently than C++/JAVA?

btw, the links are great =D.

Perhaps a problem with the approach

But is this a problem with the principle or the implementation?

Perhaps it's a problem with particular approaches to programming. I'm guilty of some bad class design myself that came from too much 'noun extraction'. The best designs come after a first look at the core problem to be solved, independent of any object-oriented/functional/logical perspective.

Next, I try to work out the concepts/abstractions and find some general properties which I would like users of my API to rely on, and try to enforce as many of them in the type system as possible.

Ideally, that would dictate most of the design.

If you were to make a new language that reinvents OOPL, what would you do differently than C++/JAVA?

This doesn't come close to reinventing OOPL, but one thing that immediately comes to mind is better support for delegation. In many cases, I've found delegation to be a better choice than inheritance.
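
For instance, here's a minimal sketch (hypothetical class) of delegation in a spot where inheritance is often reached for: a Stack reuses list behavior by containing a list and forwarding to it, rather than inheriting from it, so it exposes only the operations that make sense for a stack.

class Stack:
    def __init__(self):
        self._items = []          # the delegate, not a base class

    def push(self, x):
        self._items.append(x)     # forward to the delegate

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)

s = Stack()
s.push(1); s.push(2)
print(s.pop(), len(s))            # 2 1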

I'm a bit in a hurry now. So just to ask once more: did I understand you correctly that you wish to figure out whether one could build an OOPL in which objects do behave more like real-world objects?

What is Intuition?

I thought the link was pretty lame as it almost entirely focused on newbie and knee-jerk mistakes. The good idea behind OOP is not this kind of jumps-out-at-you intuition, but rather the intuition that comes from trying to model things and relationships exactly as they are.

IMO, that OOP tenet is good. You can't model everything, but that should be the goal, the default. Not modeling some aspect of the real world so that a simplifying assumption can be made is where the real design decisions are. IMO the main failings of OOP languages occur when they don't allow a faithful model of the situation.

failings of OOP

IMO the failing is when people attempt to model a situation.

... except if they're writing a simulator. You can sometimes get away with that, even though it doesn't allow for backtracking and trying multiple paths and all the other niceties of constructing a dataflow-logic approach.

OOP is best used to model the program, not the domain.

A likely title

...even though it doesn't allow for backtracking and trying multiple paths at once and all the other niceties of constructing a dataflow-logic approach.

Why would modeling the situation prevent this? I don't have much experience with constraint systems (just unification type solvers), but my feeling is that explicit modeling of what's being governed by the constraint solver is typically better than having constraint resolution as the general execution model. You can certainly model explicit dataflow and do back-tracking search though. The fact that OOP tends to not model time/mutation explicitly is one of its failings.

OOP is best used to model the program, not the domain.

In the beginning, you have a domain and no program, so this seems question-begging to me.

Why would modeling the

Why would modeling the situation prevent this?
[...] my feeling is that explicit modeling of what's being governed by the constraint solver is typically better than having constraint resolution as the general execution model

You misunderstood; in no way did I intend to suggest you shouldn't model the domain/situation for purpose of simulation. What I intended to suggest is that you can do better than OOP for modeling the domain/situation for purpose of simulation.

If the domain-objects are modeled in OOP, it is difficult to subject them arbitrarily to different views, to copy them for divergence and backtracking, etc. Such operations as 'copying' get replaced by sending a message to the object and asking it to clone itself, which is significant semantic noise and doesn't readily support automatic optimizations.

More relevantly, the events being simulated in OOP are generally not reversible, and issues such as what happens for a given event are encapsulated. The former prevents backward chaining when asking questions of a model, and the latter makes it difficult to tweak the model as a whole in order to modify the simulation behavior (e.g. to tweak an axiom of the model, or adjust one's confidence in an observation).

In summary: OOP does far better with modeling programs than the domain because it is far easier to work with invariants introduced by programmatic construction than it is to distill invariants out of real-world observations. Further, "domain objects" are just one way of grouping and viewing domain data, and OOP encapsulation tends to resist logical transformations for viewing/grouping data. Finally, all modeling techniques are ultimately related to prediction and analysis, and most of these techniques require branching; the encapsulation and 'command/message' orientation of objects is contradictory to this purpose and requires one to work painfully around the OOP language rather than with it. These forces greatly resist effective use of OOP in domain/situation modeling.

If OOP is used instead to model the modeling system - i.e. to model a system of axioms and confidence rules, forward and backwards chaining, etc. (or, as I said above, to model the program) - the end product will (in general) be more flexible and more modular, will have more reusable components, and will expose more invariants for the classes. The end result of doing so is essentially a new language for constructing programs on-the-fly. That - configurable modularity - is the strength of OOP. (Admittedly, OOP could do even better with a dependency-injection / object-graph configuration language component.)

You can certainly model explicit dataflow and do back-tracking search though. The fact that OOP tends to not model time/mutation explicitly is one of its failings.

Modeling dataflow and backtracking is modeling the program. That's something OOP does (moderately) well.

OOP is best used to model the program, not the domain.
In the beginning, you have a domain and no program, so this seems question-begging to me.

In the beginning, you have a domain, a set of user-stories, a (presumably OOP) programming language, and no program. E.g. the domain is tax reporting, the user-stories are the automated detection of alarming reports. Modeling the domain - i.e. creating 'tax report' objects - is unnecessary to accomplish this user-story. Modeling the user-story also isn't necessary.

Using OOP, I would suggest modeling only the dataflow (data access and processing) for the tax reports, modeling the rules/heuristics that raise the alarms, and modeling the necessary data-management and data-fusion required to support those rules/heuristics.
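
Something like this minimal sketch (all names and rules are hypothetical): tax reports flow through as plain data, and the "domain" lives in a few rule functions that raise alarms, with no TaxReport class hierarchy anywhere.

def high_deductions(report):
    return report["deductions"] > 0.5 * report["income"]

def negative_income(report):
    return report["income"] < 0

RULES = [high_deductions, negative_income]       # the heuristics that raise alarms

def alarms(reports):
    for r in reports:                             # the dataflow: plain dicts in, alarms out
        for rule in RULES:
            if rule(r):
                yield (r["id"], rule.__name__)

reports = [{"id": "r1", "income": 40000, "deductions": 30000}]
print(list(alarms(reports)))                      # [('r1', 'high_deductions')]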

OOP languages

I agree with much of what you've written. Also, there is a difference between what idioms current OOP languages support and the idioms they encourage. You can certainly model almost anything with a sufficiently heavyweight encoding, and some of my comments were directed at what's encouraged, not at what's possible.

Modeling the domain - i.e. creating 'tax report' objects - is unnecessary to accomplish this user-story.

Hmmm, you haven't described the problem in much detail, but from what you've said I don't see why you wouldn't want to model 'tax reports' explicitly, except to work around the poor modeling facilities of whatever OOP system you're using.

Being poor at modeling is essential to OOP

You seem to be imagining that merely having an OOP system with better 'modeling' facilities would address the problem.

I think the contrary. The poor modeling facilities of OOP are caused by essential properties of OOP. These essential properties include encapsulation and message-passing polymorphism. These are the same properties that make OOP powerful for configurable modularity, scalability, and object capability security. Attempting to blend modeling support into an OOP system will necessarily result in a language that lacks the essential properties of OOP.

Since I believe this, I do not believe that modeling 'tax reports' in OOP is something that we would 'want' to do. Ever. We might need a tax-report value object as an artifact of the dataflow system (e.g. if tax-reports are dumped into the system as complex form objects), but that's just plain-old-data and could be done just as easily in functional or procedural, and is really not 'OOP'.

I suspect that any blended hybrid aiming to improve modeling in OOP will have the worst of both the modularity and modeling, and very few advantages of both. E.g. adding multi-methods gives one an alternative to message-passing polymorphism that is better for specializing operations and modeling rules, but destroys the distributed-scalability and damages the 'runtime' fraction of modularity in OOP, and is still inflexible for ad-hoc queries and cross-cutting relationships when compared to dedicated modeling paradigms.

In PL papers, I see people coming repeatedly to the same conclusion: that layering, not blending, is the answer that achieves the combined features in both functional and non-functional properties, and that the main cost of achieving this without loss of performance is in taking advantage of layer invariants for cross-layer optimizations in the compiler. I hypothesize from this that, rather than improving the modeling facilities in OOP, we would do slightly better to sandwich OOP between higher-layer configuration languages for interactive modeling (e.g. using objects for data-fusion, subscriptions, rules) and lower layers for the non-interactive data-processing (e.g. pure functions and values, maybe support for relation-values and micro-queries).

But layering has its own cost - a greater complexity.

There is a difference between what idioms current OOP languages support and the idioms they encourage. You can certainly model almost anything with a sufficiently heavyweight encoding, and some of my comments were directed at what's encouraged, not at what's possible.

True, but I believe that what's encouraged, and especially the focus on simulations (animals, vehicles, etc.) in texts teaching OO, is often something that should be discouraged. As noted in the first comment, the path from 'newb to godhood' is to learn that one shouldn't even try modeling objects from the real-world as objects in OO.

Your comments (modeling the situation, wanting to model the 'tax reports' explicitly even when it is unnecessary, etc.) and those like them (which are extremely prevalent in OO books and OO education) are what lead people to start each program by writing a business simulator when what they want are business data processors. They'll eventually get where they want to be, but the process will be filled with pitfalls and discontinuity spikes and revolutionary changes to their program architecture.

OOP was designed originally for simulations (thus "Simula"). It's bad at them (compared to logic languages), but simulations still greatly influence the teaching of OOP. This should be undone; we should teach OOP simulations last, if at all. No ducks-go-quack, no cows-go-moo, no animals 'speak()'. Functor objects, dataflows, signals, events, dependency-injection, data collection management and database integration, and other components of programs should be taught. One doesn't even need 'visitor pattern' or multi-methods for these things.

People should walk up to a problem dealing with tax reports and think: 'well, "tax report" is a domain object. Thus, I almost certainly don't want a 'virtual' tax report object. If I have one at all, it will be a concrete value-object class used for input or output purposes.'

What they shouldn't do is think: "hey, well I have tax reports so first thing I need to do is add a virtual root for reports, then subclass that for tax reports, and so on." A person who can't see any reason they wouldn't want to do this is likely a person who hasn't learned the hard way that OOP is bad for domain modeling.

I have a feeling, however, that no significant changes in education will occur before we move away from the word 'objects' entirely, possibly in favor of 'actors' or 'first-class processes'.

OOP

I'm not defending OOP. I don't like OOP. I don't think message passing everywhere is natural. While it gives lots of freedom to replace methods, it scatters related code and makes it hard to know what the semantic requirements of a given method are, and thus makes it hard to replace methods correctly. If the messages are mutating objects as they get passed around, things are even worse. I think the natural unit for encapsulation is something like a module, not an object.

What I am defending is the concept of making programs that model the world. If, at a high level, your program would be described as taking 'tax reports' and producing 'alarms', then defining those types and related signatures should never be a false start. Otherwise your language has a problem supporting top-down design. Whether or not there should be code dedicated to reports in general depends, but you shouldn't be able to code yourself into a corner either way.

I suspect that any blended hybrid aiming to improve modeling in OOP will have the worst of both the modularity and modeling, and very few advantages of both.

The hybrid I prefer looks much more like functional programming than OOP, and to my thinking has better modularity and modeling properties than OOP, which isn't saying too much.

The idea of layering is interesting, but I don't think it needs explicit language support. Using modules, you should be able to plug things together at an appropriate level of abstraction without changing languages.

What I am defending is the

What I am defending is the concept of making programs that model the world. If, at a high level, your program would be described as taking 'tax reports' and producing 'alarms', then defining those types and related signatures should never be a false start. Otherwise your language has a problem supporting top-down design.

I vehemently disagree! To say a language has a problem supporting top-down design merely by virtue of it not modeling the world is to ignore the vast multitude of top-down designs that do not model the world. Among these include top-down designs based on modeling the program or service in terms of processing requirements, observations, dataflow, data fusion, rules, hooks, command distribution, etc.

Further, since the vast majority of programmatic tasks are not world-simulators, I say it should be a false start to model the world in the vast majority of cases. Ideally, it should also be obviously a false start, so that people don't bother with it.

The idea of layering is interesting, but I don't think it needs explicit language support. Using modules, you should be able to plug things together at an appropriate level of abstraction without changing languages.

Layering of tasks can be performed through careful library design. Self-discipline and some care allow one to write 'pure' functions even in languages with meta-object protocols, like Ruby.

What language support gives you is more invariants, with the lower layers having more invariants than the higher layers. Nice properties to make invariant in lower layers include: guaranteed termination, determinism, confinement, capability, evaluation-order independence, freedom from certain side-effects, freedom from certain errors, freedom from deadlocks, etc.

These invariants in turn make the language easier to reason about, especially with regards to black-box composition. This allows programmers greater confidence in the result of composition (e.g. in terms of security, deadlock, partial failure and recovery) without requiring they become informed of implementation details of the modular components. Invariants also make a language easier for an automated optimizer to reason about, with the obvious (potential) benefits.

I also believe (but cannot confirm) that layers make it easier for programmers to determine where a given feature 'should' be added, essentially providing a rubric by which to judge features (in terms of real programmatic properties, such as invariants), and also force them to think about how (and whether) features (such as a 'tax report' type) will fit in with the other features. This should help them get off to a correct start, i.e. making the 'obvious' way to do it also be a correct one.

Lazy, functional programming at the bottom gives a few nice invariants at that layer, but fails to scale once we start piping information or carrying dialogs through a large number of functions, callbacks, plug-in modules. In-the-large, we need invariants related to communication, coordination, concurrency, dataflow, security. FP doesn't offer that.

dmbarbour, you are fun to

dmbarbour, you are fun to listen to. do you have a blog?

No.

No.

et tu?

z-bo's blog?

Not for public eyes

We have an internal corporate tumble blog at work where we post links to interesting data visualizations and UI toolkits. It's very superficial and not that interesting.

However, I want to write some articles in the future about how to review really large APIs and make quick judgments on them, and also explain things such as what questions you should ask beyond what the vendor's marketing material tells you. In particular, .NET 3.0 is an interesting case study. Despite the fact it is "too big to re-implement", it features the best UI toolkit to date. So I can't just say "the design is bad", because, really, compared to what? Sometimes you have to say, "the design encourages bad practices you must fight off with these practices I use".

Naked objects

I suspect that any blended hybrid aiming to improve modeling in OOP will have the worst of both the modularity and modeling, and very few advantages of both.

Do you know of Naked objects? It seems like a successful modeling system (and they are actually modeling 'tax reports') but it retains interesting OOP properties, encapsulation and code sharing among them.

Update:

I do not believe that modeling 'tax reports' in OOP is something that we would 'want' to do. Ever.

A very insightful statement.

(Naked objects previously on LtU.)

Just wondering... Have you

Just wondering...

Have you read Richard Pawson's book Naked Objects?

I have, and it basically contains no content whatsoever.

It's very colorful, though! And, wow, that book's binding is sturdy.

Well, it's a managers book, so my teasing is somewhat unnecessary.

Bottom line: Naked Objects doesn't go nearly far enough, because it doesn't address event-driven concerns in most enterprises. A complementary manager-level book would be The Power of Events by David Luckham (Rational Software co-founder and Stanford University professor). The UIs NakedObjects creates are not nearly as workflow oriented as they claim to be, because they are not context-aware and do not focus the user on tasks. However, this is a criticism of just about all UIs. Yet, if NakedObjects were truly object-oriented, then it would be trivial to display a UI as a viewpoint on some object, and that viewpoint would be based on some context of what's going on in the system.

Basically, Naked Objects always struck me as characteristic of a cynic: knowing the price of everything, but the value of nothing. The authors chide Ivar Jacobsen's methodology but don't really provide an equivalent methodology.

So what does Naked Objects do well, that I do similarly?

  • focus on agility
  • it does not commit what CJ Date in his book The Third Manifesto calls The First Great Blunder of most enterprise systems: not defining user-defined types and instead using the SQL DBMS's built-in types
  • testing built into the product (in my environment, the product after an initial bootstrap knows how to test itself -- the best explanation of this idea I've seen has to be from Jacques Pitrat's Artificial Beings book where he talks about active and passive versions of his CAIA ai; although Pitrat doesn't talk in terms of "object-oriented" because he is an AI researcher, object-oriented and AI have always had a close connection)

Naked Objects and OO Modeling

I do know of Naked Objects. Naked Objects exposes a user interface to a system via automatic transformation (e.g. to HTML); it has some advantages over a hand-built application model, but can be improved in a number of ways (including dataflow, security, composition, flexibility).

Relevant to this topic, the Naked Objects do not much benefit from having a 'tax report' hierarchy, inheritance, et cetera. The system needs to provide certain forms to the user for create, read, update, delete, and ideally some liveness properties in a concurrent system (to see changes by others). These forms are an interface to the system. Modeling each form as an object without any inheritance, using a facet-based approach to interface the form with the system is at least as effective... and can share just as much code.

Even better if you don't need to name a specific object for each form... just provide an optionally named, organized collection of facets (an object graph of simple IO channels) to which a user needs access, and an automatic translation from "set of facets" to "display document". That's far more composable, far less rigid, and readily supports fine-grained security. One can create new displays by recomposing facet sets without the pain of creating a class or prototype just for that facet set. There are well-defined properties for the union or intersection of two documents. With 'labeled' (role-based) facets, one can also define documents as overrides, combinations, fallbacks, sequential, and parallel documents. Add a little extra support for describing and subscribing to form display properties, and ideally to the whole facet-set (e.g. via optional inheritance/embedding/linking of other, named facet-sets), in a functional reactive manner, and you'll have what I'd begin to consider a decent UI framework.
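
A very rough sketch of that idea (all names hypothetical, nothing from the Naked Objects framework): a "form" is just a set of facets, i.e. read-only views and message channels into the system, and one generic translator turns any facet set into a display document.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ReadOnlyView:
    label: str
    read: Callable[[], str]          # pulls current state out of the system

@dataclass
class Command:
    label: str
    send: Callable[[str], None]      # pushes a message into the system

def render(facets):                  # facet set -> display document
    lines = []
    for f in facets:
        if isinstance(f, ReadOnlyView):
            lines.append(f"{f.label}: {f.read()}")
        else:
            lines.append(f"[{f.label}] <input>")
    return "\n".join(lines)

# composing a new "form" is just composing a new facet set -- no class needed
account_form = [ReadOnlyView("Balance", lambda: "42.00"),
                Command("Transfer", lambda msg: None)]
print(render(account_form))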

An OO taxonomy for specific forms simply isn't very useful. Such a taxonomy is rigid and inflexible, and will slowly petrify your project. It doesn't provide nearly as much code-reuse as you might think... not nearly as much as various alternatives, at least. You'll get far more reuse out of functor objects, "UI facet sets" with automatic layout, etc. Bertrand Meyer says that OO is twice removed from reality. Do not model forms. Model models of forms: buttons, text areas, animations and transitions, and so on; then compose the form.

The Naked Object variation on that advice would be to not model form-objects (a specific tax-form object), but rather to model models of the system interface (read-only views of state and message streams, stateful properties (policies, goals, switches, checkboxes), message interfaces, etc.), then compose these, then automatically translate the resulting compositions into forms in a display language. This still ensures the system interface and the user-interface are 1:1, which is one of the big selling points of Naked Objects, without losing composability and fine-grained security and such. (Might still not produce a really nice UI, though. And the Naked Objects Framework doesn't support this, at least not as of the last time I read about it.)

In the beginning, you have a

In the beginning, you have a domain and no program, so this seems question-begging to me.

Actually, not true at all.

Usually we have lots of programs and no domain. When the domain comes, we use programs to support the new domain.

Objects allow you to substitute entire problem domains easily. Usually, however, we don't call these objects simply objects. We use terms such as software factories, components and frameworks.

Regardless of the domain, evolutionary computing has always been the main thrust of OO computing. Its roots trace back to one of its founders, Alan Kay, who possessed a background in biological systems.

Usually we have lots of

Usually we have lots of programs and no domain. When the domain comes, we use programs to support the new domain.

I think we're playing word games at this point. The comment I was responding to is still question-begging.

We use terms such as software factories, components and frameworks... evolution... Alan Kay... biology...

One of the main problems with mainstream OOP languages is their poor ability to create frameworks supporting reuse, and IMO weak metaphors to biology and evolution etc. don't help the situation.

I don't play word games. It

I don't play word games. It seems you want to just say "OO has a poor ability to do so and so" and aren't pleased that I'm defending the way I build applications.

[Edit: If you'll tilt your head just a little, you'll note I'm suggesting that very rarely do I start projects with a blank canvas and an infinite continuum of possibilities. I start with many choices already made, and I know why those choices were made. When I come across an unsuitable problem domain for these choices, I will likely have an increase in development costs, but I can price my estimates accordingly.]

I have pretty high reuse, although I think that is a stupid way to measure scalability. We should measure complexity curves. As the system gets larger and larger, how strong are the arches in the architecture as they stretch longer and longer?

Scalability is when you can wiggle something in the front-end and immediately tell me what just wiggled in the back-end, and why. Scalability is the removal of what Brooks calls accidental complexity and also the removal of what Pontus Johnson calls artificial consistency. To do this, you need a stable structure of your application domain. As a rule of thumb, your gross application structure should mirror your problem space.

I wouldn't place the blame on mainstream OOP languages. I'd point the finger inward. The mainstream folks just lack accountability. That is why methodologies like Domain-Driven Design work so well: they add lots of artificial consistency such as "Aggregate Roots", but give people a tradition and a set of rules to follow, such that they don't have to be accountable for thinking new thoughts.

Not even Scrabble or Boggle?

I don't play word games.

I probably could have phrased this more tactfully, but you responded to the words I used rather than the point I made. The suggestion that we "use OOP to model the program" is circular - modeling is how we get that program.

It seems you want to just say "OO has a poor ability to do so and so" and aren't pleased that I'm defending the way I build applications.

I hadn't noticed any defense, so no, it didn't upset me. I've actually mostly agreed with much of the sentiment of your posts in this thread, including this last one. But much of it is pretty nebulous, which is also why I dropped out of the thread with David - it wasn't clear to me what exactly we were talking about.

My complaints about mainstream OO languages stem from my belief in domain modeling. When I do an analysis of the structure of a situation and then note that OO languages don't offer a good model of that structure, I complain.

The suggestion that we "use

The suggestion that we "use OOP to model the program" is circular - modeling is how we get that program.

If "program" is the problem-word, feel free to substitute it for "process components", "dataflow components", "messages and transport", etc. OO programs are best constructed by modeling the process in terms of its components. I don't believe calling this "modeling the program" is inappropriate, but I'd rather not battle over definitions.

Inputs from the domain into the process need to be modeled in OOP. Outputs also need to be modeled in OOP. But, the 'domain' itself doesn't need to be and shouldn't be modeled in OOP programs. Even if the goal is simulating the domain - i.e. simulating the weather, or simulating a city, or simulating traffic on a city map under various weather conditions - OOP is rarely a best choice for constructing said model.

My complaints about mainstream OO languages stem from my belief in domain modeling. When I do an analysis of the structure of a situation and then note that OO languages don't offer a good model of that structure, I complain.

Why would programs need to model "the structure of a situation"? Where, outside of iterative prediction and planning systems, does a solution require one?

My complaints about mainstream programming education stem from my belief that programmers are taught to believe in 'domain modeling' even where it is entirely inappropriate (which is anywhere except for simulators/predictors/planners). My disgust with mainstream programming education stems from my realization that programmers are further taught to use ineffective programming tools (such as OOP) to achieve this domain modeling.

That is, they come out of school thinking to use the wrong tool for the wrong job.

It's still not very clear

It's still not very clear to me what you're advocating in place of modeling.

Why would programs need to model "the structure of a situation"?

In general, to code a solution to a problem, you have to explain what world objects map to the inputs and outputs of your program. In order to correctly process inputs to outputs, the structure of the computation you perform will need to match some structure that's present in your problem domain. Identifying the abstract structures present in your problem domain is what modeling is about. If your language requires cumbersome or ad hoc encodings, that's bad.

In general, to code a

In general, to code a solution to a problem, you have to explain what world objects map to the inputs and outputs of your program.

Perhaps. Such explanation likely helps programmers understand how the program fits into the world.

But such explanation does not need to be part of the program. Indeed, it shouldn't be part of the program, since how a program fits into the world around it is not a property of the program itself.

In order to correctly process inputs to outputs, the structure of the computation you perform will need to match some structure that's present in your problem domain.

That isn't true, at least not in general.

Perhaps, because you assume this is true, you see a need for an 'alternative' if domain modeling is eschewed.

In general, there is no need for an alternative. For most programming tasks, the construction of a domain model, description of domain objects, matching some "abstract structure" that humans use to 'understand' the problem domain is a completely wasted effort. In general, the programming task doesn't need to be a domain simulator or any other class of domain prediction engine.

Only when the 'output' is a prediction in the domain, or by extension an informed plan or anything requiring prediction within the domain, is it necessary to capture a 'situation' or 'domain model' within the program. And, in these cases, OOP domain objects are far from the best choice, for reasons described above.

My complaints about

My complaints about mainstream OO languages stem from my belief in domain modeling. When I do an analysis of the structure of a situation and then note that OO languages don't offer a good model of that structure, I complain.

You need to give us a real example of a real mistake you made using OO. Then we can correct you.

This is like saying "multivariate calculus cannot model instantaneous change" without understanding basic theorems such as Fubini that allow you to do certain kinds of integration based on the structure of the problem.

Sounds good

I'll respond in a few days when I have time to think through an example.

I thought the link was

I thought the link was pretty lame

I considered it lame as well, but the interesting thing for me was that the people criticising OOD for being 'too intuitive' and pointing out beginners' mistakes apparently haven't really gotten it themselves, at least not to the point where they should be writing such an article, if they suggest deriving fax from e-mail or professor from student, or define 'abstract class' as 'a class with at least one virtual function'.

Having sat through some OOD classes, I found it the rule rather than the exception that teaching matches real-world objects to OO objects far too often and neglects the more abstract cases that come up in programming.

If something is taught badly, people can't properly use it (of course, that doesn't mean there's no good OO teaching).

Attending a fashion institute...

won't make you the next Ralph Lauren.

We say it over and over again, but people keep forgetting it in conversation: the best programmers are self-taught.

Really?

We say it over and over again, but people keep forgetting it in conversation: the best programmers are self-taught.

I've never seen any (non-anecdotal) evidence for this, and countless examples of the opposite.

Where can you go to get a

Where can you go to get a great education on how to be a programmer?

What is an 'example of the opposite', exactly? The best programmers you know being formally schooled? Or, rather, many poor programmers who were not formally schooled?

To unify both possible opposites, I'll say that it's important to be able to think scientifically. Knowing the scientific method, even informally, is way more important than knowing functional programming or object-oriented programming. Richard Feynman has a famous Caltech commencement address titled "Cargo Cult Science" where he explains my feelings better than I ever could.

Where can you go to get a

Where can you go to get a great education on how to be a programmer?

First, I want to make it clear that I think education is necessary, but not sufficient. But as to where to get an education in how to be a programmer, I'm personally fond of a lot of the things we're doing in the curriculum here. For me, graduate school has been an important part of my training to be a good programmer, which is the case for other people I know.

What is an 'example of the opposite', exactly? The best programmers you know being formally schooled? Or, rather, many poor programmers who were not formally schooled?

Both. Almost every good programmer I know has formal training, and I've been mostly unimpressed with self-taught programmers (including myself, back when I was one).

As for the most important attribute in a programmer, I would say it is discipline. Well after that, the ability to communicate ideas to other people.

This is not intuitive

We are two different worlds.

I co-wrote a curriculum assessment of my college's CS department during my senior year a few years back, and did curriculum comparisons between our program and other schools' in the northeast.

Self-taught seems to rub you the wrong way, as you are inculcated in an academic lifestyle. Self-taught doesn't mean "no formal school", but it may mean they were a Mechanical Engineering or Philosophy major who switched careers. "Self-directed" may be more apt. Some programmers are just carnivores for information that will make them better programmers. Usually, it is OTJ training.

Where some self-taught programmers go wrong is reading a concert of disconnected blogs or trade press articles, rather than a coherent set of papers or a book that hangs together really well.

The biggest problem with universities is that their curricula are targeted toward accreditation, and they can only teach so much tangential knowledge in four years while still meeting ABET requirements. ABET accreditation is a good thing, as compared with no-name we-just-need-accreditation accreditation. However... credit hours are a limited resource.

Things get sacrificed. Students are never told what those things are, because it is in the interest of a university to have students unaware of what they're *not* learning. Self-taught programmers must figure this out for themselves, and it is a very non-linear process. However, self-taught programmers often benefit from learning these skills and ideas through OTJ training, giving them real-world experience and problem complexities that university courses balk at. There are exceptions to this rule, but they are typically subsidized by massive government or private loans - it is the only way to overcome the fact that the best practitioners tend not to be teachers. PLTScheme is a decent example, and so is the freshman programming course Bertrand Meyer has been teaching the past three years. Bertrand has students code against a 150,000 SLOC Eiffel project. Many other NSF "teaching grants" tend to be failures judged by ill-informed grant supervisors as successes.

The alternative, as you point out, is to fork over extra cash for graduate school. However, graduate school is frequently not the best place to learn how to write clear, concise, well-structured, well-designed complex programs. Performance or correctness and proof-of-concept dominate, and as masters and Ph.D. students get closer and closer to the submission deadline, they often make more and more compromises. Moreover, the problems a thesis deals with are typically isolated and the wise student will not take an overarching project but rather solve an isolated problem. Most Ph.D. theses solve an isolated problem - I should know, I read about 30 dissertations a year.

Graduate school will perhaps teach you what not to do. However, this is a bit like asking 50 divorced couples for relationship advice, and spurning the one couple that has been happily married for 50 years. The other value of graduate school is that it holds you accountable, ***by pairing you with a problem where you are directly affected by the consequences of your actions***.

Furthermore, more often than not, in the curriculums I reviewed, courses were teaching things I most definitely do not want students in college to hear, but instead get that perspective from me OTJ.

discipline. [...] the ability to communicate ideas to other people.

True, that is why in job interviews we look for somebody with a four year degree (a sign of discipline) and somebody who can distill theory into plain English a four year old can understand. Nothing is more impressive than feeling like you're talking to a four year old with a four year CS degree. A good theoretician is a good practitioner, and a good practitioner is a good theoretician. Actually, too much education or training can be a disservice, as we do not build applications in a mainstream way. We'd need to un-train the person first.

Is intuition the enemy?

So, ultimately, I'd say "don't adjust objects to match intuition, adjust programmer's attitude to not rely on intuition." Unfortunately, that's the exact opposite of what you are looking for.

I've heard Alan Kay talk about the importance of teaching mathematics and science to kids, and the use of computers to do it (esp. Squeak), which is something he's been working on for years. He was inspired to get into this work by Seymour Papert, the creator of Logo. Mathematics and science are both ways of thinking that get people to think beyond what's intuitive. Science in particular, and mathematics to a certain extent, help people create mental models that are more reliable than what intuition offers. So yes, I think you have a good point here, that we should not use intuitiveness as our guide to what is the best form of programming.

I think what we should be more focused on is what capabilities a language adds to our understanding, and how well it facilitates modeling our ideas. There is an aspect of intuitiveness to this, particularly in a language's representation, but we should not be afraid to try to get beyond intuition if intuitive models are insufficient to get us where we really want to go. I think a mistake we make is believing that "intuitive" is where we really want to go, and that all the crap we have to put up with as a result is worth it. A question that needs to be critically examined is, "Is the operating model sufficient to really get at the power of this thing?" Intuitiveness is a barrier that is difficult to get humans to cross. Getting beyond it means we actually have to think and expand what we know. A more positive way of looking at it is that we're more capable than we think we are. We just have to work at improving ourselves. The way to motivate people to do that is to show that the effort is worth it.

Are Maxwell's equations intuitive?

VPRI is Alan Kay's research organization. Those who follow the VPRI folks closely know that one of their core interests is finding Maxwell's equations for computer science. The idea is that the core ideas to evolutionary computing can, like Maxwell's equations, be printed on a t-shirt and sold for riches. :)

Inventing Fundamental New Computing Technologies http://www.vpri.org/html/work/ifnct.htm

STEPS Toward The Reinvention of Programming http://www.vpri.org/pdf/tr2007008_steps.pdf

A mentor once told me that "teaching is simply telling a smaller lie each day." Science and math are very much the same way. Even though we know classical mechanics to be wrong, it is a very useful lie to tell before introducing quantum mechanics, QCD and QED, etc. Furthermore, what matters is whether there are real-world problems to which this "new found intuition" applies. As Jef Raskin would've said, what makes it intuit-able?

ADT

Because "object" is an abbreviation of "abstract data type".

Quite funny

Somehow I find it very funny that on LtU objects are being criticised as 'too unintuitive' whereas arcane concepts such as 'monads' are praised...

Double standard by people who prefer functional programming to object-oriented programming?
Probably.

OK, there are quite a few bad books/examples about OO design with stupid hierarchies, so what?
There are also good books such as OOSC by Bertrand Meyer..

Somehow I find very funny

Somehow I find it very funny that on LtU objects are being criticised as 'too unintuitive' whereas arcane concepts such as 'monads' are praised...

Aren't these two sides of the same coin, namely that intuitive concepts are bad? ;-)

word

i think Jef agreed.

OK, there are quite a few

OK, there are quite a few bad books/examples about OO design with stupid hierarchies, so what?
There are also good books such as OOSC by Bertrand Meyer..

I am a bookworm and I basically don't have a favorite OO book, despite having one of the largest programming libraries you can imagine. Actually, my favorite programming book is by Andrei Alexandrescu: Modern C++ Design. I once heard a fellow programmer tell me, "That book is template pornography." I will admit the book places great demands on its reader. However, the material is truly wizard-level programming stuff. Far beyond SICP, On Lisp, HTDP, etc.

Christopher Alexander of patterns fame once asked a student if her design was as good as Chartres. He said we have to insist on such greatness if we are to build things of great significance.

I think that I have extreme prejudices as to what constitutes good use of objects, based on being burned by my own poor designs. However, I've never blamed the language.

I always blame myself, and return to the drawing board insisting on better. I analyze my failures carefully, and try to learn alternative program construction techniques to defeat the werewolves that attack me.

Hmpf

"My library is bigger than your library?" ;-)

[I am actually wondering how many books you have.]

I have no idea. It is a

I have no idea. It is a collecting habit I started in college when I asked a professor to buy books for the library, after I saw how badly our collection sucked. After compiling lists of books and reasons why they'd be good additions and getting ignored, I simply decided to build my own library. It started with half my summer earnings being invested, and I've never looked back. I usually buy over 100 books per year, and also read plenty of ebooks, papers and dissertations through ACM membership and free/libre/open content linked on CiteSeer or places like here.

For me

For me it was a collecting habit I had as a teacher (you basically get books for free, if you don't overdo it). Although, I admit I am rather underwhelmed with my library at the moment. Most teachers I know start giving books away at some point.

I've never blamed the language

However, I've never blamed the language. ... I always blame myself

I gather this is common in abuse situations.

Language self abuse

That has me laughing out loud, but I agree with the sentiment.

what are real world objects you are talking about?

I'm afraid that there is no salvation for you in the world of language-based computation. It is not possible to create flexible objects while we are sticking to programming languages (symbol-based computation).

The problem is that there are no real-world objects in the real world, unless you are going to stick to Platonism or its descendants.

The "real word" objects is how are we organize data about real world. So they are tags for the experience created by brain in culturally dependent way. "The map is not the territory" as it is said. The flexibility of natural languages in dealing with "real world" objects comes from the fact that we can redraw and fix our maps.

But there is no such luck in a PL. In a PL we deal with a map of the world, and we cannot redraw it during execution because the object boundaries are already selected. Those pesky programmers are needed to change the program according to real-world feedback. But the program itself deals with a closed-world model (the so-called "open world" models are mostly closed in non-traditional ways).

We can create a map of a map, or even a map of a map of a map. But it will just add rigidity to the system (not a bad thing for some scenarios).

In games, the difference can be seen taken to an extreme. In a game the program tries to simulate a world, and it is not possible to do anything outside of the map built by the game's creators. It is not possible to break a wall unless it was made breakable. The map can be quite detailed, but there is a limit to what can be coded. And the limits of the model easily push you out of the game trance.

On the other side, business applications are more or less honest in recording only important and incomplete data about the world, and if they ask for your birthday, be sure that this information will be used by the marketing department or for some other evil purpose, since every bit of data comes at some cost.

A solution might be possible with neural networks, since they can change behavior based on feedback, but I do not know the topic well. And we do not know how to build scalable and reliable solutions based on neural networks.

relationships between objects stay constant

I've been holding back on my own thoughts on the matter, in hopes that the discussion would lead to interesting points (which it did; Scala and Subtext are interesting, as are the books and links).

But I think that it is possible to make an intuitive PL that models the real world and runs without too much overhead (and which I'm trying to design).

Say that you are programming a toy truck in a toy world. You will need to have an object Truck, an object for the ground, and an object for location. There are relationships that come up: the truck drives on the ground, location maps to parts of the ground, and the truck is always at some location. We can implement the ground as a 2x2 char array, location as a pair of Ints, and the truck as the letter X. The function drive should take one parameter -- the location that you want it to go to; and for the pathfinding, we'll use an A* algorithm (assuming we also have obstacles). Let's say you want to pick up and drop off cargo (implemented as a 'C' character); the algorithm for Delivery(cargo, location) would be to drive(cargo.location); pickup(); drive(location); dropoff().

Now consider programming a controller for a truck in the DARPA challenge. You still have a truck, you still have the ground, you still have location. The truck still drives on the ground, locations still map to the ground, and the truck is still always at some location. You can't implement the A* algorithm, because you don't know exactly what's at each location; in fact, you're not too sure where the truck is either. But the algorithm for delivery is still the same: drive(cargo.location); pickup(); drive(location); dropoff(). How each of those individual functions is implemented may change, but the algorithm does not.
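
A minimal sketch of that claim (hypothetical names): the delivery algorithm is written once against the truck/location relationships, and only the implementations of drive/pickup/dropoff differ between the toy world and a real controller.

class ToyTruck:
    def __init__(self, location):
        self.location = location
    def drive(self, dest):       # toy world: would be A* over the char array
        self.location = dest
    def pickup(self):
        print("picked up at", self.location)
    def dropoff(self):
        print("dropped off at", self.location)

def delivery(truck, cargo_location, destination):
    truck.drive(cargo_location)  # the same four steps for a toy or a DARPA truck
    truck.pickup()
    truck.drive(destination)
    truck.dropoff()

delivery(ToyTruck((0, 0)), (1, 1), (3, 2))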

I can't think of harder examples off the top of my head, but you mentioned maps. Maps can be defined to be a representation of the relative locations of things in a terrain. A map of New York City has the relative positions of the streets, the museums, etc. In a game, a "map" can contain the relative position of quests and buildings. Even though these two kinds of map are actually very different things (one's a physical object, another's just an image on a computer, stored in 0's and 1's), you can still use a map the same way to find out how to get somewhere.

There are two points that I want to make. First: if relationships between objects are constant, the algorithms that take advantage of those relationships are constant, as you saw in Delivery(). Actually, the typical definition of a "model" is that the relationships are the same. When we say "implement" we are actually creating a mapping between what we want to do and some programming construct that has the same properties/relationships.
Second: you don't need to describe the objects/relationships in full to write particular functions. There are plenty of things you can leave out -- if we just implemented the pathfinding algorithm, we wouldn't need to know whether the truck is holding cargo or not. Deciding what to implement and what not to is up to the programmer, but it's conceivable that we could have a library of relationships, with functions you can use whenever a relationship holds, so the programmer can pick the ones that fit the object representation he chose.

[esoteric] Maps and Vehicles

For some reason, discussion along these lines reminds me of the Taxi Programming Language, which employs an intuitive interface of passenger delivery. (Not that it has any practical application).

I think this thread suffers from a lack of specifics. Knowledge representation and modeling are, to be sure, interesting topics. But I can't connect the dots. I'll be the first to admit that there are serious drawbacks with the current slate of programming languages and design techniques. Lots of interesting languages, with various strengths and weaknesses. I can't help but think that the emphasis on intuition and the real world obscures the fact that we are dealing with symbol manipulation. Any parallels between language and reality are by analogy. And analogies tend to fall apart when you examine them too closely.

I think that the comparison of the problems with OOP to that of ancestry determination in Prolog masks the fact that Prolog can be a pain to use for problems that fall outside of the domain that it happens to excel at. Without degenerating into platitudes, it would be nice to carve out some specifics in terms of syntax and semantics. Personally, I like the direction of Oz in integrating the concepts of logic programming with a multi-paradigm approach. But the relational/declarative features of Oz may not be what you have in mind.

Knowledge representation

Well, I'm not too sure what to say about your comment about specific examples. I could pull out a real-world problem from my field (bioinformatics) and say how I think the relationships lead to reusable code, but the "real"ness of the problem is proportional to the length of the comment, so I dunno if people will be interested enough to analyze it. Let me know if you think it's an interesting issue that is worth looking into; otherwise, you'll have to wait till I'm done with the language I have in mind.

You mentioned knowledge representation, my take on it is that it is too focused on trying to use it for reasoning rather than figuring out what we can do with it. Typically you can express everything you want with first-order logic, and if not, higher-order logic (or fuzzy logic if it's not precise). The problem was never with the representation, but with the inference. Sometimes people prefer a KR that is weaker, or one that's incomplete so that it can be used to tell you things that you didn't specify.
I'm all for using inference to figure out more stuff from the world, but when you start using logic to move blocks (STRIPS) it sounds like it is overstepping its role in an intelligent system. I think that's where most of the issues come up -- quantification/frame/ramification problems become debilitating when you are reasoning about details that could be handled with heuristics.
I read a paper a while ago about an AI system that ran a lot faster and better by doing reasoning at a higher level rather than planning each step of the state change(don't have the reference... sorry).

The point with knowledge representation is that it can already represent pretty much everything we want it to represent, but a lot of the attention is on how to use it to make inferences for planners. My take on logic programming is that it uses a particular inference engine to search for the solution we specified with logic. Note that neither of these uses the predicates to represent knowledge about our program to help us write code well.

And analogies tend to fall apart when you examine them too closely.

I would argue that analogies work where the relationship between objects holds, and they fall apart where the relationship no longer holds. A car is like a bicycle in that both can be used to move a person from one location to another (the relationship between the car/bike, the passenger, and the location is the same), so you can use either one for transportation. But they are operated differently (the relationship between a car and its pedals is different from the relationship between a bike and its pedals), so a person who can drive may not know how to ride a bike.

So anyways, I'll definitely share the language on LtU when it's done, since the two main ideas that I have should be novel and interesting (as I've gathered from this thread and the previous one I started). But it might take a while without any help. So here's a shameless request for people to discuss specific implementation ideas / help with design =P.

It's a false dichotomy

In OOD, if you have a hammer and a nail, you have two classes Hammer & Nail. Proper design forces you to choose a place to implement the behaviors. This leads to expressions like:

hammer.drive(nail)
hammer.remove(nail)

or

nail.insertWith(hammer)
nail.removeWith(hammer)

The tasteful designer chooses the first set, since it seems more natural. Natural until the next day, when there's no hammer, but a brad/nail driver and a pair of pliers. Now we have:

nail.insertWith(brad_driver)
nail.removeWith(pliers)

A functional approach (like you use above w/ drive()) requires no such choice, only that the implementations exist somewhere.

drive(nail, hammer)
remove(nail, hammer)
drive(nail, brad_driver)
remove(nail, pliers)

In essence, the act of driving a nail into a wall with a hammer (satisfying as it may be) is no more a property of the hammer than of the nail.
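
A rough C++ sketch of this (the struct and function names are placeholders, not anyone's actual design): the behaviours are free functions that merely have to exist somewhere, and supporting a new tool means adding an overload rather than editing a class.

#include <iostream>

namespace shop {
    struct Nail {}; struct Hammer {}; struct BradDriver {}; struct Pliers {};

    // The behaviour belongs to no single class; each overload just has to
    // exist somewhere the call site can find it.
    void drive(Nail&, Hammer&)     { std::cout << "hammering the nail in\n"; }
    void remove(Nail&, Hammer&)    { std::cout << "pulling the nail with the claw\n"; }
    void drive(Nail&, BradDriver&) { std::cout << "driving the nail with the brad driver\n"; }
    void remove(Nail&, Pliers&)    { std::cout << "pulling the nail with pliers\n"; }
}

int main() {
    shop::Nail n; shop::Hammer h; shop::Pliers p;
    drive(n, h);    // found via argument-dependent lookup
    remove(n, p);   // no class had to be reopened to support this pairing
}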

Ya, I agree

I think we've all reached the consensus that objects in current OOPLs aren't compatible with real-world objects.

Functional approach?

I wouldn't necessarily consider that a functional approach. Though I agree that the traditional OO problem you describe sucks and would be better dealt with via multi-methods as you describe.

I don't think he was

I don't think he was suggesting multimethods; I think he was suggesting that whatever the implementations of nail, hammer, etc. are, they must have some minimal knowledge of each other's implementation. The abstract interface can be specified separately, but the implementations must be provided together, or tied to each other somehow, in order that the functions can actually compute something meaningful.

Need a better example

I disagree with your example. Specifically, when you have a nail you are removing with a hammer, you are removing it from something.

This is analogous to any example where you are removing an element from a collection with a driver.

Therefore, it'd still be object-oriented even in your functional example; you merely left out the object: wood.drive(nail, hammer).

I really don't see why you'd need any sort of dynamism on the part of the nail or the hammer in this case. I don't think you'd ever want to call remove on the nail or removeWith on the hammer, because an element (when it isn't a nail) can potentially be a member of multiple collections, and by doing so you would have to maintain those collections in the wrong objects, beneath the wrong abstractions.

Don't forget context and actor

An actor/entity (not necessarily 'you') is removing a nail from the wood with a hammer in a context/environment.

Influence from actor/entity includes joint rotations, balance, mechanics, etc. E.g. two different robots would often have two different requirements here.

Influence from the context/environment includes laws, constraints, requirements. E.g. one might need to change hammer-swinging behavior in a busy airport, or based on traffic in the immediate vicinity.

Neither the functional nor OOD approaches are very good for this problem. Due to the expression problem, neither may be adapted readily to account for new rules and regulations from the context and different constraints from the actor. As with domain model and planning systems in general, some sort of logic programming or declarative meta-programming is far more appropriate.

I'm not convinced

I'm not convinced meta-programming is necessary. I feel this is really just a simple software engineering problem.

For example, upon further reflection I now sort of feel that wood.remove(nail, hammer-based parameters (force, friction, etc.)) would also be called by the hammer.

Another actor, a person, would call hammer.remove(nail, wood, person-based parameters (the person's force, etc.)).

wood already understands the context it exists within, so that isn't necessary in this example.

I'm not convinced that there is an expression problem here, or that new rules and regulations will really influence this framework, but I'm willing to see a further example or further elaborations.

Simple?

The wood.remove(nail, hammer-based parameters (force, friction, etc.)) would be called by the hammer. The person would call hammer.remove(nail, wood, person-based parameters (the person's force, etc.)).

The problems with your approach: (a) contrary to your assumption, there is no simple fixed set of parameters (force, friction, etc.) sufficient to determine how one should go about removing the nail from the wood. (b) the information-flow requirement is by no means unidirectional. The hammer, nail, and wood all feed back into influencing the behavior and plan of the person. E.g. the person needs to know how to hold the hammer, and how much force can be applied without risk of damaging the wood, the noise profile for a given action, and so on.

Given any set of costs and constraints, it becomes very difficult to know exactly which data will feed into any action. How much noise are you allowed to make (context)? Does the wood need to remain intact? What about the nail? Safety considerations? What are the current physical capabilities of the robot - i.e. how can it actuate its arm? how maneuverable is the platform? In time vs. fuel/energy cost, which needs to be conserved more? The problem becomes considerably harder when dealing with dynamic changes in capability, dynamic changes in policy, and unknowns. For example, a robot's wheel could be damaged as it moves to complete the mission, or perhaps several children have some probability of playing around near the nail that varies with the time of day.

The number of variables that feed into a planner for real world behaviors, or even a reasonable simulation of them, is enough to fill a database. This is true even when all you are doing is using a hammer to remove a distinct nail from a clearly identified piece of wood and you're starting right there in front of it.

Real life makes things even more complicated by introducing needs for recognition, positioning, approach and angle management, and so on. For a typical robot, removing a nail from a piece of wood will probably require path planning to decide how to approach the nail in a manner that will allow a robotic arm to effectively angle a claw-hammer to remove said nail.

I'm not convinced meta-programming is necessary.

It isn't necessary. One could go straight for logic programming with side-effects.

As I understand it, the primary difference between 'declarative meta-programming' and straight-up logic programming is that with declarative meta-programming you have an intermediate executable 'plan' construct that can be saved or compiled, allowing for staged processing. Both approaches allow for re-planning on the fly (and a plan can include contingencies and making different plans later, allowing feedback).
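
As a very rough sketch of what such an intermediate 'plan' value might look like (this is not any particular planner's representation, just an illustration of plans-as-data): the plan can be built, inspected, or stored in one stage, and executed, with per-step contingencies, in another.

#include <functional>
#include <iostream>
#include <string>
#include <vector>

// A plan is ordinary data: it can be saved, inspected, or handed to a
// later stage before anything runs. Each step carries a guard and a
// contingency, which is where re-planning/feedback would hook in.
struct Step {
    std::string description;
    std::function<bool()> guard;        // is this step still applicable?
    std::function<void()> action;
    std::function<void()> contingency;  // fallback (could trigger re-planning)
};
using Plan = std::vector<Step>;

void execute(const Plan& plan) {
    for (const Step& s : plan) {
        std::cout << "step: " << s.description << "\n";
        if (s.guard()) s.action(); else s.contingency();
    }
}

int main() {
    Plan p = {
        {"approach the nail",
         [] { return true; },
         [] { std::cout << "  moving into position\n"; },
         [] { std::cout << "  obstacle detected, requesting a new route\n"; }},
        {"pull the nail",
         [] { return false; },   // pretend the grip check failed
         [] { std::cout << "  pulling\n"; },
         [] { std::cout << "  grip failed, re-planning with another tool\n"; }},
    };
    execute(p);   // the same Plan value could instead be serialized or compiled
}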

Anyhow, while meta-programming isn't necessary, getting away from OOD - in particular OOD using domain-objects or domain-based classes - is pretty darn well necessary. Unlike logic programming, OOD is not readily extensible to deal with new concerns as programmers become aware of them.

Alan Kay on objects

Alan Kay once said in a talk that object-oriented programming ended up being something that he didn't intend when he originally coined the term. He said something to the effect of "It's not about the objects. It's all about the goo that goes between objects," and I tend to agree.

Smalltalk does indeed capture that spirit to a good extent, in spite of the hammer/nail design asymmetry. Messages have their own existence in Smalltalk, apart from the code that gets run when messages get sent to an object. Comparing that with the functional style, it is kind of the difference between the specification of a function invocation and the code that the invocation eventually runs.

software+wetware complex

Robots are extremely inflexible. In industrial settings the environment is organized so that robots can work; the factory floor becomes one big modular bench.

And a human herder is still needed to oversee the work, since the robots fail to adapt from time to time despite the specially organized environment.

The reason is that symbolic computation cannot leave its source model: once we start manipulating symbols, we have already lost reality. The inability to go from the symbols back to reality and to modify the symbols cannot be overcome in a running OOD program, since OOD assumes that the problem is selecting the correct symbols (and no model is correct everywhere).

If you look at Paul Graham's essays on LISP and Arc, you see that the motivation is to create programs whose model can be changed on the fly. His point seems to be that since every running program has a rigid model that cannot be changed by the program itself, the programming environment should make it easy for the programmer to change that rigid model. So instead of adaptable software, we get a flexible and adaptable software+wetware complex. And I also think that this is the only way if we stick with symbolic computation (which is inherently rigid by itself). The question is what tools we will use to create this adaptable complex and what the unit of change should be.

And I think that we will stick with symbolic computation for a long time (except for boundary cases), since it is the only way we currently have that allows us to understand what we are doing.

The funny thing is that this happens to humans in society as well. Humans start manipulating only symbols, and as a result their behavior becomes as inflexible as a robot's. They lose the ability to alter the meaning of symbols and to create new ones (legal systems are a good example: they live in a world of symbols and have trouble leaving that world even when symbol-based inference does not match the situation well). Zen, Daoism, many schools of Yoga, NLP, and friends try to reverse this trend for their followers (but they use such mystical language).

object properties that can be altered

The problems with your approach: (a) contrary to your assumption, there is no simple fixed set of parameters (force, friction, etc.) sufficient to determine how one should go about removing the nail from the wood. (b) the information-flow requirement is by no means unidirectional. The hammer, nail, and wood all feed back into influencing the behavior and plan of the person. E.g. the person needs to know how to hold the hammer, and how much force can be applied without risk of damaging the wood, the noise profile for a given action, and so on.

If you want to account for feedback, then the best would be to define object properties that can be altered by external objects. Then the behaviour of the object will depend on these properties. For example: the 'noise' perceived by the robot when hammering, the 'visual impression' left by wood chipping, etc.

You need concurrency-safe mutability or dataflow to handle these things in a program, unless you are ready to restart the whole computation for each sample. Which gets me to think, isn't dataflow programming really a whole mess of small programs tied together? Only they focus more on propagating a message (state change) rather than implementing a behaviour (OOP).
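
Here is a minimal, single-threaded sketch of that "propagate the state change" style (the Cell type is invented for illustration and deliberately ignores the concurrency-safety issue just mentioned): external objects write the property, and everyone who cares is notified instead of the whole computation restarting.

#include <functional>
#include <iostream>
#include <utility>
#include <vector>

// A dataflow-ish cell: holds a value and pushes every change to its
// listeners, rather than implementing behaviour itself.
template <typename T>
class Cell {
    T value_;
    std::vector<std::function<void(const T&)>> listeners_;
public:
    explicit Cell(T v) : value_(std::move(v)) {}
    const T& get() const { return value_; }
    void subscribe(std::function<void(const T&)> f) { listeners_.push_back(std::move(f)); }
    void set(T v) {
        value_ = std::move(v);
        for (auto& f : listeners_) f(value_);   // propagate the state change
    }
};

int main() {
    Cell<double> noise_level{0.0};   // e.g. the 'noise' perceived while hammering
    noise_level.subscribe([](const double& db) {
        if (db > 85.0) std::cout << "too loud, slow the swing down\n";
    });
    noise_level.set(90.0);
}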

Which gets me to think,

Which gets me to think, isn't dataflow programming really a whole mess of small programs tied together? Only they focus more on propagating a message (state change) rather than implementing a behaviour (OOP).

Isn't OOP really a whole bunch of small programs tied together? Only it focuses more on configuring the application (late-binding, modularity) than implementing the steps (imperative)?

>;-)

Dataflow is far too broad a subject (too many different models) for me to actually answer your question. But propagation of changes over time is at the heart of it. And we should be careful about how we frame that statement, because 'just' is a dangerous word. The difference between lazy and strict evaluation is 'just' a small tweak in the interpreter, right?

to account for feedback, then the best would be to define object properties that can be altered by external objects

The problem is more difficult than simple feedback because we really want the anticipated profile to influence behavior before we swing the hammer. Additionally, it is unclear in OO where the 'rules' for computing, say, the noise profile would go - it is influenced by so many things: type and position of wood, width of nail, shape of room.

This is a class of problems that has interested me for many years, but I've no viable solution for it, despite recent attempts. (I wouldn't consider a solution 'viable' unless it maintains security constraints, scalability properties, stability, controllable performance, and at least doesn't prevent real-time properties for targeted end-to-end reaction paths.)

The approach you outlined

The approach you outlined strongly reminds me of the concept-based (generic) programming school as popularised by Alexander Stepanov in the C++ community. Did you look into evolving from there? Is there a significant difference compared to your approach?

same motivation, different implementation

I think the motivation is the same -- capture the essence of the algorithm for modularity and reusability. But I'm not convinced that OO languages these days can express these concepts well.

One key feature that I want to implement is a programming-library search: you specify a description of an object, and it automatically finds all the functions you can use with that object. With templates, you can apply it to any object, even ones that wouldn't make sense to apply it to. With inheritance, you can only apply it to derived classes.
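
To illustrate the template half of that (a toy function, not from any library): nothing in an unconstrained template's interface says what the argument must support, so the mismatch only surfaces, if at all, deep inside the instantiation.

// An unconstrained template accepts any T; the "contract" is implicit.
template <typename T>
T average(const T& a, const T& b) { return (a + b) / 2; }

int main() {
    average(3, 7);   // fine
    // average(std::string("a"), std::string("b"));
    // ^ deduction succeeds, so the call looks legitimate; the error only
    //   shows up inside the template body (no operator/ for std::string),
    //   far from the interface a library search would want to inspect.
}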

That's what C++0x concepts address

With templates, you can apply it to any object, even ones that wouldn't make sense to apply it to.

That's what C++0x concepts address. They are the interfaces of generic programming in a way. They are much closer to Haskell's type classes than to the mainstream notion of OO interfaces, though.
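
For what it's worth, here is the same toy example written with C++20 concepts, the standardized descendant of the C++0x proposal discussed here (the two designs differ in detail, so take this only as a sketch of the idea): the requirement is now part of the interface, and a tool could in principle check it to answer "which functions accept this type?".

#include <concepts>

// The requirement is stated in the interface rather than buried in the body.
template <typename T>
concept Averageable = requires(T a, T b) {
    { (a + b) / 2 } -> std::convertible_to<T>;
};

template <Averageable T>
T average(const T& a, const T& b) { return (a + b) / 2; }

int main() {
    average(3, 7);   // ok: int satisfies Averageable
    // average(std::string("a"), std::string("b"));
    // ^ now rejected at the call site with "constraints not satisfied"
}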

Ralf Lämmel's paper "Software Extension and Integration with Type Classes" examines Haskell's type classes (a lot carries over to concepts) in the context of software extension. He does some comparison to alternative (particularly OO) approaches. You will be interested in at least sections 2.3 (Tyranny of the dominant decomposition) and 2.2/4.1 (Retroactive interface implementation).

I think these two are related both to your original question and to your goal of reusing algorithms. It is not natural at all that an object can't be reused for a task it is suitable for just because the creator of its class didn't think of that use when writing the class (in the case of custom interfaces, he probably couldn't even have known of their existence). In a language emphasising genericity similar to the STL, I'd rather not have to resort to design patterns to make objects work with my algorithms.

cool, thanks!

Lots of stuff to read =D.

The origin of objects from Photoshop

In a totally screwed sense one could suspect that objects arise from clipping functions in Photoshop. I wonder if Photoshop isn't just a natural model of a mind: a central workplace with a neutral background where objects can be clipped and contemplated. Then there are lots of tools at the periphery that can be used to manipulate those objects. O.K. Photoshop lacks autonomy and free will but according to the latest philosophical trends the latter doesn't exist anyway.

The history of AI tells us that people have gone mad about logics and the "laws of thought" and considered the mind a subtle rule-based engine. The mind is full of rules, just like an 18th-century automaton, but powered by neurons. Even objects in the OO sense mostly reflect the 20th-century "linguistic turn": OOD is about stating sentences in natural language and identifying the nouns, verbs, and properties. Those become classes, methods, and object attributes.

With this approach we never enter the contemplative realm of Photoshop but rather get sucked into Smalltalk.

I've told people many times

I've told people many times before that Photoshop is the best programming language in the world.

My academic friends tell me "that's impossible, because of the GUIs directed-ness".

How disappointing when academics lack the ability to think outside what they read in papers by others.

As somebody who got paid

As somebody who got paid for a while to use Photoshop, I love it, but I switched into PL because I found the design tool chain to be brutally redundant, brittle, and write-once.

Photoshop (and Flash authoring, and Illustrator, ...) are tackling tough problems. The last few versions of Photoshop have made enormous leaps in these areas... but there's still a lot of room for improvement. When you say the best PL, I assume you don't mean the underlying scripting languages, but the direct manipulation interfaces. A lot of traditional PL-like ideas are creeping in, but, again, there's room for a lot more.

One word: Sketchpad.

One word: Sketchpad.

I wish that was my thesis :)

I wish that was my thesis :)

I would love examples of

I would love examples of brutally redundant, brittle and write-once portions of the design tool-chain.

And, yes, I am talking about the direct manipulation interfaces, although I think "direct" is a misnomer, since Photoshop enables a lot of indirection.

Some off the top of my

Some off the top of my head.

1. Once you use Photoshop for a while, you develop multi-step tricks. What you really want is something like a PBD tool that observes what you're doing and then synthesizes the relevant variables for tweaking later on (imagine mixing PBD with some of Bjorn Hartmann's tunable-variable work). For example, I worked with brushes a lot and developed a basic technique for them: the task of creating a brush, tweaking its parameters, and using it is too linear (and in one direction) and too long.

2. The tweakability of parameters in general. No effect should ever be final: instead of applying an effect, it should work in the data-flow style, where the effect is just listed as a transform that I can tweak later. Similarly, you shouldn't lose information when you size an image down and then back up.

3. This is getting a *lot* better, but the disconnect between vectors and pixels, especially when layering effects on top, still gets annoying. They are fundamentally different, but not when you're doing generic operations.

There are more, but three to start with.

The no effect should ever be

The 'no effect should ever be final' constraint is a small symptom, not a disease. A competitor to Photoshop, written by one programmer, Pavel Kanzelsberger, already supports this. In fact, it has for about 4 years. The fact that billion-dollar-a-year Adobe can't keep up with one programmer is disappointing, but it shows that Adobe's programming model lacks support for real-time effects. By natively incorporating this as a requirement, Pavel has single-handedly trumped some of the best image processing gurus in the world. The takeaway here is that system design matters.

Currently, there is a workaround in Photoshop which is to clone the image at a particular stage. This approximates the brittleness you get by doing FP in a CPS-style. What you are advocating is the removal of this brittleness, which I agree to. The original Photoshop model hasn't advanced much since PS 5, despite rebranding to CS X.

You also seem to be requesting something more powerful than actions and action sets with your Programming by Demonstration (Yes, THAT PBD?). You want an active macro recorder that analyzes logs of inert macro recordings - and then offers Statistically Improbable Macros (SIMs, making up a TLA here) to the end user? To be meaningful, you'd also want to link directly back to the instances in time that define each SIM example instance. In this way, the end user doesn't have to look at a command language, but can instead view their own intuitive design actions through examples they've created. So what you are really saying to me here is that Photoshop should fold its notions of History (self-explanatory) and Bookmarking (saving selections, loading image files into selections) into a very rich hypermedia system.

Unless I'm putting words in your mouth (giving too much credit?), I agree. My intent here is to flesh out your thoughts on this matter, though.

Adam & Eve doesn't describe hypermedia in a systematic way.

I can comment on more of these if you'd like to have a back-and-forth.

1. I'd give Adobe a bit more

1. I'd give Adobe a bit more credit :) They're doing a lot (e.g., natural brushes may be coming soon!) and the benefits from their studio integration are great.

2. Your analogy to CPS is how I think about it as well :) The history starts to help, but it is just a slightly richer form of escape continuation.

3. There was actually an image editor (80s?) that started to incorporate PBD (you got the TLA right :)) -- it's not a new idea in this space :) I'm not sure why you want to know about improbable macros -- I want to know about probable ones (... and wrote a paper draft about a program analysis suggesting the road to exposing such knowledge for arbitrary web apps!), and, from those, expose what the tweakable knobs are. How this maps into history, bookmarking, etc. seems more of a legacy implementation detail than an algorithmic or systems design problem...

But yea.. I think we're on the same page. Once you move into the 3D space, it gets even worse in practice (I don't know how, say, the lighting guys at Pixar stay sane). Thought of some more things I disliked: once you start thinking about moving items around, guides were a weak and flat constraint language. Furthermore, there is a projection problem in how to extract part of one image and put it into another (which remains a PBD/HCI/slicing problem, even if you take a data flow approach).

Another inspirational (linguistic) place to look is tangible functional programming and subtext. There was also some stuff on live and composable pixel shaders, but I never worked in that space so I don't have a good feel for how interesting it was.

My head isn't really focused on this space anymore, so I'm not sure I have a particular direction for this comment :)

Those who don't understand

Those who don't understand Hypermedia are doomed to reinvent it, poorly. Rich history and bookmarking is a data structures issue. Part of the reason the Web has so many flaws is that it was invented by physicists playing with computers. The HTTP URI scheme is pretty awful for "Web applications". Despite Roy Fielding's best attempts, HTTP still has its flaws, notably its fragment identifier scheme.

To give you an idea of its value, let's say the user wants to publish a brush made via PBD. He/she can also publish how that brush was synthesized and then later on used, because all those details were tracked by history and bookmarking. This would be a Photoshop "Lifestream" in David Gelernter's parlance.

Statistical improbability gives you the highest-probability macros for re-use over a long sample period. You're looking for outliers, by definition of searching a sample space for statistically significant behavioral demonstrations by the user. In the background, a log/trace utility captures macro primitives and something else analyzes them.

What image editor used PBD? Link to your rough draft?

Pixar tends to use very boring body animations throughout the movie, and then mixes in very custom body animations for a small set of subsequences. Otherwise, they concentrate on facial expressiveness and the angle of the face to the camera. This is just an observation.

[Edit: Also, I think the CPS analogy isn't flattering. It is a smell of bad I/O subsystem design. I explicitly prefer talking in terms of Landin's J operator for this reason, as it is an uber-goto "jump" operator capable of making arbitrary jumps. The arbitrariness is constrained with semantics that explain why on earth you'd allow such a jump to take place, and what jumping from A to B even means. CPS doesn't actually model this well, as it is primarily a technique for compiler optimization. Using it for programming requires programmers to edit code as a side effect of controlling side effects. That is two levels removed from the problem.]

[Edit: by the way, it is not that I'm failing to give Adobe credit. Rather, I'm simply saying correct system design up front can make a single programmer more productive than a small army.]

Tracking actions (especially

Tracking actions (especially generically) and reifying them as intuitive first-class objects (or providing even more fun linguistic support) are both tricky. Furthermore, adding persistence / global naming is a big performance headache, as the continuation server guys have rediscovered (Fielding wasn't shooting from the hip when he went with REST -- the context was inventing Apache at the same time, I believe!). I'm still not sure about how to do it well in a rich setting -- Seaside starts to support hierarchical continuation-based components, which is a good second step, but this area still feels like the stone age. We probably agree that the 'ideal' editor interface probably shouldn't be very far from the ideal web one -- imagine Photoshop with wiki-like features -- but I've found this space to be really, really challenging once you want to build robust / large apps with layers of application semantics. Photoshop might get some simplifying assumptions if you assume only one user, but probably not enough.

I don't remember which editor incorporated PBD unfortunately, though it was manually done (at the framework level) which is passe relative to modern attempts like CoScriptor/MashMaker etc. I think it was described in an early CHI or UIST paper talking about treating PBD as a slicing problem on the history of actions.

I can give you a draft of my paper if you email me -- it's how to do a blackbox (dynamic) control-flow analysis on arbitrary web apps to figure out the different UI states and how they transition. Stuff like k-CFA doesn't work because it's too low-level to be useful for a lot of web needs, and, considering the web is a hodgepodge of JS/DOM code and PHP, it fails in practice anyways (and why I'm skeptical of many papers claiming good analysis results!). The importance is that it's a fundamental analysis at a usable abstraction level for web apps. Stuff like PBD can take advantage of it -- we ended up making a demo to translate natural language commands into sequences of web application actions, and we implicitly learn the action graph that a PBD tool would extract along the way (the original draft focused on aiding PBD, but reviewers found it unimportant!). It's not on my site because I'm struggling to get it published (the joys of grad school?).

Btw, I'm not convinced about the particular example in terms of productivity. Without knowing better, it's like comparing TinyOS development practices to Vista ones; extrapolating is hard. I agree in general, but 1 programmer vs an army is pretty strong, and it's unclear how those ideas in particular compose with a comparable system.

Btw, I'm not convinced about

Btw, I'm not convinced about the particular example in terms of productivity. Without knowing better, it's like comparing TinyOS development practices to Vista ones; extrapolating is hard. I agree in general, but 1 programmer vs an army is pretty strong, and it's unclear how those ideas in particular compose with a comparable system.

At work, we have 5 programmers supporting 45 (and growing) clients in one of our product divisions. For the concepts we support well, implementation is fast. I'm in charge of our next-gen software, and untying the existing interdependencies in the bad parts is a pain. The thing is, the bad parts are a small percentage of our code base, but a large reason for us not being able to innovate any further. We've reached a glass ceiling that requires massive refactoring. As an academic, you are probably wondering how this happens. Usually, these mistakes occur when the CEO requests something for a sales demo, and then 10 years later you're still stuck with the sales demo kludge. Right now, most of my productivity is sapped trying to remove 10 years of sales demo kludges. We still have a very good architecture, but we want our next architectural leap forward to be 30 years ahead of the rest of the industry.

Larry Ellison, the CEO of Oracle and one of the shrewdest businessmen in IT, has always believed in keeping the core Oracle team to under 50 people.

Moreover, if you look at projects that had a lot of programmers (Windows Presentation Foundation), their architectures have many interlocking interdependencies. WPF, in particular, had no fewer than 350 programmers on it at any given point in time, and as many as 500 at one point.

extrapolating is hard.

But experience working on small teams is not extrapolation. It is strictly a matter of being directly affected by the consequences of your actions and knowing your actions are the root cause of your pain. You can't read a study by CMU's SEI that studies stuff like this. There is no good way to measure whether people realize they're at fault. For this reason, Dr. William Edwards Deming invented The Red Bead Game Experiment to demonstrate that people are poor at assigning fault, especially when the fault occurs as essentially chaotic behavior of the system. In other words, some failures occur due to the design of the system itself. As usual, Joel Spolsky speaks the truth, "At no point in history did a programmer ever not do the right thing, but [the Office file formats are still messed up.]"

Seaside starts to support hierarchical continuation-based components, which is a good second step, but this area still feels like the stone age.

I mostly feel that various practical proposals for continuations are a mistake. At least JBoss Seam supports the continuation model-redux in an OO form using the Workspace-Conversation metaphor.

Strange about WPF. Back at

Strange about WPF. Back at Macromedia, the big projects had less than 1/10th of that many feature engineers (maybe more like up to 10 A-feature developers). The rule of 150 advises against the WPF model (though I'm sure it's compartmentalized, e.g., driver devs are separated from say frontend devs).

However, getting back to the point, I don't get what either of us experimenting with new software systems has to do with the claim that the architectural and linguistic design of the photo manipulation software you linked enabled the individual devs to be as effective as an army of Adobe ones. My first thought, actually, was that this was due to the many perks of dealing with a smaller code base. My second thought was that the productivity could be deceptive: the features being added to Photoshop are more interesting nowadays because the boilerplate/basics are done (outside of fundamental things they didn't think of early on). E.g., if I remember right, there's even an internal scripting language!

I think you said my point

I think you said my point well. A small army fighting with stones (still creating the boilerplate API) will seem feeble when facing one chain gun. However, one chain gun doesn't guarantee an infinite supply of ammo. Also, once the small army upgrades to chain guns, the blue civilization in Age of Empires takes over the single red one and wins.

Adobe's boldest move so far was basically Sean Parent effectively saying, "QT and frameworks like it don't cut it any more."

And, yes, Macromedia is another good example.

BTW WPF wasn't as compartmentalized as it should've been. They got the basic division correct, milcore.dll and PresentationCore.dll, but PresentationCore was designed wrong (IMHO).

Rule of 150 - First time I heard it. Dunbar's Law on wikipedia - very interesting.

I found the 150 rule in

I found the 150 rule in Tipping Point (Malcolm Gladwell) -- a lot of useful tidbits for trying to popularize something. I don't practice them (doesn't fit well with my research style) so I can't lend too much weight in how easy the ideas are to apply, but it was a fun read :)

In a way, Photoshop's direct

In a way, Photoshop's direct manipulation interface is an implementation of Landin's J. Operator (the predecessor to modern continuations). Things such as selections, actions and action sets, history and layers all provide pretty much arbitrary indirection.

It gets artists to become programmers. This is pretty much the "grocery clerk as DSL programmer" pipe dream some computer scientists had in the 60s through 80s.

Also, that thesis sounds very good. I've had the author's viewpoint for years, and it is exciting to see an academic flesh it out. Should be a great read, and I will look forward to giving the author any feedback I can. Spreadsheets are very much analogous to how I think of programs today.

Edit: by the way, I'm a huge visual-languages junkie, so informally designed visual programming languages like Photoshop are intriguing because they were organically created to solve visualization and artist-workflow problems, not programmer control-flow problems.

Excel and visual languages like it suffer from the same design flaw: unconstrained modelling. Back in the '80s, Maureen Thomes wrote a book describing a bullet-proof way of constructing spreadsheets called the "staircase layout", but it defeated the point behind laying out spreadsheets in a way that was naturally consumable for not only analysis but also reporting.

constraints can engender success

photoshop is constrained to pixels, basically. all the tools and abstractions are based on that.

how do you envision a general purpose language along those lines, that isn't about graphics?

I think that if you look

I think that if you look carefully you will see that often the great scientists, by turning the problem around a bit, changed a defect to an asset. For example, many scientists when they found they couldn't do a problem finally began to study why not. They then turned it around the other way and said, ``But of course, this is what it is'' and got an important result. So ideal working conditions are very strange. The ones you want aren't always the best ones for you.

Richard Hamming, You and Your Research

By the way, those of you who agree that "Photoshop is constrained by pixels" would be mistaken.

pixels:Photoshop :: telescopes : Astronomy

A pixel is just a picture element - a data structure of some kind used for describing an element in a set.

Pixels are not a defect.

prepositional connotations?

constrained By vs. To pixels - the former to me implies an upper limit, the latter is more about foundational atoms.

You say Photoshop is great, which is intriguing, but I haven't yet understood from any of your notes precisely how it is so great, and I'd like to understand.

What level of expertise do

What level of expertise do you have with Photoshop?

I am very big on providing people with specific examples of things, so I'd be happy to elucidate. However, knowing my target audience proves crucial. In my experience, most programmers have never even used this program and even more think it is just some fabulously expensive toy for designers creating corporate logos.

If you've never used Photoshop before, I'd recommend the web comic animated series You Suck At Photoshop.

As an aside, I once gave somebody a really good example of how not to create a DSL by citing Tivoli Storage Manager's configuration language, and their excuse for not understanding my example is that "I've never used the program, so I am not sure how big of a deal this design flaw you point out really is".

i have used it

i'm not a graphic designer, but have done basic photo editing and art creation in it. and have used mac paint style programs since the original mac. so i basically know about photoshop's abilities wrt pixels, brushes, filters, layers, plug-ins.

Objects are not objects

Why are objects that we use in programming so vastly different from real-world objects?

Because they're not real-world objects. That's the mistake. I am personally very dismissive of the idea that objects in OOP should aim to simulate real-world objects. To me, object-oriented program design is about managing complexity and making libraries more user-friendly.

An object which does correspond to a real-world concept is better understood as a representation of the aspects of that concept that are relevant to the application. Sometimes you invent objects which only make sense in the world of computation - and that's OK! There's no such thing as a StringBuffer or Socket or Event in the physical world, but you can't deny they're useful classes.

Don't fall into the trap of philosophising; remember that you're just writing code.

Domain Modeling Not Necessary?

I agree that you don't need domain modeling for most applications.

What? Really? Are people being paid to rewrite "grep" or frob some trivial report generator?

Hell, even in my days slumming in Web development, complex domain modeling was necessary. Just boring companies, organizations, projects, discrete vs. multi-year projects, funding sources, user accounts, group accounts, credit pulls and merges, complex loan products, lending criteria, property types and on and on and on....

Not just some crap one types right into C or Python or Java or any other language. I had one accounting system firm at a non-profit simply back out of their contract when confronted by the complexity of the domain model.

Where does this idea of "domain model not necessary" come from? [And, no, I am neither some modeling nor OO fanatic by any stretch of the imagination.]

- S.

OOps.

I overstated that. Domain modeling is involved in fusing data to make useful decisions or predictions. For example, to flag alarming tax reports, we do need a decision function for what constitutes 'alarming'. So most applications do include some implicit forms of domain modeling. What I mean to say is that most applications do not need to include domain models.

Where does this idea come from? Well, you don't need to model a printer in order to send PostScript to it. You just need a pipe. You don't need to model a keyboard to receive input from one. The domain model is whatever lives in your client's head - the printers, keyboards, monitors, robots, motion, traffic, traffic lights, mountains, mole-hills - possibly shoehorned into some sort of relational schema.
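
To make the printer point concrete (assuming a Unix-like system where an lpr command reads a document on standard input; adjust for your spooler): the program's entire "knowledge" of the printer is a pipe and some bytes.

#include <stdio.h>   // popen/pclose are POSIX, not standard C++

int main() {
    FILE* printer = popen("lpr", "w");   // hand the bytes to the print spooler
    if (!printer) return 1;
    fputs("%!PS\n"
          "/Helvetica findfont 24 scalefont setfont\n"
          "72 720 moveto (hello, printer) show showpage\n", printer);
    return pclose(printer);              // nowhere did we model the printer itself
}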

Applications don't need those models, not unless they're going to be performing rich searches to make 'creative' decisions. Now, I wouldn't reject a paradigm that leveraged rich domain models to support planning and creative decisions - and I've made a few stabs at generative grammars for that sort of purpose. But that's far beyond state-of-the-art. At the moment, most applications don't need domain models, and would not be able to leverage them.

Thus, an OO class whose type names a domain element is almost certainly a mistake: even if your application is among the exceptions that do need or can effectively benefit from a domain model, you could do much better than OO.

Since GUI programming seems

Since GUI programming seems one of the best fits for OO methodology, could you tell me how you would program a windowed GUI -- and provide an API for it -- without naming a window, a button, a font...?

A model of GUI elements is

A model of GUI elements is not usually considered a domain model. We don't often fill databases full of information about windows and buttons, for example, and the sort of 'decisions' and 'predictions' involved aren't directly related to any client requirements. There is a great deal of 'accidental complexity' involved. I'll grant that it's quite borderline, though.

But to answer your question, there are plenty of ways to provide a GUI without a model of windows, buttons, and fonts. Naked objects was already mentioned here. Tangible values, developed by Conal Elliott, would be a choice in the same vein as naked objects, but for functional values. HTML largely declares a GUI in terms of its content (though that still requires a transform between models, i.e. to turn databases into text). Some technologies simply turn the code into a GUI via graphical programming environments (Smalltalk, LabView, Max, Croquet). A console serves as a simple UI that doesn't require naming any UI elements. (I'm not endorsing them just by listing them.)

GUI programming is not a good fit for OO methodology (but I'll agree that it's one of the best fits >;^). With OO, you will end up reinventing a reactive programming model (badly), and a concurrency model. You'll face challenges dealing with a mix of synchronous and asynchronous IO for both the user and for keeping the display up-to-date. You'll deal with corrupted state and glitches, where state of objects diverges from the model it is intended to represent. You cannot easily observe a change in the GUI based on tweaking the code underlying it, so debugging and testing is expensive. You get no help from the paradigm with accessibility, multi-language, and various other cross-cutting domain concerns.

which level of implementation?

i only sort of follow/believe what you are saying, as i read it. and i do agree that oo guis suck wrt concurrency and 'oh now i need the nail friction here' issues.

yet things like naked objects or tangible values have to go through some final actual gui/rendering system like x-windows or whatever, which do in fact (for good or for ill) have "window" concepts. so i'm not sure how that helps your argument if the things you say are different end up using the not-different foundation.

i don't see how html supports your "w/out naming windows, buttons, fonts" claim when it explicitly does mention buttons, checkboxes, menus, fonts, colors, iframes (windows), ...

so the light-over-my-head-ness of what you say is only at like 43%, i'm hoping to grok more.

Application level.

What is displayed can be content-driven, which frees developers from concerning themselves with positioning windows, managing layouts, and generally modeling the display elements. Inputs can also be 'content-driven'... e.g. a boolean input becomes a checkbox, and a string input becomes a textbox.

HTML I mentioned as 'largely' content driven. We could go much further in that direction than did HTML. HTML is a markup language, which (by definition of 'markup') mixes content and presentation. I mentioned it because it serves as a familiar example. HTML itself eventually moved more towards the GUI modeling direction by adding Cookies, scripting, and DOM. As an example, you look at an 'iframe' and see 'window', but a slight change in direction and we might have been looking at 'iframe' and seeing 'content transclusion' - i.e. a simple declaration that some external content should be included in this content.

And we don't actually need an underlying gui/rendering system 'like x-windows'. As a potential alternative, we could have underlying gui/rendering systems for 'application' objects be more 'like Seadragon', or more along the lines of object browsers and table browsers. The graphical programming environments mentioned above certainly qualify, some to lesser degrees than others.

You are correct that, at some stage, we do need a translation or templating system that will put pixels on the screen at the right place and right time. A windowing concept might be involved, but even if not we'll at least be using the domain concepts of time, colors, positions, perhaps even of monitors and GPUs, and some sort of integration with the user's input devices (mouse, keyboard, joystick, webcam, microphone).

And the real question is how much of this needs to be modeled by our applications. There is some difference between declaring a button (or declaring a 'unit event' input suitable for buttons) and describing a model of buttons (buttons go up and down, etc.).
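
A tiny sketch of the content-driven idea above (printing to stdout stands in for a real display layer, and the mapping from types to widgets is only illustrative): the application hands over plain data, and the presentation side decides that a bool shows up as a checkbox and a string as a text box.

#include <iostream>
#include <string>
#include <vector>

// The presentation layer chooses the widget from the shape of the data;
// application code never names a window, button, or font.
void render(const std::string& label, bool value) {
    std::cout << "[checkbox] " << label << " = " << (value ? "on" : "off") << "\n";
}
void render(const std::string& label, const std::string& value) {
    std::cout << "[textbox ] " << label << " = \"" << value << "\"\n";
}
void render(const std::string& label, const std::vector<std::string>& value) {
    std::cout << "[list    ] " << label << ":\n";
    for (const auto& item : value) std::cout << "    - " << item << "\n";
}

int main() {
    // The application only describes its content.
    render("subscribe to newsletter", true);
    render("display name", std::string("epsilon"));
    render("groups", std::vector<std::string>{"editors", "admins"});
}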

Since GUI programming seems

Since GUI programming seems one of the best fits for OO methodology, could you tell me how you would program a windowed GUI -- and provide an API for it -- without naming a window, a button, a font...?

I guess you've never programmed in Smalltalk!

There is no such thing as a Window in Smalltalk, at least not as GUI toolkits like Progress OpenEDGE ABL, Microsoft WinForms, Microsoft WPF, Microsoft MFC, Sun Swing, Sun AWT, IBM SWT, Borland whatever-they-called-it, etc. refer to one.

This is amusing, of course, because Smalltalk is considered by most OO people to be one of the best OO languages so far, despite also being one of the first. And yet the design of its GUI toolkit has almost nothing in common with any of the industry toolkits mentioned above.

which smalltalk?

i'm aware of MVCish and Morphicish stuff in the history of Smalltalks. looking at the GUIDevGuide.pdf of VisualWorks, Chapter 3 shows that there are, in fact, window objects. ?!

thanks for any pointers.

As far as I am aware,

Smalltalk-80 does not have an ApplicationWindow class, like VisualWorks Smalltalk does. I would have to consult my copies of Ted Kaehler and Dave Patterson's Smalltalk-80 book A Taste of Smalltalk and Glenn Krasner's Smalltalk-80: Bits of History, Words of Advice book to tell you exactly what the class designs were for the Smalltalk-80 GUI subsystem.

Smalltalk-72 used a turtle class as the primary line drawing mechanism, and a window class for general window management.

But this was gradually de-emphasized after the introduction of the Model-View-Controller application organization (1978). You don't need a Window. That's the point of MVC and its follow-up variations (well, aside from some strange variations, which demonstrate a complete non-understanding of what the M, V, and C stand for responsibility-wise). The point is not to create monolithic Window objects, but to further decompose the responsibilities, so that there is no God object governing a bunch of unrelated stuff, like your application model, I/O, and presentation processor.

It seems that VisualWorks didn't have this metaphor and its clearer division of responsibility. Even the name ApplicationWindow sounds wrong to me.

Some of the GUI toolkits I mention only have these monolithic Window classes to pacify the linker, loader and underlying event system of the operating system... For example, you need to know the history of Windows Graphics to understand why WPF is the way it is. Although WPF is largely DirectX now, there is still the Win32 Message Pump for backward compatibility with GDI/GDI+ applications, and this leaky message pump abstraction is exposed in some places (WPF's own event model, so-called routed events, is completely stupid, but that is another matter). More basically, the Application class, as I understand it, is hardlinked to how the operating system works, including basic stuff like removing unnecessary privileges from the process token.

On a tangent, windows are

On a tangent, windows are very much deprecated for web, phone, tablet, and slate UIs; they just don't work that well. In fact, we are seeing focused, window-free designs propagate back to conventional PC desktops -- it's a very exciting time in UI, as we are finally getting beyond Xerox.

Domain models as typing

I agree it usually doesn't make sense to include a keyboard object - how does one implement a keyboard in software? But it can make sense to model the state of such external entities and use them as phantom types in monadic keyboard interaction functions. Do you count that as domain modeling?
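
In case the phantom-type idea is unfamiliar outside Haskell, here is a rough C++ analogue (the device states and operations are invented purely for illustration): the keyboard's mode exists only in the type parameter, so operations that are meaningless in the current mode simply don't compile.

#include <iostream>

struct Raw {};     // keyboard delivering raw scancodes
struct Cooked {};  // keyboard delivering translated characters

// The Mode parameter never appears in the object's data: a phantom type.
template <typename Mode>
struct Keyboard {};

Keyboard<Cooked> enable_translation(Keyboard<Raw>) { return {}; }

char read_char(Keyboard<Cooked>) { return 'a'; }  // stand-in for real input

int main() {
    Keyboard<Raw> kb;
    // read_char(kb);                     // rejected: kb is still in Raw mode
    Keyboard<Cooked> cooked = enable_translation(kb);
    std::cout << read_char(cooked) << "\n";
}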

I wouldn't qualify a buffer

I wouldn't qualify a buffer of keyboard state as domain modeling because it isn't part of how the client understands the domain or describes requirements.

I would grant the relationship between key events and system behaviors as a form of domain modeling, though (i.e., "If I press Ctrl+Alt+z, do xyzzy"). But it wouldn't really be a domain 'model'.

For full disclosure, I should note that there is one place 'domain modeling' is pretty commonly useful: mock objects for unit testing. One would reasonably model a keyboard if that is what is necessary to support reproducible tests of key event sequences.

Modeling keyboards

Well, this is a thorny issue. There is a lot that goes into properly handling key presses across all countries (I was recently baffled when Stefan Hananberg let me type on his German laptop, and he had to help me figure it out) [edit: and even across operating systems, if you want true Write Once, Run Anywhere virtual machine technology]. The domain model here simply must be able to take these variables as context and map them onto a keyboard model of standard keypresses. In addition, at a higher level, a key-event manager could handle key chords, since the keyboard itself should have no knowledge of the internal timing of key presses -- just imagine, from an automata perspective, the explosion in the model if you tried doing that!

how does one implement a keyboard in software?

you mean like on a smartphone?

According to OOAD

Unless you're specifically dealing with a keyboard as part of your problem domain -- say, e.g., you're developing one with new functionality -- a keyboard is normally never part of your domain model, and most OOAD development methods will encourage you not to model input devices or GUIs as part of the domain model.

A keyboard might be part of the design model, i.e., if you want to elaborate, in an embedded system, how the different components to be implemented work together. But it's a grey area really, since design decisions are up to the architect.

But all in all, it would be very weird if a keyboard ever made it into the domain or design model.

[ Ah heck, this post ended up somewhere random. ]

proof by example

Not surprisingly, people's posts in this thread are all over the place in terms of adding to the discussion. People have understood your question differently, or been intrigued by different aspects of it, so the variation is only to be expected. The question is simply too vague and people's experiences too varied for the discussion to coalesce around a few concrete points where everyone agrees.

Programs are pretty much the same, in the sense that everyone tends to bring a unique perspective to the problem. So if you define objects as being intuitive to mean there is a clear consensus about which objects are needed and how they are related -- like you believe there is for real-world objects -- then you need way more than just objects. You need a precise problem definition and clear criteria to judge between competing solutions; otherwise, there is simply no reason to expect the kind of convergence you desire. To me, placing the blame on objects for not being "intuitive" is simply barking up the wrong tree for the most part.