Coupling of concepts - good or bad

I have a specific problem with OO design and programming, though it is just an instance of a more generic issue.

On one (methodological) hand, objects are thought to represent "real-world entities". On the other (scientific?) hand, objects are mostly "gobs of recursion" (at least, in single-dispatch OOPLs).

Assuming you agree with the previous statements, when you read some code, how do you tell whether a specific object/class was created because the programmer needed "gobs of recursion" (dispatch via "this") or because they wanted to model a "real-world entity"? Should it be documented? Is it important at all? What is your experience? (Mine is pretty limited, as for some reason I always end up developing very abstract systems without any "real-world" meaning :-) )
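For what it's worth, here is a small Python sketch of the two motivations (the class names are invented for illustration). The first class exists because the domain has a thing; the second exists only so that dispatch via "this"/self gives a subclass an overridable step (open recursion):

```python
# "Real-world entity": the class exists because the domain contains
# something called an account, not because we need dispatch.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount


# "Gob of recursion": the class exists so that self-dispatch lets a
# subclass redirect one step of an algorithm.
class Formatter:
    def render(self, items):
        # Calls self.render_item, so subclasses can override this step.
        return ", ".join(self.render_item(i) for i in items)

    def render_item(self, item):
        return str(item)


class QuotedFormatter(Formatter):
    def render_item(self, item):
        return f'"{item}"'
```

Reading the code alone, nothing distinguishes the two cases; only the names and documentation do.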

I feel I should not post this to the "objective scientific" thread.


I think what's important is t

I think what's important is to document why the class model is what it is, i.e., to explain why a certain design was chosen over others.

There is a philosophical side

There is a philosophical side to your post. REAL objects by definition ought to be "in time" and "now". Such objects are really collections of variables that characterize the NOW. The functions defined on the object are actions on the real world and return some result of that action. Since we are talking REAL here, the actions are not simply determinate state changes. A scientific view is a concept of the real which is always true (i.e., not NOW), which is to say it is analytic and doesn't require variables: the material of functional or logic programming.

My take on this is that the REAL (i.e., imperative) way of thinking that we all use in the world is often used to represent the processing of concepts. We think about the steps involved in getting the answer. This might be practical, but it treats the analytic, functional material as if it were imperative. A "pure" way of programming would not mix up imperative and functional this way (e.g., Haskell). Am I rambling again, or does this make any sense?

Edit: An example of treating an analytic problem as if it were imperative is solving a dynamic problem using a determinate state variable. Engineers would have trouble getting along without state variables!
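A toy illustration of that contrast (in Python rather than Haskell, and purely as a sketch): the same analytic quantity computed by stepping a state variable "in time", versus stated as a timeless closed form.

```python
def sum_imperative(n):
    # Determinate state variable, updated step by step ("in time").
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


def sum_analytic(n):
    # The same quantity as a variable-free formula, always true.
    return n * (n + 1) // 2
```

Both return the same answer; only the first one has a NOW.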

The Imperative Perspective

I don't see objects as models of entities or tools for recursion. I see them as capsules of state and/or behavior. As such, they *may* model real entities, or they may model an abstract feature. If the object is publicly-facing, I expect it to model something that the end-user wants, and more typically something "concrete". If the object is an implementation detail, I don't expect anything of it. Thus, I don't feel a programmer is obligated to document every detail of an implementation, nor does it matter, unless they used a very non-obvious approach.

How do you do real world objects?

I don't understand how you can write a program where objects represent real-world objects. When was the last time you went to the kitchen, picked up a vector, and ate the third element? There's just not much resemblance between the computer world and the real world, and we have to think abstractly to be able to program. I usually think of objects as data structures with behavior and program as such, but capsules of recursion could also work.

Think about a program to cont

Think about a program to control an engineering system. There are real objects there and real actions. Don't think that because you work in a computer world you can ignore external things. Programs ultimately live and survive in some real environment.

Ok, but that's a really small

Ok, but that's a really small subset of problems. Most of the things we deal with in computers exist only in computers. Anyway, it would probably take many classes and objects to model one object in the real world, not just a single OO object representing a single real object. Maybe one of those composite objects will stand for the real one, but most of them will be too abstract.

Well there is a difference be

Well, there is a difference between engineering and computer science. But it is still worth knowing about the other side. Suppose there is a thermostat and a smoke detector hooked up to your computer world. If it gets too hot or smoky, the computer might conclude that it is time to call the fire department, back up critical data to a remote site, etc. Suppose there is a user logging on to your computer who exhibits unusual file-searching behavior? We could go on and on...

Real world analogies

Software is abstract, so real-world analogies can only be seen as a way to conceptualize a particular problem domain. As with all analogies, they tend to break down when you examine them under a microscope. That doesn't mean analogies can't be useful as a guideline for understanding intricacy.

Common confusion of levels in OO practice

It is certainly the case that any large-scale OO project will have some classes that are "business objects", by which I mean that you can talk to non-technical end-users about them, and some classes that are "computer-science objects", which form the infrastructure of your running program and which that same non-technical end-user has neither the capacity nor the need to understand. The original OO hype that everything could be handled in business objects turned out to be overly optimistic. This is just par for the course, as it's become pretty clear that putting all functionality in either business objects or computer-science objects will result in brittle designs, miscommunication, and general ugliness. Handled poorly, this business/computer-science dichotomy can easily result in horrible designs. I know of no hard-and-fast rules, but here's some hard-learned experience.

1) Segregate, segregate, segregate. To as great an extent as possible, business logic and computer-science infrastructure should be in separate classes, ideally in separate packages or modules (if the language supports such constructs). Document the segregation, and if possible use static analyses to enforce it.

2) If a business class exhibits any pattern use other than an accept() method for a Visitor interface, it's probably too closely coupled to its computer-science infrastructure, and you'll lose the communications benefits that having business objects allows.

3) Business objects should be passive, and never assume control of the flow of computation from computer-science objects. Down the other path lies madness. Computer-science objects exist largely to orchestrate control-flow and data-flow amongst business objects.

4) If a computer-science class uses any data structure beyond a simple hashtable, you're probably missing a business object that could profitably be exposed.

5) Don't be afraid of stateless business "objects". Services, rules and (occasionally) triggers can be perfectly reasonable business constructs.
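Points 1-3 can be sketched roughly as follows (a hypothetical Python example; the names Invoice and TotalingVisitor are invented). The business class holds only domain data plus an accept() hook, while a computer-science object drives all control flow:

```python
class Invoice:
    """Business object: domain data plus a Visitor hook, no control flow."""

    def __init__(self, amount):
        self.amount = amount

    def accept(self, visitor):
        # The only "pattern" coupling the business class carries.
        return visitor.visit_invoice(self)


class TotalingVisitor:
    """Computer-science object: orchestrates iteration and accumulation."""

    def __init__(self):
        self.total = 0

    def visit_invoice(self, invoice):
        self.total += invoice.amount


def total_due(invoices):
    visitor = TotalingVisitor()
    for invoice in invoices:  # control flow lives outside the business class
        invoice.accept(visitor)
    return visitor.total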

There, that should be enough to stir up some controversy on this thread...

Excellent summary

Actually, I expected to hear something like this. In my practice (however limited), "good" designs contain classes that establish some kind of infrastructure/substrate/abstract machine, and other classes that utilize it. I wonder, is it productive to view this as a special case of DSLs?

"Real-world entities"?

On one (methodological) hand, objects are thought to represent "real-world entities".

Not in my programs, they don't. The vast majority of objects are solution-domain abstractions; they are "objects" because they have technical language properties like subtype polymorphism that are useful at the implementation level.

(This is despite the fact that I mostly work on embedded systems that are controlling real-world hardware.)
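To make "solution-domain abstraction" concrete, here is a hypothetical Python sketch (the Backoff classes are invented). Neither class models anything an end-user would recognize; they exist only because subtype polymorphism lets callers swap behavior at the implementation level:

```python
class Backoff:
    """Abstract retry-delay policy: a solution-domain abstraction."""

    def delay(self, attempt):
        raise NotImplementedError


class FixedBackoff(Backoff):
    def __init__(self, seconds):
        self.seconds = seconds

    def delay(self, attempt):
        return self.seconds


class ExponentialBackoff(Backoff):
    def __init__(self, base):
        self.base = base

    def delay(self, attempt):
        # Doubles the base delay on each attempt.
        return self.base * (2 ** attempt)
```

Any code written against Backoff works with either policy, which is the whole point of making them "objects" here.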

I've never really understood what some people are going on about when they talk about OO programs (specifically) "modelling the real world". They do so no more and no less than programs in any other paradigm. These people also tend to have a habit of anthropomorphising objects, ascribing goals to them and using pronouns like "my" etc. (referring to an object) in documentation. On that topic I'm with Dijkstra:

Is anthropomorphic thinking bad? Well, it is certainly no good in the sense that it does not help.

(Previous discussion on LtU here.)