OOP Is Much Better in Theory Than in Practice

A critique of OOP. The article is about OOP as an SE/design approach and doesn't directly attack the issue from a PL angle, but it might still interest LtU readers.

From a PL point of view, I would have changed the title to: OOP Is Much Better in Theory Than in Practice (And the Theory Isn't Too Good Anyway).

I disagree

I think the real topic of the article is how practice is different from theory, and how crusty old practitioners like the author know better than mandarin professor-theorists up in their ivory towers. Practice and theory are here social categories more than anything else: practice is what horny-handed sons of toil get up to in the real world, while theory is a quaint perversion indulged in by cosseted smart alecs without pressing demands on their time.

In truth, OOP has not been over-praised by theorists nearly so much as it has been over-marketed by tool vendors.

I had to smirk a little at the author's preferred alternative: procedure-oriented programming, to be supplanted at some unspecified point in the future by clever computers that can understand natural language...

Even more bizarre

"I think the real topic of the article is how practice is different from theory, and how crusty old practitioners like the author know better than mandarin professor-theorists up in their ivory towers."

Even more bizarre, it's about how "amateurs working alone" like the author know better than "professionals and academics".

Was the article against OOP or against C++?

Amateurs, indeed

I gave up reading when I got to the statement, "most programming is done by individuals".

True

In truth, OOP has not been over-praised by theorists nearly so much as it has been over-marketed by tool vendors.

That's true. "Theory" in this context is not "mathematical theory" but rather "theory" as in "a theory untested in real life".

OOP reflects the problem at hand

The major advantage of OOP is that it allows one to map real-life problems to a design and then an implementation. Other programming methodologies (including procedural languages) require a certain amount of ingenuity on the part of the designer/programmer. This guy seems to acknowledge this fact re: GUIs but completely fails to recognise thousands of other problem domains where OOP produces a clean, fast solution. I think it is a very poorly thought-out article, which ignores practically every good aspect of OOP as a problem-solving tool. I also do not believe that after 10 years people "still don't get OOP". If they really don't -- they are in the wrong job. ;-)

As far as Ehud's gripe about the theory aspect goes, I'm not 100% sure what you meant. Do you mean the formalisms are clumsy compared to FP? My view is again to do with it being a reflection of real-life problems first, and a formalism second. Retrofitting a formalism to a framework for dealing with "things that have properties and can do stuff to each other" was never going to produce something as clean as, say, ML. It does, however, map better on to real problems.

Invisible ingenuity

Whatever the mapping between a problem domain and a design, it's always a mapping and always requires some ingenuity.

One of the biggest ways in which OOP has been oversold is the notion that OOP mappings are somehow more natural or intuitive than other mappings. That depends entirely on what intuitions you have acquired!

If all you know and practice is OOP, your intuitions will most likely be OOP intuitions and OOP solutions will feel correspondingly more intuitive and less ingenious. One of the advantages of trying out different models is that if one perseveres with them one can begin to acquire new intuitions.

For instance, although I still program in an OOP style most of the time, since I started dabbling more frequently in Haskell there have been more and more occasions when I've found myself wishing I could write a nice reduce foo n . map bar to transform some data set - it seems so much more natural to do it that way (and Google agrees!).
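
To make that concrete, here's a toy version of such a pipeline. Since foo, bar and n aren't pinned down above, the meanings below are invented purely for illustration:

    -- Hypothetical stand-ins: bar doubles each element, foo sums, n is 0.
    transform :: [Int] -> Int
    transform = foldr foo n . map bar
      where
        bar = (* 2)   -- per-element transformation
        foo = (+)     -- reduction step
        n   = 0       -- initial accumulator

    -- transform [1, 2, 3]  ==>  12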

Complaints that this approach is somehow more complicated or difficult to understand seem totally nonsensical to me...

Ingenuity is indeed intangible

I mostly agree with you. I also spend nearly 100% of my time in the OOP world and on many occasions I do find that the odd functional concept would make my program easier to read and require far less time and typing. However, I would like to reiterate my view that OOP is the closest programming approach to real problems most of the time.

My belief stems from the propensity of people to see systems as a collection of things with certain properties that interact in certain ways, rather than a bunch of functions called in a certain sequence on an input. True, you can acquire the skill to see things this way with practice, but the catch with OO is that one requires minimal training to grasp the idea of objects, as they are so close to real life conceptually. Grasping the idea of functional programming isn't more difficult at a basic stage (i.e. functions as first-class citizens, polymorphism etc), but the intuition stops abruptly when you try to get people to design a solution to a complex problem (e.g. an online bookshop). With OO, a person is likely to think "Well, I have a bunch of Books that have Authors, which will be purchased by Customers who have creditCardNumbers", whereas what does a normal human being think when you ask them to produce a functional solution?

Perhaps I have fallen into the trap of being one of those who can no longer think in a non-OO way. I do not discount this as a possibility. The OO way is a nice comfort zone and with practice showing that it does the job, it might be hard to escape. Please feel free to recommend books/resources that demonstrate how "real" software is designed in other methodologies, particularly FP. Then we'll be able to see how close those concepts are to the average problem at hand.

OO or relational?

What you've just described sounds more like a relational/first-order logic model than anything specifically OO, e.g. Author(person,book) etc. You can encode this into a functional language easily enough (e.g. as a set of tuples, or as a boolean function to indicate set inclusion). In fact, if you take "object" as meaning just a thing with a bunch of attributes, then you can easily model that as a key and a bunch of relations. For me, it's the inheritance aspects of OO which are distinguishing features. But you can encode those into a functional/relational language too.
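
A minimal Haskell sketch of that encoding, with names invented for illustration:

    type Person = String
    type Book   = String

    -- The Author relation as a set of tuples...
    authorRel :: [(Person, Book)]
    authorRel = [("Abelson", "SICP"), ("Sussman", "SICP"), ("Okasaki", "PFDS")]

    -- ...and as a boolean function indicating set inclusion.
    author :: Person -> Book -> Bool
    author p b = (p, b) `elem` authorRel

    -- A query: everyone who wrote SICP.
    sicpAuthors :: [Person]
    sicpAuthors = [p | (p, b) <- authorRel, b == "SICP"]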

Can you justify your statements?

Are you asserting that OOP is suitable for more than GUIs, or that OOP is suitable for most real-life problems?

If the former, you're attacking a strawman. The article stated that OOP is often a useful paradigm, just not enough to justify excluding other alternatives. It's aimed at those people who claim that OOP is great as a one-size-fits-all paradigm.

If you're making the latter, stronger claim, you're going to have to justify it.

Note that I disagree with many points of the article, which is written from a rather mainstream perspective. I'm fairly certain that many people on LtU will agree that in order to defend OOP, you have to defend the need for subclass polymorphism and dynamic dispatch specifically.

Data abstraction and encapsulation are entirely independent of OOP, the typical mainstream class facility is just one way of implementing them.

In my experience, "OO" programs with shallow class hierarchies are usually only making use of the class facility for abstraction and encapsulation. Programs with deep class hierarchies are often unnecessarily complex and inefficient.

Can you provide examples of real world problems where deep class hierarchies are a useful way of representing things?

What is OOP?

Note that I disagree with many points of the article, which is written from a rather mainstream perspective. I'm fairly certain that many people on LtU will agree that in order to defend OOP, you have to defend the need for subclass polymorphism and dynamic dispatch specifically.

Are classes necessary for OOP? Isn't the existence of other approaches like prototypes, "lambda-objects" (e.g. the E language), structural subtyping, etc. evidence that they aren't necessary? IMO classes are the wrong tool to solve a fuzzy problem.

Classes aren't necessary...

But I assume that they are something implied by most OOP advocates.

Rather than try to define "what is essentially OOP" (definitions vary depending on who you ask), I was trying to exclude commonly misattributed features that are definitely not in any way unique to OO languages.

OO Formalisms

Retrofitting a formalism to a framework for dealing with "things that have properties and can do stuff to each other" was never going to produce something as clean as, say, ML. It does, however, map better on to real problems.

There are some formalisms that can be applied to OO that are clean and deal with things that have properties and can do stuff to each other. Bart Jacobs' papers on coalgebraic specification of classes come to mind (e.g. The Coalgebraic Class Specification Language CCSL, Coalgebras in Specification and Verification for Object-Oriented Languages, Objects and classes, co-algebraically).

OO language designers usually ignore formalisms and just use ad-hoc solutions, instead of going the FPL way: starting with a basic calculus (e.g. sigma calculus, opulus) and extending it with theoretically proven features (e.g. type classes, functors, uniqueness). A formal treatment of the features available in OO languages like Java, Eiffel or Smalltalk is available at The Theory of Classification (this is the latest column; earlier columns introduced the basic formal ideas used in this one).

IMHO such approaches are as clean as ML. The problem isn't a missing formalism, but clueless language designers.

Then it Would Be Easy

If OOP so closely matched the real problem domain, then it would be easy. The fact that "after 10 years, people still don't get OOP" is not necessarily an indication of programmer quality (at least, not to as great an extent as many say). Rather, I think it points to a mismatch between OOP's stated advantages and its real-world use.

I think OOP is very useful when you have a designer that is good at mapping the real-world to OOP solutions (and further knowing where OOP is going to get in the way, for both large and small problems).

Old tapes

I wonder if this is a programmer issue. The strength of the languages that most people use is that they are, for the most part, multi-paradigm. Also for the most part, there are only two paradigms supported (OOP and procedural). Multi-paradigm is good, because one size does not fit all. But it's bad because it discourages people from learning OOP properly.

I often wonder if OOP would be more successful if programmers were forced to spend a year solving problems in languages like Smalltalk, Eiffel and Erlang; the idea being that your C++ would look a lot less like C when you were done.

Switching Concepts

Pseudonym: I often wonder if OOP would be more successful if programmers were forced to spend a year solving problems in languages like Smalltalk, Eiffel and Erlang; the idea being that your C++ would look a lot less like C when you were done.

Another good reason not to do education using whatever language happens to be popular today, and I definitely recommend CTM and Oz for the same reason you're suggesting Smalltalk, Eiffel, and Erlang. Working familiarity with many different concepts leads to better implementations in whatever language is ultimately used.

Parable of the Elephant and the Blind Men

An OOP model often describes a limited and artificial view of a large system. The OOP programmer is like one of the blind men in the Story of the Elephant and the Blind Men; he sees the system from only the perspective of a particular OOP model. But different OOP models may characterise a system differently. These different OOP models can be inconsistent.

In contrast a logic programmer is like a full-sighted man and can see and work on any aspect of the system.

The logical/relational paradigm is established and dominant in database technology. It will be at least another 10 years before the same occurs for software development.

Robert Kowalski has published an online book How to be Artificially Intelligent – the Logical Way. In particular Chapter 10 Logic and Objects discusses the relationship of logic programming to OOP.

Completely off-topic

I started reading the online form of Robert Kowalski's book and also looked at the short article about the Wason Selection Test. I find the following excerpt mildly amusing as well as instructive:

The negation of the concept "the number is 3", however, has no natural positive representation. Worse than that, as we will see later, if we are given as input an observation that "the number is 7", it is difficult to derive the conclusion that "the number is not 3". This is because we could equally well derive that "the number is not 1", "the number is not 2", etc. It is only because we are asked to test a rule referring to the number 3, that we are able to make the mental leap from "7" to "not 3".

(My emphasis)

OOP and its real-world mapping

Isn't the mapping of OOP to real-world objects a little overrated? Consider design patterns, which are by definition supposed to be commonly seen in OOP. What exactly are say, visitor objects in the real world? Are they any less abstract than corresponding solutions in other paradigms? Ditto for controller objects in MVC.

Sure, it's easy to identify the concrete objects early on, but orchestrating them together in a solution is not at all intuitive and it requires thinking just as abstract as in any other paradigm, if not more so since you chose to see everything early on as only objects and nothing but.

Not quite...

The major advantage of OOP is that it allows one to map real-life problems to a design and then an implementation.

No, you're talking about OOAD (Object Oriented Analysis and Design).

I would argue that historically, OOAD is indeed a strength of OOP. One of the reasons why OOP became as popular as it did is that it came with an analysis and design methodology such that the solution space (i.e. the language you're implementing in) and the design space (i.e. what your design methodology created) were extremely close. Interestingly, it's also the tight integration of OOAD and OOP which has led to the over-hype by tools vendors.

OOAD doesn't reflect "real life problems" any better or worse than any other big-M Methodology. What it does do, though, is break a problem down into schedulable entities. Even if it doesn't model the problem very well, it does model the solution in a way that managers can hand out work assignments. "You write this object and you write this object." That is a strength, of sorts.

Not sure the author knows what he is talking about

I think there are many arguments against OOP, but I am not sure whether the author really knows what he is talking about.

E.g., "Hiding code from oneself just seems weird." - I thought this debate was finally closed 40 years ago in the 1960s by Parnas and Dijkstra. Also, information hiding is not specific to OOP.

I think the main point of OOP is that it provides a very intuitive metaphor of how to structure your code. Subclassing, late binding, subtype polymorphism, etc., are all only technical artefacts of this idea.

Maybe the author has not worked in large scale projects.

Having worked in both procedural and object-oriented environments, I can say one thing: if the programming language does not force everyone to think along parallel lines, then the project will fail. Consequently, the OOP style is very important, because it forces people to think about the problem to be solved in an abstracted way.

Let me give you some examples of what I have worked on (I am employed in the defense applications sector):

  1. a network packet sniffer that can present the data in a structured manner, according to the protocol used. It was done in pure C, with procedural programming.
  2. an airborne situation simulator for helping test a radar. It was written in C++ in an OOP style.
  3. a radar console. It was written in Java in an OOP style.

All of these projects needed some evolutions at some point in time. This is what happened:

  1. the packet sniffer application was totally re-written in Java, because new forms of network protocols could not be easily handled. With the current version in Java, adding new protocols is as easy as adding a new .jar file in a specific directory.
  2. the simulator application was kept in C++, but it was a major pain in the *** to add new simulated systems because, although the project was in OOP, the OOP style was violated many times by the developers: it was easier for them to do something quick and dirty in C procedural style, but that could not be changed easily.
  3. The radar console's network protocol changed several times, but the change was minimal: it was only the message handlers and the message classes that were changed. It took more to design the changes into the application, but the OOP style made the changes very easy to do (and with much less cost).

Of course, nothing is a panacea. If one designs the code with too many abstractions, it is easy to lose track. On the other hand, if one does not make an abstraction, one may fall into the trap of not being able to extend/modify the system easily. Whatever the situation, OOP certainly has benefits though.

You aren't talking about OOP

You're talking about having a restricted enough language to force programmers to use the same approach for every problem. In this case, you're also confusing OOP with data abstraction and encapsulation, because a popular "OO" language (Java) happens to use the same mechanism for both while being crippled enough that it doesn't really support anything else.

Using a restricted language may be necessary when you have teams of bad or lazy programmers, but when you're using the same approach for every problem, it'll also very often be the wrong tool for the job.

There are far too few programmers who are good at using different kinds of methods of abstraction, but those programmers are likely to be far more productive given the chance to use powerful tools than the lazy ones that require crippled languages in order to produce maintainable code.

I do agree with you (and disagree with the author of the article) that the traditional C/C++ procedural approach is usually a bad idea for anything non-trivial, but there are far more alternatives than OOP vs. procedural.

But the state of programming is never going to improve if people think that teaching programmers only one way to approach problems is a good idea. You'll only end up with bad programmers. That shouldn't be the goal. Something like this should be.

Or this

Or, indeed, this.

I am talking about two things

I am talking about two things:

1) OOP being really useful.
2) it's better for a project if discipline is enforced rather than suggested.

I agree that good programmers will be far more productive when given powerful tools. For example, I prefer C++ because I like templates, and I am much more productive with templates than with Java.

But how often is it guaranteed that the same good programmer will always be available to maintain a project? In the real world, a programmer being too good is usually a problem: the others can't follow him, and he can't be replaced.

As for 'far more alternatives' to OOP/procedural, would you like to provide some links? I only know about one other real alternative: functional programming.

The real world

For what other professions is it true to say that in the real world, an <x> being too good is usually a problem: the others can't follow him, and he can't be replaced?

"Too good"

I've heard that this is a problem with musicianship, where someone with too much talent can unbalance a band and is likely to leave for better offers quickly.

Personally, I prefer to call programmers like the ones described as "too clever" rather than "too good". There are plenty of good programmers who realize that code which may be a clever solution to a tactical problem can easily create strategic problems that are worse. The keys are whether your "clever solution" obscures or illuminates design intent, and whether the programmer views coding as a solo performance or an orchestral collaboration.

Illumination

I think this is a property of really good programmers: that their solutions make it easier for others to proceed, because they clarify the domain model and provide pertinent abstractions for general use.

The advanced language features that are often argued against on strategic grounds are often the features that you need to do this: they support library writers and creators of new frameworks, and may not be really suited for ad hoc use in everyday code (the scripting layer at the top of the stack). The trouble is, if you don't allow anyone access to those features, you force everyone to muddle along without the benefits that a tastefully-chosen set of macros or decorators or custom attributes or what-have-you could provide.

A really good bass player does nifty things that nobody notices directly - the odd passing note, the off-the-beat accent here or there - that provide the underpinnings of a band's whole groove. Does that have anything to do with programming? I guess it can if you want it to. ;)

Other alternatives.

Logic/Declarative is more widely used than acknowledged. But in terms of real world, the Relational model is perhaps the most pervasive. (and no, the relational model is not the same thing as either OO or Procedural - which fall under the imperative programming umbrella).

I don't think logic/declarati

I don't think logic/declarative/relational are alternatives to imperative programming. Would you care to mention an example of an algorithm that can be expressed with those 'paradigms'?

Fairly pervasive

Well, declarative-style programming is used in most of the spreadsheet programs in the world. And relational databases contain a lot of the data of the world.

But I guess the bigger question is what you mean by algorithm. And does that automatically assume a procedural aspect? An algorithm can either be looked at as "a step-by-step procedure for solving a problem or accomplishing some end especially by a computer", which makes it a self-fulfilling prophecy that the procedural is the only method capable of producing a "procedure".

Or you can look at an algorithm not in terms of how it accomplishes its goals, but rather as a timeless transformation of some input to some output. In that sense, algorithms can be expressed in non-procedural terms: functional, relational, or any number of other logics/maths.

Functional programming is not

Functional programming is nothing more than imperative without assignments. In other words, the programmer is not allowed to change the state of a variable, because the change might create bugs. FP does not trust the programmer. Other than that, it is a procedure-oriented paradigm: a function is a computation step amongst other computation steps. Since FP has steps, it is procedural, i.e. the flow of computations is coded by the programmer. Of course FP has many other properties that are not directly related to the main concept; for example, pattern matching, closures, etc., which exist in other non-FP environments. The main concept of functional programming can be easily applied to any other imperative language; for example, try coding in C++ without assignments, without classes and with every parameter as const. It's the same thing.

The declarative thing is not programming at all. All that it is about is declaring things. In spreadsheets, you don't do programming; you simply declare the formulas to be used and the range to be applied to. The actual program is already there for you, in the form of built-in functions. It is only when you make a new function that you do some sort of programming. But that is not declarative, it is procedural, because it involves logic and assignments (hidden from the user, though). Take the SUM function, for example: it iterates over a range of cells, adding the contents to a variable. That's procedural programming: it is a loop over a range, an addition and an assignment. Although the user doesn't deal with that in a spreadsheet.

The relational model is an information organization model and not a programming paradigm. It does not contain logic, nor does it allow for doing algorithms. It describes the organization of data.

the programmer is not allowed

the programmer is not allowed to change the state of a variable, because the change might create bugs.

I suppose referential transparency does have the goal of reducing the number of bugs by minimizing side effects. But I'd say that's not what functional programming is about. FP, put simply, is concerned with the manipulation of functions - i.e. higher-order programming or first-class functions. I think you confuse a constraint of FP (no side effects) with the goal of FP.


<conjecture> What if, by some interesting coincidence, the programs you downloaded could also be considered a piece of data? How would you know that a program does what it says it would do? How would you know that it won't infect your computer with a virus? Would you base your conjecture solely on trust? That is, you have faith in the program simply because it can be proven to have been written by some reputable source (such as Microsoft)? So here we have the most objective analysis engines in the known universe, and we load stuff into them simply based on faith (or, if you prefer, trial and error).

</conjecture>

The declarative thing is not programming at all. All that it is about is declaring things.

Sounds like the goalpost-moving debate about intelligence. Machines aren't intelligent because they are incapable of handling abstract concepts? OK, you can't get any more abstract than numbers - which, last I heard, computers were better at manipulating than humans....


Anyhow, the more mundane question is what is programming? In my estimation it is the use of computers to perform various transformations. Transformation of raw data into meaningful information. Transformation of keystrokes into bits. Transformation of bits into visually displayed patterns. Etc. Why is it that the person that uses a spreadsheet is not considered to be using a computer language? Surely you can't deny they are using a computer? And surely there must be some form of communication between the user and that computer. And surely where we have communication, we must infer that there is some form of language involved between the two communicants.


Some of the earliest programmers were concerned with little more than ballistics. That sort of programming is a cinch now with a calculator or spreadsheet. At what point did it stop being a matter of programming, and become grunt work?

The relational model is an information organization model and not a programming paradigm.

The relational model is not concerned with how data is physically laid out. It is concerned with the relationship, insertion, and extraction of tuples. These tuples (just like the spreadsheets) could contain integers, strings, or any kind of data for that matter (hint: objects and functions). Anyhow, if you can transform tuples in various ways, you can achieve quite a few algorithms (hint: I do a lot with trees). Heck, Lisp gets by with the far simpler concept of manipulating lists, and tuples are a step above that.


I suppose you mean that you can't do recursion with the relational model, meaning that it's not Turing complete. Still, you'd be amazed how much stuff can get done with an RDBMS.

Lists and tuples

Heck, Lisp gets by with the far simpler concept of manipulating lists, and tuples are a step above that.

? I have a hard time seeing this, since Lisp explicitly builds its lists up out of 2-tuples.
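
Spelled out as a sketch (a Haskell rendering, names mine):

    -- A list is either empty or a 2-tuple of (first element, rest of list).
    data List a = Nil | Cons (a, List a)

    oneTwoThree :: List Int
    oneTwoThree = Cons (1, Cons (2, Cons (3, Nil)))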

Well, that's an implementation detail.

Anyhow, tuples are quite powerful in their own right. :-)

I think you confuse a constra

I think you confuse a constraint of FP (no side effects) with the goal of FP

But without that constraint, it would not be possible to have FP. There are imperative programming languages where functions are first class citizens.

Still, you'd be amazed how much stuff can get done with an RDBMS

Would you care to provide some examples?

But without that constraint,

But without that constraint, it would not be possible to have FP. There are imperative programming languages where functions are first class citizens.

Without state, imperative programming can be cumbersome. But does that mean that the goal of imperative programs is to use as much state as possible?

And you can do FP in imperative languages (just like you can do OOP in practically any PL). Or as the old adage goes, you can write Fortran in any language.


Without delving too far into some of the possibilities, let's just refer to the FP FAQ:

Functional programming is a style of programming that emphasizes the evaluation of expressions, rather than execution of commands. The expressions in these languages are formed by using functions to combine basic values. A functional language is a language that supports and encourages programming in a functional style.

Or if you prefer the more homespun sayings of Larry Wall:

We know from experience that computer languages differ not so much in what they make possible, but in what they make easy

Anyhow, you'll find a lot of ideas from FP have crept into more imperative languages as well as vice versa.

Would you care to provide some examples?

Well, since I just ran a query, how about:

SELECT SUM(StorageSpace), ClientID FROM Archive GROUP BY ClientID

But seriously, are you questioning whether the relational model exists in the real world? Whether it is useful in the real world? Or whether it is a programming language?


If it's one of the first two, you need to get out more. If it's the third, then tell me why it is not considered a programming language.


Also, for reference we have discussed spreadsheets as languages in the past on LtU: here and here. Most telling was the statement that:

Design of a programming language is a Human-Computer Interaction (HCI) problem

Too many assume that the PL of their discipline is the only thing that exists. Indeed, there are a multitude of programming languages that exist for communicating with a computer. And more optimistically, there have to be more and better ways than assembly or the slate of procedural or OO languages that are currently at our disposal. (Unless, of course, you believe that faith-based usage of computers is here to stay. Of course, I always find it ironic that the only thing that computers can't check is the programmers - the rest of the world constantly has their work overseen by automation.)

Prolog

It is a rather common language in academia and uses logic. Search for "prolog tutorial" to see some algorithms. Other languages based on logic are Mercury and Oz.

Software is written by people not tools.

But how often is it guaranteed that the same good programmer will always be available to maintain a project? In the real world, a programmer being too good is usually a problem: the others can't follow him, and he can't be replaced.

This is only a problem if managers value tools/processes/languages/paradigms more than people. At the end of the day software is written by good people (as Brooks noted decades ago), not by bad programmers with buzzword-compliant tools.

Well, it is not: maybe a good

Well, it is not: maybe a good programmer decides that he wants to work on something else; maybe a good programmer gets ill, or even dies in a car accident. Maybe a good programmer is smug and not co-operative enough; maybe a project stalls because it takes too long for all the other (average) programmers to catch up.

For me, software is written by people following established procedures. From my experience, it is much more important for a company to be able to quickly implement, maintain and evolve a project, rather than code a perfect solution that no one but the original programmer knows how to handle. It is no coincidence that very advanced designs and tools are considered as a risk by managers.

Definition of "good" programmer

You seem to be under the impression that "good" programmers write unmaintainable code. That they use their fancy skills to willfully obfuscate their code. Maybe some do. Perhaps it's a useful strategy for ensuring "lock-in" and a job for life! I don't think that's really the case though. "Good" programmers create useful abstractions that make the code easier to understand and maintain. If they're creating bizarre, contorted, unfathomable tangles then they're either not a "good" programmer, or likely trying to work around some restrictive limitation in the language they were given (like only being able to form abstractions through class hierarchies).

I am not talking about unmain

I am not talking about unmaintainable code, but about code that is very clever, and therefore difficult for the majority of the other programmers to understand.

But this is another discussion, and I am going offtopic.

The false dichotomy / unspoken assumption here

is that "bad" or "average" programmers using crippled tools and languages produce "useable, understandable, maintainable" code.

I've not seen this in practice. (I consider myself an average programmer. I write crap code in Perl, C, and C++ equally well.)

In my experience, (forced-usage OR crippled-language) + (average programmer) code is just as hard to read, sometimes harder, than great-programmer code.

I've known a couple of really good programmers and much of their code has been easier to understand than most other code (sometimes my own!).

In terms of object oriented design & development I keep coming back to what the books say.

A typical "how to make your company succeed with OO" book will tell you in the first chapter that OO is easy & intuitive.

Later it tells you the programmers will have a major drop in productivity lasting from months to years.

Also later it tells you to hire several decades-experienced gurus for each project. This kind of puts the lie to the 'intuitivity' theory.

Re: I am talking about two things

1. I'd say it's often somewhat useful. Abstraction and encapsulation are really useful, but they're not specific to OOP.

2. Discipline can be enforced by means other than language choice, such as requiring all inter-component interfaces to be documented and accepted by the project lead. Rigorously defined programs can be written in multi-paradigm languages (consider that Ada is a multi-paradigm language).

As for alternatives, I'm not just talking about named paradigms, but various features and techniques that may or may not be part of any given language categorized as "functional", "OO", "concurrent" or anything else. You mention one - C++ templates.

The combination of closures and mutable state is powerful, as it allows you to trivially build several other abstractions (laziness, many forms of "OO") even if the language doesn't have them.
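
A tiny sketch of the idea, in Haskell for concreteness (names are mine): a "counter object" built from nothing but a closure over a piece of mutable state, with no class construct in sight.

    import Data.IORef

    newCounter :: IO (IO Int)   -- returns an "increment-and-read" action
    newCounter = do
      ref <- newIORef 0         -- the private, mutable state
      return $ do               -- the closure that captures it
        modifyIORef ref (+ 1)
        readIORef ref

    main :: IO ()
    main = do
      tick <- newCounter
      tick >>= print   -- 1
      tick >>= print   -- 2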

First-class continuations are extremely powerful, also particularly in the presence of mutable state, although they're probably not something that should be used directly in most programs; but they can be used to construct control flow abstractions used to write the program.

Indeed, one of the most powerful programming techniques, regardless of specifics, is building your own abstraction methods to write your program with.

There's also a lot more to OOP than is possible in C++ or Java. Consider BETA or CLOS, which are neither similar to one another nor to C++/Java.

OOP is nothing more than proc

OOP is nothing more than procedural programming organized in a different way. What OOP says, essentially, is to bind code together with data. The real problem with procedural programming is that it does not enforce organization of code. Disciplined programmers or strict procedures may alleviate the problem; otherwise, spaghetti code is easily produced.

After all these years of programming and thinking about programming languages, all I can say is that there is only one programming paradigm: the Turing machine. In other words, when we make a program, we arrange instructions to be executed one after the other. OOP, FP, procedural, etc. are all disguises of the Turing model, and they exist in order to solve specific problems. For example, OOP solves the big-project organization problem (that's why studies show success in big projects). FP solves the problem of undesired state change. But all these are simply models for laying instructions one after the other. There has never been any really different paradigm that puts the programmer one step closer to making bug-free programs.

In the process of evolution of programming languages, there have been successes in several areas, including OOP. Java solved many C++ problems, like the lack of standard libraries for many usual tasks, the header file problems, the write-once-run-everywhere thing etc. But the successes of Java were wrongly attributed to OOP: Java is good not because it is object-oriented, but because it solves many other problems in a beautiful way. Imagine a C++ with a garbage collector, bytecode and a library to include gui/dbs/etc! It would seriously kick Java's butt! So it is not OOP that is good, but the other properties of it. OOP has been overestimated, to the point that today's frameworks are seriously complicated. I think the original article wanted to give emphasis to that, but it went to the other edge, even declaring 'copy-and-paste' a good technique for software re-use.

Just one question...

Do you have any real world experience in non-mainstream programming languages?

Er, why do you ask?

Er, why do you ask?

PROC is nothing more than fun

After all these years of programming and thinking about programming languages, all I can say is that there is only one programming paradigm: the Turing machine.

After all these years of programming and thinking about programming languages, all I can say is that there is only one programming paradigm: the Lambda Calculus.

Don't forget interactive programs

After all these years of programming and thinking about programming languages, all I can say is that there is only one programming paradigm: the Turing machine.
After all these years of programming and thinking about programming languages, all I can say is that there is only one programming paradigm: the Lambda Calculus.
You mean neither of you has ever written any interactive application?

Regarding proc vs. fun: why, it's obvious, procedural means following the procedures while functional means performing some function. It's red tape vs. something useful, really :)

interactiveProgram :: World -> World

You mean neither of you has ever written any interactive application?

Sure, they all have type World → World!

Single user?

Is this world inhabited by just one source of events? Are you happy with the way your PL deals with non-deterministic scenarios of incoming events?

If yes, I would much appreciate some pointers.

Pointer

I'm not really into interactive programming myself. Clean uses explicit environment passing (World -> World) and you essentially get a different world after receiving each event. The type system makes sure that you use the World single-threadedly (you can't copy the World).

See Comparing Proofs about I/O in Three Programming Paradigms (2001) by Andrew Butterfield and Glenn Strong for a comparison between this way of handling I/O, the monadic approach (Haskell) and the imperative, side-effect approach (C).
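
A rough Haskell transliteration of the Clean idea, just to show the shape of it. World is an opaque stand-in and the I/O primitives are stubs; Clean's uniqueness typing enforces the single-use discipline that this sketch can only state as a comment:

    data World = World   -- abstract token for "the state of the outside world"

    readLine' :: World -> (String, World)
    readLine' w = ("hello", w)   -- stub; a real system would perform input here

    writeLine' :: String -> World -> World
    writeLine' _ w = w           -- stub; a real system would perform output here

    -- Each step consumes one world and produces the next; the world must
    -- never be duplicated or reused.
    echo :: World -> World
    echo w0 = let (s, w1) = readLine' w0
              in  writeLine' s w1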

Err...

[...] although the project was in OOP, the OOP style was violated many times by the developers: it was easier for them to do something quick and dirty in C procedural style, but that could not be changed easily.

Forgive me if this sounds a bit rude, but it sounds to me like you don't do enough code reviews in your shop, or they're not good enough. Your development process is supposed to catch this sort of thing, surely?

With OO, a person is likely t

With OO, a person is likely to think "Well, I have a bunch of Books that have Authors, which will be purchased by Customers who have creditCardNumbers", whereas what does a normal human being think when you ask them to produce a functional solution?

Depends on what value of normal you're using! But probably something fairly similar, since these entities can be represented by types. The difference (well, a difference) with FP is that operations over these types are often easier to compose.
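
For instance, a hedged sketch of the bookshop entities as plain types (field names invented):

    data Author   = Author   { authorName :: String }
    data Book     = Book     { title :: String, authors :: [Author], price :: Double }
    data Customer = Customer { customerName :: String, cardNumber :: String }

    -- Composition at work: the cost of a basket of books is just a pipeline.
    basketTotal :: [Book] -> Double
    basketTotal = sum . map price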

Re: With OO, a person is likely t

I thought you might ask for a definition of "normal"! ;-)

I see where you are going with types. However, this is starting to look like OO, since (and I might send shockwaves through LtU here, but bear with me) to Joe Sixpack, they are not that different a notion.

However, the decisive issue that gets people to choose the "OO" model is not the ease of programming but the ease of design. Few people think of relations when they see Books and Customers. Inheritance is a simple and useful tool for bolting on extensions without too much headache. I realise that it is not what it should be used for, but from where I sit, this is the sad reality.

So, to draw the line under my mutterings: I agree wholeheartedly that OO is not the most elegant approach to programming, but refuse to concede that currently there are methodologies out there "on the market" which are more accessible to the programming masses.

BTW, thanks a lot for the "Concepts, Techniques, and Models of Computer Programming" link -- it's a little cheaper on Amazon and looks quite interesting.

What a disaster

When I first saw this article, I was optimistic. I thought someone was going to take OO languages to task for not having a solid theory of types. Or maybe it'd be insightful comments about how many design patterns are actually just ways to work around shortcomings of the paradigm. Or maybe it'd be some war stories of OO gone horribly wrong. OO has more than its share of sacred cows which could use some goring.

But then I got to this sentence:


A frequent argument for OOP is it helps with code reusability, but one can reuse code without OOP - often by simply copying and pasting.

Is he seriously advocating cut and paste programming? Yes, it looks like it.

By the time I got to the point where he calls Basic "as similar to English as possible", and calls procedural programming a fad and equates it to communism, I had ceased wanting to rebut this author, and instead wanted to MST3K him.

There is, I think, an interesting and educational article that could be written on the limitations and problems of object oriented programming. This article isn't it, however.

Well, if you liked that ramble...

...then you'd really love the Tablizer, the bane of many a newsgroup. (I wasted many an hour reading and arguing with the dude.) :-)

OOP, Type, and lousy article

I was also immediately struck by the cut/paste comment.

By the way, speaking of types, does anyone know what the current state of the art is in type-theoretical analysis of OOP? I recall Cook's thesis work years ago and issues about contravariance of function types making them unsuitable for modeling polymorphism, subclassing, etc. I've always thought that the single-object dispatch approach in C++ and Java made it difficult to see functions as having some kind of first-class status. Contrast that with the generic functions approach in CLOS. Aside from the theory, the superficial view of objects as types does help in modeling applications and also enables tool vendors to build refactoring IDEs and so forth, which are essential in large programming efforts.

In the beginning was the cons.

OOP for everything is bad

In my experience as a professional and hobbyist programmer, OOP works well for large, coarse objects. Like characters in a video game or entities in a simulation. And it's not so much encapsulation or inheritance or classes as being able to send an arbitrary message to a "thing" and have it respond. You don't care what the thing is, as long as it knows how to process a specific message. This has always been Alan Kay's view (messages are what matter), and it took me a while to see what he was talking about.

Fine-grained OOP is a different story. I'll defer to Joe Armstrong's little rant Why OO Sucks. It's flippant and informal, but hits the nail on the head IMO.

Marry OOP to FP

In my experience as a professional and hobbyist programmer, OOP works well for large, coarse objects.[...]And it's not so much encapsulation or inheritance or classes as being able to send an arbitrary message to a "thing" and have it respond.
I share the appreciation of OOP for exactly this purpose. Are you aware of O'Haskell?

I was thinking of designing a language like that, but using some Turing-incomplete language (such as Epigram) instead of Haskell as a base. What an opportunity to clash dependent types with interactive logic types! If only I were smart enough to do that :-)

You might like to learn Erlan

You might like to learn Erlang. I think you'd love it.

Let's do the Timewarp again...

Given that the article that started this thread reminds me of the cheesy computer hobbyist magazines from the 80's, and most of the arguments about "OOP hype" harken back to the early 90's, I think LtU may have stepped into a timewarp.

To pull it back to the present, let me make this proposal: most of what we call OO is in fact just a set of techniques (or PL constructs) that allow for the organization of code and data into separate components with localized scope.

(As an aside, pace Joe Armstrong, there are many cases where data structures and operations form a natural grouping, such as stacks and algebras.)

Based on this definition, I see, for example, Haskell modules, or SML structures, or ADTs as the same kind of mechanism as classes, and it seems to me that ALL PLs must handle this problem in some way or another, even if only by ignoring it and putting the whole program in the same soup.

So are the anti-OO people arguing that creating any abstractions other than functions is a bad thing? Or are they proposing some other mechanism for handling this?

Or maybe it is just the timewarp talking..

Abstractions are good

But OOP - as in "classes are the only abstractions you need/should use" - is constraining and counterproductive.

My arguments were more against modern mainstream OOP proponents than in agreement with the article, which made a few correct observations for the wrong reasons.

Hobbyist magazines

Given that the article that started this thread reminds me of the cheesy computer hobbyist magazines from the 80's,

No surprise--according to the bio blurb at the bottom, that's where the author got his start.

Save us from the Turing tar-pit!

Every type of program eventually gets turned into a sequence of opcodes, but not every program is itself a sequence of opcodes. That is why we are interested in programming languages - that is why there is anything at all for us to be interested in.

There's more than one way to sum a row of spreadsheet cells, incidentally. Suppose for the sake of argument that the data structure representing the range is a binary tree, with every subnode representing a binary partition of the range represented by its parent node. Each node also maintains a "sum" value, which is the total of the sums of both of its sub-ranges and is updated when either of those totals changes. To obtain the sum of the entire range, we take the sum value of the root node. To obtain the sum of some sub-range, we take the sums of all of the subnodes that cover parts of that range and sum them together.
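
Here is a sketch of that structure (names are mine, and fromList assumes a non-empty range):

    -- A cell range as a binary tree; every node caches the sum of its halves.
    data Range
      = Leaf Double
      | Node Int Double Range Range   -- size, cached sum, left half, right half

    size :: Range -> Int
    size (Leaf _)       = 1
    size (Node n _ _ _) = n

    -- The sum of the entire range is the cached sum at the root.
    total :: Range -> Double
    total (Leaf x)       = x
    total (Node _ s _ _) = s

    node :: Range -> Range -> Range
    node l r = Node (size l + size r) (total l + total r) l r

    fromList :: [Double] -> Range
    fromList [x] = Leaf x
    fromList xs  = node (fromList l) (fromList r)
      where (l, r) = splitAt (length xs `div` 2) xs

    -- Changing one cell recomputes only the cached sums on the path to the root.
    update :: Int -> Double -> Range -> Range
    update _ x (Leaf _) = Leaf x
    update i x (Node _ _ l r)
      | i < size l = node (update i x l) r
      | otherwise  = node l (update (i - size l) x r)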

Now, maybe our spreadsheet software represents ranges in this way and maybe it doesn't (maybe it just iterates over the entire range to get SUM values). Maybe it optimizes ranges for which SUMs are frequently demanded. The point is that there isn't a direct, one-to-one relationship between a spreadsheet program and the particular set of data structures and algorithms that are used to implement it. Several implementations are possible, and a self-optimizing run-time system might even choose between them dynamically based on performance feedback. The spreadsheet program is not just a disguise for a procedural implementation: it is a declaration of what we would like the run-time system to create a procedural implementation to do.

But we don't code the algorit

But we don't code the algorithm ourselves, we just select one of the predefined ways and supply it with data. That's not programming.

I am the King of Mercia, and I know

The spreadsheet programmer neither codes nor selects the algorithms that are used to evaluate the spreadsheet program, and that's the point.

I fail to see how not coding

I fail to see how not coding or not selecting the algorithm is programming.

A little more flip and a little less flop

If we have a set of input data, and a set of outputs, and a description of how to produce the outputs from the inputs, then what we have is arguably a program.

By this definition, a spreadsheet is a program. However, it is not a procedural program, because the procedures that are followed to transform the inputs into the outputs are not specified in the program itself.

Neither is it correct to argue that, because some procedure must be followed to perform the computation specified by the spreadsheet, the spreadsheet is really a procedural program in disguise. That is because we cannot say exclusively what procedural program it really is: any of a number of procedural programs might realize the specification, and it is the task of the spreadsheet engine to determine a suitably efficient strategy. This is also true of query optimization in relational databases, of course.

Finally, one might object that real programs often do more than simply translate a set of inputs into a set of outputs: they interact with the outside world in various ways. But so do spreadsheets, which automatically update their state in response to user input (changes to cell values). It is perfectly possible to implement a state machine in a spreadsheet, with the user providing inputs that cause state transitions (the user must provide the previous state as part of the input, as spreadsheet programs are referentially transparent).
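
That state-machine reading can be sketched as a referentially transparent step function, which is what each recalculation effectively computes (the states and inputs here are invented for illustration):

    data State = Idle | Running deriving Show
    data Input = Start | Stop   deriving Show

    step :: State -> Input -> State
    step Idle    Start = Running
    step Running Stop  = Idle
    step s       _     = s   -- any other input leaves the state unchanged

    -- The user supplies (previous state, input) as cell values; the sheet's
    -- formula plays the role of step and displays the next state.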

What is programming?

Do you fabricate your own chips too? After all, your programs all get translated down to machine code eventually -- which is just a sequence of predefined "ways" which you supply with data. All programming must come down to selecting from predefined operations at some point. Even electronic circuits designers must select their operations from the set that physics allows. Anything else would be magic.

Let's see it from a practical

Let's see it from a practical viewpoint: what one can really achieve with Spreadsheets, other than calculations?

Let's see it from a practical

Let's see it from a practical viewpoint: what one can really achieve with Spreadsheets, other than calculations?

Let's take one more step back: What can one really achieve with Computers, other than calculations?

(Actually, I promised myself I wouldn't post in this thread. Whoops.)

Computers are more than calculators of functions

Let's take one more step back: What can one really achieve with Computers, other than calculations?

Unlike with (automatic) Turing machines, with Computers one can achieve more than calculations (if we define calculations as calculations of values of functions).

(Actually, I promised myself I wouldn't post in this thread. Whoops.)

Me too, but I can never resist a chance to proclaim the superiority of computers over a-TMs :-)

However

Programming a computer to do only what a Turing machine can do is still programming...

TM has no select(2)

Programming a computer to do only what a Turing machine can do is still programming...
I don't see your point. Programming a computer is always programming, even if the task being programmed cannot be fulfilled by a Turing machine.

In response to your previous comment that a TM can simulate an interactive process: imagine a process with more than one interaction channel (say, a real-time multiplayer game server, connected to other processes/TMs that represent clients). The TM will accept a pair of the game state and an event, and calculate the next state of the game. The problem is, who will call the TM? Client TMs obviously cannot do that. It looks like we need some demon who will run all the TMs, passing outputs of one as a part of input to another, keeping in mind their last states and connections to each other. Again, this demon cannot be a TM itself (as no (deterministic) TM can run several potentially blocking TMs in parallel without being blocked itself).

I have trouble explaining my point, so I will look for some papers...

Meanwhile, here is an example from real-life: try to program a web server on one thread (deterministic TM) without select(2).

Trip, trap over the bridge

Slight crossing of wires - I just wanted to point out to our other interlocutor that the fact that computers aren't just TMs doesn't mean that programming something that is just a TM (or even not quite a TM) isn't real programming.

Program organization vs. domain modeling

Well, here's my two cents. One thing that bugs me about nearly every short summary of "Object-Oriented Programming" is how it overemphasizes its value as a way of modeling a problem domain, and then confuses this with good program organization.

We've all seen the classic baby example of a class hierarchy: create a base class for shapes, subclass it for triangles, squares and circles, use multiple inheritance to add attributes like colors, etc. This is very often a good approach when you're modeling a problem domain, but other times it isn't, and other times it's best to mix it with other techniques, such as domain specific languages.
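
For comparison, the same toy domain can be sketched without subclassing at all, e.g. with a Haskell type class (purely illustrative):

    class Shape s where
      area :: s -> Double

    newtype Circle = Circle Double        -- radius
    data    Rect   = Rect Double Double   -- width, height

    instance Shape Circle where area (Circle r) = pi * r * r
    instance Shape Rect   where area (Rect w h) = w * h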

The problem I've experienced is that most OOP programming (and I'm thinking of Java here) isn't really like that. People end up using class hierarchies not for modeling problem domains, but rather for modularizing and parametrizing code. Most of the classes that I see in the systems that I work with have to do not with the problem domain, but rather with implementation details, and the code gets split into classes the way it is to facilitate reuse, not because of the ontology of the problem being tackled. Class instances don't correspond so much to objects in the problem domain as to components in the system that solves it; singleton classes are really common, and if a single concrete class is instantiated multiple times, the intent is not to model the state of real-world objects, but rather to parametrize a software component.

A vivid example: in a software system I've worked on with thousands of classes, the base class for most of the system's classes is called LoggingSupport, and as the name indicates, it has the basic code that's used to interact with the system's logging facilities. Inheritance is used as a convenient mechanism for components to share that code.

Somebody above mentions design patterns as a way the practice of OOP diverges from the one-paragraph shapes-with-colors version. I couldn't agree more.

In short, what I'd like to see is some language or paradigm that (a) emphasizes program organization and component parametrization over domain modeling, (b) is very late bound. The Standard ML module system looks very good on the first front, but pretty bad on the second one...

P for Programming

"most OOP programming... isn't really like that. People end up using class hierarchies not for modeling problem domains, but rather for modularizing and parametrizing code."

The extra emphasis you gave to programming (OOP programming) pretty much explains that - it's OO programming not OO domain modelling.

"I'd like to see is some language or paradigm that (a) emphasizes program organization and component parametrization over domain modeling"

You just said that's what OOP gives you!

"(b) is very late bound"

Smalltalk, Lisp, ...

The extra emphasis you gave t

The extra emphasis you gave to programming (OOP programming) pretty much explains that - it's OO programming not OO domain modelling.

Well, I wouldn't call that "emphasis"-- any more than the word "machine" is emphasized in the phrase "ATM machine"...

Still, google for "object oriented programming", and the first hit you get is the Object-Oriented Programming Concepts part of Sun's Java Tutorial. The first section thereof ("What Is an Object?") gives you the party line:

Software objects are modeled after real-world objects in that they too have state and behavior.

You just said that's what OOP gives you!

No I didn't. I said that it strikes me as designed primarily to provide domain modeling capabilities, not program organization. Essentially I'm being skeptical about mainstream OO languages in the following way: it strikes me that their constructs are designed with domain modeling in mind, not program organization, and that there might be a better and/or more straightforward approach to the latter, one that they're missing.

Smalltalk, Lisp, ...

I'm not aware of Smalltalk and Lisp having much in the way of a module system... especially not if you compare them with ML. (I ought to take a look perhaps at Scheme48's module system, and definitely at Alice ML's...)

Why do people still not "get" OOP?

What this "critique" made me think about is, why is it that so few programmers "get" OOP? To be more precise why so few programmers are capable of developing a reasonable set of classes with appropriate methods given a problem. I was trained in OOP and design in college and you could say it is my native way of thinking (though I still have a secret love of Scheme and wish that I had opportunities to truly work with Haskell and ML). My experiences have convinced me that properly architected OO programs have many advantages over procedural programs. These advantages include flexibility, extensibility, maintainability, and correctness. However, among the programmers I have known, few realize these ideals from OOP programming because of their inability to correctly design. To me the real issue this article raises is why so many typical working programmers still don’t “get” it and what we, who do, should be doing differently? What do LtU readers think ?

It's more like, they don't get programming

People who don't "get" OOP generally don't "get" programming. That's a gross generalization, though, and needs to be broken down. There are two variants of "not getting OOP":

* the person is genuinely clueless

* the person is not following a methodology or set of procedures that has been established by an "OOP guru"[1]

I think that you are referring to the former. The latter is a common reason cited by managers for the failure of OO projects. There is a sort of "general cluelessness" that pervades the corporate programming world, which people try to sweep under the rug with a lot of blather about how OOP (or something else) is going to revolutionize the industry. By general cluelessness I mean a profound and glaring lack of ability to specifically identify the problem to be solved and map it into a formalisation so that it can be processed by a computer. This is step number one in all programming: identifying the problem specifically (the difference between the current state and the result you want) and figuring out how a computer might be used to solve it. If you lack this capability, then you lack the capability to program. This lack of general programming ability is often misinterpreted as a lack of ability to do OOP, precisely because OOP has been so successful with the lack-of-ability-to-program crowd: OOP tool vendors have targeted the corporate development market in a big way in the last ten or fifteen years.

[1] By OOP guru I mean the kind of person who writes thick tomes about implementing Rational Unified Process for managers; I don't mean Alan Kay, Guido van Rossum, or someone like that who is the real deal. :)

Poorly written article

This article, I hate to say it, is trade rag fodder. It's full of anecdotal evidence, hearsay, industry superstition, and unsubstantiated opinion, with more than a few outrageous claims thrown in to draw hits and click-thrus.

Which is OK, I suppose, given where it was posted. The trade press is full of such material.

But I'm kinda surprised that it would be considered worth a mention here on LtU.

As others have written, there are numerous legitimate criticisms of OO (or any paradigm, for that matter). However, this article struck me as a bit content-free.

OK then...

I guess the article isn't that good, and perhaps I shouldn't have posted about it. But at least this thread gets some attention while threads about interesting papers with new results didn't get the attention they deserved...

Agreed.

My criticism was intended entirely for the article, not for your editorial discretion. Sorry if it appeared otherwise.

At any rate, your observation is rather astute--an article that is essentially flamebait attracts more attention than numerous examples of real scholarship (assuming attention is measured in number of talkbacks). I suppose that's just human nature... the urge to shoot fish in a barrel is frequently irresistible.

Flaming is easier than reading papers ;-)

I think this phenomenon is more immediate than that: it can take a really long time (and non-zero work) to read some of the papers here fully, whereas anyone who has heard of a class can argue about OOP.

Where the playing field is level, many will play.

I find that it sometimes takes me so long to digest some of the papers mentioned that by the time I have something interesting to say, the live threads have moved on.

Perhaps if we all developed the habit of reviving old threads after we read the paper, we might have more focused discussions. ;-)

Perhaps if we all developed t

Perhaps if we all developed the habit of reviving old threads after we read the paper, we might have more focused discussions. ;-)

Good idea!

Oh boy am I going to get in trouble this time

Oh, wait, I think you meant the threads about real papers. Sorry.

(P.S.: Lately I've been looking for something away from OO, maybe away from FP too, something to mix them up somehow? I am insufficiently smart to formalize my thinking. Maybe it's driven by the 'eval problem'... I think the solutions to the 'eval' problem we've seen of late are nice and all, but they smack too much of a physical representation rather than a logical one to me; Scala's solution has so much ASCII here, there, and everywhere that it is painful. I'd rather have that be the microcode, in a sense, and be working in some higher-level ("any problem can be solved by introducing abstraction") logical view that shows things all together. Even though Chris didn't like Tablizer, for some strange reason when I noodle, er, google around for alternatives I end up there sometimes. Maybe it is like how some of the things LaRouche says make sense to me, in the small, but when I see the big picture they inhabit I think I need to scream and run.)

([on edit adding] P.P.S.: Recently I've been wondering about RPC vs. not and it does sound like the old saw all over again. Why is it that we are so attracted to the shiny idea of keeping state and functionality bundled together? What are the key points we have to remember / drill into ourselves that knock us on the head so we remember that there are even better reasons not to marry them?)

We are?

We are?

Oui

Well, some folks are, and others aren't, as mentioned before. There are enough people who are clearly smarter than I am on either side of the debate, from what I can see, that I guess the only conclusion I can draw is "it depends".

Object-Oriented or Class-Oriented?

Daniel Yokomizo asked (in an earlier comment) whether classes are necessary for OOP. They're not: see Self, which uses special objects called prototypes instead of classes. (My understanding is that the Self team were trying to bypass the problems with metaclass objects in Smalltalk.) Craig Chambers, one of the designers of Self, has since produced Cecil, another class-free OOPL. And everyone reading this has probably used, if not coded in, a widely-deployed prototype-based OOPL: Javascript. (However, to my dismay, there are proposals to reinstate classes in a future version of Javascript ...)
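Python has no built-in prototypes, but as a rough sketch (all names hypothetical), the Self/Javascript mechanism can be imitated: objects hold slots and delegate unknown lookups to a parent object, with cloning standing in for subclassing.

    # Prototype-based objects, Self-style: no classes, just objects that
    # clone other objects and delegate unknown lookups to their parent.
    class Proto:
        def __init__(self, parent=None, **slots):
            self.parent = parent
            self.slots = dict(slots)

        def get(self, name):
            if name in self.slots:
                return self.slots[name]
            if self.parent is not None:
                return self.parent.get(name)   # delegation, not inheritance
            raise AttributeError(name)

        def clone(self, **overrides):
            return Proto(parent=self, **overrides)

    point = Proto(x=0, y=0,
                  describe=lambda s: f"({s.get('x')}, {s.get('y')})")
    p = point.clone(x=3)          # shares y and describe via delegation
    print(p.get("describe")(p))   # (3, 0)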

One problem with class-oriented PLs is that the class mechanism does too much: encapsulation, modularization, inheritance and subtyping. I would prefer to see separate mechanisms for separate things. For instance, COPLs generally provide interface specifications (formal or informal) for classes by running a tool such as Javadoc or Eiffel's 'short' over the class definition, meaning that the poor person writing a class has to write to 4 different 'audiences' at once: the compiler, people writing code that uses the class, people writing derived classes and people maintaining the class itself. In Modula-3, you write at least two files for each module: an interface, where you specify and/or document everything a user of the class needs to know, and an implementation, which only people who code or maintain the module should need to read. I think this leads to better programs and better code re-use. I would like to go further and separate sub-typing from implementation inheritance, as Sather does.
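Modula-3 and Sather aren't shown here, but as a rough analogue, Python's structural Protocol types let one sketch what separating subtyping from implementation inheritance looks like (names hypothetical): the type serves clients and the checker, the implementation serves maintainers, and neither is tied to the other by an 'extends' relationship.

    from typing import Protocol

    class Stack(Protocol):          # the interface: all a *user* needs
        def push(self, item: object) -> None: ...
        def pop(self) -> object: ...

    class ListStack:                # an implementation: inherits nothing
        def __init__(self) -> None:
            self._items: list = []
        def push(self, item: object) -> None:
            self._items.append(item)
        def pop(self) -> object:
            return self._items.pop()

    def drain(s: Stack) -> None:    # clients depend only on the type
        ...

    drain(ListStack())   # structural subtyping: no 'extends' needed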

(By a nice coincidence, I'm currently reading A History of CLU by Barbara Liskov, in which she discusses her pioneering work in this area. I got it from the “Design Docs” page of this site.)

Objects as interpreters?

I'm beginning to think it's useful to think of objects as mini-interpreters. You take a namespace (i.e. a set of identifiers bound to values; aka an environment), which defines a vocabulary. Then you add a command which interprets messages for this object. The command can take care of things like inheritance, etc. This seems a very flexible way of looking at OOP as language design. It also suggests an appropriate scale for objects -- an integer shouldn't be an object, but a component for handling HTTP requests might be.

(I also think that the namespace should be a first-class value, the "command" should be a function (monadic parser?), and this sort of interpreter-function is a sort of "type" (or maybe a type-class?)).
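A minimal sketch of that view in Python (all names hypothetical), assuming messages are plain strings dispatched against a first-class environment:

    # An object as a mini-interpreter: a namespace (environment) plus a
    # single function that interprets incoming messages against it.
    def make_counter(start=0):
        env = {"count": start}              # the namespace / environment

        def interpret(message, *args):      # the "command"
            if message == "increment":
                env["count"] += 1
            elif message == "value":
                return env["count"]
            else:
                raise ValueError(f"unknown message: {message}")
        return interpret

    counter = make_counter()
    counter("increment")
    counter("increment")
    print(counter("value"))   # 2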

Any literature I should be reading? (Either backing me up, or showing why I'm wrong).

NO EVIDENCE, the author is right about this

You can call the author a "geezer" and all kinds of names. But he is right that there is no objective evidence that OO is better. The only book that claims to make a decent detailed case for OO is Bertrand Meyer's OOSC2 book, and his arguments are debunked at:

http://www.geocities.com/tablizer/meyer1.htm

For one, it makes the assumption that the world divides nicely up into "type" taxonomies, which is absurd. The real world does not care about our taxonomies, and changes randomly, ignoring them. (Not all OO fans are into tree taxonomies, but that is another debate.)

There are no comparative examples where we can *see* OO actually being better. It is only anecdotal. Don't dismiss a "geezer" based on anecdotal evidence only. That is not fair. We need some SCIENCE, not anecdotes, not insults.

Wrong Argument

But the argument is that OOP is "better" in "theory" than in "practice," not that OOP is better than alternative concepts. It's really not hard to see how someone could arrive at that conclusion based upon experience with the C++s and Javas of the world. Nor is it hard to see how someone could disagree based upon experience with the Smalltalks and the Common Lisps of the world. I haven't seen a lot of dismissal of the article based upon the claim that "OOP is better," but rather precisely on the basis that the author doesn't engage in the kind of science that you're asking for.

Right, No Science

but rather precisely on the basis that the author doesn't engage in the kind of science that you're asking for.

Nobody is doing such science, and THAT is the problem. All we have are endless fights based on anecdotes. The author rightly asked for better science, or at least for public examples of OO being better that are subject to open-source-style scrutiny.

OOP understood differently

The experience of using C++, C# and Java is vastly different from that of using Smalltalk, Lisp and Python. I think OOP cannot be justly defended or blamed, because different experiences result from using OOP in language-specific ways.

CLU, for example, is considered OOP by some (myself included), but it did not provide inheritance, and methods were bound to types only. This is an extreme example, but my point is that the OOP experience is mostly described from a language-specific point of view.

It is harder to talk about the specifics of OOP concepts as practiced by the many working programmers who churn out production code.

I suspect even the most ardent OOP defenders would in practice "resort" to a procedural way of programming when necessary.

Lisp gives you both ways. For me, languages that force you to declare a class for the "main" procedure are absurd.