## Implementation Inheritance

I have been reading an old LTU thread from 2003 regarding the role of inheritance in Object Theory. The general feeling of the discussion appeared to be that inheritance of interface is preferable to inheritance of implementation.

I have come across this sentiment so many times during my programming career that I am beginning to wonder whether implementation inheritance (with all its associated troubles: fragile base classes, unwieldy monolithic class hierarchies, multiple inheritance) belongs in modern Object (or Component) based languages at all.

I have been idly wondering: might we be better off moving swiftly away from inheritance of implementation, and adopting a language (and supporting tools) that makes it quick and easy to program using interfaces, composition and delegation? (Interfaces for Polymorphism, Composition for code sharing and encapsulation, Delegation for extension.)
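A minimal sketch of the trio in Python (all names here are illustrative, not from any real framework): `Greeter` is the interface, `LoudGreeter` shares `PlainGreeter`'s code by composition and extends it by delegation, with no implementation inheritance between the two concrete classes.

```python
from abc import ABC, abstractmethod

class Greeter(ABC):                      # interface: polymorphism
    @abstractmethod
    def greet(self) -> str: ...

class PlainGreeter(Greeter):             # a small reusable component
    def greet(self) -> str:
        return "hello"

class LoudGreeter(Greeter):              # has-a Greeter, not is-a PlainGreeter
    def __init__(self, inner: Greeter):
        self._inner = inner              # composition for code sharing

    def greet(self) -> str:              # delegation for extension
        return self._inner.greet().upper() + "!"
```

Both classes stay substitutable wherever a `Greeter` is expected, and `LoudGreeter` never depends on `PlainGreeter`'s internals.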

"I fear people who like composition. The gravest sin of OO programming is confusing 'is a' and 'has a' relationships. But composition encourages you to munge those concepts"

The poster certainly has a point! But does this indicate a shortcoming with the HasA-IsA meme, rather than composition as an extension mechanism?

So ultimately, my question is: What do LTU readers feel about inheritance of implementation in 2007?

Apologies if I'm seen as flogging a dead horse, but I see so much reliance on implementation inheritance around me that I'm wondering if the issue is fully resolved.

### Try Coming at it from the Self/NewtonScript Perspective

Implementation inheritance is often abused because, it seems to me, typical coders rarely create a truly object-oriented design. My canonical example, which I see over and over, is a "God class" -- or any class called a "Manager" -- which is generally really an API wrapped up in a class.

Implementation inheritance often does work very nicely in small components, like UI widgets. But -- and this is a big but -- the key is a clear mental model of the exact relationship between your method implementation and the implementation of the base class. Do you want to _replace_ a basic widget behavior? To do that right you often need to read the source of the base class method. Are you _extending_ it and still calling the base class method afterward? Are you calling it before or after yours? Very often, it isn't possible to cleanly do what you want, because of an excess of state in the base class; almost always, methods are too big and too inter-dependent. I have, however, seen some exceptions: the PowerPlant C++ framework from Metrowerks CodeWarrior was very nicely factored and implemented as a "forest." The confusion over how to extend a base class behavior can be resolved cleanly using the "decorator" pattern. Some other exceptions: Apple's WebObjects Java framework, which is nicely integrated with a visual builder, and Apple's experimental "Technology Release" of Dylan.
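The replace-versus-extend distinction can be made concrete in a few lines. This is a hedged Python sketch with made-up widget names; the return values are lists only so the call ordering is visible.

```python
class Widget:
    def draw(self):
        return ["base border"]           # the base behavior

class ReplacingWidget(Widget):
    def draw(self):                      # replace: never calls the base
        return ["custom drawing"]

class ExtendingWidget(Widget):
    def draw(self):                      # extend: base first, ours after
        return super().draw() + ["label"]
```

Whether `ReplacingWidget` is even correct depends on state the base class may have mutated inside `draw` -- exactly the "read the source of the base class method" problem described above.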

The key is that these examples tend to have very small, clear methods. And in Dylan you have getters and setters, so you can rather nicely intercept what looks like plain old variable access and turn it into something more elaborate. This also works in part because Dylan is like CLOS; classes don't "contain" methods, but rather multi-methods "contain" methods that specialize on subtypes. This makes it considerably easier to quickly extend behavior, where the "class" is really an object consisting entirely of data.

Your points about composition and delegation are well-taken: some of the more successful uses of OOP I've seen did in fact rely heavily on delegation, using messages and a "Chain of Responsibility" design pattern. Borland's TurboVision framework, for example, relied heavily on constructor chaining. There was a clear _temporal_ API, in the context of object construction and GUI event handling. The Qt framework with its signals-and-slots preprocessor could be considered building chains of delegation. Often these environments have nice graphical layout tools, such as Apple's Interface Builder (the latest incarnation of the NeXT toolset). Although I have yet to use Objective-C, it is highly thought of and has extensive support for interfaces.

I have used interfaces extensively in Java code, and you are right: it is easy and flexible to add an interface to a class or to change it later.

I recently have had reason to think a lot about client/server designs in the context of a proprietary framework for embedded software in which a class "is-a" server (that is, it inherits from a server class) but "has-a" client (it can own client objects for one or more servers, and provide via callbacks a point for handling return messages). I did not really like this design, but it does work fairly well. In a more flexible language than C++ I might consider multiple inheritance, a tool I've rarely needed, but when I did need it, I really needed it. Multiple inheritance can work very well for mix-ins, and certain complex problems, often involving the "strategy" design pattern, as in "I need three different strategies for presenting this data." I would not be too quick to throw out multiple inheritance, although when I see it in C++ I start to feel nervous.

I'd like to call your attention to an alternate multiple inheritance paradigm you may not be familiar with. In NewtonScript, objects don't inherit from base classes; they inherit from other objects, and you actually have two inheritance chains. This paradigm was designed specifically for GUI programming, although it could be used for objects that didn't have a visual representation.

The "parent" chain represented the visual chain of responsibility, modeling the visual containment hierarchy. The messages that went up and down this tree were generally more semantic, as in "user has activated this feature." You could (roughly, hand-waving) consider that to be interface inheritance. The "proto" chain represented the chain of behaviors; for example, how an object redrew itself. It would link directly to a prototypical button, for example, or you could implement your own drawing behavior. This approach was taken from the language Self, but there are other examples, such as the language Omega.

In NewtonScript, inheriting from objects worked as a strategy because of copy-on-write: a run-time object could consist mostly of references to parent and prototype objects in ROM, which was cheap, with only an absolutely minimal footprint in RAM. This is starkly different from the Java or C++ object model; you could call it "inheritance by difference." If a slot was updated at runtime, a page fault would occur and be handled, allowing the slot to be created in the RAM heap. That was a great design, and to the best of my knowledge it is not used in any more "modern" runtimes for low-power devices.
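The two-chain lookup can be sketched in Python. This is a rough approximation only: the real NewtonScript lookup rules and copy-on-write machinery are more involved, and the frame contents here are invented for illustration.

```python
class Frame:
    """A NewtonScript-style frame with proto and parent links."""
    def __init__(self, slots=None, proto=None, parent=None):
        self.slots = dict(slots or {})      # this object's own slots
        self.proto = proto                  # behavior chain
        self.parent = parent                # containment chain

    def lookup(self, name):
        frame = self
        while frame is not None:            # walk the parent chain...
            f = frame
            while f is not None:            # ...trying each proto chain
                if name in f.slots:
                    return f.slots[name]
                f = f.proto
            frame = frame.parent
        raise AttributeError(name)

    def set_slot(self, name, value):
        self.slots[name] = value            # "copy on write": local only

proto_button = Frame({"draw": "standard button drawing"})
root_view = Frame({"title": "root view"})
my_button = Frame(proto=proto_button, parent=root_view)
```

Until `my_button` writes a slot of its own, it is nothing but two references; writing `draw` locally shadows the prototype without touching it, which is the "inheritance by difference" idea.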

So I guess the answer I'm coming to is that it is easy to abuse inheritance of implementation, and I've seen a lot of examples where it was done poorly, with a few where it was well thought-out and really gave the developer leverage.

One thing I'm trying to come to terms with is this meme that "OOP has failed." I don't quite see it that way, although I see a lot of weak and inelegant use of OOP, particularly now in the C++ and Java worlds. I'm currently trying to learn Haskell, and trying to somehow fit together the very elegant programming patterns that can be achieved using function composition, with the somewhat different object-oriented paradigm, which tends to be stateful and imperative. And I'm having trouble doing so! To declare that OOP is a failure is to throw out some very nice and very effective designs with the "bathwater" of monstrously complicated frameworks of the past that deserve to die, like MFC and MacApp.

Anyway... sorry for the rambling... just some incoherent thoughts from someone who is more a practitioner than a theorist, but always looking for elegance and simplicity wherever I can find it.

### Good Points

So I guess the answer I'm coming to is that it is easy to abuse inheritance of implementation, and I've seen a lot of examples where it was done poorly, with a few where it was well thought-out and really gave the developer leverage.

I agree. But the (frighteningly) easy abuse of implementation inheritance, coupled with a lack of a simple and enforced framework for handling all those awkward overriding rules you mention is making me wonder if composition with delegation might be a safer alternative.

..almost always, methods are too big and too inter-dependent.

I agree, and think this point is key. In the compositional approach I have in mind, objects are small and numerous and the programmer composes new objects by picking and choosing from a large library.

I'd like to call your attention to an alternate multiple inheritance paradigm you may not be familiar with.

Yes, I have studied prototype based object systems in some detail. My main issue with prototypes is that the prototypical (parent) object exists outside the client object. It worries me to see (for example) an abstract "Animal" object floating around in the system masquerading as concrete!

In the Composition + Delegation approach, a "Super" is encapsulated inside the client which enables you to think of the whole thing as one object. In my view this simplifies everything a great deal and easily allows embedding of multiple objects. Selected embedded objects can have their interfaces exposed to the outside world alongside your own interface. And you are free to delegate calls (after any pre-processing you wish to perform) that come in through any of the interfaces down to the embedded objects as you see fit.
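A minimal Python sketch of that shape (the names are hypothetical, and "[t0]" stands in for real pre-processing such as timestamping): the embedded "Super" is a private member, one message is intercepted and pre-processed before being delegated down, and the rest of the embedded object's interface is exposed by forwarding.

```python
class Logger:                          # the embedded "Super"
    def log(self, msg):
        return "LOG: " + msg

    def flush(self):
        return "flushed"

class TimestampingLogger:
    def __init__(self):
        self._super = Logger()         # encapsulated inside the client

    def log(self, msg):                # pre-process, then delegate down
        return self._super.log("[t0] " + msg)

    def __getattr__(self, name):       # expose the rest of the embedded
        return getattr(self._super, name)  # object's interface as-is
```

From the outside, the whole thing behaves as one object, which is the simplification claimed above.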

This also works in part because Dylan is like CLOS; classes don't "contain" methods, but rather multi-methods "contain" methods that specialize on subtypes. This makes it considerably easier to quickly extend behaviour, where the "class" is really an object consisting entirely of data.

I agree whole-heartedly, this is another facet of the sort of system I have in mind.

One thing I'm trying to come to terms with is this meme that "OOP has failed".

I don't personally subscribe to this view. My thinking is that OOP has partly failed through what some might consider a strength: extreme flexibility.

How can we create a truly reusable ecosystem of software components when everyone is using a different programming ethos? (You mention a few: Qt, TurboVision, Interface Builder, MFC, MacApp.)

Ultimately current OOP systems give the programmer too much choice. And in my opinion the only way to achieve guaranteed interoperable and truly reusable components is to straitjacket the programmer into a unified framework that solidly enforces the right way to do things while not restricting what things are possible.

### See Jan Bosch's LayOM for

See Jan Bosch's LayOM for some ideas about object-oriented reusability.

Other works from Jan Bosch

Nothing new, it is 10 years old, but well worth a look.

### Composition and delegation

You might like Szyperski's book Component Software, which (IIRC) spends quite some time discussing the pitfalls of implementation inheritance and the benefits of composition+delegation.

### It's all you need

Composition+delegation is really all you need. I'm really starting to detest the limitations of this inheritance cruft; it gets in the way more often than not, and when your language supports delegation, you can simulate inheritance via forwarding if actually desired. First-class messages are the wave of the OO future!

### re: It's all you need

This is absolutely true. What's odd is that so few languages seem to be heading down this path. Perhaps a perceived lack of efficiency inherent in the delegation model is hindering its acceptance as a replacement for inheritance? The only language I'm aware of that has gone all the way in this respect is Io -- even further than Self in some ways.

### Re: delegation languages

I thought Perl had something where you could say "if we get called and we don't have the function, then pass it on to this other sucker"? (on edit: autoload.)

I've hacked up some script to let me do really lame delegation wrappers in Java, I expect things like Eclipse already have such functionality? Although it bugs me to invoke the IDE escape clause to work around language lameness.

### big difference

There's a big difference between having certain features as a core part of the language and being able to hack them in here and there. For example, object-oriented programming is much more useful in pure OO languages than it is hacked into a non-OO language (see the various Scheme object systems, or even Java).

My guess is that the same sort of thing goes for delegation; a language that uses delegation for everything (differential inheritance, "scoping", etc.) is much more flexible than one which simply allows you to make use of it here and there. A goal should be to eliminate static inheritance, not supplement it, at least for some high-level languages.

The question I am going to raise is probably somewhat perverse for Lambda the Ultimate, but I believe that we should primarily be discussing theoretical issues that have a practical implementation. And in practice, a lot of young, naive programmers (and more experienced ones) will ask how efficient a feature is before adopting it.

I believe that one advantage of static, class-based inheritance is efficiency. I am wondering: are there methods for implementing (static) delegation-based inheritance efficiently?

I am primarily interested in this topic because I find implementation inheritance somewhat hard-to-use-efficiently, but am nevertheless interested in implementing a fast language. Please disregard this post if you find it disgusting or perverse.

### Inheritance and delegation efficiency

I believe that one advantage of static, class-based inheritance is efficiency. I am wondering: are there methods for implementing (static) delegation-based inheritance efficiently?

I've been investigating the efficiency of dispatching techniques, and it turns out that virtual dispatch tables are not the most efficient, even for dispatching static OO languages like C++ [1]. The faster binary tree based dispatch (BTBD) is amenable to delegation-based inheritance via first-class messages; it may stress the performance limit depending on the number of distinct object types.

In a language with first-class messages, like Smalltalk, sending the same message to a list of distinct object types is the absolute worst case performance-wise given typical dispatching techniques; using BTBD, dispatching overhead is logarithmic in the number of distinct object types that implement that message. This is assuming a dynamic messaging model of course; given sufficient compile-time analysis, the majority of call sites can be resolved statically at link time (except situations like the above degenerate case).
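This is not the BTBD algorithm itself, but a toy Python sketch of why tree-based dispatch has logarithmic overhead: implementations are keyed by type id and located by binary search over a sorted table (a real implementation compiles the search into nested if-statements). The type ids and method bodies are invented for illustration.

```python
import bisect

# Sorted type ids with a parallel table of method implementations.
type_ids = [3, 7, 19, 42]
impls = [lambda: "draw circle", lambda: "draw square",
         lambda: "draw line", lambda: "draw text"]

def dispatch(type_id):
    """Find and invoke the implementation for type_id in O(log n)."""
    i = bisect.bisect_left(type_ids, type_id)
    if i < len(type_ids) and type_ids[i] == type_id:
        return impls[i]()
    raise TypeError("message not understood by type %d" % type_id)
```

Sending one message to a heterogeneous list of receivers then costs one such logarithmic probe per element, matching the worst case described above.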

It's also possible to perform first-class messaging via Java/C# interfaces (which use vtables), but this requires extensive compiler support: each message type, i.e. name+signature, is essentially its own interface, and an object with n methods implements n interfaces. The compiler needs to generate the appropriate vtable for the interfaces passed in at each call site; this may be problematic with dynamic loading, but runtime code generation could definitely handle this case as well. I'm investigating this approach for a first-class messaging-based language on .NET.

There's no magic bullet that I've seen yet: the added expressiveness will cost you something if you use it. If you stick to fairly static calls in your pure OO language with first class messages, then you can get close to overheads of C + garbage collection. Adding dynamism has to cost something: logarithmic overhead in the number of distinct cases; that's an optimized series of nested if-statements to determine the exact case to handle. Seems pretty reasonable to me, and hard to beat with a general solution (case-specific solutions will always be faster of course).

I have plenty of additional references if anyone's interested (search for 'dispatch' to find the relevant papers).

[1] I've updated Wikipedia on virtual dispatch tables to reflect this.

### Efficiency? Definitely!

I am wondering: are there methods for implementing (static) delegation-based inheritance efficiently?

Yes. In fact you can implement it more efficiently than using virtual functions. The compiler can optimize intra-class method calls because it can avoid vtable dispatch.

See:

### Funny you should mention that book..

Because my copy arrived the day after I posted the topic and so far I'm finding it a great read.

Recently I've got into the habit of underlining anything I agree with or feel I need to research further. The problem being, Szyperski's book is rapidly becoming unreadable!

### There aren't any Animals

It worries me to see (for example) an abstract "Animal" object floating around in the system masquerading as concrete!

You wouldn't; instead, you would see ProtoDogs with muffled barks and indeterminately colored coats, or ProtoParrots with barely the capability for flight. Or maybe flying ProtoParrots that wouldn't know whether to eat Brazil nuts or sunflower seeds.

On a more serious note, what worries me is that the only "grown-up" way of extending an object is through memory overlay of contiguous partial representations.

### OO actually stands for many unrelated concepts.

OO actually stands for many concepts. More specifically, inheritance, encapsulation and message passing are disguised forms of classification, record extension, composition with namespace flattening, pattern matching based on value type, etc.

Inheritance is three things: a) a classification of values, b) record extension, c) namespace flattening. When a class inherits from an interface, we have a classification of the object. When a class inherits from another class, we have record extension. In either case, the names of the base class/interface are transferred to the namespace of the derived class.

For example, when a class Foo implements the Bar1 interface and the Bar2 interface, then we have a classification: Foo is a Bar1 and a Bar2. When a class Foo extends the Bar class, then Foo extends the Bar record and modifies/extends behaviour of Bar.

Message passing is simply run-time pattern matching based on value type: the type of actual subroutine to invoke is determined by the type of the first argument (in case of single dispatch) or some/all of the arguments (in case of multiple dispatch).
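The single-dispatch case can be made concrete with Python's standard `functools.singledispatch`: the subroutine actually invoked is selected at run time by the type of the first argument, exactly the "pattern matching based on value type" described above (the example values are illustrative).

```python
from functools import singledispatch

@singledispatch
def describe(x):                  # default case: no more specific match
    return "some object"

@describe.register
def _(x: int):                    # selected when the argument is an int
    return "an integer"

@describe.register
def _(x: str):                    # selected when the argument is a str
    return "a string"
```

Multiple dispatch generalizes this to matching on some or all of the arguments rather than just the first.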

So, under this light, and to answer your question about implementation inheritance: implementation inheritance is composition + namespace flattening. In my mind, it is neither good nor bad; it all depends on your needs. As long as classification is separated from extension, I am all for it.

To put it differently, I prefer a language where objects are what they are supposed to be, i.e. representative of entities in the problem domain, and interfaces are simply wrappers around objects based on compatibility of capabilities. Some call this structural subtyping.

For example, I would like my library to have a single class named File which represents a file on the disk, with a short API to read and write primitives to/from it:

```java
class File {
    void open(String name);
    void close();
    void writeBytes(byte[] data);
}
```


Now if I wanted to treat a file as a Stream, I would have to have another class which extends File:

```java
class FileStream extends File {
    void putByte(byte b);
    byte getByte();
    void putInteger(int i);
    int getInt();
}
```


If I wanted an abstraction over Streams, then I would have to code an interface:

```java
interface Stream {
    void putByte(byte b);
    byte getByte();
}
```


And then if I wanted to use a file stream where a stream is expected, I would do the following:

```java
void processStream(Stream s) { ... }

void main() {
    processStream(new FileStream("myfile.txt"));
}
```


What happens in the above code is that the compiler sees that Stream and FileStream have common methods and therefore constructs an object from FileStream that invokes the appropriate methods for it.

This approach (structural subtyping) is much more flexible, because one cannot tell in advance when and how a class will be used.
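The same scenario can be restated in a language that actually checks structural conformance. Here is the File/Stream example with Python's `typing.Protocol` (method names adapted to Python convention, and the byte-buffer internals invented for illustration): `File` never declares that it implements `Stream`, yet it is accepted wherever a `Stream` is expected, purely because the required methods are present.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Stream(Protocol):               # never mentioned by File
    def put_byte(self, b: int) -> None: ...
    def get_byte(self) -> int: ...

class File:                           # no 'implements' clause anywhere
    def __init__(self):
        self._bytes = []
    def put_byte(self, b: int) -> None:
        self._bytes.append(b)
    def get_byte(self) -> int:
        return self._bytes.pop(0)

def process_stream(s: Stream) -> int:
    s.put_byte(7)
    return s.get_byte()
```

The conformance check is by shape, not by name: any class with the right methods passes.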

### First class messages

This approach (structural subtyping) is much more flexible, because one cannot tell in advance when and how a class will be used.

Agreed, but we can do even better: what you really want is first-class messages [1], where any object can be used provided it responds to the messages being sent (message name and type signature). In such a language, you wouldn't even need to declare the Stream interface. More than that, you can assign messages to variables and pass them around: essentially, higher-order messaging. See [1] for a type system that can type all of the above.

[1] http://www.cs.jhu.edu/~pari/papers/fool2004/first-class_FOOL2004.pdf

### I do not think it is necessarily better.

From a technical point of view, what you propose is certainly doable, and it can easily be done now, for example in C++, with templates.

But in the long run, you need those interfaces: they improve error reporting, they work as documentation, etc. That's why C++ templates will have template constraints at some point.

### I agree interfaces provide

I agree that interfaces provide good documentation, and they can be added to the above type system relatively easily. However, this is far from easy to add to C++, and the typing guarantees of the above paper are *far* stronger than anything achievable in C++.

According to the author, the type system is roughly equivalent to GADTs, which is pretty significant given its simplicity.

C++ already supports interfaces, through template member function pointers. As for typing guarantees, on interface level, all that you care about is conformance to the interface's specifications.

### Insufficient polymorphism

I also care about full parametric polymorphism, so no, C++ isn't quite there.

Ooops, forgot to add: interfaces are merely signatures in a good module system, which the paper lacks in any case. So they are a necessary addition regardless.

### Sounds like ...

<plug>
Sounds like Heron. Heron is coming out of hibernation, btw. I just quit my job at Microsoft and am using the time off to make a new release of HeronFront in the next couple of months.
</plug>

### Yeap.

I actually read the term 'structural subtyping' for the first time in one of your posts here on LtU :-).

### My feelings

My thoughts:

Inheritance of implementation can be useful, but it shouldn't be used to express subtyping. Delegation and composition are also very useful. I find that delegation is always a more effective and elegant way to implement design patterns that rely on virtual functions.

"I fear people who like composition. The gravest sin of OO programming is confusing 'is a' and 'has a' relationships. But composition encourages you to munge those concepts"

I see no evidence of the truth of this statement, and I disagree strongly.

### Confusing relationships

I fear people who like composition. The gravest sin of OO programming is confusing 'is a' and 'has a' relationships. But composition encourages you to munge those concepts

I see no evidence of the truth of this statement, and I disagree strongly.

I agree with half of it: confusing is-a and has-a is bad. But, one of the reasons they're easy to confuse is that OO "is-a" is so very different from natural English "is a". The kind of "is-a" that makes sense in most OO programming languages is behavioral subtyping (see Liskov and Wing), which fails in many English "is a" relationships. For instance, Vegetarian is not a behavioral subtype of Person.

So, yes, it's bad to confuse "is-a" and "has-a". But "has-a" is the more pragmatic answer in more instances than I think most people realize.

JCV

### I'd propose

I'd propose "has-behaviour"...

### In my idea of a perfect OOPL

I agree with many of the ideas expressed in the discussion so far. In my idea of a perfect OO language, here is how I would like to see the terms used with my feature wish-list:

"has-a" == object composition. Class contains another class as a member field.

"behavior" == Interface with contract (preconditions, postconditions, and invariants).

"is-a" == Traditional subtyping of classes (class extends class) or interface (interface extends interface). But I wouldn't want to allow it.

"inherits" == Inheritance of implementation among classes, should not imply subtyping.

"has-behavior" == behavioral subtyping, implicit or explicit. A class should be able to explicitly implement an interface (implements keyword), or implicitly implements an interface (structural subtyping), or to prevent any further subtyping (final).

"delegation" == Automated explicit behavioral subtyping. You should be able to delegate an interface to a member object with the ability to override specific functions.

"virtual function" == Shouldn't be allowed (that's for another post).

So to sum-up in terms of the OP question, I feel inheritance of implementation is useful but should not express a subtype relationship. Going even further, I feel that only interfaces should be subtyped, and not classes [edit: was objects]. Whether this is a practical idea, remains to be seen.

### prevent any further

prevent any further subtyping (final).

Do you have a specific reason why you want that? Besides, is there any practical way to guarantee finality in a run-time?

Going even further, I feel that only interfaces should be subtyped, and not objects.

Is that a take against prototyping? Why should it be disallowed? Or did you mean classes...

### >> prevent any further

>> prevent any further subtyping (final).

Do you have a specific reason why you want that? Besides, is there any practical way to guarantee finality in a run-time?

Well, primitives in C++/C#/Java can't be subtyped, and I think that is a good thing. I believe that a system's ability to guarantee that incoming objects (e.g. function parameters) have a fixed type can make it more secure and robust.

Is that a take against prototyping? Why should it be disallowed?

Sorry, I meant classes. I've updated the original post.

### Perhaps nothing should be subtyped.

I feel that only interfaces should be subtyped

Perhaps nothing should be 'subtyped'; structural subtyping is much more flexible. And if one wants to create a closed set of relationships between data, union types are enough.

### Re: Confusing relationships

Vegetarian is not a behavioral subtype of Person.

No, but it is a reasonably proper subtype of Diet, and a Person has-a Diet. It's just clumsy in most languages to attach other data to the relation or to express the relation outside the definition of Person. For example, how strictly does one follow the diet? You could have an "EatingRegimen" type that has a Diet member and some kind of scale value, but the overhead of using relation types quickly gets cumbersome.
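A hedged sketch of that modeling choice in Python (all type names are illustrative, including the "EatingRegimen" reification of the relation): Vegetarian is-a Diet, and a Person has-a regimen carrying both the diet and the strictness scale.

```python
from dataclasses import dataclass

class Diet:
    def allows(self, food):
        return True                    # the default diet allows anything

class Vegetarian(Diet):                # is-a Diet, not is-a Person
    def allows(self, food):
        return food not in {"beef", "pork", "chicken"}

@dataclass
class EatingRegimen:                   # the relation, with its own data
    diet: Diet
    strictness: float                  # how closely the diet is followed

@dataclass
class Person:
    name: str
    regimen: EatingRegimen             # has-a, attached to the relation
```

The boilerplate of the extra relation type is exactly the cumbersome overhead complained about above.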

### Codd, grave, rolling?

Might that be a hint that we could benefit from more fully-featured, robust, understandable, efficient, etc. relational algebra in our programming languages?

### I think the Relational model casts interesting light on this.

It seems that whenever the data from inheritance-based models are represented in a properly-normalised relational data model, this is done in a composition-like way.

As I think the relational model is a lot more elegant and fundamental than some OO approaches, I take it as evidence towards my belief that what we need is a language which builds object-oriented behaviours on top of the relational model - using composition and delegation as some here have suggested, but perhaps other approaches too. The important point being that the core language would be relational, with the tools to be flexible and implement various different kinds of object-like behaviour on top of this core.

I think that, if integrated tightly with a good relational database, this would also go a long way towards improving life for developers.

I know that when working on database-driven applications I find that Object-Relational mapping issues really diminish my faith in the currently-favoured Object Oriented model.

### The Great Unlearned Truth

There is no impedance mismatch between subtyping and the relational algebra. See The Third Manifesto. There is an impedance mismatch between "objects," as popularly implemented, and SQL, as (regrettably) standardized and implemented. This impedance mismatch has indeed cost the industry countless millions of real dollars that could have been better invested elsewhere with just a bit more attention to the theory underlying both the subtyping relation and the relational algebra.

### any "real" implementations?

I've heard about SQL != Relational. I always wondered why somebody hadn't implemented a "real" relational database, since folks have heard of the issues via the manifesto. (I think there was actually a project in the works of late by Those Guys?) Are there issues about "objects, as popularly implemented" which could be better with respect to some theory?

### See the Third Manifesto Site's Related Projects Page

raould: I always wondered why somebody hadn't implemented a "real" relational database, since folks have heard of the issues via the manifesto.

Good question. As the title indicates, the Third Manifesto site itself has links to several attempts. I would also call Vlerq related.

raould: Are there issues about "objects, as popularly implemented" which could be better with respect to some theory?

Objects are one approach (and, in most popular object-oriented languages, the only approach) to subtyping, but as has been pointed out before, subtyping and subclassing aren't the same thing. Some languages—notably Sather, Scala, and Objective Caml—do not confuse subtyping and subclassing. These languages seem (to me) like they would be excellent substrates upon which to attempt to implement Darwen and Date's "D" (or, more accurately, their type systems could make them D's in the sense of Darwen and Date). The question I have is whether to perhaps make, e.g. O'Caml bindings to vlerq, or attempt essentially to reimplement HaskellDB in terms of, e.g. OCamlODBC, maintaining the prescriptions and proscriptions required in order to be a Darwen and Date "D." It's also possible that I'd need to move up to MetaOCaml to accomplish the reflection of types to database schema that might be involved in some of these approaches—obviously, I haven't thought all of this through. Part of the challenge, of course, is precisely that the point is that "programming language" and "persistence" shouldn't be as divorced as they are in SQL, which raises the question: do you pick a language with good subtyping and try to add persistence in a way that results in a "D," or do you pick a good storage engine and try to bind a language with good subtyping to it in a way that results in a "D?"

Inquiring minds want to know.

Oh, and do this while simultaneously implementing object-capability security and something like STM, and I think you'd have the next big programming language.

### Word

Since I have nothing constructive from a PLT-perspective to say, I can only say that I'd like to watch such development come to be :-)