Informed dissent: William Cook contra Bob Harper on OOP

Ongoing discussion that you can follow on William Cook's blog.

I am not going to take sides (or keep points). I know everyone here has an opinion on the issue, and many of the arguments were discussed here over the years. I still think LtU-ers will want to follow this.

Given the nature of the topic, I remind everyone to review our policies before posting here on the issue.

Needless (re)posting of a private flame war

Forgive me, but why is this being reposted? This is a private e-mail conversation that one of the participants has chosen to make public (without the consent of the other) for no apparent reason. In my opinion, it is highly unethical to publicly post a private e-mail conversation that was not intended for mass consumption.

FWIW, it is no secret that Bob has strong opinions about OOP. William Cook's suggestion that Bob is doing damage to the field by expressing his own opinions (as politely as possible) in his own book is simply ludicrous. I have had my own private arguments with Bob (who was my PhD advisor) on a variety of topics, but he is one of the most honest and inspiring researchers I know.

This public airing of his private flame war serves no discernible purpose but to start another one.

I didn't realize that Bob

I didn't realize that Bob Harper had not consented to Cook's posting of their exchange. I apologize.

Motivation

I understand the concerns about posting this discussion. However, I am hoping that it will have an overall positive result, not just cause more flaming. Bob Harper has criticized OO for many years, but I have not been able to find any detailed explanation of his position. There are many people who share his views. One of my goals was to try to get to the bottom of it -- to understand the basis of his negative views. There are many confused ideas about OO, and I wanted to see if some confusion might be contributing to the problem. This lack of mutual understanding is a serious problem for our field. Our conversation went on for almost 50 messages, and includes a fair bit of technical discussion. Our attempts to understand each other are instructive, even though they end in failure. Understanding that failure is essential for our field, I think.

Not the right way to go about it...

William, I'm all for gaining mutual understanding, but posting private e-mail messages without consent of both parties is not IMO the right way to go about it.

It seems that the discussion

It seems that the discussion did not go anywhere. I think a more constructive discussion can be had on this topic. You disagree with Bob Harper's opinion of OO and his presentation in his book. Perhaps you can write up your alternative presentation that shows the good side of OO?

Debate

I interpret this as an extension of our public debate, which began in a group setting at POPL, not a private conversation between friends. But I understand that this is open to interpretation. I hope that it proves valuable in the end.
William

No, it's not

I interpret this as an extension of our public debate, which began in a group setting at POPL, not a private conversation between friends. But I understand that this is open to interpretation.

No, it's not; it's simply a false statement. This is not an extension of a public debate, period. You'd do well to remove the posts and apologize.

I did enjoy your contrast between ADTs and objects in your "On Understanding Data Abstraction, Revisited" when I read it years ago, but this is an absurd way to promote your point of view! Most of the emails posted aren't even vaguely illuminating and surely if you were honest you'd be able to see that.

I'm not finding the "debate"

I'm not finding the "debate" at any of those links.

However, just the other day, I realized that I have a minority viewpoint about the benefits of OOP that I want to throw out there. I was reading a different LtU thread about C++ concepts, and one regular LtU'er (whom I respect) commented that one of his main criticisms of C++ is that he finds its abstractions leaky and C++ code hard to debug - commenting that he spends a lot more time debugging a C++ program than writing it. My first thought was that I've had the opposite experience: C++ programs take a long time to write, requiring a lot of mgmt. boilerplate, but I find them relatively quick to debug. One (relatively uninteresting for LtU) reason they are easy to debug is good tool support. But the other reason is that leveraging OOP makes finding bugs a lot easier compared to non-OOP imperative code. Information hiding, encapsulation, and code reuse all contribute to making bugs easier to localize and diagnose. In my mind, the benefits of OOP designs for debugging are probably greater than the other more commonly touted attributes of OOP.

Where can one find non-OOP

Where can one find non-OOP imperative code these days?

Probably in almost half the

Probably in almost half the code in existence. A lot of that code is written in C/C++, PHP, Javascript, Pascal, Perl, Tcl, various assembly languages, etc. Certainly orders of magnitude more than code written in a functional style, skewed attention in the blogosphere notwithstanding.

C++ has plenty of that.

C++ has plenty of that.

Coding in most imperative

Coding in most imperative languages can be done in a very OOP way or a non-OOP way. Are you saying that everyone using imperative languages is trying to be pretty strict about OOP? That's not my perception.

I should have said OOPL.

I should have said OOPL.

Compared to what?

Note that C++ style OOP is not pure OOP, as William Cook defines the term. In particular, the information hiding and encapsulation facilities (private data) are closer to ADT techniques, and are done better in other languages.

I'm not familiar with

I'm not familiar with William Cook's definition of "pure OOP" - where can I find that? C++ is definitely not pure OOP in the sense of Smalltalk, and I'm not using C++ as an example of an ideal programming language - I'm using C++ as an example of how programming in an OOP style can make debugging easier, and claiming that this is one of the best reasons to use OOP for imperative programming. My comparison was between C++ in an OOP style and C++ not in an OOP style, though I think the point I am making carries over to programming in other imperative languages as well.

To each object its implementation, and inheritance de-emphasized

The recommended reading is William Cook's 2009 paper On Understanding Data Abstraction, Revisited; it is a comparison of the "abstract data types" and "object oriented" programming styles, described precisely with their differences highlighted. A very good read.

My take-away is that the abstraction happens at a different place: an ADT is an abstract choice of data representation, and a set of operations on that (shared) choice, while each instance of an object has its own private "implementation" (state representation and operations), possibly differing from the others -- greater implementor flexibility, but more difficulty making distinct data cooperate. This can be seen as the position of an existential type quantifier (for type abstraction), placed either above the type of the methods/functions (ADT), or below (objects). In any case, the style of "OOP with classes", where the "type" of an object is determined by a class all of whose instances have the same implementation, is more reminiscent of the ADT vision than of his (or, e.g., Alan Kay's) vision of OOP.
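
To make the contrast concrete, here is a rough Haskell sketch (my own counter example, not the integer-set example from the paper) of where the hidden representation lives in each style:

{-# LANGUAGE ExistentialQuantification #-}

-- ADT style: one representation, hidden behind the whole suite of operations;
-- all values built from this package share it, so binary operations are easy.
data CounterADT = forall rep.
  CounterADT rep            -- initial value
             (rep -> rep)   -- inc
             (rep -> Int)   -- get

intCounterADT :: CounterADT
intCounterADT = CounterADT (0 :: Int) (+ 1) id

-- Object style: each value carries its own operations and hides its own
-- representation in the closure, so different implementations interoperate,
-- but you can no longer assume two counters share a representation.
data CounterObj = CounterObj { inc :: CounterObj, get :: Int }

intCounterObj :: Int -> CounterObj
intCounterObj n = CounterObj { inc = intCounterObj (n + 1), get = n }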

I have the feeling that Cook also downplays the role of implementation inheritance, often considered an essential OOP construct. It isn't much mentioned in the above. It's interesting because implementation inheritance is relatively complex and sometimes decried -- especially in functional circles. I however have the impression that William appreciates inheritance and some appropriate uses of it, and promotes the "coinductive reasoning" style for reasoning about such objects with open recursion.

Thanks for the reference.

Thanks for the reference. IMO the distinction between the concepts Cook is calling "ADT" and the concepts he is calling "Object" is valid, however I wouldn't set up the nomenclature of the boundaries of the taxonomy in quite the same way. Let's look at Java specifically. I believe that Java's interfaces are the Java mechanism for creating what Cook is calling "Object", while Java's classes *can* be - as Cook points out - used in particular ways to become pure ADTs. As there is already too much linguistic tradition invested in referring to Java's classes as "objects", it doesn't make good sense to me to try and align that terminology with the valid distinction that Cook is making. Relative to Cook's terms, both "ADTs" and "Objects" are important to OOP.

You mean C++ is an OOPL but

You mean C++ is an OOPL but C is not?

I see a fair amount of OOP done in C. Most of the Gnome Linux Desktop project fits that description. Code meant to be used as "plugins" generally uses C for its external interfaces to fit linker/ABI conventions. In C++, I would say that use of inheritance has decreased over time, often being replaced by generic programming and template trait annotations.

If the original debate topic was supposed to be about Functional vs. Imperative with OOP then I started down a different track because I couldn't see the original debate. On that topic, I'd also like to consider imperative language designs where data is immutable as well as module and thread private/local by default so sharing has to be explicitly enabled by the programmer - kindred concepts/motivations that should make debugging easier.

Like D

"imperative language designs where data is immutable as well as module and thread private/local by default so sharing has to be explicitly enabled by the programmer"

That sounds like the D programming language.

I was thinking about D. D

I was thinking about D. D does take this approach with regard to threading. D doesn't do that with const, but it takes a step in the (mostly) right direction, making it easier to reason about and guarantee pure behavior when it is desired.

D const-faq
W. Bright's Dr. Dobbs Article on purity

OO for UI maybe

I've found OO is a good abstraction for user interface toolkits. But besides that, in most cases either a datatype with constructors would be better, or else a record type of functions. I mean this sort of thing (in Haskell):

data Thing = ThisThing Int Bool | ThatThing String | OtherThing

data Thing = MkThing { thingName :: String, eatThing :: IO (), smellThing :: Nose -> IO Smell }

The second definition of

The second definition of Thing is object-oriented in the sense of allowing polymorphic method dispatch.
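
Concretely, with the second definition (and hypothetical Nose/Smell stand-ins, since those types weren't given), two values can carry entirely different behaviour behind the same interface:

-- Hypothetical stand-ins for the types used in the record above
data Nose  = Nose
data Smell = Nice | Nasty deriving Show

rock, sandwich :: Thing
rock     = MkThing { thingName  = "rock"
                   , eatThing   = putStrLn "crunch... ouch"
                   , smellThing = \_ -> return Nasty }
sandwich = MkThing { thingName  = "sandwich"
                   , eatThing   = putStrLn "yum"
                   , smellThing = \_ -> return Nice }

-- One call site, dispatching to whatever implementation the value carries
inspect :: Thing -> IO ()
inspect t = do
  putStrLn (thingName t)
  eatThing t
  smellThing t Nose >>= print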

social aspects of influential dissent

As a student of “the social (or cultural, if you prefer) processes”, Ehud may appreciate the following parallel between the now private — or, should I say, reprivatized? — Cook/Harper debate and the public flamewar between Fetzer and his esteemed opponents.

Let's compare. Here's what motivates Cook, in part:

You are doing untold damage to the field of Programming Languages by leading this misguided holy war against OO, which is turning programming languages into the laughingstock of computer science. No other subdiscipline is so fundamentally at war with itself, or its academics so out of touch with practice. It makes us look and act ridiculous, and causes us to lose funding opportunities and graduate students.

And here's a quote from Fetzer's Philosophy and Computer Science: Reflections on the Program Verification Debate:

The April 1989 issue published the letters from Müller, Holt, and Watters, which was something I had expected, together with an OP/ED piece entitled, “Program Verification: Public Image and Private Reality”, which I had not [Dobson and Randell, 1989]. The authors were John Dobson and Brian Randell, the same “Brian Randell” who had posted a favorable piece of my article on the Risks Forum. … Their principal objections to my analysis of theoretical limitations of program verification, however, were two in number: (1) “the (unfortunately justifiable) fear that [my] paper will be misinterpreted by laymen, particularly those involved in funding” and, (2) …

Before they raised the issue, I had not considered — even remotely — whether or not an analysis of the scope and limits of formal methods in computer science had any financial ramifications.

That's my long way of saying that the title of this LtU thread, “informed dissent”, misses the point. “Influential dissent” would be closer to the mark. To appreciate the difference, consider that Ehud's low opinion of OO — no matter how well-informed! — is of no consequence to William Cook. Bob Harper, on the other hand, is a different story.

(Fetzer's dissent was perceived as influential, and therefore dangerous, solely because of its publication venue. His original piece appeared in the ACM's flagship magazine back when the latter was universally deemed well worth reading.)

For the record, when I gave

For the record, when I gave the title Bob Harper wasn't the one I thought was dissenting.

coding vs. research

I haven't seen the original debate, so thank you for the summary. From my perspective I don't see any real reason the academic study of languages and mainstream usage should be all that similar.

The mainstream fundamentally is people doing relatively simple, well-understood but highly specific things to enormously complex and detailed data sets. Compsci research is fundamentally people doing complex, not-well-understood and very general / abstract things to simple / generic data sets. Why would we expect those two groups of users to have similar needs?

Ultimately the goal of theoretical research is to make those not-well-understood things understood, so that the complex becomes doable even when the data sets are complex. It is not to jump into the morass of specific real-life problems.

Theoretical languages should have traits that the languages of 2020 & 2030 will have, not the ones that the languages of 2000 had.

Suppose that

Suppose you are doing programming language research. Ask yourself the following question:

Do I care if this ever impacts how anyone writes code?

If the answer is no, then I would gently suggest that you are not doing programming language research at all, but are doing something else entirely. Perhaps you are pursuing mathematical research in the context of programming languages, but you aren't doing programming language research.

However, if the answer is yes, then it's just a matter of how far in the future you expect your work to have impact. If you say 50 years, I'll laugh; you might as well have said no.

writes code

Where I think academia is likely to have influence on mainstream programming is the introduction of new techniques for languages that are being built in this generation, but will mainly be used by the next. I think 50 years is perhaps a bit long, but it isn't totally impossible that, as the cost of the computing infrastructure increases, times to change will get that long. Let's just look at the facts:

Using the TIOBE index:

The most popular language today is Java, created (let's say) in 1991.
Second is C, created by 1973.
Object-oriented forms of C (C++, Objective-C, C#) were created around 1983.

Yes there is a fairly substantial incubation time between an idea and the time it becomes the dominant paradigm. And that is likely to increase.

Where you can cut that time is the influence of new niches.

The first youngster on the list is PHP, which goes back to 1995 but took all its ideas from Perl/CGI, which, if it was going to be influenced by academics, had to be prior to 1993.

After that comes Visual Basic (legacy), another niche -- cheap GUI programming -- from 1991. And that was incredibly innovative on the part of Alan Cooper. But what Cooper was doing was acting as a bridge from 1980s academic ideas into the mainstream.

I wouldn't expect academia to directly affect how people write code for what are today mainstream applications. Code writing is generally a question of industry best practice, and there is no real reason to suppose that academics are better qualified than experienced industry programmers to address best-practice questions. A guy with 15 years in the professional workforce has had to help write and maintain many programs written by hundreds of people over the course of decades, involving over 1000 man-years of work. Academics just don't generally work on projects of that scale; they don't have to deal with the data complexity that drives "real world" programming.

Why would you expect academics to lead in this area?

Everyone knows where the likely next big innovation is: massive parallelism. The reason it arguably hasn't become mainstream yet is that academics still haven't figured out how to solve most of the problems even for the simple theoretical cases, much less dealt with the nastiness of real data. Once they do, massive parallelism becomes mainstream, and the window more or less closes for academics to have much input on the coding techniques / languages in this area, except for small gradual changes.

Another area I think where there is room for academics to have huge influence on the next generation of programmers is educational programming languages. A high school kid will hit industry in under a decade, an elementary school kid within 15 years. I'd love to see academics who want influence focusing on those languages.

Missing the point

I think you oversimplify. None of the languages that you mention, except the dead or dying ones, have been static. Instead, they've continually added features, often influenced by academic thinking. For example, the design of Java generics was based on the work by Wadler and Odersky on Pizza; they actually implemented their ideas in GJ and that code became the basis of javac 5.0. That was a "time to market" of just a few years: 1999 to circa 2005.

Perhaps my point wasn't as clear as I intended. Ultimately work in programming languages, academic or otherwise, should be motivated by making programs better, either by creating new languages for new programs, or new features for existing languages. If it is not, then call it something else, but it sure as hell isn't programming language research.

If you're an academic and think that your work influences programming, then by all means, get to applying it to actual programming languages. Do some work of relevance to actual programs and people who write those programs.

If we are going to look back at popular languages and lament how far they fall from the state of the art, or how long it took for theoretical ideas to surface in the popular consciousness of languages, and then turn around and use that as justification for working on theoretical aspects of programming languages that we happen to find interesting in the hopes that they might be of relevance 30 or 50 years from now, then no wonder there is such a big disconnect. If programming language researchers in academia are intentionally making their work irrelevant to the present because of some misplaced pride in "how far ahead" they're looking, I might gently suggest that they are full of shit and would do better to just admit that they don't care whether their work has practical applicability and pursue it for its own purity. That at least, I can respect.

So you found some nice theorem about some bizarre type system. Congratulations. The foundations of mathematical logic are so happy to receive your contribution. Leave it at that. But if you call it PL research, then apply it to programming.

What is the point?

You seem to think that some people are interested in theoretical research but pretend it is applied. Who are you thinking of? I doubt there is a relevant number of people with this viewpoint, and I suspect your diatribe is targeting imaginary behaviors that do not happen in practice¹.

¹: except maybe when researchers apply for funding. I have no idea how the funding process works -- and suppose it depends strongly on the country -- but I could believe some people inflate the practical aspects of their research towards non-specialists to get more grants. Is that what you're thinking of? That seems related at least to the debate about statements that would be "misinterpreted by laymen".

You seem to have a strong idea about who is or is not worthy of the title "programming language researcher". What is the point? Who gets to decide what is applied enough to qualify, and why would it have any importance?

You seem to think that some

You seem to think that some people are interested in theoretical research but pretend it is applied.

There are varying degrees of applied. Mathematicians might consider all of Computer Science, even its most theoretical aspects, to be "applied" mathematics. CS theorists might consider type theory "applied". Type theorists might consider most of the work in PLDI as "applied".

I don't mean to be combative; perhaps my strawman was a little too tempting to beat on.

From my perspective, programming is the "action or process of writing computer programs", programming languages are for writing programs (not proving theorems about logic) and research on programming languages should be motivated by the desire to improve one or both of the preceding. This is a very utilitarian point of view, and while I don't expect anyone else to subscribe to that view, it's at least worth considering what motivates the expenditure of efforts in this area. If that motivation is just the pursuit of interesting mathematical theorems, again, knock yourself out. If it's genuinely motivated by wanting to make programs or programming better, then claiming that such ideas are "N years ahead of their time" is just a cop out. Looking back N years in the past to justify why today's research isn't relevant (yet) is also a cop out. Work on something relevant, or don't.

You seem to have a strong idea about who is or is not worthy of the title "programming language researcher". What is the point? Who gets to decide what is applied enough to qualify, and why would it have any importance?

I don't know who gets to decide, but the perception from industry seems to be that academic language researchers have made themselves irrelevant by pursuing more and more theoretical topics and neglecting the gulf between theory and practice. As someone who thinks this gulf can and should be bridged, I bristle when academics retreat deeper into their ivory towers at any contact with the unwashed masses and then in the next presentation or grant proposal give 5 bullet points as to why their work matters because of X, Y or Z problem in industry.

Certainly almost all work

Certainly almost all work related to programming languages wouldn't qualify as mathematically interesting in its own right, so I think it's fair to say that they are at least hoping to be applied.

Yet the connections can be,

Yet the connections can be, and have historically proved to be, surprising. I think "we" understand classical logic better now that we have experience of programming with direct-style continuations (Griffin, 1990: "A formulae-as-types notion of control"). In particular, the idea that classical proofs also have a computational meaning is pretty "disruptive".

There is no choice

As far as I know, generics date from the 1980s and emerged from the practical community, the most important example being templates in C++, which came out of Ada. So I'd disagree that this was a 6-year time to market, or that it was an example of academic work.

If programming language researchers in academia are intentionally making their work irrelevant to the present because of some misplaced pride in "how far ahead" they're looking, I might gently suggest that they are full of shit and would do better to just admit that they don't care whether their work has practical applicability and pursue it for its own purity. That at least, I can respect.

I think in my last post I gave several reasons PL-researchers can't just switch to stuff a few years in the future:

a) They don't have the skills and experience
b) They don't have the money and resources
c) They don't have the political pull

Let me repeat myself. By the time a language is mainstream, it is very complex and is used in complex ways, with development being driven by business interests and not theoretical interests. The people with that sort of experience to weigh those various business interests, come up with a plan, acquire the resources to implement those plans and then deal with the fallout are not academics. While academics can and should play an advisory role, they cannot and should not be in the driver's seat. The actual implementation of new features in mainstream languages often involves coordinating dozens if not hundreds of interests to try and find the right trade-offs. All sorts of business partners and political interests need to be considered, and those that wanted things a different way need to be courted to prevent blowback. The actual implementation, because it is now being introduced into a complex ecosystem, can involve literally hundreds or even thousands of man-years, which means in terms of cost tens and possibly hundreds of millions of dollars.

What makes you think academics are qualified to run that sort of effort, much less do it by themselves? You don't learn those sorts of things in academic conferences; you learn those sorts of things in business. If I'm working for Oracle and trying to push something through for Java, I'd much rather have a product manager with 30 years of experience and a BA from Joe's community college than Philip Wadler. Because she is going to keep the team together, get something out by the deadline and not get distracted by interesting aspects of the underlying problem. If I'm working for Oracle on an entirely new type of language with database features built right in, designed to replace Java, then I want Philip Wadler.

And this BTW is not unique to programming languages. It applies to academics in any of the engineering fields. There is a reason that technology moves from:
academia -> cutting edge users -> business -> consumer. Because at each stage the cost of failed experiments skyrockets relative to the previous stage. To pick an example from just last week... Nokia's software product strategy in their new generation of phones was a little off, and customers responded less positively than needed. So about 90m phones costing $200-700 each are selling to carriers for 23% off what Nokia had expected. I have nothing but respect and gratitude for the accomplishments of Philip Wadler, while at the same time being certain he does not have the kind of friends who can kick in the extra $6b to cover the shortfall this is going to create for Nokia.

Additionally, academics don't have misplaced pride in being "far ahead". They have perfectly justifiable pride in being "far ahead". Because the fact is that Stroustrup came up with the idea of C with Classes in his Ph.D. thesis. Many of the ideas that went into Perl (and thus also PHP) came from Larry Wall's time at UC Berkeley and JPL. The HTML part of PHP originated at CERN. The most important recent extension to Visual Basic, LINQ, came from the Microsoft Research X# project, which is based on the papers of many of Lambda the Ultimate's regular contributors.

Academic PL researchers are doing exactly what you are asking them to do. They work on the problems that affect real programming languages. They are thinking about the problems when thinking doesn't cost $20,000 an hour. They are having the failures when failures don't sink whole companies, and possibly whole communities dependent on those jobs. They are the ones creating the pools of ideas for cutting-edge customers, which over the course of decades do become mainstream.

Java generics

I find your post interesting and eloquent, so don't take this as a general disagreement, but I'm not sure I buy your historical analysis of Java generics -- or Ben's, for that matter.

You said:

As far as I know, generics date from the 1980s and emerged from the practical community, the most important example being templates in C++, which came out of Ada. So I'd disagree that this was a 6-year time to market, or that it was an example of academic work.

Ben said:

For example, the design of Java generics was based on the work by Wadler and Odersky on Pizza; they actually implemented their ideas in GJ and that code became the basis of javac 5.0. That was a "time to market" of just a few years: 1999 to circa 2005.

The first sentence of the oldest Pizza paper I could find on Wadler's page (Pizza into Java: Translating theory into practice, Wadler and Odersky, 1997) says:

Pizza is a strict superset of Java that incorporates three ideas from the academic community: parametric polymorphism, higher-order functions, and algebraic datatypes

Parametric polymorphism was very clearly described in Strachey's 1967 course notes, and independently formalized by Girard in 1971-72 (as a logic) and Reynolds in 1974 (as a programming language). Of course, the Pizza work builds on later research in ML type inference, bounded parametric polymorphism and object calculi that happened in the 80's, and on previous 1996 work by Liskov et al. (Pizza is not the first work on marrying Java and type theory).

The Pizza paper also mentions templates a la C++ (but not concepts):

Superficially, Pizza types appear similar to the template mechanism of C++. [..] However, the analysis does not run deep. C++ templates are implemented by macro expansion, such that all type checking is performed only on the function instances, not on the template itself. [...] In contrast, Pizza allows full type-checking at compile time.

I don't know how one should measure "time to market" of ideas, but to Ben's claim that the GJ work only had a bit less than ten years of "time to market" (which is true), I would add that -- if one considers "market" as adoption in a recognized-as-mainstream language -- the introduction of erasable generics in Java also measures the "time to market" of the 70s work by Girard and Reynolds, so we are also in the 30-years-before-mainstream-application range.

(While we're at it, what is the "time to market" of Hindley-Milner type inference? If your "market" is restricted to mainstream languages only, with a sufficiently restrictive notion of "mainstream" excluding ML languages, we must probably wait until 2000 for the ideas to be (quite silently) integrated into Java/C#. Milner developed type inference for programming languages at the end of the 70s, but he was relying on Hindley's '58 writeup of ideas that were well understood in the community of logicians working on combinatory logic since Curry's work in the 1930s-1940s. I believe it is not too far-fetched to claim that type inference as consciously applied in mainstream languages in the 2000s had 50 years of time-to-market. Curry and Hindley never claimed they were working on programming languages, so they are out of the scope of Ben's rant, but Curry in 1940 explicitly insisted on the correspondence between combinatory logic and lambda-calculus, which was then flowering as a way to model both¹ computation and logical formalisms.)

¹: Paving the way for generations and generations of people working on theoretical type systems and mathematical proof systems as one single topic, which Ben apparently finds revolting.

Is there type inference in

Is there type inference in Java/C# comparable to Hindley-Milner at all? I don't think so. As far as I know they just have a comparatively trivial tree-walking algorithm for type checking, where the type of an expression is computed based on the types of the subexpressions, with the reverse of this for lambda expressions (when you pass a lambda expression somewhere, the type flows from that expression into the lambda expression). C# just allows you to omit the type for some local bindings. For example, neither of these is allowed:

var f = (x) => x;
var f = (x) => x + 3;

In fact, it does not support generic local variables at all. Whether Hindley-Milner style type inference will become mainstream remains to be seen.
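
For contrast, a full Hindley-Milner inferencer handles exactly these cases; a quick Haskell illustration (written as function bindings to sidestep the monomorphism restriction):

-- No annotations anywhere; GHC infers the most general types shown in the comments.
f x = x              -- f :: a -> a
g x = x + 3          -- g :: Num a => a -> a
twice k x = k (k x)  -- twice :: (a -> a) -> a -> a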

Type argument inference

Where type inference comes into play in Java and C# is inferring the type arguments to generic classes and methods. Java only infers type arguments to methods (currently), but there you have the basic unification problem with type variables.

All that Gabriel said

All that Gabriel said.

Also, in reply to Jules, no interesting form of type inference or Hindley/Milner will ever become mainstream as long as the mainstream keeps insisting that subtyping (and OO) is the silver bullet for abstraction and reuse.

Is it surprising that the

Is it surprising that the masses prefer subtyping (naturally intuitive) to H&M (mathematically intuitive)? If all Joe programmers became mathematicians, I guess they would be more willing to break their subtyping habits.

Intuitive??

Rossberg's law of subtyping: Anybody who says that subtyping is intuitive has only understood 10% of it.

Rossberg's law of parametric polymorphism: Anybody who says that polymorphism is complicated has only used it in OO languages.

:)

Subtyping is intuitive and mathematically simple

You must be thinking of a particular style of type system that employs subtyping rules.

Or so it seems

I suggest reading Abadi & Cardelli's "Theory of Objects". It (unintentionally) demonstrates how quickly things get out of hand when you just want to achieve the "canonical" things with subtyping. And that doesn't even cover type constructor variance or anything beyond that IIRC.

Sub-typing vs. type systems that track sub-types

This response confirms to me that when you're talking about "sub-typing" being bad, you're talking about type systems that allow implicit conversions from subtypes to supertypes, or that try to infer most general types that involve type bounds, etc. The core of sub-typing -- the existence of a relation between types, A <: B, such that every A is also a B -- is both simple and intuitive. Your beef is with type systems that try to track this relationship in certain ways.

F# requires explicit

F# requires explicit conversions from sub- to supertypes. It gets annoying very quickly, and F# even does implicit conversion in some cases. Certainly some implicit conversion is very welcome.

I agree

But the interactions and trade offs are complicated, so I think it's hard to say how much should be implicit and inferred in general.

Well, in literature,

Well, in the literature, subtyping usually is (implicitly) understood to be implicit, i.e., there is a subsumption rule. Explicit subtyping is pretty much an exception, and OCaml/F# are the only languages I know using it.
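
For reference, the subsumption rule in question is the following:

\[
\frac{\Gamma \vdash e : A \qquad A <: B}{\Gamma \vdash e : B}
\]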

In any case, making it explicit may render it more friendly towards type inference, but does not eliminate much of the complexity. (It also makes it much less useful.)

Just a remark: there is a

Just a remark: there is a quite unexpected and interesting point where subtyping is useful in OCaml (in fact I think that's the most useful aspect of it, at least for my personal usage), which is the way it is used by the relaxed value restriction.

(With the relaxed value restriction, even in expressions with potential effects, inference variables that appear only in covariant positions are generalized. This is sound because they could be unified to bottom, and then re-generalized by covariance. In particular, the type of bad guys, 'a ref, is invariant.)

Despite the title

Note that, despite the title of the paper and perhaps the direction Jacques was coming from, the relaxation does not really rely on subtyping in any way, only on identifying negative occurrences of type variables. Subtyping actually is a distraction in this case.

I disagree that subtyping is complexifying

With the right foundations, it comes for free. And the biggest win I see for sub-typing is in dealing with predicate dependent types. How should Nat and Int play together as types? Or List and SortedList?

IMO functions should be able to specify arbitrary dependently typed preconditions, and call sites should infer proof obligations that amount to ArgumentType <: ParameterType.

[Edited last couple of posts to fix places broken by the < character]

Right foundations?

Not sure I follow what you mean by the "right" foundations.

The One True Way

... is not what I meant. I just meant that for some choices of foundation, sub-typing comes for free. In particular, I had in mind extrinsic, Curry-style types rather than intrinsic, Church-style types, with a logic powerful enough to prove that x:A --> x:B.

"Intuitive" but mathematically complex smells of fragile design

I understand the point that universal quantification, for example, may appeal to theorists more than practitioners. I'm not sure however how "intuitive" subtyping really is. Do you consider contravariance intuitive? I agree it may look familiar to say that "a square is a rectangle" (or is it the other way around?), but in practice I've seen much misuse of inheritance blatantly violating LSP. I suspect it may be the apparent familiarity (but underlying misunderstanding) of subtyping that provokes such bad software design.

But that's a problem that is quite common with OOP: beginners tend to see too much in the metaphors and anthropomorphizations of the field, and have a lot of false starts because of this. Of course there are serious people teaching those things better, and taking care to weed those errors out by teaching design principles or patterns.

Thank you for coming back around to the original topic. I find it hard to explain implementation inheritance to people. The way I personally understand it is through the classic encoding into lambda-calculus (a class definition is a function that expects a "self" (or continuation) object, and instances are produced by taking the fixpoint). From what I understood from the beginning of the Cook/Harper debate, Cook is criticizing this presentation as a bad and unnecessarily complex way to teach inheritance to students. I must admit I haven't read his thesis (yet), so I have only a very fuzzy idea of his alternative proposal -- though I could imagine how to present objects as coinductive processes reacting to messages -- but I'm a bit incredulous that there actually are much simpler ways.

(Of course I'm ready to accept any better explanation; it's difficult to know for sure what's "easy" to understand or not once your point of view is clouded by partial personal knowledge. There are other topics that are prototypically hard to teach in detail, such as variable binding -- because it's a hard topic. I remember hearing as an anecdote that Pierre-Louis Curien, upon finding an old course presentation of variable binding in terms of De Bruijn indices, got strongly convinced that this was "the right way" to teach variable scope to students -- as a first approach to the field. I don't know whether he still thinks that way.)

"Car is a Vehicle" is intuitive

Ok, now I'm getting into sub-classing, but let's pretend sub-typing and sub-classing are equivalent. Simple OO (only sub-classing) is indeed intuitive; it's when you add stuff like co/contravariance, parametric polymorphism, etc., that it becomes hard. In spite of what academics might think, Java 1.0 was a very simple language, with the only language weirdness occurring in arrays.

People really do understand IsA (simple categorization) from a very early age.

Simple OO

Objects are records of functions. Syntactic sugar for property lookup:

a.foo = a#foo(a)

Where a#foo indicates record lookup in the object. For inheritance we use record combiners (e.g. merging two records). Simple! No fixpoint needed.

A class is simply a function that returns an object. For example:

point(x,y) = 
  { getX(self) = x
    getY(self) = y
    length(self) = sqrt(x^2 + y^2)
    normalize(self) = point(x/self.length, y/self.length) }

Adding a new method, the operator with merges records (i.e. mixins):

point(x,y) with { angle(self) = arctan(self.getY / self.getX) }

Inheritance:

point3d(x,y,z) =
  point(x,y) with { getZ(self) = z }

Inheritance with super:

pointWithBorkedGetX(x,y) = 
  super = point(x,y)
  super with { getX(self) = super.getX * 2 }

Multiple inheritance:

colored(color) = 
  { getColor(self) = color }

coloredPoint(x,y,color) = 
  point(x,y) with colored(color) with
  { extra methods here }

Everything falls out naturally. The full description of OO is nothing more than:

a.foo = a#foo(a)

For more fun, model a record of functions as a function from keys to values. This gives you delegation and MethodNotUnderstood as well.

Disclaimer: this is my personal view of single dispatch OO, Cook's view could be different.
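
As a rough Haskell rendering of the self-passing idea (my own names; plain Haskell records cannot be merged or extended, so the `with` mixins are left out and the method set is fixed):

-- Objects as records of functions whose methods take self explicitly.
data Point = Point
  { getX :: Point -> Double
  , getY :: Point -> Double
  , len  :: Point -> Double
  , norm :: Point -> Point
  }

-- The sugar a.foo = a#foo(a), made explicit:
call :: (obj -> obj -> a) -> obj -> a
call method self = method self self

-- A "class" is just a function that returns an object:
point :: Double -> Double -> Point
point x y = Point
  { getX = \_self -> x
  , getY = \_self -> y
  , len  = \self  -> sqrt (call getX self ^ 2 + call getY self ^ 2)
  , norm = \self  -> point (x / call len self) (y / call len self)
  }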

Subtyping

But you've completely punted on subtyping. In fact, you don't mention types at all...

If you add typing to this picture, you'll quickly want to be able to track the relationships between all your various record types and then of course we're right back to subtyping. As I'm sure you know, type systems for polymorphism (nominal or structural) over records get complex quickly.

Hm

Matt, I believe Jules was answering my question about explaining implementation inheritance, rather than participating in the debate on the simplicity of subtyping.

Jules >

Inheritance with super:

pointWithBorkedGetX(x,y) = 
  super = point(x,y)
  super with { getX(self) = super.getX * 2 }

Does this really work? I'm under the impression that, if `super.getX` itself makes `self` calls, they will be resolved with the implementations of the `super` object, while -- if I understand correctly -- the usual OOP rule is that this super.getX call uses super's implementation of getX, but inner self-calls are still made using the derived object's implementation (if you want, super only changes the method resolution priorities for this call, not for its subcalls).

In the context of your concatenable records, the recursive encoding I was thinking of would be along the lines of:

pointWithBorkedGetX(self,x,y) =
  super = point(self,x,y)
  super with { getX = super.getX * 2 }

makePointWithBorkedGetX(x,y) =
  fixpoint(self) { pointWithBorked(self,x,y) }

With this modelling, subcalls to the parent methods do get resolved with "self" methods.

Edit: after giving this a bit more thought, I think you could also write the following, which is closer to your presentation (but still involves tying-the-knot):

pointWithBorkedGetX(x,y) =
  super = point(x,y)
  fixpoint(self) {
    super with { getX = super#getX(self) * 2 }
  }

Ah, yes

That makes much more sense. I had jumped a bit in the thread and it seems that I got confused. Sorry!

You're right I should have

You're right I should have written this:

pointWithBorkedGetX(x,y) = 
  super = point(x,y)
  super with { getX(self) = super#getX(self) * 2 }

I don't think the fixpoints are necessary? Any methods that super#getX calls will be called on self, i.e. on the current class rather than the superclass. Tying the knot happens through late binding (namely binding self to a method at the call site instead of at instantiation time).

Isn't this way of passing the self parameter in at method call sites how object oriented languages are implemented? That's what I always thought.

You can also recover the traditional inextensible objects:

// map_record maps a function over all values of a record
// apply applies a function
freeze(obj) = map_record(apply, obj)

obj = { foo(self)(x) = x * 2 }
frozen = freeze(obj)
frozen#foo(3)
// returns 6

This way you can view unfrozen objects as classes, and freeze as instantiation (which ties the knot, except there isn't much of a knot to be tied here, perhaps a better description would be that freeze binds each self parameter to the object).
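
Continuing the Haskell sketch from further up the thread, the corrected override is just a record update, and late binding falls out because self is supplied at each call site:

-- Reusing Point/point/call from the earlier sketch. The inherited len still
-- sees the overridden getX, because getX is looked up on whatever object is
-- passed as self at the call site.
pointWithBorkedGetX :: Double -> Double -> Point
pointWithBorkedGetX x y =
  let super = point x y
  in super { getX = \self -> getX super self * 2 }

-- call len (point 3 4)                ==> 5.0
-- call len (pointWithBorkedGetX 3 4)  ==> sqrt (6^2 + 4^2), i.e. about 7.2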

Indeed

Thanks for the correction, it's indeed simpler than what I was thinking of.

(I'm not sure about `map_record(apply, obj)`, wouldn't that be `map_record(apply(obj),obj)`? One gets the point anyway.)

Yes, you're right:

Yes, you're right: `map_record(apply(obj), obj)` :)

Amusing

Yes, I agree with the standard implementation of objects and inheritance in C++ (and most other OO languages). But you can also move the "self" parameter to the class, rather than on the methods, and replace self-application with fixedpoints. These transformations are based on trivial identities (e.g. "A->B->C" isomorphic to "B->A->C"). Organizing the semantics as fixedpoints rather than self-application provides a somewhat more abstract (less operational) formulation, but understanding the result then requires lots of intuition about fixedpoints. It is this latter formulation that I defined in my thesis (1989). (The point/colored point example was first elaborated in my thesis :-)
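
A small sketch of that fixpoint formulation in the same Haskell setting (my own names): move self from the methods to a generator, and instantiate by tying the knot:

-- Methods no longer take self; a generator takes the (future) finished
-- object, and instantiation is a fixpoint over the generator.
data PointR = PointR { rGetX :: Double, rGetY :: Double, rLen :: Double }

type Gen = PointR -> PointR

pointGen :: Double -> Double -> Gen
pointGen x y self = PointR
  { rGetX = x
  , rGetY = y
  , rLen  = sqrt (rGetX self ^ 2 + rGetY self ^ 2)
  }

-- Inheritance = modifying the generator before the knot is tied:
borkedGen :: Double -> Double -> Gen
borkedGen x y self =
  let super = pointGen x y self
  in super { rGetX = rGetX super * 2 }

-- Instantiation ties the knot (relies on laziness):
new :: Gen -> PointR
new gen = let self = gen self in self

-- rLen (new (pointGen 3 4))   ==> 5.0
-- rLen (new (borkedGen 3 4))  ==> sqrt (6^2 + 4^2)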

Belated response

I'm not sure how it came about, but the thing that you say I am criticizing is exactly the interpretation of inheritance that I believe in. This theory of inheritance comes from my 1989 PhD thesis. If you take a peek at my thesis (A Denotational Semantics of Inheritance), it might provide you some intuitive ways to explain inheritance at a less operational level. It has to do with modification or derivation of self-referential definitions (which may be classes, types, functions, modules, packages, etc). I also subscribe to the view of objects as coinductive entities that react to messages, subject to the observation that the "messages" are not necessarily concurrent. So you and I seem to be mostly in agreement.

I think that Harper's description of objects and inheritance in his book is fundamentally incorrect, or at least seriously incomplete. It captures some special cases of the idea of OO and inheritance, but it is essentially a straw man that he then uses as the basis for his criticism of the limitations of OO. You can read his book for yourself. He did modify the chapter on inheritance some in response to our email discussion. At some point I will write a post to summarize my criticism of his formulation.

If you take the view that

No question that subtyping is much overused in OO languages, but if you take the view that types are static knowledge about values, subtyping is simply the ability to use implication to reason about static knowledge. If we know that a value X satisfies P, and P => Q, then X satisfies Q. For example, if X is prime, then X is an integer. If Y is a sorted list, then Y is a list. Having to write explicit functions that convert a subtype into its supertype is not user friendly. What do you propose as an alternative?

For example

For example, modules, or almost equivalently, type classes. Anything based on ADT-style abstraction that does away with the tyranny of the dominant decomposition and the complications of omnipresent negative type recursion, variance, etc., and the resulting failure to model even trivial things like generic binary functions easily.

Implication is all fine and important, but it doesn't need to be on the types themselves; it can be on predicates over types. Type classes are probably the most succinct formulation of this idea.

I agree completely with you

I agree completely with you about OO vs ADT, but subtyping does not imply OO. How do you propose to handle the following functions:

  • A function nth_prime that takes an integer and generates a prime
  • A function rsa_decrypt that takes a prime and a string and returns a string
  • A function plus_one that takes an integer and returns an integer

The requirement is that you should only be able to pass prime numbers to rsa_decrypt, but of course you also want to be able to use plus_one on primes. This kind of thing happens all the time when you use more precise types (and, I assume, you're in favor of the ability to use more powerful types?)

You could define Int not as a type but as a predicate over types. So then you could have Int Prime and Int Integer. The function nth_prime would have the type Integer -> Prime. The function rsa_decrypt would have the type Prime -> string -> string, and the function plus_one would have the type Int 'a: 'a -> Integer. This doesn't seem right. Once you have encoded implication completely in this way, you have completely encoded subtyping and you're back to the same problems as subtyping.

It is also ugly in the sense that you have a distinction between base types (Integer and Prime) and types with subtypes (Int 'a).
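
For concreteness, here is roughly what that encoding looks like as a Haskell sketch (hypothetical names; the predicate is spelled IsIntegral rather than Int to avoid clashing with Haskell's built-in Int):

newtype Prime = Prime Integer

-- The "Int as a predicate over types" idea as a type class
class IsIntegral a where
  asInteger :: a -> Integer

instance IsIntegral Integer where asInteger = id
instance IsIntegral Prime   where asInteger (Prime n) = n

-- Only a Prime may be passed here (real implementation elided)
rsaDecrypt :: Prime -> String -> String
rsaDecrypt _ s = s

-- Works on Integer, Prime, or anything else satisfying the predicate
plusOne :: IsIntegral a => a -> Integer
plusOne x = asInteger x + 1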

I see

I see, that's the kind of thing you have in mind. I think you can do this by factoring out the implication into predicates, but I cannot give a quick reply. I notice, however, that a recent topic in dependently typed programming has been how to decouple refinement properties from actual types, too. Because when the two are tied together too closely, you generally lose any potential for reuse.

Edit: More importantly, dependently-typed languages, where issues like this pop up first, have gotten along fine without subtyping.

subtyping and H&M

I'm probably asking something stupid but doesn't the Haskell solution (type classes) resolve the H&M subtyping problem? I'm not following how this isn't a dead issue.

Type classes aren't subtyping

Type classes are markedly not subtyping, if that's what you mean.

Otherwise I agree (to some extent) that TCs make subtyping mostly redundant (at least if you have existential types, too).