The OO barrier

It feels as though the advancement of OO is reaching its peak in the context of PLT. Even with Cecil/Diesel being ahead of their time among OO languages, I feel this is about as far as one can take it.

One only has to look at newer OO programming languages resorting to adding features from other paradigms to make them more expressive, quite often functional ones; and then there was that recent paper about advanced procedural languages. So is there a clear direction that OO research is taking?

Cecil/Diesel and Slate seem

Cecil/Diesel and Slate seem to be at the cutting edge on the prototype side, while Scala and OCaml are pushing the boundaries on the class-based side. Then there are E's objects as lambdas + message dispatch and local side effects.

I think we need a good analysis of which of these approaches works the best, and the advantages and disadvantages. To me, prototypes seem more elegant than classes and they don't suffer from a lot of the problems associated with OOP. I'm not sure how well they fit in with a statically typed setting, however. To my knowledge there has only been one statically typed prototype language: Omega. Developing statically typed prototype languages is one direction research should take.

Maybe the OOP paradigm is reaching its peak. I don't see this as such a bad thing. OOP should mature and fit into multiparadigm languages similar to Oz.

OO has always been multiparadigm. Smalltalk was a direct descendant of LISP. It made the syntax easier on the eye and simplified the syntax for anonymous functions. Polymorphic dispatch and tail-call elimination meant that new language constructs could be written in the language itself, so there is less need for macros and there are no special forms. However, the programming style still relies on higher-order functions and anonymous functions with lexically scoped closures.

It's unfortunate that industry picked up only on "objects" and not on the whole package. As snk_kid points out, as new features are being added to current mainstream OO languages they are becoming more complicated. It should have been possible to add many of those features as libraries, but the design of the languages makes it impossible.

Asynchronous higher-order dataflow programming

Concurrent programming is hard in OO languages. This is getting more and more painful as more applications are becoming distributed and as hardware is becoming more parallel (multicore processors).

Concurrent programming is much easier in a paradigm that I call "asynchronous higher-order dataflow programming". The combination of dataflow, higher-order functions, and simple communication channels (like Oz ports) hits a sweet spot. It's easy to write and naturally asynchronous and concurrent. A simple example is a Contract Net Protocol in three lines (see the talk How to say a lot with few words). We are currently redoing the failure handling in Mozart to fit with this model.
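The flavour of this model can be sketched in Ruby, with the caveat that this is only an illustration: a thread-safe Queue stands in loosely for an Oz-style port, and the code does not reproduce Oz's dataflow-variable semantics.

```ruby
# A Queue standing in (loosely) for an Oz-style port: producers send
# asynchronously, while a consumer thread folds over the message stream.
port = Queue.new

# Consumer: reads messages as they arrive and accumulates a result.
consumer = Thread.new do
  total = 0
  while (msg = port.pop) != :done
    total += msg
  end
  total
end

# Several producers send concurrently; a send never blocks.
3.times.map { |i| Thread.new { port.push(i + 1) } }.each(&:join)
port.push(:done)

puts consumer.value  # 1 + 2 + 3 = 6
```

The channel supplies the asynchrony; to match the paradigm fully one would also want dataflow (single-assignment) variables, which Ruby does not provide.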

A good direction for research would be to combine this model with OOP to get the best of both worlds (concurrency and data abstraction) in a mainstream language.

Have you looked at how Io

Have you looked at how Io implements the actor model? Or did you mean something else entirely?

Io language

I didn't know about Io. Looking at the language manual it seems that Io's actors and futures are part of what I am talking about. But Io is contaminated with state everywhere. For example, even simple language entities such as lists and objects can be destructively modified at any time. This is very bad for concurrent programming because it invites race conditions. The right way is to have a layered language where the default is being stateless (immutable).

OK, that sounds a bit over-dramatic, but I really think that OO has reached its limits and will start to decline in the near future.

The main problem is IMO the distribution of state over the whole program. Each object has its own state, and each can be updated independently of the others. Constraints between objects have to be enforced by explicit programming instead of being declared explicitly in the language (design by contract tried to tackle this problem, but it's still very 'local', while most structures have lots of implicit global constraints).

Exactly backward

The main problem is IMO the distribution of state over the whole program.

What is this "entire program" thing of which you speak? In a world of distributed/concurrent/clustered/fallible/auto-updating/pluggable/remixable computational services that we seem to be moving to, the very concept of "entire program" is rapidly dying. I've hardly seen one in years, and certainly wouldn't purchase a serious piece of software that wasn't network enabled and field-extensible.

In that world, OO is still pretty viable (although it could stand to learn a lot from Erlang).

The main selling point of

The main selling point of OOP is the modularity and loose coupling achieved by having a bunch of independent entities communicate with each other. Why shouldn't state also be local to the module that needs it (regardless of whether you call a module an 'object' or something else)?

A module can encapsulate

A module can encapsulate lots of data, while an object stands more on its own. Of course the differences are gradual, but OO tends to build more independent entities with lots of synchronisation problems because of the distributed and independent state.

With OO it's also more difficult to implement algorithms independently of the underlying data. For example, in Java the collection classes have only simple methods like 'size()' or 'remove()', while other operations are implemented as static methods (= functions) in the 'Collections' class. This rather arbitrary distinction shows that the OO approach of putting functionality into methods has its limits. The same problem arises if you define functions over several objects (the 'friend problem' in C++).

Extensibility in OOP is in fact worse than with functions or procedures. If you want to create some new operation on all array classes, it's only possible by changing the array class (practically impossible in Java, and only possible in, for instance, Python because of its runtime extensibility, which leads to lots of other problems). With functions, you simply create a new function.

So if you have to fall back to plain functions even in OOP, why not abandon OOP altogether and use the obviously more powerful feature, the function? Combined with pattern matching, you also have a more powerful mechanism than the simple method dispatch of OOP.
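For what it's worth, the contrast can be made concrete in Ruby, which happens to support both routes (sum_of_squares is a made-up operation for illustration):

```ruby
# Adding a "sum of squares" operation over arrays.

# Route 1: a plain function -- no existing class has to change.
def sum_of_squares(xs)
  xs.reduce(0) { |acc, x| acc + x * x }
end

# Route 2: reopening the class. Possible in Ruby, but this edits Array
# globally for the whole program -- the risky runtime extensibility
# discussed above, and impossible in Java.
class Array
  def sum_of_squares
    reduce(0) { |acc, x| acc + x * x }
  end
end

puts sum_of_squares([1, 2, 3])  # 14
puts [1, 2, 3].sum_of_squares   # 14
```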

In Java and C++ you might

In Java and C++ you might need to use static methods and whatnot, but in many languages you don't. E.g. in Ruby a method defined at the top level becomes a private method of class Object, not to mention CLOS and Dylan's generic functions.

I'm not saying that OOP is the holy grail, but only looking at the mainstream languages doesn't really give it a fair chance.

Ruby uses it's dynamism to

Ruby uses its dynamism to circumvent the extensibility problem. But that's not without risks and won't work that easily in a statically typed language.

And CLOS/Dylan: They call them generic functions for a reason.

A function is an abstraction

A function is an abstraction of the act of *computing*. It doesn't make sense to compare objects to functions. Indeed, many OO languages have functions as first-class values. I don't see why they can't be combined.

Also, the issues you raise are not problems with OO in general, but with the specific languages you use as examples.

Methods (those are the

Methods (those are the OO analogue of functions) are less useful than functions with pattern matching. How do you write methods with two similar parameters, for example an addition? That's quite problematic with methods, and because of this C++ invented friends, CLOS generic functions, and other languages (like Cecil or Scala) multiple dispatch. But the concept remains a fundamentally functional one.
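The binary-method problem being described can be sketched in Ruby with the classic double-dispatch workaround (Cm and Inch are hypothetical classes; the forwarding methods are exactly the boilerplate that a single generic function with multiple dispatch would avoid):

```ruby
# With single dispatch, `a + b` dispatches only on `a`, so each class
# must forward to the argument to select the right case.
class Cm
  attr_reader :cm
  def initialize(cm); @cm = cm; end
  def +(other); other.add_to_cm(self); end          # forward to the argument
  def add_to_cm(other); Cm.new(other.cm + cm); end
  def add_to_inch(other); Inch.new(other.inch + cm / 2.54); end
end

class Inch
  attr_reader :inch
  def initialize(inch); @inch = inch; end
  def +(other); other.add_to_inch(self); end        # forward to the argument
  def add_to_cm(other); Cm.new(other.cm + inch * 2.54); end
  def add_to_inch(other); Inch.new(other.inch + inch); end
end

puts((Cm.new(10) + Inch.new(1)).cm)  # 12.54
```

Every new operand type forces a new `add_to_*` method on every existing class, which is the scaling problem friends, generic functions, and multiple dispatch each try to solve.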

And of course you can emulate other programming paradigms in OOP. And there are all those multiparadigm languages. But I think the concept of OO itself is flawed.

OOP is a certain way of programming, not a language. And the main problem of its approach is the concept of distributing mutable state over many objects, which leads to lots of unnecessary complexity.

Methods (those are the

Methods (those are the OO analogue of functions) are less useful than functions with pattern matching. How do you write methods with two similar parameters, for example an addition? That's quite problematic with methods, and because of this C++ invented friends, CLOS generic functions, and other languages (like Cecil or Scala) multiple dispatch. But the concept remains a fundamentally functional one.

Functions using dynamic dispatch to match specific types aren't equivalent to member functions. The core idea behind OO is the class--grouping of state with related functions. Extending this, in order to gain the same flexibility as classes requires passing the state and the functions around separately, which, one could argue, is essentially implicit grouping. As such, "OOP" has been around since the first programmer etched bits onto a cave wall and will continue to be used, whether you know it or not.

OOP is a certain way of programming, not a language. And the main problem of its approach is the concept of distributing mutable state over many objects, which leads to lots of unnecessary complexity.

Programming is only as complex as you make it.

Generic functions and

Generic functions and multiple dispatch are much more similar to ordinary functions than standard single dispatch is.

A big part of OOP is modifying objects in place. OOP comes from the field of simulation, where each object is some kind of agent which communicates with other objects by sending messages. A method is a special kind of bound function which has access to the private data of the object. This should ensure encapsulation and separation, which should lead to better-designed programs and easier programming: a promise OO couldn't really deliver.

Because each object has its own state, you have lots of synchronisation problems between all those different states. Much of the programming overhead in many OO programs results from synchronising the state of those objects. Patterns like Observer or MVC can help, but require lots of effort too. Projects like 'SuperGlue' try to make these synchronisation tasks simpler, but I think they won't really succeed, because they cure a symptom rather than the underlying fault.
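A hand-rolled observer in Ruby shows the kind of explicit wiring meant here (Temperature and Display are made-up names; every dependent piece of state needs its own subscription plumbing):

```ruby
# Keeping a dependent Display in step with a Temperature requires
# explicit subscription and notification code on both sides.
class Temperature
  def initialize
    @degrees = 0
    @observers = []
  end

  def add_observer(obs)
    @observers << obs
  end

  def degrees=(d)
    @degrees = d
    @observers.each { |obs| obs.update(d) }  # push the change to every dependent
  end
end

class Display
  attr_reader :last_shown
  def update(degrees)  # called back on every state change
    @last_shown = degrees
  end
end

t = Temperature.new
d = Display.new
t.add_observer(d)
t.degrees = 21

puts d.last_shown  # 21
```

All of this code exists only to keep two pieces of mutable state consistent; in a dataflow setting the dependency would be a single declarative binding.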

So OOP can't reach its primary goal, and it makes things like the construction of ADTs more complicated than before. Also, solid type systems for OOP are much more difficult to design and less expressive than most type systems for functional languages.

A big part of OOP is

A big part of OOP is modifying objects in place. OOP comes from the field of simulation, where each object is some kind of agent which communicates with other objects by sending messages. A method is a special kind of bound function which has access to the private data of the object. This should ensure encapsulation and separation, which should lead to better-designed programs and easier programming: a promise OO couldn't really deliver.

It looks as though you're talking about the C++/Java approach to "OOP." I'm speaking in general about Object Orientation and what is implied by the name--design using objects which group code and data. In that regard, the presence of access control is completely irrelevant; in fact, my "OO" language doesn't have any notion of it.

Objects?

Curtis W: I'm speaking in general about Object Orientation and what is implied by the name--design using objects which group code and data.

Absent any more information than this, all you've described are Abstract Data Types (ADTs). For what it's worth, I'm extremely sympathetic to this point of view. It's made quite strongly in the CTM: ADTs have become a somewhat overlooked commodity in the OO world.

So some careful thought should be given to what the differences are between ADTs and objects, and what the relative pros and cons are. It's perhaps interesting to note that the "O" in "O'Caml" stands for "Objective," and prior to the introduction of Scala O'Caml was the only statically-typed combined functional/object-oriented language to escape the lab, but most O'Caml code makes no use of objects—ADTs constructed using O'Caml's extremely powerful module system tend to be preferred instead. Why do you suppose that might be?

So some careful thought

So some careful thought should be given to what the differences are between ADTs and objects,

To be honest, I consider all objects as ADTs. There's no point in using objects if you don't gain any flexibility from doing so.

For meaningful discussion, we need to make precise definitions of "Object" and "ADT". In the definitions given by CTM (essentially the same as given by much of the computer science community), objects and ADTs are not the same. It does not make sense to "consider all objects as ADTs". Both objects and ADTs have their advantages and disadvantages. Modern OO languages, even if they are "pure OO", do in fact support both. Smalltalk has ADTs in its implementation. Java "objects" are in fact a mix of objects (in the technical sense) and ADTs. This is all explained in detail in CTM.

For meaningful discussion,

For meaningful discussion, we need to make precise definitions of "Object" and "ADT". In the definitions given by CTM (essentially the same as given by much of the computer science community), objects and ADTs are not the same.

Yup, that's why I clarified. It usually takes me a couple passes to describe my thoughts on abstract topics like this. It seems I'm misunderstanding something, though, and I haven't read CTM, so it appears I need to clarify more: what is your definition of objects and ADTs?

Both objects and ADTs are ways to do data abstraction. An ADT defines two kinds of entities, "values" and "operations" on those values. For example, many languages define integer values and operations on them. The values 9 and 16 can be arguments to the operation +, giving the expression 9+16, which returns the new value 25. An object defines one kind of entity, "object", which combines both the value and the operations. For example, Smalltalk defines integers as objects. If writing '9' creates an integer object whose value is 9, then passing the message '+(16)' to the object will cause it to perform the '+' operation, causing the object to have the value 25. For more precise definitions and code examples, I refer you to CTM.
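The two styles can be sketched side by side in Ruby (IntOps and IntObject are illustrative names of mine, not anything from CTM):

```ruby
# ADT style: values are plain data, and the operations live apart from
# them in a module of functions over those values.
module IntOps
  def self.add(a, b); a + b; end
end

# Object style: the entity carries its operations with it and answers
# an `add` message itself, returning a new object.
class IntObject
  attr_reader :value
  def initialize(value); @value = value; end
  def add(other); IntObject.new(value + other); end
end

puts IntOps.add(9, 16)               # 25
puts IntObject.new(9).add(16).value  # 25
```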

To think about the advantages and disadvantages of these two ways of abstracting data, you have to remember that a data abstraction provides encapsulation: from outside the abstraction, it is impossible to look inside.

An ADT defines two kinds of

An ADT defines two kinds of entities, "values" and "operations" on those values. For example, many languages define integer values and operations on them. The values 9 and 16 can be arguments to the operation +, giving the expression 9+16, which returns the new value 25.

I see. I was under the impression an ADT is an object which can be treated in an abstract manner. In this case, I believe my previous terminology should suffice.

To think about the advantages and disadvantages of these two ways of abstracting data, you have to remember that a data abstraction provides encapsulation: from outside the abstraction, it is impossible to look inside.

How does the term "object" imply access control? I personally don't believe in the need for such a thing, although this is a separate discussion.

The need

As Peter has already pointed out, ideally you should program in a stateless way where possible; as the default. When this doesn't suffice, you have to sacrifice some of the simplicity and safety of the stateless model in return for the added expressiveness that state can bring.

However, when you program with state you can make things a lot harder for yourself. It can be a lot harder to reason about correctness and can make concurrent programming hard. Access control and design-by-contract are techniques for restraining this and regaining some of the benefits of stateless programming. Having unlimited access to state everywhere isn't a good idea - instead introduce state locally when the benefits of its added expressiveness outweigh the costs in that specific part of the program.

What do you mean by

What do you mean by stateless? I'm interpreting it as "data to process," but that couldn't be it because otherwise your post makes no sense at all.

Sorry. By state I mean

Sorry. By state in this case I mean mutable variables.

Oh, I see. In that case,

Oh, I see. In that case, it's worth noting that using objects doesn't imply state. For example, take the factorial example in Ruby (taken from the ruby vs python thread):

class Integer
  def factorial
    return 1 if self <= 1
    self * (self - 1).factorial
  end
end

puts 10.factorial


I may be mistaken...

...but I think functions in Python are basically a form of state. That is, a new factorial function can be fashioned to replace the current one from outside the class (the functions are basically held in a mutable state dictionary). For Pythonistas, this dynamic nature can be a bonus. But it does not work in its favor in terms of being considered an ADT.

You could do it that way,

You could do it that way, but then you lose the benefit of it being associated and bundled with the data it works on.

like this?

(define (adt-int n) (lambda () n))

(define (adt-int-fact n)
  (define (loop i r)
    (if (<= i 1) r
        (loop (- i 1) (* r i))))
  (loop (n) 1))


(define (adt-int-fact n) 'bananas)


This kind of insanity can happen in any language. It's all up to the programmer, and no static type checks nor any kind of constraints can really prevent it.

An ADT is an abstract entity by nature. You can't really enforce the opaqueness of the ADT if its users don't use the default associated operations to handle the related value. To me, an ADT is an OO class without the rigidness... in this respect, Ruby and Python classes are much more like ADTs than C++'s or Java's...

This kind of insanity can

This kind of insanity can happen in any language. It's all up to the programmer, and no static type checks nor any kind of constraints can really prevent it.

Doesn't happen in Standard ML, for example.

Contamination by state

This kind of insanity only happens in languages that are contaminated by state (everything is stateful by default). I consider all such languages dangerously flawed. Think of your favorite language: does it have this flaw?

What does state have to do

What does state have to do with returning nonsensical answers?

Functions can be state as well

Though I don't know if that's what PvR is talking about, my Python example showed how a function within a class can be replaced on the fly. Similarly, the Lisp function above changes the definition of the factorial function dynamically. I take this to mean that state is more encompassing than just variables, although what it really means is that the handle to a function is held within a state variable in these languages.

I disagree that this is a problem with all languages. ML does not allow you to redefine a function within a module that is accessed through an opaque signature (which is why I said ML considers this to be an ADT). In Oz, functions are given names as part of something that resembles unification - once you name a function, the name can not be redefined or else you get a unification error. Also, CTM shows how to use encryption to shield the data abstraction from access.

ML does not allow you to

ML does not allow you to redefine a function within a module that is accessed through an opaque signature

Actually, signature ascription is not the mechanism that makes functions unredefinable in ML. Signature ascription (opaque and translucent) is for enforcing type abstraction (you can always use local-in-end in SML to hide value bindings - you don't need signature ascription for that). The thing that makes functions unredefinable in ML is that all bindings are immutable. To introduce state, you need to explicitly create ref-cells or arrays.

I disagree that it is a

I disagree that it is a problem at all. On the contrary, one of the best features of languages like Python, Ruby, etc. is the ability to generically customize functions and to replace existing ones with wrappers. There are of course pathological use cases due to the lack of a type system, but they are mostly irrelevant in practice. I can't see why it should be harmful in statically typed languages like Haskell to replace a function by another one with the same name and signature but extended properties. Replacing a function by a wrapper is really a lightweight adaptation of a system. No need to create subclasses or to refactor large parts of the code.

Undecidable

Kay Schluehr: I can't see why it should be harmful in statically typed languages like Haskell to replace a function by another one with the same name and signature but extended properties.

For one thing, proving that the two functions have the same semantics is undecidable. Even if you relaxed that requirement, just proving that both terminate (or both don't!) is undecidable. Proving that one has "extended properties" over the other, likewise. So the whole point is that substituting one function for another isn't at all safe or sound.

Kay Schluehr: Replacing a function by a wrapper is really a lightweight adaptation of a system. No need to create subclasses or to refactor large parts of the code.

Syntactically lightweight, yes. Semantically, not so much.

For one thing, proving that

For one thing, proving that the two functions have the same semantics is undecidable. Even if you relaxed that requirement, just proving that both terminate (or both don't!) is undecidable. Proving that one has "extended properties" over the other, likewise. So the whole point is that substituting one function for another isn't at all safe or sound.

The same could be said for redefining a function through inheritance. In fact, the same can be said for any function, regardless of whether you're allowed to redefine it.

Embracing changes

Behavioural changes are indeed appreciated, while interfaces should stay stable. Curtis correctly mentioned subclassing and ad hoc polymorphism, which together with late binding are the standard OO mechanisms for "substituting one function for another".

A convenient use case for runtime function replacement is mocking for test purposes: replacing a method on the fly with one which has the same interface but provides some pre-selected values only. Another is replacing a method stub with a user-defined function without enforcing inheritance. It's like having an abstract method in Java or C++, but instead of writing a concrete implementation by subclassing the abstract class, a function is assigned to the object during or shortly after instantiation. I'm unconvinced that this is "unsound" and that languages that permit these practices are "flawed". On the contrary.
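As a sketch of the mocking use case, Ruby allows exactly this kind of replacement on a single instance (Clock is a hypothetical class for illustration):

```ruby
# A class whose method we want to stub out for a test.
class Clock
  def now; Time.now; end
end

clock = Clock.new
# Replace `now` on this one instance only: same interface, but it
# returns a pre-selected value. The class itself is untouched.
clock.define_singleton_method(:now) { Time.at(0) }

puts clock.now.to_i           # 0 -- the stub
puts Clock.new.now.to_i > 0   # true -- other instances keep the real method
```

Because the replacement is per-instance, it avoids the global-scope propagation criticized elsewhere in this thread.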

Changing things

I'm unconvinced that this is "unsound" and that languages that permit these practices are "flawed". On the contrary.

Sigh. When I said that those languages are flawed I did not shoot from the hip. There are really good reasons for it. The points you raise have been carefully considered and they do not change the conclusion.

Letting any part of a program that references a function change that function is clearly wrong. The right way to do it is through capabilities: the authority to change a function body should be carefully controlled. For example, only the part of the program that has defined the function should have that capability, and it controls how the capability is propagated.
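One way to read the capability idea in Ruby (make_cell is a hypothetical design of mine, not a real API): the authority to call a function and the authority to rebind its body are handed out as separate values, and the creator decides who gets which.

```ruby
# Split "call" and "rebind" into two separate capabilities.
def make_cell(initial)
  body   = initial
  call   = ->(*args) { body.call(*args) }    # anyone holding this may call
  rebind = ->(new_body) { body = new_body }  # only holders of this may change it
  [call, rebind]
end

fact, rebind_fact = make_cell(->(n) { (1..n).reduce(1, :*) })
puts fact.call(5)  # 120

# Code given only `fact` cannot redefine the function; code trusted
# with `rebind_fact` can.
rebind_fact.call(->(_n) { :bananas })
puts fact.call(5)  # bananas
```

Both lambdas close over the same `body` binding, so a rebind is immediately visible to callers, but only through the explicitly granted capability.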

hacks

"Letting any part of a program that references a function change that function is clearly wrong."

Yes, I agree. There's nothing wrong with doing that in limited scopes, but languages permitting the change to be propagated to the global scope -- like changing the way a Fixnum behaves in Ruby -- are really sad. It may be cool for producing quick hacks, but it will quickly become unmaintainable...

As I tried to say above...

...some consider this a feature. What I was trying to say above is that Python and Ruby code that defines a class with a single function (factorial) does not qualify as an ADT, since there is no mechanism separating the definition of the type from the actual definition of the function. I was trying to say that this method can be replaced on the fly (possibly even changing the type - bananas in the Lisp example).

I guess we could question whether this is a reasonable feature, or whether it makes reasoning about the code hard. But the immediate question is whether such things really qualify as ADTs. I say that the distinction between variables and functions is gray, as the functions within an object are themselves a form of state. So just because you can define a class with no methods, only properties, doesn't mean that it's an ADT.

ADTs in a world without boundaries

Your argument is quite subtle. Imagine you had a freeze() operator which, when applied to a class A, makes it immutable. There is another operator called thaw() that suspends freeze() and makes A mutable again. Let's further assume that everything is frozen initially. Does the mere existence of the freeze()/thaw() pair make the language not just unsafe, but extinguish ADTs from it? We can go even further and mark each class with an attribute which states whether it is frozen or thawed and whether that state has ever changed. Fortunately we can statically track those changes, and so we know at compile time that some class A will be frozen during its whole lifetime. Now another program uses A as well but modifies it by applying thaw(). So what is the reality of A? Having a freeze()/thaw() pair introduces some uncertainty into the global context of the language, but not necessarily into the local context of the program.
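Interestingly, Ruby ships one half of such a pair: `freeze` exists, but there is no built-in `thaw` to undo it, so a frozen object's status can never change back within its lifetime.

```ruby
list = [1, 2, 3]
list.freeze          # one-way: Ruby provides no thaw

begin
  list << 4          # any mutation is now rejected...
rescue FrozenError
  puts "mutation rejected"
end

puts list.frozen?    # true, and it will stay true
```

Without the thaw() half, the uncertainty described above never arises: once frozen, the "reality" of the object is fixed.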

Putting Paul Graham's Lisp philosophy to the extreme: there is no language unless I define it in the context of the program I write. There are just language games, but there is no such thing as a language with fixed meanings. Programs together with an interpreter have fixed meanings, not languages. I guess Paul Snively finds this horrible, and Peter van Roy wants at least some language-wise regulating authorities that distribute capabilities for freeze()/thaw().

I'm amused by the fact that seemingly innocent technical questions turn out to have some rather difficult epistemology.

Au Contraire!

Kay Schluehr: Programs together with an interpreter have fixed meanings, not languages. I guess Paul Snively finds this horrible...

A language is a program together with an interpreter, and you can nest this observation arbitrarily deeply (cf. "reflective tower"). Given an interpreter I for a language S and a self-applicative partial evaluator P written in target language T, you can derive a compiler C = S -> T for the language by applying P to itself and the interpreter: P(P, I) -> C. cf. Futamura's Second Projection. Far from finding this horrible, I find this beautiful in the extreme.

What I find horrible are arguments that unconstrained side-effects—particularly something as radically unsafe as redefining functions at runtime—don't represent a software engineering problem. Arguing otherwise isn't arguing for "embracing change;" it's arguing for removing the brakes, the curbs, the traffic signals... a good argument for embracing change would be something like Acute, which deals with distributed computing, live update with versioning, etc. in a type-safe way. To quote Austin Powers: "We have freedom and responsibility. It's a very groovy time, baby!"

I've seen this argument

I've seen this argument before. I'm curious: what exactly are the dangers of state? I can think of some, but I'd really like to understand why you consider it to be such a fatal flaw in a language. I couldn't find anything about this problem in your beginners thread.

Brief, Superficial Point

Off the cuff, "state" (really "unconstrained mutation") is problematic because without taking great—I'm tempted to say "superhuman"—care, it tends to introduce non-local effects into a system, making reasoning about the system increasingly difficult. In layman's terms, heavily mutation-laden code tends to exhibit bugs of the class "this code assumed the world was in state X, but it was actually in state Y, so it crashed." The most pernicious form of this is shared-state concurrency, where code might work with a single thread, but fail unpredictably (a "Heisenbug") in the presence of multiple threads of execution.
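A deterministic miniature of the "assumed state X, actually state Y" bug class, sketched in Ruby (config and snapshot are made-up names): the effect is non-local only because two parts of the program alias one mutable structure.

```ruby
config = { retries: 3, hosts: ["a.example", "b.example"] }

# One part of the program takes what it believes is a private copy...
snapshot = config[:hosts]      # ...but this is an alias, not a copy.

# Another, unrelated part of the program mutates the shared state.
config[:hosts].clear

puts snapshot.inspect          # [] -- the "copy" changed out from under us
```

With immutable data the second assignment would have had to build a new list, and `snapshot` would still hold `["a.example", "b.example"]`.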

Many languages treat this by convention—for example, mutating syntax/functions in Scheme end with "!" (e.g. "set!"). The idea is to remind the programmer that they're doing something dangerous. Other languages, such as Standard ML and O'Caml, treat everything as immutable by default, and there's an explicit construct, a "ref," for things that are mutable. The presence of "ref" has dramatic implications for the semantics of the language; Google "ML value restriction" and "ocaml relaxed value restriction" for more information.

Finally, Haskell takes the radical step of eliminating unconstrained mutation. This allows consistent lazy evaluation in the language, but also means that I/O gets interesting, among other things. This is what all the fuss about "monads" is about. Loosely put, monads are a mechanism for ensuring that things that need to happen in sequence do.

Finally finally, a much better and more comprehensive explanation of the issues can be found in the CTM (Concepts, Techniques, and Models of Computer Programming). This is head and shoulders the best computer science text available today. Please do yourself a favor and plunk down the money for it. :-)

Since this subthread was started with the question about the difference between ADTs and Objects, the LtU discussion on CTM that you point to is a good reference for reading up on the subject - especially since some of the ideas ended up in the final edition of CTM.

Remember goto

Or, to put it sarcastically and adapt a famous quote: mutation is the goto of data structures.

.

Aren't pointers the goto of data structures?

Original quote

Yes, that's the original quote. But it was for C. In a high-level language, mutable state and pointers become the same thing - called references in most such languages since Algol-68. Hence I think it is appropriate to adapt the quote accordingly.

Why Functional Programming Matters

If you aren't familiar with this paper, you should be.

It should be mentioned in the getting started thread.

What craziness are you

What craziness are you talking about? Sorry, usually I'm pretty good at figuring out Lisp, but I can't figure out what the last line means.

Redefines the factorial

Redefines the factorial function to give the symbol bananas. :)

Oh, thanks.

Object Identity

Implicit in Peter's example of the Smalltalk Integer object is the notion of identity: 9 creates a new object, then applying the (+16) operation to it causes that *same* object to transition to 25.
Compared to ADTs, this is a more significant difference than encapsulation, in my opinion.

That's true for Smalltalk's

That's true for Smalltalk's implementation (or so you say), but that's not inherent in all objects. See the factorial function I posted; it returns a completely new object rather than modifying itself, if that's even possible in Ruby.

The SML perspective

I alluded to the distinction between ADTs and OOP in another thread where I mentioned CTM (though I was scatter-brained at the time). From my understanding of ML, only modules with opaque signatures (:>) are considered to be ADTs, while those that are transparent (:) are not.

I think this means that ADTs in Oz are the Secure Bundled types of Chapter 5, where the encryption is what guarantees the abstraction?
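A rough Python sketch of the "sealed abstraction" idea (the names are hypothetical, and CTM's actual mechanism uses Oz names rather than closures): the representation type is local to a constructor function, so clients can only reach the values through the exported operations, analogous to how an opaque ML signature hides the representation.

```python
def make_counter_adt():
    """Hypothetical sketch of a sealed ADT: the wrapper class is local
    to this function, so outside code can only use the exported
    operations, never the representation directly."""

    class _Sealed:  # representation type, never exported
        def __init__(self, n):
            self._n = n

    def new(n):
        return _Sealed(n)

    def add(c, n):
        assert isinstance(c, _Sealed), "not a value of this ADT"
        return _Sealed(c._n + n)

    def value(c):
        assert isinstance(c, _Sealed), "not a value of this ADT"
        return c._n

    # only the operations escape; _Sealed itself does not
    return new, add, value

new, add, value = make_counter_adt()
c = add(new(9), 16)
print(value(c))  # 25
```

This is only an analogy: Python's dynamic checks are much weaker than ML's static opacity or Oz's unforgeable names, but the shape of the abstraction barrier is the same.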

Smalltalk doesn't work that way.

In Smalltalk integer objects are immutable.

9 is a literal that refers to the object that represents the integer 9.

9 + 16 returns a reference to the object that represents the integer 25.

i := 9. stores a reference to the object that represents the integer 9 in i.

i := i + 16. stores a reference to the object that represents 25 in i.

At no point has the state of 9 or 16 been changed.

Under the hood, the VM optimises the implementation so that small integers are stored directly in variables, large integers are stored on the heap and operators convert between large and small integer representations as necessary. However, that's an implementation detail that is not exposed by the integer abstraction.
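Python's integers behave the same way, which makes the point easy to demonstrate: assignment rebinds the variable to a different object, and the original 9 is never changed. (CPython also caches small integers as an implementation detail hidden behind the same abstraction, much like the Smalltalk VM's tagged small integers.)

```python
i = 9
before = id(i)   # identity of the object currently bound to i
i = i + 16       # rebinds i to the object representing 25
after = id(i)

print(i)                 # 25
print(before != after)   # True: a different object, not a mutated 9
```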

Integers are immutable in Smalltalk

Thank you for the correction.

Just a Java/C# restriction, not OO

For example, in Java the collection classes have only simple methods like 'size()' or 'remove()', while other operations are implemented as static methods (i.e. functions) in the 'Collections' class.

This is annoying, but it doesn't have anything to do with OO per se. Objective-C has included the capacity to extend classes "in place" for at least a decade now, and I wouldn't be surprised to hear about Smalltalk dialects that have had that functionality since the '80s.
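Python allows the same kind of in-place extension for user-defined classes (though its built-ins such as list are sealed, unlike in Smalltalk or Objective-C). A hypothetical example:

```python
class Bag:
    """Hypothetical collection class with only 'simple' methods."""

    def __init__(self, items):
        self.items = list(items)

    def size(self):
        return len(self.items)

# Later, possibly in another module: extend Bag "in place"
# instead of parking the operation in a separate utility class.
def largest(self):
    return max(self.items)

Bag.largest = largest  # every existing and future Bag gains the method

b = Bag([3, 1, 4])
print(b.largest())  # 4
```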

What annoys me in mainstream

What annoys me in mainstream OO languages (C#, Java, C++) is the amount of glue code one has to write to make different parts of the program communicate with each other. In almost every case, one has to write many interfaces, model instances (if the MVC pattern is used, something that is necessary in the long run), etc.

I think all the above get in the way. I would prefer that:

• interfaces were implicitly created by the compiler at the appropriate places.
• any state change could be modelled as a signal that code can be attached to.
• the compiler dealt with thread synchronization.

* interfaces where

* interfaces were implicitly created by the compiler at the appropriate places.

Already possible through structural subtyping, commonly seen in dynamic languages. It's possible to do the exact same thing with static typing, too, if that's your thing.
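Python's typing.Protocol is one example of checker-enforced structural subtyping in a mainstream language (the class and function names below are made up for illustration): any class with the right methods conforms, without ever declaring the interface.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Sized(Protocol):
    def size(self) -> int: ...

class Box:  # never mentions Sized anywhere
    def size(self) -> int:
        return 3

def report(s: Sized) -> int:
    # A static checker such as mypy accepts Box here purely by structure.
    return s.size()

print(report(Box()))             # 3
print(isinstance(Box(), Sized))  # True, thanks to runtime_checkable
```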

Unfortunately it is not

Unfortunately we are nowhere near having static structural subtyping in any of the mainstream languages.

Except, Probably...

The next one. :-)

So? You think you'll be able

So? You think you'll be able to achieve those goals without any radical modifications to your so-called "mainstream languages"? Heh.

There's always an 'entire program'

An 'entire program' exists, even if it's distributed over several servers/clients.

Precisely because we are moving toward those "distributed/concurrent/clustered/fallible/auto-updating/pluggable/remixable computational services", it's getting more and more difficult to manage the complexity if the programmer has to do it all 'by hand', as the OO paradigm requires. And because of this complexity, OO will fail.

If we instead look at the program as a whole and allow all-embracing constraints over all components of a design, the creation of those systems would be much easier.

Definitional problem

An 'entire program' exists, even if it's distributed over several servers/clients.

From an epistemological point of view, I'm not really sure it does. If you can't really say anything about your entire program (because it is scattered over a literally unknowable set of codebases/processors/administrative domains/security domains/org-charts), what point is there in the concept?

If you type www.amazon.com, you trigger something on the order of 100 services, and that probably doubles if you buy something. Those services are running on many thousands of servers, and that doesn't count the innumerable machines between you and Seattle. Most of those services don't know about each other, and many of them aren't even built or maintained by Amazon. What possible globally interesting invariants could there be over such a program? To what extent does the "entire program" Amazon-as-a-whole make any sense to talk about?

I would view different

I would view different services as different programs as long as a service is able to function on its own. I see a distinction between 'components' and 'objects', even if it's a bit hard to define exactly.

An ADT, for example, is not a component, but can be implemented as a single object. OTOH a web service may consist of only a single object, yet is clearly a component. But the web service as a whole consists of lots of framework objects in addition to the actual 'worker'. If you have constraints over a set of objects, then those objects form a common program or component.