## OO Language Engineering for the Post-Java Era

... Java also acted as a brake especially to academic language design research... The goal of this second edition of the workshop was to address object-oriented languages that diverge from Java's doctrine but support a much more dynamic way of constructing software. In the near future, this dynamicity will be required in order to construct software that is highly context-dependent due to the mobility of both the software itself and its users...

ECOOP 2004 Workshop - Back to Dynamicity

ECOOP 2003 Workshop


### Concurrency in Java and Erlang

From the conclusion of Nystrom's paper:

> Is it possible to implement Erlang-like concurrency as a library in Java? Certainly, but the result is likely to be harder to use and less efficient than the same features integrated in a programming language.

While not quite Erlang-like concurrency, Peter Welch's JCSP provides a nice implementation of occam-style process semantics on top of Java. There's a good introduction to JCSP in this IBM developerWorks article. The nice thing about JCSP is that it provides scalable, compositional concurrency in a form that is readily reasoned about using standard CSP process algebra. In fact, the implementation of JCSP on top of the built-in Java threads has been formally modeled and verified using CSP.
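The occam-style rendezvous that JCSP provides can be loosely approximated with nothing but the standard library: `SynchronousQueue` gives a handoff where each `put` blocks until a matching `take`, which is the essence of a CSP channel communication. A minimal sketch (this is not JCSP code; JCSP's own channel types are much richer):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.SynchronousQueue;

// Rough approximation of an occam-style synchronous channel using only the
// standard library. Each put() blocks until the consumer takes, mirroring
// the rendezvous semantics of a CSP channel communication.
public class Rendezvous {
    static List<Integer> run() throws InterruptedException {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();

        // Producer process: sends three values over the channel.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) channel.put(i * i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // Consumer process: each take() synchronizes with one put().
        List<Integer> received = new ArrayList<>();
        for (int i = 0; i < 3; i++) received.add(channel.take());
        producer.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [0, 1, 4]
    }
}
```

As Nystrom's conclusion predicts, the library version works but carries noticeable ceremony compared to a language-level construct.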

Unfortunately, as Nystrom points out, using a library on top of Java threads is more cumbersome than using the built-in constructs (certainly, using JCSP is more clumsy than using occam - although still better than raw Java threads). It would have been nice if the new 1.5 release of Java had incorporated JCSP directly. I believe there was some campaigning towards that end, but it was ultimately unsuccessful. It'd be nice if Java supported an easy way to extend its syntax (ala Lisp or Camlp4)...

### Open Java

Ulf Wiger previously opined: "Perhaps your best bet is to simply go with JCSP, if you really need to use Java..."

"It'd be nice if Java supported an easy way to extend its syntax"
Open Java?

### Great way to extend programming languages!

I did not know that something like OpenJava was possible. When I thought about compile-time programming, I imagined complex protocols with special keywords, special operations, etc., which made my efforts to design a programming language problematic... but now I realize it is actually very simple: what is needed is to mark code as 'compile-time', as in OpenJava's metaclasses, so that it runs at compile time.

For Java it is especially easy because of its bytecode-based nature. But now that I have seen it done for C++ (OpenC++; just follow the links from the above URL), what can I say? I think this way of extending programming languages must become the de facto way in the future. If people can do it with such ease on a language as difficult to compile as C++, then anything is possible...

### Indeed...

...I think one of the next major evolutionary steps in programming is going to be growing acceptance of multi-stage programming and languages that are explicitly designed to exploit it. Syntactic extension plus runtime code-generation plus just-in-time compilation is just too powerful to ignore, and multi-stage programming puts it all on a firm foundation.

### Drawback

Although I do enjoy the concept, I'm scared that this might make "quick" programming more difficult in the long run, as one might need to play the full orchestra (language, meta-language, meta-typing, etc.) to develop simple stuff.

### Hm.

Kind of like C++ templates. :)

### Compile-time code is quite useful

There are plenty of things that can be done when an application compiles:

- check the environment
- create/update the database
- create unit tests
- check the code against specifications
- automatically install new versions
- notify developers of new versions

...and probably many more than I can imagine.
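One concrete hook Java offers for this kind of work is the standard annotation-processing API, which runs user code inside the compiler. A minimal sketch of a processor that could host checks like those listed above (the "check" body here is a hypothetical placeholder, not a real specification checker):

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Minimal sketch: a standard Java annotation processor, the compiler hook
// that lets tasks like those listed above run during compilation.
@SupportedAnnotationTypes("*") // visit every root element in the compilation
public class CompileTimeChecks extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        for (Element root : roundEnv.getRootElements()) {
            // Placeholder: here one could check the element against a
            // specification, generate unit tests, or notify developers.
            processingEnv.getMessager().printMessage(
                Diagnostic.Kind.NOTE, "checked " + root.getSimpleName(), root);
        }
        return false; // don't claim the annotations; let other processors run
    }
}
```

The processor is registered with `javac` via the service-loader mechanism and then runs on every compilation.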

### "OO Language Engineering for the Post-Java Era"

"OO Language Engineering for the Post-Java Era" - To me the title indicates refining OOP, and that Java has held back the improvement of OO languages; i.e., as in adding a really good type system so a program can be reliably statically checked for all cases at compile time (type inference?). It seems that adding concurrency-oriented support à la Erlang is a separate issue entirely, as is metaprogramming.

But then I'm not a fan of 'everything and the kitchen sink' in languages. Right now my thinking is oriented more towards 'right language for the job'. That might entail using the EATKS to build the right language, but that's just an intuition....

Edit: Oops, how'd this get necro'd?

### APPLE

OO is nothing more than pattern matching on the type of object: if the object is of type A, then do this, otherwise if the object is of type B, then do that etc.

The best approach for solving the problems the paper describes is to declare functions with different types and values as parameters. For example, in a GUI library, the paint function could be declared not only on the type of objects it is passed but on the state of those objects:

void paint(Button btn : btn.enabled == true, GraphicsDevice gd);
void paint(Button btn : btn.enabled == false, GraphicsDevice gd);
void paint(TextBox tb : tb.enabled == true, GraphicsDevice gd);
void paint(TextBox tb : tb.enabled == false, GraphicsDevice gd);


There are lots of problems that can be solved with that or patterns that can be implemented:

1. new 'methods' can be added to existing classes.
2. existing 'methods' can be redefined.
3. methods and multimethods get unified syntax.
4. cross-cutting concerns can easily be expressed, since existing methods can be redefined.

Implementing such a mechanism is not very difficult:

- for functions applied to different types, implementations can be stored in hash maps or tables and selected at run time, using the type tag of each structure as the key into the hash map or the index into the table.

- for functions applied to different values, the compiler could build a single function containing the different 'if' statements.
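The run-time half of that mechanism can be sketched in plain Java: a registry of (type, guard, body) cases, searched in order, with the first matching type tag and predicate winning. The `Button`/`TextBox` classes and their `enabled` field are hypothetical stand-ins for the GUI example above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch of dispatch on type plus a guard predicate over the argument's
// state, stored in a registry and selected at run time.
public class PredicateDispatch {
    static class Button  { final boolean enabled; Button(boolean e)  { enabled = e; } }
    static class TextBox { final boolean enabled; TextBox(boolean e) { enabled = e; } }

    record Case(Class<?> type, Predicate<Object> guard, Function<Object, String> body) {}

    static final List<Case> cases = new ArrayList<>();

    @SuppressWarnings("unchecked")
    static <T> void define(Class<T> type, Predicate<T> guard, Function<T, String> body) {
        cases.add(new Case(type, (Predicate<Object>) guard, (Function<Object, String>) body));
    }

    // First matching (type, guard) pair wins -- the run-time analogue of the
    // compiler-generated chain of 'if' statements described above.
    static String paint(Object obj) {
        for (Case c : cases)
            if (c.type().isInstance(obj) && c.guard().test(obj))
                return c.body().apply(obj);
        throw new IllegalArgumentException("no matching case");
    }

    static {
        define(Button.class,  b -> b.enabled,  b -> "enabled button");
        define(Button.class,  b -> !b.enabled, b -> "greyed-out button");
        define(TextBox.class, t -> t.enabled,  t -> "enabled text box");
        define(TextBox.class, t -> !t.enabled, t -> "greyed-out text box");
    }

    public static void main(String[] args) {
        System.out.println(paint(new Button(true)));   // enabled button
        System.out.println(paint(new TextBox(false))); // greyed-out text box
    }
}
```

A production implementation would index the registry by type tag rather than scan linearly, as the comment suggests.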

### Nothing more than pattern matching?

OO is nothing more than pattern matching on the type of object: if the object is of type A, then do this, otherwise if the object is of type B, then do that etc.

From the perspective of the designers of Smalltalk, object-oriented programming is about independent processes that communicate via messages. The problem with the view that OOP is pattern matching is that it relies on a benevolent-dictator model: the caller dictates what code will be run on the called object. But what if we assume that the calling program may have hostile intent toward the called object? In such a case, pattern matching fails as a security model for distributed objects.

A good resource on the subject is Mark Miller's thesis on Robust Composition and the E Programming Language.

### How is that different from pattern matching?

That is, the caller is dictating what code will be run on the called object.

Isn't the above valid in Smalltalk as well? When you use an object, you are calling it. If you called a different object, different code might be executed. In either case, the caller determines what code will be executed.

### Predicated dispatch vs. dynamic inheritance

What you've just described is predicated dispatch right? I would prefer dynamic inheritance: a button or text box can inherit from the Enabled trait when it is enabled, otherwise it does not. Solves pretty much the same problem with a different mechanism. Of course, dynamic inheritance with static typing is a bit tricky, but dynamic inheritance works well enough in a dynamically typed language (e.g., Self, JavaScript).
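Java classes cannot change their superclass at run time, but the effect of dynamic inheritance can be approximated by delegating behaviour to a trait object that is swapped when the widget's state changes. A hedged sketch (all names here are hypothetical, not from any real GUI library):

```java
// Approximating dynamic inheritance in Java via a swappable delegate:
// the widget "inherits" from Enabled or Disabled depending on its state.
public class DynamicTrait {
    interface PaintTrait { String paint(String name); }

    static final PaintTrait ENABLED  = name -> name + " drawn normally";
    static final PaintTrait DISABLED = name -> name + " drawn greyed out";

    static class Button {
        private final String name;
        private PaintTrait trait = ENABLED; // "inherits" Enabled by default

        Button(String name) { this.name = name; }

        // Re-parenting: switch the delegate instead of switching the class.
        void setEnabled(boolean enabled) { trait = enabled ? ENABLED : DISABLED; }

        String paint() { return trait.paint(name); }
    }

    public static void main(String[] args) {
        Button ok = new Button("OK");
        System.out.println(ok.paint()); // OK drawn normally
        ok.setEnabled(false);
        System.out.println(ok.paint()); // OK drawn greyed out
    }
}
```

In Self or JavaScript the parent slot or prototype can be changed directly; the delegation above is the usual workaround in statically typed class-based languages.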

### Predicate dispatch

As Sean pointed out, this is basically just predicate dispatch. Google is your friend.

### Question about the paper 'Laziness and Declarative Concurrency'

What is the difference between the Oz model presented in the paper and a synchronized buffer class in Java? In both cases, a thread that needs data from another thread will block until the data is available.

### Adding dataflow and laziness to a language

The paper explains that laziness can be added easily to a language with dataflow variables. The ADT is as follows:

• {NewVar X}: create new dataflow variable X.
• {Bind X V}: bind dataflow variable X to value V.
• {Wait X}: suspend the current thread until X is bound in another thread.
• {WaitNeeded X}: suspend the current thread until another thread calls {Wait X}. This is the operation that adds laziness.
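The four operations above can be sketched in Java with standard synchronization primitives. This is a rough approximation for illustration; in Oz the operations are transparent and language-level, and this sketch assumes the variable is bound exactly once:

```java
import java.util.concurrent.CountDownLatch;

// Minimal Java sketch of a dataflow variable with the four operations:
// the constructor is NewVar, bind() is Bind, get() is Wait, and
// awaitNeeded() is WaitNeeded.
public class DataflowVar<T> {
    private final CountDownLatch bound  = new CountDownLatch(1); // Bind happened
    private final CountDownLatch needed = new CountDownLatch(1); // a Wait happened
    private volatile T value;

    // {Bind X V}: bind the variable (assumed to be called once here).
    public void bind(T v) {
        value = v;
        bound.countDown();
    }

    // {Wait X}: block until some thread has bound the variable.
    public T get() throws InterruptedException {
        needed.countDown(); // signal that the value is now needed
        bound.await();
        return value;
    }

    // {WaitNeeded X}: block until some thread calls get() -- the hook that
    // lets a producer compute the value lazily, on demand.
    public void awaitNeeded() throws InterruptedException {
        needed.await();
    }

    public static void main(String[] args) throws InterruptedException {
        DataflowVar<Integer> x = new DataflowVar<>();

        // Lazy producer: computes the value only once a consumer asks for it.
        Thread producer = new Thread(() -> {
            try {
                x.awaitNeeded();
                x.bind(6 * 7);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println(x.get()); // 42
        producer.join();
    }
}
```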

The power comes when these operations are done implicitly. That is, when you do an addition Z=X+Y, the + operation implicitly does a Wait for each argument and the binding to Z implicitly does a Bind. This makes dataflow programs very concise, much more concise than explicit calls to synchronized buffers in Java. An experiment was done to add this implicitness to Java, giving a new language called FlowJava (see the paper on FlowJava). Simple concurrent programs in FlowJava are possible in half the number of lines of code as in Java. The difference is even bigger in Oz: a bounded buffer can be completely implemented in just a few lines of code (see the paper by Raphaël Collet or chapter 4 in CTM). This uses WaitNeeded. Another example is making a program incremental: any program can be made incremental by sprinkling it with WaitNeeded calls. The WaitNeeded operation is extremely powerful.

Improving expressiveness by adding implicit operations reminds me of the famous quote by A. N. Whitehead:

> Civilization advances by extending the number of important operations which we can perform without thinking of them.

This applies to programming languages as well!

### Further explanation desired

"The power comes when these operations are done implicitly."

So is a C++ solution with overloaded operators that implicitly synchronize operations (so that Z=X+Y works transparently) the same as in Oz?

I still don't get the difference/advantage. OK, Java cannot overload operators, but synchronization can be built into classes and then used implicitly, just like in Oz.

### Where the power comes from

In the Oz language, dataflow has the following four properties:

• It is concise: No extra syntax is needed beyond using the identifiers and the operations in the usual way. An unbound variable when bound becomes its value transparently.
• It is ubiquitous: All variables and all operations do implicit dataflow synchronization.
• It is factored: Dataflow does not interfere with other concepts in the language. It works well together with functions, objects, classes, etc.
• It is optimized: The implementation is optimized to reduce the overhead of the implicit synchronization operations.

A final point is that the Oz implementation makes threads cheap enough so that the programmer is not discouraged from using them when doing so would improve program structure.

If you want the same kind of expressive power in C++ or Java, then you have to add as many of these properties as possible.