Noop: Google Language for the JVM

It looks like a couple of developers at Google are creating a Java-like language for the JVM called Noop. Some of the highlights: no implementation inheritance (delegation instead), properties, built-in dependency injection, no statics, no primitives....

I'm a little unclear on

I'm a little unclear on exactly what constitutes a "primitive". Perhaps they mean no "objects that do not inherit from Object"?

agreed

I assume so.

WHY

Okay, but WHY...

The goal is to build dependency injection and testability into the language from the beginning, rather than rely on third-party libraries as all other languages do.

What is wrong with relying on third-party libraries for this task? How will doing it at the language level make a difference compared to how modern DI / IoC frameworks already work? If you look at the Proposals detail section, they say:

Use Guice or PicoContainer under the covers

So it's ultimately a library.

That's silly. The only purpose here is to provide certification and verification of certain program properties. But since it's the dependees you are injecting, anything more than the interface is a mistake to begin with. So what's the benefit?

They are also going to

Noop is also going to support type inference and static typing, amongst other things. This isn't just Java with some syntactic sugar; it's really a new language, and they're just leveraging the JVM. I'm not sure what's controversial here, really.

I'm interested to see what they do to support testing though, as I've come up with some ideas myself.

Hmm...

I'll be interested to see what they come up with, but only because I'm generally interested in the design space... Their lists of "Says Yes To" and "Says No To" read like a grab-bag of some hard research problems, if you take them seriously...

  1. DI/IoC and "testability" are basically a set of modularity and module-system issues, if you take them to the language level. Same with "no statics anywhere."
  2. Readability and executable documentation... not trivial, depending on how far you want to go.
  3. Strong typing and a sensible modern library with no unnecessary boilerplate: this has definitely kept the Haskell guys busy for a few decades... ;)

In short, it sounds way too ambitious for just "an incrementally better Java," but hey... aim high! And maybe they mean these things in a much more conservative way; it's just very difficult to tell from the summary. Anyway, it's an exciting time for language design on the JVM, and I wouldn't rule out any contenders.

DI as a language-level construct

The big advantage I see is that completeness and correctness of available dependencies becomes statically enforceable by the compiler. Additionally, by making the injection phase of object creation a part of the language semantics, you can greatly tighten up the contracts enforced by the language spec. It also opens up the possibility of notably richer object lifecycle semantics, statically enforced. There was a post here a while back about mining frameworks for language features. This is a perfect example of that, and could provide a lot of value (admittedly at some cost).
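
As a rough illustration of the static-enforcement point (all names below are hypothetical), compare constructor injection, where the compiler itself checks that every dependency is supplied, against a container lookup that can only fail at runtime:

    // Hypothetical sketch: when dependencies are constructor parameters, the
    // compiler rejects any construction site that omits one.
    interface PaymentGateway { void charge(int cents); }

    class CheckoutService {
        private final PaymentGateway gateway;

        // The complete set of dependencies is visible in this signature.
        CheckoutService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        void checkout() { gateway.charge(100); }
    }

    class Demo {
        public static void main(String[] args) {
            // new CheckoutService();  // would not compile: dependency missing
            CheckoutService svc = new CheckoutService(cents -> {});  // test double
            svc.checkout();
        }
    }

A language-level DI mechanism could, in principle, extend that kind of guarantee from a single constructor to the whole object graph.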

As for their goals, I'd say they would probably be better off looking to implement some (comparatively slight) extensions to Scala rather than incrementally improving Java. Much of what they are trying to do (no statics, immutability, properties, type inference) Scala already does just about as well as can be done in the JVM context.

Agreed

Agreed on all counts. I didn't mean to suggest that DI-in-the-language is a bad goal. In fact, I really agree with everything you said, and I think that the intersection of DI and module systems for OO languages will probably be a fruitful area. Java's package system is basically a punt, and coarser-grained modules/components are sorely needed. In fact, I think it's likely to be such a fruitful area that it's a pretty ambitious goal.

I agree with you about Scala, too. It's easy to look at Scala and say "Well, it's gotten too complicated," but I think that if you want "strong typing" with inference and subtyping (with or without implementation inheritance really doesn't matter much, AFAIK), it's quite hard to keep it simpler, particularly if you want to be on the JVM.

Also, while Scala does have mechanisms for avoiding static names, it certainly doesn't forbid them. Bracha's Newspeak is the only recent OO work I'm aware of that really takes this idea seriously.

But anyway, yes, I basically agree with you.

The big advantage I see is

The big advantage I see is that completeness and correctness of available dependencies becomes statically enforceable by the compiler.

You say this as if this wasn't expressible using Java annotations.

Annotations

It's certainly possible to dynamically check dependency availability using annotations, but determining this statically requires building an annotation processor, essentially extending the compiler and the language.

(Unless you know another way, in which case I'd love to hear about it.)
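
For what it's worth, here is a minimal sketch of what such a processor looks like against the standard javax.annotation.processing API; the actual dependency-availability check is elided, since that is exactly the part you would have to build yourself (and javax.inject.Inject is just an assumed choice of annotation):

    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;

    // Runs inside javac; every element carrying the configured annotation is an
    // injection point the processor gets to examine at compile time.
    @SupportedAnnotationTypes("javax.inject.Inject")
    @SupportedSourceVersion(SourceVersion.RELEASE_8)
    public class InjectionProcessor extends AbstractProcessor {
        @Override
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                    // ... inspect the enclosing type and its parameter types here, and
                    // report any dependency with no known binding via
                    // processingEnv.getMessager().printMessage(javax.tools.Diagnostic.Kind.ERROR, ...)
                }
            }
            return false; // don't claim the annotation; other processors may want it
        }
    }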

After thinking about your

After thinking about your comments, you are right.

Google account "axilmar" (which I believe is Acillieas Margaraitis) actually gave the noop folks the best advice for this feature: build compile-time evaluation into the language, thus solving a more general problem. DI then becomes a compile-time library. Here is axilmar's quote from the proposed features page's comments section:

What you need is a strong dose of LISP. Forget dependency injection and such stuff. What you need is compile-time evaluation.

i'm (easily) confused

I thought DI was about run-time configuration, not compile-time?

Intent vs. Execution

Most DI frameworks started out being about run-time configuration of external dependencies, but were quickly found to be so handy at what they did that most of the "configuration" in any given application is actually about defining, wiring, and managing application-internal components. In effect, they became incomplete, poorly spec'ed, and shockingly useful examples of what used to be called architecture description languages. Moving that sort of thing to compile time has obvious advantages.
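
To make the "wiring as configuration" point concrete, here is a minimal Guice sketch (Guice's API is real; the application classes are invented). The module is exactly the kind of small architecture description the comment above is talking about, and today its consistency is only checked when the injector is created at runtime:

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;

    interface MessageSink { void send(String msg); }

    class SmtpSink implements MessageSink {
        public void send(String msg) { /* talk to a mail server */ }
    }

    class Notifier {
        private final MessageSink sink;
        @Inject Notifier(MessageSink sink) { this.sink = sink; }
        void announce(String msg) { sink.send(msg); }
    }

    // The module says which concrete component satisfies which interface,
    // outside the components themselves.
    class AppModule extends AbstractModule {
        @Override protected void configure() {
            bind(MessageSink.class).to(SmtpSink.class);
        }
    }

    class Main {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new AppModule());
            injector.getInstance(Notifier.class).announce("hello");
            // A missing or wrong binding only surfaces here, at runtime.
        }
    }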

Exactly

And most of the things currently branded as architecture description languages are really model checking tools! :)

The wheel turns!

So the big idea is to support a fluid notion of static vs. dynamic, compile-time vs. run-time. In doing so, nothing is ever really static, and nothing is ever really compiled.

The fundamental blocker right now is JVM limitations. Even .NET has some of these problems (arguably .NET more so than Mono, since Mono lets you compile your own ".NET framework(s)"). You could read somebody like John Rose's blog to catch up on where the JVM's kitchen-sink features stand, as part of the MLVM project.

Annotation processors or javassist

One uses either annotation processors, which are a feature of the JDK, or a toolkit like javassist. As far as tooling goes, one can insert the appropriate Ant task into the dependency chain of build.xml. This is all pretty convenient.

How is "no statics" a hard

How is "no statics" a hard research problem? By statics, I take it to mean no static mutable state, and static methods are pure. It's already been done for Java in Joe-E.

Not sure

Well, I'm not sure what they had in mind, but I guessed they meant something stronger: no globally bound names at all (I would consider all Java's class names "static" in this sense). This might seem like a stretch, but when you factor in their interest in DI and testability, it might make more sense. The conventional wisdom is that any reference to a global name is a too-tight coupling, since it makes reuse and testing more difficult. This certainly includes static methods, concrete classes (e.g., in "new" statements), etc.

So I took "no statics" to mean the complete elimination of static references, a la Bracha's Newspeak. But maybe this isn't what they meant.

You could be right. I would

You could be right. I would think enforcing that globally visible static methods are pure alleviates all of those testing difficulties anyway, though, assuming you also eliminate nominal typing by compiling all code to depend on interfaces (which I assume is how they'd be doing this).

Even so, hasn't this also been solved with type classes? It's merely a matter of how the name is interpreted, and as long as names are resolved locally, via an implicit mechanism like type classes for instance, it doesn't seem infeasible. Maybe I'm missing something.

ML Functors for DI

I'll admit that I have not really used dependency injection that much, nor thought about it extensively. But from my impressions of it, why don't ML's functors solve the DI problem? Are functors not dynamic enough for most DI practitioners?

They do solve it

There are more "object-y" solutions, as well.

The one I think is the smallest conceptual extension to Java is to a) view static methods as methods on the object's class (a la Smalltalk), and b) permit interfaces to specify abstract static methods as well, so you can typecheck method calls on the object's class. Then, the List interface might look like:

    interface List<T> {
       boolean isEmpty();
       int length();
       Iterator<T> iterate();
       // ... etc

       // use static methods in an interface to specify constructors
       static List<T> nil();
       static List<T> cons(T hd, List<T> tl);
    }

The use of it would be something like:

    void m(List<String> lst) { 
       List<String> newlist = lst.class.cons("yo", lst); 
       // less verbose syntax would be nice, but hey, it's Java
    }

So the static cons method that gets called is determined by the concrete class of the list argument passed in. Furthermore, we know it's there, because the List interface promises us it will be. So a constructor can be called, without having to nail down the concrete class to use directly in the code.

Java supports static fields

Java supports static fields in interfaces, so you might be able to concoct a solution by elaborating the List interface into two interfaces:

    interface List<T> {
       boolean isEmpty();
       int length();
       Iterator<T> iterate();
       // ... etc

       static List_static<T> class;
    }

    interface List_static<T> {
       // use static methods in an interface to specify constructors
       static List<T> nil();
       static List<T> cons(T hd, List<T> tl);
    }

and assigning an instance of List_static at List initialization time. You might have to be careful in the presence of implementation inheritance with overriding, but Noop claims to avoid inheritance anyway, so perhaps this isn't an issue.
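
For what it's worth, a variant of this that compiles in today's Java (all names are illustrative) simply passes the companion object explicitly, since an interface field would have to be a compile-time constant:

    interface List<T> {
        boolean isEmpty();
        int length();
        // ... etc
    }

    // The "class side" of List, carrying the constructors.
    interface ListStatic<T> {
        List<T> nil();
        List<T> cons(T hd, List<T> tl);
    }

    class Client {
        // The caller supplies both the list and its companion, so the right
        // constructors are used without naming a concrete class here.
        static <T> List<T> prepend(T hd, List<T> tl, ListStatic<T> lists) {
            return lists.cons(hd, tl);
        }
    }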

There are other things, too...

DI frameworks also typically include other features that are hard (AFAIK) to get with ML functors. For instance, they allow detailed control over object lifecycle, initialization order, etc. Basically I would say these features do fall into the category of "more dynamic." But ML functors get you a long way. This intersection is exactly what I had in mind as far as research, in any case.

Previously...

But from my impressions of it, why doesn't ML's functors solve the DI problem?

Frank Atanassow wrote about this here a few years ago - not about ML functors specifically, but about a functional/typed approach to DI.

Misleading article title

From the linked page: "Noop is a side-project from a collection of like-minded developers and contributors (listed at the side). We hail from several companies, including (but not limited to) Google."

That seems to pretty clearly state it's a project by some Googlers and others, and not directly associated with Google. It doesn't appear to be, as the original poster states, a "Google language". Attention-grabbing misleading article titles aren't helpful.

Whatever...

At least I didn't claim, like eWeek, that Google "Delivers" a Java-like language.

But in any case, I presumed that most around here would be able to figure out that it was one of those 20% projects once they clicked on the link.