Expect New Major Language Within Five Years

An eWeek article reports that "A group of software gurus gathered at TheServerSide Java Symposium to discuss the future of programming, saying we should expect to see more dynamic languages and possibly a new major language in the next five years." Some predictions mentioned:

  • "More scripting languages added to the JVM" - Eugene Ciurana, enterprise architect at Walmart.com
  • "More in the way of language experimentation" - Ted Neward, founder of Neward & Associates
  • "In languages I hope we get to something that has a message-based paradigm." - Adrian Colyer, CTO of Interface21.
  • "I think we're five years from the next big language—to be where Java is today" - Gil Tene, CTO of Azul Systems.

The authors of the next big language better hurry up and release it!

I find it interesting to see this degree of receptivity to new languages within a single language community, i.e. at a Java conference. Are attitudes towards new languages changing, perhaps because there's more awareness of alternative approaches these days?

It's Already Here...

...at http://www.scala-lang.org/!

I'm only semi-kidding.

Ironically, I think it's exactly Sun's hanging onto the Java language spec for so long that's led to the level of experimentation in other languages targeting the JVM that we're seeing. I find it fascinating that the JVM hosts a Scheme as good as SISC and a statically-typed OO/functional language as good as Scala.

Bold prediction...

Best-of-breed Scala tooling will be better than best-of-breed Java tooling in less than 30 months (with seamless integration between them). C# will be getting to approximately Java levels of tooling in that time-frame, and nothing else will be even close. That's not enough to make Scala the next major language, but it certainly puts a thumb on the scale.

Good tools

I would really like to learn more about what good things the Java tools are doing. I've seen some demos but that was years ago. I'm interested in things that Erlang, Lisp, Smalltalk, etc. programmers would kill for if we only knew what we were missing.

Are there features that would rock a Squeaker's world? What are they?

RE: Good tools

I would really like to learn more about what good things the Java tools are doing. I've seen some demos but that was years ago. I'm interested in things that Erlang, Lisp, Smalltalk, etc. programmers would kill for if we only knew what we were missing.

He's talking about Eclipse, NetBeans, IntelliJ, and all the plugins that exist for those tools.

Are there features that would rock a Squeaker's world? What are they?

Is the average Squeaker interested in Java? If so, I would suspect it would be seeing what modern tooling interfaces look like. Of course, the average Squeaker would probably be more interested in looking at Dolphin Smalltalk instead.

We clearly need better tools.

I am using the tools you mention as we speak, along with many plugins for said tools. Quality ranges from average to bad... I think the next big programming environment will be the one that offers excellent tools that minimize development effort.

Personally, I would prefer a Lispy language with top-quality tools to a language that does everything but has only average tools. I don't even mind the parentheses any more...or dynamic typing...

Lisps can be statically typed

Dynamic typing is not a necessary part of Lisp. The Liskell language, for example, is just a Lispy syntax on top of Haskell. The code looks immediately familiar to someone who knows Lisp and Haskell, and it is a perfectly okay language.

Plus, it has the parentheses that everyone likes so much!

Squeak

As so often happens, a Java advocate forgot the state of Smalltalk tools. Mea culpa.

Going from Squeak to a modern Java IDE, the big differences you would see would be scope (75-100 automated refactorings, 500-1000 interactive code audits), precision (strong typing means tighter refactorings), and cross-language integration (refactorings/audits/navigation/completion all work with the various 'little languages' around Java projects, including XML, HTML, JavaScript, and properties files). There is also some stuff at the margin that would be new to a Smalltalker, such as structural code search-and-replace, user-definable code audits, and automatic detection and abstraction of duplicate code, but I doubt those rise to the "rock your world" level.
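
To give a tiny, made-up illustration of structural search-and-replace: a template along the lines of $s$.equals("") => $s$.length() == 0 (placeholder syntax shown purely for illustration) can be applied mechanically across a whole project, so every call site with that shape changes at once. The class and method names below are invented:

// Hypothetical code before and after one structural rewrite.
public class BlankCheck
{
    // Before the rewrite:
    static boolean isBlankBefore(String s)
    {
        return s.trim().equals("");
    }

    // After the rewrite, produced by the tool rather than by hand:
    static boolean isBlankAfter(String s)
    {
        return s.trim().length() == 0;
    }

    public static void main(String[] args)
    {
        System.out.println(isBlankBefore("  ") + " " + isBlankAfter("  "));
    }
}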

Thanks

I really appreciate this summary!

BTW Squeak's Refactoring Browser seems to support structural search (Code Finder) and structural search-and-replace (Rewrite Tool) but I don't have any experience of those personally yet. I'm still weirded out when my code is changed automatically but I guess that will pass :-)

You get used to it

I'm still weirded out when my code is changed automatically but I guess that will pass :-)

It's odd at first, but you quickly get used to it. Particularly when you see how many fewer bugs show up when the machine does the rewriting for you. In Java with modern tools, I'd guess that I actually type less than one character out of twenty that shows up in the source, and I rarely retype anything. Bugs tend to congregate in places where I had to do the typing, and cluster especially strongly in places I had to retype.

I would be very surprised if Java (and Scala) automation doesn't go a lot farther in the next two years. Automated control-flow rationalization, automated detection and extraction/creation of "missing" classes, and semi-automated assistance for extracting code fragments into methods are all pretty obvious next steps.
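
For those who haven't watched it happen, here is a minimal sketch of what extracting a code fragment into a method looks like; the class and names are invented for illustration:

public class OrderTotal
{
    // Before: the discount calculation is written inline.
    double totalBefore(double subtotal, boolean loyalCustomer)
    {
        double discount = loyalCustomer ? subtotal * 0.05 : 0.0; // fragment selected for extraction
        return subtotal - discount;
    }

    // After: the selected fragment becomes a method, with parameters and
    // return type worked out from the code, and the original site calls it.
    double totalAfter(double subtotal, boolean loyalCustomer)
    {
        return subtotal - discount(subtotal, loyalCustomer);
    }

    private double discount(double subtotal, boolean loyalCustomer)
    {
        return loyalCustomer ? subtotal * 0.05 : 0.0;
    }
}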

Less impressive than it sounds

In Java with modern tools, I'd guess that I actually type less than one character out of twenty that shows up in the source,

That's something to be proud of if you develop the tools which do that, and it's true that a lot of useful automated refactorings are now possible. However, the numeric factor is misleading when comparing across languages, because Java programs involve a *lot* of boilerplate.

If only 1 in 20 characters in the source is written by hand, it seems to me that what's really wanted is for that hand-written content to be factored out into a higher-level description, so you don't usually have to look at or touch the generated code.

Not really

Some of that factor of twenty was boilerplate, but most is not. Name completion, automatic guessing of method arguments, automated dependency import management, generation of class/method stubs from use, and generation of local variables from expressions make up the bulk of that multiplier, and those have little to do with boilerplate (bar explicit typing, if you wish to consider that boilerplate rather than documentation). Indeed, most of them would be of use in any language with names, procedures, and scopes, although the savings would admittedly be less in languages where types are implicit or dynamic.

If only 1 in 20 characters in the source is written by hand, it seems to me that what's really wanted is for that hand-written content to be factored out into a higher-level description, so you don't usually have to look at or touch the generated code.

The hand-written content doesn't generate the auto-written content. The hand-written content merely acts as scaffolding, around which the IDE can guess at what you mean to write (or how you might mean to transform what you've written), and offer to do it for you. Given the level of tight feedback involved and the high number of decisions taken, fully generative solutions are pretty much a non-starter outside of limited domains. Even then, that just moves the desire for auto-assist up to the level of the generating language.

Boilerplate

Some of that factor of twenty was boilerplate, but most is not. Name completion, automatic guessing of method arguments, automated dependency import management, generation of class/method stubs from use, and generation of local variables from expressions make up the bulk of that multiplier, and those have little to do with boilerplate

I think that depends on the system in question. If the majority of a class can be generated from a higher-level description, then it contains all the stuff you've mentioned, but in that case much more of it counts as boilerplate than if the code had been written by hand.

(bar explicit typing, if you wish to consider that boilerplate rather than documentation)

In Java, much of the explicit typing is unnecessary, even from a documentation perspective.
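
For instance (a made-up fragment), the declared type on the left repeats what the right-hand side already says, so it documents nothing the reader couldn't see at a glance:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RedundantTypes
{
    public static void main(String[] args)
    {
        // The element types are spelled out twice; the left-hand side adds
        // no information beyond what the constructor call already states.
        Map<String, List<Integer>> scoresByName = new HashMap<String, List<Integer>>();
        List<Integer> scores = new ArrayList<Integer>();
        scores.add(42);
        scoresByName.put("example", scores);
        System.out.println(scoresByName);
    }
}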

Given the level of tight feedback involved and the high number of decisions taken, fully generative solutions are pretty much a non-starter outside of limited domains.

I think that languages with macros, i.e. syntactic abstraction, contradict this.

Even then, that just moves the desire for auto-assist up to the level of the generating language.

I'm not arguing against auto-assist, only saying that the apparent scale of the benefit of auto-assist in the Java case is largely a workaround for Java's limitations.

Cross-purposes

I'm pretty sure we are talking about completely different things, evidently because I somehow brushed up against one of your tripwires. I'm explicitly not talking about code generatable from higher-level descriptions. That's nice when you can get it, but you can't always get it, and it is nearly completely orthogonal to the amount of effort saved by auto-assist.

I'm currently working on extending the Scala plugin for IntelliJ IDEA with the various automated code assists available for Java. It is largely written in Scala. My back-of-the-envelope guess is that my Scala code comes out somewhere around 3x as dense as equivalent Java code in terms of characters-on-screen-per-feature. With closures, type inference, currying, for-comprehensions, operator overloading, etc., Scala has pretty much boiled all of the boilerplate out of Java (arguably at the cost of some clarity). Unfortunately, that 3x still leaves Scala far behind Java in terms of characters-typed-per-feature. It is an amazingly frustrating experience.

There is hope, however. The bulk of the characters-typed-per-feature overhead in Scala comes from fixing errors not found until compile- or run-time. These are errors ranging from simple typos to misnamings to high-level design errors. They don't appear to differ significantly from the errors one would make coding Java without code-assist, either in kind or in per-character density. With code-assist, the picture changes. Many errors are prevented from occurring (e.g. by completing names to avoid typos and automatically disambiguating references). Detection occurs much earlier for those that are not prevented (e.g. by type errors or corrupt control flow shown in the editor). Cost-to-fix is decreased drastically (either by auto-correction, or by code transformations that are done precisely and thus avoid cascading errors).

Perhaps this is the root of our misunderstanding. It is very easy to look at our program texts as final artifacts, and disregard the fact that programs are created very much by trial and error. That's a very blinkered view. On a per-character-typed basis, coding is largely about error detection and correction.

I'm not arguing against auto-assist, only saying that the apparent scale of the benefit of auto-assist in the Java case is largely a workaround for Java's limitations.

I will be very surprised if I can't get my Scala coding ratio down to one character typed per twelve to fifteen on screen. The bulk of the benefit of auto-assist is not due to lack of abstraction, and adding higher-level abstractions does not prevent large improvements due to auto-assist. They are largely complementary technologies, rather than competing ones.

Completing Identifiers

Completing identifiers is a clear example of saving characters typed that doesn't correspond to any higher level description. You probably wouldn't want a language that allowed source like

m\t (p\t 1) l\t_o\t_i\t

for

map (plus 1) list_of_integers

even if you might prefer to type the former. As clear as that example is, I'm not sure what you meant by many of the other things you casually mentioned. More concrete examples would help those of us without experience with fancy Java IDE(A)s.

Concrete examples

Completion:

f(alt-enter)(b(alt-enter)(ctrl_alt-v)baz(enter) (9 keystrokes total)

becomes


@NotNull final FooType baz = foo(bar); (37 keystrokes, plus probably 30+ more for the import if FooType is in another package)

More complex completion

new BOFH(alt-enter)(ctrl-f)foo(enter) (14 keystrokes total)

becomes


private final BastardOperatorFromHell foo = new BastardOperatorFromHell();

with "foo" left at the edit point (76 keystrokes total)

Trivial code creation:

(ctrl-n)f(down-arrow)(enter) (4 keystrokes)

becomes


@NotNull
public Foo getX()
{
    return x;
}

public void setX(@NotNull Foo x)
{
    this.x = x;
}

(78 keystrokes total)

Trivial refactoring:

(Ctrl-F6)bar (4 keystrokes total)

becomes

rename foo and every reference to it to bar (>1000 keystrokes in thirty files, over the course of many edit-compile-debug-swear cycles)

Note that these are the simple examples, explicable after a bottle of Pinot Grigio and an incredibly lovely plate of hanger steak and lentils; they are trivial things that I do on average once every thirty minutes. The complex examples, where auto-assist really shines, are much more subtle and powerful, mostly in terms of errors avoided. None of these are really about abstraction or removing boilerplate (other than the code generation example, thrown in for fairness' sake). Even fancy dynamic/type-inferred/macro-driven/generative language environments include a lot of entropy for the purpose of human understanding. Modern auto-assist systems can remove a lot of that entropy from the edit process, while retaining it for code-reading. Abstraction really is orthogonal to that.

High-tech solution to the wrong problem?

How helpful would this be in writing denser code, like Peter Norvig's, Darius Bacon's, or Mark Johnson's for a few concrete examples?

Sure, why not

The generative capabilities of code auto-assist are lessened for dense code, but the error-prevention/detection/correction capabilities become correspondingly more useful. Why is this surprising?

We should be careful...

Even fancy dynamic/type-inferred/macro-driven/generative language environments include a lot of entropy for the purpose of human understanding. Modern auto-assist systems can remove a lot of that entropy from the edit process, while retaining it for code-reading. Abstraction really is orthogonal to that.

I'm a big fan of auto-assist features, but I think we should distinguish between two classes. The first class, local changes like name/type completion, "assign expression to local variable," etc., is an unqualified win.

The second class, though, should make us very suspicious. Into this class go things like "generate getters/setters", "generate delegating methods", "generate for/while loop", etc. I use these in Java as much as anyone, but in most cases the code they produce is pure boilerplate that impedes comprehension and makes the code more brittle. We really would be better off improving the language in a lot of these cases.

Take, as an example, delegation. It's great that I can easily implement an interface, instantiate a helper object and delegate a whole bunch of methods without having to do a lot of typing. This is a useful way of reusing code when my only other choice is inheritance. On the other hand, I'd really be better off with mixin composition in many of these cases, and I'd really be better off if my code wasn't littered with boilerplate. So, while having this auto-assist in the IDE is better than nothing, it would be even better if we didn't need it at all (or needed it much less often).
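
As a rough sketch of what that delegation boilerplate looks like (the interface and names below are invented), the IDE will gladly generate one forwarding method per interface method, and every one of them is pure plumbing that mixin composition would make unnecessary:

import java.util.ArrayList;
import java.util.List;

interface Logger
{
    void info(String message);
    void warn(String message);
}

class ListLogger implements Logger
{
    private final List<String> lines = new ArrayList<String>();

    public void info(String message) { lines.add("INFO " + message); }
    public void warn(String message) { lines.add("WARN " + message); }
}

// Wants Logger behaviour, but inheriting from ListLogger is not an option,
// so a helper is instantiated and every method is forwarded by hand
// (or, mercifully, by the IDE).
class ReportService implements Logger
{
    private final Logger delegate = new ListLogger();

    public void info(String message) { delegate.info(message); } // generated
    public void warn(String message) { delegate.warn(message); } // generated

    public void run() { info("report generated"); }
}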

We could argue for a while about exactly which auto-assist features are good and wholesome, and which are pernicious crutches for missing language features, but my only point really is that it's foolish to argue that the entire lot is one or the other.

Yes and no

If only 1 in 20 characters in the source is written by hand, it seems to me that what's really wanted is for that hand-written content to be factored out into a higher-level description, so you don't usually have to look at or touch the generated code.

While it is certainly true that reducing unnecessary boilerplate is good, there are still decent reasons to have some boilerplate that can be generated automatically. The core reason is readability. After all, we spend far more time reading code than writing it, so ensuring code is easy to read is worthwhile. Too much boilerplate can obscure the point of the code, but a certain amount goes a long way toward making it more readable.

Consider, for example, reading something in English: there's actually a lot of boilerplate there -- unnecessary words and punctuation that don't affect the meaning, but help to space out and provide a framework for the ideas being communicated. I think the same applies to code -- you can get too terse and too dense and end up with something that is hard to read. If code were only about communicating with the computer, I would agree with you that there's no point in any redundant boilerplate. Since I view code as also being about communicating with other people, I see value in a certain amount of redundancy as long as it makes code easier for humans to read.

I'm not sure I buy your

I'm not sure I buy your analogy. Wordiness isn't a characteristic of well-written English. I don't think Anton, or anybody else for that matter, is suggesting that compact languages with little or no syntax are superior. Rather, he's suggesting that languages with little or no "deadwood" are superior because they've provided the higher levels of abstraction needed to shave that deadwood off.

Redundancy

Agreed. Redundancy can be important and good, but forced redundancy due to the lack of ability to abstract is bad. It always results in the wrong kind of redundancy, even if you get some good redundancy as part of the deal.

rewrite rules!

Here's an OOPSLA 2002 experience report pdf - "allowed us to systematically make 17,200 changes almost bug free".

Scala

As far as new languages go, it has a good shot. I admire the underlying theory, as well as the high degree of pragmatism that went into its design. I make sure that all the grad students in my PL course get to write some Scala. If you need to target the JVM, then it certainly would be my choice. I have recommended it to many people, especially if they are currently using Java.

That said, personally, I don't use it. The language is still too verbose, the OO too much at the surface, and the functional parts buried in too much syntactic noise. But that is mostly because my main interest is to "teach computers to do symbolic mathematics". Math is not object-oriented. Actually, it isn't parametric-polymorphic either, but that's another story.

Scala rocks

I make sure that all the grad students in my PL course get to write some Scala.

I thank you for that!

There's a pretty good post on Steve Yegge's blog speculating on the next big language. The funny thing is that in the comment thread pretty much every language (Scala, Lisp, Erlang, Ruby, JavaScript...) is suggested to be the next big language.

God help..

Does anyone actually want their favourite language to be the next big thing? Like, Enterprise Erlang Beans WSDXMLT?

I for one hope the Enterprise crowd continue to amuse themselves very far away from my turf :-)

Enterprise Erlang Beans WSDXMLT

Enterprise Erlang Beans WSDXMLT rock, man. The urlaub scripting language, which allows one to instantiate business logic directly on the underlying J-EAI engine, makes for the easiest XML messaging ever.

you know it's coming. accept it.

OTP

If I were you I'd be worried. It starts small, like "Erlang/OTP". But pretty soon...

It's in the air

Are attitudes towards new languages changing, perhaps because there's more awareness of alternative approaches these days?

There's a definite buzz about something post-Java coming along soon. As another recent example, this blog post about The Next Big Language was widely discussed in different forums. There are tons of other examples everywhere you look.

My personal experience is that after grinding out "enterprise" Java code for over 5 years, at some point it wore me down. The thought that there had to be a better way renewed my interest in language alternatives.

Something will overtake Java 1.x, but what? Fun times ahead.

My own bet

My own bet is on haXe, especially for all kind of web applications ;)

My own wishes

Factor. I seriously doubt this will come to pass, but in any case I am impressed by how fast the implementation has evolved. Factor already has an interactive top level, an optimizing compiler that generates machine code for three major processor families (x86, PowerPC, ARM), a web server, a GUI package, an IDE, etc. Look at how long it took Python to hit the same point...well, Python still doesn't have a standard machine code compiler and still uses Tk for the GUI.

Erlang 2. There is no such language yet, but I keep expecting it. Take the core of Erlang and fix all the sticking points and bits of ugliness: switch to indentation-based syntax to avoid all the issues with commas and semicolons as separators; add destructive local variables; add process-local updatable arrays and hashes; allow user-defined guards; make modules first-class.

Semi-troll

we should expect to see more dynamic languages

I expect that within five years we will manage to come up with a less stupid name than "dynamic languages" ;-)

Please, Dear God...

...let's hope that in five years we'll have made real progress on abstract interpretation, usable dependent types, multi-stage programming, etc. and both "static types" and "dynamic types" can be retired with due honors. :-)

I don't see that in five years

I don't see that happening in five years. All of this stuff has been simmering for at least three or four times that long and has roots in early computer science.

flexitype is the new dynamic

Newsflash from the 54th Futurological Congress. In five years dynamic languages will be displaced by languages with flexityping, formerly known as "optional static typing".

optional ?

I guess that once both the static and dynamic camps are tired of fighting each other, they'll come to agree that a good system should have both static and dynamic features, and then they'll fight over which one should be optional :-) I for one prefer static + optional dynamic, but I guess that depends on where you're coming from.

I for one prefer static +

I for one prefer static + optional dynamic, but I guess that depends on where you're coming from.

*sigh* This might not become an easy peace ;)

They're both optional, it

They're both optional, it just depends on which way you choose to look at it.

I know that your language, haXe, has "optional" dynamic typing, but you could just as easily view all variables as dynamic and then either leave out a declaration or explicitly declare some of them to be static.

Situation Language

What is a dynamic language? What about this: A dynamic language is one that allows ongoing partial evaluation. Think about an engineer using a simulation. The engineer generates many partial results as he/she progresses. It makes sense to reuse these results where they are available. This is essentially what a dynamic environment does. It allows one to work through a problem from situation to situation to a final result.

In terms of a name perhaps "situation language"?

Sounds like what I'm trying

Sounds like what I'm trying to call a "live programming language." Basically, the environment is completely incremental in that when the engineer changes code, the program execution is updated responsively as needed. Most text-based dynamic languages only support hot swapping (the new code will be used on re-execution), while visual languages have traditionally supported live program updates (the new code is immediately re-executed where applicable in the program).

Freeeeeedom!

I expect that within five years we will manage to come up with a less stupid name than "dynamic languages" ;-)

I'm guessing that freedom language doesn't do it for you, either? ;-P

I am all for bondage and

I am all for bondage and discipline, myself...

But you know something? "Freedom languages" is slightly less stupid than "dynamic languages" ;-)

Labels

I thought it was more stupid. The thought "would you like some Freedom Fries with that?" came to mind. It'd be nice if the debate could move beyond politically charged words and emotional rants. Most people recognize there are tradeoffs involved, but when they lean towards one side the arguments get a bit too screechy.

Terrorist languages

Yes, of course. Either you are with FL or you are programming terrorist languages. Everybody gets that. It is never too late to ridicule inappropriate labels.

Re: Terrorist languages

I didn't mean to suggest that the blogger was calling the other side "terrorist" languages. My point was that he picked an emotionally positive word for his point of view, and put it up against a stodgy word like "safety". This just reminded me of the silly political tactics that go on, instead of real intellectual discussion.

I didn't mean to suggest

I didn't mean to suggest that the blogger was calling the other side "terrorist" languages.

This is one of the not so very rare occasions where the language is smarter than its speaker. Supplying the cultural context in which a certain phrase is used was just too obvious. Not everything can be controlled by original intentions. Sorry, but I couldn't resist.

Note that I don't even have an idea about the shape of a *real* intellectual discussion about programming languages. There are serious technical discussions, and discussions about concepts and progress in one category or another, of course, but since it is not all about truth values but mostly about use values I don't see any escape into the ivory tower. Marxists and systems theorists always maintained that there is no neutral point of view on society, and I guess the same is true of complex artifacts like programming languages, their interdependent tools, their jargons, and their surrounding communities. You can't separate the intellectual discourse about PLs from technicalities, but you also can't reduce it to positivism.

Note that I don't even have

Note that I don't even have an idea about the shape of a *real* intellectual discussion about programming languages.

It's true we all carry our biases with us, and we can't be completely objective. I'm not advocating a dry, purely theoretical, ivory-tower discussion either. What I am advocating is that people emotionally detach themselves from the arguments, and discuss the issues and tradeoffs involved, rather than trying to win some debate. More light, less heat. I think it's clear that framing the issue as "freedom vs. safety" generates more of the latter.

As an example of talking about issues instead of getting emotional, I think the recent discussion about how much tools help vs. just cover up for inadequate language design was pretty good. Lots of concrete examples without emotional arguments.

Rantidote

It'd be nice if the debate could move beyond politically charged words and emotional rants.

That's what LtU is supposed to be for. The last best hope of programmerkind. :)

Maybe not stupid, but misplaced

After reading Anton's reply in another thread, I tend to think that what is meant by dynamic languages is actually implementations of languages with dynamic runtimes. The confusion may be arising from the fact that people treat "language" and "implementation" quite interchangeably nowadays.

Nice! "If you're fixing

Nice! "If you're fixing compiler errors, the type terrorists have won."

And I thought a "type terrorist"

was someone who did things like reinterpret_cast floats to pointers. (And I'm well aware of the common trick of using double-precision FP registers to store 64-bit quantities on 32-bit processors... which is a perfectly fine optimization when done by a compiler or a skilled systems programmer.)

Perl6 should be released by then...

Whether or not it becomes a BIG thing remains to be seen. It is, however, going to be used by lots of current Perl5 programmers.
It's going to have hyper-operators (think APL), a way to change the grammar rules from within the language (it'll be a great tool for making DSLs), optional static typing, and many of the other buzzword-compliance features. It may not be considered a "respectable" language by some, but I have no doubt it'll find use somewhere. It may even become bigger than Perl5 ever was.

Python will be more mature, as will Ruby. Microsoft may release a new language which becomes "big" in numbers merely because of the installed base of their operating systems. Some new Pascal derivative always has a chance at being big. D might be the next big thing, since it offers much of the speed of C but with Unicode support, garbage collection, associative arrays, scope modifiers, classes, exceptions, mixins, and a foreach loop.