ECMAScript 4 overview paper

An overview paper describing ECMAScript 4 has been added to the ECMAScript site. It was recently announced on the mailing list:

I'm pleased to present you with an overview paper describing ES4 as the language currently stands. TG1 is no longer accepting proposals; we're working on the ES4 reference implementation, and we expect the standard to be finished in October 2008.
This paper is not a spec; it is *just* a detailed overview. Some features may be cut, others may be changed, and numerous details remain to be worked out, but by and large this is what TG1 expects the language to look like. Your comments on the paper are very welcome. Please send bug reports directly to me and everything else to the list.

Home page material...

Chris, as a CE you can (and should) post items like this to the homepage. Use the link at the bottom of the FAQ. (I promoted the item manually, so there's no need to repost it).

JavaScript Department

Thanks for that. How can this be added to the JavaScript department so it gets listed there?


We really need a button for that, but it's not built into Drupal. I've updated the database manually.

Let me tell you

Let me tell you that the grammar is rather annoying to parse (I've just spent the better part of last week attempting to write a parser) and that the specifications are somewhat hairy.

Still, at least on paper, JavaScript 2 looks impressively better than JavaScript 1 and becomes an interesting contender in the PL field.

What is the future of ECMAScript/JS?

I am wondering: what is its future? In browsers we'll always have vendor-specific dialects of it, and the legacy monster will never be slain. For embedded scripting, ECMAScript surely isn't the only worthy contender -- Lua, some sort of Lisp, S-Lang, or the like. Not to mention the trend of allowing non-embedded languages with their own runtimes to be run from applications, e.g. Python or the VBasic family.

What's LtU's take on the future of ECMAScript?


I can only say that there's strong support for incorporating JavaScript 2 in Firefox. Let's not forget that Firefox (and Thunderbird, Songbird, Nvu...) is in large part written *in* JavaScript itself and will benefit a lot from added static checks, generators, and other features -- in addition to which, the work on JS2 is done in part by Firefox people, during the big renovation of Firefox's JavaScript interpreter.

With FF having, iirc, nearly 30% of the market in Europe and Oceania, and between 15% and 19% on the other continents, JavaScript 2 will have a future. Hopefully other non-Microsoft browsers will follow suit relatively soon. The existence of a (functional) reference implementation, rather than an informal and unintelligible syntax-based specification, will both help in writing deployed implementations and prevent fragmentation. The fact that Adobe is joining forces with Firefox on the JavaScript front for Flash (and presumably AIR) is also a positive point.

So, I'm rather positive for the moment.

Monkeys will save us

I have a solution for the legacy monster, called ScreamingMonkey -- it's an Active Scripting engine for IE built around Tamarin, which as you may know is widely distributed.

You can read about ScreamingMonkey and related projects here. One of those projects is IronMonkey, which hopes to support optional (downloadable) Python and Ruby memory-safe implementations on Tamarin.

That brings up a good point. The default scripting language supported by browsers will remain JS. It's not going to become some other language for the foreseeable future. And the cost of supporting more languages is quite high, especially if they are not co-hosted on the same VM hosting JS. Consider these points against trying to wedge existing runtimes into browsers:

  • Memory-unsafe implementation languages.
  • Cross-heap cyclic leaks due to multiple memory managers.
  • No common browser security model (which is evolving, and not present in the native runtimes).
  • Large compiled code footprint (download and in-memory).
  • Lack of cross-platform download on demand.
  • No stable ABI for embedding native runtimes.

But worst of all, if you think it's hard to get browser vendors, including the "legacy monster", to agree on one language, consider how impossible it is to get them to agree on many languages. Unless, of course, one vendor dominates and forces everyone to reverse-engineer, or instead license, its code. I hope the hazards with that kind of monopoly are clear by now.

Some might argue that Firefox should take on all of the above problems in order to win developers and pressure other browser vendors. This is a good strategy when done incrementally, by embracing and extending IE's de-facto standards in concert with other vendors and web developers in a truly open standards body like the WHAT-WG. But it's suicidal to try to please everyone -- solving the above problems takes far too long and has a huge opportunity cost. You end up pleasing no one, and losing users because you didn't innovate asymmetrically, in user-facing features and better platform APIs apart from programming languages.

Firefox must remain small (it's still under 6MB), because people have to choose to download it, over various kinds of networks; we don't have many OEM bundling deals that make us the default browser. Same for Opera desktop and Safari on the iPhone, and of course the mobile web cannot tolerate N language runtimes, with O(N) or O(N²) bridging costs, for N > 1. It's JS or bust for the mobile web, which I keep saying is the same as the web.


what you say is true

What you say is true and points to the fact ("imo") that we are very lucky that the people developing JavaScript have, at least, "not bad taste". Nothing about JavaScript is perfect, but nothing is terribly offensive either, and it is darn flexible.


JavaScript the Virtual Machine?

The problem with getting traction for other scripting languages in the browser is that the browser has no common virtual machine that can play host to a variety of languages. Sure, you can try to plug a VM into the browser, but that hasn't proven successful in the past. Part of the reason for the success of JavaScript is exactly that people don't want to be downloading programs that run locally.

Also, from what I see, a big push is toward making JavaScript an intermediate language. For example, Ajax is largely about using languages like Java, C#, Python, PHP, etc. that auto-generate the JavaScript glue code. The generated code is not intended for human consumption. I'm left wondering whether this use of JavaScript as an intermediate language has influenced the design of ES4. That is, does ES4 make compiling to JavaScript easier and/or more efficient?

JS as IL

Supporting code generators targeted at JS has been at most a secondary goal. If only JS were never written or read by humans. But it is, in spite of crunchers and obfuscators, and that's still a huge virtue of JS and other web content languages -- the view-source advantage.

Nominal types, final classes and methods, let, let const, and other name binding forms that can't be overridden, all will help code generators produce more robust code that can be more readily optimized by browser JS engines.
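As a rough illustration -- in ES2015-style JavaScript, where `let` and `const` eventually landed (ES4's surface syntax differed) -- a binding that cannot be rebound is exactly the kind of guarantee a code generator and an optimizing engine can rely on:

```javascript
// Hypothetical sketch: a code generator emitting `const` bindings can
// assume they are never rebound, so an optimizer may treat them as
// fixed values.
const factor = 2;                            // rebinding is a runtime TypeError
let scaled = [1, 2, 3].map(x => x * factor); // [2, 4, 6]

let rebindFailed = false;
try {
  // Deliberate error: assignment to a const binding always throws.
  factor = 3;
} catch (e) {
  rebindFailed = e instanceof TypeError;
}
```

Because `factor` provably never changes, an engine is free to constant-fold it into the callback without any whole-program analysis.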

On the list, the topic of call/cc came up, as part of a concurrency thread, but also in part because some code generators (hi, Nicolas!) would love to have it in the target IL. See this message and the responses in its parent thread.


tail calls

Nominal types, final classes and methods, let, let const, and other name binding forms that can't be overridden, all will help code generators produce more robust code that can be more readily optimized by browser JS engines.

And don't forget tail calls!


some ES4 implementations are expected to exist in “hosted” environments, like the JVM or .NET. These environments do not always provide powerful control operators like first-class continuations or stack marks. Therefore ES4 is also precluded from incorporating such features.

Given that JS has closures, a "stack frame" can continue to exist indefinitely. That means that ES4 on the JVM or .NET can't use the native stack anyway, right? So why not have continuations?

closures & continuations

Given that JS has closures, a "stack frame" can continue to exist indefinitely. That means that ES4 on the JVM or .NET can't use the native stack anyway, right?

You can have closures and a native stack. Having closures point to stack frames is only one way to implement them, and not the best way, since it can prevent garbage collection of other items in the same frame -- things the closure did not capture, which would otherwise be collectable.
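A small sketch in present-day JavaScript (names invented for illustration) makes the point concrete: the captured variable outlives the call that created it, while uncaptured locals in the same frame can be reclaimed.

```javascript
// Sketch: `count` is captured by the returned closure and stays live
// after makeCounter returns; `scratch` is not captured, so an
// implementation that doesn't retain whole frames can let the GC
// reclaim it immediately.
function makeCounter() {
  let count = 0;
  let scratch = new Array(1024).fill(0);  // dead after return
  return function () { return ++count; };
}

const tick = makeCounter();  // makeCounter's activation is gone...
const first = tick();        // ...but `count` persists: 1
const second = tick();       // 2
```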

Not having continuations allows a language to fully interoperate with native APIs that have callbacks. With continuations you can't do that, because you can't yield and resume control across native API stack frames. All of your native calls must be leaf functions, unless you impose restrictions that require the programmer to know when a continuation that might cross native stack frames can or cannot be invoked.


The thread I cited in a reply just above tells why we are not imposing coroutines or call/cc on the standard. See this message, which nicely summarizes one goal in specifying generators but not coroutines or call/cc: greater interoperation through lowering the requirements bar a little. (We have other goals and reasons, too; see the thread.)
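For comparison, generators -- which ES4 specified and which ES2015-style JavaScript later shipped -- suspend only at their own yield points, so they never capture a native caller's frame the way call/cc would. A minimal sketch:

```javascript
// Sketch: a generator yields control at explicit points and resumes
// where it left off -- shallow, one-frame suspension, not full call/cc.
function* fib() {
  let a = 0, b = 1;
  while (true) {
    yield a;
    [a, b] = [b, a + b];
  }
}

const g = fib();
const firstFive = [];
for (let i = 0; i < 5; i++) {
  firstFive.push(g.next().value);  // 0, 1, 1, 2, 3
}
```

This is the "lower requirements bar" trade-off: a hosted implementation on the JVM or .NET can compile each generator into a small state machine, with no need to capture or switch native stacks.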



I believe these archives have the bad taste of being private.


Sorry about that -- mailman did the right thing, yay.

As other sites host digests or archive mirrors, I think it's OK to just open this up. We never intended to keep the list private, and anyone can join. But any LtUers interested in JS2/ES4 and not on the list: join already! We crave feedback.


It seems very big

Just some thoughts. I'm not any sort of an expert in PL design, although I do code in JavaScript fairly often.

It was very interesting to see what ES4 is actually going to look like, given everything I've heard about the features that it's going to have.

As a checklist, the things being added to ES all sound like very good things. However, I can't shake the feeling that they've taken a small language and turned it into a big one, and I'm not sure this will be to anyone's benefit in the long term.

I wonder if perhaps ES4 is making the same mistake as C++[1], in trying to add new features to a legacy language, and ending up with something that has a lot of syntax, and idiosyncrasies. In particular, despite what the document says, it isn't clear to me that the class based OO and the prototype system really fit together particularly neatly, and I think that the let expressions are really a work around for the fact that variables aren't block scoped in previous versions.

It's a bit like there are two languages in there: the JavaScript we all know and quite like, and something new and interesting[2], but not obviously the same thing.

Still, I guess we won't really know if ES4 is any good until people have tried to actually program big[3] things in it.

[1] I believe Bjarne said somewhere that trying to be a superset of C was a mistake. I may be wrong. I'm sure you can find someone who's said that though!
[2] The structural types are cool; I've never seen that in an OO language, and I can see them being very useful. Shame they can't be recursively defined. Also interesting is the use of two-word keywords, like "this function".
[3] I've seen it often said that ES4 needed classes for making "programming in the large" easier. Has anyone come across evidence for this based on actual projects?

In the long term, we'll all be dead

We have indeed turned it into a bigger language. Some new features probably have more utility than others, but we've tried to remain use-case driven, and ES4 is a language for the next few years. Nobody can say what the situation will be like in a decade -- ES5, ES6, or something new.

As Brendan wrote, the legacy language is a fact to be dealt with, not something we can easily work around. Backward compatibility (compatibility with the web) is of extreme importance; in some cases ES4 is not compatible, but we've tried to minimize those cases and make only changes we expected would be OK.

To address two concrete points:

The class-based system and the prototype system do in fact fit together in an OK way, but not without warts and subtleties (not all of which are completely worked out yet, I think). The main problem is working out how overriding a method in one hierarchy affects the other, not least when it should and when it shouldn't.

The let expressions are not a work around for the lack of block scoping in previous editions; that was fixed by the let directives ("let is the new var"). Let expressions just provide even narrower scopes for bindings and are ready for macros, should those ever make an appearance. Let expressions emphasize the (quasi) functional flavor of the language. From the Date class:

  intrinsic function getDay(): double
      let (t = timeval)
          isNaN(t) ? t : WeekDay(LocalTime(t));
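For readers more familiar with later JavaScript: a block-scoped `let` (as eventually standardized in ES2015) gives roughly the same narrow binding as the ES4 let expression above. In this sketch, `weekDay` is a hypothetical stand-in for the spec's WeekDay/LocalTime helpers, and UTC is assumed for simplicity:

```javascript
// Rough modern-JS equivalent of the ES4 let expression, assuming a
// UTC timestamp in milliseconds (the real Date class also applies
// LocalTime; omitted here).
function weekDay(t) {
  // 1970-01-01 was a Thursday (day 4); Sunday is day 0.
  return Math.floor(t / 86400000 + 4) % 7;
}

function getDay(timeval) {
  {
    let t = timeval;  // `t` is visible only inside this block
    return isNaN(t) ? t : weekDay(t);
  }
}

const day = getDay(Date.UTC(2008, 9, 1));  // 2008-10-01, a Wednesday
```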

I'm sorry, I probably meant

I'm sorry, I probably meant let directives, not let expressions.

Your comment led me into actually looking at the source code of the standard library. I completely see what you mean about let being the new var, and thank heavens for that!

"a fact to be dealt with"

With respect to your comment that the legacy language is a fact to be dealt with, I'm not entirely sure you got what I meant by my original comment.

Rather than saying that you should have thrown out the legacy language, I really meant that you'd done too much to it, fixing it where it wasn't really broken (and of course, compatibility means that you can't fix it where it really is broken, for example the binding of this).

This isn't really meant as a criticism at the moment, as I said, people (other than the working group) need to try to use it before we know if it's an improvement. Rather it is an impression, an aesthetic judgement, and I am quite prepared to find myself wrong on it - it would not be the first time.

I showed the linked document to one of my colleagues, who said something along the lines of "They're trying to turn it into Java, it's disgusting"[1], and I know what he means (especially having now looked at some ES4 code). That said, if you need to run a Java in the browser (and I still don't quite see what the use case is -- Firefox extensions, maybe?), then a Java that has first-class lexically scoped functions, a more expressive type system, and some nice things like destructuring bind and list comprehensions doesn't sound like a bad Java to me.

[1] This particular colleague is of the bent that thinks that static typing and information hiding[2] are necessarily bad things. We both love Python, but he thinks that the fact that Python is dynamically typed is a feature, I just think that the fact they left out static typing has allowed them to make a beautiful language with no cruft that's there to keep the compiler happy. If you can add static typing to Python without detracting from the beauty of it, I'll think it's a good thing. Needless to say, my colleague hates Java.

[2] Yes, I pointed out that you can already do public and private variables in Javascript. And yes, we do put underscores in front of variables in Python to say "don't look at this unless you really want to". I guess he thinks that it shouldn't be a language feature.


Peter, your colleague sounds like many people I know. There is intense dislike of Java, and a knee-jerk response to seeing class in JS code. Of course, Python has classes, too, and both Python and ES4 are dynamic (ES4 by default). Types in ES4 are optional, and the strict mode is optional at the implementation's discretion.

What's more, as the overview points out, classes are baked into the standard ES3 library, and of course the DOM and many other libraries used by JS hackers.

Can any of these considerations help unjerk your colleague's knee? It would be interesting to hear more.

On the general question of evolve JS vs. do a new language: TG1 has many implementors who reckon we can't afford two disjoint implementations. Now there could be aggressive code-sharing, even with a "new JS" that is not backward-compatible. But we also want to re-use brainprint, allow for code migration and gradual typing, and otherwise evolve rather than supplant.


Wow, look at me, I've got opinions!

Wow, I'm talking to the creator of one of the world's most programmed languages!

I can't speak for my colleague (he's home sick today), but here's my (overly long) thoughts on these issues:

With regard to size and optionality:

I hope you won't mind me saying that ES4 is going to be a large language. By a large language I mean that there are a lot of details in the overview, a lot of keywords, and a lot of concepts.

I learnt JavaScript from the JavaScript 1.1 guide and reference from Netscape, at the age of about 14 or 15, not really knowing any programming apart from some very rusty BBC BASIC and not enough C to do anything. I recall the JavaScript guide being pretty comprehensive and pretty easy to understand. I wonder if a 14-year-old will think the same of the JavaScript 2 guide, regardless of how much is optional?

I guess that a language becomes more complex as you add features that are not easily defined in terms of the core language. Obviously static types need to add complexity, and so do classes if they are to be statically typed (this trick won't give you the advantages you require).

I'm not saying that being a large language is a bad thing, and I'm not saying that the additions made are in bad taste (some of them make me quite excited -- keep borrowing from Python just as much as you want!). It's just interesting to compare ECMAScript's approach to complexity with, for example, Scheme's. Would it be fair to say that Scheme is successful (anything that's been around that long must be considered some sort of success) because of what it is: a very small and elegant Lisp, and to make it anything else would be to destroy it; whereas JavaScript is successful despite what it is (I hear this often), and change at the cost of complexity is worth it?

With regard to adding classes:

ECMAScript is (as far as I know) the best example of a prototype-based language in current use, so if the designers of the language are now moving away from that model (and I think it's clear that this is what is happening) towards classical inheritance, does that suggest that prototypal inheritance was a bad move in the first place? Is class-based inheritance superior? I'm assuming the language designers aren't of the opinion that a mixture of both forms is the best option, but I may be wrong. I'm still interested in hearing where the pressure to add classes came from.

The argument that Python has classes too is a bit of a red herring: Python never tried prototype-based inheritance, so it doesn't have to support the legacy of it. The argument that the DOM is implemented using classes is too. A JavaScript program (in the browser) only ever has to deal with DOM instances and factory functions, and I've never thought to myself "I wish I could subclass DIV". I'm not sure which other libraries you mean.

With respect to evolving JS versus doing a new language:

I'm not entirely sure which you've done! You can re-use brainprint (excellent expression, btw) between C and C++, but good style in one language isn't good style in the other. I wonder if perhaps what will happen is that library developers who write code with complex inheritance structures, and people who always wished that JavaScript was Java, will write in an "it's Java for the web browser" dialect, while people like me just use the "old JS" dialect as a sort of "JavaScriptScript" to manipulate objects created from the libraries.

Urgh, I've got real actual work to do now :-(

Growing a language

Would it be fair to say that Scheme is successful (anything that's been around that long must be considered some sort of success) because of what it is: a very small and elegant Lisp, and to make it anything else would be to destroy it;

Scheme is not immune to the question of growth. The same debate has been going on in the Scheme community since R4RS, and even more so since R6RS. All languages have to decide how they want to evolve. And there's always someone pointing out how much simpler things were before. :-)

I'm aware that there was

I'm aware that there was quite a furore around the amount of change involved in R6RS; I'm not a Scheme programmer, and I don't know whether the report authors got it right.

I hope you take my point, though: with Scheme there is a principle that it's meant to be a small language, and while it has grown, the cost of growth is something the report authors will have had at the front of their minds.

I might be pointing out how much simpler it was before, but not to the end of telling the ES4 team that they've got it wrong.

I'm fully aware that I'm only an armchair language designer, but the cost of growth versus maintaining simplicity does seem like a valid topic for friendly debate.

Size matters

No bad religion

It's true that older "JS2" or "ES4" drafts, led by Waldemar Horwat with Microsoft people on board (see JScript.NET), sent down a summary judgment that classes have vanquished prototypes in OOP -- not far from the truth in academic research, FWIW.

These days, the Ecma TG1 crowd is not dogmatic at all about classes being in any way better than prototypes for general OOP. We expect untyped code and prototype-based programming to be commonplace on the web, for a long time if not forever.

Why classes?

However, prototypes in JS have their weaknesses, which cause problems on the web, notably in Ajax libraries, which strive for good reasons to be compositional. JS's single-prototype system is flexible enough to support many OOP styles, but you have to totally order objects along the prototype chain, which sometimes bites back. And you still don't get fixtures: efficiently sealed instances that can't be tampered with.
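A tiny sketch, in plain prototype-style JavaScript with invented names, shows both pain points: mixin-like behaviors must be flattened into one totally ordered chain, and nothing prevents later tampering with a shared method.

```javascript
// Two would-be mixins forced into a total order on one prototype chain.
const clickable = { click() { return "click"; } };
const draggable = Object.create(clickable);       // must come "after" clickable
draggable.drag = function () { return "drag"; };

const widget = Object.create(draggable);          // widget -> draggable -> clickable
const both = widget.click() + "+" + widget.drag();

// No fixtures: any code holding a reference can retarget a shared method.
clickable.click = function () { return "hijacked"; };
const afterTamper = widget.click();               // "hijacked"
```

With three or more behaviors, choosing the chain order becomes arbitrary, and the tampering problem is what fixtures (sealed instances) are meant to close off.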

Anything directly supporting high-integrity abstractions in the language will be novel, and it will look like classes, whatever the keyword names. It will entail declarative special forms for fixing the fields and methods produced by an object factory, detailing the construction protocol for the factory, and efficiently hiding member names.

The alternative involves closures for private variables, which means (a) idiomatic JS FP, not OOP, which is inconvenient and mysterious to most programmers; (b) unlikelihood of optimizing away closure costs in most implementations (think of browsers for cell phones).
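The closure idiom in question looks roughly like this (a hypothetical sketch): each instance allocates fresh function objects per method, which is the cost implementations struggle to optimize away.

```javascript
// Closure-based privacy: `balance` is reachable only through the two
// closures below. Every call to makeAccount allocates new function
// objects, unlike class methods shared via a prototype or vtable.
function makeAccount(opening) {
  let balance = opening;
  return {
    deposit(amount) { balance += amount; return balance; },
    read() { return balance; }
  };
}

const acct = makeAccount(100);
acct.deposit(50);
const visible = acct.read();   // 150
const leaked = acct.balance;   // undefined -- no property path to it
```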

Other libraries

By other libraries than the DOM requiring classes and even interfaces to reflect in a first-class way, I mean toolkits including Adobe Flex for Flash, the WPF and Silverlight frameworks from Microsoft, the Java reflections in JS, etc.

DOM extensibility

Browser programmers absolutely do want to subclass DOM types, or more generally to mock up and self-host DOM-like abstractions that play nicely with the built-in DOM classes. Again this requires novelty in the language, and the canonical way to provide it is something very much like what we've done: classes and interfaces.

This demand goes beyond the DOM, to general browser extensibility (see the Self-hosting sub-section in the overview). Browsers have a cliff below which lie ActiveX and the Netscape Plugin API. On the safe side of the cliff, crowding ever closer to the edge, are today's limited Ajax apps. Tomorrow, there should be no unreasonable limit to what you can do in downloaded JS2 -- there should be no cliff apart from the one wisely warning people away from binding to native platform DLLs.

Language "mood"

My point about Python having classes was more that a dynamic language can have some amount of classical OOP support and not be flamed. But I take your point. What this comes down to, more than arguments over the sufficiency of prototypes making classes redundant, is the "mood" of the language. And about that, people are very "fighty" right now.

I'm a pragmatist. I say choose your own mood, but if you need classes, they're there. Why should JS be gratuitously different for such use-cases? I have my own taste, but I won't confuse it with necessity and stubbornly dismiss arguments about the insufficiency of JS1, saying "work harder, more closures!"

Multi-paradigm already

A key phrase in the overview is "multi-paradigm". JS is not just Scheme, not just one style or way of doing OOP. This is true in both the language (a hybrid of Self and Scheme, with some brutal and even broken simplifications), and in its Ajax library ecosystem (Prototype feels like Ruby, MochiKit like Python, etc.).


About the size of Scheme vs. JS: Scheme is intentionally for teaching and research, so it can (try to) stay small. If you talk to people who have used Scheme in industrial settings, you hear stories similar to those of people who have to find and use Ajax libraries, except that inside a Scheme shop "silo" one can make provision for the right compiler and libraries -- on the web, non-interoperation is not an option (see also how the spec must over-specify compared to Scheme or C).

For anything like the web, Scheme, like JS1, is too small. You have to procure or invent a lot of library code just to make the core language usable in real-world settings.

Bigger is better

So it's emphatically the case that TG1's majority, including me, believes that JS should grow. Its over-minimization has imposed a big complexity tax on JS programmers, one that cannot be paid merely by piles of Ajax libraries. And the end-game is hollowing out the browser, so it becomes a minimal and more stable runtime for a web-wide world of JS that can do what today only plugins boast about.

All successful programming and natural languages get bigger over time. Eventually old forms (the subjunctive mood in English, sigh) dwindle away. With the Web this can take years or decades. The ES3-compatible subset will remain, and perhaps endure for most JS scripts (counting without coalescing copy-and-paste hacks that are shared widely and skew the statistics).

In this light, "JS1" certainly can continue to be the subject of primers and books for n00bs, so I wouldn't worry about that. But majority needs should not dictate majoritarian exclusivity: there are valid minority-share use-cases for JS that JS2 meets with optional types including classes.


Imagine a world with browsers exposing fancy text, 2D and 3D graphics, multimedia, and other low-level APIs; with shared-nothing concurrency for background computation, and SQLite available to web apps. This vision is not far away[1], but these primitives need JS2 for programming in the large and performance scaling.

Without JS2, the plugin cliff looms close. With JS2, browser vendors can get out of the way, and JS developers can drive innovation. You won't have to wait a year for the new Firefox, or three to five years for a new IE, to see material progress.


[1] Microsoft would tell you this vision is called Silverlight, Adobe would say AIR, and both would lock you into a single-vendor solution. There's no reason all of these facilities should not be available in cross-browser open standards such as the WHAT-WG is working on right now. But notice how both Silverlight and AIR are sold not only with fancier text and graphics APIs -- they variously tout more, bigger, and "faster" programming languages than JS1. This is not (just) over-engineering or marketing.


Thanks for a very detailed and interesting answer. :-)

Irreverent questions

"unlikelihood of optimizing away closure costs in most implementations (think of browsers for cell phones)" - you mean ES4, a much larger language, will be easier for cell phones?

"Browser programmers absolutely do want to subclass DOM types" - they can extend Array already. Subclassing DOM types is a browser API issue, not a language issue.

"fancy text, 2D and 3D graphics, multimedia... these primitives need JS2 for programming in the large and performance scaling" - are we really out of ideas for optimizing JavaScript?

Sugar, protein, bones

you mean ES4, a much larger language, will be easier for cell phones?

Yes. Don't be distracted by surface syntax, which is of course bigger. The runtime semantics (recall that strict mode is optional and no analyses are required by the spec) are not nearly as big, and much of the standard library is self-hosted. This design can be built on top of a modern, well-factored ES3 implementation, such as Opera's latest -- and Opera is part of TG1 and in favor of ES4.

Subclassing DOM types is a browser API issue, not a language issue.

Sorry, but this begs the question of why it should be so. Thinking like this, based on artificial or historical divisions of labor in existing browsers -- made worse by IE's monopoly stagnation of the web -- is exactly why there's a plugin/ActiveX cliff, and why plugins like Silverlight and Flash still tempt developers away from the open web. Your assertion is not its own justification.

are we really out of ideas for optimizing JavaScript?

No, of course not -- but you will not find room in page-load time or memory for the kinds of lambda-lifting (indeed, whole-program compiler) optimizations needed to make closures as efficient as class instances for integrity, private members, and inheritance. If JS had less mutability, it would be easier.

With runtime profiling or trace-based optimization you can do better, but closures have irreducible costs in JS, and compilers can't afford the code footprint for fancy analyses. This stubborn fact, combined with the idiomatic and frankly syntactically heavy/awkward costs of writing closures instead of classes, suggests that classes address a valid use-case not served by JS1.

Consider that the self-hosted built-ins, which are proving to perform better in modern VMs (see the footnote in the overview), cannot be expressed using closures.

Certainly, Mozilla and others will optimize JS closures. They simply are not the one-size-fits-all solution you seem to think they are. Not in JS, not in its common embeddings such as web browsers and the Flash Player.


Plugin cliff

Will ES4 allow me to play MP3s and videos and make OpenGL calls? In which browser/OS combinations? Under what security restrictions? That's the plugin cliff. Changing JavaScript is irrelevant. It is an API, API, API issue.

APIs -- and code to call them

Plugins are not needed merely for "APIs", or Flash would not itself contain an ActionScript execution engine -- under your theory it could expose APIs to browser JS only (same for Silverlight).

Advanced rendering APIs such as 3D canvas are coming to browsers. Guess what happens next when one tries to program them at high rate, with too few native types, using JS1 only?

Web developers can't simply program arbitrary APIs from JS1 and win against proprietary platforms competing directly with open web standards.

Security is of course critically important, and mostly orthogonal to choice of implementation language. Or do you trust compiled C++ code implicitly, just because it is packaged as a plugin? VM-hosted languages can be more thoroughly and flexibly secured, but that is a topic for another day.


I'm losing track

Re your points: (a) Flash isn't competing with JavaScript on script execution speed; it was actually slower before version 9. This didn't matter then, and doesn't now. The people writing 3D engines in Flash aren't complaining about VM speed; they're complaining about rendering. (b) I don't trust arbitrary plugins; I trust the Flash plugin because it's everywhere. I certainly wouldn't want arbitrary web pages to run at the same privilege level as Flash, including saving arbitrary amounts of data to my system.

And a little bit of my own perspective. I make online maps for a living. Maps can be written in JS, Flash, Java, SVG+JS, you name it. They're all pretty complex as webapps go, and all have major speed issues when you try to add interesting features. I should be your poster child, right? But making JavaScript run faster isn't even on my wishlist. JavaScript is okay. What's not okay is speed of DOM manipulation, HTML and vector repainting, DOM memory footprint, browser bugs etc. The ES4 proposal isn't fixing any of my problems with making online maps, but instead throws another heavy spec at browser developers to keep them busy. It's not just useless; it's actively harmful, like the 3 megabyte SVG spec.

DOM hides JS costs

My point about scaling is simple: a JS program calling OpenGL-ES methods on a 3D canvas will want the greater performance coming with JS2/ES4, and today will not compete with an AS3 (JITted) program in Flash 9.

Beware the hidden costs of JS1 mated to a C++ DOM. A lot of the overhead in method calling and memory use comes from the lack of a unified heap and type system. With JS2, more of the DOM can be self-hosted. This actually outperforms native DOM in many cases (same for the built-in classes).

When you look at a hierarchical instruction profile of Firefox, you see a fairly flat distribution, rarely an outlier to go fix. Yet major speed improvements are possible, and they're coming. The trick is to get the dynamic type conversion and method binding/dispatch out of the runtime, and trace the fast paths. This can be done with JS1 up to a point, but the DOM and other such interfaces have richer types than JS1 affords.

Browser bugs are always with us, but unifying memory management and type system machinery between JS and the DOM will reduce bug habitat.

I'll have more to say about this on my blog in the near future.


Speed, bugs

I try to move 30 map tiles (jpegs) simultaneously, and my frame rate drops. This is like 60 DOM calls per frame. How much of the slowness is due to method call overhead? I don't believe even 10%.

Firefox 2D canvas: I can draw a small line at 30 fps, and a big line at 2 fps. It's not method call overhead - the number of calls is the same. The true reasons: big spec, hasty implementation. CSS2, SVG, same story. Why will the ES4 story be different? The thinking certainly seems the same.

No no no no. Number of potential bugs is at least proportional to number of features. To fix bugs, stabilize features, don't grow them in a geometrical progression. Don't fix "will want", fix "want". Common sense.

I just got a comment from the lead interface developer of Russia's biggest online map. Translated from here:

Agree with the comments.

Instead of solving real problems, they make up new ones.

As for the introduction of classes and types, I'm afraid of it like the fire. It seems we can lose the beautiful language JavaScript :(.

Fix bugs only?

That's not an option. I can cite real DHTML benchmarks that do spend too much time in the glue between JS and the native code. Your benchmarks are real too. Neither kind excludes the reality of the other, yet you want fixes only for yours before any of the ones we track that do point to JS and its glue code can be fixed. Sorry, we have to take a broader view.

But here, I will make you a deal: file bugs on the two cases you mention (30 map tiles moved per frame, canvas scaling badly). Put relevant version, OS, and CPU info in the bugs. Cc: me on them. I'll help make sure they both get fixed.

As for "we can lose the beautiful language JavaScript", that's false. Nothing is lost, ES4 is a superset of ES3. Keep using what you like and esteeming its beauty.



I filed one of the bugs, but forgot to cc you. The bug number is 402690. The other one is trickier to distill, I'll get to it.

The "nothing is lost, just don't use the new features" argument is wrong. View-source/copy-paste becomes harder with typed code. Obfuscation, minification and other source-to-source transformations become harder with lots of new syntax. The number of people mixing and matching JavaScript will go down. The loss of simplicity is a real loss - see the transition from C to C++, eloquently described by yosefk:

a programming language is not exactly a tool. It is more accurately described, well, as a language. The key difference between tools and languages in the context of "blame" is choice. You probably don't choose to speak English - you do so in order to communicate with all the other people speaking English. When a bunch of people do something because other people do it, too, it's called "network effects". For example, if you want to work on a project for reasons having nothing to do with computer linguistics, and the project uses C++, you'll have to use C++, too. No choice.

I like to talk about a "utility" metric for code: the amount of useful work it does, divided by the number of convoluted, typed arguments you have to give it. Languages like Java encourage code with low "utility". JavaScript code typically has high "utility", because it has no classes.

The Web isn't moving from documents to apps. It's still 99% documents. Nobody's going to write LiveJournal in Silverlight until Google can search Silverlight. The "rich" 1% of the Web that ES4 is fighting for doesn't belong to "open standards" now, and never has.

Untyped code, typed APIs and toolkits

ES4 does not mandate typing, and type annotations will not be used much for a while, if ever, in most small copy/paste-able scripts. But think: do you copy and paste minified gmail JS? No. But apps such as gmail are where ES4 can shine. Structural types for APIs, nominal types for optimized/frozen toolkits, untyped code elsewhere. ES4 is a best-of-several-paradigms language.

It's hard to argue about the future, but ES4 is not aimed only at 1% of the web, or only at "rich" (internet applications, or whatever you meant). There is no one size that fits all, or even one size that fits 99%.

Good point about Ajax search engine indexing problems. Browser-personalized pages can't be indexed without local help. That is a possibility to explore, but it goes beyond the core language.
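For illustration, here is roughly what that mix looks like in the syntax proposed in the overview paper. This is a sketch of the proposal, not code any shipping engine runs, and the details may change before the standard is finished:

```
// Structural type: any object with these fields satisfies it,
// including untyped object literals.
type Point = { x: double, y: double };

function magnitude(p: Point): double {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Nominal type: a class with fixtures (fixed properties), suitable
// for optimized or frozen toolkit code.
class Pixel {
  var x: int, y: int;
  function Pixel(x: int, y: int) { this.x = x; this.y = y }
}

magnitude({ x: 3, y: 4 });   // an untyped literal passes the structural check
```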


JS gets the kitchen sink

Yesterday I wrote that supercompiling JavaScript would be a fun hobby project. With ES4 this becomes a Big Pain, like obfuscation, minification or any other source-to-source transformation.

Popular technologies face pressure to grow. Java's generics, C#'s LINQ, Scheme's R6RS, C++; CORBA, SOAP, SGML. It's sad to see JavaScript going the same way. It used to be the simplest and most powerful popular language, my favorite language.

You have to resist, people. Seriously: classes, metaclasses, virtual properties, annotations, tail calls, multimethods, nullable types, strong typing, interfaces, generics, packages, namespaces, pragmas, constants, destructuring bind, iterators and generators, array comprehensions?
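For readers who haven't followed the proposals, here is a small sketch of two items from that list, destructuring bind and generators. The spellings below are the ones JavaScript engines eventually shipped; the ES4 proposal's exact syntax differs in places (star-less generator functions, array comprehensions):

```javascript
// Generator: lazily produces values on demand instead of building
// the whole array up front.
function* squares(limit) {
  for (let i = 1; i <= limit; i++) {
    yield i * i;
  }
}

// Destructuring bind: unpack positional values into named variables.
const [first, second] = [...squares(3)];
console.log(first, second);   // 1 4
```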

Perhaps this is how things should be...

Instead of the 'web browser program', 'http', 'html' and a highly complex language like 'JS4', a simpler and potentially better solution would be to have an 'output viewer program', a structured binary transfer protocol, and an easy lispy functional language where code can be executed at compile time as well as run time.

Instead of having web servers send html pages through http, we should have application servers which send programs in binary form (not compiled code, but the source code represented in binary instead of text).

Instead of having html, we could have programs. If a programming language is structured appropriately, a program listing can appear as declarative as html is, but without its restrictions and the need to hand computations to external languages.

Instead of having web browsers, we should have output viewer programs which provide a graphical output viewport to downloaded programs.

Instead of having predefined GET/POST/etc operations, the language should define a simple, transparent and unified way to represent remote procedure calls.

Of course, the above conclusion is easy to make now, but it was hard 20 years ago. Developing such technological infrastructure is an evolutionary process, driven by needs.

But at some point, evolution must stop, rights and wrongs must be recognized, and we must start over cleanly. It will benefit everyone in the long run.

Feh, evolution

Interestingly, I don't think this conclusion is really any easier now than 20 years ago. Researchers knew all the same stuff 20 years ago; it's not like they didn't know what networks could do. Surely people were working on such systems.

HTTP and HTML succeeded because they were not the kind of system you describe. Documents clearly were not programs. Even a non-programmer could see all the moving parts and get a good feel for what was going on.

The interface between client and server was dead simple, which created two very healthy, very competitive spaces, allowing evolution to do its thing. We've all benefitted, and I don't think stopping evolution is a good idea (never mind "possible").

I had the opposite reaction

HTML is programs (cf. <SCRIPT>). GET/POST is a nice general RPC mechanism. Etc. My reaction was the exact opposite: everything he's asking for is essentially already there.



Yes, me too

Actually, that was my first impression too.

Then I realized that couldn't be what Achilleas meant. So I went back and re-read it. Now your guess is as good as mine, and I don't wish to put words in Achilleas's mouth, but I think he was asking for something different.

HTTP is RPC, sort of by definition--but not really. Or, you have to forget everything you know about RPC in practice in order to say that. When someone says RPC, they mean something like CORBA, SOAP, COM+, or Java RMI. (Or Alice ML, fine.) HTTP succeeded because it was not those things.

Some HTML documents are programs, but I imagine Achilleas is asking for something different: an elegant, general-purpose programming language (not JavaScript and certainly not JavaScript 2), in which paragraphs and hyperlinks are library features.

Now--this complicates my story, because people do actually want something like this now. But most people I talk to don't seem to care about the language not being JavaScript. They just want a language to write Google-Maps-like apps in. ECMAScript Edition 4 aims to be that language. I think it has a very good chance of success.

But my point above was just that someone could have invented that 20 years ago. They probably did. We got HTML instead for a reason. HTML was initially successful long before <script>, and I think precisely because it wasn't there at the time.

Some HTML documents are

Some HTML documents are programs, but I imagine Achilleas is asking for something different: an elegant, general-purpose programming language (not JavaScript and certainly not JavaScript 2), in which paragraphs and hyperlinks are library features.

All the output of a page should be library features.

HTTP and HTML succeeded

HTTP and HTML succeeded because they were not the kind of system you describe. Documents clearly were not programs. Even a non-programmer could see all the moving parts and get a good feel for what was going on.

It could also happen with documents being programs. The exact same html structure could be available to document writers, but it could have been programmable instead of hardcoded, giving the chance to people with better needs to evolve the standard without the need to embed other forms in it.

The interface between client and server was dead simple

I'm for simplicity, too. The model of downloading pages is good. The problem is what is inside those pages.

We've all benefitted, and I don't think stopping evolution is a good idea (never mind "possible").

I don't think so. Writing a web application today is a real struggle; creating a web application can take days, whereas the same functionality on the desktop can take hours to code.

No, dude

It could also happen with documents being programs. The exact same html structure could be available to document writers, but it could have been programmable instead of hardcoded, giving the chance to people with better needs to evolve the standard without the need to embed other forms in it.

Right, only programmers are a tiny minority among human beings.

To get off the ground back then, this hypothetical Web programming language would have had to be very heavily optimized for minimal friction when all you want to do is write text and links. Take that to its logical conclusion and you get wiki-markup. Take it just barely far enough to be an explosive success and the hottest thing on the planet, and you get HTML.

Writers aren't programmers. Generally speaking. Many are nerds. Few are programmers.

What made the Web grow? What made 14 kerjillion people download and install web servers, and write little "welcome to my home page" home pages? How many of those people were programmers? Have we forgotten what it was like?

Embedding a markup language

Embedding a markup language in an expressive programming language is not hard. See Ocsigen for a statically typed embedding of XHTML in OCaml. The only problem with this approach is sensible error messages for non-developers.
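A dynamically typed sketch of such an embedding, for flavor only -- Ocsigen's point is that OCaml checks the markup statically at compile time, which this plain-JS version (names invented here) cannot:

```javascript
// Minimal markup-as-library sketch: elements are function calls,
// documents are ordinary expressions.
function el(tag, attrs, children) {
  var open = "<" + tag + Object.keys(attrs || {}).map(function (k) {
    return " " + k + '="' + attrs[k] + '"';
  }).join("") + ">";
  return open + (children || []).join("") + "</" + tag + ">";
}

var page = el("p", {}, ["Hello, ", el("a", { href: "#" }, ["world"])]);
console.log(page);   // <p>Hello, <a href="#">world</a></p>
```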

No, the other problem is

No, the other problem is that (especially from a non-coder's point of view) the concrete syntax sucks elephants through gauze. By the time you've written a new one you've essentially reinvented HTML, complete with "code goes here" tags.

Right, but...

While I agree with this one I also can hardly imagine anything sucking more than the HTML/SGML/XML disaster of a concrete syntax that we are doomed with instead. And for ages to come.

From a basic user's point of

From a basic user's point of view, not having to put quotes around text is a big win. You want a markup language rather than a textup language.

That aside, I pretty much agree - having some quick escape for tags and a couple of types of bracket (one for parameters, one for quoting/tag-around-this-code) is much nicer.

But if the language is

But if the language is elegant, the writers (who are not programmers) need not know that they are actually programming. The API that implemented the page output could be declarative in nature.

Let's reinvent the web in another topic

This comment is not aimed at anyone in particular. I'd like to suggest that if people want further discussion about reinventing the web, that someone start a new forum topic for it, and keep the current thread a little more focused on things more closely related to ES4. (Yes, I know the subjects are connected.)

Yay, evolution

But at some point, evolution must stop, rights and wrongs be recognized, and start over cleanly.

In biological systems, this "start over cleanly" utopia happens only with mass extinction events, and without recognition of rights and wrongs, and with tragic loss of information due to the slate-cleaning.

Hindsight is never perfect, and it does not tell you where to evolve next. Compatibility is important for the Web, even though it can be a pain to engineer. Over time, bad old forms die off, but you can no more predict when or which ones, than dictate better forms.

In this light, ES4 is indeed righting wrongs and providing cleaner ways to do things. That it cannot, and should not, remove older forms (many of which are just fine) does not make it unfit -- quite the opposite. Right now, JS developers have to struggle to use it "in the large", and many are wooed by bigger languages that can promise better programming-in-the-large support. But the switching costs from JS to these other languages and runtimes are high, higher than the cost of using ES4 (the cost of implementing and shipping ES4 can be borne by a handful of browser vendors, for the common good).

Someone starting from a green field can afford to pick C# on Silverlight and/or WPF on Windows, for example. Most people, whether for commercial reasons or not, prefer to maximize "reach" on the Web. This favors using browser-based standards, including ones not "based" in plugins. And of course, many people are not starting from zero in an empty field of web content.

Jason's right, the Web would never have happened with any 20-year roll-up of conventional wisdom, frozen into a programming language. The Web requires distributed extensibility, backward and forward compatibility, error corrections in browser parsers, and increasing programmability of core browser functionality over time.

No one group or individual will command or control the single lispy protocol envisioned here. Didn't Curl try such a "better is better" approach?


Well, nothing you said

Well, nothing you said stops new and more efficient web standards from being created. They don't have to be compatible with old stuff, as long as the two (old and new) can run in parallel, and browser vendors could easily incorporate the new standards in their products for the common good, as you say.

And the new standards could be simpler and easier to implement. Actually, we don't need standards, we need a programming language. Any 'standard' can be covered if the browsers are programmable...

You know you're an architecture astronaut when...

"We don't need $criticalHighLevelThing, we need $lowerLevelThing, any $highLevelThing can be implemented with it"

XHTML utopia

Some in the W3C believed this around the turn of the century. XHTML2, XForms, and SVG would replace the existing web with clean, well-formed XML content languages. It did not and will not happen. Learning why not is the beginning of wisdom.

Super-programmability of browsers (which I support), or lack of it, has little to do with the reasons.

The Web evolves incrementally, by short path innovations that degrade gracefully in older browsers. Browsers continue to live under footprint pressure, which makes having N >= 2 parallel implementations of anything a survival disadvantage. And web content authors write hypertext, so expect and deserve backward compatibility including standard error corrections (i.e., HTML, not XML).

In the "Imagine" closing of my long-winded reply to Peter Russell, I point to a future where ES4 and advanced APIs together allow browser innovation to take off in a larger world of downloadable code, instead of depending too much on browser vendors. That's my dream, and it is a real place, not a utopia. It depends on steady, evolutionary progress -- not "rip and replace" and "do things twice".

Pay attention, please

JS2, not JS4.

Firefox is integrating the Tamarin VM and co-evolving it to support ES4 in Flash as well as Firefox.

We are supporting development of other VM-hosted languages. As noted at that link, we're also adding glue so that Tamarin can support JS2 in IE.

The web will support multiple languages interoperably some day, but not soon, and JS will remain the mandatory default language forever. If you bothered to read my comments earlier in this thread, you would have read the reasons for this, and perhaps come up with better arguments.

But your post is pie in the sky, I'm sorry to say. Erlang is not going to be embedded in all browsers, never mind in any one. The odds of a Java come-back in all browsers are low. The only realistic hope for an open VM commonly implemented and interoperating among browsers is Tamarin.



Programming languages shouldn't grow by piling feature upon feature. Instead a language should offer facilities to allow meta-programming, so that users can declare DSLs that are able to interoperate with each other.

There seems to be a disconnect between what you say PLs should be striving for, and the PLs you refer to as competition. I think few will say that Java and C# are small languages. If anything, those PLs are adding features at a much faster rate.

So instead of talking how technologies are getting simpler (smaller JVMs and Silverlight), I'm wondering what we are left with in terms of simple PLs and DSLs. Which programming languages do you have in mind? Or do you want the languages to get out of the way, and address the platform/VM directly?

Meta-programming in ES4

Meta-programming for bootstrapping reasons, and for emulation of magic-in-ES3 native types, was a conscious design goal of ES4 (or JS2 -- we are equating the MIME types). But meta-objects for ES3 are not enough: users deserve convenient syntax, which without macros means some growth in the surface language.

ES4 classes cannot be simulated by closures. ES3 allows mutation of any object, including a captured activation object.

Let's say a meta-object API was added to seal objects. That's still not enough, because classes also provide fixed properties (fixtures), with convenient syntax. Classes also provide instance-bound methods that cannot be hijacked. Simulating all of these using closures plus opt-in MOP hook calls requires a lot of source-level and runtime overhead.
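The hijacking problem is concrete in ES3 terms. This sketch (illustrative code, not from the spec) shows that closure-private state does not protect the methods themselves:

```javascript
// Closure hides the balance, but the object holding the method
// remains mutable: ES3 has no fixtures or instance-bound methods.
function makeAccount() {
  var balance = 0;                    // private via closure
  return {
    deposit: function (n) { balance += n; return balance; }
  };
}

var acct = makeAccount();
acct.deposit(10);

// Any code with a reference can replace the method after the fact.
acct.deposit = function () { return "hijacked"; };
console.log(acct.deposit(5));         // "hijacked"
```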

In a perfect world, perhaps there would have been macros and a MOP long ago. In the real world, we're growing the language ahead of adding macros (which I think we will add, eventually). To deprive users of improvements, including usable syntax, would be wrong at this late date (eight years after ES3).

For "how fast is Tamarin", see Chris Oliver's blog. For one micro-benchmark (Tak), it's 4-5x slower than HotSpot, when last tested. Not shabby. The CLDC JVM or a successor won't be HotSpot-speed either.

But really, you are missing something else in my arguments that does not depend on "seeing who wins": Tamarin is already in Flash, so it's much more widely deployed than a similarly small and fast-enough JVM. Distribution is everything, and you have to buy it or already have it. Sun will be rolling a stone up a hill, but they are way behind Adobe.

A final point that I've made before, perhaps not here: JVMs don't "do JS first, fast, and compatibly" (Rhino is not a drop-in alternative to browser engines).

The time for a JVM to have provided high performance to JS was years ago, before Sun apparently denied Macromedia a license for a smallish JVM on which ActionScript (a JS variant) could be hosted. There's no going back now.


Can we tone down the

Can we tone down the rhetoric a bit? It will help people see the more informative points you are trying to make.


I meant "pie in the sky" in the nicest possible way ;-).

The frustrating thing about this thread is the divergence between what people think might be possible in browsers, and the far more limited set of uplifts that can be engineered compatibly, and actually deployed to enough users to make a difference by inducing web content authors to target new languages.

Erlang, various JVMs, and the like are good to great pieces of work, but they are not going to get onto 90%+ of desktops any time soon. Mobile may be another matter for Java, but J2ME is apparently very fragmented by incompatibilities across phones and OSes.

I should add that Tamarin is only one of several advanced ES4 VMs in progress that I know about. Not all have clear paths to widespread distribution, but in total it looks like ES4 has a shot at being more widely supported on desktops than any other language. Like ES3 before it.


I wasn't replying to you

I wasn't replying to you (look at the indentation)!

Sorry for the misunderstanding.

Mmmm... pie!

It should offer support for closures, agents from Erlang and TCO (tail call optimization). A Lisp-like functional language would be perfect indeed.

Rewrite that as "It should offer [X]. [Y] would be perfect indeed," and every single programmer on the planet will fill in X and Y differently.

Taking my cue from Brendan, here's how I'd complete the sentence: "It should offer pie. Pie in the sky would be perfect indeed."

Waaah, pie!

The important point being that I'd like my pie down here where I can actually eat it! Given a choice between unachievable perfection and achievable improvement I know which I'll take.

More material

If you are interested in a closer look at the relationship between ES3 and the current draft ES4, here's a new document detailing the proposed incompatibilities with ES3. Backwards-compatibility is an extremely important issue to TG1, and always has been.

Also, we've released another build of the reference implementation, now with binaries for Windows and Linux -- I encourage anyone interested to download and try it out (keeping in mind this is still an early pre-release).