Google's "The Future of JavaScript" internal memo leaked

Note: Saw this on Sunday (9/11), but waited for it to go viral before posting it here.

A leaked Google memo, The Future of JavaScript, from November 2010 is being circulated around the Internet, outlining Google's supposed technical strategy for Web programming languages. Google plans to improve JavaScript, while also creating a competitor to JavaScript, Dart (ex-Dash), that it hopes will be the new lingua franca of the Web.

Ironically, I saw this leak via a Google Alert keyword search. It has propagated to at least GitHub, the DZone social network, The Register, and InformationWeek since Sunday.


Interesting, too soon/closed to say, and slightly frightening

The sender of the mail is Mark S. Miller, who is well-known, beyond Google and his recent work on JavaScript standardization, for his continued work in capability-security circles and on the language E. This is only the first sign that this memo is LtU-relevant.

I found it interesting (interesting that Google is still looking for a language-based solution) and exciting. I've been disappointed, however, that this project is apparently so closed. I found it a bit disturbing to see this level of secrecy among people who come directly from the research community, and who surely value openness and the diffusion of ideas.

It's hard to say much more before the technical details are available. I'm interested in knowing what they are thinking of (frankly, with the goals they have, I don't expect a revolution, but you never know).

Brendan Eich has reacted very strongly against this effort. In a nutshell, he's afraid it may "fragment" the web, and he apparently sees it as a single-vendor product trying to bypass openness and standardization.

I would myself welcome something like a "web assembly" suitable as a target for programming languages, so that I may choose whatever language I like -- as we do for desktop applications. I'm a bit surprised Google doesn't try to push in that direction; if you restrict yourself to a single language, it will probably always be irritating for some people in some problem domains. The "performance" goals of Dash may help make it better suited as a backend, but the next JavaScript standard proposals are also doing quite well in this regard.

See Brendan Eich comments about the evolution of JS and some tangent shots at Dash/Dart on his blog. See also the discussion on Hacker News, with numerous comments from Brendan.

Some quotations from the leaked memo that I found particularly LtU-relevant:

Dash is designed with three perspectives in mind:
- Performance [...]
- Developer Usability -- Dash is designed to keep the dynamic, easy-to-get-started, no-compile nature of Javascript that has made the web platform the clear winner for hobbyist developers.
- Ability to be Tooled -- Dash is designed to be more easily tooled (e.g. with optional types) for large-scale projects that require code-comprehension features such as refactoring and finding callsites. Dash, however, does not require tooling to be effective--small-scale developers may still be satisfied with a text editor.

Dash is also designed to be securable, where that ability does not seriously conflict with the three main goals.

(Where security conflicts with performance, performance comes first? Is this the right priority choice for the "lingua franca of web development"?)
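The "optional types" bullet deserves a note: Google already retrofits exactly this kind of tooling onto plain JavaScript via the Closure Compiler, whose JSDoc annotations let a checker type-check calls, rename safely, and find callsites while the code remains ordinary JavaScript. A small illustration of that existing approach (the function here is invented; presumably Dash would move this information into the language proper):

    // Closure-style optional typing: the types live in comments, so any
    // browser runs the code unchanged, while the compiler can check it.
    /**
     * @param {string} user
     * @param {number=} retries  Optional, but still typed when present.
     * @return {!Array.<string>}
     */
    function buildLog(user, retries) {
      return [user + ':' + (retries || 0)];
    }

    buildLog('alice');     // fine: retries may be omitted
    buildLog('bob', 'x');  // the compiler flags this; a browser would not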

Dash will be designed so that a large subset of it can be compiled to target legacy Javascript platforms so teams that commit to using Dash do not have to seriously limit their reach.
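Compiling a "large subset" to legacy JavaScript presumably means the usual lowering that every compile-to-JS language performs. A purely hypothetical sketch -- Dash's syntax and compiler are not public, so both sides of this example are invented -- of how a class might land in ES3:

    // Invented Dash-like input:
    //   class Point {
    //     var x, y;
    //     Point(this.x, this.y);
    //     sum() => x + y;
    //   }
    //
    // Plausible ES3 output: constructor functions and prototypes, since
    // ES3 has no classes or modules to compile onto.
    function Point(x, y) {
      this.x = x;
      this.y = y;
    }
    Point.prototype.sum = function () {
      return this.x + this.y;
    };

    var p = new Point(3, 4);
    // p.sum() === 7 in any ES3-era browser -- no new VM required, which
    // is how committed teams would avoid limiting their reach.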


Q: How will Dash interoperate with the huge body of existing JavaScript (JQuery, Analytics, etc)?

A: Moving to a new language will be a very large undertaking. The specifics of how inter-operation with current Javascript would work is still an open question.


Q: How will we get Harmony related changes into Chrome?

A: Very carefully ;-). V8 is carefully tuned for speed with the current Javascript standard rather than flexibility--this makes it very difficult to make experimental changes. We are considering pre-processors and a number of other options, but ultimately the precise solution is still an open question.

I had somehow never thought of this. Being interested in languages and their implementations, I was always happy to see something more exciting than an AST-walking interpreter coming to web browsers, and to see people interested again in old research in programming languages (Smalltalk, Self...).
But the large investment in subtle (and brittle) technology around JavaScript performance has also made the language harder to change. This may have a net effect in the coming years, especially when the hype and marketing around JS performance die down and the costs of the large engineering teams become harder to justify.

Q: What about Go?
A: Go is a very promising systems-programming language in the vein of C++. We fully hope and expect that Go becomes the standard back-end language at Google over the next few years. Dash is focused on client (and eventually Front-end server development). The needs there are different (flexibility vs. stability) and therefore a different programming language is warranted.


We will strongly encourage Google developers start off targeting Chrome-only whenever possible as this gives us the best end user experience. However, for some apps this will not make sense, so we are building a compiler for Dash that targets Javascript (ES3).

Open development

I've been greatly confused by the echoing of objections over Google's closed language design/development phase. Perhaps the sharper minds and cooler heads over here can enlighten me. (Also, stop this fast if it is too offtopic or too religious. There's plenty to discuss on the topic of web-language-stack improvement.)

I can't see how "open design" is in any way a requirement for programming languages, whether they're aimed for the "web stack" or not. Every industrial-grade language used anywhere in the web implementation stack (C++, Java, PHP, HTML, Python ...) started out as a closed design, sometimes even with a closed implementation, and then gained varying amounts of openness over varying lengths of time. Even JavaScript had a closed design phase, though it standardized early (I think a year into its life). Even then, Netscape had a jumpstart on other browser vendors in implementation.

It has been rather a tradition of successful language designs to come from small, focused groups. Languages hailing from committee (most famously ALGOL68 and Ada) have a track record of baroqueness and poor adoption, ironically or not. If I put together a nice language for my own use, especially in the context of my employer, can I expect such a harsh reaction if I eventually open it up? This would seem to be a negative influence on the language community, both users and designers.

Closed design

I don't believe gasche was objecting to closed design/development. He only notes his position is quite speculative because he lacks access to details.

I'm all for closed language design, so long as the language is open after release. It seems to me that 'open design' - or design by committee - is a recipe for complexity, with a lot of local optimizations and agendas.

I'd go further...

I'd make a stronger statement. Design by committee has a bad track record, sure, but so does any language that's designed in public, by whatever means.

I think languages are best born in darkness. Languages which are introduced to the world too soon (before having a half-decent implementation, stable syntax and semantics and some modicum of design cohesion and sensibility) tend to be laughed out of existence, ignored, or meander off into endless design revisions before getting anywhere. Not universally true, but more often than not...

"closed" versus "open"

Thanks for your clarification; indeed, "too closed to say" only meant that we can't say much before we have more technical details.

I would like to highlight that there are compromises between starting a secret language design project that stays underwater for more than a year, with the participation of known researchers, and a "design by committee" approach where every reasonable person is invited to participate actively in the design.

So I think "closed" versus "open" is a bad dichotomy in this case. I would rather speak of "secret" versus "public".

The most common thing, in my view, is a small group of people designing a language while discussing/publishing what they're doing. For example, the Fortress team discusses what they are doing on a blog. That's also the norm for most research teams in academic settings.

I can think of a lot of reasons for people not to talk outside their group about the language they're designing: strategic choices (creating a surprise effect), fear of releasing too early and being flamed by critics, etc. They're certainly justified in some cases, and I'm not saying there is a moral obligation for language design to be open. I'm just saying I found it a bit strange in this precise situation.

Curious generalisation

You're implying that Ada is more baroque than C++ in your post, really??

IMHO the early tooling cost is the main reason why Ada failed and C++ won, which has nothing to do with design by committee or not.

Indeed, I was being overly

Indeed, I was being overly broad with my brush-strokes there. I've also marveled at the complaints of complexity against, e.g., ALGOL 68 when it's so much simpler than C++.

Instead, I should have said something like "immediate" or "incremental" complexity of the specifications. This complexity has both psychological and material consequences. Materially, designing new features in an implementation vacuum leads to hare-brained leaps (export templates in C++ being a recent example). Psychologically, a spec that is unimplemented and divorced from current practice plays out as "too complicated". I understand this effect to have harmed both Ada and ALGOL 68 adoption -- thus why I mentioned them.

As languages age, changes by popular demand and committee bargaining bring in exactly that baroque complexity so derided in initial specs (C++ and COBOL being the standard candles of this process). By picking up the complexity piecewise from practice, rather than pushing it onto practitioners, the complexity thus seems more "natural". It's the standard frog-boiling paradigm.

I'd argue that, to a very

I'd argue that, to a very large extent, C++ is the poster child for design by committee. The vast bulk of the C++ language, including pretty much everything that people consider baroque, was added by the ANSI standards committee.

How about this suggestion:

How about this suggestion: It is not about the number of people involved, but about the number of conflicting interests.

Corollary: As languages gain more widespread use, they are pushed to become more baroque. It is then that a strong hand at the helm is necessary, but also when the pressure is on to move the language into the hands of standardization bodies.

Dictatorships

Seems like there is a benevolent dictator behind most successful languages. Even C++ was initially standardized from a language designed by mostly one person (Stroustrup). You need one voice to shape the language's feel and keep the design consistent.

The initial language design is very important, since if the language is successful it will probably have to evolve via committee.

Suppose we want to test this

Suppose we want to test this empirically. How many languages with any chance of success (i.e., not clearly niche efforts) were designed "by committee" and how many were designed by individuals? Are the samples even of relatively the same size? Are they enough to support any kind of generalization?

The population of new

The population of new languages that gain popularity isn't very large, while design-by-committee languages are usually labors of industry vs. labors of love, and have resources to back them up. I don't think you'll find enlightenment in just the numbers, you'll have to draw more on non-objective experience.

Regardless, most established popular languages must evolve by committee, which is where success and failure are more evident.

Agreed on both points.

Agreed on both points.

* VBScript, anyone?


The new Microsoft

OP>> Google plans to improve JavaScript, while also creating a competitor to JavaScript

CW> I've been greatly confused by the echoing of objections over Google's closed language design/development phase

I didn't have a problem with it when MS had the same strategy with Java/J#, etc., but certainly there were plenty of objections.

Response from Googler and TC39 member Alex Russell

Alex Russell responded in a blog post here.

Google is big, can do many things at once, and often isn’t of one mind. What we do agree on is that we’re trying to make things better the best we know how. Anyone who watches Google long enough should anticipate that we often have different ideas about what that means. For my part, then, consider me and my team to be committed JS partisans for as long as we think we can make a difference.

After I read that post, some of the concerns that people had based on the original memo seem overblown. But a lot will depend on how Google treats JS going forward.

I thought Brendan's comment that Googlers have a "better is better" bias was pretty spot-on; if he's right that "worse is better" wins, then it's a foregone conclusion that Dart will remain a niche language. Even if Dart doesn't take over, though, it may still achieve its goals (just like Chrome has achieved its goals by influencing the development of other browsers).

"worse is better"

I thought Brendan's comment that Googlers have a "better is better" bias was pretty spot-on; if he's right that "worse is better" wins, then it's a foregone conclusion that Dart will remain a niche language.

Unless Dart, like Go, is actually "worse"... ;)

Some things are a victim of their own success. This is more like being a beneficiary of your own failure. IMHO, of course...

I am glad to see Google

I am glad to see Google getting more and more into the language business (which I recall they weren't quick to embrace). I am sure Google's impact on the language business is going to be positive, regardless of the merits of their specific contributions. I am not as sure about other markets they are disrupting, but that fortunately is off-topic.

Agreed

Very much. I'm happy to see Google investing in language design, even if I'm not a huge fan of the outcomes.

There are others

I can't share names, but language people in Google aren't particularly impressed with Go either. At the very least I can say the ground is fertile.

While totally

While it's totally understandable, I find it disturbing when people don't discuss their views with outsiders due to a corporate culture of secrecy, corporate policies, or corporate loyalty, regardless of the level of publicness the optimal design process should have. This is one problem with languages designed in the commercial world outside old-school R&D labs.

Names changed to protect the guilty

Oh, I've discussed my views about Go publicly. I'm just not willing to discuss what other people have told me in confidence. What I meant about the "ground being fertile" is that the idea that Google can and should be investing in new languages is gaining traction. There is a large feeling of needing better tools. We use a lot of DSLs internally and have for a long time.

Personally, I don't think languages with open design processes are all that well off. If anything, the open nature of the debate means that all features tend to regress toward the mean, resulting in a bland, squishy bag of everyone's favorite features, none of which work together well. The only bold things that survive are those truly horrible hacks put in by some lone prima donna grandfathered into the project whom nobody has the guts to confront.

How much can a single breakthrough influence a whole language? In one of those design-by-committee languages that inevitably seem to come out of "open" design processes -- not a whole lot.

As for corporate loyalty, I didn't inherit that gene. But I have been trying to develop a level of self-awareness regarding which things my opinion can influence.

I of course didn't mean you!

I of course didn't mean you! You are posting here, aren't you? I was making a general observation. I thought about adding a clarification when I first posted, but decided it was clear enough that the complaint wasn't directed at you.

What design-by-committee languages?

Original design, I mean? Some living examples, or examples in living memory, would be good.

Ada does not count, from what I knew (ROLM self-hosted Ada compiler hacker in my grad student summer intern days -- I miss the 80s ;-).

Committees take over once the designed language is more or less done and actually used enough to generate standardization effort.

Designing in the open but without a committee, with a moderated mailing list, is not design by committee, and not a huge hardship in my experience.

/be

During my first year of grad

During my first year of grad school, my adviser gave us a fascinating doc on the design of Modula-3. It was in the form of meeting minutes showing how various committee members (researchers from SRC?) debated design decisions. I didn't really appreciate the document then, and I regret that I can't find it today. Has anyone else seen this before?

Yeah, I think I've seen it.

Yeah, I think I've seen it. If it's the same document as I seem to remember, it was (only somewhat) tongue-in-cheek, no? With some aliases for the various committee members? Though I think one could work out which aliases went with which member, if you knew enough about the context.

I seem to recall it was in an appendix of one of the books on Modula-3.

JavaScript just needs to evolve...

Given that Gilad Bracha and Lars Bak are the designers of Dart and its VM, it could be that Dart will indeed be novel and compelling. So, from a language perspective, great. Go Gilad and Lars!

Still, back on Earth, I vote for a better JavaScript - a modern JavaScript, a repaired JavaScript - not something net new to replace it, if that's in fact Google's goal. Why would browser makers take on a new VM? Because Google thinks it's the right thing to do? What is this, the 90s?

At any rate, I look forward to seeing what Gilad, Lars et al come up with and deliver. I'm a big fan of these engineers.

C

Kasper

Gilad joined the project quite late, only in the last two months or so. Kasper Lund deserves a lot of credit for Dart and hopefully the upcoming presentation will clear that up a bit.

Apologies to Kasper...

I'm not following this project too closely, but I do follow where Gilad ends up. Sorry to Kasper for the lack of due credit.
C

back on Earth, I vote for a

back on Earth, I vote for a better JavaScript - a modern JavaScript, a repaired JavaScript

MarkM, Google, and others are working on that, too. This is not an either/or prospect.

Why would browser makers take on a new VM? Because Google thinks it's the right thing to do?

Nah, they'd do it because people will start writing websites using Dart (and the Dart-to-JS compiler where necessary), because it's fun to jump onto a new bandwagon, and because - no matter your opinion - there are a lot of people that do not believe that JavaScript will be fixed by evolutionary means.

Evolving JavaScript in a good direction is hard, precisely because it is standardized and subject to a wide variety of interests in committee. Brendan Eich speaks of 'filling a complexity bucket'; might as well be talking about How the Camel Got His Hump.

I'm not sure Dart will be a better web assembly language than JavaScript, but I expect/hope it will be a better substrate for building a better web assembly language - what with two capability security guys (MarkM, Gilad Bracha) who have done a lot of work on distributed programming heading it.

Clue: Mark M. not working on Dart

Mark's name is on the memo but he didn't write it, and Gilad joined big G only this spring. The memo looks like a committee product, with fragile/breaking consensus and bogus (iOS wins cuz of Obj-C? I lolled) substitute rationales.

Google is not serving both open-web and proprietary-plan-B masters well, so much as doing whatever individuals with high rank want. The real doers from Google on Ecma TC39 are fewer than the long list of names cited on the memo. Now consider how much better TC39 could have made ES6, what roads we missed, if we had Dash-informed input before next month -- say 1.5 years ago.

Web standards are hard, JS is no exception. If Dart becomes one, it'll face the same hardships. But it can't get there from here by being too complex and too closed. That's how it looks right now, and it can't very well help be otherwise with two-year incubation in the dark.

Google's stealth-mode development of V8 guaranteed Apple's JavaScriptCore had to compete and make its own way. This is a permanent fork in WebKit, with an awkwardly shared DOM embedding API. These delayed-open decisions have real costs.

"Better" by itself has nothing to do with winning adoption on the Web. "Closer", "fewer moving parts", and "good enough" (great counts if possible) matter more. See Collin Jackson's excellent USENIX Security keynote.

/be

When seeking revolutionary

When seeking revolutionary improvements, society as a whole can accept a low batting average. Even if Dart isn't successful, I consider the work on it - that a large group is taking the appropriate risks, despite expenditure - to be a good thing.

An advantage of delayed-open is that there is (a) more time to simplify the design (it is much more difficult to remove features than to add them), (b) more time to judge whether the design makes any real improvements, and (c) much less commitment to a project that they might later decide to cut. Mitigating risk is important, and I do not begrudge them for it.

As to your 'dash-informed input', I doubt you'd achieve anything differently with it. It isn't as though the JS community is short on ideas. The problem is that risking a mature standard with new ideas is ridiculous, and trying to change the 'foundational' ideas of an established language is counter-productive (even if it would result in a better/closer/simpler model).

The Rust language seems to

The Rust disclaimer ("This is a very preliminary work in progress. No supported releases yet nor defined release schedule / plans. Caveat emptor. It will crash. It will change syntax and semantics. It will eat your laundry. Use at your own risk. Etc.") seems like a reasonable guard against anyone over-investing in a work in progress. Incubating something that is to become a standard is hard, for sure, but it's a matter of weighing letting others benefit from seeing the development path against the additional burdens of open development. Sometimes all it takes is a couple of big scary-looking banners to keep from getting burdened down, and sure, sometimes it takes a lot more.

Of the three advantages you list from delayed-open, which do and which do not apply to the developed-in-public Rust language?

Second reply (first was orphaned to top level)

Those advantages are not exclusive to delayed-open. Every one of the three you assert can be gained with early-open, in my experience.

Mozilla does early-open (lately: Rust, B2G) and we put appropriate warnings up. More to the point, for JS evolution we prototype only what has been proposed in Ecma TC39 and either has already made it to harmony:proposals, or has achieved strong strawman status, with good odds of being promoted (but in need of prototyping to make the case).

I've argued on hacker news at length that doing delayed-open will not make a new de-facto standard without a lot more market power to win adoption and force competitors to adopt or reverse-engineer. That market power is >50% share, more like 80% from what happened in the '90s. Google doesn't have that and won't get it soon enough even on the most optimistic assumptions.

Google was going for wow-effect and (I surmise) trying not to rattle any cages or draw fire early. That's a fine choice to make if the goal is to push Dart as proprietary (open-washed or not). For a proposed web standard, it's backwards.

/be

Excitement factor

I imagine Rust will remove its warnings, some day, and nobody will notice because they stopped looking. I don't know much about marketing, though, so I could easily be wrong, but there might be some benefits to dangling a tantalizing project without immediately sharing it.

Whether delayed open or early open, I think producing a 'new de-facto standard' has the same problems with adoption. It is not clear to me that time of opening is a significant factor.

Personally, I would not want to be constrained by promises to a community, nor even by my previous efforts while trying to find a programming model that can truly advance state-of-the-art for open distributed systems. Standardization is useful when it is time for a language to sit still and support settlers.

Would Google discuss Dash/Dart at all, were it not for the leaked memo?

Try working with competitors in a standards group

I'm sure Rust will do fine whether you keep looking or not :-|. Seriously, that's a silly argument. People who lose interest because an early-open project does not give them a free lunch are generally not going to contribute, and they can be recovered later through more traditional outbound marketing.

But this is not an issue for Rust. Rust is not for the web -- it doesn't need interoperable implementations based on a detailed spec (not yet, anyway), or early Looky-Lou users who leave for want of a finished product.

I think producing a 'new de-facto standard' has the same problems with adoption. It is not clear to me that time of opening is a significant factor.

You don't surprise other browser implementors with late-open and market-power moves if you want to standardize something in the existing standards bodies and market share structure.

Maybe that's not the goal, though the leaked memo did talk about standardization. Talk is cheap, but actions have consequences. What your "Excitement factor" lede suggests is that Google is fine going the market-power de-facto route alone.

That's anti-social in the standards bodies and anti-open-web. It could work, but I doubt it. It won't lead to any kind of predictable standardization.

/be

Standards are for clients, not competitors

I don't see the early history of JavaScript as any different. You wrote up a language in a couple weeks, with minimal external input, then a big company pushed it in a de-facto market power play, and you only 'standardized' it later.

I think this is okay when you did it, and I think it's okay if Google tries the same thing. (I even think building a whole new, mostly incompatible web is okay. It's certainly my own plan, since I think stateful DOM is a huge historical mistake.) But perhaps this is due to my perception of browsers as competitive application platforms (similar to different OS's, but with a focus on distributed apps).

Standardizing is something that can wait until after demonstration. And the main reason for standardization is NOT to support your competitors, but rather to support your clients, who wish to provide user-generated content and extensions and who can't be having their target languages and APIs change too much underfoot, and who are assured by the standardization.

Google can provide a Dart-to-JS compiler. Such is really for clients, who wish to retain the 'reach' of JavaScript. But it is sufficient to initially support the competition - to avoid being 'evil'.

learned nothing and forgotten everything...

... unlike the Bourbons.

Standards are for competitors and clients, not just for clients. Competitors want interoperation so they can compete on quality of implementation, not be locked out. That's why Microsoft wanted JS standardized and threw a big brain at it in 1997.

I don't see the early history of JavaScript as any different.

This isn't about me, and I never said it was. Netscape had 80% market power and pushed a bunch of stuff, not all good -- better than what would have been pushed by Microsoft, IMHO, but who knows? VB lost to JS, that much is clear.

The point is not that Netscape did it, so it's ok. The point is that Google cannot pull off such a power move because it lacks market share.

I think it's okay if Google tries the same thing.

No, because the market is different now.

Google doesn't have 80% market share, not even close. So it's not ok, while the balance of browser market players are trying to collaborate on interoperable specs, for Google to spring such fragmentation grenades, because Google will fragment the web if these anti-open gambits (see the Hacker News thread for a more complete list -- it's not just Dart) succeed.

Apple and Microsoft doing the same would not increase interoperation, quite the reverse. Don't assume they won't try their own (I joked Apple's would be sleeker, so "Flechette", and someone at CapitolJS beat me to the Microsoft name: "Javelin" -- bigger is better), or that they'll simply roll over and implement Dart -- that's not going to happen without Google taking something like 80% of the market overnight.

Google can provide a Dart-to-JS compiler.

That would, as the hacker news thread covered, result in substandard performance in browsers that do not support Dart with a native VM. New number types, remember?

Your argument keeps shifting. First it was a "closed design" vs. "design by committee" false dilemma. Then MarkM was, according to you, working on Dart (he wasn't and isn't). Then it was revolutionary risk being good for society, and early-open bad for language designers. I replied and you moved on. Are we having fun yet?

I claim there is a social good in developing interoperable web standards. Yes, you can try to write such standards later, after having used market power to create a de-facto standard as Netscape did with JS. That's not necessarily good or bad, but it *won't work* without that market power. And Google does not have that power now.

The leaked memo (this thread's topic) shows a two-faced and anti-open-web strategy, about which I've already heard negative reactions from competing browser vendors who are active in the standards bodies. This is a problem for Google. Their launch will go ahead as if they are kings of the world, but the chance to land Dart in a standards body, or better compiled-to-Dart support in JS standards, was passed over for this false glory.

/be

Not the size that matters

I'm not sure it's the absolute market share that matters, so much as the positioning. Android, for example, will likely be another Dart client.

Further, Dart will - by nature - need to compete in terms of quality rather than reach.

Google will fragment the web if these anti-open gambits (see the Hacker News thread for a more complete list -- it's not just Dart) succeed.

The web won't fragment, though it might fracture a little. We'll still have some 'lowest common denominator' glue holding everything together.

That would, as the hacker news thread covered, result in substandard performance in browsers that do not support Dart with a native VM. New number types, remember?

Yes. That's the point. Having substandard performance is important if you want to provide technological pressure to actually implement Dart. And, if the authors of Dart truly feel a switch would improve society, it is reasonable to provide pressures to that end - social, market, technological.

Your argument keeps shifting.

That's not quite accurate, though I understand how you could gain that impression. My argument keeps growing.

MarkM's absence from Dart doesn't entirely Dash my hopes for a capability secure distributed language design, since Bracha is still involved and has done related design work for Newspeak.

I agree that we can 'share' our designs early without turning it into design-by-committee. But I never actually made a design-by-committee argument. I think you have confused a position about why evolving JS is difficult with a separate point about avoiding commitment and risk for an immature language design.

The arguments about Rust seem to expose a misunderstanding about the nature of risk. If you walk through an unknown minefield and don't get hurt, you did not "avoid the risk", you only avoided the consequences. Putting up an "I'll eat your laundry" banner may have mitigated Rust's risk a little, but Rust is still at risk of forking and other situations by anyone who decides it's good enough and begins using it.

The risks for Google with Dart are not the same as the risks for Mozilla with Rust. They have every right to mitigate their risks however they feel is necessary. This includes reputation and commitment risks.

With regards to your arguments about 'adoption' and market power, those don't really concern me. I consider fracturing the web an acceptable risk, perhaps even an inevitable stage in the long run - like breaking a twisted bone so we can set it correctly.

there is a social good in developing interoperable web standards

I think so as well, but I'm not convinced that 'backwards compatible' needs to be part of that, except insofar as a technology transition strategy is important to achieve adoption.

the chance to land Dart in a standards body, or better compiled-to-Dart support in JS standards, was passed over for this false glory.

I have no doubt that Dart, if Google remains committed to it, will eventually land in a standards body. But it won't happen before Dart is more mature.

Quibbling while the Web standards burn

I'm not sure it's the absolute market share that matters, so much as the positioning. Android, for example, will likely be another Dart client.

Who knows? Android already has to carry Dalvik, which chews memory and cycles.

The web won't fragment, though it might fracture a little. We'll still have some 'lowest common denominator' glue holding everything together.

Why are you quibbling about "fragment" vs. "fracture"? The "lowest common denominator" may be so low that people choose their silos and the Web becomes like AOL.

I doubt this will happen but many worry about it -- including those writing in the leaked memo who used fear of iOS eclipsing the Web to rationalize doing Dart and therefore not doing nearly as much as they could for JS.

And that's the concern: Google breaking current covenants required for healthy functioning of standards bodies, in order to keep evolving the Web with multi-vendor-implemented, interoperable specs.

Having substandard performance is important if you want to provide technological pressure to actually implement Dart.

That's considered dirty pool by competitors trying to cooperate in standards bodies.

Also, it may fail. If Dart-to-JS lowers new number type intensive code to slow JS using ints-in-doubles and even arrays (bignum emulations), the results may be *worse* than a carefully hand-coded pure-JS version of the web app.
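To make "ints-in-doubles" concrete: JS has only doubles, so 64-bit integer arithmetic must be emulated roughly the way the Closure Library's goog.math.Long does it. A minimal sketch of the kind of code a compiler has to emit for a single addition (split into 16-bit chunks so every intermediate sum stays exactly representable in a double):

    // A 64-bit integer as two 32-bit halves stored in JS doubles.
    function makeInt64(hi, lo) {
      return { hi: hi | 0, lo: lo | 0 };
    }

    function addInt64(a, b) {
      var a48 = a.hi >>> 16, a32 = a.hi & 0xFFFF;
      var a16 = a.lo >>> 16, a00 = a.lo & 0xFFFF;
      var b48 = b.hi >>> 16, b32 = b.hi & 0xFFFF;
      var b16 = b.lo >>> 16, b00 = b.lo & 0xFFFF;

      var c00 = a00 + b00;
      var c16 = a16 + b16 + (c00 >>> 16);  // carry propagation, by hand
      var c32 = a32 + b32 + (c16 >>> 16);
      var c48 = a48 + b48 + (c32 >>> 16);

      return makeInt64(((c48 & 0xFFFF) << 16) | (c32 & 0xFFFF),
                       ((c16 & 0xFFFF) << 16) | (c00 & 0xFFFF));
    }

Every add pays that masking and carry cost, and multiplication is worse -- which is exactly how compiled output ends up slower than hand-written JS that stays within the double's 53-bit integer range.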

Will non-Chrome users of gmail then do what Google wishes: either switch to Chrome or put pressure on their browser's vendor to support Dart natively; or will such users find competing webmail services that work better by using JS to full effect, including the latest JS VM optimizations and Harmony prototype and even standard (ES6 is coming) features?

It's an open question, but secondary to the main point: Google is breaking bad in standards bodies and this is already harming open-web standards work.

My argument keeps growing.

No, shifting. New and different assertions or unbacked claims or hopes, no justification for rebutted old ones. But it's fine with me!

Bracha joined Google in June. Dart has been going for ~2 years. It's really an Aarhus thing.

I've confused nothing about your arguments. You wrote "I'm all for closed language design, so long as the language is open after release. It seems to me that 'open design' - or design by committee - is a recipe for complexity, with a lot of local optimizations and agendas." That pair of sentences clearly set up two alternatives (the "or" interjection before "is a recipe" does not make a third alternative). Furthermore, the context was Dart design, not JS.

Dart could have been developed in the open, just as V8 could have been (and was mooted to be briefly, to me, in 2006). Google chose otherwise, and not because early-open inevitably results in "complexity" or "design by committee".

The risks for Google with Dart are not the same as the risks for Mozilla with Rust.

That's true, but way to miss the big picture. Who cares about risks to Google's or Mozilla's reputation? The bigger risk is to the Web from fragmentation, to the standards bodies falling apart with competitive gaming and proprietary market power moves.

Rust is not a proposed web content language. It cannot fragment the Web. It is nothing like Dart as a "JS replacement."

Your breaking a bone to untwist it analogy would be better if there were coordinated effort by multiple vendors.

But there is such an effort: Ecma TC39 introduced ES5 strict mode, and we are building Harmony on it. It's not backward-compatible. It requires opt-in. But it does not require a new VM, a separate GC and consequent cycle collector, etc. It requires very few runtime tests; almost all of its checks are early, compile-time errors.
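Concretely, the opt-in is a one-line directive, and the breaks mostly surface at compile time:

    "use strict";  // the ES5 opt-in: a directive prologue, not a new VM

    // Early errors -- rejected at compile time, no runtime test needed:
    //   function f(a, a) {}   // SyntaxError: duplicate parameter names
    //   with (o) { x = 1; }   // SyntaxError: `with` is gone
    //   var n = 010;          // SyntaxError: legacy octal literals

    // One of the few runtime changes: silent global leaks become errors.
    function g() {
      leaked = 1;  // ReferenceError under strict mode
    }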

Google completely bypassed this standards process and the opportunity it affords to break JS a bit better, to untwist it differently. All while open-washing and waving web-standards flags, often in Chrome-only marketing clothing.

That stinks, frankly, and it is breaking relationships in standards bodies. And you ought to care about that, since if the standards bodies become badly dysfunctional again, we will have only market power games -- and not only by Google but by Apple, Microsoft, and probably Facebook.

but I'm not convinced that 'backwards compatible' needs to be part of that,

That's a straw man. Pay attention to web evolution. Contrary to some HTML5 claims, it does involve compatibility breaks.

We can move forward, but not with surprise-ware proprietary moves by minority-share vendors. That's unlikely to do more than make a bigger mess that does not result in removing badly twisted old limbs after new ones have successfully grown or been grafted onto the Web body, by cooperating vendors working in standards bodies.

I have no doubt that Dart, if Google remains committed to it, will eventually land in a standards body.

We are still waiting for WebM, SPDY, Pepper2 (not the first one that bounced off the plugin-futures group). Your faith is touching, but if you mean that once Google conquers the world, they'll get rubber stamps from a captured standards body, who cares?

Meanwhile, last year, right now, and probably next year the way things are trending, Google is playing proprietary market-power games with Chrome extensions that are nowhere near ready to standardize (no spec), or to be implemented by other vendors other than by using Google-controlled source code (and lots of it).

/be

The "lowest common

The "lowest common denominator" may be so low that people choose their silos and the Web becomes like AOL. I doubt this will happen but many worry about it

There is no reason to worry about it. Service providers are interested in maintaining their reach with minimal effort, and browser providers are interested in maintaining a competitive edge, which means supporting a broad range of services.

There is no inherently stable market model that will partition the web. Modulo use of force (such as patent trolling) there will always be adapters at a 'good enough' abstraction/security/performance level.

The advantage of a unified web model is only the extent to which we gain some cross-cutting benefits from it - global optimizations, pervasive security, fine-grained mashups, greater reusability, lower education overheads, simpler failure modes, common resilience and recovery models, improved productivity.

So the fear shouldn't be of isolation, only of inefficacy.

If Dart-to-JS lowers new number type intensive code to slow JS using ints-in-doubles and even arrays (bignum emulations), the results may be *worse* than a carefully hand-coded pure-JS version of the web app.

I've been working under the assumption that a Dart-to-JS compiler will usually perform worse than carefully hand-coded JS. The possibility that it might occasionally perform better hadn't really crossed my mind. If the opposite is true, that might be a mild extra incentive to use Dart, though I expect you can find better languages for a JS target than Dart.

Will non-Chrome users of gmail then do what Google wishes

That's an unlikely scenario. Gmail might eventually serve Dart to Chrome users, but will likely continue to serve dedicated JavaScript to the user-agents that need it.

Either way, gmail is nowhere near numerically intensive. I doubt users would be able to distinguish performance. The reason for Dart is, I hypothesize, targeted more towards those WebGL canvas and in-browser sound-generation applications we'll be seeing more of in the future.

You wrote "I'm all for closed language design, so long as the language is open after release. It seems to me that 'open design' - or design by committee - is a recipe for complexity, with a lot of local optimizations and agendas."

I presented no such argument in this thread. And I do grant gasche's point about public/secret design being a separate axis one might consider.

Who cares about risks to Google's or Mozilla's reputation?

I suspect that would be Google and Mozilla, respectively.

Ecma TC39 introduced ES5 strict mode, and we are building Harmony on it. It's not backward-compatible.

I am also interested in seeing what happens with harmony.

The bigger risk is to the Web from fragmentation, to the standards bodies falling apart with competitive gaming and proprietary market power moves. [...] And you ought to care about that, since if the standards bodies become badly dysfunctional again, we will have only market power games -- and not only by Google but by Apple, Microsoft, and probably Facebook.

Even during those power struggles, we'll still have HTML4 or 5, and ongoing innovations in PL, and the vast majority of the Internet and Web will keep working, and life will go on with nary an interruption for most people, most purposes.

Power struggles between titans will occur primarily at technology's bleeding edge. I understand that you're a bit more sensitive to these issues because you are essentially working on the bleeding edge; it's easy for you to get hurt or trampled underfoot. But, based on your exaggerated claims, I think your position biases you.

We are still waiting for WebM, SPDY, Pepper2 (not the first one that bounced off the plugin-futures group).

So? These are less than six years old. Why bother standardizing yet?

This feels like feeding a troll

So the fear shouldn't be of isolation, only of inefficacy.

We agree.

though I expect you can find better languages for a JS target than Dart.

Whatever, the point is Google (per the leaked memo) would use Dart in their web apps and thereby try to drive other browsers to adopt the native VM. You wrote "Having substandard performance is important if you want to provide technological pressure to actually implement Dart." We were exchanging interpretations of, and justifications for or objections to, Google's strategy per the leaked memo.

Developers using the Closure Compiler, Dart, CoffeeScript, ClojureScript, etc. and targeting JS is not by itself fragmenting. Lots of server-siloed languages. Compiling some to JS does not fragment the space of web content languages that browsers must interoperably implement. Shipping a native Dart VM in Chrome, then pushing Dart performance in Chrome vs. other browsers as you justified with "important", could fragment.

That "could" is enough to create discord in standards bodies. That's the proximate concern. If it all ends well, we'll all laugh. I don't think this is likely right now.

That's an unlikely scenario.

Did you even read the leaked memo? It explicitly advises native-Dart-VM-in-Chrome and Chrome-first Dart web app programming, with the Dart-to-JS compiler as fallback.

We've profiled gmail. It has amazing deep inheritance hierarchies. It has non-trivial int vs. double optimization opportunities. Everything adds up. 'twasn't the last cookie I ate that made me fat.

I suspect that would be Google and Mozilla, respectively.

Ok, that's just trollish. I obviously care about Mozilla's rep, but that wasn't the point and you know it. In context, I was pointing you to the big-picture difference: Rust is not a web content language, whereas Dart is explicitly aimed at being a JS replacement. You dodge the point with a digressive one-liner. That looks like more shifty non-arguing.

it's easy for you to get hurt or trampled underfoot

Spare me the patronizing personalization. What's getting trampled, and it's a real social good (you granted), is interoperable web standards evolution.

We can try proprietary power moves. I lived through it in the 90s. It led to some wins and many losses for developers and users. Compared to the relatively harmonious work on HTML5 and JS lately, it was much worse. Why are you pooh-poohing this point?

So? These are less than six years old. Why bother standardizing yet?

You're clearly not developing for the web. All the successful sites use standards, de-facto and de-jure, some still not in final form, that are much newer than six years. Especially on mobile devices (see the CSS extensions that Apple originated and worked to standardize). All successful web developers know and rely on much newer standards than six years.

Apple, Microsoft, Mozilla, Opera, and even Google pay people to do significant work on evolving standards that are less than three years old. Waiting six years is likely to standardize something that is falling out of use, while missing the newer emerging standards in urgent need of de-jure codification.

/be

Did you even read the leaked

Did you even read the leaked memo? It explicitly advises native-Dart-VM-in-Chrome and Chrome-first Dart web app programming, with the Dart-to-JS compiler as fallback.

Sound advice for a new project, perhaps, but it must be weighed against existing use of JavaScript libraries and server-side JS generation. Despite what is said in the memo, it would be silly to imagine that gmail will transition quickly.

Ok, that's just trollish.

It seemed an appropriate response to your facetious 'who cares' comment and hysterical hand-waving about the end of the web as you know it.

Rust is not a web content language, whereas Dart is explicitly aimed at being a JS replacement.

I have already acknowledged that the situations are different. You should not generalize from your claims about development of Rust to similar claims about Dart. If you feel it isn't applicable, then why did you mention Rust in the first place? (because rektide mentioned it as well?)

What's getting trampled, and it's a real social good (you granted), is interoperable web standards evolution.

Ah, what I said is that there are advantages to a unified web. This is not the same as agreeing we should be interoperable at all points in our evolution, nor even that evolution is the right way forward.

I lived through it in the 90s. It led to some wins and many losses for developers and users. Compared to the relatively harmonious work on HTML5 and JS lately, it was much worse. Why are you pooh-poohing this point?

If you want a personal reason, it is because I have developed a programming model that inverts quite a few modern web concepts in exchange for some massive improvements in orchestration and composition in-the-large. While I can and plan to integrate with the modern web using JavaScript frameworks and AJAX, I also plan to design browsers and a new UI model (or extensions to existing browsers) for this alternative web.

So, I have my own biases. It would be quite hypocritical of me to object to Google's decision to ignore the chains of existing standards while pursuing something newer and greater.

All the successful sites use standards, de-facto and de-jure

Ah, yes. But it would be dishonest to say that Pepper2 and WebM and the like are not de-facto standards. So that couldn't be what you're complaining about.

When I say 'why bother standardizing yet', I certainly mean taking it to a committee with a formal approval process and veto powers, especially while the APIs are still maturing.

Waiting six years is likely to standardize something that is falling out of use, while missing the newer emerging standards in urgent need of de-jure codification.

I am quite suspicious of standards with an iteration cycle more agile than most acquisition projects. Release early and release often does not seem to be the right approach here. The only reason to take it to committee that often is to have the committee design it.

This will be brief

Pepper2 is not a standard ("Something, such as a practice or a product, that is widely recognized or employed...") at all. It's a steaming pile of Chrome-only, chromium.org-hosted C++ interface and implementation code. You do not seem to know what "standard", de-facto or otherwise, means.

WebM is better, it has an evolving spec and two or more independent implementations. It can claim to be a de-facto standard.

As for your "I am quite suspicious" of new and fast-cycling standards, let's be concrete. CSS rounded corners: something to wait seven years, or only six, to standardize? In the mean time, to hell with interop, let each browser reverse-engineer?

/be

RFCs and Proposals

There is no dichotomy between 'standardizing CSS' and 'each browser reverse-engineering'. You can draft RFCs and proposals, and let people implement those, for years before standardizing.

PPAPI is open source, and was designed with intent for cross-browser standardization. Anyone who wishes can make a second implementation. In the mean time, standards are for clients - PPAPI still provides cross-plugin standards to support portability.

Standard is as standard does

RFCs are standards; I'm not talking about those that make it only to internet-draft status. In the old days, RFC status, combined with actual implementations interoperating on the Internet, was enough.

Same goes for drafts actively being edited in the CSS WG.

If your "six year" remark was only about some final, ultimate promotion to de-jure status (e.g., w3c REC), six years is still too long, but that's not the issue. The issue is lack of any spec by which independent, interoperable implementations can be -- and have been, before the standard is blessed -- built.

PPAPI is open source

Nice try. Open source does not make a spec. If open-washed as Google tends to do with delayed-open, zero governance, contributors all employees, it is not really open.

Pepper is not a standard, it's a codebase. I cited the definition of "standard" for you. Have the good grace to stop abusing that word!

/be

Open source does not make a

Open source does not make a spec.

I agree, but I never suggested otherwise. What I said is that PPAPI is open source and was designed with intent for cross-browser standardization. An API is a spec. PPAPI is an API. Not all APIs are intended to be standards, but PPAPI was designed with that intention.

Sure, it is accompanied by an implementation, but that is not the point. The relevant aspects are that the API is open and that it was designed for wide use.

I would say that Google's done its fair part in lifting this to a de-facto standard. Creating 'independent, interoperable implementations' is up to independent groups. It isn't as though they have any obligations to coordinate with a group that has expressed "no interest" in it.

API != normative semantics

This is getting absurd. As LtU readers well know, C++ cannot express all, much, or even any of the mandatory (normative in spec terms) semantics of the API's implementation.

I know some of the Googlers involved in the PPAPI. Asserting that PPAPI was designed with the intention of being cross-browser-standardized based on your divine afflatus does not cut it. They never did the work to engage with other browser vendors, and check (by other vendors implementing, interacting, and having a say in the final API) whether different engines such as Trident, Gecko, and even WebKit in Safari can implement the large API-set interoperably.

The PPAPI sits directly atop chromium.org and webkit.org implementation code, which has unknown semantics. Reverse-engineering this pile of code is not feasible for any other vendor. Using it adds redundant bloat and coherence bugs, also no-go -- assuming competitors can even use open source in the first place, and are willing to take large Google-dominated source bases.

I've updated my reply to add the last paragraph, because I am not sure how much large-scale software engineering you've done -- specifically API compatibility on top of unrelated codebases that interoperate using high-level content languages, but do not have much if any commonality in their native and OS-dependent C++ implementations.

Saying PPAPI is cross-browser when this is undemonstrated, or asserting intent as if that matters, doesn't mean a thing.

/be

APIs come with

APIs come with documentation. The OpenGL standard, for example, is basically an API with documentation. I grant that the formality of an API can be weaker than you might desire, but I think that's an argument about the quality or formality of a specification rather than its definition.

I did not say PPAPI is (present tense) cross browser, nor even that it will be. I assert intent based on what I've read of PPAPI at wikipedia, its website, and a few blog articles over the years. No 'divine afflatus' needed. I think you're mischaracterizing PPAPI in any case.

Anyhow, this argument really isn't about PPAPI.

You feel that Google has not made a sufficiently strong effort towards standardization of X for many values of X. I recognize your opinion, but I set a much lower bar for the associated obligations of any individual or organization.

standards are for interoperating peers

In the mean time, standards are for clients - PPAPI still provides cross-plugin standards to support portability.

This is simply false.

First, standards (see the definition) on the Web and among browsers cover general notions of interoperating peers. They include both "client" standards (web developer and user facing), e.g. JS, HTML, CSS, the DOM; and plugin/browser standards, significantly (since 2004) NPRuntime and NPAPI.

Second, asserting that PPAPI provides cross-plugin "standards" (there you go again -- PPAPI is not a standard) to support portability among Chrome for Mac, Chrome for Windows, and Chrome for Linux, simply says that Google Chrome is a program ported to several OSes. It does not say PPAPI is a standard, never mind a cross-browser standard.

PPAPI is not a standard. Not even close. It's more like Google's ActiveX -- "ActiveG".

I have no doubt you'll keep replying till doomsday, but since you've once again abused "standard", I'm done. No point arguing if we don't agree on definitions.

/be

Peers

Standardization efforts are only justified to the extent they support your clients - i.e. if clients would avoid or abandon you if you lacked standardization, or join you based on your support for standards. There is no obligation to help one's competitors, so the only reason to pursue cooperation with 'peers' is if it mitigates risk, reduces costs, or improves quality for yourself or your clients.

Thus, standardization and standards are never 'for' peers, even though the standards themselves might cover notions of interop between peers.

simply says that Google Chrome is a program ported to several OSes

It is possible to write Chrome plugins that only work on one OS. So this is not a trivial issue.

No point arguing if we don't agree on definitions.

We also don't agree on principles or priorities, but I can agree that there's not much point in our further arguing.

Huzzah

I can agree that there's not much point in our further arguing.

Thank Crom.

You've almost single-handedly ruined LtU for me and a bunch of people I know. Well done.

/be

I am sorry you feel that

I am sorry you feel that way.

I'll take some time off from LtU. If you really think it can get better, prove it to me.

Hopefully not too much time off

Your posts haven't ever bothered me -- if I wasn't interested I just didn't read them, and I think you frequently have useful things to contribute. But I think gasche's feedback was reasonable, and I might send other feedback privately. In the meantime, so that LtU doesn't lose too much steam while you're gone, I'm trying to arrange for Achilleas Margaritis to come back and do some posting on type theory while you're away.

Just agree to disagree...

No need to run and hide. Your posts are typically logical and well thought out (and correct in context), but sometimes it's OK to be wrong (if in fact that's the case here; one side of a debate can be absolutely correct and dismiss all retorts, but not always).

I'd say it's nearly impossible to argue successfully with Mr. Eich in the arena of standards and open PL design... He's right that Google presents an image and history of openness, but they are fighting to preserve relevance in an increasingly JS-oriented world, like all of us (V8 isn't enough -- or is it? JS is the language of the web today), and they are doing this behind closed doors. That's part of the political issue at play here.

The crux of the political argument here (right or wrong) is that hiding the development of a new web programming language is counterintuitive and counterproductive, since the web is open by design.

TC39 has its work cut out for it. It would be nice to see more energy supplied by the big players like Google in that realm, as that's where the market is driving us -- let's not forget about the users of JS...

Still, it will be interesting to see what Dart actually is and how it enables more capabilities, efficiencies, and productive tooling for large-scale web development. JS is almost there: all browsers support it, millions of developers are proficient in it, etc. Theory can't beat reach. At this point, Dart is theoretical until we see something realized in practice. It would be nice if more than one player could help shape the new, but the old will keep getting better, and its open and standardized nature is very good for developers, browser makers, and of course consumers.

C

Yup.

Would Google discuss Dash/Dart at all, were it not for the leaked memo?

Yes, Lars was listed as the keynote speaker at GOTO for several months, and the description of the keynote was filled in to mention Dart a few days before the memo came to the world's attention.

My guess is that the press surrounding that announcement was what tickled someone's memory of the leaked document.

Interesting

Thanks for that tidbit. ;-)

Priorities, not ideas

As to your 'dash-informed input', I doubt you'd achieve anything differently with it. It isn't as though the JS community is short on ideas.

Clearly I was talking about roads not taken, choices not made. Ideas are not in short supply, but different paths, orders of work, and insights into the composition of primitives -- or even just how to build a sane numeric tower -- are. Help wanted: TC39 is not too proud to hear from others at Google on these fronts.

/be

Enough about the process...

What about the technical arguments/approach (not that we know much, I think)?

GOTO is where Dart's details will emerge...

We'll need to wait for GOTO for the technical details of Dart's design and implementation to emerge, as linked on this thread (the comment titled "Yup." -- the anchor link isn't precise, so you'll need to scroll down to "Yup." Hmm, "Scroll down to Yup." Now that's a cool Facebook status...).

This thread is only about "process", or leaked politics

We will want a new thread when the big reveal is done.

/be

To bring some objective

To bring some objective reason into the discussion, there's interesting work on "technology expectations." One funny analytic-economics result (I couldn't find the paper; it was fairly short, though) is that, modulo crowding-out effects, early failure is OK for a new project; if I remember right, a project is more sensitive once it has taken off. As for existing players, incumbents have the benefit of crowding out any slightly late entrants during the initial competitive phases, and of high switching costs (and tech expectations) deterring newcomers in later phases. There's a bunch of experimental evaluation (subject to the fallacy of history, of course) supporting related reasoning, though I don't recall anything about this result in particular.

The question here is whether closed design is healthy for that design and for others. The closed model, in general, seems susceptible to black swans (I started writing an essay about this maybe half a year before Oracle bought Sun... wow, that was fun!), which themselves seem inherent to technology. For incumbents, competition is a stimulus (which Brendan acknowledges) but also a preclusion of tacit knowledge and other resources (which is the issue). I think proprietary design is important -- we need evolution -- but I view most GPL restarts as inevitably limiting, tending towards dorodango, with some fatalistic implications (and thus I am with Brendan here).

I was/am fishing for more

I was/am fishing for more information...

My best rumor-informed guess

Lightweight syntax compared to Java; functional, but more "Java" than "JavaScript" in terms of types -- and with types and type checking optional, therefore with big soundness holes => fat runtime checks.
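
Concretely, a minimal sketch of the "optional types => fat runtime checks" problem, in plain JS (the typed source syntax below is invented -- Dart's actual syntax isn't public yet):

    // Hypothetical optionally-typed source (invented syntax, not actual Dart):
    //   int add(int a, int b) { return a + b; }
    //
    // Untyped callers can pass anything, so the annotations cannot be
    // trusted statically; a compiler targeting plain JS must emit checks
    // at every typed boundary -- the "fat runtime checks":
    function add(a, b) {
      if (typeof a !== 'number' || typeof b !== 'number') {
        throw new TypeError('add: expected (number, number)');
      }
      return a + b;
    }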

We explored this during the ES4 days. It would be a hoot to find ES4 ideas in Dart.

/be

Given who's involved, this

Given who's involved, this is a good guess. I expect strong Smalltalk, Self, and Strongtalk influences. Regardless of the standards/political climate, it will definitely be an interesting language for an OO person like myself.

I'm looking forward to your SPLASH keynote, and hopefully I'll be able to hear more around the conference.

ES4

It's worth noting that all such ideas existed before ES4 in one form or another; many of them were being explored by Dart's designers, even.

Sure, ES4 never claimed precedent

We were trying to avoid "doing research" in ES4. In some ways we failed to avoid research, had negative results, and then cut, which led (in combination with more work on the ES3.1 front than the initial 2007-era warm beer) to Harmony.

My point in invoking ES4 was that Google did not support ES4 in Ecma TC39 -- hence "a hoot" (I enjoy irony). By "Google", I mean people other than the Dart folks. It's a house divided. But you've said as much here.

De jure standardization of anything like Dart without it first becoming a cross-browser de facto standard is unlikely.

/be

To Brendan Eich

Brendan,

I am sorry you have had to field so many off-topic inquiries. I almost regret posting this story.

While you note that Mark is working on TC39, I am curious if you could take some time to discuss some programming issues rather than political caca.

The thing I found most interesting in the memo was Google's* view of JavaScript's "toolability". I have also heard Microsoft engineers complain that the Web ecosystem, based around HTML, CSS and JavaScript, isn't tool-friendly. WPF architect Chris Anderson wrote a chapter accusing the ecosystem of being tool-unfriendly when defending the decision to create XAML. And Bruce Johnson and Google have stated in various publications, such as CACM, that GWT makes the Web more tool-able.

I guess my basic questions are: How does TC39 address these complaints, if at all? Does Mozilla see technical reasons behind them? Are there any projects that focus on making things more tool-friendly?

The tool-friendly argument has always seemed empty to me. GWT does not do much tooling-wise other than allowing you to attach a debugger to Java code. Comparing GWT to something like Opa suggests that many bright minds at Google are behind the curve in understanding where to put R&D effort into better tools.

* Let's not debate how many people at Google share this viewpoint, or whether specific products like Gmail or Maps could benefit from tooling.

Calling Tool-time Tim

I agree that the tool-hostile complaint is not really substantiated with good big-picture analysis and thinking. We heard it from Adobe in favor of type annotations in AS3 during the ES4 effort.

It seems to me the complainants are ignoring decades of dynamic-language experience (Lisp and Smalltalk), and more recent but solid semi-static and static program-analysis work on JS (e.g. DoctorJS, but lots more out there and coming, including in cloud IDEs such as Ace/Cloud9 [ajax.org]).
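
For a rough flavor of what such annotation-free analysis can recover (a hand-waved sketch; this is not DoctorJS's actual output format):

    // Plain, unannotated JS:
    function scale(v, k) {
      return { x: v.x * k, y: v.y * k };
    }
    // From the property accesses and the arithmetic, an analysis can infer
    // roughly: scale : ({x: num, y: num}, num) -> {x: num, y: num},
    // which is enough to drive completion, find-callsites, and refactoring.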

Instead, the big, bad trade-off of requiring programmers to annotate types, at least at module boundaries, is made for them by the tool builders.

This could be helpful to programmers if done well, with a strong-enough type or contract system. I haven't seen it done well (AS3 resulted in grotesque over-annotation, even at the expense of performance due to JS's lack of C-like arithmetic evaluation rules). Perhaps Dart will make a breakthrough.
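
One hedged reading of "annotate at module boundaries, enforce by contract" in today's JS -- every name below is invented for illustration, not any shipping system:

    // Toy contract combinator: the module interior stays dynamic, but the
    // exported surface is both checked at runtime and statically
    // describable for tools.
    function contract(argTypes, fn) {
      return function () {
        for (var i = 0; i < argTypes.length; i++) {
          if (typeof arguments[i] !== argTypes[i]) {
            throw new TypeError('argument ' + i + ': expected ' + argTypes[i]);
          }
        }
        return fn.apply(this, arguments);
      };
    }

    // Untyped interior code:
    function area(w, h) { return w * h; }

    // Annotated module boundary:
    exports.area = contract(['number', 'number'], area);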

/be

Tooling should be part of the language spec

It is 2011, and we still mostly design our languages as if they were simply text files running on the platform. The best way out of this rut is to consider tools part of the language, so that when you negotiate design decisions, tooling has a strong place at the table. And not potential tools: the tools developers need should be conceived and evolved along with the language.

XAML/Blend sort of worked like this, though not strongly enough in my opinion. Early Smalltalk was closer: you didn't program in a flavor of Smalltalk without the IDE that came along with it. Adobe has had a lot of success designing ActionScript to support its tools, rather than the other way around! However horrible AS is, the language was not the point; the tool was.
