Knockout JS

Apparently Knockout is old news, and I'm behind the times. But: it is a JavaScript library that gives declarative binding and automatic UI refresh based on it, including dependency tracking. All in 29k of minified JavaScript, I guess. Scary. I liked this overview.
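
For the unfamiliar, a minimal sketch of the style it enables, assuming Knockout is loaded on the page (the view model and field names are made up for illustration): the view declares what it shows, and Knockout tracks which observables a computed value reads and refreshes the UI when they change.

```html
<!-- The view declares what it displays; Knockout refreshes it automatically. -->
<p>Name: <span data-bind="text: fullName"></span></p>
<input data-bind="value: firstName" />

<script>
  var viewModel = {
    firstName: ko.observable("Ada"),
    lastName:  ko.observable("Lovelace")
  };
  // ko.computed tracks which observables it reads (firstName, lastName)
  // and re-evaluates whenever any of them changes.
  viewModel.fullName = ko.computed(function () {
    return viewModel.firstName() + " " + viewModel.lastName();
  });
  ko.applyBindings(viewModel); // wires the data-bind attributes to the model
</script>
```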

Very nice. Sounds like

Very nice. Sounds like JavaFX and my own Bling project. I hope declarative binding takes off. One of the reasons I was disappointed with Silverlight is that they never supported fully general declarative binding.

Very appealing

Very appealing, I concur. Coincidentally, I'm going to need help from this kind of thing for a project beginning shortly. So, thank you for sharing this.

JavaScript is the library language of the Web...

The number of high quality JS libraries is somewhat astounding, with new ones popping up almost daily it seems...

Amazing how a simple scripting language designed for non-programmers to manipulate DOM objects in a web page has turned into the de facto general purpose library language for the web (both client and server - check out what's going on with node.js and the libraries forming around it...). There's also interesting work going on with JS AST shaping. See for example JSShaper and burrito.

Acts of Desperation?

It is true that many high quality libraries have been developed. And node.js is nice because people don't like learning more than one language.

But I get the impression that this has happened mostly because developers do not have other viable options. Flash and Silverlight and applets tie developers to a puny little rectangle and put a large rendering/layout burden on developers, so that's no fun. I don't believe NPAPI or NPRuntime plugins currently allow us to extend the set of script types supported by a browser.

Someone could, of course, run off and create their own new browser model, and some web-services for it. But it's a lot of work to catch up to what browsers already accomplish today, especially if efficiency is a concern, and competition is fierce. It would take a real disruptive technology to succeed that way, which doesn't leave much room for incremental improvements... except via well-written JS libs.

Or build higher level syntax on top of JS

We may be "stuck" with JS as the de facto in-browser programming language, but I don't think that's the case... It proved its worth; it's not just that we were given no other options. The flexibility of JS enables one to build higher level abstractions or "languages" on top of it - JS becomes, as Erik Meijer says, a web assembly language, a compiler target. Look at CoffeeScript, for one example of something-higher-level-to-JavaScript. CS removes the rough edges of JS (and all the curly braces!), adds simplified higher level abstractions, etc. It compiles to JS, which is the "machine code" of the web in some sense.
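
For a concrete sense of the compiler-target idea, here is a small sketch of what CoffeeScript-to-JS compilation looks like (the output is approximated from memory, not verbatim compiler output):

```javascript
// CoffeeScript source:
//   square = (x) -> x * x
//   console.log square 4
//
// Roughly the JavaScript the CoffeeScript compiler emits:
var square;
square = function(x) {
  return x * x;
};
console.log(square(4)); // 16
```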

What's astounding to me is that JS is BOTH a powerful and expressive human-composable language AND an efficient compiler target (though, it is far from perfect in this context... Humans of all skill levels need to be able to write JS programs effectively - this was Brendan's primary goal when he designed the language - so adding things like GOTO to make it a better compiler target would be counterproductive and just a bad idea...).

JavaScript is the language that just keeps on proving useful in astonishing ways or in contexts that make us go "wow... it does THAT, too?!".

Nice job, Brendan et al.

Web Assembly

JS has many outstanding properties that make it remarkable as a web assembly language:

  1. Semantics that hinder optimization (whether static or staged at runtime) have led to the development of powerful new compilation techniques that include continuous tests in case a volatile assumption is violated.
  2. The language is standardized. Indeed, there are many standards. And the semantics are informal. It's relatively easy to build a compiler targeting JS that works roughly as expected most of the time.
  3. The insecure composition model and resulting 'same origin policy' hinder development of mashups, brokering, and composition of web services; this helps us maintain the heterogeneous diversity of the web.
  4. The computation model and namespace resist features involving parallelism or heterogeneous memory computation, severely hindering development of multi-media applications, distributed apps, or large data processing, thus securing the livelihood of the desktop applications market.
  5. Being an imperative language with complex user-managed control flow supports us in obfuscating data and hindering transclusion, annotation, bookmarking, search, language translation, screen readers, and so on.
  6. Errors and failure handling have rich, ad-hoc semantics in JS. Now, some people prefer that errors reduce to some consistent, boring condition (like the all-or-nothing of transactions) so that we can easily compose independently developed libraries. But who can say no to rich, ad-hoc semantics?

But, even with these properties in mind, I think we could do better if we were to design a language for the purpose of web assembly - and not just for the compiler; the humans can also benefit. For one example, see Curl, initially from MIT. I don't agree with all of Curl's decisions, but many of them are well justified.

JavaScript as an Assembly Language

I haven't heard of Curl. Thanks for the link. Interesting, though it represents yet another markup to learn (HTML is already the capable, standardized markup of the web...), which is OK, but if you are proficient with JS today (and HTML), then you can apply it to a wide range of web programming: browser, server and, in Windows 8, desktop applications (HTML5 + JS), as revealed at this year's D9 conference by Windows executives Sinofsky and Larsen-Green.

I agree that there is plenty of room for the advent of a better language (maybe it just needs to be a more modern, object-oriented JavaScript? People seem to like classes and the like...) to program web application logic that's guaranteed to run in any modern browser host (it just needs to compile to JS...).

CoffeeScript is one example of SomethingSimplerOrCleaner-to-JS (there are others and there will be more...). Doug Crockford made an excellent suggestion in a recent conversation: why not take all the CoffeeScript-like implementations, throw them together on a stage, and vote on the best one; essentially a beauty contest for languages that compile to JS and hide its sharp edges/bad parts, add more OO concepts, etc. I think he is spot on. This is something that should happen!

Now, if JS is to actually become a standard Web Assembly, then the TC39 folks will need to add this problem space to their current list of work items for serious consideration. What does the language need to make it an industrial strength, reliable, efficient web assembly? How will these additions impact the language for general purpose use? What are the trade-offs? Are they worth it?

I've spoken with some of the TC39 folks about this, and either they don't agree at all that JS should be used for this purpose, or they feel that adding new constructs to the language to support JS-as-compiler-output will add new (and potentially dangerous or counterproductive) complexities higher up the stack for web developers, who today can write robust web apps powered by JS without understanding the language (and its sharp knives) to the degree folks like you do...

JavaScript is supposed to be, and I'd argue in fact is, the language that democratizes web programming for the masses. JS is the only language I know of that can wear two very different hats simultaneously (high level language for general purpose composition and efficient compiler target for consumption by the web (virtual)machine).

Really interesting times.

a language in multiple hats

JavaScript is supposed to be, and I'd argue in fact is, the language that democratizes web programming for the masses. JS is the only language I know of that can wear two very different hats simultaneously (high level language for general purpose composition and efficient compiler target for consumption by the web (virtual)machine).

JS is hardly unique in the role you name, if you include the many various other 'web machines' out there. Cf. Croquet project, Second Life, Oz/Mozart, distributed Alice, various cloud compute services (many of which use Python) and mobile agent languages, et cetera.

JS just happens to be the de-facto scripting language for the most popular class of web services, and thus better known than the others. And it certainly isn't the only language that could fulfill that role, though it now holds an incumbency advantage that can cow most competition.

What does the language need to make it an industrial strength, reliable, efficient web assembly? How will these additions impact the language for general purpose use? What are the trade-offs? Are they worth it?

I've been enamored with open, federated, distributed systems programming for years, so I have a lot of advice to offer about what not to do and one very promising model on what I think we should do. But I think most people asking your questions won't be able to hear me. Too many people desire easy answers more than valid ones.

Hats

I don't think the other examples you share are the same, though. Which of these have had a democratizing effect at scale (none of this really has to do with which programming language is better designed - that doesn't factor into the mass success equation to the degree that language designers hope...)?

I don't think anybody could argue that Smalltalk is a language for the masses. Well, at least this has been proven empirically by the masses...

Theoretical validity doesn't really matter in this case, does it? Users make programming languages successful, not programming language creators, implementors, or experts. Well, yes... Corporations push the hardest and have a great deal of impact, but no corporation pushed JS as anything more than web page scripting technology. Now look where we are. I wonder what users will discover next about the "little language that wasn't supposed to"...

not javascript's success

I do not believe JavaScript has any 'democratizing effect'. Rather, I believe you misattribute the cause. If Brendan Eich had chosen Smalltalk for the Netscape browser, that's probably what you'd be gushing about today.

Also, most users are trapped by the system. Saying that 'users make languages successful' connotes that users are making an informed decision against other viable options. I think it would be more accurate to say that the success is self perpetuating, which is a common property for many platform technologies (gas, electric, roadways, gaming consoles, internet protocols, programming languages). It's difficult to directly compete with a large system - it takes a disruptive technology, a transition strategy, and a healthy dose of rage against the machine.

I agree that language design, theoretical validity, even outright in-your-face proofs of superiority are insufficient for success. They all pale in the face of market forces and inertia.

JavaScript the language

Brendan chose a syntax that was in vogue at the time, C-like, just like Java, which JS has no real relation to (what a bad name you chose, Brendan... :))

I'm not a JS pimp, but rather come from a background of actually disliking the language. I never took it seriously when I was professionally programming, but did enjoy its ease of use when having to code in it (some of my work in Windows XP was written entirely in JavaScript...). Was it weird coming from a "real" language like C++? Yes. Yes it was! Did I find myself dumbfounded by certain aspects of the language? Yes. I still do, in fact. That said, perhaps Crockford has successfully brainwashed me, but I don't think so. What he really did was lead me to further investigation and experimentation with the language, and it's left me impressed (and sometimes confused).

Regardless of the language's accidental, forced success, it's an interesting language with some great characteristics that make it suited for the web and the varied skill levels of human web developers. It's these very attributes that are pushing the boundaries of JavaScript into new territories, outside of the browser, the DOM and web pages.

Today, I spend more time writing C++ code than JavaScript (I'm actually not affiliated with web programming in my day job - I pimp languages like C++ and C#, professionally), but I still believe it is a more advanced language than it was initially designed to be, weird as that sounds. Who knows, maybe Brendan is even smarter than we think he is.

We can agree on one thing for sure: JavaScript is here to stay. Reach wins every time. There's no substitute for adoption. And, it's a language deserving of respect from folks like myself who used to make fun of it. It's got mine. That's for sure.

JS vs. C++?

When you say 'JavaScript has some great characteristics that make it suited for the web', what are you comparing it to? I would certainly agree that it's less painful than C++, but I sort of feel like you're comparing a sledgehammer to a ball-peen hammer when the problem calls for some twine and a gluestick.

I'm not inclined to agree that any imperative, insecure, non-concurrent, non-incremental language is 'suited to the web'.

Not JS vs C++

I wasn't comparing JS and C++...

JavaScript is also functional and object oriented (prototypes). It has warts, as I said. It's not a perfect language but it is the language of the web as a matter of fact. The addition of strict mode fixes many of the blatant security/reliability issues you allude to. More work will be done here. Suited for the web? Again, I think the web has proven this to be the case...
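
For instance, a small sketch of two of the fixes strict mode actually makes (both are real ES5 strict-mode behaviors):

```javascript
"use strict";

function f() {
  misspelledTotal = 10; // throws ReferenceError when f() runs:
}                       // strict mode stops silent global creation

function g() {
  return this; // undefined in strict mode for unqualified calls,
}              // instead of leaking the global object
```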

JavaScript's highly dynamic, functional nature also makes it a good fit for web programming. Yes, the single-threaded view of the world is unfortunate. I doubt it makes sense to add threads to JavaScript, but you can imagine an async/await model (like C# 5 will have) added to the language at some point. This would obviously be very useful for responsive UI-based programming over distributed data sources, which is what the web client world is all about... Not sure what to say about concurrency and JavaScript. You'll need to ask one of the JS language curators.
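
To make the async point concrete, a speculative sketch contrasting today's callback style with an await-style surface modeled on the C# 5 proposal (fetchUser, fetchOrders, and render are hypothetical helpers; the await syntax is not part of JS as discussed here):

```javascript
// Today: nested callbacks for composing distributed data sources.
fetchUser(id, function (user) {
  fetchOrders(user, function (orders) {
    render(user, orders);
  });
});

// Speculative await-style equivalent: the runtime would suspend at
// each 'await' without blocking the UI thread.
async function showOrders(id) {
  var user = await fetchUser(id);
  var orders = await fetchOrders(user);
  render(user, orders);
}
```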

Widely Used != Effective

JavaScript is not a good fit for web programming. But it takes a study of alternatives, and a long list of failed desiderata, to understand that this is the case.

Please refrain from further arguments of the form "it's pervasive, therefore it's effective". Unproven, inefficient, or even harmful systems can easily be widespread and self-perpetuating - consider war, prejudice, tobacco, religion, the American health system, and JavaScript.

When I say I want concurrency, I'm not suggesting that we 'add threads to JavaScript'. It does seem that 'use strict' will patch a few of JS's more obvious vulnerabilities, but there is still plenty to be done.

I'm just a user, not a language designer or implementor

My perspective is one of pragmatic experience and looking around at what's happening in the real world (all the JS libs, the novel ways it's being used to solve a wide variety of problems, unexpectedly). I agree that adoption != effective or perfect (as I have said a few times in this discourse - it is far from a perfect language, yet it continues to prove capable).

It's clear that both JS and C++ are not among this group's favorite languages :) Sean, I can't agree with your representation of C++. Let's leave it at that.

I won't argue with you (I can't really. I'm just a user, somebody who uses these tools to build things. I also like them.)

This has been an enlightening and fun conversation. JS (and modern C++ for that matter) has much growth and evolution in front of it. We'll see what happens. I hope language folks such as yourselves get involved, since the likelihood of the advent of some new language for the web is pretty small. JS should become better and also not break the web as a consequence. It's a tricky problem, and this is where the language designers come in.

Issues of perspective

Your 'perspective of pragmatic experience' is extremely biased: you only see success. That's all you'll find by "looking around at what's happening in the real world (all the JS libs, the novel ways it's being used to solve a wide variety of problems, unexpectedly)". Cost, failure, and lost opportunity are simply not on your radar.

An accounting of costs, failures, and lost opportunities would require you to severely lower your estimate of just how 'capable' JavaScript is proving.

We could certainly do worse than JavaScript, but that doesn't mean it's well suited to its purpose. We could do a lot better.

I did not "choose" the name or the syntax

Please see my blog. Also this hacker news comment.

In the last, I suggest that perhaps the syntax chose me, as much as I chose to learn C, C++, and early Java, or to work at SGI (very much my choice getting out of UIUC in '85).

I could not have chosen Smalltalk. History has reason and rhyme as well as chance, it is not all and only random. For my part, there was little "arbitrary" in what I did, including the mistakes -- some of those weirdly recapitulated early LISP mistakes.

JS's influences: Self, Scheme (barely), AWK, HyperTalk, Java, C.

/be

Fog of Software Development

I agree. We should not be quick to pin blame (or credit) for decisions shaped heavily by external pressures. And it is only as of 1997 that we truly began to develop programming models that support a more declarative approach to dynamic UIs and rich internet applications.

However, do you have any good reasons that JavaScript was provided in the core rather than shifted to NPAPI? Even now, we have a MIME field in <script type="text/javascript"> that is pretty much unusable. Use of <object> for the newer NPRuntime does allow a plugin some limited access to the DOM these days, but we still lack good integration with different parts of the page.

It seems that decision has managed to stifle any real competition.

Even now, we have a MIME

Even now, we have a MIME field in <script type="text/javascript"> that is pretty much unusable. Use of <object> for the newer NPRuntime does allow a plugin some limited access to the DOM these days, but we still lack good integration with different parts of the page.

I agree, and I often wondered about the same. I also often regretted that the so-called "script components" and "behaviours" scheme never had enough success, or attracted enough attention, to leave the unwelcoming realm of proprietary Microsoft-specific extensions - more specifically for scripting-with-rendering cooperation(*) in the browser.

((*) and yes, ideally, I suppose, while also keeping the option of the scripting being done in Javascript or... something else)

<script> supports multiple MIME types

There's nothing unusable about the script element's type attribute. See RFC 4329, and also note how people write <script type="application/python">...</script> and dig out the code to interpret using, e.g. Skulpt.
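
The mechanism is simple enough to sketch: browsers ignore script elements whose type they don't recognize, leaving the source in the DOM for a library to pick up (the interpretPython hand-off below is illustrative, not Skulpt's actual API):

```javascript
// Given markup like:
//   <script type="application/python">print "hi"</script>
// the browser skips the unrecognized type, and a JS library can
// fish the source back out and interpret it.
var scripts = document.getElementsByTagName("script");
for (var i = 0; i < scripts.length; i++) {
  if (scripts[i].type === "application/python") {
    var source = scripts[i].text; // the inline Python source
    interpretPython(source);      // hypothetical hand-off to an
  }                               // in-page interpreter such as Skulpt
}
```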

What's more, no off-the-shelf language engine has distribution, web-facing safety properties, or DOM integration such that any browser would just load, e.g., C-Python as a DLL or DSO due to such a type attribute value. That would be the road to security ruin, for users who had the DLL (on Windows and Mac, most won't).

The <object> tag came in 1998 with HTML4 (Dave Raggett invented the unregistered text/javascript type there, too). It did not exist in 1995. I invented script because the JS vision (marca and I agreed on this) started with inline scripts -- code in the page. That vision required a CDATA content model and rules about fallback that <object> lacks.

Again this is not just bad history. <object> is a failure relative to <script>, and for good reasons including security, opacity, and other costs of native-code plugins and the hairy, hard-to-evolve NPAPI. Not to mention the EOLAS patent.

I don't think you realize how hard it is to standardize something like JS. There's little energy and no economics among browser vendors to do a second language, even abstracted by a plugin API.

NPAPI is a big interface, and with custom, per-browser integration, one could do such a script engine extension mechanism. But the browser vendors cannot agree on one native-code interface standard expressing all the DOM and browser APIs. Google tried with Pepper, and failed. Apple will never follow that, nor will Microsoft.

/be

MIME not usable in <script>

RFC 4329 describes four options as of 2006 (ten years after your initial work), all of which refer to the same language, and two of which are obsoleted: text/javascript (obsolete; also the default in HTML5), text/ecmascript (obsolete), application/ecmascript, and application/javascript (which IE still doesn't support).

Skulpt is hardly a positive example. Sure, we can interpret Python in JavaScript. But that comes at the cost of an extra layer of interpretation, and works moderately well only due to similar semantics (e.g. with respect to concurrency, dispatch, state, side-effects). The fact that the <script> field is used there is almost incidental.

Since you call this 'usable', I would hate to see what you call 'unusable'.

We will have practical support for multiple scripting languages if ever we can put Curl, E, Oz, LabView, or whatever into both script and various event-handler fields, without sacrificing the safety, performance, parallelism, security, et cetera advantages featured by these languages.

no off-the-shelf language engine has distribution, web-facing safety properties, or DOM integration such that any browser would just load

There are many safe language engines, even at the time you wrote JavaScript. And DOM integration is not a very difficult problem - it is easy to model the DOM, and manipulate that model, from almost any language. Browser integration has been the real stopper.

I don't think you realize how hard it is to standardize something like JS.

In this case, I believe it was the DOM that needed standardizing, not the scripting language.

Imagine we did not standardize on JavaScript. Would we still have de-facto common scripting languages that work in every browser? Yes, we would. But a site that needs to provide a richer experience would be able to provide a few plugins to major browser vendors.

The <applet> tag was the predecessor to <object>. The idea of 'applet' was to give some code a sandbox to play in. The idea of 'script' was to give some code the whole page as a sandbox to play in. If a different turn was taken back in 1998, these two concerns might have been reconciled.

Dream on

Skulpt is young. Compiling and even trace-compiling JS eliminates double-interpretation costs on current optimizing JS VMs. It can even yield nearly native performance.

Your complaint about the script type attribute was misstated. You really meant "why didn't all browser vendors provide multiple script engines?" (No extension API could credibly work without multiple engines in multiple browsers under test by the vendors.) The answer to that question, I already gave: because doing so is enormously expensive, not only in one-time costs.

Contrary to your assertion, there were no cross-platform (remember Windows 3.1?), open-source language engines in sight that were ready for web security rather than Unix-command security.

C-Python of the time was strictly less memory-safety hardened. It relied on an unsafe FFI. And it was poised to evolve dramatically, so any browser-embedded version would be a forked fly-in-amber. The same was true of Tcl and Perl.

You may now cite an obscure language that might have made it through the gauntlet, but the reality was not only a paucity of candidates, but high costs to implementors, and on top of that, the convicted monopolist bundling IE with Windows 95.

If JS were not "on first", we would not have a multi-vendor extensible script engine standard. We would have VBScript.

I agree with your replies to Charles stressing that we do not live in the best of all possible worlds. Dr. Pangloss aside, in the real world, path dependence in networks does breed monopolies. Netscape was one but it underinvested in the browser after Netscape 2 and thereby helped MS to the market.

The new "better is better" hope for multiple languages is NaCl. It was the driver for the Pepper API that failed by being too chromium-specific. Hope springs eternal, but IMHO the C/C++ toolchains will have CFI enforcement for portable safe binary code sooner than the browser vendors will agree on NaCl and Pepper2 or 3.

The "worse is better" hope is compilation to JS combined with language and VM evolution. That is my bet. And I am putting money on it at Mozilla.

/be

Dreaming On

Compiling and even trace-compiling JS eliminates double-interpretation costs on current optimizing JS VMs. It can even yield nearly native performance.

I agree that it is possible to leverage runtime specialization, tracing, staging and the like to achieve near-native performance for a tower of languages.

Unfortunately, JS makes a poor foundation for such a tower.

Among other problems, JS is not well designed for cached compilation of dependencies due to issues of namespace shadowing, global state, and load-time side-effects. If I go to a site and it loads a 100kB JS library, even state-of-the-art browsers will re-load, re-parse, and potentially re-JIT that library once per page I open, with corresponding space and time costs.
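
A small sketch of the kind of load-time behavior that defeats such caching (the library and names are invented for illustration):

```javascript
// lib.js, a hypothetical 100kB library. Merely loading it executes
// code whose effects can differ per page, so an engine cannot simply
// reuse one compiled, initialized instance everywhere.
var DEBUG = window.location.search.indexOf("debug") >= 0; // global state
if (window.$) {
  // extends (or shadows) whatever '$' an earlier script defined
  window.$.log = function (m) { if (DEBUG) console.log(m); };
}
document.write("<div id='lib-root'></div>"); // load-time side effect
```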

The JS benchmarks that seem to approach 'native performance' simply do not reflect concerns for snappy performance when loading a new page.

And there are many performance opportunities that are simply not addressed by JavaScript. Brokering of web services could put more processing on the client, but is undermined by JavaScript's same-origin restriction, which stems from its 'web-security' issues. Leveraging parallelism or heterogeneous memory (CUDA, GPGPU, DSP, FPGA) is also feasible, but is hindered by the use of shared state in the global namespace and the lack of standard collections-oriented operations.

The techniques developed to eke performance from JS have the potential to apply even more effectively to languages designed to expose invariants suitable for optimization.

You really meant "why didn't all browser vendors provide multiple script engines?"

I did not. But I do respect your concerns that VBScript would be "on first" if JS was not already there. (shudder)

The new "better is better" hope for multiple languages is NaCl.

NaCl is a promising technology for supporting new languages, but I've yet to experiment with it or form a solid opinion on it. Hardware virtualization is another useful technology in the same vein.

I do prefer the JS approach of compiling/interpreting the code on the client side.

My 'dream' involves declarative automated sharding and code-distribution of fragments of a secure language, albeit one much more optimizable than JS. I would also like to trade out HTML for a document model better suited to zoomability, accessibility, CSCW, continuous real-time, multiple views, mashups, transclusion, flexible user input, disruption tolerance, augmented reality, and other nice properties. And I am making fair progress on it.

static assumptions fail on the web

"JS makes a poor foundation...", "... global state, and load-time side-effects", "... will re-load, re-parse, and potentially re-JIT...".

You write as if nothing is changing, but JS is part of the evolving client side, and since it still has first-mover advantage, it is in fact evolving to address every one of these concerns. It's easier to evolve than to start over.

ES.next is based on ES5 strict mode, and eliminates the global object as top level scope. It has lexical scope all the way up, so free variable references are early errors. Developers opt in via script type="...;version=..." and an in-language pragma.

ES.next also has a static module system that caches module instances and prefetches all static dependencies. (Even now, thanks to Steve Souders and others, developers know to avoid side effects and global collisions, but by convention; ES.next modules make this foolproof.)

http://wiki.ecmascript.org/doku.php?id=harmony:modules_rationale
http://wiki.ecmascript.org/doku.php?id=harmony:modules
http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders
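
The convention referred to above is essentially the module pattern: wrap the library in a function so nothing escapes to the global object except a single deliberate export. A sketch (ES.next modules would make this guarantee structural rather than voluntary):

```javascript
// The by-convention discipline that ES.next modules make foolproof:
// an immediately-invoked function keeps helpers out of global scope.
var myLib = (function () {
  var count = 0;                         // invisible outside the module
  function helper(x) { return x + 1; }
  return {                               // the single deliberate export
    next: function () { return helper(count++); }
  };
}());
```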

Source, parse tree, bytecode, and JITted code are all cached in leading edge browsers, and content-addressed caching across origins is under way. Nothing in JS today, never mind ES.next, prevents this kind of caching, and browsers are doing it.

As for data parallelism, WebCL (the low road) is coming fast, and we are working with Intel and NVIDIA on higher-level functional APIs based on immutable typed arrays with the usual combinators, which enable compilation (JIT or AOT) from JS to OpenCL or better.
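
A speculative sketch of what such a combinator API might look like (ParallelArray-style; the names are illustrative of the work in progress, not a shipped standard, and imageData is an assumed input):

```javascript
// Data-parallel combinators over an immutable array: the kernel is
// side-effect-free, so lanes can run in parallel or compile to OpenCL.
var pixels = new ParallelArray(imageData);   // immutable snapshot
var brightened = pixels.map(function (p) {
  return Math.min(p * 1.2, 255);             // pure per-element kernel
});
```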

Yes, starting fresh would be nice. It's not realistic in the current market. If a new monopoly arises, possibly -- but even then shortest path may prevail.

Meanwhile, right now JS evolution is accelerating.

/be

Evolving Language

I have very little faith in language evolution, except to the extent it is achievable from within the language (via libraries, frameworks, adjustable syntax). I'm not saying it's impossible, just that I've heard such claims and then been let down too many times.

Starting fresh is undesirable, I agree. Any new technology should integrate well enough with the existing systems, e.g. via mutual embedding.

If you fix the performance issues, JS will be better as an 'assembly' language. Just remember that this means caching not just the code, but also intermediate results (such as from Skulpt compiling Python into JavaScript).

I find your lack of faith disturbing

Lots of languages evolve successfully. In fact, basically every language in use today has undergone substantial evolution (Java, Ruby, C, C++, Python, Scheme, Perl, JavaScript, C#, Fortran, Scala, OCaml, ...).

In particular, the future evolution of JavaScript is described here: http://wiki.ecmascript.org/doku.php?id=harmony:proposals

If you have feedback on anything specific, please do let us know.

Substantial Evolution

If I look only at the successful changes after they occur, that hardly qualifies as 'faith'.

I'm not sure I would qualify any of those languages as having undergone 'substantial' evolution, though perhaps I just measure against a much wider scale for how different languages could be (IMO, Ruby, Python, and JavaScript are nearly the same language, modulo libraries and community). Substantial changes tend to break code or require new idioms. But I do benefit from the minor improvements that offer convenience, performance, or discipline for something developers were previously hacking.

If you improve JavaScript to a degree where I can maintain proofs about security, safety, consistency, performance isolation, and nearly maintain bounded-space and real-time properties of the 'substantially' different languages I might compile to it, then I can be satisfied with that. I am happy to see Mark Miller's influence on the proposals.

Why JS became the web

Why JS became the web assembly language of choice is purely accidental. It wasn't designed as such; it was just supposed to be a way of building richer web pages in lieu of using Java. But here we are, and look at all the work going on in industry and research to patch JS up so that it is more suitable... it's actually quite amazing, and somewhat sad, as all those cycles could have been used elsewhere.

The ideal web assembly language would be secure, portable, flexible, ... maybe something like JVM or CLR bytecode but of course we could do much better.

Accidental Success

One could argue that any language successful at scale happens upon success "accidentally". JavaScript is no different in this regard than Java, C, C++, Ruby, PHP, etc. The better designed language, the perfect language, doesn't mean anything in a realistic usage context.

Humans play a far more important role in determining language success than the design of the language - we don't give users enough credit. In fact, it is the unpredictable and chaotic human part of the success equation that determines a language's success at scale.

JavaScript was designed for non-programmers, yet it is used today as a compiler target, anointed a web assembly, the de facto language for web page logic, used for writing HTTP servers, etc. Astounding.

There is no way Brendan Eich designed the language with anything besides simple web page programming in mind. This is why he was able to implement the first version in 10 days...

The big accident here is that humans took the language and ran with it, pushed it into areas that on the surface seem insane - yet the language consistently proves itself useful in new contexts. So, accident or not, JavaScript will most likely become even more used (and useful) in the next 10 years. It needs more attention from the language community than it gets today - the TC39 people have a large responsibility to ensure they don't mess it up now that it has proven worthy of shedding its web scripting clothes. I believe it will.

The accident is that JS was

The accident is that JS was not designed to be a web assembly language, and in fact is fairly unsuited to being a web assembly language. But JS was pervasive, and so it became a web assembly language in spite of not being suitable for it. Rather, we (mainly Microsoft, Google) invest lots of money in making JS suitable via clever engineering.

JS wasn't designed for non-programmers, it was designed for low-end programmers in much the same way that VB was. The semantic difference is very important. One of the crazy things that JS did well was borrow a lot from Scheme and Self, hence it has great extensibility and you can define crazy dynamic-meta-style libraries for it. This made it easier for programmers to use and extend, but ironically that also makes it even less suited as a web assembly language.

JS is really in danger of becoming the next C++: a widely used language that is universally shunned by language enthusiasts. Much like C++, it will get more attention from system researchers than PL researchers given its incredible system challenges, as well as compiler people looking for good perf challenges.

Which C++ are you referring to?

Today's C++ is very different than yesterday's... Which C++ do you mean?

Today's C++ or yesterdays,

Today's C++ or yesterdays, they are both still in wide use. C++ is still an incredibly complicated and dangerous language. C++ is still a perennial pariah for language researchers, an example of how not to do things. But then C++ basically guarantees full stable employment for tool researchers, while companies like Coverity can earn major bucks on "solving" C++ problems.

Don't agree with your characterization of C++

Dear Sean,

I can't agree with your characterization of C++. Modern C++ is not terribly complicated (yes, it's very big, there's too much ceremony sometimes, and it can be very unwieldy in certain contexts). C++ can be really hard to debug (nested templates, anyone?) and you can, as a user of the language, use it in dangerous or unsafe ways.

This danger shouldn't be blamed on the language itself - it doesn't write its own bugs, after all! We developers do that, and we do it very well sometimes... You can write type safe C++. You can write code that won't pollute the ambient environment. Nothing forces you into the sketchy world of C casting, for example (re type safe C++). You can write highly structured, object oriented, highly generic C++ as well. If you choose to do so.

C++ is just a tool. How you use it is really up to you (you can write the most dangerous code in the world, but you can also not write the most dangerous code in the world...).

Don't hate the player. Hate the game. :)

Don't agree with your characterization of developer fault

I hear it is common in an abuse scenario for a victim to blame himself instead of correctly attributing the cause to the perpetrator or environment. We should be prepared to blame our tools, our languages, our development systems when they are inadequate, inefficient, unsafe, or have other systemic issues.

Tools can be good or bad

Charles, unfortunately, you are very mistaken.

Modern C++ is not terribly complicated -- It is. It is the most complicated mess of a language ever put forth. I don't know anything that would even come close.

You can write type safe C++. -- You can't. Just about every relevant feature of C++ is inherently unsafe. Casts are just the tip of the iceberg. There is no useful subset of the language that would be safe.

This danger shouldn't be blamed on the language itself -- It should. Seriously. Just like you would blame it on the car if you have a terrible accident because the brakes are not working on Tuesdays, and it has the cigarette lighter installed next to the gas tank, which is an open tub under the driver's seat (so that it can be refilled more quickly).

Tools can be really, really bad

Andreas, unfortunately, you are very mistaken. You need only look to the intentionally obtuse languages such as INTERCAL or Whirl to find messes even more complicated and error-prone than C++.

So you should at least offer this glowing recommendation of C++: they could certainly do worse if they tried.

I learned valuable lessons from studying just how terribly awful language design can be - lessons that give me strength and hope as a language designer. The lessons are: The space of meaningfully distinct languages and programming models is vast, subtle, and largely unexplored. The most useful features come from what we eliminate from our languages, rather than what we add to them. If we can build a language worse in every way than Blub, we can probably build a language that is better in every way. All we need to do is see it.

These languages

These languages are perhaps messier, but I doubt that their creators accomplished creating something more complicated than C++.

V8 is written in C++

Why did you choose C++ to implement V8 if you hate it so much? Does Lars feel the same way? You could have implemented it in some other powerful systems language. Why didn't you?

I understand that C++ is a language that you either love or hate; there is no middle ground. It's a language that makes it super easy to blow your arm off, shoot yourself in the foot and upset users when memory buffers overflow and the digital mob takes control of the users' systems... In all of this, however, it's the developer who is at fault and not the tool. If you pound a hole in your living room wall when you decide to hammer a nail to hang a picture in an area of the wall that is just sheet rock and you swing the hammer as hard as you can, do you blame the hammer, the nail, the wall, the picture or yourself?

The fact remains that when you want to build highly performant, low level systems using high level abstractions (or not), C++ is the overwhelming choice - in the real world. Why is this the case if the language is as terrible as you assert? Your car analogies are somewhat off base: the car's design is not dependent on the semantics of C++, though its implementation is. You can't blame C++ if, for example, you design an algorithm with unchecked buffers. The problem is with your design and how you implemented it using a tool that enables you to directly allocate memory to accomplish your computational tasks... C++ is not the problem. Bad design and inexperience are - both correctable human problems.

You don't like C++. That's fine. Millions of developers do. That's fine, too.

Illusion of Choice

Just because another language exists does not mean it is a viable option. When choosing a language for a component of a system, one must consider: what is the rest of system written in? What other libraries are we integrating? What's the toolchain for getting everything compiled? Can we afford the configuration management and portability concerns of adding yet another language to the dependencies? How will it affect IDE integration and debugging? How many other developers on my team know or are willing to learn this language? (Will the developers brave their fear of the unknown?) What is the downtime and warm-up time, both on the developers and the toolchain, for switching to the new language?

Developers rarely have the privilege of choosing the best language for a job, Charles Torre. The tooling decisions we make are driven more heavily by circumstance and inertia than by suitability for the purpose.

Circumstance and Inertia

Having been a pro dev earlier in my career (working on large projects like Windows), it's certainly true that there was no option for developers to pick and choose languages and toolchains... This was even true on smaller projects (like web sites or web services). You are right -> Circumstance and inertia are hard to overcome and in some cases it just doesn't make technical sense to try and force a new model into an unrelated world full of incompatible code (like shoving .NET into the Longhorn OS - it just didn't work...).

C++ doesn't need to be replaced. It will continue to evolve to meet the needs of modern times... It's a great tool for a number of unrelated jobs. I just don't agree that it's fundamentally a bad programming language or a terribly designed one. It just doesn't feel that way to me. My comment about picking a different language for V8 was less than serious and was an emotional response to the C++ bashing commentary... Obviously, the rest of the Chrome browser is written in C++ and runs in operating system user mode environments also written in C++ and the team of engineers are all seasoned C++ developers... The C++ compilers and debuggers are mature. The libraries are tried and true (and standardized in some cases).

Interestingly, this all applies equally to the topic of this thread: JS. Let's say that the best possible language emerges that is perfectly suited for the web. Easy to use. Powerful. Concurrent. Safe. Modular. Dynamic. Functional. Easy to build productive tools around. Performant. General purpose. Capable. Will it work with legacy JS code and state of the art design time tools? Will you be able to compile it down to JS so the JSVM wizards don't have to reengineer their machines? Do we expect a billion web pages to be re-written in this new language to gain all the benefits it provides for both developers and web application users? Of course not. Even in this hypothetical scenario JS will still reign supreme as a result of massive inertia. You are correct.

A bad tool is a bad tool

A badly designed tool remains a bad tool, even if you think you should blame me for using it wrongly. A good tool significantly reduces the risk of me having an accident, and supports and encourages good use.

As for your other points: choice of language isn't primarily driven by its technical merits -- David made some good points. And popularity certainly isn't a technical argument either (though unlike you, I don't seem to know many developers who have much love for C++). Re V8: I'd argue that VMs are a bit of a special case anyway, because you are doing inherently unsafe stuff.

"Inherently unsafe stuff"

I'd argue that the only things inherently unsafe are things that have some chance of crashing the program, corrupting data, etc. Since you presumably don't want your VM doing those things, I think it'd be better to replace this phrase with "stuff that's difficult to prove safe." Inability to implement tricky low level stuff in a provably safe way is a language limitation. </pedant>

Point taken

Point taken, but I would still call implementing, say, a runtime code generator "inherently unsafe" because proving it safe is so utterly unrealistic -- especially with all the massively dirty tricks an engine like V8 has to do to get some performance out of JavaScript -- that by all practical means it becomes indistinguishable from impossible.

The tool reflects the problem space?

Which provably safe languages are suited for the low-level problem space of directly programming the machine (the "tricky low level stuff") besides the usual suspects in wide use today?

Illusion of choice is not an illusion at all if there are no viable alternatives.

BitC? Actually, Shapiro was

BitC? Actually, Shapiro was working on Midori for a while, but that didn't last long...

Your point is well taken. We choose C++ not because it's the best language, or even a good language, but because it's the only language suited to the task.

Coq

Adam Chlipala has done a lot of interesting work on this subject, such as verifying compiler optimizations and the Ynot project for validating side-effects.

I hope most of our 'safe' high-performance low-level code will eventually be written by high-level code. We achieve this by targeting an 'abstract machine' that (not coincidentally) happens to match the real one.

ATS

I would suggest ATS as a potential candidate, though I find the added complexity of statically formalizing this low-level stuff a bit off-putting.

Curiously, ATS has not been thoroughly discussed on LtU, though it has been mentioned a few times now.

In UX, there is this common

In UX, there is this common saying that there are only bad designs, never bad users. This doesn't generalize to programming languages very well, a programmer can abuse even the best language. But it still has some relevance: languages are meant to be thought shapers, they are meant to guide their programmers, and C++ doesn't really do that in a constructive way.

There are a few things going on here:

  • People don't use a language because it is the best language; they use it because it's the best option for the platform they want to program on. Evidence: Objective-C and iOS; also see C++ and JavaScript.
  • Although poorly designed languages can become popular, it is still important to invest in well designed languages like C#/LINQ. It inches us forward and shows developers how life can be better.
  • Language researchers aren't the best audience for talking about popular languages. We aren't being snobs, it is just that we want to focus our efforts where we can be effective, and a dysfunctional language really is not that place. On the other hand, systems and tool researchers can really thrive dealing with the defects of a dysfunctional language (e.g., Coverity).

JS was not for non-programmers only

And as a language for non-programmers, the "look like Java" directive deoptimized it immediately, compared to my earliest wishes and sketches which were based on Logo and HyperTalk (HyperCard's language).

Really, functional + prototypal was not aimed squarely at beginners or non-programmers. I put in what I hoped would be power enough for real programmers to finish the job. That's why I made everything malleable. I knew monkey-patching would be needed.
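
A classic example of the malleability in question, the kind of back-fill patch that became a common real-world idiom:

```javascript
// Everything is malleable, built-in prototypes included. The classic
// monkey-patch: back-filling String.prototype.trim on engines that
// shipped without it.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    return this.replace(/^\s+|\s+$/g, "");
  };
}
"  hello  ".trim(); // "hello", even on a pre-trim engine
```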

/be

Why did you choose prototypal?

Brendan, why did you choose prototypal?

He did mention monkey

He did mention monkey patching. It is easier to monkey patch in a prototype-based language than in a class-based one, though of course class-based languages like Python and Ruby support it.

One of my designer colleagues found ActionScript pre-3 easier to use than post-3; one of the main differences in 3 was an emphasis on more class use for "more structured" programming (in Flash Studio). However, that comes at the cost of the usability/tinkerability that prototypes enabled.

middle grounds?

I'd guess somebody must have explored middle grounds, like being able to dynamically attach 'traits' or something?

You really can't/shouldn't

You really can't/shouldn't mix classes and prototypes because it becomes weird very quickly and usually different extension methods will conflict. I like the idea of traits; it's something I'm doing in YinYang.

What is crazy is that classes had won (Treaty of Orlando be damned), and then...bam...JS comes out and prototypes are back in the game again. But since most people in industry hate prototypes, they are trying to suppress them again with classes (a plausible conspiracy theory :) ).

Dodo mixes the two

Er, the introduction of prototype-based classes is one of the very premises of dodo. I don't think you shouldn't mix them.

Are there specific issues you see in having both in the same language?

For me, the main issue I had was to figure if an object derived from a prototype is the same class as its parent (in dodo it is the same if it adds no members) and how to type prototypes (I use tuple types with specific rules for assignment).

Classes and prototypes are not orthogonal

Bridging between the class world and prototype world, even in the same language, is much like bridging between two different languages with two different programming models. And it's not just a technical problem, it's a problem of thought.

Intersection of Classes and Prototypes

Abstract Factory?

It seems to me that classes and prototypes are mostly orthogonal:
* Each 'Class' is an object. We can clone and inherit from it. Doing so creates a 'subclass'. I.e. the prototype for a class is its super class.
* Each 'Class' is a factory. We can create 'new' objects from it.

Something like this might work, though we'd need to use special methods to mutate the object created by the factory, separate from extending the factory...
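
A rough sketch of that idea in plain JS (using ES5's Object.create; all names invented for illustration):

```javascript
// A 'class' is an ordinary object: cloning it is subclassing,
// calling its factory makes instances.
var PointClass = {
  proto: {
    norm: function () { return Math.sqrt(this.x * this.x + this.y * this.y); }
  },
  make: function (x, y) {              // the factory role
    var p = Object.create(this.proto); // instances delegate to the class's proto
    p.x = x; p.y = y;
    return p;
  }
};

// The prototype role: a subclass is a clone of the class object itself.
var Point3DClass = Object.create(PointClass);
Point3DClass.proto = Object.create(PointClass.proto);
Point3DClass.proto.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z);
};
Point3DClass.make = function (x, y, z) {
  var p = PointClass.make.call(this, x, y); // reuse the parent factory
  p.z = z;
  return p;
};

Point3DClass.make(1, 2, 2).norm(); // 3
```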

I chose prototypal because

JS needed objects. To talk to Java, at least; more important: for its own HyperTalk-inspired DOM objects with their onclick, etc. methods. But as I've written elsewhere, JS back then could never have had classes as declarative forms -- it had to be Java's boy-hostage sidekick, it could not be too big for that batcave. Plus, there wasn't time.

I admired Self and saw prototypal inheritance as a better way to build object abstractions, especially in a dynamic language. And it was easy to implement. And it was non-threatening to Java -- indeed prototypes and first-class functions (and closures, which weren't in JS1.0) flew under the radar for much of the first year.

/be

Why not add classes to JS?

Thanks, Brendan.

JS is so far removed from the JavaMan bat cave that it's sometimes hard to fathom why it hasn't really evolved very much over the years.

Re classes -> Many folks coming (and more will come soon...) to JS from Java or C# are used to classes - they've come to expect them. They're used to _thinking_ in a more traditional OO-structured way (and programming, in any language, can be defined as a formalized expression of structured thought, right?).

Fact is, for many practicing developers classes are simple to use, simple to understand, etc. They are _implicit_ in their everyday coding lives. Prototypes, on the other hand, feel awkward in comparison. So, why not add classes to JS?

Then again, it's not like you really have to add classes to the language itself, right? JS is an assembly language, after all (or is it? What do you think about this notion, Brendan?) so just add classes to some JS variant that transpiles to proper JS (or compiles... It's sometimes hard to know the difference with so much of this sort of thing going on).
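
And that encoding is straightforward; a sketch of what a class declaration in such a variant typically desugars to (the names are invented for illustration):

```javascript
// What a 'class' compiles down to in JS: a constructor function
// plus methods hung on its prototype object.
function Account(owner) {
  this.owner = owner;
  this.balance = 0;
}
Account.prototype.deposit = function (amount) {
  this.balance += amount;
  return this.balance;
};

// 'Subclassing' wires up the prototype chain by hand.
function SavingsAccount(owner, rate) {
  Account.call(this, owner);   // super(owner)
  this.rate = rate;
}
SavingsAccount.prototype = Object.create(Account.prototype);
SavingsAccount.prototype.constructor = SavingsAccount;

var a = new SavingsAccount("ada", 0.03);
a.deposit(100); // 100 -- method found via the prototype chain
```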

Versions of Javascript (aka

Versions of JavaScript (aka ECMAScript) DO have classes; consider Adobe's ActionScript.

I thought that Javascript wasn't meant to be an assembly language, it was a language that programmers actually use. I would love to hear Brendan's thoughts on that also.

I thought that Javascript

I thought that Javascript wasn't meant to be an assembly language, it was a language that programmers actually use.

What's pretty interesting to me is that it's actually been wearing both hats actively for half a decade or so now (if not longer in some uses of it(*), and/or in a looser sense of 'assembly'), and all of this happened progressively and de facto, rather than as one of its main original design objectives, it seems.

Anyway, thank you, Brendan: in my book, JavaScript hasn't done badly (at all) so far, given the initial resources allocated to its design, implementation, etc.

I can easily imagine myself asking a junior (web) developer "Just out of curiosity, do you really enjoy coding in Javascript or have you tried looking for something else you'd like better?" and if the reply is "Well, yes, I quite like it." I'd probably try to come back with a:

"Same here, for some things. So, you do find coding in an assembly language fun too? Nice..." and then, maybe, enjoy the funny look I'd get in return.

But then again, it's not just the language's design and semantic properties at play here; it is also, I suppose, about the place JS occupies in the whole WWW client (and recently, server) platform, the opportunities it met over time, and the progress in performance achieved by its various competing execution environments, which all have to deal with "the other guys"... the DOM, rendering/server page engines, etc.

(*) Ok, shameless plug, there; but hopefully not off-topic, though.

Jangaroo

There's a cross-compiler for AS to JS (Jangaroo). So, you're right, Sean. JS has classes today, but you have to compose in AS first (and what's the status of this AS->JS translator these days, anyway?). JS also now supports a Lisp dialect (Clojure), but you need to write in ClojureScript first.

Everything-to-JavaScript seems to be the pattern emerging here. It IS an assembly language! It's also a gp programming language that's expressive and high level.

Would love to hear Brendan's thoughts on JS classes and JS as web assembly language :-)

JavaScript versus the Java Virtual Machine

I think that "JS is the web's assembly language" is a silly notion. Yes, JavaScript is a popular target for compilers these days, but it's not an assembly language, and compilers have been targeting languages other than assembly/machine language for 50 years.

I think the lack of a prescribed VM for JavaScript is a strength, not a weakness, and perhaps a key reason for its success. I think the lesson from the JVM and the CLR is that virtual machines are pretty much tied to a single language; step much outside the original language and you have a lot of difficulty implementing things efficiently, if at all.

So what we've observed is new languages designed around these VMs, while attempts at implementing existing languages on them falter and fail.

So I think the better question is to ask what makes JavaScript a difficult language to target. Granted, I haven't tried to implement a compiler that targets JavaScript, but it sounds like it poses approximately the same level of incidental difficulty as the JVM or the CLR. Three issues come to mind that I've heard from multiple sources:

1. A lack of tail calls, explicit or otherwise. (A common compiler workaround is sketched after this list.)

2. A lack of standardization with regard to the initial JavaScript environment.

3. Numerical difficulties of implementing pretty much anything other than floating-point arithmetic.
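
Regarding point 1: the standard workaround for missing tail calls is a trampoline. Compiled code returns thunks instead of making tail calls, and a driver loop bounces them so the JS stack stays flat. A sketch:

```javascript
// A trampoline: tail positions return thunks; the driver loop
// bounces them, so the JS stack never grows.
function trampoline(result) {
  while (typeof result === "function") {
    result = result(); // each bounce replaces one would-be tail call
  }
  return result;
}

function sum(n, acc) {
  if (n === 0) return acc;
  return function () { return sum(n - 1, acc + n); }; // thunk, not a call
}

trampoline(sum(1000000, 0)); // 500000500000, with no stack overflow
```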

Web Assembly

I won't speak for Erik (I can't), but it seems he's using this terminology to reflect the working notion that JavaScript can be defined as a low level language that only VMs need to parse and compile. So, in this sense you don't program JS as a human programmer - you use some other higher level abstraction that compiles down to JS (just as we have today in various formats).

The question is, what will be the "C of the web"? C effectively "killed" assembly language in terms of its required usage by human developers programming computing devices. Higher level languages like CoffeeScript are most likely the direction we're going. Then again, maybe JS will just evolve to be both a better high level language for the web and a better web assembly, regardless of the terminology used to express what it is or isn't in context.

JS as a compilation target

I don't think CoffeeScript qualifies as "higher level" than JavaScript -- it's just an alternative surface syntax with some additional sugar. Consequently, it doesn't prove anything about JS's suitability as a compilation target.

From a purely technical perspective, using JS as a "web assembly" has almost satirical value. Not because it is hard to compile to JS -- it isn't. But because you're going to throw away all the valuable knowledge a compiler has about a program in a "real" language, and then have the JS engine jump through hoops and hoops to reconstruct it at runtime, in a rather incomplete, unpredictable, and expensive fashion.

I happen to work on V8 these days. Like most other contemporary JS engines, it is amazing high-tech. But it also is somewhat sad that about 80% of the invested cleverness would be unnecessary if it wasn't for the peculiarities of JavaScript.

The language needs to evolve

It seems to me that JS has plenty of room to get better. Today, it's not design-time tool friendly. There's no reason it can't become a better language, with support for class-based object orientation, for example, or more structured genericity (compile-time genericity a la C++ templates), more safety (removal of eval, strict mode by default, etc.), removal of dumb stuff like null being an object, better numerical support, and on and on... Languages also evolve as a consequence of how they are used in the real world. If we want better tools, more control at design time, compilation before deployment, etc., then the language must evolve. I believe it will (but not as fast as we would all like). It may be that JS is not really suited for a role as web assembly today, but this doesn't mean it won't be in the future. Clearly, this is one significant way it's being used today - and effectively.

Keep on cranking out killer engineering. We all appreciate it (the low end developers who rely on the brilliant hacks inside all modern browsers' JSVMs). Hopefully, in the future you can scratch your heads less.

Good point about lack of a standard VM being a plus

Regarding your numbered list, 1 and 2 are addressed in ES.next, which requires proper tail calls, and whose module system helps clean up the hopeless top level. DOM and Web API standards continue to evolve, but at least they don't all need to inject global names.

Point 3 is a good one. For storage of machine integers along with 32- and 64-bit floats, typed arrays and binary data (ES.next's embrace-and-extend answer to typed arrays) provide fine-grained programmer control.

But for arithmetic operators, nothing beyond the status quo (bitwise for int32 and uint32 implicit conversions, all else are double). This is a problem we're working on for "Harmony" but not ES.next. It will take a bit longer.
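
Concretely, the status quo idioms and the typed-array storage mentioned above look like this (a small sketch):

```javascript
// Bitwise operators are the only way to force integer semantics on
// arithmetic; everything else is IEEE double.
var a = 0x7fffffff;
(a + 1) | 0;   // -2147483648: |0 truncates to signed int32
-1 >>> 0;      // 4294967295: >>>0 yields unsigned int32

// Typed arrays give real machine-number storage under programmer control:
var buffer = new ArrayBuffer(8);
var ints = new Int32Array(buffer); // two int32 slots over the buffer
ints[0] = 2147483647;
ints[0] = ints[0] + 1;             // stores wrap: ints[0] is now -2147483648
```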

/be