## Wat

A pretty funny video by Gary Bernhardt on some surprising behaviour in Ruby and JavaScript. I think Wat as a term of art for this phenomenon is quite appropriate.

There's also a follow-up blog post by Adam Iley explaining some of the behaviours in JS seen in that video, if anyone wants to understand the underlying semantics behind these behaviours.

Since we're all PL enthusiasts here, I expect everyone will have their own lessons to take from this 4-minute video. To me, it seems a perfect example of the dangers of implicit conversions and overloading. Stuff like this makes me want to crawl back to OCaml, where overloading is forbidden.

## Comment viewing options

Implicit conversion, because it's applied in a context-free manner, is what's nasty. Overloading without implicit conversion is not really a problem, except for people who suffer from wearing particularly tight static-typing straitjackets.

I disagree. There was an example in the video where the commutativity of addition is violated. Even if you eliminate the implicit conversions, providing different overloads for X+Y and Y+X would still cause problems.
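As a minimal sketch of the point: JavaScript's overloaded + already fails to commute once a string operand is involved, even in plain expression context with no block/object ambiguity at all.

```javascript
// "+" picks addition vs. concatenation per operand pair, left to right,
// so swapping operands can change the result:
console.log("1" + 2); // "12" (concatenation)
console.log(2 + "1"); // "21" (also concatenation, but a different string)
```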

Actually, in "{}+[]" there's no addition -- it's parsed as an empty block followed by the expression "+[]".

So, I think it's more accurate to say that the problem is with JavaScript's weird syntax ("{}" can be either an empty code block or an object literal, depending on the context), not overloading.
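The block-vs-literal ambiguity can be seen directly with eval, which parses its argument as a statement list, so a leading "{}" is an empty block rather than an object; parentheses force expression context.

```javascript
// Statement context: "{}" is an empty block, then "+[]" is unary plus
// applied to an empty array, i.e. +"" === 0.
console.log(eval("{} + []"));   // 0

// Expression context: now it's object + array, both coerced to strings.
console.log(eval("({} + [])")); // "[object Object]"
```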

### Ah, good call. I was

Ah, good call. I was wondering why all three of these printed different things:

• `{} + [] + {}` → `0[object Object]`
• `({} + []) + {}` → `[object Object][object Object]`
• `{} + ([] + {})` → `NaN`

Arguably, using + for string concatenation in the first place is a bad idea. But yes, it's always safer to write ({}) to generate a fresh object, and I do it when generating JavaScript code (but not when writing it by hand).
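The "({})" idiom mentioned above can be checked mechanically: the parentheses force expression context, so the result is reliably a fresh object, whereas a bare "{}" at the start of a statement is an empty block with no value.

```javascript
// "({})" is always an object literal, even where "{}" would be a block:
console.log(typeof eval("({})")); // "object"

// A bare "{}" parses as an empty block; its completion value is undefined.
console.log(eval("{}"));          // undefined
```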

### Yeah, if it were not to

Yeah, and that's not to mention this one, too, which, depending on my mood of the day, I find to be either one of the coolest things or among the nastiest:

https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/valueOf

(Also : ECMA-262, 15.2.4.4 - valueOf)

Note: when it has nasty effects, I usually blame myself rather than the language, though, for overlooking something I was supposed to know already. :)

"You rarely need to invoke the valueOf method yourself; JavaScript automatically invokes it when encountering an object where a primitive value is expected.[...]When you create a custom object, you can override Object.valueOf to call a custom method instead of the default Object method."

Ah, ok! Now I know who I can really blame: the dude who did that (in bold, above) in his library (that I'm using) and didn't bother to document his initiative...

"Hmph" :)

### How?

I recently learned about C++'s terrible 'name hiding' behavior, but I think that's a C++ specific problem. Is there another way overloading+inheritance is problematic?

### It seems that the case of

It seems that combining method overloading with inheritance/polymorphism is regularly something of a challenge in language design.

Here's a C#-specific pitfall to be aware of, for instance:

I remember having actually stumbled upon it a couple of times.

### Unavoidable

It is not possible to avoid problems in a programming language regarding the meaning of names. When you're using a subset of implicit conversion, overloading, polymorphism, and complex scoping rules the interactions can always be difficult.

You cannot escape these features in a programming system. If you eliminate most of them from the language translator, as in C for example, then in a clash-free large-scale environment you need a search engine and help system just to figure out the name of the entity you require, or to learn what some complicated name used in code actually means.

In turn, this need for constant mental lookup destroys the mental pattern matching needed to comprehend programs. Provide some ambiguity and simplify the names, and comprehension and uncertainty increase together.

The best solution here, I think, is to provide some redundancy: type annotations are one of the popular ways to do this if the ambiguity is type-based, and scope-control operators if the ambiguity is namespace-based.

Ambiguity is core. The best example is, of course, the ability to write 1 + 2 + 3 due to the associative law: it's ambiguous but it doesn't matter so we can gloss over it! Nothing new. Mathematics is the art of making stuff as ambiguous as possible!

### Ambiguity with human

Ambiguity with human readable names is unavoidable (see Zooko's Triangle), but programming languages manage this complexity using scoping mechanisms. Lexical scoping is tried and true. What other scoping mechanisms are usable for humans is still open.

### My recent work gets away

My recent work gets away from lexical scoping. Jonathan Edwards says it best in his Subtext paper: "names are too rich in meaning to be wasted on compilers." The alternative is to link symbols directly into code using some kind of search/selection mechanism, but then we no longer have a simple textual language in which to read/understand code.

### comedy dialog (meant in a nice way)

I expect to work in C shops -- or a near moral equivalent: C++ or Java -- for the rest of my career, and some things won't fly. Absence of a text version of code, for example, would just get me laughed at. It's not as friendly as it sounds. I don't want to give you negative feedback, but you might want to internalize models of incredulous coworkers. I'll try not to be too funny, but here are a few to think about. (You seem like a nice guy, so I don't want to accidentally give an impression of being critical here. Imagine me with a good attitude.) None of these guys are real people; these are abstractions whose behavior may appear in real colleagues though.

Zeb always goes last, and just says, "Hangin's too good for him." He wears bib overalls and a straw hat, and chews a weed while glaring at you squinty-eyed.

Capone is Robert Deniro doing an Al Capone impression, who goes directly into a more-in-sorrow-than-anger routine while looking around for a baseball bat. He usually opens with, "I trusted you, and gave you this responsibility, and this is how you repay me?"

Your manager is a guy named Jer with a mohawk, who wears a red and blue striped tie with a pristine white button-down shirt. He was wild and crazy when young, but now wants to project a corporate image. Jer would never dream of being critical, and instead acts like you're joking, and laughingly says, "Stop kidding around. What were you really planning?"

A close coworker named Tex takes a break each time you have an odd idea, kicking up feet on a desk and reaching for a big black cowboy hat while grinning hugely. He says, "Tell it again, I needed a break anyway."

Here's how it goes down with a plan for code with no text version:

You: Instead of text, the runtime is emitted from this other stuff.
Tex: (reaching for hat) That's hilarious!  I love these breaks from real work.
Capone: This is what I get for letting you be involved?
Jer: (chuckling) Such a kidder.  Come on, what were you really thinking?
Zeb: Hangin's too good for him.


You can see how I'd go to some trouble to avoid that. Is your plan necessary?

### Hehe, now this was funny.

Hehe, now this was funny. :)

You really have your feet on the ground!

### This goes over my head, but

This goes over my head, but that's ok.

### I got carried away

Normally I resist creative urges. Sorry, I meant to say when you have a really novel idea, there's a problem with acceptance, and getting along with teams is a high priority. A development team camps in a local code stack with known problems (analogy: bears at Yosemite), and the further you go from camp the weirder you sound, until you go beyond the pale (cf ancient fence technology). I was trying to give body to patterns in negativity, perhaps uselessly.

Jules's observation that the old school will eventually go away is well taken. But it doesn't help in the near term.

I can ask constructive questions about the non-text approach instead. In source code, text or not, things refer to one another. What creates identity? How is change over time managed? Can any of the old tool stack survive in a useful role if diffs require large graph-to-graph comparisons?

### Here is a (draft?) paper on

Here is a (draft?) paper on the topic of programming language adoption, by lmeyerov of LtU fame. Unfortunately, because it's a difficult problem, it is more a paper that lists the pertinent questions than one that provides a silver bullet.

For non-ASCII-array programming in particular, it seems hard to design it into an existing system. But you could still optimize for the other points in the paper, for example by making it really easy to try out and by focusing on one small domain first.

### I wasn't offended at all, no

I wasn't offended at all, no worries.

There are lots of points in between text-only and structured editing. If we just want to innovate with naming, we could always have very long globally unique names (think Wiki), where a smart editor optimizes the presentation and input of these names (show a short non-unique name, and allow input by that name if context allows for easy disambiguation); we are then just one step removed from a pure text representation. This is already fairly disruptive, of course, but it's not a huge step, and existing tools still somewhat work since they don't really care about name length.

### Escaping the fly trap

When you're using a subset of implicit conversion, overloading, polymorphism, and complex scoping rules the interactions can always be difficult.

I'd use stronger language there...

You cannot escape these features in a programming system.

I disagree. You don't need implicit conversion; it's evil. You don't need ad-hoc overloading; if anything, do it with (type) classes. And baroque scoping rules are a definite sign of language design gone wrong. The only thing on your list that is really indispensable is polymorphism.

### All fair points. Back to the

All fair points. Back to the OP's post, let's check how that works out in the case of JavaScript:

Polymorphism ... Check. Well, the object prototype-based flavor of it anyway.

Baroque scoping rules ... Check. And it's not just about the this keyword and/or the global object; help yourself:

http://matt.might.net/articles/javascript-warts/

Implicit conversions ... Sort of check. valueOf is automatically called whenever the adjacent subexpression context involves one of the basic "types" (those for which +, -, *, etc. have a default meaning to start with).

Overloading ... Hmm, not quite check. Sure, call sites in JavaScript don't care how many parameters the methods expect. Plus, callees always have a handy arguments object to play with, no matter what the formal parameters of the definition are. I don't think this quite counts as "overloading", though, since that usually means distinguishing methods with the same name but different signatures at call sites, not relying on the callee object's cleverness to do the dispatch itself.
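A quick sketch of that callee-side dispatch (describe is a hypothetical function, just for illustration): the call site never checks arity, so the function can inspect arguments and branch itself.

```javascript
// Poor man's overloading: dispatch on arguments.length inside the callee.
function describe(x, y) {
  if (arguments.length === 1) return "one arg: " + x;
  return "two args: " + x + ", " + y;
}

console.log(describe(1));       // "one arg: 1"
console.log(describe(1, 2));    // "two args: 1, 2"
console.log(describe(1, 2, 3)); // "two args: 1, 2" -- extras silently accepted
```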

Along with its being highly dynamic (you can attach code to an instance reference (almost) any time and from (almost) anywhere, I mean, c'mon!), all this would a priori make JavaScript a poor contender for broad acceptance and productive use at large scale, that is. Let alone innovative or novel.

Recall that some were focused on targeting C, C++, or maybe Java as compilation targets for more "noble" languages. But a decade ago I never read anywhere about even remote contemplation of targeting JavaScript (modulo the performance concerns that would have been expressed).

(Corporate-wise, I knew a dev manager who had decided once and for all to remain uneasy about JavaScript, reluctant, to say the least, to give it anything but trivial client-side validation tasks, though we could obviously do more, as soon as he learnt that JavaScript doesn't require more than the simplest text editor to code in, rather than an expensive IDE suite. I know. It doesn't make any sense.)

Yet the rediscovery of the language literally exploded about 6 years ago, after 10 or so first years of rather mediocre use, on average, from what I saw in the industry (or not super exciting, anyway). So people finally dropped the condescending attitude and started to actually look at what was in there. Better late than never, as they say, I suppose.

It's becoming so ubiquitous it's even being used on a side one wasn't quite expecting a decade ago: servers. And not just as an applicative PL, but also as a codegen target for other languages, as alluded to above.

How come? Is it just thanks to well-informed people like Crockford, Flanagan, and others?

Besides saying that its simple functional features certainly contributed to the recent boost, it's still mostly unclear to me. (I had seen its usefulness even on the command line for backend tasks 12 years ago already, and I'm really not smarter than average, seriously. Though people found me odd for not sticking to the tedious and sucky Windows batch file syntax. Oh well.)

If your point is that bad design decisions don't prevent widespread adoption, then sure, I agree with you. Language adoption is due to many factors, technical superiority being relatively far down the list.

(Btw, JavaScript has overloading, in the sense that e.g. + does something completely different when applied to strings, etc. In an untyped language, it's the callee who has to make the case distinction. Polymorphism is vacuously present for any untyped language. Scoping is horrible in JS, but ES6 will at least provide cleaner alternatives.)

### Coming from a non-dynamic

Coming from a non-dynamic and typed languages setting/culture, I tend to stick to seeing overloading in the "traditional" sense of C++/C#.

I guess it's time I update my nomenclature.

Btw, my point was more that, conversely, it's maybe rather regrettable that the (few, though quite powerful) nice design features of JavaScript took "so long" to be noticed or given the consideration they deserved.

YMMV, but AFAIC, I can cite what I've always liked from the start (or as soon as I got familiar enough with it):

• simple, intuitive, C-like syntax

• prototype-based extension of objects' behavior (though it takes a while to be properly understood vs. the class-based approach)

• enough syntax and semantic support for an FP style (i.e., including anonymous functions at a minimum) for whoever cares enough

• overall simplicity combined with good flexibility, mostly thanks to its dynamic nature, allowing for quite a wide range of programming styles, or even easy experiments with internal DSLs

(well, granted: once you've got "The Good Parts" right, to put all this to work, at least, & *wink* to a certain book :)
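The prototype-based extension item above can be sketched in a few lines (Point is a hypothetical type, just for illustration): behavior added to a prototype after the fact is immediately visible on instances that already exist.

```javascript
// A plain constructor, no method defined yet:
function Point(x, y) { this.x = x; this.y = y; }
var p = new Point(3, 4);

// Extend the prototype later; existing instances pick it up via the chain.
Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

console.log(p.norm()); // 5
```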

Hmm... and ???

well, I did say it was a short list. :)

However, should I have to compare JavaScript to some other supposedly expected, so-called "competitors", some wannabe (initially) in the same ballpark as JavaScript, after a first, oh-so-naive and inattentive glance at them, say, like VBScript ...

... Well, on my end, it's a no-brainer: no personal offense meant to VBScript's designers, but JavaScript, with all its flaws, had won, and by far, more than a decade and a half ago!

### JavaScript's success

JavaScript wasn't successful because it's nicer than other languages. It's simply that it was the first and only language on the web, it worked well enough, and the web got huge.

### I think you're right, that

I think you're right, that pretty much sums it up, for what I can recall.

The especially important part of your sentence not to leave out is "worked well enough" (and that it was about web pages, as rendered on the client).

Strictly chronologically speaking, some have had different hopes earlier, though.

### Good points,

(@skaller)

Good points, and:

Mathematics is the art of making stuff as ambiguous as possible!

That's a cute way to put it, I'd tend to agree. :)

But by that, I take it you actually meant "... as ambiguous as bearable" (?)

;-)