Rob Pike: Public Static Void

Rob Pike's talk about the motivation for Go is rather fun, but doesn't really break new ground. Most of what he says has been said here many times, from the critique of the verbosity of C++ and Java to the skepticism about dynamic typing. Some small details are perhaps worth arguing with, but in general Pike is one of the good guys -- it's all motherhood and apple pie.

So why mention this at all (especially since it is not even breaking news)? Well, what caught my attention was the brief reconstruction of history that Pike presents. While he is perfectly honest about not being interested in history, and merely giving his personal impressions, the description is typical. What bugs me, particularly given the context of this talk, is that the history is totally sanitized. It's the "history of ideas" in the bad sense of the term -- nothing about interests (commercial and otherwise), incentives, marketing, social power, path dependence, anything. Since we had a few discussions recently about historiography of the field, I thought I'd bring this up (the point is not to target Pike's talk in particular).

Now, when you think about Java, for example, it is very clear that the language didn't simply take over because of the reasons Pike marshals. Adoption is itself a process, and one that is worth thinking about. More to the point, I think, is that Java was (a) energetically marketed; and (b) was essentially a commercial venture, aimed at furthering the interests of a company (that is no longer with us...) Somehow I think all this is directly relevant to Go. But of course, it is hard to see Go gaining the success of Java.

All this is to say that history is not just "we had a language that did x well, but not y, so we came up with a new language, that did y but z only marginally, so now we are building Go (which compiles real fast, you know) etc. etc."

Or put differently, those who do not know history are doomed to repeat it (or some variation of this cliche that is more authentic). Or does this not hold when it comes to PLs?

There's certainly a paradox

There's certainly a paradox that a language that can take off cannot innovate very much, because it would otherwise be too far ahead of the market; but this timidity of innovation will sooner or later lead to obsolescence. The only way out seems to be in the direction of Lisp or similar lack of syntax and extensibility of compile-time abstractions, but that in itself is a move too far, and can be undesirable for other reasons.

Got the t-shirt

I happened to be there for Pike's talk. He made me laugh a few times, e.g. quoting some Java "Integer x = new Integer(80);" and replying "what was wrong with the old integer 80?" But overall I was as bothered by his talk as Ehud was.

His description of the histories of C++ and Java missed important points.

C++ - he doesn't even mention the rise of the GUI on under-powered hardware or the commercial interests of Microsoft and others. Also, his description that C++ lacks GC because somehow that's "more sophisticated" completely ignores some of the actual motivations behind the original design of C++ (C compatibility, "don't pay for features you don't use"). We can curse or bless the lack of GC as much as we want, but the choice to avoid it wasn't motivated by a desire for 7 different types of Boost smart pointer.

Java - he doesn't talk about the rise of the Internet; the fractured landscape of the time, with Microsoft on the desktop and Unix or mainframes in the data center; and the commercial interests of Sun or IBM pushing Java and Java tools cheaply to sell their hardware. He's right that Java was deliberately designed to look C++ish, but from where I stand that's only a small part of Java's uptake.

The other big hole in his talk was not at least mentioning the family of statically typed MLish functional languages. While they don't (yet) have the massive uptake of C++ or Java they do stand as existence proofs that static typing doesn't mean verbosity, which seemed to be the heart of his argument.

I think all of these are

I think all of these are good points. I think one thing to ask is not why Java/C++ suck, but to give a realistic answer as to why they succeeded. Then you can copy them (if you are on the Go team), or at least try... But the logic of the talk was that Java/C++ succeeded for the wrong reasons, and they are going to do something different (go for the old-time virtues that were lost) and expect the same success. Seems a bit iffy logic... :-)

BTW: I think it is more interesting to ask why Ada failed where C++ succeeded, not why Smalltalk didn't.

Predictable

Java, certainly Java 1.0, as a language is just nice and small and has a quality I can only describe as predictable, with a low barrier to entry. I mean, if you know Modula, or C, or Pascal, there just is relatively little to know. Just learn the few constructs, and, "Bam!" you're programming.

I don't know all the ins and outs of early industrial support for Java, but, as a language, I think this is the one quality which sets it apart from all those other attempts.

Unfortunately, the same doesn't seem to hold for Go. (Though, this is, of course, highly subjective.)

Go is very easy to reason about

The language specification is online; the message passing is synchronous and rendezvous-based by default. The memory model is very sane.
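
For readers who haven't tried it, here is a minimal sketch of that default rendezvous behaviour (the example itself is made up): a send on an unbuffered channel blocks until a receiver is ready, so the two goroutines synchronize at the communication point.

    package main

    import "fmt"

    func main() {
        ch := make(chan string) // unbuffered: sender and receiver must meet
        go func() {
            ch <- "hello" // blocks here until main is ready to receive
        }()
        fmt.Println(<-ch) // rendezvous happens at this point; prints "hello"
    }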

The only part where I find I can get confused about Go, is the same in any duck typing system. When you don't have to declare the interfaces you implement, you either have to just know that something already implements that interface or search for stuff that does implement it (somehow).
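
To make the duck-typing point concrete, here is a small sketch (the Point type is invented for illustration): a Go type satisfies an interface simply by having the right method set, so nothing in the source declares that it implements fmt.Stringer.

    package main

    import "fmt"

    type Point struct{ X, Y int }

    // Point satisfies fmt.Stringer implicitly; no declaration says so.
    func (p Point) String() string {
        return fmt.Sprintf("(%d, %d)", p.X, p.Y)
    }

    func main() {
        var s fmt.Stringer = Point{1, 2} // compiles because the method set matches
        fmt.Println(s)                   // prints: (1, 2)
    }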

I've managed to write pretty complex software with very little time in Go. Java took more mind bending to get into as everything had to be in a class, and there were two families of exceptions (checked and unchecked).

Getting up and running with Go simply doesn't take very long, and I don't understand the comment (subjectively or not).

Yeah, it's subjective

I can read Go and probably can program small programs in it pretty fast too. To be honest, the first thing which came to my mind when I tried Go online was: "Hey, it's Algol!"

The thing I didn't like was the advanced manner of dealing with goroutines and channels. It's too far outside my own programming experience, and I found the syntax and semantics mind-bending.

That's what I mean about Java. You are right that it is way more difficult to program some tasks, pun intended, in Java than in Go, but there are no conceptual surprises. It's just plain old-fashioned imperative programming, without advanced concepts and with a simple OO layer added on top.

And I think that is a good thing for early adoption. Most programmers don't want all those features. They want to dive in, head first, and start programming. Meanwhile, they'll invent libraries for doing all those things other more complex languages support out of the box.

I have the feeling that after that initial period, programmers might miss or want more features. I am not sure that providing them in the first place makes a lot of sense in helping adoption. I.e., now, Go is asserting: "We are a language good at multi-core server programming," instead of, "We are just a way better, faster bash/Python replacement (and we do multi-core server programming too.)"

Ada Failure

Ada failure? My personal take: Too little, too late; combined with the fact that C++ (horrible) just mixes well with C.

Too little? Like tasking,

Too little? Like tasking, namespaces, exceptions, etc.? The only thing missing is classes, but then, many don't think classes are that great an idea. More to the point, did everyone already know classes were what they were after back when C++ arrived? Did it fill a void, or create one (pun is not intended, but is welcomed nonetheless)?

Too late? As in Ada had this in 1983, C++ didn't?

Mixes well with C? Well, that's a valid point, I guess, not that I am sure why it would matter on platforms that are not C based (and there are quite a few of those), and OS's that don't favor C (I know a couple of those as well).

So while I know exactly what you are getting at, that's exactly the sort of history I was trying to warn against. No offense.

Uh, yes I think so.

Too little? Like tasking, namespaces, exceptions, etc.? The only thing missing is classes, but then, many don't think classes are that great an idea. More to the point, did everyone already know classes were what they were after back when C++ arrived? Did it fill a void, or create one (pun is not intended, but is welcomed nonetheless)?

Yes, too little to have industry switch from C as a systems development language.

Too late? As in Ada had this in 1983, C++ didn't?

Yes, too late, as in the major OSes and tools were written in C, not Ada. Had there been a viable major OS alternative written in Ada, life might have turned out otherwise.

Let's agree to disagree.

Too little?

I also disagree with the statement that Ada had "too little." This is just anecdotal, but I worked for a couple of major defense contractors from the early to the mid-90s and did quite a bit of Ada. Many of us came to it with a C and C++ background. The biggest issues we ran into were not with the language itself (which we quickly grew to respect), but with the tools (including compilers) available for it at the time. Mind you, this was pre-GNAT. The mandate combined with few vendor choices eventually made it very unpopular (buggy and expensive is a bad combination).

Hey, I accept that Ada is a

Hey, I accept that Ada is a better language on some features. (Some features, because I don't know how good Ada is at all those features which make C sometimes a very pleasant low-level systems programming language. I like being able to cast a float pointer to a char pointer sometimes - i.e., my garbage collector couldn't be written as concisely without that feature.)

It isn't meant as a derogatory statement. Just an observation that Ada didn't deliver enough in features, tools, or economic circumstances to tip the market in its direction after the widespread adoption of C.

Ada for systems programming

I won't argue about the tool side. I wrote most of my Ada code around 2005, in Emacs. I don't think there was a language-aware IDE for Ada back then. And if you want to use Ada 2005 language features in proprietary software, there do not appear to be many licensing options.

But I doubt that there is anything you can do in C which you cannot do in Ada. Actually, the thing you mention, casting a pointer to float to a pointer to int, is not allowed in C. There are workarounds, and compilers tend to recognize them and avoid aggressive optimization based on alias analysis. Ada doesn't guarantee that it will work, either (although there is support in the form of Ada.Unchecked_Conversion), but similarly to C, Ada compilers tend to make it work, too.

Aliasing

I aliased a pointer through a char pointer, which is allowed by the C standard; aliasing a pointer to float through a pointer to int isn't, as you correctly noted.

I am struggling with that at the moment, as you can read on my website (hi-language.blogspot.com). The problem is, I am not really sure which parts of my compiler rely on aliasing.

Re: Ada failure

Does anyone remember what the educational costs for Ada were?

From 3rd grade elementary school to graduating university, I learned Logo, Basic, Borland Pascal, C++, C, Java (1.1-1.4), Scheme, Haskell, and CafeOBJ (in that order), along with a few specialized languages. In hindsight, I would have appreciated an introduction to Scheme, Haskell, and Maude (in place of CafeOBJ) much earlier in my education.

I understand that a major contributor to languages offered in schools is the cost of an educational license (which, if much above $0, is likely to be rejected).

I remember hearing about Ada. The general impression I received is that Ada was verbose and painful, but also critical for a defense job, so I looked it up. But I wasn't able to find any educational or personal license implementations.

By the time I left Uni, the DoD edict for Ada had been abandoned, and an awful mix of C++ and Java ruled the software projects.

Everyone in my defense job tells me how painful Ada was, and I'm not inclined to disbelieve them. But with how many concurrency, segfault, and memory-leak errors I see in the software we use today (from contractors, universities, and even DoD development houses) I think that generally rescinding the order was a mistake. (Weakening it to verifiable languages would have been okay.) I'd rather pay up front than pay out the back.

GNAT

I used GNAT in 1999, approximately between Java 1.2 and 1.3 on your timeline. The implementation was far more usable than what we had for Modula-II (the other teaching language). So there actually was a low-cost implementation out there (at the cost of an account on a machine which could run it).

May 23, 1994

Incidentally, the Computer History Museum says that on this day in 1994, Java development begins in earnest.

I announce a day of mourning.

I announce a day of mourning.

I don't see why...

that's the last day that anyone could realistically suggest a memory-unsafe language for general-purpose use. Ever.

History of Java is very interesting

We had a brief side discussion on the FONC mailing list recently, where people chipped in random quotes from around the Internet about why Java succeeded (and Smalltalk failed), and why AWT and Swing won and the Smalltalk approach to graphics failed. Essentially, very little can be chalked up to technical reasons.

Here is a round-up of the links we tossed around:

History

Ehud: Or put differently, those who do not know history are doomed to repeat it (or some variation of this cliche that is more authentic).

"History doesn't repeat itself, but it does rhyme." — Attributed to Mark Twain

That's a great line. I will

That's a great line. I will do my best to add it to my repertoire.

People don't change

People don't change. Therefore, if you understand history, you understand people. When you see new technology being pitched, the chances are it is an old (and probably bad) idea being revisited. If it is a good idea, the chances are it is being revisited without knowing it, and therefore done poorly, perhaps even worse than the first go-around.

It doesn't get any more authentic than that.

Somebody recently asked me what I thought of node.js, and I started off with that perspective. Bigger picture: this is the direct result of an undertrained programming workforce, not knowing the difference between a good solution and any other solution.

could you elaborate your

could you elaborate your comment on node.js?

node.js is your typical open source project

Good idea: Message-passing concurrency.

But when you actually listen to the creator of node.js give a talk, he glosses over important issues like the semantics of callbacks. I watched a talk where he said that when you invoke a callback, you are in a side-effect-free world unto itself. This is bizarre, because nothing in node.js guarantees this. It is all an illusion based on marketing. People are so starved to use good ideas that they will use them without any way to actually enforce and predict well-behaved application of that idea.

I think you mean "rime"

I think you mean "rime"

java history

I remember the period of time when the Oak programming language was being dusted off and proposed for use on the Web. I remember conversations at my workplace from the time shortly before it was announced that Netscape would support Java in Mozilla.

The push for Java that I witnessed, then, came entirely from (mostly non-technical) management downwards. For example, early "leaked" specification documents circulated well outside of Sun, but they weren't being handed out by trench-level hackers who had found an interesting language to talk about. Rather, they were being handed out by executives trying to build buzz among the engineers. (These are the conversations that happen when an exec shows up, unusually, in your cubicle or office -- hands you a stack of papers from Sun -- says (in a dramatically hushed voice) "These are secret so don't talk about them yet, but take a look.")

The language was being pushed not because "Oh, this is so much nicer than C++!" Rather:

The language was being pushed in that period as a Microsoft killer. The "write once, run anywhere" story was, to these executives and financiers, the breakthrough "innovation". Logically, the same story would apply to just about any VM-based or interpreted general purpose language with suitable libraries. It was new for them and since they "got it" from Java, well, that was the Java story. Why did they care about a "Microsoft killer"? Well...

At the time, Microsoft was taking its first hits from the Dept. of Justice (after plenty of prompting from Silicon Valley). The Oak guys figured out that they could revive their failed project if they could get it into browsers with the "story" that this creates a new platform for personal computing. Microsoft may control Windows (the "dominant platform") but if all of your personal apps run instead on this virtual machine then Windows means nothing. That is about as deep as the executive class seemed to understand things.

Given that *abstract idea* and the fact that big gorilla Sun was saying "and Java will do it" ... suddenly all the cheer-leading culture of Silicon Valley was in full-throated four-part harmony and the hype bubble was inflated: Java was the future. Nobody had looked very carefully at Java yet, but many suddenly knew it was the future.

With the executive and financial class unified in that vision, the national press picked up the story from the trades. Conference host and publisher O'Reilly added a lot of gravitas to the brand.

Gosling was at a TED conference showing off animations on web pages, something nobody had seen before. Sun had a story to go with it. A good story. Something about virtual machines killing Microsoft's dominance. Side chapters about "convergence" for everyone, too.

That all gave Java a lot of momentum and good will early on, but in lots of vision directions that didn't work out. Java-in-browser didn't kill the Microsoft Desktop and probably never will.

In spite of that messed up initial vision, Java continued to gain importance. How?:

That early momentum warped markets. Serious money started being spent on competing Java / JVM implementations and libraries. The labor pool started rushing towards creating an oversupply of "Java skills".

Many other commercial projects in the subsequent years (e.g., developing middleware products) recognized the need for a "high level" and ideally "highly portable" language -- with lots of libraries for web stuff. Java was a natural choice. IBM, which likes to be vertically integrated, recognized this as an economically important space and invested heavily. Some other heavy hitters did as well.

Late in the game, Java came to warp even academic curricula.

Here is the key thing about that story:

The economic and cultural success of Java was largely secured in those initial days before very many people at all had looked seriously at the language itself. It did not succeed because, say, "it was a better C++". It succeeded because it came from some powerful people and came with an idealized story about how it was going to conquer some big markets. It came with a "just so" story for investors and executives who don't care much for the technical details, just the big picture. It didn't conquer those markets named in those early stories. In fact it lost pretty hard in those markets. But the story was good enough, and spread among the right few people, that the early bubble of investment in Java guaranteed its long-lasting importance as a computer programming language.

I think your narrative

I think your narrative under-emphasizes how important it was that Java was a kind of C++ done better, with nice and familiar curly braces and C statement / expression syntax etc., rather than a more alien thing. If Java had been like Lisp or ML, but still had the same people / money / etc. behind it, it wouldn't have worked. But then, the same people wouldn't have been behind it, because tech people they ran it past (to avoid looking stupid) would have pushed back too hard.

re: I think your narrative

Are you talking about a syntactic distinction among languages? Or something deeper?

For example, what if the language were Dylan?

A combination of both. A

A combination of both. A syntax not too distant from C (Dylan might smell too much like Pascal or other wordier Algol rather than the somewhat more austere C); and a semantics which doesn't introduce too many new concepts (e.g. closures, dynamic typing, syntax macros etc.) that would trigger various phobias (too academic, too slow, too expressive, etc.).

Conceptual distance

We went back and forth on syntax in BitC, as you might imagine. In the end, the consensus emerged that there is some conceptual distance above which syntactic familiarity becomes a liability. The remarkable thing about Java is just how far they managed to stretch that distance successfully within the pre-existing syntax.

The Problem with your Story

Is that you can probably tell the same story for Pascal or Ada, yet these didn't make it.

I'd like to see you try to tell the same story for either Pascal

or Ada.

Just sayin'.

Why? I mean that...

I am sure you can find stories of industry backing Pascal and Ada too (especially Ada). But, despite that, those languages didn't make it. So, by extension, I am not sure you can say that because of that (industrial support) Java made it.

I.e., my point: Industrial support might be necessary but not sufficient for language adoption.

I simply agree with Thomas

I don't know anything in Pascal's or Ada's history that seems similar. I could probably stretch it to make it look vaguely similar, but it would be stretching it.

That said, I would like to see you stretch it, since you suggested you could.

Point taken

What is the difference between "We will deliver a language which will make all languages obsolete" (Ada) and "Write once, run everywhere" (Java)?

My point: DoD/Industry backed Ada with a slogan; Sun backed Java with a slogan. Similar story.

I guess your point would be that the slogans and support were different. Different story.

We will not agree on this.

Write once, run anywhere vs making all languages obsolete

I'd say that the difference is that "write once, run anywhere" is a very concrete goal solving a very concrete problem, whereas "making all languages obsolete" is a great deal more nebulous. What problems will this new language solve? Why are existing languages insufficient in this regard? etc

Yeah, well, they started Ada

Yeah, well, they started Ada off with a very clear goal in mind: to make a language which would replace the hundreds of languages in use in systems engineering at defense contractors at that time. All I am saying is that a lot of the reasons which contributed to the success of Java, like solving a major problem, industrial backing, and playing games at generating industrial buzz, can be found in the case of Ada too.

That's the problem. One can always look at a winning horse and give arguments why the horse is winning, and one is always right since the horse won, right?

It is always very unclear what reasons lead to a market leading language. Why prefer Python over Lisp? Why prefer C over Ada? Why prefer Java over Pascal or Eiffel?

Momentum seems to have a lot to do with it. Simplicity of the language and tool support too. Industrial backing sure weighs in a lot. Solving a problem is probably important too. In the end, my guess is that you just need all of that and more, and maybe then a language stands a chance.

Maybe it's the wrong question anyway. Maybe one should be looking at reasons why a language fails, and derive from that why other languages won.

You make good points,

You make good points, imho.

As to how to frame the problem of language "success" vs. "failure," I would stick to what has been said before:

Languages always stink anyway, because by design they always look at problem ontologies from a biased point of view, with their own assumptions about them, fixed at their inception point in time. None of their intrinsic design qualities or flaws can enable their communities to predict or anticipate long-term and large-scale success. But a language with no implementation other than in the brains of a few is useless anyway.

So my guess is that the best bet is to have the language design not handcuff its implementors when the time comes for the tooling to remain useful with the programs and libraries already written, despite the unavoidable deviation from those initial assumptions that is yet to come.

I know, easier said than done.

Languages cannot predict whether or not they will be able to embrace the shift in cultural expectations caused by other languages and tools.

Ada/Pascal history

No.

While Ada never quite did fail, it didn't quite succeed as much as might have been initially envisioned. It was very different from the Java case. It started out as a somewhat imprecisely stated and controversial problem statement, refined into an RFC / RFP process ... and an excruciatingly slow and public language definition process. By the time it was "ready" the initial problem statement was in doubt and the solution offered was expensive. It wasn't popular with businesses outside of certain niches (though some niches that made a decent amount of money). This is almost backwards from Java, which starts off as a (previously scrapped) language that's ready-to-go and a newly discovered "problem," with a story that says how the language solves the problem and why we need to act quickly and without too much critical analysis of the language itself. Ada and Java are very, very different histories.

Pascal made a pretty decent chunk of change back in its day for a few. It "made it" pretty well in the early days of the PC. In some ways, it very much was the Java of a smaller, earlier incarnation of the industry. There was no Microsoft-of-the-90s monster to be vanquished by Pascal - mostly Pascal was a cure for portability problems and an aid for avoiding ASM and keeping dev costs reasonable. It did alright.

You're comparing apples and oranges and mainly missing my point.

Ada

I don't agree. Let's look at your key insight.

The economic and cultural success of Java was largely secured in those initial days before very many people at all had looked seriously at the language itself.

Many people have assumed that DoD/industrial backing is enough. Ada wasn't even designed yet.

It did not succeed because, say, "it was a better C++". It succeeded because it came from some powerful people and came with an idealized story about how it was going to conquer some big markets.

The people behind Ada had a good story to tell too about how it would change everything, and it was designed to be better than C.

It came with a "just so" story for investors and executives who don't care much for the technical details, just the big picture.

What's not to like about an infinite supply of DoD investment?

It didn't conquer those markets named in those early stories. In fact it lost pretty hard in those markets.

Ada didn't even lose in the embedded market. Still, it isn't as popular as C.

But the story was good enough, and spread among the right few people, that the early bubble of investment in Java guaranteed its long-lasting importance as a computer programming language.

Or maybe it was just a small, nicely designed language which crept into the embedded and browser space by chance.

[ Again, you told a nice story of the adoption of a winning language. Well, it looks to me that initially Ada had more industrial backing than Java, and similar games must have been played. Didn't make it an outright winner, and it still has to compete heavily with C in the embedded market. ]

re: The Problem with your Story

Huh?

Is that you can probably tell the same story for Pascal or Ada, yet these didn't make it.

I have no idea what you mean. Nothing in the history of Pascal or Ada seems similar to me.

See my post above.

See my post above.

For too many programmers,

For too many programmers, the last word on Pascal, the last nail in its coffin, is Brian Kernighan's outdated excoriation, and Ada had a similar reputation for bondage and discipline. In my opinion, probably the chief innovation that Java brought to the wider industrial programming market was simply the acceptance of the value of memory safety and GC; and even then, it left a lot of older programmers behind who shunned it because of that.

I work on the Delphi language, and the animosity people in our user base have for the fact that the IDE has some components written using .NET is something to behold. The vehemence of the hate for VMs and GC and similar "lazy" and "slow" approaches can be depressing.

Java in the browser

I feel you're underrepresenting the idea of running Java in the browser.
In 1994, the idea of the web as the universal ubiquitous computing platform was clear, but all computing was being done on the server; there was a big push for browser-side computing, but portability and security were big issues. Java essentially solved that problem, running anywhere in a safe (sandboxed) fashion. The initial "applet" proposition was very effective in making that proposition clear. The HotJava browser made it clear that Java wasn't limited to applets.

I agree that browser-side

I agree that browser-side computing (and the "write-once-run-everywhere" fantasy) were important. Of course, we should not forget that applets in the browsers are very rare, and that Java is actually a server-side language...

surely...

...you meant "currently exploited mainly as a server-side language".

And even at that, it's probably too strong...

considering the enormous and growing base of Java-enabled phones out there.

Fair enough. Doesn't explain

Fair enough. Doesn't explain the initial trajectory of the language, though.

re: Java in the browser

re: "there was a big push for browser-side computing, but portability and security were big issues. Java essentially solved that problem, running anywhere in a safe (sandboxed) fashion."

I disagree. You are just repeating the hype that drove top-down adoption of Java back then.

Portability and security were not hard issues back then. It made sense to market Java *as if* those were hard issues back then because Java had a neat way to statically check bytecodes and eliminate run-time overheads -- and that's an interesting programming trick to talk about. Nevertheless, there was nothing hard about portability or sandboxing back then.

HotJava didn't make much "clear" back then. It didn't work very well. It was, as I wrote, very influential for top-down adoption of Java because of some early and high visibility demos.

I think we're also

I think we're also forgetting that most browser-side apps were in the commercial intranet setting of the pre-Google Maps era (though I have no idea about VBS/ASP vs. Java/Java market shares): just because it wasn't on the web doesn't mean it wasn't running in many browsers.

I was certainly keeping this

I was certainly keeping this in mind. While I don't have any figures to back this up, my impression is that this was a far cry from what the applet mantra would have had you believe. I was especially fond of those applets that let you run "FORMS" apps through the browser...

Is being the "dominant" language the goal? Should it be?

Why is it that we tell the history as if only one language existed in each era? We know this is false. What does this tell us about the uses of this story telling? What does it tell us about the field?

It reminds us of the

It reminds us of the economic fact that there are network effects to programming languages, and that users of programming languages for whom technology is not a core competency feel a strong need to be part of a large crowd of sheep. It makes them feel safe; safe that they can find new employees to look after their IP for reasonable prices, and that they won't become overly dependent on a few wizards who might leave for new pastures.

history of ideas

I'm very glad that you mentioned 'the "history of ideas" in the bad sense of the term'. As in all history of ideas, if you don't do it carefully then each time period gets oversimplified and therefore seems to have exactly one dominating force. E.g. (from my own academic discipline), in the 19th century Kant solved the problems raised by Hume, in the 20th century Quine killed the analytic/synthetic distinction, etc etc etc. Simple claims like that are not only false, they're (maybe more importantly) massive oversimplifications of how people saw things at the time.

Social constructionism and

Social constructionism and diffusion of innovation theories (with almost a capital T at this point) would argue that being popular (not sure about dominant) is important for improving systems: generality, effectiveness, etc.

"But no simpler" is too strong

What this tells us is something about the human approach to knowledge in general. Einstein wrote: "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." - a.k.a. theories should be as simple as possible, but no simpler.

Einstein was wrong, at least in general: his qualifier "without having to surrender the adequate representation" is too strong. In practice, people use lossy compression to create theories, boiling choices down to as few as possible that still allow some sort of useful approximation to the subject being modeled. So e.g. the "dominant language" becomes the only language worth paying attention to, for the purposes of a simplified theory of the history in question.

In general I might agree

In general I might agree with you about Einstein, and I agree that historians can't study all languages, but they have plenty of time to study more than one!

That's true, but I think the

That's true, but I think the issue that Ehud referred to also has to do with the fact that most PL history is put forth by people who aren't historians.

Ah yes, fair enough.

Ah yes, fair enough.

Einstein, on the other hand, WAS talking about professionals; but that's just nit-picking. :-)

Nitpick accepted

Einstein was close enough for a domain like physics — although one could argue that even his own wonderfully successful theory of General Relativity is "too simple" in that it doesn't account for quantum mechanics (and vice versa for QM). But that's considered a kind of flaw that ultimately needs to be addressed somehow.

But in a domain like the history of programming languages, matters are fuzzier, leading to discussions which are prone to quick degeneration into quagmires such as postmodernist Marxist conspiracy theories... ;)

Hush!

Don't you know you should ROT13 M.a.r.x. to Z.n.e.k. to avoid being marked in a CIA data centre as a commie? ;-)

[ That's a joke. I actually liked Thomas' Marxist analysis of Agile programming. Darn those hippies. ;-) ]

Off topic here, but I think

Off topic here, but I think Einstein agreed that General Relativity was probably too simple. I've lost my citation for this claim: if anyone knows one, please tell me!

Apologies in advance...

His field equation was too simple to account for a stationary universe, so he added an extra term: Lambda, the ultimate fudge factor.

Thanks, yes, but also I

Thanks, yes, but also I believe he said at some point that Relativity was wrong with or without the cosmological constant, and that we would have to find alternatives to the relativistic ways of looking at space and time. That's what I've lost my citation for.

that's too simple, Anton

I disagree, Anton.

It is a truism that any way of talking about the history of programming languages is a simplification. All ways of talking about history are simplifications. That we like to simplify, perhaps by approximating, is not something we learn from "dominant language" discourse any more or less than it is something we learn from discourse that distinguishes "functional" vs. "procedural" vs. "logic" languages.

What is interesting about analysis in terms of language "domination" is not what details it elides, but which details it retains or invents - and makes central.

"Dominance" is not a property intrinsic - internal - to any language design. The concept makes no sense at all applied to a single language in isolation: it is always comparative. It is not only always comparative, but the comparison is not fixed: which language is "dominant" depends on the context in which all the languages we are comparing might be applied. One language might dominate another in the embedded systems context while the situation is reversed in educational contexts (for example).

Discourse about the dominance of a programming language -- well, I haven't formally surveyed it but from my informal experience reading it -- that discourse about dominance of languages is always about economic dominance (in a broad sense). We may not be comparing which language inventors made the most money -- but we do look at the broad range of economic transactions where one language is chosen over others.

"Dominance among PLs" discourse is not only an economic comparison, but a comparison that assumes, almost axiomatically, a "winner take all" situation.

It's easy, but I'll argue wrong, to take that belief in "winner take all" as a symptom of an underlying economic law of nature. The (in my view wrong) view says there is a network effect. (Some other comment here mentioned this in a first-order way.) The network effect means that a language which is popular will have books written about it, classes offered to teach it, libraries written for it, and programmers learning it. The economic value of each book, class, library module, and worker is, so to speak, "multiplied" by the number of other books, classes, libraries, and trained workers.

The economic network effect that increases the value of a language by multiplying its documentation, libraries, and workers is certainly real - I'm not denying that. I can (but won't here) argue that there are other, more economically efficient possibilities but certainly the language network effect is real as far as it goes.

But we can agree here that the self-evident economic network effect doesn't imply a winner-takes-all situation. (Which winner has taken all for writing simple web services? Python? PHP? Perl? Java? Lisp? Etc., ad nauseam.)

So where does that focus on language domination come from? It's an economic concept, not a concept intrinsic to the design of languages. Yet why the focus on a winner-take-all economy?

I believe the answer has to do with power (political and economic) in this sense:

We talk about "dominant programming languages" because we are speaking to capitalist interests in commodifying programmers.

If you look at the past several decades worth of materials relating to the management of programmers and software development projects, a consistent theme is the treatment of programmers as commodity -- the standardization of the labor pool.

The theme is reflected both positively and negatively. It is reflected positively in cliches like "Don't let your business risk failure should the lead programmer be hit by a bus" and negatively in cliches like "The most productive hackers are n-gazillion times more productive than every possible team of loser ordinary programmers." If you are managing a software project you are managing it using commodity solutions or decidedly not using commodity solutions, but either way, the top line of the description of how you are doing it relates to the commodification of labor.

Whether positively or negatively stated, the project of commodifying the programming labor pool (pro or con) is always the ground topic in these kinds of conversations.

To answer Ehud's original question:

Why is it that we tell the history as if only one language existed in each era? We know this is false. What does this tell us about the uses of this story telling? What does it tell us about the field?

We speak in terms of language dominance when the underlying discourse is about commodifying the labor pool for the programming domain in question. That commodification is concerned with the standardization of labor, reduction of labor costs, and the ancillary merchandising opportunities in books, classes, libraries and other support tools.

So powerful is this press for commodification that much of what is done in the PL theory and practice domain must be done in response to that commodification drive (whether joining it or resisting it).

I wrote a comment bringing up the network effect

The network effect "winner takes all" language for the web is Javascript. Your point about simple web services points to you ignoring what has historically been a powerful, if not the most powerful, vector of the network effect: language interop, specifically with the OS vendor's preferred language. Along this vector, few people care what a simple web service is written in, because interop is mediated through HTTP and JSON or XML or some other language, itself subject to network effects (Javascript's network effect was effectively enough to overpower the XML in AJAX).

Previously, interop with the OS's native API was a very strong vector for network effects, because example code would be written in it, documentation would describe APIs in the OS vendor's preferred language(s), higher-level widgets in GUIs would arguably have been most easily leveraged in a timely fashion using the anointed language(s), etc. But HTTP's stateless and simple, functional nature has added a layer of indirection. There is almost no alternative to the HTML+CSS+Javascript combo as the web platform languages of choice. They are completely dominant because of network effects, more dominant than C ever was. But the narrow bottleneck of HTTP has added more freedom on the server side than there has been in a very long time; on the internet, nobody knows your language is a dog.

And so other economic effects become more important than just network effects; certainly, the availability of labour, but also the balances of expressiveness vs performance, obscurity (so adventurous programmers using it will probably be better than average) vs popularity, momentum vs stability, etc.

Paul Graham expounds further on the same point, about how different the web is in this respect:

Back in 1995, we knew something that I don't think our competitors understood, and few understand even now: when you're writing software that only has to run on your own servers, you can use any language you want. When you're writing desktop software, there's a strong bias toward writing applications in the same language as the operating system. Ten years ago, writing applications meant writing applications in C. But with Web-based software, especially when you have the source code of both the language and the operating system, you can use whatever language you want.

But as degree of complexity and capability people expect in web applications increases, I wouldn't be surprised if we see a reemergence of a network effect on the server side too (on a timeline of, say, 10 to 30 years), if keeping up with the competition requires leveraging increasing amounts of third-party code through rich APIs rather than simple thin ones.

Commoditization of Programmers

If you look at the past several decades worth of materials relating to the management of programmers and software development projects, a consistent theme is the treatment of programmers as commodity -- the standardization of the labor pool.

The agile movement appears to be a refutation of that premise.

for OR against

Thomas said that everything is defined in terms of its adherence to commodification of the labour pool OR against it. The agile community (in the form of the extreme programming community on the c2 wiki, for example) spends a lot of time and energy explicitly and volubly resisting commodification.

It's important to keep in

It's important to keep in mind that the term "labour commodity" was originally introduced by Marx to express that labour is produced and traded like every other good. The resemblance to slavery was of course intended, such that the liberation of the proletarians could follow the pattern of the liberation of slaves.

Since "labour commodity" is the very being of the worker in capitalism, it can't be altered by management practices. The pervasively used BA term "human resources" doesn't magically change anything to the worse, just like agile programming practices with their absence of managerialism change anything to the good, with respect to the fundamental fact, that I sell my labour to some company and therefore I'm a "wage slave".

agile is pro commodification

Agile and related methods are formal methods. One good agile programmer is as good as any other good agile programmer with similar specific skills.

Agile programmers are, in that sense, interchangeable generic units.

Some parts of the agile process are specifically designed to ensure that projects can tolerate a modest amount of programmer churn, scaling up and down, etc. Agile programmers are in that sense expected not to be idiosyncratic but to conform to standards of process and documentation -- so that one may pick up where the previous left off.

Agile is therefore very much about (and in favor of) programmers as commodities.

One agile programmer as good as another

Agile and related methods are formal methods. One good agile programmer is as good as any other good agile programmer with similar specific skills.

I don't think anybody really believes that....even PHBs. One thing that I've witnessed over the years is that even great programmers sometimes don't get the domain on certain projects, and so aren't as productive as they are on some other random project.

I don't get religious about agile, but I think the whole gist of it is so that teams don't get too far down that rabbit hole where your costs for "redoing the architecture" are exponential....the whole point of not doing waterfall.

In the past decade or so, a lot of management (not all) has realized that programmers aren't interchangeable cogs. The holy grail for management is to "get rid of programmers" and let business analysts have at it.

dehumanizing agile programmers

I agree with what you are saying. We can generalize it:

Programmers are not (yet?) thoroughly commoditized. In fact, many of the more interesting programmers get very idiosyncratic work that few others could take on. Many, many programmers are basically working as commodity grunts but far from all. "Agile" functions as a commodity category when it is used to filter resumes - but it doesn't work that way when hiring managers sometimes look for non-commodified qualities (like "really gets the problem domain and is easy to talk with").

So I am not saying "programmers have, absolutely, been turned into a commodity."

I am saying that things tend in that direction and, returning to the question Ehud posed: the discursive tendency to talk about dominant programming languages is primarily about commodifying programmers.

The modern "wanted ad" refutes commodification

For those old enough to remember the want ads of the 80s, they would say things like "programmer wanted". Now you see a laundry list of 20 specific technologies that an organization wants.

wanted ads prove commodification

Wait a minute.... you say:

For those old enough to remember the want ads of the 80s, they would say things like "programmer wanted". Now you see a laundry list of 20 specific technologies that an organization wants.

That helps to demonstrate, not refute commodification.

Car tires come in many different sizes with a variety of features that can be mixed and matched. The same thing for wood screws, eggs, crude oil, and pads of paper. These are all commodities.

The standardization of features characterizes a commodity market. Specialization is not incompatible with commodification so long as the specialization is standardized.

I think you misunderstood

If a want ad just says "programmer wanted", then they're saying that any type of programmer can come aboard for any other type of programming situation...any old cog.

If a want ad has a laundry list of 20 technologies, then the organization putting out that ad isn't looking for any programmer "cog", but a programmer that specializes in a certain development tools stack.

winner takes all, eventually

I remember reading that it doesn't take a war for Homo sapiens to render Homo neanderthalensis extinct. If Homo sapiens have even a 1% survival advantage, that becomes roughly a 21000:1 population advantage over just 1000 generations. Further, there is the possibility to absorb the competition, e.g. through cross-breeding.

How many distinct programming languages do you expect we'll be using in 1000 years?

We happen to be living in a period of intense flux. It's an exciting time to be a PL designer.

Video online, too

oscon.com has the video online, too.

Go is more fun than the presentations suggest

Go's creators do have some rather unorthodox views. For instance, the language does not reflect immutability or read-only access in the type system (or ownership of mutable objects), even though it is aimed at writing parallel programs. The shared heap is difficult to collect efficiently because the object model allows pointers to subobjects which will keep the entire object live. The claim that a static type system is required for performance is increasingly at odds with reality, especially since some interesting Go facilities rely on reflection, which acts as an optimization barrier for static compilation. Using error values instead of exceptions makes exploratory programming more difficult and hampers beginners who regularly ask why their programs do not work, and the answer lies within an ignored error value.
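
To illustrate that last point about ignored error values, here is a hedged sketch (the file name is made up): both errors below are silently discarded, and the program just prints an empty string with no hint of what went wrong.

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, _ := os.Open("missing.txt") // error silently discarded; f is nil on failure
        r := bufio.NewReader(f)
        line, _ := r.ReadString('\n') // the read fails too, and that error is dropped as well
        fmt.Printf("first line: %q\n", line) // prints "" with no explanation
    }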

That being said, writing concrete Go programs is much more fun for me than I expected, considering all those shortcomings. The language just feels right to me, especially for low-level tasks where attention to detail such as error handling is important.

Exceptions are overrated.

I'm with you on several of the above points, but I think exceptions are vastly overrated. My experience of them is that most coders use them to ignore errors, or to avoid having to track down specific errors to the operations that cause them, or to make really opaque non-local jumps almost as bad as INTERCAL's COME FROM statement, rather than for tracking down and fixing errors.

If I knew nothing else about two competing packages except that A was written in a language with exceptions and B was written in a language with error values, that neither was written within the last two years, and that they'd both been in use by approximately equal-sized large communities for an approximately equal length of time, I would pick B as likely to contain fewer errors, simply due to the inability of lazy coders to paper them over and the inability of lazy management to compel programmers to do so.

re: exceptions overrated

I agree that exceptions are overrated. They are problematic in the context of parallelism, futures or promises, and inversion-of-control frameworks. They are difficult to manage or functionally transform as they cross module boundaries. It takes resumable exceptions to separate the error recovery policy from the error site. Burying one exception when another is raised by a handler or finalizer is ugly and composes poorly.

But error codes, easily dropped and forgotten, are not better.

I favor a few idiomatic approaches to error handling, albeit not built into the language. (a) Pass a handler, which can both report the error and provide some advice on how to recover from it. (b) Via some combination of syntax and type system, we might also force developers to handle the error value - they could explicitly ignore it, but at least that would be more visible in a code review or search. A rough sketch of option (a) follows.
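
Here is a rough sketch of option (a) in Go, with all names invented: the caller passes a handler that both reports the failure and supplies a fallback, so the recovery policy stays out of the error site.

    package main

    import (
        "fmt"
        "strconv"
    )

    // parseAll converts each string to an int. On failure it asks the supplied
    // handler what to do; the handler reports the error and returns a fallback.
    func parseAll(inputs []string, onErr func(s string, err error) int) []int {
        out := make([]int, 0, len(inputs))
        for _, s := range inputs {
            n, err := strconv.Atoi(s)
            if err != nil {
                n = onErr(s, err) // recovery policy is decided by the caller
            }
            out = append(out, n)
        }
        return out
    }

    func main() {
        nums := parseAll([]string{"1", "oops", "3"}, func(s string, err error) int {
            fmt.Printf("could not parse %q: %v; using 0\n", s, err)
            return 0
        })
        fmt.Println(nums) // [1 0 3]
    }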