Less is exponentially more: Rob Pike on Go, and why C++ programmers aren't flocking to it

I was asked a few weeks ago, "What was the biggest surprise you encountered rolling out Go?" I knew the answer instantly: Although we expected C++ programmers to see Go as an alternative, instead most Go programmers come from languages like Python and Ruby. Very few come from C++.

Starting from there, an interesting read on the design of Go, including influences and core philosophy.

Here's the link: Less is exponentially more.


Pike channels Dijkstra

Finally, although the subject is not a pleasant one, I must mention PL/I, a programming language for which the defining documentation is of a frightening size and complexity. Using PL/I must be like flying a plane with 7,000 buttons, switches, and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language---our basic tool, mind you!---already escapes our intellectual control. And if I have to describe the influence PL/I can have on its users, the closest metaphor that comes to my mind is that of a drug.

I remember from a symposium on higher level programming languages a lecture given in defense of PL/I by a man who described himself as one of its devoted users. But within a one-hour lecture in praise of PL/I, he managed to ask for the addition of about 50 new "features," little supposing that the main source of his problems could very well be that it contained already far too many "features." The speaker displayed all the depressing symptoms of addiction, reduced as he was to the mental stagnation in which he could only ask for more, more, more.... When FORTRAN has been called an infantile disorder, full PL/I, with its growth characteristics of a dangerous tumor, could turn out to be a fatal disease.

—Dijkstra, The Humble Programmer, 1972

And yet, seems like this

And yet, it seems this kind of plane, with its 7,000 buttons, switches, and handles, is still somewhat wanted, and still hangs around us:

even in .NET

Who knows. Will we even get new FP features in PL/I some day?

The Philistines

Early in the rollout of Go I was told by someone that he could not imagine working in a language without generic types. As I have reported elsewhere, I found that an odd remark.

To be fair he was probably saying in his own way that he really liked what the STL does for him in C++. For the purpose of argument, though, let's take his claim at face value.

What it says is that he finds writing containers like lists of ints and maps of strings an unbearable burden. I find that an odd claim. I spend very little of my programming time struggling with those issues, even in languages without generic types.

But more important, what it says is that types are the way to lift that burden. Types. Not polymorphic functions or language primitives or helpers of other kinds, but types.

That's the detail that sticks with me.

Programmers who come to Go from C++ and Java miss the idea of programming with types, particularly inheritance and subclassing and all that. Perhaps I'm a philistine about types but I've never found that model particularly expressive.

My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy. And you know what? Type hierarchies are just taxonomy. You need to decide what piece goes in what box, every type's parent, whether A inherits from B or B from A. Is a sortable array an array that sorts or a sorter represented by an array? If you believe that types address all design issues you must make that decision.

I believe that's a preposterous way to think about programming. What matters isn't the ancestor relations between things but what they can do for you.

In fact the text is mostly about the perils of nominal typing, rather than the perils of typing.
There is also something about how having a garbage collector is much better than RAII (isn't that a bit cheesy?). Finally, he suggests that, if C++ programmers don't adopt Go, maybe it's because they spent so much effort trying to understand the language that they don't want to let go of that investment.

Meh.

Some truths

There was quite a bit in the text that I'd agree with, including the reason he gives for C++ programmers not liking Go (although IMHO he misses the other half of the reason, which is that Go doesn't improve over C enough to really make it worthwhile for many).

But the part you quote struck me as confused as well. That criticism indeed is about nominal classes and subclass hierarchies, not about types or parametric polymorphism.

Also, that "someone" mentioned in the first paragraph is right. I have yet to see a safe typed language that got away with not having something akin to parametric polymorphism eventually. In fact, I know people who work with Go on a daily basis and think it sucks big time, a major reason being the constant need to downcast from interface{}.
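
For readers who haven't run into this: below is a minimal sketch of the interface{} round-trip being complained about, using a hypothetical Stack container (the type and names are mine, not from the thread). Without generics, the container stores interface{}, and the caller has to type-assert ("downcast") on the way out.

    package main

    import "fmt"

    // Stack is a hypothetical container; without generics it has to store interface{}.
    type Stack struct {
        items []interface{}
    }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v
    }

    func main() {
        var s Stack
        s.Push(42)
        // The caller must type-assert to recover the static type it pushed in.
        n, ok := s.Pop().(int)
        fmt.Println(n, ok) // 42 true
    }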

Go might have generics in future

I have yet to see a safe typed language that got away with not having something akin to parametric polymorphism eventually. In fact, I know people who work with Go on a daily basis and think it sucks big time, a major reason being the constant need to downcast from interface{}.

I've spent about 100 hours rewriting some Java code in Go and that's true (that you are frequently forced to downcast from interface{}).
OTOH, it's not a major issue for me, just like it wasn't a major issue for me in Java 1.

The Go FAQ recognizes the issue and says that generics may be added at some point...
http://golang.org/doc/go_faq.html#generics

Considering how Java generics turned out I am fine with the Go team taking the time to think about a proper design for generics in Go :-)

Wrong conclusion

Java is a good example, but you are drawing exactly the wrong conclusion. The reason Java generics turned out sub-par is mainly that they were added too late! They had to work around all kinds of language design and infrastructure legacy at that point. Compare with C#, where generics were put into the language early on and integrate much more nicely. That is, generics should be designed into a language from the very beginning, not added as an afterthought.

Actually we should speak of

Actually we should speak of .NET generics. More specifically .NET 2.0. That's the version of the runtime that introduced explicit support for them.

.NET 2.0 generics aren't erased, and an oft-claimed reason .NET/C# does better than Java here is precisely this design choice: the runtime reifies both generic types and their instantiations, with reflection support, on behalf of the .NET language compilers and interpreters, relieving them of the burden of making their own (possibly conflicting) choices. The whole point of .NET from the start was not "one language to rule them all" (e.g., C#) but rather "one runtime to serve them all" ("them" being the .NET languages). Compare this to Java, which, from its early design, paid little attention to having the JVM serve any purpose other than the Java language itself. Or maybe I missed something.

I don't know whether the non-erasure argument really holds (I'm no theoretician), but I agree that .NET generics were the subject of thorough research and usability work starting soon after the .NET 1.0 release (circa 2001), and were not added "at the last minute" when .NET 2.0 shipped five years later. In fact, MSR practically never stopped designing and implementing .NET 2.0 generics, as an almost exclusive focus, between the 1.0 release and the first v2 betas in late 2004.

It's known that they couldn't use HM type inference for C# (they had to craft their own algorithm), though they surely wished they could have. In any case C#, thanks in part to .NET 2.0 generics, has seen several rather "successful" (always a bit subjective) incremental enhancements since C# 2: C# 3's LINQ comprehension syntax and extension methods being among the most famous. C#/.NET 2 generics probably played as nicely as they possibly could to enable and integrate with these additions to the language, which by the way aren't strictly C#-specific: even that weird VB.NET (just IMO, a language I never liked) could profit from some of them.

It seems to have been seamless enough for them to build on top of .NET 2, at least in those respects.

.NET isn't really multi-language

Actually, although Microsoft always claimed that .NET is language independent, they never really delivered on that. I would argue that all those popular .NET languages are either variations of C# (especially VB), or foreign languages with three quarters of C#'s object system bolted on (e.g. F#).

Back in '99, when .NET was still called Lightning, there was an attempt by MS to get other languages on board, and encourage language groups from around the world to port their languages to the platform. Some tried, but it did not really work out all that great, both because the platform wasn't nearly as universal as they had claimed and because the core development team had other priorities than making it so.

Well, ok, up to some extent

Well, OK, up to some extent, beyond which one can always wish to go, I suppose (me included, yes).

Question, though: which other runtime and/or platform that we can use today is more language-independent than the CLR + CTS?

I'm interested.

None

None. Except assembly code (or maybe C--). Having been involved in a related project a long time ago (and also in the aforementioned MS initiative), my conclusion is that language-independent VMs don't work. At least not on a level high enough to allow useful cross-language interoperability. Impedance mismatches between different languages are just too significant, and even small ones are already enough to preclude anything that goes beyond simple exchange of primitive data.

Integrated Language Environment for IBMi

Perhaps not popular, but the Integrated Language Environment (ILE) for IBMi is an example of an environment that allowed multiple languages to coexist and call between each other in a fairly seamless manner. This is a biased opinion, though, as I worked on the C and C++ compilers for ILE in a previous life.

Descriptions and documentation can be found here:

IBM Information Center

Search for "ILE Concepts."

I would argue that all those

I would argue that all those popular .NET languages are either variations of C# or [...] with three quarters of [its] object system

To be honest, I must say I've made pretty much the same observation myself, actually.

Two things, though. First, this strong C# flavor/influence found in today's .NET Babel Tower is maybe a consequence of C# being somehow "naturally" the canonical language first (arbitrarily) chosen by MS to reflect the set of new features of its underlying platform. It has happened three times already: .NET 1, 2, and 4, with the corresponding C# "ambassador" versions.

C# is seemingly the privileged .NET flag bearer. I'd venture that .NET language designers, consciously or not, have a hard time detaching themselves from C#'s way of phrasing the CLR's capabilities to the programmer (in a human-readable source syntax).

Second, there are two notable exceptions to this point of ours, however.

The first is the example of Eiffel (okay, yes, "just" yet another OOPL), which, well before .NET or even Java *, wasn't afraid to investigate really non-trivial matters all at once (multiple inheritance, generics, feature renaming, operator overloading, etc., etc.).
And I remember reading a pretty enthusiastic Bertrand Meyer in his article on "The significance of .NET".

Meyer's baby is Eiffel, more than anything else. He never wrote anything similar about the JVM, for one, as far as I know.
My impression is that he was writing mainly from a language designer's and implementor's point of view. I don't think Meyer ever cared much about C#, for instance (though he stayed polite about it :)

The second is even more interesting though it's not a general purpose language:

XSLT in .NET 2 was heavily refactored, wait, redesigned and reimplemented actually, to compile to IL (compiled on the fly, then cached) mostly to address performance concerns from the first try they made at it in .NET 1 (which was basically "just" a very sophisticated tree visitor + rewriter pair written in C# only).

As you know, even though it's a DSL, XSLT is considered much more functional than anything else it could have been.

I agree .NET is a bit disappointing as far as the inventiveness of languages built on top of it goes, but there are still interesting uses of it... from time to time.

EDIT
* let's recall that Eiffel's design inception/effort dates back... to the mid 80s; an oft-forgotten fact.

Adding generics later causes problems.


"generics should be designed into a language from the very beginning, and not as an afterthought."

Er, yes. Retrofitting generics into C++ made the language far more complex. Before generics, C++ had too much unchecked downcasting.

(The other classic language design mistake is not having a Boolean type. One is usually retrofitted later, and the semantics are usually slightly off for reasons of backwards compatibility. This happened to Python, C, LISP, and FORTRAN.)

The Go FAQ recognizes the


The Go FAQ recognizes the issue and says that generics may be added at some point...

That's not a good sign. The retrofit of generics to C++ was preceded by a denial that they were necessary, and followed by attempts to use them as a compile-time programming language. Neither was pleasant.

There are languages that don't need generics, like Python. And there are languages that had them from the beginning. It's better to pick one of those alternatives.

(The other classic mistake in programming language design is having to retrofit a boolean type. That usually results in weird relational operator semantics.)

Not enough improvement

Google was silly to let Pike design a language. No sum types or polymorphism? You have to be joking. Pandering to the object-oriented paradigm? There's some good stuff in there, but please, can't we get designers that at least *respect* theory, even if they're not theoreticians themselves?

The fact is, whilst it's arcane, messy, and broken, C++ has more than mere data polymorphism: it has some capability to handle functorial polymorphism. It's not up to Haskell's standard, but it's better at it than OCaml.

I would say that the remark that Go doesn't go far enough is a better description of the problems in D, which does, in fact, improve on C++... just not enough to bother with.

I've re-read more

I've re-read Pike's points more attentively. He shouldn't be surprised, indeed.

Well, I used to be a C++ programmer in my early career. Not out of choice but out of necessity: that was my company's and team's prior choice.

If I were still busy with C++ today, I can easily see why Go would fail to convince me that I need it.

On his point about type taxonomies (that things shouldn't be only about that):

Let's not be hypocrites. I don't know any seasoned C++ programmer (or Java, or just "OO" programmer, for that matter) who is not well aware of the limitations and bias in OO languages' way of seeing the world (that is, the program's problem domain) as "taxonomies".

Of course they do. Of course they know that modeling the software's problem domain as hierarchies of objects is intrinsically limited and biased, if only because, before anything else, it's mostly arbitrary.

Or if they don't, then I'm sorry, but "they ain't seasoned", precisely. They're still in their junior years and haven't bitten enough of the OO bullet, or they have vastly misunderstood what OO is about. It is not about finding the one faithful "taxonomy" for the given problem, because no such thing will survive beyond the first edit of the documentation once you allow, upfront, your object types and modules to be both closed and open.

Finding the golden taxonomy is not the premise of OO design. The premises are weaker: pick any taxonomy you see fit, then make sure you keep things manageable and under control, with no hidden contracts and, worse still, no violated contracts. Also, try to learn from the FP folks and keep state in general, and state mutation in particular, as controlled and explicit as you possibly can. Even better, just get rid of state and state mutation whenever you can afford to (granted, easier said than done: end-user requirements are rarely directly translatable into pure functions).

But then, you'll hopefully spend a little less time debugging whatever OO's implicit design bias in modeling the world (sometimes telling or assuming too much about a system's state) forces upon you anyway.

Yet, he writes:

If C++ and Java are about type hierarchies and the taxonomy of types, Go is about composition.

Okay, then, but which kind of composition are we talking about here?

Pike seems to view composition as something that should only be about composing contracts. So, exit behavior reuse? Why so?

That's rather bad news for C++ programmers who do like the freedom to think in terms of well-defined behavior reuse sometimes.

So, is it that difficult to achieve? Even if we surrender on the issues introduced by support for multiple inheritance, or even repeated inheritance, and stick to a simpler, single implementation-inheritance scheme, I would argue that it's not.

Maybe it was a somewhat overlooked discipline issue in the early days of, say, Simula 67. But we know a little better today. Some languages have already scouted that territory.

I can see where Pike is coming from and where he wants to go re: more language-driven discipline through fewer features, but...

I know a language that lets you, if you really care about part of a class's members not being misused or inherited by clients or heirs, keep that part hidden in the class, and lets anyone who really wants to use that part of the implementation (and knows about it beforehand) do so only through a disciplined contract.

They have to make it very clear: they can only profit from that part of the class's behavior via an interface.

It's not used frequently, but it's sometimes very useful, very specifically for this purpose and this sort of concern, without requiring you to make more involved member-visibility choices like "private vs. public vs. protected vs. internal".

And it's called, well, "explicit interface implementation".

Easy to look up.

No form of reuse

From the article...

If C++ and Java are about type hierarchies and the taxonomy of types, Go is about composition.

Actually, my biggest problem with Go is its lack of support for composition beyond embedding.

An object-oriented type hierarchy gets me 2 things...
...polymorphism
...reuse through inheritance

Go is not object-oriented. Go provides polymorphism via interfaces. However, it has no good form of reuse. If I want to create a type that implements a particular interface, I must implement every method in the interface, for every type that implements that interface. There is no way to 'inherit' an implementation from another object, nor to 'delegate' to another object that implements the interface. Go's only method of 'reuse' is embedding, which is functional enough but a PITA in practice.
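
To make the embedding point concrete, here is a minimal sketch (the type names are made up, not the commenter's): embedding a struct gives the outer type the embedded type's methods for free, which is as close as Go gets to implementation reuse.

    package main

    import "fmt"

    // Greeter is the interface we want to satisfy.
    type Greeter interface {
        Greet() string
    }

    // baseGreeter provides a reusable implementation of Greet.
    type baseGreeter struct{ name string }

    func (b baseGreeter) Greet() string { return "hello from " + b.name }

    // LoudGreeter reuses baseGreeter's Greet via embedding; it satisfies
    // Greeter without re-implementing the method itself.
    type LoudGreeter struct {
        baseGreeter
    }

    func main() {
        var g Greeter = LoudGreeter{baseGreeter{name: "embedded"}}
        fmt.Println(g.Greet()) // hello from embedded
    }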

BTW - I like Go...

I feel like I should not just post a negative comment about Go without mentioning that I really like Go, and I will definitely be using it more in the future.

I have a lot of 'enterprisey' Java experience. I appreciate Java's static typing but not its verbosity.
I find the combination of closures, goroutines, and type inference in Go to be thrilling.
Often, after writing some routine in Go, I marvel at the conciseness and clarity of the resulting code.

Not type inference

Sorry for pedantic ranting about a pet peeve of mine, but Go has no type inference. Nor does C++11. What some language communities sell as "type inference" these days is trivial bottom-up type derivation for simple expressions. You talk about type inference when type information can be derived non-locally, and from use sites, not just from definition sites. The litmus test is whether you can omit the parameter types from a function, for example.
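
To make the distinction concrete, a minimal Go example: the variable's type is derived locally from the right-hand side, but parameter and result types can never be omitted, which is the litmus test above.

    package main

    import "fmt"

    // Parameter and result types are mandatory; Go will not infer them
    // from call sites.
    func double(n int) int { return n * 2 }

    func main() {
        x := 21 // the type of x (int) is derived locally from the literal
        fmt.Println(double(x))
    }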

Type arguments

Would you consider inference of type arguments in parametric polymorphism to be type inference? I do. In Virgil you can't omit types from parameter declarations but you can omit them from uses of parameterized methods and classes, including applications and partial applications. The algorithm to infer the type arguments is much like unification.

But I agree that Go is not object-oriented (they have once in a while described it as such), nor does it have type inference (they often claim this).

Valid point

Yes, valid point. When you have parametric polymorphism, more is going on. Go does not have that, but C++ does (provided you are willing to count templates). For clarity, I'd still avoid calling it "type inference", though -- it's just inferring the type instantiation, which is only a small part of the full thing, and a very local affair. The C++ crowd calls it "template argument deduction", which seems adequate.

local type inference

local type inference is a feature in its own right.

Yes

Right, and it is used in languages like Scala or (to a much lesser extent) C#. Both pass the litmus test I mentioned, at least in simple cases.

Unfortunately C# only allows

Unfortunately C# only allows this feature to be used for lambda expressions, and not for methods.

Regarding polymorphism via interfaces....

Go doesn't really internalize type hierarchies as such. All of the relations between types are boiled out by the time it emits opcodes. Its polymorphic routines are called directly. Although they may be identically named, they actually have no code in common, not even a dispatcher.

It was someone's design decision to have that property of the runtime model reflected in the source code. I respect that, although I feel your pain. I think that interfaces are, to some extent, semantically "cleaner" or "simpler" than inheritance. But when not supported by implementation inheritance they are inevitably more work.

Inheritance of implementation and the resulting labor savings is one of the strongest arguments in favor of type subclassing. What lazy programmers do for its sake when there actually isn't a clear "isa" relationship between two (or more) types is one of the strongest arguments against it.

I think that test code to assure that different implementations of an interface actually mean the same thing, should be "inherited" from interfaces. For example, I think a type with an "addition" interface should inherit (from the interface!) a 'test_addition' procedure which assures that the "addition" implemented by something claiming to use the interface is in fact associative and commutative. I know what you may be thinking, but vector addition on the surface of a sphere, and other such things, do not actually have the same semantics that we think of as being addition. And something that implements a "container" interface ought to inherit a "test_container" procedure that makes sure of things like an empty container which undergoes an insertion becomes a container with one member, and that when a container has had some known element added to it, that element must appear if its contents are listed and so on.

Of course, if there were a way to inherit code from interfaces, people would insist on defining the implementations in the interfaces. Which, depending on your POV, might defeat the point of having interfaces in the first place.
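
In today's Go there is no way to hang such tests off the interface itself, but the idea can be approximated with a free checker function that every implementation's test suite calls; a rough sketch (the Adder interface and the helper are hypothetical, not an existing API):

    package addercheck

    import "testing"

    // Adder is a hypothetical interface whose intended contract includes
    // commutativity and associativity of Add.
    type Adder interface {
        Add(other Adder) Adder
        Equal(other Adder) bool
    }

    // CheckAddition is the "inherited" test in spirit: each implementation's
    // test calls it with sample values of that implementation.
    func CheckAddition(t *testing.T, a, b, c Adder) {
        if !a.Add(b).Equal(b.Add(a)) {
            t.Error("Add is not commutative")
        }
        if !a.Add(b).Add(c).Equal(a.Add(b.Add(c))) {
            t.Error("Add is not associative")
        }
    }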

Ray

Anyone know what this means?

"expressions at the top level of the file"

Yes

Go allows some non-constant expressions outside of function declarations to initialize package scoped variables.
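
A minimal example of what is meant, for the curious: package-scoped variables may be initialized with arbitrary non-constant expressions, which are evaluated at program start-up, before main runs.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // These initializers are ordinary expressions, not constants; they run
    // once, before main.
    var startTime = time.Now()
    var hostname, hostErr = os.Hostname()

    func main() {
        fmt.Println(startTime, hostname, hostErr)
    }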

Thanks

Thanks

Go's prospects look upbeat to me

I hesitate to comment on what appeals to C++ coders, since few of them ever saw sanity in my appreciation of Lisp and Smalltalk. But I know something about typical psychology, which often seems motivated by a desire to use all possible features so they can go on a resume to signal competence in future job searches. For some, social signaling seems more important than technical issues -- but this is necessarily conjecture.

(I'm not usefully representative of other C++ programmers, but I worked closely with many from 1990 to 2010. About two years ago I was asked to "please stop writing in C++" by C folks where I work who freaked at the sight of C++ when their arguments tripped asserts in my code. I taught myself C in 1983, then later learned about C++ from a software engineering magazine around 1988 when trawling library stacks. Then I found Stroustrup's first book in a store; later I had a chance to play with cfront before starting C++ full time at Apple in 1990. I never switched to Java when other C++ coders did, and coded 95% in C++ for 20 years including hobby projects. But I don't think of myself as an expert for various reasons, like my choice of subset.)

When Java was new, friends asked me about it, since I liked languages. So I said, "Sounds like a good idea." I explained why I thought GC and a VM were good ideas. (Moderate abstraction is really useful, and doesn't need to kill performance, etc.) Basically only hot spots need be really fast; elsewhere organization seems better. Then it only took a couple years before I was a backwards cretin who still used C++. Okay, so fashion often looks a lot like religious conversion in software.

I expect Go to do well too; the minimalist approach is sound. But religious conversion of existing zealots is a lot to ask without a hook differentiating Go from other options. For example, GC cannot sell C++ coders on Go when they could have chosen Java earlier, etc. I think raw material to sell Go is there, but technical wins may not be what folks care about first. Popular opinion and job opportunities matter a lot. When padding a resume with Go experience improves odds of landing jobs, it will take off, but not necessarily before then unless you clean the clocks of competitors in one app space or another. (I feel embarrassed saying the obvious.)

I have trouble picking out a specific part of Rob's post for analysis. But this part has a couple points I nearly agree with:

C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way.

The issue, then, is that Go's success would contradict their world view.

Those are two different issues, only somewhat related to one another. First, sometimes a coder needs exquisite control. Second, most people choose a world view that flatters themselves -- usually, not just sometimes -- and some coders might think, "I'm only awesome because I wield exquisite control," and that would prevent them from considering a choice with less control.

The world view factor is always directly relevant if you want to persuade anyone to change behavior. To get religious conversion, for example, you need a thesis like this one: you will rock when you adopt X and other folks will suck when they don't. (Corollary: you are so deliciously smart you figured out X is the best choice, and others are such morons they didn't.) If you can still feel clean after selling that pitch, it's known to work if you really need adoption.

Inroads when control is needed will be harder. I was only able to use C++ embedded inside C for a while because I knew how to expose a C api anyway, and then avoid all standard C++ libraries so nothing from C++ had to be linked. (Then why did I use C++? I was more likely to have zero bugs once code actually compiled, if typeful coding style embodied some of my design's informal proof of why it worked. And, code was more terse.)

Why was I asked to stop writing in C++? Two reasons as far as I know. First, C++ is "evil" (cf those rotten C++ infidels). Second, it was impenetrable to folks who needed to debug their C layer, even when style was bare-bones C with classes. That's the reason you need to worry about: comprehension based on shared mental models. You can't screw over team members, it's the rule. That might suggest an evangelism tactic.

What really motivates many people is this: dramatization of self image. To get them to adopt a tool you offer, it must afford an advantageous change of self image without incurring a drawback, especially when loss has bigger emotional impact than gain.

Ah, a wisdom comment of

Ah, a wise comment of yours I can relate to, a lot.

The success of a newly announced and advertised language does indeed depend a lot on social factors, some of which are largely out of reach of any objective, or even just rational, argument.

A new language really needs to do better and blindingly demonstrate its potential or observable effectiveness at today's problems.

Let's not forget it simply takes a while for a language's full potential to be understood. Look at JavaScript, with all its flaws but also its qualities, which most people misused for a decade before a couple of simple patterns were unearthed to keep things written in it cleaner and more manageable.

An oft-heard argument is that it's somehow wrong for a language to have a whole bunch of features that are rarely or very rarely used. But why? Of course, if out of 40 keywords you end up using fewer than 10 most of the time, something is likely clumsy about your language choice in the first place. But otherwise, what do we do? We program. And just as we don't need anywhere near the full set of English nouns and verbs even for vastly different communications, why would I care that I don't use yield in C# that often, let alone other specifics such as the new() constraint on generics? Most of the time, our design or implementation is fine with a more or less humble subset of the full language.

As an early adopter of C# back in late 2000, when it was still codenamed NGWS, I gave myself three full days to study that unstable, experimental compiler and its libraries, even tweaking my then-Windows 98 in ugly ways just to be able to play with it on the command line. I didn't care about MS's marketing pitch on how it would bring a revolution to the web, etc., blah blah.

All I wanted to see was what Anders had been busy with after his "betrayal", of sorts, of the Borland Delphi folks three years earlier. I wanted to know whether the rumors about a better design than Java, the language, were true. So I picked everything I didn't like or had found clumsy in Java and looked at how C# would see things.

I liked what I saw. I had no idea about, and no specific hopes for, .NET otherwise. But if C# ended up being in demand, I knew at least that I would enjoy it and feel safer with it than with Java, which I, too, was maybe seen as a bit of a cretin for not finding "wonderful".

The difficult part is having the language be innovative enough that even subsets of it are already more useful than the competition. Then, yes, maybe you have a chance to overcome the resistance of the social factors. But it's never immediate anyway.

thanks, I can only offer funny ideas

I don't want you to get the idea I'll never respond. I just don't post unless I have something to add. Responding is flattery enough; remarks about wisdom make me wince. (If you write clearly, people like what you say, even when commonplace. It's like a weird inverse effect of the world view phenomenon. Often folks refuse to understand unless they agree; so if you can get them to understand, they might agree out of habitual association.)

YSharp: A new language really needs to do better and blindingly demonstrate its potential or observable effectiveness at today's problems.

Or ... you can probably get one to take off with funny false social proof. I'm imagining a television commercial for Go featuring incredibly competent Australian commandos from World War II, in a plane returning from a mission, who praise Go as a blinding 'chutist (as if Go was a person, of course). It would be weird, and therefore appeal to geeks. Extremely dry and reserved praise would work best, and be slightly funnier.

(Or you could write Go tutorials in T.H. White style, as conversations between Merlin and Arthur as if pulled from the pages of The Once and Future King, suggesting the idea Arthur can't be effective as king without a jolt of Go under his belt. A little creativity goes a long way, and alleviates the boredom of tech documents. I basically like all anachronistic jokes based on the idea programming has been around for 100's of years, to the extent of turning the tide in ancient battles, like English long bows.)

I read all your remarks, except the ones very heavy with densely packed acronym references to .NET and such, which taste like politics to me. For example, suppose I wrote a post that mentioned several American president names, with a few references to Stalin and Marx, with occasional references to the Constitution. Your internal Bayesian spam filter would route it directly into the nutball category. You wouldn't read it unless you suspected I was being funny. Similarly, you don't want folks to skip your posts as sounding like "Microsoft stuff" if they suspect it's easy to categorize.

Lol ! Did I give you the

Lol!

Did I give you the impression I was writing a commercial script for Microsoft? Sorry if I did!

See, I was just sharing my experience.

It just so happens I've made a living with the Microsoft stack for 16 or so years now. And you know what? I actually don't like it; at times I downright hate most of it.

I'm not married to Microsoft, I have a wonderful wife, thank you. But they feed me, indirectly: my employers have enough money to spend at MS.

Over 16 years, going on 17, on various more or less interesting projects, I have fought with:

the DOS Protected Mode Interface (DPMI), the DDK (Device Driver Kit), then 16-bit/32-bit thunking tricks under W95 and NT4, then COM, DCOM, COM+, .NET, SQL Server 7 and 2000, up until today's SQL 2008, you name it.

So yeah, I know that Windows beast quite a bit, and people hire me to clean up or develop code hopefully a little better.

Truly, there are very, very few things I like on the MS stack. This thing is just a pain and a mess and a waste of memory and disk space. But it's so easy to add to the mess when you hire junior guys that at some point, sometimes, you know...

They need folks like us to step back a little, and... yeah, KISS.

EDIT
Let's be clearer, shall we?

In 17 years of struggling with the MS stuff for my clients or employers, .NET and C# together are the "least bad" thing I've seen from Microsoft, before or since.

And that was 12 years ago. It's getting old already. Versions 2, 3, and 4 didn't surprise me after that.

So pretty much everything else, IMO, given the company's size and the customer base already in place, has been either predictable, boring, warmed over, hyped up, average/mediocre, or downright painful. I still hear about folks spending money and time on SharePoint. Even when its v1 was released I could tell how much overkill this thing promised to be. But that's just me; maybe I'm stupid and a SharePoint fan can prove me wrong. I'm just saying that, from what I've seen companies do with SharePoint so far, it isn't required, unless you're BofA or something and you've decided NOT to look at simpler, nicer CMSes that would do just as well. But you know, sheeple.

Let's just hope it was fun to code for the MS SDEs, at least.

So, their problem at MS is their dependency on their legacy: they need to make sure they can keep milking it from version N to N+1. That's their main problem, like any other big vendor's. Add scale to this, and you understand that, unless they are stupid, they need from time to time to put enough money into research to at least try to do a little better. That's what happened with .NET, I think.

Otherwise, like dmbarbour on these forums, I have rather radical ideas on what I believe we should head towards instead, with languages. But unlike him, I have great difficulty expressing them simply enough for people to understand. As you can see, my attempts so far have largely sucked, but I still trust the idea, my intuition. It's a slow process. It's fairly removed from (read: foreign to) the current interests, which all seem especially focused on opposing paradigms, or type systems, or both. Which are interesting topics too, I suppose.

I guess I just need more time to put it more simply and accessibly.

A little creativity goes a

A little creativity goes a long way and alleviates the boredom of tech documents.

Well, I hear you fellow coder. :)

Whether or not I agree with part or all of your points, I love your amusing writing style in your comments so far.

I wish I could do the same, so just know that I much enjoy it. These are serious topics here (I mean, it's about PLs, right? And the programs we write, right? And some control people's lives... or, even more serious, money! ;) and also lots of nitpicking, etc.

But man, what is more nitpicking than a compiler anyway? And that's what we use 5 days out of 7, 12 hours out of 24. I guess we're doomed to be as biased in that arguing dimension as politicians can be about, say, their (official) views on morality. If that makes any sense.

Anyway, enough with unfair labeling.

How about this. Remember all that craze about UML in the late 90s, for instance? Here's another funny take I loved from B.M. on the topic:

The positive spin

I read Pike's take shared by the OP here.

I can agree with him that less can sometimes be a lot more.

But I feel that language designers often miss the opportunity to use satire, as Meyer did, to contrast their language baby's assets more effectively with the state of the art, and to make their point in a way that is both pleasant and informative.

You, as a practitioner, for example, have no issue using casual stories and amusing metaphors to help make your points (it's indeed something I like to encounter, and still too rare).

Why would they? It's effective for getting my attention anyway. :)

VB.NET is not just C# with different syntax

"People often snort at Visual Basic, either because they still have an outdated idea of “Basic” in mind, or because they think that Visual Basic.NET is just C# with slightly more verbose syntax. Nothing is further from the truth. [...] Visual Basic is unique in that it allows static typing where possible and dynamic typing where necessary. When the receiver of a member-access expression has static type Object, member resolution is phase-shifted to runtime since at that point the dynamic type of the receiver has become its static type. [...] The Visual Basic compiler inserts not just upcasts, but downcasts too, automatically. [...] Late binding in Visual Basic implements a form of multimethods since late calls are resolved based on the dynamic types of all their arguments." —Erik Meijer, "Confessions of a Used Programming Language Salesman" (2007)

Fair and actually correct in

Fair, and actually correct in my opinion. Both languages profit from many common features coming from the CLR itself (duh), but they do make different design choices re: type checking and the related error reporting. Not only do I like C#'s syntax better, but also its way of guessing as little as possible about the programmer's intent (via their respective static typing).

Lippert actually wrote several posts on his blog and on Stack Overflow about the *different* tradeoff choices made in the VB.NET vs. C# camps.

AFAIC, if it were *only* about syntax, I'd never have touched Delphi after C++, or C++ after Eiffel in college, or JavaScript on the command line after dumb batch-file syntax, etc., etc.

I'm pretty verbose in natural language, but I prefer more and more terseness in the languages I program with, as I get older.

As evidence: I'm even trying to learn more Haskell in my spare time these days. Better late than never, as they say. :)

No longer true, C# supports

No longer true, C# supports this as well with its dynamic type. C# and VB.NET are incredibly similar.

Hmm... Debatable. There are

Hmm... Debatable.

There are still a number of oddities that are VB.NET-specific. This is one of them that some VB.NET fans can point to C# as not having.

Note I'm just playing devil's advocate here. I'm not using VB.NET, either for work or for hobby projects.

I'd sum it up this way: at runtime, of course, the two languages' type systems are indeed the same, obviously, since it's the same CLR under the hood. But statically, VB.NET and C# still have non-identical ways of serving the programmer's intent, and not only shallow differences of syntax.

He misses the point

The main reason C++ developers are not jumping to Go is that, for the most part, Go is a step backwards in abstractions.

C++ developers wishing to leave C++ are better off jumping to JVM or .NET languages, or to D, than to Go.

Go drops, or lacks, many of the things we take for granted:

- exceptions;
- enumerations;
- generic types;
- direct use of OS APIs, without the need to write wrappers;
- currently only static compilation is available;
- due to the static-compilation-only model, there are issues with third-party code;
- no support for meta-programming;
- rich set of available libraries;
- the PR about goroutines and channels usually forgets to mention that similar features exist as libraries for other languages

The module compilation model and stronger type checking are not enough to warrant dropping C++ for Go.

The only thing the Go team shows to explain how great Go is, is REST/web servers. That's something I can already do quite easily nowadays with available technologies, so why change to the new kid on the block?

Speaking for myself, after spending some months playing around with Go, I've decided that D is a better tool for C++ developers looking for alternatives.

- the PR about goroutines

- the PR about goroutines and channels usually forgets to mention that similar features exist as libraries for other languages

The big difference is that goroutines are full-blown continuations.
I can literally have 100,000 goroutines operating at once.
This is why I am using Go instead of Java. A CSP library implemented on top of threads is nowhere near as awesome as goroutines.

I think that developers that are currently using NodeJS, or other similar async environments, should check out Go - they can have all that parallel goodness without having to turn their code into a giant hairball of callbacks.
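
As a rough illustration of the scale being claimed (the count is the commenter's, the code is only a sketch): spawning 100,000 goroutines that each do a trivial piece of work is unremarkable in Go, whereas 100,000 OS threads would not be.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        const n = 100000 // the kind of goroutine count mentioned above
        results := make(chan int, n)
        var wg sync.WaitGroup

        for i := 0; i < n; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                results <- i * i // trivial work; each goroutine starts with a tiny stack
            }(i)
        }

        wg.Wait()
        close(results)

        sum := 0
        for r := range results {
            sum += r
        }
        fmt.Println("goroutines:", n, "sum:", sum)
    }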

- no support for meta-programming;

There is a reflection package. I know that's not everything people expect for serious meta-programming but I think Go is close to matching C++.

- exceptions;

I am definitely not a fan of Go's error handling style and I miss exceptions terribly but I am learning how to use panic/defer effectively.
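
For anyone in the same boat, a minimal sketch of the panic/defer/recover idiom that stands in for exceptions at a package boundary (the parse function is made up for illustration):

    package main

    import (
        "errors"
        "fmt"
    )

    // parse panics internally on bad input; the deferred recover converts
    // the panic back into an ordinary error at the boundary.
    func parse(s string) (n int, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("parse %q: %v", s, r)
            }
        }()
        if s == "" {
            panic(errors.New("empty input"))
        }
        return len(s), nil
    }

    func main() {
        fmt.Println(parse("abc")) // 3 <nil>
        fmt.Println(parse(""))    // 0 parse "": empty input
    }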

- rich set of available libraries;

Actually, I have been pleasantly surprised by how much stuff is available for Go.

I think that developers that

I think that developers that are currently using NodeJS, or other similar async environments, should check out Go - they can have all that parallel goodness without having to turn their code into a giant hairball of callbacks.

We're using Node.js (actually, IcedCoffeeScript) right now, strictly for RPC and web servers that do next to no computation, and we're doing all the heavy lifting in Go for exactly this reason. Well, not the nested-callbacks part, because Iced makes that better, but because of the parallel goodness of goroutines, better memory management, and actually compiling to decent native code.

Where C++ is still strong Go is not a competitor at all

Another point I find missing from Rob Pike's argument is the systems programming aspect. Many projects which stick with C++ don't do so because they love the language - they do it because there is no practical alternative.

Things like (some) control over memory layout and allocation strategies are essential for many demanding tasks (see games, etc.). For low-level, performance-critical code, garbage collection is just not an alternative. Manual memory allocation makes it harder to write modular code and requires strategies to avoid memory fragmentation, but many of its drawbacks can be overcome by using memory pools, etc. And for writing libraries for this kind of code, a way to write generic code without imposing runtime overheads due to boxing, etc. is absolutely required, especially when memory is tight and the application needs to stay responsive at all times. And on mobile platforms this gets even more important thanks to slow memory systems, etc.
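
The paragraph above is about C++, but for comparison, the pooling idea carries over to GCed languages too; here is a minimal, hypothetical Go sketch of a fixed-size buffer pool built on a channel, which trades some allocations (and GC pressure) for reuse, while offering none of the layout control being discussed:

    package main

    import "fmt"

    // BufferPool is a toy fixed-capacity pool of byte buffers backed by a
    // buffered channel: Get reuses a buffer when one is available, Put
    // returns one for reuse.
    type BufferPool struct {
        pool chan []byte
        size int
    }

    func NewBufferPool(count, size int) *BufferPool {
        return &BufferPool{pool: make(chan []byte, count), size: size}
    }

    func (p *BufferPool) Get() []byte {
        select {
        case b := <-p.pool:
            return b
        default:
            return make([]byte, p.size) // pool empty: allocate a fresh buffer
        }
    }

    func (p *BufferPool) Put(b []byte) {
        select {
        case p.pool <- b:
        default: // pool full: drop the buffer and let the GC reclaim it
        }
    }

    func main() {
        p := NewBufferPool(4, 1024)
        b := p.Get()
        fmt.Println(len(b)) // 1024
        p.Put(b)
    }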

The reason C++ programmers don't flock to Go might just be this: it's a competitor to higher-level VM/GC-based languages, and not a viable alternative for the kinds of tasks where C++ still enjoys frequent usage.

There are OS written in GC enabled languages

There I disagree a little bit.

There are operating systems written in GC-enabled languages, like SPIN (Modula-3), Singularity (Sing#), and Native Oberon (Oberon).

Native Oberon is quite impressive, as it even served as the official operating system for researchers back at ETHZ, providing all the capabilities one would expect from an operating system.

All languages need some sponsor with big pockets, or some kind of killer feature, to get adopted. Systems programming languages have the additional difficulty of only being adopted as such if someone is able to push an operating system written in them.

I would embrace a systems programming language with GC support; it's just that that language is not Go.


Felix

It's not clear that people will "embrace" a new language without "big pocket" sponsors even with an educated audience -- otherwise Felix would be used everywhere :-)

It has a CT-based type system including parametric polymorphism, polyadic generalised arrays on the way, Haskell-style type classes, and optional garbage collection; it has had "goroutines" for a decade, and it embeds C and C++ (usually) without executable glue (you do need type glue!). DSL support is built in: the grammar is written in user space and dynamically loaded. And it generates C++ code which is sometimes faster than C.

Of course the documentation is seriously lacking, but the project is open and desperately in need of type theorists, a few more compiler writers, library implementors... and of course users. You can get down to the bare metal, or you can just scratch at it with monads, but most importantly you can contribute to the code base.

OS in GC'd langs

Spin was performance-competitive only on selected criteria.

The main claim for Singularity was that language-based protection was just as efficient as "classical" hardware-based OS-style protection. This was only true if you didn't compare the quantitative results to other systems; Singularity was orders of magnitude slower than the state of the art or the state of the practice. The scientific integrity of the Singularity claims is comparable to that of dramatic claims from the same authors on previous systems.

I share the view that Oberon is interesting. Deployment for use by researchers is very encouraging, but not a convincing metric for ultimate success.

Personal opinions (albeit fairly well-informed):

  1. It's clear that GC can be used in systems code.
  2. There is no conceivable reason to use GC in supervisor code, because reliable supervisor code does not allocate. Consequently, the debate about GC in kernels is misguided, and we should be focusing on the use/non-use of [strongly] typed languages.
  3. For more general systemic use, the main problem with GC today is the amount of real DRAM required for competitive performance. That is the remaining challenge to be solved.
  4. The C4 collector is a hugely promising development.

For context, I'm the BitC guy and also the architect of the EROS and Coyotos microkernels.

No Singularity

Right, the claims of Singularity (or any of its recent incarnations) could never really be evaluated, since no real applications ever ran on it. The theorem-prover-based successors are getting even less real!

Mozilla's Rust/Servo will be an interesting repeat, albeit at the browser level rather than kernel. Arguably just as relevant and certainly better validated :)

OS in GC'd langs

I feel important having you reply to my post! :)

I can't debate GC in kernels too much; maybe it is too much, I'm not sure.

What I surely agree with is your reasoning about strongly typed languages.

I think the world took a wrong turn betting on a language like C for systems development instead of a Pascal-like one.

C is a very good language for systems programming, but sadly some of the features that make it great are also the ones that open the door to security exploits.

GC

For low-level, performance-critical code, garbage collection is just not an alternative.

While I agree that this also can be an issue for migrating from C/C++ to other languages, my estimate is that 80% of projects using C++ today are actually ill-advised to do manual memory management, and would work just as well or better in a good GCed language.

Your percentage guess sounds

Your percentage guess sounds plausible to me, given the latest advances in implementing generational GCs and such over the last 20 or so years, be it before or after Java, .NET, etc.

That said, I haven't touched C++ in almost ten years, at least not in projects with significant business or application load. So maybe I've already lost sight of what our C++ coder friends are most busy with today, if it has even changed that much (?). I can only guess.

Nor have I ever bumped* into a real issue caused by the GC "alone" (note the quotes) in managed runtimes.

That said (however-however, if you wish), I've never wandered into the real-time apps realm either (in the strict sense of "real time"), which I suppose is a relevant and maybe typical use case for C++, C, assemblers, etc.

Now, sure, if one does funny things like allocating tens of millions or more of 1-character immutable strings (totally ignoring StringBuilders and the like throughout the concatenations that follow, etc.), along with highly varying lifetimes for those string instances (to trick the generational GC algorithm)... well, of course, one is likely to complain about the GC.

(Too many roots and too many almost-long-lived objects don't play nice with today's GCs. And Go knows it, just as Java and .NET have learnt.)
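
To make the anti-pattern concrete in Go terms (a sketch of mine, not the commenter's code): the first loop below allocates a brand-new string on every iteration, feeding the GC exactly the kind of garbage described above; the second reuses one growing buffer.

    package main

    import (
        "bytes"
        "fmt"
    )

    func concatNaive(parts []string) string {
        s := ""
        for _, p := range parts {
            s += p // each += allocates a new string; the old one becomes garbage
        }
        return s
    }

    func concatBuffered(parts []string) string {
        var buf bytes.Buffer
        for _, p := range parts {
            buf.WriteString(p) // appends into one growing buffer instead
        }
        return buf.String()
    }

    func main() {
        parts := make([]string, 10000)
        for i := range parts {
            parts[i] = "x"
        }
        fmt.Println(len(concatNaive(parts)), len(concatBuffered(parts)))
    }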

In my experience, today, for non-real-time apps with a good enough design and a minimum of implementation quality, even under heavy server load and with significant working sets, an industrial-strength GC isn't the first thing to worry about re: CPU time and lag.

Database usage and access remain the major kinds of bottleneck to watch out for.

EDIT
* Although, I promise, I've already seen a couple of times how deceptively easy it is to run into shameful CPU-time consumption just because of silly large object graphs traversed by some reflection-based ORMs **, for example (hmph!)... But then again, that's a whole different story, one which implies an application/architecture design issue much more than an implementation issue in a single specific component of the underlying platform, like the GC.

And, yeah, a bad, bad point for OO there, especially when it's left in the hands (brains) of oh-so-innocent/naive coders. 100% granted.

Not everything, but a lot, always eventually comes down to what the human can actually be aware of (or not). That's where experience helps a lot to keep things in perspective. Just stating the obvious, I suppose.

** Nah. Won't give names. :p

The problem, as I see it, is

The problem, as I see it, is most programmers' ignorance on both sides.

C++ programmers delude themselves into thinking that managing every byte of memory and freeing or recycling it as soon as possible is optimal. But few have any idea how malloc() is actually implemented or how much code can get executed constructing a C++ object with nontrivial constructors (or destructing that object, for that matter). Performance transparency on the C++ side is lacking in the areas of how much an individual allocation costs and how much user code executes when creating each object. C++ programs these days can have as much abstraction as Java programs, and all of that has costs.

On the other hand, Java programmers delude themselves into thinking that allocation costs are effectively zero, when in fact, they are not. Even with bump-pointer allocation, memory still needs to be zeroed and constructors run. Some memory barriers are needed to ensure that initialized objects' fields are properly visible in the memory-model sense. Again, the amount of constructor code that runs can be quite large, and how effectively the compiler inlines that is even less transparent than in C++. A lot of people assume VMs do sophisticated escape analysis and avoid allocating objects on the heap by doing scalar replacement and stack allocation; alas, this is more science fiction than reality, at least for JVMs. There are so many things that can go wrong with the stack of optimizations and analyses needed to trigger this that you can't rely on it. But the real killer is that Java applications hit the allocation wall, eating up too much memory and no longer fitting in the data cache. Allocating lots of small objects absolutely murders performance through cache effects.

I'd summarize by stating the difference is that Java programmers have too much faith in the GC and JIT and C++ programmers have too much faith in themselves.

I'd summarize by stating the

I'd summarize by stating the difference is that Java programmers have too much faith in the GC and JIT and C++ programmers have too much faith in themselves.

No idea whether this is an overly general statement or not, but a priori, I'd hope you're wrong (in the general case).

However, I can surely confirm that the same observation you make about Java applies just as well to .NET, for another, and to some ("thank God", not all) .NET programmers.

That's where the use of profilers can help a lot to educate people (*). I've done it several times: you can prepare, wrap your point up, and make it "harsh", in less than one hour, in a face-to-face meeting with the coder. One should always keep it casual, though, as it's most often about fixing some ignorance or inexperience, not stupidity.

Bah. The point is, anyway: no matter how good it is, you can kill any technology when you use it really wrong.

EDIT
(*) Granted, it's probably easier for MS Windows-busy guys like me. That ugly beast can be so much sub-optimal, sometimes, that it's never too difficult for us to find and show the anti-patterns of .NET running on top of it... ;)

What do you think about X10?

Hi Ben,

I was very intrigued by the "place types" of the X10 language, since they seem like a very direct way of talking about memory locality. They were introduced as a way of programming NUMA machines, but I've wondered if something similar could be used for programming with the cache hierarchy.

I haven't really thought hard about these issues, and you have, so I was curious if you had any thoughts about it.

PGAS

The PGAS folks around here (Kathy Yelick and her students: UPC, Titanium, ...) definitely think that way and bring it up in their talks. Original PGAS analyses were 2 level -- local and global -- but they generalized the analyses to be hierarchical. They work on big multicore machines and supercomputers (racks, clusters, ...), but they also have to heavily research memory use and will mention applications of this stuff towards it.

However, I don't think they have actually gone and applied it to the single-node / many-region case. Their single-node research is more typically low-level Perl scripts for autotuners or high-level communication-avoiding algorithms. It doesn't sound super far off (hey, you both made the connection): each level of the memory hierarchy might be fairly faithfully modeled as a node. Given their approach to algorithms, there seems to be some value (e.g., verifying the autotuners and maybe a road towards composition). Unfortunately, it also seems limited and probably has a large trade-off (e.g., precision, coping with fancy hardware features they like to exploit, etc.). Despite mentioning it, they don't seem to have gone in this direction, and I'm guessing these are some of the reasons why :)

We're in a funny boat today,

We're in a funny boat today, where, with and without automatic GC, app designers consistently conclude that the only 'sane' solution is to reboot the server every N minutes. Going to full automatic memory management leads to sloppy programming, while manual memory management leads to brittle programming. I suspect the decision of automatic vs. manual isn't an across-the-board 'better or worse,' but a matter of picking your battles.

Picking C++ is the surprise: for what programmers value when picking a language (I can show you some fun statistics ;-) ), C++ generally rates low relative to, say, Java. I'm guessing new projects either have momentum (legacy in code or people) or, if people know what they're doing, they're up against the wall on one of these battles. Languages like C# are exciting because they help straddle the decision: you can pin an object for a bubble of native programming rather than the language designer going religious on it.

Going to full automatic

Going to full automatic memory management leads to sloppy programming, while manual memory management leads to brittle programming. I suspect the decision of automatic vs. manual isn't an across-the-board 'better or worse,' but a matter of picking your battles.

Yup, you nailed it, IMO. That's pretty much the crux of today's languages' (and their users') dilemma. I have the exact same feeling, if one asks me.

Back to the OP's post about Go, I believe the language's choice to provide a GC is probably not, today, the first or main design attribute one might be unhappy with (e.g., as a C++ programmer invited to consider Go).

Wouldn't be mine anyway.

What programmers value

Please bring on those fun statistics!

Funny. I was about to ask

Funny. I was about to ask the same thing.

Last time I checked Leo's nice survey's bubbles and clouds and all that... there was a bit too much data for me to grok.

Can't wait to see some more synthetic figures. :)

I think we'll start writing

I think we'll start writing in a few weeks. Here are some relevant numbers from a recent survey about the reasons programmers picked a language for their most recent coding project (total votes and % of votes):

1. Describe how your team picks programming languages

The manager strongly influences the choice of language: 304 (18%)
Someone above the manager strongly influences the choice of language: 134 (8%)
Teammates strongly influence the choice of language: 567 (34%)
I strongly influence the choice of non-key languages: 497 (30%)
The project / product strongly influences the choice of language: 897 (54%)
Other: 237 (14%)
People may select more than one checkbox, so percentages may add up to more than 100%.

2. What factors most influenced choosing that language? - Performance (speed, power, memory, etc.)
does not apply: 76 (5%)
no influence: 290 (17%)
slight influence: 414 (25%)
medium influence: 463 (28%)
strong influence: 417 (25%)

3. What factors most influenced choosing that language? - Safety/correctness (e.g., static types)
does not apply: 115 (7%)
no influence: 504 (30%)
slight influence: 397 (24%)
medium influence: 353 (21%)
strong influence: 291 (18%)

4. What factors most influenced choosing that language? - Tools (e.g., IDE)
does not apply: 105 (6%)
no influence: 502 (30%)
slight influence: 385 (23%)
medium influence: 357 (22%)
strong influence: 311 (19%)

5. What factors most influenced choosing that language? - Extending code already written in that language
does not apply: 147 (9%)
no influence: 257 (15%)
slight influence: 223 (13%)
medium influence: 347 (21%)
strong influence: 686 (41%)

(similar results for internal legacy and open source, but not for closed source)

6. What factors most influenced choosing that language? - Portability/platform (e.g., web, phone, hardware)
does not apply: 121 (7%)
no influence: 412 (25%)
slight influence: 320 (19%)
medium influence: 371 (22%)
strong influence: 436 (26%)

Basically, speed is more compelling than safety, and platform/portability more so, but most compelling of all are legacy and libraries. It's also interesting to compare "slight/medium" vs. "strong" across the categories -- non-feature, value-add, and must-have.

Are you considering doing a

Are you considering doing a preliminary writeup for PLATEAU? I was thinking about doing a writeup of my critical programming language design ideas; convenient since we are all going to Onward.

I'll definitely be attending

I'll definitely be attending PLATEAU :)

We were planning to do a full paper sometime in August, so a short writeup for PLATEAU may be inappropriate (?). It does seem like the right crowd, though!

Workshops generally don't

Workshops generally don't have policies against concurrent or redundant submissions (with the sense that workshop papers don't really "count"); just make everything clear in your submission notes.

Cool, double checking with

Cool, double-checking with the PC now. I've gone to some workshops which seemed more 'real' (W2SP, Hot*), so I don't want to step on anyone's toes :)

Btw, feel free to send the critical programming draft over. It sounds like a productive twist for pushing and distilling DSLs and calculi design!

You might get more

You might get more consistent answers if instead of "no/slight/medium/strong influence", you ask all those questions in 1 question this way:

How much do these factors influence language choice?

[    S   ][C][ T ][      L     ][  P  ][   O   ]

S = speed
C = correctness
T = tools
L = legacy
P = platform
O = other

Describe other: {textbox}

where you can change the relative sizes of the bars to indicate how much it influences. "Other" is important because it lets you know if you missed an important category.

Agreed about other -- we did

Agreed about other -- we did pretesting & free response for that.

Our scale helps because imagine several factors are critical vs. several factors are merely nice-to-have (i.e., you don't care so much): in a relative ranking they may both show up as equal. I wanted to distinguish things like value-add vs. critical, which changes things like how a designer should make trade-offs.

I agree that our form won't be super consistent, but expecting quantitative studies to be that good anyway seems to be a classic modeling mistake. In our case, for example, I detected some disturbing demographic biases: by getting users from Slashdot/Wired/LtU/etc., we only had 1% women!

Productivity

What about productivity (and/or expressiveness)? Do people even think about that?

Dev/prototyping speed

Perhaps closest to what you want is "Development/prototyping speed (as opposed to performance or correctness)":

does not apply: 115 (7%)
no influence: 482 (29%)
slight influence: 334 (20%)
medium influence: 345 (21%)
strong influence: 383 (23%)

Dev speed is pretty similar to tooling -- so still trumped by ecosystem concerns. "Little/simple language" was even less important. What I found interesting is, again, the divide between value-add and must-have. Most are nice, but very little is crucial (given the choice between other langs).

I want to do a public release of our data, but that probably won't be until after we do a writeup. There are a few concerns there that will take time we don't really have.

A 1000 words

Or how about a chart? :) It's sorted by increasing "strong + medium influence."

Slicing on managers vs. programmers and startups vs. big companies will probably be a fun follow-up to this one.

Thanks

Thanks, that is nice indeed!

Sorting

How are the results sorted? I would have liked them to be sorted by "Strong" score, but that seems not to be the case.

They are (by visual

They appear (by visual inspection) to be sorted by strong + medium, whether or not that's how they were actually sorted.

Yep!

Yep!

Some curious claims.

I am not claiming below that C++ is a perfect language, but I find some of the points made in the article curious.

We—Ken, Robert and myself—were C++ programmers when we designed a new language to solve the problems that we thought needed to be solved for the kind of software we wrote. It seems almost paradoxical that other C++ programmers don't seem to care.

I do not think it is paradoxical. The problem I expect is that I and many other C++ programmers don't solve the same kinds of problems as the authors of Go, and the language is much less well suited to the kinds of problems I do solve.

At this point I asked myself a question: Did the C++ committee really believe that what was wrong with C++ was that it didn't have enough features?

Clearly yes. C++ clearly has a lot of features, but it is rather difficult to decide which ones to remove, as removing them will alienate some community of C++ programmers for whom those features are, if not essential, very useful. I think an assumption of the article is that the features not useful to its authors are not useful to others. As for adding features, there are general problems that C++ programmers keep having to solve by hand. Putting them in the compiler means they only have to be solved once per compiler, not many times in each program. That seems very much like a net win.

It is also worth noting that the lack of generics is still an "open issue" according to the Go FAQ, so it would appear that there is an admission that too many features have actually been left out.

Programmers who come to Go from C++ and Java miss the idea of programming with types, particularly inheritance and subclassing and all that. Perhaps I'm a philistine about types but I've never found that model particularly expressive.

My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy. And you know what? Type hierarchies are just taxonomy.

I believe that's a preposterous way to think about programming. What matters isn't the ancestor relations between things but what they can do for you.

If C++ and Java are about type hierarchies and the taxonomy of types, Go is about composition.

I think this gets to the heart of the problem. The way many people program C++ is not to start off delving into massive class hierarchies. The idea behind the STL (concepts) is more or less about "what things can do for you". The claim seems to be that Go saves C++ programmers from a problem that we don't have. This perhaps makes it less surprising that C++ programmers don't feel that Go offers any particular advantage in this instance.
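As a small illustration of that STL point (my own sketch, not something from the article): the function template below is constrained only by what its argument can do, namely iteration and operator<, not by where the element type sits in any hierarchy.

    #include <cassert>
    #include <iterator>
    #include <list>
    #include <vector>

    // Works for any non-empty range whose elements support operator<;
    // no base class, no inheritance, no common ancestor required.
    template <typename Range>
    auto smallest(const Range &r) {
        auto it = std::begin(r);
        auto best = *it;
        for (++it; it != std::end(r); ++it)
            if (*it < best) best = *it;
        return best;
    }

    int main() {
        std::vector<int> v{3, 1, 2};
        std::list<double> l{2.5, 0.5};
        assert(smallest(v) == 1);
        assert(smallest(l) == 0.5);
    }

Nothing here asks what a vector or a list *is*, only what it can do.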

Also, the quote about taxonomy seems to be an unnecessary dig. In fact, there are several ad hominem attacks in the article.

C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way.

I think this is a fundamental misunderstanding of why C++ programmers don't flock to Go. I would go as far as to say the argument is back to front. C++ allows programmers to write short, efficient programs for quite a variety of problems without having to worry about the tiny details.

If we take the list of features of Go considered as simplifications:

  • no const or other type annotations
  • no templates
  • builtin string, slice, map

These can certainly be simplifications if your problem is well served by strings, slices and maps. If, however, you work in a different problem domain, then the abstractions provided by Go lack expressiveness.

A final point is that the article is also based on the idea that a simpler language is easier to use. Taking this to extremes, C is a much simpler language than Go, since it does very little for you. There's generally a rather straightforward mapping between C source code and unoptimized assembler. But the simplicity of the language means that good abstractions are not supported, which makes the programmer's life much harder. So while Go may be a simpler language, it does not follow that programs written in Go are themselves simpler.

C is about simplistic memory models, not a simple language.

C is not really a simple language. Heck, it's nowhere near as simple as, say, Scheme or Haskell. C is a simplistic model of the machine, memory, and runtime environment. It provides a few facilities for abstraction, but as a deliberate design choice, it makes sure that there are specific, simple, known ways for programmers to *break* every abstraction. That's really what the ultra-simple memory model of C is all about: enabling programmers to break abstractions.

That isn't a mistake or a design failure; that is exactly what the designers of the language set out to do, and a goal in which they succeeded. It was first a language for writing compilers and operating systems in, after all. It seemed only reasonable to them that if a language is specifically for building things up out of the non-structured, non-abstracted memory locations and bytes you have on a "naked" (no operating system) machine, then the facilities the language provides should be viewable and usable in that language as having been built up in exactly the same way.

In C, abstraction is defined by what you do and don't do, rather than by what you can and can't do. Abstractions as other languages conceive of them, i.e., abstractions you can't break, are specifically an anti-pattern, because C programmers *RELY* on being able to reduce the abstraction level and provide new operations as needed.

Kernighan himself expressed this expectation when he complained that "there is no escape" in his famous essay about Pascal (you can read it online at http://www.lysator.liu.se/c/bwk-on-pascal.html). What he means by "no escape" is very specifically that there is no way to break Pascal's abstractions and provide new operations. Breaking abstractions is what C was designed to allow, and it is the reason he preferred it to Pascal.
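For a concrete taste of that escape hatch (my own sketch, written as C-style C++; the values are arbitrary): nothing prevents you from looking underneath the float abstraction and treating the same bytes as an integer bit pattern, which is exactly the kind of move Pascal gave you no way to make.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        static_assert(sizeof(float) == sizeof(std::uint32_t), "assumes 32-bit float");
        float f = -1.5f;
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);   // deliberately reinterpret the bytes
        std::printf("%f is stored as 0x%08X\n", (double)f, (unsigned)bits);
        // sign, exponent and mantissa are now plain integer data to poke at
    }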

This approach is powerful (perhaps even necessary!) at the lowest level of interfacing with the machine. But it creates a requirement that programmer knowledge, cooperation, and discipline go with every part of the system, and programmer knowledge, cooperation, and discipline, unfortunately, do not scale easily or well to gigantic projects worked on by herds of programmers who barely communicate with each other.

What classes add to C++ is a means of specifically refusing to cooperate with another programmer who wants to reduce the level of abstraction relied on by your code.

That sort of 'non-cooperative refusal' abstraction paradigm simply hadn't been necessary in projects the size of those contemplated by C's designers. They'd have looked at you and said, "Wait, he's doing something that breaks the program, and some idiot thinks it's your job to stop him from doing it rather than his job to fix his damn code?! That makes no sense!" It would also have made no sense to them that, a few weeks later, he still hadn't fixed his code and hadn't yet been fired, and his abuse of the system was still considered to be your fault, and the aforementioned "some idiot" was now threatening to fire *you* for failing to prevent it.

The situation of not knowing who was doing it, or of not being able to get him to care about or understand the breakage he was causing, or not being able to get *him* to cooperate with the requirements, and therefore needing to actively prevent him from calling the code you'd specifically commented "/* DO NOT CALL THIS FROM ANYTHING EXCEPT THE FOLLOWING TWO ROUTINES!! */" or frobbing the struct field you'd specifically marked "/* DO NOT FROB THIS UNLESS YOU ALSO REMOVE THE REFERENCE FROM THE MAP! */" would have been foreign to their understanding of how programmers worked together, and, not to put too fine a point on it, how programmers remained employed.

So, anyway. C is a medium-complex language built on an ultra-simplistic model of memory and runtime. C++ provides abstraction facilities on top of a language specifically designed to enable programmers to break abstractions at will, which makes the parts that were added to C at least three times as complex as they might have been otherwise and probably qualifies Bjarne Stroustrup as a Mad Scientist for even believing that he could do it.

But, in a bizarre way, it kind of turns out to have been worth it. Being able to get right down to the metal and break low-level abstractions to build new primitives in, say, device drivers, in the same language where you have template metaprogramming and classes, makes both the low-level and the high-level parts of the language more valuable than they'd have been otherwise.

I don't see that kind of range with Go. It doesn't provide low-level abstraction-breaking nor high-level abstractions as good as C++'s.

What is a simple language?

C is not really a simple language. Heck, it's nowhere near as simple as, say, Scheme or Haskell. C is a simplistic model of the machine, memory, and runtime environment.

I suppose this raises the question of what a simple language is. Some parts of C are not especially simple, with respect to creating complex function pointers or the whole array/pointer decay thing.

From some points of view, there is almost no abstraction between the language and the hardware, which is what makes it very simple. In other words, you can very easily explain every language construct in terms of which op-codes it generates. In that way, the language is very simple in that there is only a very thin layer between the source code on the screen and the op-codes run by the computer. (OK, that completely ignores optimising compilers.)
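For instance (a sketch only; the assembly in the comments is merely indicative of what an unoptimised x86-64 build might produce, and real output varies by compiler and flags):

    // Each line maps to a small, predictable handful of instructions.
    int sum(const int *a, int n) {
        int s = 0;                      //   mov   DWORD PTR [rbp-8], 0
        for (int i = 0; i < n; i++)     //   cmp ... / jge ...      (test and branch)
            s += a[i];                  //   load a[i], add into s, store it back
        return s;                       //   mov   eax, ...  /  ret
    }

    int main() {
        int xs[] = {1, 2, 3, 4};
        return sum(xs, 4) == 10 ? 0 : 1;
    }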

I suppose you could say it's the machine model, rather than the language, that is simple (I would agree), but I feel that the two are not really separable. From some points of view, assembler is an even simpler language (for certain architectures) in that the syntax is very uniform and it is directly obvious what each line of code does. Of course, since it provides no abstraction whatsoever, doing anything with the language is difficult.

This might seem like an odd point of view, and I think it very much depends on the person. In teaching, I found that a small but noticeable proportion of students seemed to grasp assembler programming much more naturally than higher-level languages initially, because of the 'obviousness' of it.

Simple =/= low-level

Don't confuse simple with low-level. Today's machines' semantics actually is insanely complicated when you go into the details, because of its complex state model and all the arcane technical artifacts that leak through. Just consider weak memory semantics.

Student perception is a bad indicator, because you usually get highly biased results. In my experience, high-level languages (e.g. functional ones) tend to be "simple" for those with no prior programming experience. The hacker types are struggling more, mainly because of the impedance mismatch with what they have already internalised and thus find "obvious".

It also depends of course on

It also depends, of course, on what each student considers as understanding a language. Experienced hackers may find it frustrating not to understand the execution model, say, of a declarative language. More experienced hackers may find it frustrating not to understand the pipeline architecture of the microprocessor when taught assembly. Even more experienced hackers are simply used to the feeling and don't complain as much...

simple == low-level for some people

Don't confuse simple with low-level. Today's machines' semantics actually is insanely complicated when you go into the details, because of its complex state model and all the arcane technical artifacts that leak through. Just consider weak memory semantics.

That all depends on the machine. And it all depends on what you're doing with the machine. If you steer away from true concurrency, for example, things get a lot simpler. Besides, the machine in question was a PIC12F675. It was chosen for teaching purposes as an accessible and cheap MCU (with a cheap dev kit), with pretty much the standard gamut of integrated peripherals, for an introduction to microcontrollers.

It's a fairly modern microcontroller, but one with at least somewhat simple semantics. It has a grand total of 64 bytes of memory, one accumulator, one condition register, and a regular RISC-like instruction set. Like all machines, it has quirks, but significant chunks of the execution model can be internalised quite easily.

Student perception is a bad indicator, because you usually get highly biased results. In my experience, high-level languages (e.g. functional ones) tend to be "simple" for those with no prior programming experience. The hacker types are struggling more, mainly because of the impedance mismatch with what they have already internalised and thus find "obvious".

I would say it does depend very much on the student. Additionally, I'm pretty much exclusively referring to students with no programming experience. For some students (again, not a huge number, but a noticeable number nonetheless) starting from asm was easier, in that they could understand it.

I would say that a reasonable definition of simple is how understandable something is to a complete novice. For some, very low level is simpler than very high level, because they can see what every individual step does, even if that makes reasoning about a block of code much harder.

Don't confuse simple with

Don't confuse simple with low-level. Today's machines' semantics actually is insanely complicated when you go into the details [...] The hacker types are struggling more, mainly because of the impedance mismatch with what they have already internalised and thus find "obvious".

I agree with you.

To me, one possible and fairly objective way to measure a language's overall "simplicity" (as opposed to its climb up the slope of complexity, essential or accidental) is whether our brain, after enough practice, can embrace the greater part of the language, Turing complete or not, in such a way that reading a formal specification (presumably a short one, for a simple language) afterwards, when one is available, surprises us the least.

E.g., for schema languages, RELAX NG's formal semantics vs. XML Schema's:

https://www.oasis-open.org/committees/relax-ng/proofsystem.html

(I, for one, am still "miles away" from having mentally internalized XML Schema's full semantics after a decade of regular practice! Granted, practice done rather reluctantly, too.)

And there, quite unsurprisingly, it seems that the simplicity vs. complexity debate often turns out to be only loosely related to what we sometimes call the language's "expressive power" (or the relative, always debatable lack thereof, depending on the sort of applications one targets).

Simple

It's really quite simple:

C/C++ give you a direct "connection" to the hardware (meaning intrinsics, memory layout, control over allocations, resources, etc.), while still providing a fairly large set of abstractions and programming styles. If you aim to create a language that is supposed to compete with C/C++, you had better make sure that its hardware-level support is better than that of C/C++. And you had better also make sure that its abstractions are superior to those of C/C++.
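A tiny sketch of that "direct connection" in C++ terms (the 64-byte figure is a typical cache-line size, not something the standard guarantees): layout, alignment and allocation are all under the programmer's explicit control.

    #include <cstdio>
    #include <vector>

    // Force 64-byte alignment; sizeof is padded up to a multiple of that.
    struct alignas(64) PaddedCounter {
        int value;
    };

    int main() {
        static_assert(alignof(PaddedCounter) == 64, "alignment is under our control");
        std::printf("sizeof(PaddedCounter) = %zu\n", sizeof(PaddedCounter));   // 64 here

        std::vector<int> v;
        v.reserve(1 << 20);             // one up-front allocation...
        for (int i = 0; i < (1 << 20); ++i)
            v.push_back(i);             // ...and no reallocation during the loop
    }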

That's a tall order, it seems.

Can be done

It's not very hard to make something that is strictly better in isolation if we can persuade people that it is all right to end up with something that isn't compatible with the C or C++ (code, tools and knowledge) they possess. Which I agree is a tall order.

Even so, one wonders why languages that are supposed to be well-suited for low-level and systems programming just aren't. There still is no good standard way in C to describe an arbitrary memory layout (with specified bit fields, endianness and alignment).
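To illustrate that complaint (a hypothetical 16-bit header format, sketched in C-style C++): the bit-field version reads nicely, but its bit order and padding are implementation-defined, so portable code for a wire format usually ends up packing by hand.

    #include <cstdint>

    // Tempting, but the layout of these bit fields is implementation-defined.
    struct HeaderBitfields {
        std::uint16_t version : 4;
        std::uint16_t flags   : 4;
        std::uint16_t length  : 8;
    };

    // Fully specified by hand instead: version in the top nibble, then flags,
    // then length. (Byte order still has to be handled when the value is
    // actually written out.)
    inline std::uint16_t pack_header(unsigned version, unsigned flags, unsigned length) {
        return static_cast<std::uint16_t>(((version & 0xFu) << 12) |
                                          ((flags   & 0xFu) <<  8) |
                                          ( length  & 0xFFu));
    }

    int main() {
        return pack_header(1, 0, 42) == 0x102A ? 0 : 1;
    }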

I realise who I am replying to, but I should add that I found it enlightening to read the motivation behind EASTL. Only then did I really understand why C++, for better or worse, is so hard to replace for some purposes.

Worse is better applies here

Worse is better applies here as elsewhere. Ada has very nice features in this department, which are often to its detriment.