InfoQ video + transcript of Rob Pike on Go

Didn't notice this linked on LtU yet.

I didn't have a chance to

I didn't have a chance to see this yet. Is it homepage worthy?

yes, it's quite interesting

I just watched it, and it covers more interesting language details than is typical in a video. (I think.) Probably many folks will have ideas about what Pike discusses. Most of the material is very rational and clean sounding, without much magic obscuring detail.

things covered

* quick compiles, small binaries, good dependency checking/enforcement - and how they get that, technically.

* their crazy approach to 'interfaces', in that they are sort of automatic and implicit and inferred.

* their crazy approach to 'types' and 'methods' on them, and how they use encapsulation as a form of delegation as a form of inheritance if you will.

* their crazy approach to 'threads' where working code is moved dynamically in the thread pool and stacks are dynamically grown on the heap.

* their experience with GC just not being practically good enough.

all in all, it made me think Go is much less weird than i previously thought it was :-)

My 2 cents, having looked at

My 2 cents, having looked at Go more closely a while ago:

* quick compiles, small binaries, good dependency checking/enforcement - and how they get that, technically.

Frankly, these will only amaze people who take the situation in C/C++ or Java for granted. I don't think that, say, an OCaml programmer will feel obliged to be particularly impressed.

* their crazy approach to 'interfaces', in that they are sort of automatic and implicit and inferred.

* their crazy approach to 'types' and 'methods' on them, and how they use encapsulation as a form of delegation as a form of inheritance if you will.

Go's interface and method mechanism is basically a crossing of structural object types and type classes. Unfortunately, it also is significantly less expressive than either of those.

One related thing they didn't mention in the interview is that Go lacks parametric polymorphism. Pike et al argue that it's rarely needed given interfaces, but to be honest, I doubt that they will get away with that claim.
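
To make that concrete, here is a rough sketch (my own toy example, not from the interview) of the usual workaround in Go as it stands: a reusable container has to traffic in interface{}, and callers only recover the static type through run-time assertions.

package main

import "fmt"

// A stack of "anything": without parametric polymorphism, the element type
// is interface{} and nothing keeps the container homogeneous.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
  v := s.items[len(s.items)-1]
  s.items = s.items[:len(s.items)-1]
  return v
}

func main() {
  var s Stack
  s.Push(42)
  s.Push("hello")               // compiles fine; the stack is now mixed
  fmt.Println(s.Pop().(string)) // caller must assert; a wrong guess panics
  fmt.Println(s.Pop().(int))
}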

* their crazy approach to 'threads' where working code is moved dynamically in the thread pool and stacks are dynamically grown on the heap.

What they call "goroutines" are ordinary lightweight threads, with shared state and simple message-passing primitives. Nothing crazy with the implementation technique either, unless you are used to OS threads being the only means of threading in a language. AFAICT, it is using relatively standard techniques.

* their experience with GC just not being practically good enough.

Well, I believe he is primarily talking about GC in Go, and the current Go implementation has a rather simplistic GC.

all in all, it made me think Go is much less weird than i previously thought it was :-)

There is nothing "weird" about Go (except maybe its syntax). In fact, it is a very conservative language. For people who are language savvy enough, like I'd think most LtU readers would be, there isn't much to be surprised about. Or to be excited, if you want to put it that way.

Contentious statement:

Contentious statement: expressive type systems are a bug, not a feature; runtime polymorphism should be preferred to expressive compile-time typing if it turns out to be more widely understood.

The problem with expressive type systems is how unfamiliar most programmers are with the level of abstraction, and the knots they tie themselves into trying to get types to flow to all the places they need to for everything to work out statically. Programmers are plenty familiar with the flow of values, but getting complex parameterized types to work out just right can be painful for them, even when the logical type flow is not that complicated.

For example, Java's Enum:

public abstract class Enum<E extends Enum<E>>

I'd hazard a guess that most people in the Java world feel that Java's generics have been a failure owing to their having brought this kind of idiom into common use, when all it's doing is passing the concrete type at the end of the derivation chain back to the supertype, so that the supertype can use the final derived type in its method signatures to make the typing work out nicely. It's little more than a procedural call in value-flow terms, but in type flow it makes too many programmers' heads hurt.

Be that as it may be

expressive type systems are a bug, not a feature

Be that as it may be, inexpressive (i.e., inflexible) type systems surely aren't superior.

runtime polymorphism should be preferred to expressive compile-time typing if it turns out to be more widely understood.

And here it is, the latest edition of the bi-monthly LtU typed vs untyped debate... :)

The standard reply to that is that the lack of a type system only gives the illusion of understanding. E.g., if a programmer does not understand why the Java Enum class is the way it is then he won't be able to properly write or use equivalent abstractions, with or without types. OO abstractions are complicated and often counterintuitive, the types just reveal it.

I'm not actually arguing in

I'm not actually arguing in favour of untyped / dynamically typed / etc. I'm making an argument about a usability tradeoff between static and dynamic typing, that the optimum is somewhere in the middle, and going too far down the expressive type road is a mistake, and that therefore a criticism of a language that its type system is insufficiently expressive even when it provides a dynamically typed escape valve is not automatically correct. (See note at bottom: I am not taking "expressive" and "flexible" to be synonyms with respect to type systems.)

I believe most practicing programmers generally understand value flow better than they understand type flow, even when the two are describing the same thing, irrespective of "types just revealing" the "counterintuitivity". Even if the end result is the same, programmers understand it better, and can create it themselves more easily, when it's expressed in terms of value flow instead of type flow. For one thing, value flow is easy to debug, even with primitive tools like print statements; type flow problems tend to give you error messages instead, messages that increase exponentially in impenetrability with the combinations of type constructions used.

Suppose a different definition of Enum, one relying more on polymorphism; instead of needing a type parameter to describe the type of the return value of valueOf, it merely returns the base class, using polymorphism to make things fit:

public abstract class Enum {
    public static Enum valueOf(Class enumType, String name)
    { /* ... */ }
}

I confidently assert that most programmers can understand an idiom like this, and create new abstractions that follow it perfectly well, even when most of them would have difficulty with the generic Java version. All I am asserting is that expressive type systems put so much emphasis on, and thus require an understanding of, composable type expressions that programmers do not in fact need in order to get their jobs done - all they need is value flow that they understand. There is a meaningful distinction in complexity between value flow that is known and understood to be correct, and understanding how to represent an equivalent type expression that describes the flow of types along the same paths as the values. The more expressive your type system, the more work programmers need to put into this non-essential work to get their essential work done; and the more expressive it is, the more composable and orthogonal it is, the more ways it has of going wrong.

Another way of putting it is that programmers are not susceptible to convincing, or teaching, by proof. Even if you prove an equivalence between understanding that a particular value flow is correct and proving it with types, that is not sufficient to get programmers to use the types with the same facility they do values. This is a human factors argument.

Note: I am taking "expressive type system" to mean not a flexible type system (otherwise I'd say that instead), but rather a type system in which you can use type expressions to express advanced relationships between the types. By this, a dynamic type system is not very expressive, per se; it doesn't express much about what's proven by types; but such type systems are usually very flexible (often too flexible, IMO; again, I am not making a simple static vs dynamic typing argument).

C# does that

Suppose a different definition of Enum, one relying more on polymorphism; instead of needing a type parameter to describe the type of the return value of valueOf, it merely returns the base class, using polymorphism to make things fit:

public abstract class Enum {
    public static Enum valueOf(Class enumType, String name)
    { /* ... */ }
}

That's exactly how Enum.Parse looks in C# because it was defined before the language had generics.

The end result is that people complain about it. Having to mention the type name twice (once in the cast, and once as the value parameter) is particularly cumbersome. In .NET 4, they finally added a generic version of TryParse that doesn't have this problem.

I agree that complex generic use cases can be hard for programmers to understand, but I find that most of the complexity is a burden on the people designing an API and less so on the people using it. And, in practice, I think people generally do want to use type-safe APIs whenever possible.

I don't know how well Java serves as an example here. I think type erasure and use-site variance muddy the water when it comes to answering the question of whether or not people like generics. Over in .NET land I don't think anyone questions their utility and I find them much easier to use there.

C# doesn't use a constrained

C# doesn't use a constrained type for the type parameter; instead, it checks it dynamically at runtime (throwing ArgumentException in that case), rather than using the type system to do this. In so doing, the type argument becomes just a shorthand for passing the Type - and as a bonus, it can be inferred from the type of the out result parameter. The type system has thus been (ab)used for mere API convenience, without actually proving the static type-safety of the code.

We are, of course, talking about a specific example, one which doesn't in fact use a very complicated type expression, and so is dangerous to draw general conclusions from. But I haven't come to this view on a whim. This partial disillusionment with the applicability of expressive static type systems has grown in response to how I see practitioners struggle with getting their type declarations right, in Delphi, C# and Java in particular, with all this effort put into incidental complexity rather than essential complexity. Many of these people would probably be better off with a dynamic language but don't know any better.

Not equivalent

I believe most practicing programmers generally understand value flow better than they understand type flow, even when the two are describing the same thing

Types describe assumptions and invariants about all values that occur at certain abstraction boundaries. In general, you have to understand these assumptions - and their non-local implications! - in order to build or use abstractions correctly. A type system derives many of those implications, which can be highly non-trivial. Understanding how and where individual values "flow" is not enough.

Suppose a different definition of Enum, one relying more on polymorphism

I'm afraid that I don't understand the point of your example. The reason for the "strange" genericity of the Enum class is not the typing of the valueOf function. Enum's generic parameter and its recursive bound are necessary to adequately describe the restrictions on the argument of the compareTo method. I agree that they are not very intuitive. But I'd argue that the complexity there is primarily a consequence of the object-oriented modelling and its difficulties with binary methods, and not of the type system per se.

(Btw, this is the first time I hear generics being described as not polymorphic.)

Types describe assumptions

Types describe assumptions and invariants about all values that occur at certain abstraction boundaries. In general, you have to understand these assumptions - and their non-local implications! - in order to build or use abstractions correctly. A type system derives many of those implications, which can be highly non-trivial. Understanding how and where individual values "flow" is not enough.

I'm not arguing that one doesn't need to understand the invariants; I am arguing that one doesn't need to understand how to express these invariants in a static type system. An expressive type system that lets you state these invariants is not as easy to use as a dynamic type system.

I'm afraid that I don't understand the point of your example.

The point of the example is that, in general, practicing programmers find the generic version hard to understand because of the complexity of its type declaration, but don't find the non-generic version complex. The non-generic version may be marginally less usable in Java, but that's a distinct issue. The point being, the more complicated the relationship being described by type expressions (because the type system in question is expressive), the less apt most programmers are to understand them; and they are not of necessity motivated to learn these things, because such problems are incidental to getting a working program running.

After all, getting a working program sooner is more valuable than a provably correct program later or more expensively (in human capital terms).

Re polymorphism: I was talking about runtime polymorphism, not compile-time (parametric) polymorphism. If you spend some more time outside academia and with working programmers, you won't find this odd.

Accidental?

I'm not arguing that one doesn't need to understand the invariants; I am arguing that one doesn't need to understand how to express these invariants in a static type system. An expressive type system that lets you state these invariants is not as easy to use as a dynamic type system.

I have my severe doubts that a programmer can truly do the former without having a clue about the latter, i.e., I disagree that the complexity of types usually is accidental (although there are such cases, no doubt). But I suppose we have to agree to disagree.

In any case, what matters is not (perceived?) ease of use, but ease of correct use.

After all, getting a working program sooner is more valuable than a provably correct program later or more expensively (in human capital terms).

Oh well. A similar attitude used to be dominant in that (very real-world ;) ) company I'm working at. Until they discovered that that approach scales really badly. Moving away from it became a priority then, and still is.

After all, getting a

After all, getting a working program sooner is more valuable than a provably correct program later or more expensively (in human capital terms).

Oh well. A similar attitude used to be dominant in that (very real-world ;) ) company I'm working at. Until they discovered that that approach scales really badly. Moving away from it became a priority then, and still is.

Indeed, and I was pushing somewhat stronger than my own opinions in that statement - though it depends on the problem domain: in some domains code is discarded or rewritten rather than maintained for significant lengths of time; and in other scenarios, time to market under budget can be most critical (startups especially), with problems cleaned up later after a lead has been established. Some would say that such concerns aren't relevant on LtU (and have criticised threads I've participated in in the past on that basis), but from my perspective as a guy maintaining a compiler and language and trying to sell it to working programmers, there's only so far you can go in divorcing language design from market realities.

Well, I believe he is

Well, I believe he is primarily talking about GC in Go, and the current Go implementation has a rather simplistic GC.

He actually said that the semantic model of Java can cause memory fragmentation, whereas in Go when a struct is embedded in a struct, the compiler automatically puts those together. I don't particularly understand why he thinks this is such a big deal. Sounds like a misunderstanding of the von Neumann Bottleneck to me.
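
For concreteness, here is a small illustration of the layout difference he is pointing at (my own sketch, not from the talk): an embedded struct's fields are laid out inline in a single object, whereas a reference field - the only option for an object-typed field in Java - points to a separate heap allocation.

package main

import (
  "fmt"
  "unsafe"
)

type Inner struct{ A, B int64 }

type Flat struct {
  Inner // embedded by value: A and B are laid out inline within Flat
  C int64
}

type Indirect struct {
  Inner *Inner // a pointer, as a Java object field would be: separate heap object
  C     int64
}

func main() {
  fmt.Println(unsafe.Sizeof(Flat{}))     // 24 on a 64-bit machine: one contiguous block
  fmt.Println(unsafe.Sizeof(Indirect{})) // 16: a pointer plus C; Inner lives elsewhere
}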

Also, in general, with good GC (like the .NET collector), there are very few corner cases where the GC hard pauses or has behavior where it essentially seems to be leaking memory. The major case is a once-off non-recurring event that causes second generation objects to "die", and the GC isn't aware of this sudden event triggering a need for massive memory freeing. I am not aware of any GC algorithms that can handle this very hard heuristic problem automatically, but for this scenario the programmer can always call GC.Collect(). This particular GC problem is more common in reactive systems, like GUIs, where a dialog is suddenly destroyed and all of its resources are by proxy released.

Some info on Go's GC

I've read recently that on 64-bit CPUs, Go allocates a 16 GB virtual address space for the GC, which creates issues in some configurations.
See http://lwn.net/Articles/428100/#Comments

The major case is a once-off

The major case is a once-off non-recurring event that causes second generation objects to "die", and the GC isn't aware of this sudden event triggering a need for massive memory freeing. I am not aware of any GC algorithms that can handle this very hard heuristic problem automatically

Incremental and on-the-fly collectors handle that case, the former because it's always performing major collections, the latter due to its inherent concurrency.

re: the weirdness

the weirdness for me was that previously it all sounded like a limited and cut-off language in strange ways (i've forgotten the details by now), whereas this presentation made it sound less than half insane.

You're right, nothing new to see.

That is also my point of view.

Anyone with a bit of programming-language culture will find that Go does not offer many new concepts, if any at all.

Without Google's backing the language would probably die a silent death, but being a Google development, it might actually replace C if they push it really hard.

Personally I feel attracted to it, mainly because of its similarities with Oberon, which I played with a bit a few years ago.

But as soon as I try to do something significant in Go, I miss all the high level abstractions that modern languages do offer.

Actually, I would rather spend my time learning about C++0x than Go.

Type classes?

How are Go interfaces related to type classes? To my cursory glance they look like pretty straightforward structural types.

Interfaces are structural object types, mostly

Interface types themselves are pretty much exactly structural object types. What is a little bit typeclassy about them is the way any user-defined (nominal) types can be made "instances", by defining the appropriate methods on those types.

Of course, this mechanism is much less flexible than type classes, because it still only provides dispatch based on values not types, including the limiting OOish bias on a single receiver argument. Also, methods cannot actually be defined independently from their receiver types, as they have to occur in the same package. So in a way, packages act as implicit OO classes, or sets thereof.

The mechanism is also more limited than objects typically are, because there is no proper subtype relation, only a shallow and non-transitive notion of "assignability". There is no inheritance either, only implicit delegation through anonymous fields. So no open recursion or late binding or whatever you prefer to call it.
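
A small sketch of both points, for concreteness (my own example): the interface is satisfied implicitly by whatever type happens to have the right method set, and the only "inheritance" is delegation through an anonymous field, with no late binding back into the outer type.

package main

import "fmt"

type Stringer interface {
  String() string
}

type Celsius float64

// Defining the method is all it takes; there is no "implements" declaration.
func (c Celsius) String() string { return fmt.Sprintf("%g°C", float64(c)) }

type Labelled struct {
  Celsius // anonymous field: String() is delegated to the embedded Celsius
  Label string
}

func main() {
  var s Stringer = Celsius(21.5) // assignable: the method set matches structurally
  fmt.Println(s.String())
  fmt.Println(Labelled{Celsius: 21.5, Label: "room"}.String()) // promoted method
}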

Got ya

Thanks. I kept looking for something like (static) type based dispatch but couldn't find it. I hadn't considered the separation aspect as being particularly "typeclassy". I had just mentally filed it in the bucket where I keep C# partial classes.

To put it another way, if somebody invented a Haskell variant where type class instances had to be defined right next to the type definition then I would still consider the language to have type classes even if in a bit less useful form. The separation aspect doesn't seem essential to the concept of type classes.

Runtime dispatch

Perhaps my calling interfaces "much like typeclasses" was confusing. Thanks for your explanation.

Runtime dispatch is actually more open ended. (Don't know if this is implemented or whether some impl. detail will prevent this from working but at least in principle) it should be possible to dynamically load a new go module and invoke its methods (that conform to an existing interface). It may even be possible to replace modules at run time.

Having to define methods in the same package might not be so bad. You can for instance define

type Foo int

and add methods needed by some interface Bar. Not every int can fulfill the role of Bar, but every Foo will.

Interfaces can be composed from other interfaces so that gives you a kind of subtyping.
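
Roughly like this, as a small toy sketch (Foo and Bar as above, plus a composed interface to show the embedding):

package main

import "fmt"

type Bar interface {
  Double() int
}

// Interface composition: FancyBar embeds Bar and adds a method.
type FancyBar interface {
  Bar
  String() string
}

type Foo int // every Foo can play the role of Bar; a bare int cannot

func (f Foo) Double() int    { return int(f) * 2 }
func (f Foo) String() string { return fmt.Sprintf("Foo(%d)", int(f)) }

func main() {
  var b Bar = Foo(21)
  fmt.Println(b.Double()) // 42

  var fb FancyBar = Foo(21)
  b = fb // a FancyBar is usable wherever a Bar is expected
  fmt.Println(b.Double())
}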

But Haskell typeclasses do seem much neater.

I wouldn't be surprised if go evolves in this area as people gain much more experience. I have a feeling their model of interfaces and structs can be pushed further.

"Runtime dispatch"

Runtime dispatch is actually more open ended. (Don't know if this is implemented or whether some impl. detail will prevent this from working but at least in principle) it should be possible to dynamically load a new go module and invoke its methods (that conform to an existing interface).

Well, yes, I don't see why this should not be possible. But I don't regard this as a feature of interface types, it's just a natural consequence of dynamic loading and higher-order data. That is, you can get the same benefit with plain function types.

It may even be possible to replace modules at run time.

Are you sure? I didn't see this anywhere.

Having to define methods in the same package might not be so bad. You can for instance define "type Foo int" and add methods needed by some interface Bar.

Yes, but what this means is that whenever you want to retro-actively adapt a given type to a new interface you either (1) have to modify the original module defining the type, or (2) introduce a new wrapper type, with a complete set of delegation and conversion methods (since every new type is essentially an abstract type outside its package). Only in the case where the right methods happen to be there already, with exactly the right names and signatures, can you get away without that -- but that's rather a rare case (particularly considering the lack of proper subtyping, see below). In other words, usually you are no better off than you would be in a conventional OO language. Consequently, I find the claims about Go's interfaces being more flexible misleading.

Interfaces can be composed from other interfaces so that gives you a kind of subtyping.

You don't even need that composition, since this kind of subtyping is structural: interface{F(); G()} is assignable to interface{F()}, for example (which is good). But this is only shallow width subtyping, there is no depth subtyping, and the subtyping rules are not recursive and do not extend to other types (e.g. functions, methods) either.

Also, assignability is mingled with ad-hoc rules for abstract type revelation, which break transitivity, among other problems.

[Edit: The weak subtyping rules can be a real show stopper. Consider these variations of the above interfaces:

type T1 interface {
  F() T1
  G()
}
type T2 interface {
  F() T2
}

Now, a value of type T1 is not assignable to T2 anymore, due to the recursion. Nor does

type U int
func (self U) F() U { return self; }

implement T2.]

It may even be possible

It may even be possible to replace modules at run time.
Are you sure? I didn't see this anywhere.

That was speculation on my part (if you can load new modules you are half way there). But apparently they do not allow "go" shared libs (as yet?).

Haven't studied go's assignability rules so can't comment on the other parts of your message for now.

Code update is hard

if you can load new modules you are half way there

You might be a bit over-optimistic there. Dynamic code update is a very hard problem, at least if you don't want it to break almost every aspect of safety (especially in a typed language). With dynamic linking alone you are not even 5% there.

What's the other 95%?

Does this depend on the particulars of your language? It seems that if you have a pure language with only structural typing, then pickling a value is as easy as writing out a description of its type (or a hash of the description) along with the value representation and then verifying an exact match on the way back in.

There are probably performance related issues that require extra effort. If you can load new code, then you probably want to garbage collect old code. Am I missing something? Maybe other implicit requirements?
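
For what it's worth, here is a rough sketch of the idea, approximated in Go with reflect and gob (Go's types are nominal rather than structural, so the reflected type string plus a hash only stands in for the structural type description I have in mind):

package main

import (
  "bytes"
  "crypto/sha256"
  "encoding/gob"
  "fmt"
  "reflect"
)

type Point struct{ X, Y int }

// pickle writes the value alongside a hash of its type description.
func pickle(v interface{}) (desc [32]byte, data []byte, err error) {
  var buf bytes.Buffer
  if err = gob.NewEncoder(&buf).Encode(v); err != nil {
    return
  }
  desc = sha256.Sum256([]byte(reflect.TypeOf(v).String()))
  return desc, buf.Bytes(), nil
}

// unpickle verifies an exact match of the type description before decoding.
func unpickle(desc [32]byte, data []byte, out interface{}) error {
  want := sha256.Sum256([]byte(reflect.TypeOf(out).Elem().String()))
  if want != desc {
    return fmt.Errorf("type description mismatch")
  }
  return gob.NewDecoder(bytes.NewReader(data)).Decode(out)
}

func main() {
  desc, data, err := pickle(Point{3, 4})
  if err != nil {
    panic(err)
  }
  var p Point
  if err := unpickle(desc, data, &p); err != nil {
    panic(err)
  }
  fmt.Println(p) // {3 4}
}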

Depends

Well, it depends on what exactly everybody means by code update. Bakul said "replacing modules at runtime", which sounded to me like some form of hot code update like e.g. in Erlang. That is a problem that is orthogonal to pickling. It potentially (not necessarily) involves issues like finding live data owned by that module on the heap and in the stack(s), transitioning such data from one type to a new one (and back, if your language is higher-order), while ensuring type-safety, updating stack frames, discovering and collecting old code, and probably lots of other issues, semantic and implementation-wise.

In a pure language, you may have the additional problem that such an operation, by its very nature, is impure, and it's not clear what to do about that.

But maybe Bakul meant something else, and I was misreading his post. You can certainly find less intrusive ways to transition modules.

Oops. Didn't read this

Oops. Didn't read this before responding! Yes, something like Erlang. My point was that to get live upgrades you don't have to have fully dynamic typing. Or you don't have to give up on type safety like with dlopen(). With "pure" static typing but separating interfaces from implementations you can do live updates. But allowing live upgrade may place certain restrictions on a go implementation. Maybe one can combine GC with data migration to an upgraded module?

Anyway I think perhaps this provides a useful middle path.

I admit this is all pure speculation so I will stop now! But it could be fun to figure out an implementation!

live programming, dynamic update

As Andreas has explained, many design issues exist in how to handle live program state. Values already coursing through the system may have the wrong structure for the new code. Even with structural typing, you'll be in trouble if your new code expects more structure than existed in old values. Prototyping might help a bit... i.e. add a bit of structure to a parent/class object, and the children inherit it.

Ideally, you could recompute/regenerate the 'state' of the program as if the new code had always been in place. But this requires careful language design... i.e. even pure functional programming idioms tend to capture a lot of non-regenerable state in the IO streaming. (Even if you kept the whole input history for an FP program - which would cause a space leak - you'd have a problem because your new code might expect inputs in a different order as a result of generating outputs in a different order.)

Management of external resources is another major issue. For example, if your program has some relationship with a csound synthesizer, video stream, or a robotic arm - then you want this relationship to actively continue without glitches or hiccups.

Performance is actually a lesser issue. A hotspot or tracing compiler can allow you to get near ideal performance (i.e. dynamic inlining optimizations). It should be feasible to "warm up" the new code before injecting it (i.e. by using older profiles or incoming data) if necessary to avoid timing hiccups.

Erlang?

Erlang does allow live code update (and IIRC so did some Lisp systems?). It is not the dynamic linking alone but the open-ended nature of interfaces -- the decoupling of interface from implementation -- that makes me think go can potentially go in this direction. Erlang and Lisp systems must face similar sorts of issues & constraints re when and where live code upgrade can be done safely. Even in Erlang you have to engineer the system carefully to allow upgrading code and migrating data to the new module.

Done

Done

Maybe extend the synopsis a

Maybe extend the synopsis a bit for the Home page?

What I like about Go

I haven't made up my mind about Go but here are the things that I like:

  • Interfaces. They are much like type classes in Haskell. Or you can think of them as a generalization of Plan 9's file concept (any object that provides open/close/read/write is a file -- this tamed the complexity of dealing with IO devices, networks, etc. in a major way). Basically an interface defines a `role'. Any object can play this role if it implements the methods defined in the interface.
  • Channels. A means of type safe communication. The fact that you can send channels via a channel creates a kind of grant capability (but channels are only within a program, which limits their utility); see the sketch after this list.
  • Go routines. Ok, the name is hokey but the feature makes it very easy to use concurrency. I have written simulators in C and C++ that required 1k+ threads so perhaps I appreciate this more! Go routines are likely somewhat more expensive than simulator threads but much more convenient to use.
  • Anonymous structure fields. This gives you most of C++'s class inheritance without its complexity.
  • Its dependency management and lack of an #include facility. At $work, over the years -I flags have accreted to where every cc/c++ line is over a page long! Once the complexity management battle is lost, people find it easier to add a new -I than clean up. My attempts to cleanup were met with resistance since "something might break"!
  • Its name. It connotes dynamism to me and doesn't make my hands shake :-)
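
Here is roughly what I mean by the channels and goroutines points, as a small toy sketch: a worker goroutine serves requests from a channel, and each request carries its own reply channel, so handing that reply channel to someone is what grants them the right to receive the answer.

package main

import "fmt"

type request struct {
  n     int
  reply chan int // whoever holds this channel may receive the answer
}

func squarer(requests chan request) {
  for req := range requests {
    req.reply <- req.n * req.n
  }
}

func main() {
  requests := make(chan request)
  go squarer(requests) // goroutines are cheap enough to start thousands of these

  reply := make(chan int)
  requests <- request{n: 12, reply: reply}
  fmt.Println(<-reply) // 144
}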

Go seems like a fairly conservative language, doesn't really break any new ground, but it does seem like a very nicely put together package. It also has a good pedigree (if that is the right word), with people like Rob Pike, Ken Thompson & others, who have produced a lot of very good code over the years. I usually end up liking their pragmatic choices. But I have to write a bunch of code in it to see if I like it!

Rob Pike on Parallelism and Concurrency in Programming Languages

Also another interesting infoQ interview with Rob Pike:
Rob Pike on Parallelism and Concurrency in Programming Languages

A lot of time spent trying

A lot of time spent trying to prove that Go is better than C++. I've kinda thought that one has to work VERY hard to create something that is not better than C++.

It Depends on your choice of

It depends on your choice of metrics.

A primary metric in C++ is that you don't pay for any features you don't use at runtime.

A primary metric in a lot of not-C++ languages is that the feature-set is simple and internally consistent.

Which is 'right' or 'better' is a matter of engineering and ideology, not science.

I don't think this is as

I don't think this is as unique to C++ as you make it sound.

I'm sure there are others.

I'm sure there are others. Could you name a few of them?

Ada...

Ada...

Consistency

The consistency property as defined by Thomas Green is: After part of the notation has been learned, how much of the rest can be successfully guessed? I've not written much Ada code, but from what I've read of it I believe that Ada compares very favorably to C++ with respect to this property (in both notation and semantics).

If I used C++, I would pay

If I used C++, I would pay for the pointer arithmetic, which I don't use. Indeed, I could want to use a GC, but GCs for C++ are non-moving because of pointer arithmetic, which means that I wouldn't be able to benefit from the performance advantages of improved locality of heap compaction.

(Of course, that doesn't invalidate your statement; it only means that C++ is not always optimal with respect to that metric. And you could also argue that, in theory, one could develop a GC for C++ that would only work under the assumption that no pointer arithmetic is used. It was only an illustration that low-level code with weak semantic guarantees can also have a cost in the long run.)

A primary metric in C++ is

A primary metric in C++ is that you don't pay for any features you don't use at runtime.

That's not true. Unfortunately, in C++ I have to pay for all features that my coworkers decided to use, and for all features that my coworkers might use in the future, and all features that I might be using in the future.

Except you do

Take exceptions for example. Turn support for them on (with a compiler flag no less!), and suddenly your binary becomes ~50% larger with all the added exception handling code, even in places that don't generate or handle exceptions.

Take templates for example. Without effective whole-program dead code elimination, every time you instantiate a template you get a whole new version of that class/code, not just the methods that you use. As an added bonus, you may also get a whole new family of compile errors thanks to the lack of separate type checking.

Take header files. No, please, take them! You can pay an arbitrarily large factor in compile time due to poor header file layout.

Try getting the standard C++ language--whatever that means--to run on a new platform, particularly an embedded platform. The amount of low-level support routines a typical compiler assumes is nothing to sneeze at. You might end up writing those in assembly or C and compiling them separately to create the appropriate libraries.

Zero-overhead Principle

I do not claim C++ is ideal (nor even effective) with respect to its metrics, especially not in absolute terms. I only said it is a primary metric, and that whether one design is 'better' than another depends on the metrics chosen. You can judge for yourself the extent to which C++ accomplishes Stroustrup's design principles, including this "zero overhead principle".

Overhead can be broken down a bit - e.g. into runtime CPU vs. space vs. compile-time overheads. It has been my impression that C++ focuses entirely on the CPU overheads. A larger binary only results in larger CPU overheads if you're forced to load the extra code. There are implementations of exceptions, for example, that do not require loading any extra code for the happy-path... i.e. where one can trace the stack and use an associative map to find the handler code only when the exception actually occurs. If implemented, this could reasonably be called a 'zero-overhead' exception handling technique as far as most C++ developers are concerned.

"Zero overhead" is a relative term, of course. For C++, that zero-point is obviously relative to a good C implementation. A lot of problems you and gasche and migmit mention are problems in C, too, and so simply aren't counted as 'new' overheads even if they do hinder absolute optimizability.

Templates are quite a problem-child, I agree. They result easily in redundant code, especially across modules, and a lot of cache misses. Templates also exacerbate the whole header-files problem. But they aren't a problem unless you use them... which meets the C++ design goal.

In any case, the relevant point of my earlier statement is unrelated to whether C++ actually accomplishes its design goals. I'm just saying that the quality of a language depends on the metrics you favor, and the design of a language depends on which metrics the designer favors. C++ was quite obviously not designed with much attention to consistency, and sacrifices consistency in order to achieve goals of greater weight.

These questions stunk

Interviewer: "What's the smallest type you can have in Go?"

Seriously?!

Interviewer: "It's an interesting take on the idea of open classes that you have in Ruby, where you can actually change a class and add new methods. Which is destructive, but your way is essentially safe because you create something new."

Yes, because Rob Pike clearly borrowed ideas from Ruby of all languages, and Ruby clearly had the novelty of open classes...

Overall, I took out of this talk and deep dives into Go the same things Andreas Rossberg did. At a SPLASH workshop, a student presented a paper about Go, and later he told me how excited he was by the innovation in Go and how interesting the ideas were. I was quick to point out to him that they were not new, but also not as expressive as previous research. I didn't lecture, but I just recommended he do his homework and for each feature in the language he thinks is novel, find a language that already did it (better).

Bottom line: It is good to hear that Rob Pike simply wants to save Google employees from C and C++. I guess if that is the metric he wants to be judged, internal political mandate will ultimately determine his success/failure.

Rob Pike simply wants to

Rob Pike simply wants to save Google employees from C and C++

I've had some vicarious angst for the Go team, as I feel that they have clearly stated their aims at every turn without communicative success. AFAICT, they wanted some features they'd appreciated in the Plan9 world, especially clean concurrency and build models. Both of these are hugely important for the programmers at Google, and there was no industrial-scale language delivering these features. So they rolled their own language. Seems pretty clear to me.

Then there's the debate/flamewar over design decisions. First, it usually ignores the above context; a decent solution that carries the desired properties is better than no solution at all. Second, there is a vast engineering advantage to sticking to your comfort zone. Not necessarily a good idea for research vehicles, but when you're aiming to "get things done", deep experience with older, simpler technology can be awful handy.

I do think Go provides an interesting platform for the more hardcore PL crowd-- just not in the way it's usually presented. It's a well-engineered, industrial system built from a relatively clean and simple foundation. Some have asserted it would be even better with feature X (say, parametric polymorphism), but the authors have satisfied their design goals and don't plan on integrating the feature for fear of breaking the properties satisfying the goals. So, feel free to prove their fears misguided, or show how their design decisions precluded some satisfactory implementation.

Personally, I'm satisfied just hearing the thought process of such an experienced systems designer/implementer; I find it informative to my own PL pursuits. Whether this particular treatment of Go and its desiderata justifies frontpage LtU attention is, however, a pertinent but separate concern.

We should only critique

We should only critique designs like we critique art: I like this and I don't like that, made from the perspective of another person's tastes. C++ is an example of a language that has a controversial rather than poor design; some people find it ugly while others find it beautiful. A good design is simply a language designed by a good designer, and it might not suit your tastes but you should appreciate it for what it is, right?

I got into some arguments recently over Scala. That I was critical of Scala and compared it to C++ led someone to think I was a Scala hater, while nothing could be farther from the truth. People tend to get very emotional about their languages (or paradigm) of choice. A good discussion involves acknowledging our biases and being objective about our and others' opinions.

Critiquing language design

We should only critique designs like we critique art, I like this and I don't like that

I disagree. At the very least, we can verify a language design against its own goals, and validate its goals against the target audience or niche.

Beyond that, there are plenty of objective ways to critique a design... e.g. with respect to suitability for large systems and small systems, open systems and closed systems. And since we don't design in a vacuum, we should compare against incumbent and competitive technology.

A good design is simply a language designed by a good designer

I think you've got that backwards. A good language designer is someone who groks the justifications for and against existing language designs, analyzes the requirements for a new language, and designs a language that meets those requirements plus as many nice non-functional properties as feasible. A wise language designer will liberally leverage existing language designs where appropriate.

People tend to get very emotional about their languages (or paradigm) of choice.

This is very true, and it makes rational argument with some people very difficult.

But the motivations for this emotional investment are obvious. Languages are platforms that compete with one another. E.g. there is little point in using both Ruby and Python or JavaScript in the same project. People prefer you to use the languages with which they are most comfortable so that they won't later be forced to use yours.

There is similar investment in many other platform technologies. E.g. it's pointless to own a console gaming platform unless people are writing games for it, and people won't write games unless enough other people buy into the same platform. Purchasing one or the other becomes an investment, and human emotions easily become part of that investment. There are many reasons that a language might fail or succeed unrelated to its technical properties, so it may seem important to a language advocate that even valid complaints be buried. Beyond that, there are plenty of buzzword bandwagons, marketing, politics, prestige.

I hypothesize that most language fanboyism is by people who have never given a serious, fair try to the competing designs... never gave them the opportunity of 'taste', never actually sat down to think about relative features and flaws. Most probably lack the skills to make distinctions more meaningful than the color of the bike-shed.

Rather than 'art', from the human standpoint perhaps you should treat it more like critiquing a religion... i.e. walk on eggshells.

Muddled with implementation details

A lot of this is muddled with implementation details and overwhelmed by the fact that, on average, language implementors are of average language implementation skill and so any language design cannot be expected, on average, to "Beat the market".

What is a large system, anyway? An operating system? A telecommunications network? A military's hundreds of millions of lines of global command-and-control software?

What is a small system? An embedded device?

Since you're volunteering this methodology. Show me how you would do it for one language.

What is a large system,

What is a large system, anyway? An operating system? A telecommunications network? A military's hundreds of millions of lines of global command-and-control software?

Roughly, I consider a system 'large' once it's too complex or volatile (e.g. due to independent development) for your puny human head to grok all the relevant relationships. This is obviously relative to the observer, and on a sliding scale (some systems are obviously 'larger' than others).

The essential property is that in large systems you are forced to make meaningful distinctions between local reasoning and global reasoning, and you are forced to consider emergent behavior.

What is a small system? An embedded device?

A procedure or algorithm, a relationship, a single handshake, a simple process or workflow... almost all toy problems and programming examples fall into this 'small system' class by nature.

An embedded device? Depends on its relationship to the world, how well you can isolate its behavior requirements. E.g. an interface for turning an LED on and off might be implemented as a 'small system', but the whole set of rules for the desired state of the light might be a very large system. Which part are you implementing?

this is muddled with implementation details

Not really. Semantics are ultimately responsible for performance, scalability, modularity, and other properties. You don't need to know anything about the implementation of a mutex in order to know that mutexes, at a large scale where you cannot grok all the calling relationships, open you to risk of deadlock.
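
(As a concrete toy illustration of that mutex point: two code paths that each look fine locally, but acquire two locks in opposite order, can deadlock when run together, and nothing in the local code warns you.)

package main

import "sync"

var a, b sync.Mutex

// pathOne and pathTwo each look harmless in isolation; together they can
// deadlock because they take the same two locks in opposite order.
func pathOne(done chan<- bool) {
  a.Lock()
  b.Lock() // blocks if pathTwo already holds b and is waiting on a
  b.Unlock()
  a.Unlock()
  done <- true
}

func pathTwo(done chan<- bool) {
  b.Lock()
  a.Lock()
  a.Unlock()
  b.Unlock()
  done <- true
}

func main() {
  done := make(chan bool)
  go pathOne(done)
  go pathTwo(done)
  <-done
  <-done // may block forever; typically the runtime then aborts with a deadlock error
}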

You don't need to know anything about the implementation of lossless (actors model) message-passing to know buffering requirements under conditions of temporary disruption. You don't need to know the implementation of message passing to know that lossless messaging will leak memory if you cannot distinguish temporary disruption from permanent failure.

From language semantics - assuming the language has one at all - a great number of properties can be studied without ever considering the implementation. Implementations can certainly do worse than the theoretical ceilings, but they won't do any better without violating abstractions.

I disagree. At the very

I disagree. At the very least, we can verify a language design against its own goals, and validate its goals against the target audience or niche.

And how is that different from critiquing art?

And since we don't design in a vacuum, we should compare against incumbent and competitive technology.

Again, you aren't really disagreeing with me here :)

A good language designer is someone who groks the justifications for and against existing language designs, analyzes the requirements for a new language, and designs a language that meets those requirements plus as many nice non-functional properties as feasible.

My point is that people as experienced as Rob Pike or Martin Odersky have the capability to and are going to produce good languages (Go, Scala). The designs are going to be well thought out given the goals, our criticisms are mostly a matter of disagreeing with the goals.

Rather than 'art', from the human standpoint perhaps you should treat it more like critiquing a religion... i.e. walk on eggshells.

Nice quote. There are zealots where this is definitely true, but hopefully we (language enthusiasts and professionals) are agnostic.

how is that different from

how is that different from critiquing art?

Functionality and efficiency are not properties by which we typically judge 'art'. Certainly, art and sex and elegance are part of how we judge many things - including architecture, cars, and language design. But we also judge them by how effectively they serve an objective purpose.

people as experienced as Rob Pike or Martin Odersky are going to produce good languages [...] our criticisms are mostly a matter of disagreeing with the goals

I agree.

Of course, this is a competition. Unless you have a captive audience, the challenge is to create outstanding languages. This will be tough even for experienced language designers.

Art is designed to convey

Art is designed to convey something, but maybe architecture or cars are a better analogy.

Of course, this is a competition. Unless you have a captive audience, the challenge is to create outstanding languages. This will be tough even for experienced language designers.

Sadly, in a crowded market the best indicator of success is the amount of resources behind the language (close to a captive audience). Barring that, being radically innovative can help, but typically experienced and radical don't go together.

Code is not art

I disagree. We aren't writing programs to frame and put on our walls. We write programs to solve problems and build systems to help us solve problems. Ask Dijkstra: "tools have a profound--and devious!--influence on our thinking habits, and therefore, on our thinking abilities."

I didn't say code was art, I

I didn't say code was art, I didn't even say language design was art, I said that critiquing language design is like critiquing art. You think about the context of the art along with the artist's history and known style.

I will claim that language design is still a bit of art and a bit of science. I would hate to use a language whose design was all science.

Artist's history

Are you suggesting a language may be better or worse, depending on who designed it?

Obviously, the 3 year old

Obviously, the 3 year old who draws a picture is doing art, but probably not particularly notable art except to themselves and maybe their parents. There are definitely languages out there designed by inexperienced people; these languages are rarely notable, though sometimes you get something like PHP that really makes you wonder. Again, context is important; PHP is probably an appropriate design for its domain.

I've definitely programmed in languages before that could have been designed by 3 year olds. For example, I was programming with an early version of Lotus Notes Script that didn't support things like looping or nested variable declarations, it was an incredibly poor language design that was probably done as an afterthought or at the last minute, or by someone who just didn't have a clue.

I once designed a language

I once designed a language (for data binding in UIs) that didn't support looping or nested variable declarations, but that was because I didn't want to encourage any deep business logic being written in it; that code should have been written in C# in an accompanying assembly.

Of course, it did actually support looping via recursion (it had simple functions and an if expression), but I didn't tell any of the business developers that...

I once designed a language

I once designed a language for data binding in UIs that did support looping and nested variable declarations. The language was hosted in C#, and the whole point was to mix business logic with UI logic.

I think your choices were reasonable. You could have gone the other way, of course; which approach is better is in the eye of the beholder.

The burden

It may have something to do with the fact that Go is being sold as some great innovation and presented in a complete vacuum with respect to other language designs. They aren't just scratching their own itch without a care, but are marketing it widely. In that scenario, the burden of proof lies squarely on them to show how it solves a problem that wasn't solved previously, to situate the language against what has come before it, and to be honest about its relative advantages.

The problem is, it seems like they completely missed the whole point of object-oriented programming and parametric types and restarted from about 1978 (the introduction of CSP) with little more in mind than "C was OK, C++ was a nightmare" and "let's add CSP and drop some semi-colons". Look at Newsqueak. Now look at Go. Now look back at me. (oh wait).

To be honest I find the justifications for its development specious. Look at the code examples for Java and C++ given in some of these Go talks. Rob shows some horribly repetitive Java code that immediately screams "refactor" to me and then he rants "look how bureaucratic this language is" and "look at all this bad code!" This is wrong-headed in my opinion. You don't create a new language because of the bad code you can write, you create a new language because you cannot write good code in the existing languages. The fact that languages like C++ and Java succeeded so widely that people actually do write bad code in them is some kind of success, honestly. In five years are they going to ditch Go because someone wrote some bad code in it? Will that bad Go code appear in slides for the next language design talk? It's an absurd argument.

He also argues very strongly for fast compile times, given his experience with long builds on previous projects at Google. I can't go into too much detail because I also work here but I think this is 50% a build system issue, 40% Google's radical (I might say, nuts) approach to code reuse and internal versioning, and only 10% due to compile speed.

A long time ago, I got into

A long time ago, I got into an argument with Rob Pike about Java over a nice lunch, definitely a great experience. He is very much a systems guy and all his language ideology revolves around that. I'm really guessing here, but maybe as far as he might be concerned, the non-systems languages either don't exist or are inadequate. Yes, that is definitely a bias, but once you understand this context his arguments make more sense.

Maybe he isn't trying to convince you to use Go, especially if you aren't doing systems work. He is trying to convince the systems programmers who are stuck on C++ and can't or don't like the idea of moving to Java.