Rob Pike on Go at Google

Not a lot here on Go at Google really. Mostly a general overview of the language, whose major selling point seems to be that it was designed by famous people and is in use at Google.

I couldn't disagree more

The following slide sums up the issues that Go was meant to solve very nicely:

What makes large-scale development hard with C++ or Java (at least):

  • slow builds
  • uncontrolled dependencies
  • each programmer using a different subset of the language
  • poor program understanding (documentation, etc.)
  • duplication of effort
  • cost of updates
  • version skew
  • difficulty of automation (auto rewriters etc.): tooling
  • cross-language builds

Language features don't usually address these.

Old set of features

Most of those bullet points had already been addressed by languages like Turbo Pascal, Modula-2, Modula-3, Ada, Delphi, and Oberon for around 20 years.

That was my point.

That was my point.

Very vague

The first 2 items on that list are more or less clear, and are solved by language features. The rest is a combination of tooling, education and pipe dreams. They don't really need a new language.

Check out the next slides and tell me: what problem from this list does the lack of indentation-based syntax solve? How does a copy of itoa in the net package solve "duplication of effort"?

The way I think about Go is: if you take C and decide to improve it in a number of areas (memory safety, module system, simpler parsing) and make a minimal possible improvement in that area, you get Go.

If you improve C you get...

...Go... or Vala or Cyclone or Genie or Cilk...

Indentation-based syntax

what problem from this list does the lack of indentation based syntax solve?

His point in the talk was about code in one language embedded within code in another, JSP-style. If the embedded language is layout-sensitive and the host language is not, you're headed for trouble. You might file this under "difficulty of automation", I guess: IDE support is going to be tricky.

A copy of itoa in net

A copy of itoa in the net package means they don't have a bunch of programmers all implementing their own itoa in order to avoid importing (and parsing, and compiling) the very large package that itoa is part of.

This is an effort to "head off" the specific type of duplication of effort where you have a bunch of knowledgeable, smart guys all putting in the same bunch of little things in order to avoid creating compile-time resource-hungry dependencies on a few big things.

I've (re)defined utilities myself, specifically in order to keep build times as short as possible by not importing large libraries. If you multiply it by a thousand coders though, you get longer build times and bugs as dozens of versions of atoi (or whatever little utility) get compiled instead of the big package that people are defining their own atoi's (or whatever) in order to avoid importing.
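
To make the pattern concrete, here is a minimal sketch of the kind of tiny private helper being described - this is not the actual code from Go's net package, and the package name is invented:

    // A minimal sketch (invented package name) of a small private itoa kept
    // locally so the package does not need to import a larger conversion
    // package just for one function.
    package netutil

    // itoa converts a non-negative integer to its decimal string form.
    func itoa(n int) string {
        if n == 0 {
            return "0"
        }
        var buf [20]byte // enough digits for a 64-bit value
        i := len(buf)
        for n > 0 {
            i--
            buf[i] = byte('0' + n%10)
            n /= 10
        }
        return string(buf[i:])
    }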

Go is very much optimized based on the pragmatics of scale, on experience with programmers gaming the build system or dependencies in order to achieve scale, and on experience of time-wasting and unproductive arguments. They addressed several sources of confusion and flamewars by adopting explicit conventions and automatisms (gofmt, capitalize-to-export, qualified names to refer to imports, etc.) in order to create a 'clarity' that enables programmers to read each other's code and not be confused by unfamiliar conventions. The convention of using qualified names also makes it easy to tell which dependencies are *not* really required, which permits tree-trimming.
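
A small sketch of those two conventions; the package name, identifiers, and import path below are invented, not from any real library:

    // Capitalize-to-export: Area is visible to other packages, scale is not.
    package geometry

    // Area is exported because its name starts with a capital letter.
    func Area(w, h float64) float64 {
        return w * h
    }

    // scale is unexported (lowercase) and visible only inside this package.
    func scale(x, factor float64) float64 {
        return x * factor
    }

    // A client refers to the export with a qualified name, so the dependency
    // is visible at every call site:
    //
    //     import "example.com/geometry" // hypothetical import path
    //     a := geometry.Area(3, 4)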

Types as interfaces rather than inheritance is another method of reducing dependencies. You don't have to depend on a tree of other classes; you just have to depend on the interface definitions - and the interface definitions are simple, idempotent, and easy to deal with.
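
A minimal sketch of that point - the type below satisfies the interface simply by having the right method, with no declared relationship and no class hierarchy to depend on (all names invented):

    package main

    import "fmt"

    // Named is an interface definition: one method, no implementation.
    type Named interface {
        Name() string
    }

    // Dog never declares that it implements Named; having the method suffices.
    type Dog struct{}

    func (Dog) Name() string { return "Rex" }

    // greet depends only on the interface, not on any concrete type.
    func greet(n Named) {
        fmt.Println("hello,", n.Name())
    }

    func main() {
        greet(Dog{})
    }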

They've redefined and multiply-defined ubiquitous utilities such as atoi in order to optimize the build process.

They addressed recurring divisive-but-pointless social issues such as naming conventions and code style conventions, which can hinder productivity and cause confusion, with gofmt (which destroys arguments about code formatting) and the capitalize-to-export automatism, which destroys some arguments (and inconsistency, and confusion) about the names of exports.

I don't believe anyone needs to justify a lack of indentation-based syntax. Lack of indentation-based syntax is in fact the normal case. If anything indentation-based syntax is still the peculiar case that might be seen as requiring some justification.

In short, pretty much all of these decisions are driven by a desire to make the dependency situation more tractable and the compile times shorter. I completely get it, and given their requirements it makes perfect sense.

We've all done strange

We've all done strange things, sometimes even for a good reason. I've compiled C++ programs without linking to C++ runtime, by reimplementing parts of it myself (mostly forwarding to kernel32 functions, this is on win32), to shave off a dozen KBs of size. This made me pretty proud of myself (users are always proud when they manage to use a tool in a way its developers didn't intend or even tried to forbid), but I doubt this makes developers of Visual C++ proud. So I can believe they had to do something like that, but this isn't a good selling point for the platform.

In short, Go seems full of things that I can see myself doing (especially to solve hard problems without known elegant solutions and on a tight deadline), but I wouldn't be really proud of them.

Also, I didn't ask to justify the lack of indentation-based syntax. I asked how exactly it addresses problems from the list, as it's listed as one of the solutions.

Avoiding linking == gaming the build system

It seems to me that redundantly defining "itoa" to "optimize the build process" (at the cost of increased code maintenance!) is precisely a gaming of the build system.

Well of course it is.

Yes, it's definitely gaming the build system. The point is it's better to have it done once in library code than to have dozens or hundreds of people doing it independently. Putting it in the library code is limited redundancy that avoids wider redundancy.

Library code, once debugged, has a FAR lower maintenance burden than project code. Further, one additional copy in the library heading off hundreds of extra copies in project code would be a big win even if that weren't so.

Of course it would be 'cleaner' to have a finer-grained build system that avoided the effort of parsing and compiling things it doesn't need from an imported library in the first place, but a few extra copies of a definition are better (from a get-it-working-right-now perspective) than an open-ended research problem.

What this is, as far as PL research is concerned, is an ad-hoc solution which is valuable because it points out a problem that needs a real solution.

So how should we construct the finer-grained import system that avoids the build-time effort of reading, parsing, and compiling unneeded parts of a library? That's a real issue, there's a real need for a solution, and it's an issue most researchers have been paying no attention to.

Ray

The Entanglement Problem

The entanglement problem describes the difficulty of extricating a module from one project or library for use in another. I've been thinking about this problem for some time, e.g. regarding entanglement caused by namespaces and constraints on dependencies to ensure predictable extraction.

Generally, I feel the best solutions leverage semantic linking, e.g. driven by types, tests, patterns, and values (e.g. exporting metadata) rather than by meta-organizational concepts directed from outside the language. Linking becomes staged programming, and potentially much more robustly adaptive.

concreteness vs. abstraction

There is a reason why we have natural language. It's much easier to talk about a concept by name rather than through its intrinsic real value.

Natural language doesn't

Natural language doesn't make heavy use of names, AFAICT. It is better characterized by context, constraints, local referents, and metaphor. Talking about a concept by metaphor or description is a form of talking about a concept by its intrinsic values.

I agree that use of names can be convenient, but names will tightly couple expressions that would otherwise be usable in more contexts. I think that natural language isn't a great example of modular language. :)

sans nouns:Natural x

sans nouns:

Natural x doesn't make heavy x of x, AFAICT. It is better characterized by x, x, local x, and x. x about a x by x or x is a x of x about a x by its intrinsic x.
I agree that x of x can be convenient, but I think that natural x isn't a great x of modular x. :)

sans nouns and adjectives:

y x doesn't make y x of x, AFAICT. It is y characterized by x, x, y x, and x. x about a x by x or x is a x of x about a x by its y x.
I agree that x of x can be y, but I think that y x isn't a y x of y x. :)

Care to tell us more about how natural language doesn't make much use of names?

I haven't a clue how Ben L.

I haven't a clue how Ben L. Titzer confuses names with nouns and adjectives.

Every noun is a name for a concept

My point is that every noun is just a name for a concept, adjectives are names for properties of nouns, and verbs are names for actions on nouns. We don't include in every sentence the complete description of the meaning of every concept it references (in what terms would those descriptions be rendered, one wonders); instead, we refer to shared knowledge through identifiers of concepts. Those identifiers of concepts are "names" in my understanding.

The way I see it, natural language is much less context and metaphor than you assert; real communication requires naming shared concepts. If names didn't matter so much in language, then changing them wouldn't change the meaning of your sentence so much. Unless you meant something completely different by "name" in your analogy to natural language. If so, maybe with more rigorous definitions your argument would be clearer.

Certainly we must use

Certainly we must use abstraction, and it is convenient to "name" our abstractions and share them via names. But there seems to be a phase distinction between naming concepts (e.g. to extend the English language) and using the name concept (e.g. naming objects or events, identifying objects or events by name). When I say we do not heavily use names, I mean the latter.

I had a hard time

I had a hard time understanding the discussion, so here is my attempt at explaining what (I think) you say, in hope that it would help the future reader:

(I think that) dmbarbour says that, in the course of a conversation, we rarely define local names ("In my next sentences I will refer to the chair I'm talking to you about by the name 'current_chair'"), but rather use context and constraints ("the chair on the right", "this chair", "it").
We do sometimes define new names but it is a relatively rare event, often specialized to certain problem domains. And even when we define names they often keep their descriptive nature instead of being alpha-convertible proper nouns; this is apparent in programming language practice ('current_item', etc.), and often semi-implicit in the etymology of nouns: (Ent(scheidung))s(Problem) being an extreme example.

"this chair" is a name

"the chair on the right", "this chair", "it"

Functionally, these are names. Within a context, within a text/conversation, they are (local) proper names. In a programming language, you would use local variables, this_chair. When you want to globalize those "names", you *equate* them to proper names: this child is John Smith.

Free name party!

If I follow your reasoning, every expression that denotes something is a name. The "name" name doesn't mean anything much, then, and I'm not sure what you can discuss with this interpretation. Indeed, most parts of a program mean by denoting things.

Nice wriggling

Sean: "Its much easier to talk about a concept by name"
dmbarbour: "Natural language doesn't make heavy use of names"
Ben: "My point is that every noun is just a name for a concept"
dmbarbour: "When I say we do not heavily use names, I mean <proper names rather than names for concepts>"

Nice wriggling, Humpty Dumpty!

Meh

I appreciate both Sean's and David's contributions to the discussion and have no vested interest, but I must say that dmbarbour's remarks still have a point, even in the face of Sean's comment.

What you didn't report is that Sean's message was a reply to a message by David saying: "we would be better equipped against the entanglement problem if we could use contextual descriptions rather than names only". Sean pointed at the natural-language use of names but, once you think of it (see my explanation above), the tools of contextual description are massively more powerful in natural language (when processed by humans) than in programming languages, meaning that the comparison to natural language is actually *in favor* of David's initial point... exactly what he said.

In a programming language, if you want to speak of the same thing twice, you are practically forced (if you want to do things right) to give it a name. Declarations are basically the only way to reduce redundancy. This is massively untrue of natural languages, where in most circumstances you do not give new names to the things you speak about. Certainly, we have a large library of things that have been given names, but there is an even larger set of things that we talk about without giving them a fresh unique name; the ways to designate things are richer.

Perhaps use a random

Perhaps use a random generator to generate names automatically? Everything has a name (you didn't pick it!), you often just don't care about it. But I'm missing the nature of this debate.

I think David's argument is

I think David's argument is that contextual designation of objects (as a larger set of techniques than just naming) may be easier to adapt to change, and therefore help with some aspects of software maintenance -- both in helping to discover the new objects, and in helping to avoid being forced to use a new object with the same name that is dysfunctional for you.

For example if instead of saying "import library Toto.List" you say "import library List which provides feature-foo", you may both let your environment select a "better" version than Toto.List that fills your needs if it appears, and avoid having your environment silently switch to the new backward-incompatible List library of Toto that doesn't provide feature foo anymore.

I don't know whether I'm a strong supporter of this vision; I see myself as a neutral observer. But for once I think I understand what David means, it seems clearly related to the discussion (Ray's question about finer-grained import systems), and you asked for explanations about it... a satisfying combination.

I'm a firm believer in names

I'm a firm believer in names only insofar as you eventually have to pick one, but it's like a URL... who types any but the most commonly used URLs into a web browser anymore? Search as resolution from description to "esoteric name" is definitely the way to go.

The only thing we really disagree on is phase: I believe that search is a dev-time activity, I believe David wants it to be a compile-time activity (part of symbol resolution during compilation), I'm sure someone could even make the case for a run-time activity (e.g., service lookup). My reason for preferring dev-time is that you want to commit to a binding during development and not have it magically change when the code is recompiled in a new environment or under different probabilistic circumstances (if you have fuzzy binding resolution).

Reading through the comment history, there also seems to be some discussion about anaphora-style referencing in the language; e.g. what Lopes proposed in her Beyond Aspects paper. This is fairly orthogonal to what I'm talking about here.

All fuzziness at dev time

I agree with this. Constraints should be solved at dev time (even if the solution is a search algorithm).

Meh two

In context of live programming or maintenance, the notion of dev-time is itself fuzzy. But I do favor staged programming and securable programming models (ocaps, etc.). We can control where constraints are bound (spatially), when they are bound (temporally), and even control stability of various bindings.

When a language is blind to these issues - as most traditional languages are - it is actually much weaker for managing constraints, and less adaptable to different programming levels. Productivity suffers in many small ways that only seem clear in hindsight.

Dev time isn't fuzzy

There are multiple dev times, but I wouldn't call them fuzzy. My point is that the meaning of the keys you press can be fuzzily interpreted by the system as you code, but result in concrete semantics that can be inspected during the dev process. I count bag-of-constraints solve-me-if-you-can semantics as fuzzy. So staged is good and fine but I call foul if the output of an early stage is just a mess of constraints to be sorted out by a later stage.

Multiple development models

In spiral development models, or a traditional edit-compile-test cycle, perhaps there are multiple distinct dev-times. But in many other development models - live programming, continuous integration, maintenance of plugins on 24/7 services, PLs as UIs (tangible values, naked objects, direct manipulation, etc.) - there are no clear lines. I'm curious what your hidden assumptions are, and the extent to which you're aware of making them.

Anyhow, for any program we should have a clear understanding of what we're expressing, an ability to justify any design. A bag-of-constraints can be acceptable, but only if one is able to reason about it in certain ways: will it always have a solution? how much time and space will it take to find a solution? how will the system behave in the absence of a solution? can we reason about stability of the solution in face of instability of the constraints? can we control who introduces which constraints? can we protect the interests of certain agencies above others (priority, security)?

In a mess of constraints, it's the 'mess of' that's a problem, not the 'constraints'.

Agree with the second half

If you can guarantee them solvable then they aren't the solve-me-if-you-can variety constraints that I'm disparaging. My point is that the more fuzzy search-through-a-huge-space approach is reasonable at dev time along with other fuzzy techniques, since you can inspect the results to make sure they're what you wanted.

I don't see what you're getting at with the continuous programming bit. If you're thinking of a UI as a PL, then dev time is when you're interacting with the UI. The principle I'm (probably poorly) describing applies pretty robustly to interaction, I think. When you ask Siri to remind you of something, she doesn't just say "OK", leaving you to wonder what she understood you to mean. Rather, she presents you with a concrete reminder to confirm. Fuzzy interaction should just be an efficient means to arrive at concrete semantics, not a replacement of them.

Delay interaction to resolution time

Whenever resolution takes place, or (if you see resolution as a gradual thinning process rather than a one-time operation) whenever arbitrary choices need to be made, the resolver is free to decide (in case of excessive ambiguity) to ask the user to make the choice.

That's what happens for example when Firefox tells me: "there is a new version available, do you want to update it now or later?". It's a form of hardcoded runtime (in fact continuous) code resolution semantics, with late interaction.

That said, user validation is maybe not as necessary as you seem to think. David had a rather interesting post about this, Abandoning Commitment in HCI.

The important point of my

The important point of my example was the concrete-ness, not the modal confirmation, which I agree is best replaced with undo in most cases.

I also sort of see where you and David are coming from, I think, which seems to be that any use of a computer can be viewed as "programming". I don't really think of it as programming if you can assume that the programmer will be around to clarify what he meant as the program runs.

dev-time is poor excuse for bad abstractions

Human attention is a precious resource, not something to squander struggling with messy abstractions. Stability and responsiveness are significant concerns regardless of human supervision. Open extension is equally relevant for modular plugins and cooperative development. Software agents generally have the same concerns as the humans they serve. Developers benefit from consistent reasoning at all stages of development - prototyping, deployment, configuration, integration, maintenance, administration, etc.

If a constraint model is bad for unsupervised runtime, it will be bad for supervised development. If a constraint model is bad for supervised development, it will be bad for unsupervised runtime.

Programming languages that require a strongly distinguished `dev-time` are not addressing concerns at development time so much as failing to address development concerns at all stages after prototyping.

I'm happy to support interactive and cooperative development. I envision collaboration from both humans and software agents. But I think it's unwise to make any special distinctions for a `dev-time`. For most meta-programming, it's even important to avoid special distinctions.

If you're thinking of a UI as a PL, then dev time is when you're interacting with the UI.

Are you assuming a solitary, single-user system such as a calculator?

Fuzzy interaction should just be an efficient means to arrive at concrete semantics, not a replacement of them.

I agree with this sentence, which isn't specific to dev-time.

Fuzziness is deep in any programming system that interacts with the real world - sensor data has noise and aliasing; sensor analysis is subject to heuristic algorithms and estimated models; yet all this must lead to concrete actuation decisions. For HCI, we track gestures and voice and pen-scrawls and keyboard inputs with typographical errors. While commitment to discrete action can be delayed to obtain clarifications, it must eventually be made.

You're hung up on 'dev time'

You're hung up on 'dev time' and missing my point. Dev time can be e.g. during the debugging or administration of another component (that was developed at an earlier time). Let's call it interaction time rather than dev time. At interaction time, it is appropriate to use fuzzy methods to resolve human intent interactively, even though it's not a good idea to deploy those fuzzy semantics in an unsupervised situation (they are too complicated to reason about).

Not hung up on 'dev time'

I heard your point, I just disagree with it. Let me reiterate my own:

  • Fuzzy methods are often appropriate in unsupervised situations. Only when we can reason about them, of course. Fuzzy methods are quite appropriate for sensors and control systems, for example.
  • Fuzzy methods are often appropriate in supervised situations. Resolving "human intent" interactively is one class of such situations.
  • The properties we want for fuzzy methods in supervised situations happen to equal the set of properties we want for fuzzy methods in unsupervised situations. These properties include stability, responsiveness (real-time characteristics), modularity, collaboration and integration, a clear understanding of whether solutions exist or when we'll find out, and clear protection of some interests over others.

Whether you call it dev-time or interaction time, it isn't more or less an appropriate point for 'fuzzy methods' in general. Of course, the full phrase "fuzzy methods to resolve human intent interactively" does beg the question of whether 'interaction time' is most appropriate.

My point is that the fuzzy methods that are good for unsupervised operations are the same ones you'll want for resolving human intent interactively. And vice versa. If your fuzzy methods are bad for unsupervised situations (e.g. involving collaborating software agents or meta programming), then your fuzzy methods will also be bad for human users.

Dev-time or interaction time is a poor excuse for bad fuzzy abstractions.

Ok. Maybe we disagree.

Heavy weight methods like a unification or SMT solver are the kinds of tools I have in mind for "fuzzy" tools. Do I think they should be available through as general as possible an interface to be used on other problems? Sure. But they are quite special purpose and complex and should not be part of ordinary runtime semantics for a language, IMO.

Hm

I see nothing wrong with this point of view, in the sense that it is shared by a lot of people, but I'm not personally convinced that there is a point in keeping so strong a distinction between the "language", which should have a precise semantics, and the rest. If your language has a precise semantics for using the code on your filesystem, but your package manager uses SMT solving to update those packages on the filesystem, the overall behavior is not much different from having some (constrained) level of fuzzy resolution directly inside the language.

I'm more of the view that we should formalize more aspects of the programming experience. If people use fuzzy solvers and live update in their practice of programming, there may be something to be gained by giving those a proper status and working on their semantics. I believe that looking at things formally helps in designing them and making them robust, and it's not only the semantics of my lambda-abstractions and for loops that I want to be clean and robust.

On a related note, I cannot help remarking that the state of tooling for "academic languages" is usually quite poor. There is a question of money/workforce (writing good tools is a lot of hard work that research teams cannot usually afford), but I think there is also a question of lack of interest, in some part of the PLT community, in taking tooling seriously. Making it easier to produce good tooling for a new programming language is a hard research problem that merits more work¹. I think a necessary condition for this is a more formal treatment of programming aspects that you would naturally prefer "outside the semantics".

¹: not to say that none has been done; there has already been a substantial amount of research. I'm a bit lazy to fish for references right now (do not hesitate to add some), but ICFP'12 "Functional Programs that Explain their Work" by Acar, Cheney and Levy is one such example of a deep foray into tooling-oriented questions. I even suspect that one of the reasons why researchers may be wary of working on these questions is the perception that a lot of effort has already been invested, with no clear result.

I think the lack of focus on

I think the lack of focus on tooling is simply because no one really believes we yet have a really good foundation around which to build this tooling. More bang for the buck to focus on core semantics right now, since that will shape what kind of tooling you'll need.

I don't believe you

I feel surrounded by researchers investing a lot of effort in tooling for C (Frama-C), Java (Jif) or Javascript (lambdaJS). Apparently they couldn't convince the programmers, or the funding bodies, of your "bang for the buck" estimation.

I think you can do tooling research with simply-typed lambda-calculus (and some certainly has been done in Scheme). There are also fun things being done on the Emacs mode for Agda, McBride-ish tooling of sorts, that relies on rich dependent types, but even there I don't buy the argument that the semantics is not there yet (you can already work with the Calculus of Constructions, or Martin-Löf Type Theory).

All in all, "the languages are not clean enough" looks to me like a rather weak excuse. Can you substantiate the idea?

To be fair, both industry

To be fair, both industry and academia suck at tooling; it's just that industry sucks a bit less.

I argue in my maze paper that tooling is just another point under language design to consider with syntax and semantics, probably making tradeoffs between each point to create a better holistic experience; aka you are not just designing a programming language, you are designing a programming experience.

True excellence in tooling requires language support.

You can't make really excellent tooling without support from the language and runtime. Debuggers need symbol tables, place tags in the code, ways to single-step, ways to evaluate expressions in the code's current environment, ways to set and clear breakpoints, etc.

It may be possible to standardize the form that support takes, so that a standard IDE or suite of excellent tooling is applicable to new languages if they provide standard types of support in some standard form.

As an example of paradoxically good/bad tooling, Emacs has become a primary obstacle to the adoption of Lisp. It is, without a doubt, a good tool for working with Lisp code, once you know how to use it. But its nonstandard interface alienates new users even while its excellence in the hands of old-guard users inhibits the development and adoption of excellent tools that wouldn't alienate newbies.

I personally like Emacs for editing Lisp code, but it isn't the slick IDE experience that people expect from a "good" programming language. You have to invest effort in *learning* to use Emacs, then invest effort in *configuring* Emacs to run an inferior lisp process, then invest effort in learning how to use the inferior lisp process, etc. People don't want that. They want a very standard IDE interface of the sort they're used to, which means the sort that came along AFTER Emacs' own interface was invented.

If you are trying to sell people on a new language, you need to provide a proper IDE where they have nice standard mouseable menus, I/O panels separate from code panels, autocompletion, autofind, function/object renaming support, argument lists that pop up when you type a function name, tooltips, buttons, verbose help messages, integrated debugger, a tutorial that covers both the language and the IDE, printable and HTML documentation, a huge and well-documented function library, and all the rest of it. Without that, they'll decide that a new language "sucks" before they ever get as far as using the language itself.

This is not wrong. This is not right. This is not justified nor does it need to be. This is just a factual report about what it takes and what you need to provide if actual use of your new language by other people is one of your goals.

Ray

I'm not sure why you

I'm not sure why you consider your examples to be work on tooling. Jif and lambdaJS are languages in their own right that received plenty of work on rigorous semantics that was lacking in the underlying language.

Have you considered that

Have you considered that the issues you named for those heavyweight fuzzy methods seem to describe the "heavyweight" rather than the "fuzzy"?

The concept of a program is

The concept of a program is changing. Perhaps it's not so much "runtime semantics"; when you write "ordinary", perhaps you mean "production-time semantics". At some point, we need to hand off software to users, and it had better not have a dynamic error of "oops, could not synthesize code for this case." Until that hand-off, however, I love programming with runtime semantics that are empowered by oracles.

Interestingly, a la Martin Rinard and arguably Ben Liblit, I'm not even convinced of this hand-off point anymore. Program in an infinite loop? Jump out and go forward. I bet we'll see papers soon about using symbolic execution to infer live repairs. Likewise, what if the end-user wants to instruct (augment) the program? The code creator and user are now the same person. Both cases seem like holy grails -- we don't have them because, to a large extent, we haven't figured out how to implement them.

There are lots of things we

There are lots of things we can do during development that are not reasonable for deployment, like inferring missing functionality or risky default bindings. We've talked about this before.

I can definitely see software becoming "fuzzier" in general, especially if fuzzy semantics based on machine learning or simulation serve as the underlying programming model. But for our current precise and brittle programming models, I think we need to define a very rigid boundary between development and deployment, which necessarily limits adaptability and end-user extensibility.

Rinard's work seems similar to Vernor Vinge's dystopian future of software archaeology, and seems appropriate only if we can't evolve out of our brittle programming models.

Not as much need for names as you assume

Designs that eschew use of names or URLs are not uncommon - associative memory, tuple spaces, some publish/subscribe models. It is feasible to develop a world wide web without names, where links instead describe content-based searches. Including unique values in content (e.g. UUID or URL-like values) would still be useful for searches, but isn't the same as a naming system. Relevantly, names are part of an associative/relational system generally orthogonal to content.

Secure hashes are a simple way to uniquely identify values in practice, and are leveraged in Tahoe-LAFS. But those are mostly a performance boost, an ability to talk about a value-object without repeating it. Context, constraints, and anaphora are much richer options than secure hashing, and often more expensive (more data, more computation), but can (in practice) still uniquely identify content (especially when the developer has some control over context).
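
As a tiny illustration of the secure-hash idea in Go (just a sketch of content-derived identity using the standard crypto/sha256 package, not how Tahoe-LAFS itself works):

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        content := []byte("some value we want to refer to without repeating it")
        id := sha256.Sum256(content) // the 32-byte digest serves as a content-derived identifier
        fmt.Printf("content id: %x\n", id)
    }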

We don't need names. Names aren't essential for calculation or communication. But names can be convenient, and I've not suggested we should abandon them entirely. In articles such as interface and implement and local state is poison, I suggest a structured use of names that allows us to take advantage of them while avoiding the problems that result from more pervasive use.

Re: Phase and Commitment (Early/Dev-Time Binding)

I do believe we would benefit from much search in later stages. I describe some of these concepts, e.g. with stone soup programming and stateless stable models. I like various ideas surrounding multi-agent systems, blackboard systems, ambient-oriented programming.

However, I also believe in iron-fisted control over resources and relationships. The object capability model and secure interaction design have deep impacts on my visions and designs. Developers would consequently have control over the context of any search. Between controlling the search and controlling the context, developer control over search results can be as precise as desired. Further, I'm fond of live programming. In that context, the distinction between dev-time and compile-time is very weak. A developer has much opportunity to refine parameters and context of a "compile-time" search in order to achieve desired results.

While precise, tight control over bindings is certainly desirable at some times and in some contexts, it is certainly not desirable at all times and in all contexts. "A programming language is low level when its programs require attention to the irrelevant." - Alan Perlis. Early commitment when binding dependencies is often attention to the irrelevant. Between stone soup programming and object capability models, I envision module systems that can be adapted to an appropriate programming level on a case-by-case basis.

But for once I think I

But for once I think I understand what David means

Ouch.

double post

sorry

Forth

In a programming language, if you want to speak of the same thing twice, you are practically forced (if you want to do things right) to give it a name

So we should all use stack based languages after all? ;)

Or Subtext, where code can be a DAG rather than a tree.

Ah, yes, "The Classic Turnaround Loop"

A copy of itoa in net

If duplicating "itoa" was necessary, something went wrong somewhere.

I suspect that the Google Go implementation pulls in an entire package even if you use only small parts of it. "Hello World" is a 1.2MB executable on Windows. This seems excessive.

Depends on what your alternatives are, I guess.

I too was underwhelmed by the feature set when it was first announced. But in the last few months I've had the chance to use it for nontrivial projects and it's really growing on me; now whenever I have to schlep back into a C++ or Java codebase (the two major languages on my team) I find myself thinking, "Ugh, can't we rewrite this in Go?" -- my Go code usually ends up being substantially terser (but IMHO no less readable) than C++ or (especially!) Java.

That said... if you have no use for the curly-braces-family of languages, well, I guess you won't have any use for Go, either, as you could argue that it's just more of the same. But it's a really good version of more of the same.

(Disclaimer: I am a Googler, but don't work on anything Go-related... just a satisfied-so-far user.)

Is there some use case I am

Is there some use case I am missing that requires curly braces? Silly me, I thought that sort of thing doesn't affect runtime.

Curlies are tolerable

...is, I think, what grandparent is saying.

The syntactic aspect of this

The syntactic aspect of this (curlies) is a red herring, is what I'm saying.

Syntax has a strong bearing

Syntax has a strong bearing on programmer acceptability. People who post here probably are comfortable with a broad range of syntax alternatives. That's not true of most programmers.

Go occupies an unusual place in delimiter syntax. Line breaks matter sometimes, but not always. That's hard to get right. Python succeeds, and UNIX "shell script" languages fail badly. I'm not sure if Go's approach is good or not; I haven't written enough code yet. It's certainly tolerable. Go has a convention that programs are run through an indenter/formatter before source control check-in, so at least you get consistent layout.

Go does adopt Pascal-like declaration order, which makes it much easier to parse Go code. Go appears to be LALR(1), although I'm not sure about the "if the keyword on the next line could start a statement, the line break is a statement separator" rule. C and C++ have context-dependent syntax; parsing requires knowing whether a name is a type. That was a design error in C. Early C (pre K&R) did not have "typedef", and the syntax was LALR(1). When "typedef" was added, the syntax became context-dependent, which meant that it was necessary to process the include files just to reliably parse the source. This inhibited the development of tools that worked on and modified C source code. Go already has several such tools, notably "fix". That's useful.
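
A small sketch of the declaration order being described - the name comes first and the type follows, so reading a declaration never requires knowing whether an identifier names a type, unlike C's typedef ambiguity; the identifiers here are arbitrary:

    package main

    import "fmt"

    // Pascal-like order: name first, then type.
    var count int
    var p *int

    // Parameters and results are also written name-then-type.
    func divide(a, b int) (int, error) {
        if b == 0 {
            return 0, fmt.Errorf("division by zero")
        }
        return a / b, nil
    }

    func main() {
        count = 7
        p = &count
        q, err := divide(*p, 2)
        fmt.Println(q, err)
    }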

Well, it seems like Pike

Well, it seems like Pike summarized it all. He says "Go's purpose is to make its designers' programming lives better". And I believe Go is living up to that exact purpose.

The only thing that bugs me, though, is that I've heard the same thing about a different language. I don't remember where or when, but it certainly was about PHP.

I'd feel better if it was

I'd feel better if it was specifically their programming lives that it was meant to improve.

That'd be fixed if...

That'd be fixed were it self-hosted. Most programming languages are, after all, optimised for writing their own compiler in.

I don't think you understood

I don't think you understood my sarcasm...

I did!

I thought my response was appropriate to the tenor.

In the words of pianist Glenn Gould

The ideal ratio of artist to audience is one to zero.

Google has lots of human-resource challenges on top of computer-resource challenges. Just read Michael Burrows's rants about his Google co-workers:

  • “Despite attempts at education, our developers regularly write loops that retry indefinitely when a file is not present, or poll a file by opening it and closing it repeatedly when one might expect they would open the file just once.”
  • “We find that our developers rarely think about failure probabilities, and are inclined to treat a service like Chubby as though it were always available.”
  • “Developers also fail to appreciate the difference between a service being up, and that service being available to their applications.”
  • “Developers are often unable to predict how their services will be used in the future, and how use will grow”
  • “A module written by one team may be reused a year later by another team with disastrous results”
  • “Our developers are confused by non-intuitive caching semantics.”
  • “Our developers sometimes do not plan for high availability in the way one would wish.”

I am also not sure which of these problems Burrows cited in 2007 are now being addressed thanks to Go, but I would love to hear from Rob Pike, Russ Cox, and Ian Taylor about that.

+1

+1

Luckily for the rest of us

Luckily for the rest of us, the intentions & personalities of language designers aren't of much consequence. What do I care what problems a tool is 'supposed' (by a complete stranger) to solve if I can use it to solve the problems I actually have?

Why more of the same? Because less of something different isn't enough of something practical.

They had to do something

Go is a reasonable solution for Google's internal problems. It's a language designed for server-side programs. C++ is too hard to make reliable. The interpretive languages are too slow. Google's effort to make a fast Python system (Unladen Swallow) was a flop. Java is the usual alternative, but it doesn't do concurrency very well.

Facebook's approach, incidentally, is to use mostly PHP internally. To fix the performance problem, Facebook developed HipHop, a compiler that translates PHP to C++.

dear god

What are the other real, viable alternatives? There are none? :-(

There is Ada of course!

There is Ada of course!

Go is best seen by how much is done with little.

Go does a lot with a little and for that reason is worth looking at.

Go does not do asynchronous I/O. Instead Go uses cheap threads implemented with split-stacks.

Go does not use locks; Go uses channels.

Go does not have implementation inheritance; instead Go just has composition.

Go has a switch statement almost as nice as pattern matching in the ML derived languages.

Go uses duck typing yet is statically type checked at build time.

Go does not have fancy polymorphism features; Go just uses interfaces.
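
A short sketch of two of those points - the type switch (the "almost pattern matching" switch) and statically checked interface values; the types are invented for the example:

    package main

    import "fmt"

    type Shape interface {
        Area() float64
    }

    type Circle struct{ R float64 }
    type Square struct{ Side float64 }

    func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }
    func (s Square) Area() float64 { return s.Side * s.Side }

    // describe uses a type switch, which reads a little like pattern matching.
    func describe(s Shape) string {
        switch v := s.(type) {
        case Circle:
            return fmt.Sprintf("circle of radius %v", v.R)
        case Square:
            return fmt.Sprintf("square of side %v", v.Side)
        default:
            return "unknown shape"
        }
    }

    func main() {
        // The slice is statically typed as []Shape; both element types satisfy
        // the interface without any declared relationship.
        shapes := []Shape{Circle{R: 1}, Square{Side: 2}}
        for _, s := range shapes {
            fmt.Println(describe(s), "area:", s.Area())
        }
    }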

The aim of Go seems to have been to make static typing and compilation palatable to users of dynamic languages, and I think it has succeeded in that.

Go brings a lot of nice things from the dynamic languages into a compiled statically typed language.

That said, so far Go is not a general-purpose systems programming language, and it is not usable for the kinds of programs I tend to write, but I do think Go raises the bar nicely for new imperative compiled languages.

Interfaces for polymorphism?

Go does not have fancy polymorphism features go just uses interfaces.

When used for "polymorphism" rather than compartmentalization (e.g. when you use a downcast to get an object out of a generic container), Go interfaces are essentially a refinement of (pre-generics) Java's Object type, which is itself an OO equivalent of C's void* type.

I do not understand how anyone could advertise this as a satisfying solution to replace "fancy polymorphism features". On this specific aspect (I do not discuss the concurrency approach), Go rather demonstrates how little programmers actually expect from their language.
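
For readers unfamiliar with the pattern being criticized, here is a sketch of the downcast in question (pre-generics style; the values are invented):

    package main

    import "fmt"

    func main() {
        // A "generic" container: a slice of empty interfaces.
        items := []interface{}{42, "hello"}

        // Recovering a concrete value requires a type assertion (a downcast).
        // The compiler cannot check it; a wrong guess only shows up at run time.
        n, ok := items[0].(int)
        if !ok {
            fmt.Println("items[0] was not an int")
            return
        }
        fmt.Println(n + 1)
    }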

Acceptability


The aim of Go seems to have been to make static typing and compilation palatable to users of dynamic languages, and I think it has succeeded in that.

That's a good insight. Go does succeed at that. Functions have to be declared in detail, but variable declaration is usually implicit. This gives Go programs a comfortably similar look to programs in dynamically typed languages.
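
A small sketch of that implicit-declaration style - the := form infers each variable's static type from its initializer, so the code reads much like a dynamic language while remaining statically checked (the values are arbitrary):

    package main

    import "fmt"

    func main() {
        // No type annotations: the types are inferred at compile time.
        name := "gopher"              // string
        count := 3                    // int
        ratio := float64(count) / 2.0 // float64

        fmt.Println(name, count, ratio)
    }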

C++ now has that, with "auto", but it's not yet used much.

C++11 auto

Quick aside since the topic is near to my mind lately:

I'm using "auto" in production code, and it is a great convenience, but definitely still requires considerable knowledge or attention to the feedback that an IDE provides in order to know what it will resolve to safely. That said, this seems to usually only need to happen when the code is first declared, as underlying code refactorings seem to interact with it reasonably. Time will tell whether the code acquires bitrot simply because of "auto" per se.

Go's concurrency

Hoare-type bounded buffer concurrency management is fine, but the way Go documentation suggests doing it is sometimes strange. The "Effective Go" document exhorts "Do not communicate by sharing memory; instead, share memory by communicating." Then their first example is
go list.Sort() // run list.Sort concurrently; don't wait for it.
which violates that rule, since "list" is shared between two threads. Almost all the examples in "Effective Go" are like that. They don't send data over a channel and get results back over a channel; they send a reference to data over a channel and get a signal (in the form of a minimal send on a channel) back when it's done. The spec is vague as to whether the run-time system is memory safe should there be a race condition with shared data.

There's a notion of unidirectional channels in the spec: "The <- operator specifies the channel direction, send or receive. If no direction is given, the channel is bi-directional." ...
<-chan int // can only be used to receive ints
But channels don't have two ends. The same "chan" variable is used for sending and receiving. If you create a receive-only channel, and receive from it, do you block forever, waiting for a send that can never happen? The documentation doesn't seem to say.
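
For contrast, here is a minimal sketch of the data-over-channels style (rather than sharing "list" between threads), which also uses the directional channel types quoted above as parameter types; all the names here are made up:

    package main

    import (
        "fmt"
        "sort"
    )

    // sorter may only receive on in and only send on out; both parameters are
    // directional views of ordinary bidirectional channels created in main.
    func sorter(in <-chan []int, out chan<- []int) {
        for xs := range in {
            sorted := append([]int(nil), xs...) // work on a copy; no shared memory
            sort.Ints(sorted)
            out <- sorted
        }
        close(out)
    }

    func main() {
        in := make(chan []int)
        out := make(chan []int)
        go sorter(in, out)

        in <- []int{3, 1, 2} // the data itself travels over the channel
        close(in)

        for xs := range out { // and the result comes back the same way
            fmt.Println(xs)
        }
    }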

This is really a documentation problem. Go has a skeletal specification, two main introductory documents, a wiki, and a newsgroup. Nowhere is there a spec good enough for standardization or a second implementation.

Go's concurrency uses blind forks - the "go" keyword starts a thread (they call them "goroutines"), the thread runs, and when it finishes, it exits silently. There's no way to check on the status of something forked off that way. It doesn't even have a name or ID. The user has to manually construct status reporting from the thread, so that it reports back if anything goes wrong. The "defer" statement can help with this, to close a channel when a thread exits, but that's not automatic. There's a lot that has to be done carefully to do it right, and the examples in the documentation don't bother.
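
A sketch of the kind of manual status plumbing being described - the goroutine reports its outcome on a channel, and a defer guarantees the parent always hears back; doWork is a stand-in for real work:

    package main

    import (
        "errors"
        "fmt"
    )

    // doWork stands in for some fallible task.
    func doWork() error {
        return errors.New("something went wrong")
    }

    func main() {
        done := make(chan error, 1)

        go func() {
            var err error
            // The deferred send runs even on early returns, so the parent
            // always gets a status report.
            defer func() { done <- err }()
            err = doWork()
        }()

        // Without plumbing like this there is no built-in way to ask whether
        // the goroutine is still running, finished, or failed.
        if err := <-done; err != nil {
            fmt.Println("worker failed:", err)
        }
    }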

A typical failure mode of Go programs is thus likely to be hangs on waits for failed threads.

On not having exceptions

Dereferencing nil in Go causes the entire program to crash.

The HTML5 parser ("h5") provided with Go will dereference nil on some real-world HTML documents. (Bug report filed for Go library).

Uh oh.

This is why a strong exception system is useful. In Python, LISP, Ada, etc. you can catch almost everything as an exception, so when some complex library package fails, recovery is possible. The Go developers claim that everything should have a local error check because errors are a normal part of program behavior. But without exceptions, higher level programs have no way to protect themselves from low level defects.
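
For reference, a sketch of the local-error-check style the Go developers advocate (the file name is arbitrary); whether it is an adequate substitute for exceptions is exactly what is in dispute here:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Each fallible call returns an error value that is checked on the spot.
        f, err := os.Open("config.txt")
        if err != nil {
            fmt.Fprintln(os.Stderr, "open failed:", err)
            return
        }
        defer f.Close()

        buf := make([]byte, 64)
        n, err := f.Read(buf)
        if err != nil {
            fmt.Fprintln(os.Stderr, "read failed:", err)
            return
        }
        fmt.Printf("read %d bytes\n", n)
    }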

There's also a problem with local error checking: it's very difficult to test the recovery code for unlikely errors.