## Two Bits: The Cultural Significance of Free Software

Christopher Kelty's book, Two Bits: The Cultural Significance of Free Software, can be read online, and I think parts of it will interest many here.

It seems that programming languages, while mentioned, do not receive a lot of attention in this work. I would argue that they are a significant factor in the history that is being told, and an important resource for historians (though reading the history from the languages is not a trivial undertaking by any means).

Still, seems like a very good discussion and well worth pursuing.

Edited to add: As Z-Bo mentions in the comments, the website of the book invites people to re-mix it (or "modulate" it). Motivated readers can thus add the relevant PL perspective, if they so wish.

## Comment viewing options

### A taste

Here is a short quote that might encourage people to take a look at the book:

> It might be surprising that geeks turn to the past (and especially to religious allegory) in order to make sense of the present, but the reason is quite simple: there are no "ready-to-narrate" stories that make sense of the practices of geeks today. Precisely because geeks are "figuring out" things that are not clear or obvious, they are of necessity bereft of effective ways of talking about it. The Protestant Reformation makes for good allegory because it separates power from control; it draws on stories of catechism and ritual, alphabets, pamphlets and liturgies, indulgences and self-help in order to give geeks a way to make sense of the distinction between power and control, and how it relates to the technical and political economy they occupy. The contemporary relationship among states, corporations, small businesses, and geeks is not captured by familiar oppositions like commercial/noncommercial, for/against private property, or capitalist/socialist - it is a relationship of reform and conversion, not revolution or overthrow.

### Holy wars

The section from which the quote above was taken, which appears early in the book, highlights the role in geekdom of stories about the Protestant Reformation, and hence of religious or holy wars. As we all know, these two terms have specific meanings and undertones in the programming language world, and I wonder whether Kelty's discussion sheds light on them - whether old-time geeks, or anyone else for that matter, see a connection between Kelty's history (and definitions) and the use of these terms in the PL world.

### luther, an unfortunate comparison

Very interesting link, thanks Ehud (for destroying my morning productivity ;).

I can't think of Luther without thinking of Against the Murderous, Thieving Hordes of Peasants. It's an unfortunate metaphor, given the strong and conscious anarchist influence in free software.

That said, religious stories are powerful.

On the PL side, my only contribution to the discussion is an amusing purgatory in Guile into which CLOS-like objects are sent to die and be reborn. From goops.c:

```c
/* When instances change class, they finally get a new body, but
 * before that, they go through purgatory in hell.  Odd as it may
 * seem, this data structure saves us from eternal suffering in
 * infinite recursions.
 */
```

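The idea that comment hints at - updating live instances after their class is redefined, while a "purgatory" table prevents the update from re-triggering itself forever - can be sketched in Python. This is a hypothetical illustration, not Guile's actual mechanism; the `Redefinable` class, `change_class` function, and slot names are all invented for the example:

```python
# A toy CLOS-flavored sketch: when a class is "redefined", live instances
# get a new body (new slots). A purgatory set guards against infinite
# recursion if a migration somehow re-enters itself.

class Redefinable:
    _instances = []      # live instances to migrate on redefinition
    _purgatory = set()   # ids of instances currently mid-migration

    def __init__(self, **fields):
        self.__dict__.update(fields)
        Redefinable._instances.append(self)

def change_class(obj, new_defaults):
    """Give obj its 'new body': add any slots the new definition
    introduces, preserving the values of slots it already has."""
    key = id(obj)
    if key in Redefinable._purgatory:
        return  # already in purgatory: break the recursion
    Redefinable._purgatory.add(key)
    try:
        for slot, default in new_defaults.items():
            if not hasattr(obj, slot):
                setattr(obj, slot, default)
    finally:
        Redefinable._purgatory.discard(key)  # reborn; leave purgatory

p = Redefinable(x=1)
change_class(p, {"x": 0, "y": 2})  # adds the new slot y, keeps x as-is
```

The purgatory set plays the same role the comment describes: it is transient bookkeeping that exists only while an instance is between its old and new bodies.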
### more disconnected thoughts

Kelty names a number of "wars", but not many in PLs. The closest fit I can think of for that particular metaphor is The Right Thing (vs. worse-is-better) and its manifestations, especially regarding Lisp (the Tcl wars, for example).

But I don't know of any practitioner that defines themselves in this oppositional way. If anything, making a language is something more messianic (into the desert, my peoples!) than crusading. It's a nice point for discussion, but I don't see a way to draw a coherent PL argument out of it.

### Modulate

It appears that this book can be re-mixed.

Ehud: Perhaps edit the OP to include as a suggestion that people modulate this book to include more PLT history.

### Good point. Done.

Good point. Done.

### but it's wrong!

(I will get to some PL content in this but, first...)

I've read chapter 3 and skimmed earlier chapters. Chapter 3 most directly covers a period of history and an important debate ("free software" vs. "open source") - topics near to my heart and experience.

The book's fatal flaw is in locating the beginning of "Free Software" in 1998, even while it acknowledges a much longer history during which free software had a "subterranean existence". That is, it situates the origin at the release of Netscape Navigator and the concurrent announcement of the new term, "open source". With that coordinate system for history, the author then analyzes the ideological differences between free software and open source as simply two competing narratives, each offering an alternative description of a single common set of software practices. The two narratives, in their conflict, call attention to one another and create the appearance of a dispute; yet, at the same time, they explain and help to regenerate a single set of practices - a particular way in which people on both sides work on software.

The analysis is flawed because it overlooks history. It overlooks the history and accomplishments of the free software movement prior to 1998.

It was in the 1980s that RMS first articulated the concept of software freedom and announced the GNU project. The first three major pieces of software released were GNU Emacs, GCC, and GDB. Each of these was startling, at the time, for its quality compared to proprietary alternatives. GNU Emacs was a better text editor than the proprietary Emacs products. GCC benchmarked surprisingly well compared to very expensive compilers and was unusually easy to port to new chip architectures. GDB was easily recognizable as the best available debugger on many systems.

A true movement, with both a shared ideological basis and a set of common practices, quickly formed. Programmers from around the world spent roughly the next decade systematically writing free software replacements for nearly every program found in a typical distribution of Unix. Only the standard C library and the operating system kernel - two subprojects the Free Software Foundation tried (and mostly failed) to manage "in house" - languished. As the C library finally reached a state of minimal usability and stability, Linus Torvalds finished his famous kernel, and the first GNU/Linux systems began to appear.

Those developments, over the years, provoked a genuine power struggle between RMS and certain capitalists. GNU Emacs led to a legal fight and personal dispute between RMS and James Gosling (Gosling had an interest in a proprietary software firm selling a non-free version of Emacs). GCC and GDB disrupted what had been, at the time, a booming industry of start-ups whose business was porting compilers and debuggers to new embedded-systems chips. It is out of those disruptions that Cygnus was formed - to attempt to recapture that market, even in the presence of GCC and GDB, by leveraging them and hiring most of the key developers. The first complete GNU/Linux operating systems were, very early on, seen as a serious threat to Sun Microsystems, for it was apparent that within a few years GNU/Linux on PC hardware would compete with Sun on speed and features, yet with no software licensing fees and much less expensive hardware. It was from that observation that investors backed efforts like VA Research (the famous LNUX IPO) and Red Hat.

The book briefly mentions EGCS (a hostile fork of GCC) but does not investigate the reasons for that fork. Cygnus had been promising partners and customers the benefit of integration with mainline GCC. That is, a partner or customer would make or pay for a new port or a new feature, and the understanding was that Cygnus could help make sure that code was merged into mainline releases of GCC, where part of the future cost of maintenance would thus fall on the larger "community". It was a promise without substance, at the time. The FSF and its volunteer GCC project leader had their own agendas and were not always eager to accept every patch or quick to process pending patches. For the FSF, the priority for GCC was very clear: building a complete GNU/Linux system comes first, everything else comes second. The EGCS fork, nominally originating outside of Cygnus but with strong ties to it, was an (ultimately successful) attempt to shove RMS and the FSF aside vis-à-vis the management of the GCC project. VA Research and Red Hat were examples of (successful, at least for now) attempts to leapfrog the GNU project at the one remaining large task - systems integration - removing any sense of urgency for volunteer activists to solve those problems while simultaneously making the integration process proprietary and profit-making.

In other words, by 1998 the set of free software development practices generated by the ideological narrative of the free software movement had already had a huge impact on capital, causing significant losses for some and a huge reallocation of capital as investors sought out models by which they could make money in spite of the free software movement. It was in *that* context that Tim O'Reilly held his "freeware summit" and that a small, self-appointed group of "leaders" (including the CEOs of Cygnus and VA Research) sat around in a room and concocted "Open Source".

We should recognize, therefore, that the "Open Source" narrative was invented for a specific economic and political purpose. A handful of firms and a lot of money wanted to *use* publicly licensed source code and the labor of volunteers, but wanted to *thwart* the goal of giving all software users personal freedom. For that purpose, it would not do to have a charismatic figurehead (love RMS or hate him) defining the branding term "free software". Just as RMS could embarrass Gosling or expose the lie in Cygnus' promises, RMS and the FSF - left unmolested - could have organized volunteers to eliminate any need for companies like VA Research and Red Hat. Rather than let this happen, "Open Source" was invented (and the OSI chartered) as a kind of "hostile rebranding".

The book makes the claim that the two narratives - roughly RMS' vs. ESR's - are superficially in conflict and yet both explain and generate a single set of "practices". This is clearly not so and the history given above helps to explain how and why:

Both narratives do have in common the notion of programmers freely sharing source code and cooperating, in simple ways, across institutional boundaries on single works. Where they diverge, however, is in the larger strategic form those practices take. Free software practices are aimed at liberating users and thus (to this day) include priority goals such as making 100% free, fully integrated GNU/Linux distributions. Open Source practices shun such efforts, because if the integration process is handled in a proprietary way by some firms then, technically, the job is done - and open source has no ideological objection to using proprietary software development to gain power over users. Similarly, the free software movement is these days making a priority of fighting for software freedom on the web - an environment where users are increasingly at the mercy of the developers of proprietary, monopolistic web service applications. In contrast, "open source" encourages programmers to develop software for such applications.

In other words, the two narratives may agree on "how a programmer works, day to day" - but they differ sharply about "what a programmer works on, year to year". The narratives compete not to give conflicting accounts of a single set of practices, but rather to influence practices in one direction or another. Moreover, the history strongly suggests that "open source" was a term and a social formation consciously constructed *precisely* to launch a fight for influence over the practices of contributing programmers.

-------------------------------------------------------------

All of that said, I promised some programming language (PL) content and so, as a bit of an aside, I would like to make a few comments about the importance of programming languages in the history of the free software movement, and the importance of software freedom in the history of programming languages.

In the history given above I mentioned the disruption caused by the early success of GCC. That is one, small, PL angle.

It is also worth pointing out that, at least as the stories are told, one of the inspirations that led RMS to start the free software movement was the history of Lisp at MIT, Symbolics, and Lisp Machines Inc. Details of the history differ sharply among the first-hand accounts told years later, but all parties seem to agree that while the MIT AI Lab was once the Lisp capital of the world (or very nearly so), nearly all of the main Lispers there joined either Symbolics or LMI, took their software "private", and a former environment of open collaboration was lost. RMS was embittered by this, made attempts to resist it, and later was inspired to begin GNU to try to prevent such things from recurring. The "forking" of Lisp into competing, proprietary versions was an early negative example that helped to form the concept of software freedom.

The other significant effect of the forking of Lisp was, of course, the invention and standardization of Common Lisp. As an aside, in the early years of proprietary software as a growth industry, similar standardization efforts were often seen by industry as the "cure" for the ills of taking formerly shared code into proprietary forks (e.g., POSIX, X11, and NFS).

The influence of software freedom on programming language design is also apparent, especially in the category broadly known as "scripting languages". Perl, Python, and Ruby are each fine examples of languages whose designs emerged in and from primary implementations that were free software. Perhaps it is interesting, from a programming language theory perspective, that those three languages also stand out as being among the more quirky, ad hoc, least optimizable, least re-implementable languages around. Of the three, it may be interesting that only one - Perl - has found some ideological grounding for its nature, in Larry Wall's comparisons between programming and natural language.

### 1998 / Netscape going FLOSS

This is the year that Thomas L. Friedman describes as one of his self-proclaimed "major events" that turned the world "flat", in his book The World is Flat. It is a good example of the media generating myths, similar to how the media generated the myth that the ARPANET was conceived to withstand massive network failure in the event of a global war.

Aside from that, it is important to note that most software used to be free prior to AT&T and other vendors wanting to commercialize it. Before there was a significant market, academic institutions like universities and government research labs did not have to pay huge dollars for software. It was free.

Even prior to RMS, there was the BSD license. It was first solidified in law when an entity (one that eventually became SCO) tried suing Berkeley for copyright infringement, but their suit had to wait until the next business day due to a late filing at the courthouse. Berkeley's lawyers, taking advantage of the time zone difference between Berkeley and Princeton, filed in a California court the same day they heard about the pre-SCO lawsuit. Their suit claimed that some of Berkeley's networking code had been used without attribution - and developer attribution is a standard part of the BSD license.

In the modern day, it is also political that some open source software is deliberately designed against too much modularity; too much modularity potentially frees others from having to contribute changes back, since in principle nothing in the original source text has changed. This modularity issue is rumored to be the main reason people are supporting LLVM over GCC for projects like Clang, as GCC contributors are supposedly frustrated by FSF resistance to making GCC more modular. Another, similar example might be what led the Haskell language's founders to, in effect, build their own Miranda, though they have since been relentlessly dedicated to making it more modular. This example is a little different, however: in the case of GCC, it is the FLOSS software itself that, by design, effectively disallows *useful* proprietary plug-ins.

### vouching for Z-Bo re GCC

Yes, it's true that RMS helped keep GCC from being a more hackable platform because he was worried about exploits that would give the bulk of the benefit to proprietary rather than free software. (Personally, I think he badly mismanaged the kernel project generally, GCC in the way you describe, and the final push toward complete GNU system integration - and that his technical-level mismanagement in those areas is precisely what the "open source" adversaries used as leverage to beat back the free software movement.)

### RMS

Great minds tend to make shallow ideologues.

You forgot to mention GNU Emacs and Lucid, and the somewhat infamous flame war between RMS and Jamie Zawinski (jwz). It nicely documents the FSF's refusal to make some things more modular and better support abstraction. In his interview in Coders at Work, jwz lovingly describes RMS' persona as "the intransigence of Stallman".

I don't really support the rumors that FSF really tries to make software less modular. I just think that people like RMS inherently prefer open, editable code to software abstraction, for whatever reason. I do support the idea that LLVM is probably more modular and will over time steal GCC developers because of that reason (and also major corporate sponsors like Google).

### re RMS

A quick tidying up, nothing major:

a) Regarding GCC, RMS was quite explicit about avoiding modularity for political reasons. A long-standing, popular feature request was for GCC to be able to dump parse trees (a feature added long ago) and to *read* parse trees back in (a feature kept out for political reasons); similarly for the RTL form. RMS' stance was that yes, it would be a nice feature technically, but no, it can't go in, because it would encourage people to write proprietary software front-ends to GCC.

b) Regarding GNU Emacs v. Lucid: as near as I can reconstruct it, RMS was right and jwz (whom I greatly respect) was a bit of an ass in a limited, technical sense. The Lucid / XEmacs fork eventually had to fix the very problems that held RMS back from merging their patches; some of the people issues around the Emacs maintainer who joined Lucid were not handled very gracefully, leaving RMS needlessly in the dark; and Lucid was not careful about paperwork, so to this day there are legal difficulties moving code 'twixt the two forks as a result. It's a very different set of issues from GCC, and it's a bit unfair of jwz to sum it up as "why RMS is impossible to work with".

c) You are completely correct, however, that as a matter of coding style RMS likes big, sprawling, essentially monolithic programs. Emacs, GCC, and GDB all reflect this. They tend to be very orderly programs, designed to not be *too* hard to modify, but they tend to not be nicely modular in the ways a younger hacker might do it. (GCC, of course, has been cleaned up a lot in that regard since RMS last seriously touched it.) His code, in my experience, is very interesting to work with: a real pleasure and generally lucid (no pun) most of the time, and then a frustrating mess when you want to change it to do something he hadn't thought of. As a coder, he is a very talented politician ;-) (And, like I'm one to talk - just to be clear.)

### Thanks

Interesting stuff.

### Early GCC: I can vouch for that

> GCC benchmarked surprisingly well compared to very expensive compilers and was unusually easy to port to new chip architectures.

In 1990-1992, I was working at a computer-aided instruction company, running on System V. Prior to gcc 2.0, the vendor compiler was a hair faster, so we stuck with that. When gcc 2.0 came out in 1991, it definitively outdid the vendor compiler. IIRC, the main advance was that 2.0 had better separation between optimization and code generation, so that it was easier to port to a new CPU and still get an optimized compiler.

(This was a Motorola m88k, running stock SVR4. We never saw much evidence that Motorola put a lot of effort into the software for that box.)

### Volunteers?

"RMS and the FSF - left unmolested - could have organized volunteers to eliminate any need for companies like VA Research and Red Hat."

That's a remarkable assertion. I'd be curious to hear how you support it.

(Disclaimer -- I have a bias towards seeing a need for such companies, since a similar one pays my salary. I do intend this as an honest question, though.)

### re Volunteers?

Brooks, I'm not sure why you think that that's a "remarkable" assertion so I'm not sure how best to satisfy you. If you like, please ask me via email (lord -at- emf.net) but if you do, please explain a little bit what you find implausible about the assertion so I know on which topics to focus a reply.

### I'm not buying it either

Also,

> Rather than let this happen, "Open Source" was invented (and the OSI chartered), as a kind of "hostile rebranding".

RMS and his ideology just rub many people the wrong way, so it was pretty much inevitable that people would want to distance themselves from him and the FSF.

### back towards the topic, ok?

Let's relate what you are saying with what the "Two Bits" book says, ok? And to the question posed about programming languages. A flame war about RMS' personality or ideology would be hopelessly off topic.

One of the book's claims is that free software "started" in 1998. The fact of the matter is that by 1998 there were already a number of GNU/Linux systems. One of these, Debian, was created to further the goals of the GNU and Linux projects. The plan was even that the Free Software Foundation would be one of the primary distributors of Debian. These facts undermine the position taken in "Two Bits" and also undermine your suggestion that something about RMS's "ideology" would prevent a complete, stabilized, supported distribution under the GNU aegis.

Another of the book's claims is that while "open source" and "free software" contend over ideological hegemony, nevertheless they agree upon the set of practices to be produced. That is, per the book, whether a programmer identifies with a free software view or an open source view should make no difference to what the programmer actually does when programming, day to day. That claim doesn't hold up.

That claim doesn't originate in "Two Bits". Rather, it has been among a few (self-contradictory) claims made all along by advocates for open source. Yet, that claim of "shared practices" is undermined by the facts on the ground. For example, the eventual outcome of Debian. See http://www.gnu.org/philosophy/common-distros.html

How did, for example, Debian come to diverge from a strong commitment to software freedom? It is because between 1996 and 1998 the "open source" ideological hegemony was "worked out" in the context of Debian. Today's compromised status of Debian - the reason it is not endorsed by GNU - is precisely because it embraced practices consistent with "open source" but opposed to software freedom. (Perhaps it was with regretful recognition of that divergence that Bruce Perens took a tilt at a windmill with this piece: http://lists.debian.org/debian-devel/1999/02/msg01641.html )

Was it really the power of ideas and personalities that led to that divergence? Or do you suppose it might have something to do with the vast amounts of money and influence over the press that a small group of open source advocates poured into distorting the public discussion? The two ideologies (open source, free software) differ significantly over the desired practices. Only one of the two ideologies so often denies this.

Let's turn to the question posed in the subject post of this thread: what is the role of programming languages in the history being told in Two Bits. With the history corrected as I have above, some tentative answers start to appear:

Part of the answer has simply to do with the set of practices that characterizes open source, and how they played out in (especially) Perl, Python, PHP, and Ruby - and then, indirectly, Java. The former four languages "grew", mainly via libraries: a pattern the Java founders sought to ground in principle and execute systematically.

Related to that, we can observe that each of those languages, save Java, is (mostly, not entirely) a "single implementation" language, and that a robust specification is historically a low priority for each (again, save Java). Perhaps that is in part because, if the source to the primary implementation is available, fewer people are concerned that the language be explicitly designed.

We can also see how the ".com" bubble figures in to programming language design: The very large influx of investment capital that propelled "open source" into the limelight had a second effect, equally harmful to software freedom. The second effect was the notion of using closely held source code as a private asset, selling only the service of running that asset on private resources. Let's take as an example "gmail". An open source advocate will say, if ideologically consistent, "Yes, that is an open source project. They built it out of open source code." A free software advocate is more likely to say "No, that is not free software. If I use that program, I'm not free to modify it, or even study the source code."

An effect of this on web-oriented programming language design is a diminished emphasis on application code sharing, and increased emphasis on library sharing. For example, in Ruby libraries you'll find various tools for templating or for AJAX hooks - but you won't find a module that implements Twitter. This is in keeping with open source ("We'll share code when it's convenient to us") and inconsistent with software freedom (because it says "We'll hoard code if hoarding makes money for us").

I think that that, too, has had influence on programming language design. By avoiding sharing application-level code, there has been little or no pressure on those language designs to contemplate features that would make it easier to modularize application code and make it composable. The resulting difficulties of, say, building a scalable Twitter, or of making an app that combines the functionality of Twitter and a blog host, support artificial barriers to entry - raising the market value of some .com firms at the social cost of encouraging software to be hoarded. (The same difficulties also have grave implications for user privacy and the protection of user data.)

In short, were I to tell the history Two Bits aims to analyze, I think I would look at the differences in practice between open source and free software. I would look at how a large influx of money during the first Internet boom propelled open source into the limelight, helping it to achieve hegemony. I would look at how the resulting divergence of practices relates to open source's emphasis on "sharing some source in support of hoarding other source". I think I would argue that this open source influence is deeply inscribed on the surfaces of several of the most popular and economically influential languages that grew up over the past ten years, and that language design, as commonly practiced, regressed a bit as a result.

### Application-level modularity

I find your comments about application-level versus library-level modularity very thought-provoking. It's definitely a new perspective for me. Thanks.

### By avoiding sharing

> By avoiding sharing application-level code, there has been little or no pressure on those language designs to contemplate features that would make it easier to modularize application code and make it composable.

Maybe I've overlooked something obvious. Which sort of language level features are you talking about? Java style interfaces in languages which support duck typing by default... ?

### application level composition

> Maybe I've overlooked something obvious. Which sort of language level features are you talking about? Java style interfaces in languages which support duck typing by default... ?

I don't think you've overlooked anything "obvious" - that's my point. We don't try to make web apps composable except (barely) at the protocol level so we (collectively) lack a feel for how languages could help. I have a lot of guesses but they are just that. If the paradigm was to assume that, by default, apps are sharable in source form and best implemented as distributed, decentralized systems, I think we'd find lots of ways in which PLs could be usefully tweaked.

### If the paradigm was to

> If the paradigm was to assume that, by default, apps are [...] best implemented as distributed, decentralized systems, I think we'd find lots of ways in which PLs could be usefully tweaked.

I think Waterken is a good start on that path. Much of the existing web infrastructure suffices to build distributed systems, with browsers as nodes.

### I agree; having the right perspective dictates your application of PLT

> We don't try to make web apps composable except (barely) at the protocol level so we (collectively) lack a feel for how languages could help.

"Perspective is worth 80 IQ points." ~ Alan Kay

### What I find remarkable

What I find remarkable about the assertion is that you are asserting that, if it weren't for this political issue, there would be enough unpaid volunteers working enough hours on free software to make up for the efforts of the hundreds of paid employees from Red Hat et al working full time on the projects.

The idea of there being that many volunteer hours available to make up for this seems completely implausible to me. Where would they all come from? In the free-software projects I'm familiar with, most of the significant contributors are people who have been working on the project for years at some significant fraction of full-time. The only way that that many people can afford to do that is by having that be their paying job; you just don't get that many people who have the money to take that much time away from a paycheck and write code, and even fewer of them would choose to.

I also find it quite unlikely that, if there were such a pool of dedicated volunteers, something as unrelated to the code as this political issue between the FSF and "open source" would be sufficient to keep them away.

(Also, on a completely separate issue, one of the "needs" for a company like Red Hat is to provide enterprise-level contracted support to other companies that wish to use GNU/Linux on their computers. A volunteer-based organization is, by definition, simply not in a position to convert money into a guaranteed expenditure of effort in that manner, and thus could not eliminate that particular need for companies like Red Hat. The only way the FSF could have eliminated this need would have been to fail entirely at getting a corporate user base for GNU/Linux, and I would guess that's not what you are referring to.)

### the unpaid volunteers question

> What I find remarkable about the assertion is that you are asserting that, if it weren't for this political issue, there would be enough unpaid volunteers working enough hours on free software to make up for the efforts of the hundreds of paid employees from Red Hat et al. working full time on the projects.
>
> The idea of there being that many volunteer hours available to make up for this seems completely implausible to me. Where would they all come from? In the free-software projects I'm familiar with, most of the significant contributors are people who have been working on the project for years at some significant fraction of full-time. The only way that that many people can afford to do that is by having that be their paying job; you just don't get that many people who have the money to take that much time away from a paycheck and write code, and even fewer of them would choose to.

I do not contend that were it not for the "open source" political movement, a wealth of unpaid volunteers would have stepped in to essentially perform the same work that your friends do.

I do contend that with better organization - not more work, just better organization and better tools - the "upstream projects" could eliminate the need for a great deal of what the major GNU/Linux distribution companies do, vis a vis their actual distributions.

Remember that the distribution firms got their start largely by providing services that upstream projects were not yet prepared to provide for themselves: integration, configuration, testing, fancy bootloaders, holding stable snapshot releases steady and patched as needed, and distributing updates (usually by an automated mechanism). The "open source" (as contrasted with software freedom) firms convinced many, including many incoming engineers, that those tasks were not the job of upstream projects. The distribution vendors built technical infrastructure to work around that deficit for the benefit of their customers, hoarding that infrastructure in a manner abhorrent to the concept of software freedom yet right in keeping with open source principles.

My claim is not that the FSF could have commanded a volunteer labor pool the size of, say, the employment roster of a Red Hat or a Canonical - but rather that in the absence of such "open source" rather than freedom-emphasizing firms, the upstream projects would have been much more likely to self-organize in ways that eliminated nearly all of the need for such distribution companies.

Now, firms such as Red Hat have arguably done good things that I think would have been significantly more difficult without them. One example that comes to mind is the way Red Hat worked with the NSA to contribute back to the community at large a pretty nice, rather sophisticated set of security mechanisms for GNU/Linux systems (SELinux). One of the rhetorical difficulties in daring to criticize such firms for some of their actions is that, to be frank and honest, you have to acknowledge some of their substantial give-backs to the technology.

> Also, on a completely separate issue, one of the "needs" for a company like Red Hat is to provide enterprise-level contracted support to other companies that wish to use GNU/Linux on their computers. A volunteer-based organization simply is by definition not in a position to convert money into guaranteed expenditure of effort in that manner,

I agree to an extent. Remember I said that these firms would not exist in anything close to their present form, not that they wouldn't exist.

Here is one bottom line for you: if the RHEL distribution is free software, as opposed to open source, then why isn't the market for RHEL support competitive?

My first full time programming job was to build a set of DSLs for configuration management of very large computing systems that were otherwise difficult to integrate and configure. My stuff was in the same category as, say, autoconf/automake and all that - with a better language design but a far, far less competent execution (I was junior and not quite up to the task). I can tell you anecdotally from that experience that large system integration, configuration, etc. problems can be far, far better automated than they are in the open source world while simultaneously making life easier rather than harder for upstream projects. Alas, about a decade later when I was better up to the task and tried to get folks interested in it, the response was about what you offer: "Firms like RHAT spend a lot of money on that so (a) we don't have to; (b) why bother, this is open source!"
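To make the kind of automation described above concrete, here is a minimal sketch of a declarative configuration DSL: packages declare their dependencies, and the tool derives a valid integration order rather than a human maintaining it by hand. Everything here is invented for illustration (the package names and the `SPEC` table are hypothetical, and this bears no relation to the actual DSLs the comment refers to).

```python
# A toy declarative dependency DSL: each package names the packages
# it depends on, and the tool computes a build/install order in which
# every dependency precedes its dependents.

from graphlib import TopologicalSorter

# Hypothetical distribution "spec": package -> list of dependencies.
SPEC = {
    "base-files": [],
    "libc": ["base-files"],
    "openssl": ["libc"],
    "curl": ["libc", "openssl"],
    "package-manager": ["curl", "libc"],
}

def install_order(spec):
    """Return a list of packages in an order that satisfies
    every declared dependency (a topological sort)."""
    return list(TopologicalSorter(spec).static_order())

if __name__ == "__main__":
    print(install_order(SPEC))
```

The point of such a design is that integration knowledge lives in one declarative specification that upstream projects could maintain themselves, instead of being re-derived by hand inside each distribution vendor's private tooling.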

### I do contend that with

> I do contend that with better organization - not more work, just better organization and better tools - the "upstream projects" could eliminate the need for a great deal of what the major GNU/Linux distribution companies do, vis a vis their actual distributions.

I swear, I've heard this position before on the Internet somewhere. Did you write something similar somewhere else in the past?

Where does that put companies like Canonical Ltd., with its Ubuntu project and its Launchpad collaboration system?

### re z-bo's contention

Yes, I've written about that before. Most recently, at any length, in the context of the Arch revision control system project.

If you don't mind, I would rather not comment in this forum at this time about Canonical and Launchpad. Canonical and Launchpad are not unrelated to Arch in their origins. It would be difficult for me to comment briefly and accurately on their relations without going on at length about some history that I find painful to recount. Hopefully it is sufficient to note that, in spite of embracing distributed and decentralized revision control technology, Canonical has nevertheless striven to hoard critical infrastructure for producing and maintaining Ubuntu, and in that regard is little different from Red Hat.

### Another bait

Perhaps this will spark further discussion (regulars know my obsession with code reading):

Peter Reintjes writes, "We soon came into possession of what looked like a fifth generation photocopy [of John Lions's Commentary on Unix 6th Edition] and someone who shall remain nameless spent all night in the copier room spawning a sixth, an act expressly forbidden by a carefully worded disclaimer on the first page. Four remarkable things were happening at the same time. One, we had discovered the first piece of software that would inspire rather than annoy us; two, we had acquired what amounted to a literary criticism of that computer software; three, we were making the single most significant advancement of our education in computer science by actually reading an entire operating system; and four, we were breaking the law."
...
Thirty years later, and long after the source code in it had been completely replaced, Lions's Commentary is still widely admired by geeks. Even though Free Software has come full circle in providing students with an actual operating system that can be legally studied, taught, copied, and implemented, the kind of "literary criticism" that Lions's work represents is still extremely rare; even reading obsolete code with clear commentary is one of the few ways to truly understand the design elements and clever implementations that made the UNIX operating system so different from its predecessors and even many of its successors...

### Regrets

In 1998 I saw Lions's Commentary in the book section of the Microcenter in Santa Clara, for about $10. Wish I'd picked it up then. ...heh. Amazon has it for $36. The cover of the new edition shows one student furtively running the photocopier while another keeps watch.