A Manufacturer's Perspective on PL Progress

This may be the first post about manufacturing ever to appear on LtU. I offer it because, sometimes, we find lessons about our own fields in the fields of our neighbors. One or two of you will have noticed (perhaps even appreciated) my eight year hiatus from LtU. I've spent the last seven years working on a viable business model for American manufacturing through on-demand digital production. Testing ideas commercially has a way of making you look at them in somewhat different terms, and it seems to me that there are parallels between what has happened in manufacturing and what has happened in programming language adoption.

I recently rediscovered Milton Silva's post from April of 2020, "Why is there no widely accepted progress for 50 years?", which prompted me to write this. The notion of "progress" is replete with observer bias: we readily notice advances on things that serve our objectives, and readily ignore advances on things that are not relevant to us on the day we happen to learn about them. So perhaps a better question might be: "Why has there been no uptake of PL innovations over the last 50 years?" Thomas Lord nailed the biggest issue in his response to Silva: "it's because capital, probably."

Because the parallels are direct, I'd like to give my view of why there has been no substantive progress in manufacturing in the last 50 years. With apologies to international readers, I'll focus on how production evolved in the United States because I know that history better; I don't mean to suggest for a minute that progress only occurred here.

The Path to Mass Production

In the US, the so-called "American System" of repeatable fabrication using standardized parts is commonly misattributed to Eli Whitney. Whitney mainly deserves credit for collaborating with Thomas Jefferson to copy the work of Honoré Blanc. Blanc, a French gunsmith, demonstrated rifles made from standardized parts in 1777. Blanc's methods were rejected by European craftsmen, who correctly viewed them as a threat to their livelihoods and to the established European social structure. Unable to convince Blanc to emigrate to America, Jefferson (then Ambassador to France) wrote letters to the American Secretary of War and to his friend Eli Whitney describing Blanc's methods. Though unsuccessful in his attempts to copy them, Whitney did an excellent job of marketing Blanc's ideas in America. Among others, he found a strong supporter in George Washington.

Standardized parts took a very long time to succeed, driven initially through the efforts of the US Ordnance Department. Lore about Singer, McCormick, Pope, and Western Wheel Works notwithstanding, standardized parts did not yield cost-effective production until Ford's assembly lines generated standardized part consumption at scale. In the mid-1920s, Alfred Sloan created the flexible assembly line, which enabled the concept of a "model year" for vehicles. With this invention, Sloan resolved the Achilles' heel of mass production by removing the challenge of satiation. In both production and consumption, "more is better" became a driving thought pattern of the American 20th century. Through two world wars and several major regional conflicts, American mass production powered the economic growth of the country and a significant part of the world.

Death Rattles

Viewed in retrospect, Toyota's re-entry into the US market in the mid-1960s with the Toyota Corona offered warning signs that mass production was in trouble. Unable to match the capital resources of American manufacturers, and unable to tolerate the costs of errors incurred under Deming's statistical quality control methods, Toyota had come up with the Toyota Production System, now known as lean manufacturing. US manufacturers had become complacent in their dominance, and were slow to consider or even recognize the threat this represented. By 1974, the Toyota Corolla had become the best-selling car in the world, and Detroit finally started to wake up.

By then it was too late, because 1974 also marked the global introduction of the standardized cargo container. With this invention, the connection between sea and rail shipping became relatively seamless across the world and the per-unit cost of international shipping became negligible. Labor and environmental costs became the principal variable costs in production, and both were cheaper overseas than they were in the United States. Standardized cargo shipping abruptly erased the geographical partitions that had protected US markets from foreign competition. By buffering transportation with inventory, the delays of foreign production could be hidden from consumers. Mass production began to move inexorably to countries that had lower labor and environmental costs for producers.

Basic arithmetic should have told us at this point that US mass production was in mortal peril: American salaries could not be reduced, so something else needed to change if manufacturing was to survive in America. Which prompts the question: Why has there been no substantive innovation in US manufacturing since 1974?

So What Happened?

There were several factors in the failure of U.S. mass production:

  1. Complacency. Toyota started building trucks in 1935, but they did not release vehicles into the US market until 1957 (the Toyopet Crown). In America, they were principally known as a sewing machine manufacturer (they built the best embroidery machines in the world until the business was taken over by Tajima in 2005). Their initial foray into the American market failed because the two markets had very different requirements, and they withdrew in 1961. The view in Detroit was that there was no reason to take them seriously. After all, they made sewing machines.

    The attitude in Detroit wasn't an exception. All of us are inclined to assume that the thing that worked yesterday will continue to work today, and consequently that market dominance is a sort of self-sustaining natural law. Which, for many reasons, is true. Until there is a regime shift in the market or the production technology. And then, sometimes very abruptly, it isn't true.

    Programming languages aren't competitive in quite the same way, but sometimes we indulge similar attitudes. In 1989, I was part of a group at Bell Labs that built the first-ever commercial product in C++. Cfront was fragile, and the prevailing view at the time was that C++ was a research toy that would never go anywhere. The Ada team down the hall, certainly, didn't seem all that worried. The waterfall model of software engineering was still king, and Ada was the horseman of its apocalypse. Both eventually collapsed under their own weight, but that happened later. Back then, C++ was young and uncertain.

    My 1991 book A C++ Toolkit described ways to build reusable code in pre-template C++. Picking it up now, I can only shake my head at how many things have changed in C++, and how those changes drove it to such a large success and to its eventual (IMO justified) decline. It succeeded by riding the wave of "abstraction is the key to scale". But in the early days it was largely disregarded.

  2. Capital. If you have an established way of doing things whose dominance relies on cost of entry and whose margins are thin, two things are true:

    1. New entrants will have to come up with a lot of money to compete with you using your methods.
    2. If the margins are low (which is to say: the time required for return is high), nobody with any brains will lend them that capital.

    Which is why there is essentially no investment in manufacturing happening today. Net margins in manufacturing typically run about 7%; at that margin, recouping an investment the size of a single year's revenue takes roughly fourteen years of profit. If you can't get outside funding one way or another, you don't have much free cash to pay for innovation.

    Programming languages are similar: a new language in and of itself isn't very helpful. A significant set of libraries needs to be written before the language is actually useful, which is expensive. And then existing programs need to be migrated, which is more expensive. And the benefit is... usually pretty uncertain. As in manufacturing, new methods and tools find success mainly by proving themselves out in a previously unserved niche. Clay Christensen refers to this as The Innovator's Dilemma.

    Of course, the other thing that happened by 1974 is that the microelectronics race found a whole new gear. Carver Mead and Lynn Conway's textbook Introduction to VLSI Systems was released in 1980, and by that time VLSI was already mature. The problem for manufacturing was that the returns on investment in microelectronics were a lot higher than the returns on manufacturing investment. Not surprisingly, investors migrate to the most profitable opportunities they can see.

  3. Training. At Buttonsmith (my company), we have built our approach around a de novo look at manufacturing in the age of digital production equipment. The new approach relies on new software, new tooling, and new processes. With each new product, we validate our approach by deploying it to our prototype production facility to see whether we can make competitive, high quality products, at scale, in single piece quantity. It's a pass/fail test. Because our floor is constantly evolving, part-time production staff don't work well. It takes six months for a new production employee's productivity to offset their cost, and a full year or more before they really develop facility in all of the production activities we do.

    Shifting from mass production to on-demand digital production isn't an incremental change. It involves either a significant cost in re-training or the cost of building a new facility from scratch.

    Programming languages are similar. It takes a solid year for an experienced programmer to become facile in a new style of programming language. Syntax is easy. Understanding new concepts and new idioms, and more importantly where to use them, takes time. Which becomes part of the economic calculus of adoption.

  4. Ecosystem. When a manufacturer makes a significant change in their products or distribution, existing distributors and suppliers ask questions. Are they adding something new or are they planning to shift their business entirely? What will that mean for us? Is this a change that threatens our business model? If you open up a direct-to-customer line of business, your distributors may be unsettled enough to replace you with another supplier. This will happen before the profits from the new activity can sustain you. Which means that some kinds of change involve high existential risk.

    Partly for analogous reasons, new programming languages get "proven out" by scrappy risk takers rather than established incumbents. Python first appeared in 1991, but didn't gain mainstream traction until around 2000. Even today, we see people argue that compiled languages are much faster than scripting languages. Which is true, but it entirely misses the point. Gene Amdahl, were he still with us, would be disappointed.

  5. The Inverse Funnel. In manufacturing, big things rely on littler things. So when the little things change disruptively....

    For more than a century, people have bought thread in single-color spools. Over time, industrial embroidery machines have been released with more and more needles to accommodate designs with more colors. The most capable machines today have 15 needles, and Tajima is generally viewed as the leader in such machines. Earlier this year, Tajima revealed a device that colorizes white thread on the fly, using a digitally directed, full-color dye sublimation process. It is able to switch colors every 2mm, which provides essentially continuous color in the final product. So this year, for the very first time, it is possible to do full color embroidery with a single thread. It is possible to do gradient embroidery. They expect to release automated conversion from PDF to colorizer instructions late this year. Without fanfare, embroidery has just become fully digital.

    A small irony is that the colorizer is typically demonstrated using a 15 needle embroidery machine with only one populated needle. Why? Because single needle industrial embroidery machines are no longer made. There is going to be enormous competitive pressure to adopt this colorizer, but the typical commercial embroiderer isn't rolling around in free cash. The good news is that their expensive multi-needle machines do not have to be discarded. The bad news is that cheaper single needle machines will return, and the cost of entry for a new competitor will drop accordingly. But all of the surrounding software, and the entire process of generating embroidery artwork, will be replaced.

    This is similar to what happens with new programming languages. Programming languages sit at the bottom of a complex ecosystem, and the transitive cost of replacing that ecosystem is awe-inspiring. The more successful the earlier languages have been, the bigger that cost will be. Introducing a new programming language may be one of the most expensive transitions you could ever hope to drive with a single piece of technology!

There's obviously more to the story, and many other factors that contributed significantly in one example or another. The point, I think, is that when we succeed, researchers and early-stage engineers tend to be standing at the beginning of social processes that are exponentially more expensive than we realize and that take a very long time to play out.

Change is very much harder than we imagine.

Fascinating article!

Fascinating article! Looking forward to hearing more about Buttonsmith.

Various points

"its eventual (IMO justified) decline"

I'd like to know your view of the justification for that decline. (I stayed out of C++ from start to finish.) It seems to me that C++ is now a Very Big Niche language, like Cobol. It will be a long time, if ever, before Cobol is turfed out of the V.B.N. it now occupies.

"at scale, in single piece quantity"

I don't understand that: I understand "at scale" to mean "in large quantities". Can you explain?

"compiled languages are much faster than scripting languages. Which is true"

Not so much. Essentially all languages can be compiled: Python is pretty close to the edge because of its extreme dynamism, though it is not usually used as dynamically as it could be. To me, "scripting language" means "language I don't like because it's too easy to use, and therefore threatens my livelihood as an Expert Programmer", which chimes with the themes of this post.

C++ Decline

In my view, C++ isn't the right language for anything. It fails as a kernel language because exceptions and destructors combine to make for impossibly expensive stack unwind, and even if you don't throw any exceptions, you still pay for them in code generation and code size. And don't bother telling me that's not true; we collected code cache residency data on EROS, and code compiled with exceptions enabled lost 23% of code cache efficiency vs. code compiled with that feature turned off.
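
For concreteness, here is a minimal sketch of where that cost comes from, along with a rough way to observe it. This is an illustration only (hypothetical file name, modern compiler flags); the 23% figure above came from the EROS code cache experiments, not from this toy:

    #include <cstdio>

    struct Lock {
        Lock()  { std::puts("acquire"); }
        ~Lock() { std::puts("release"); }   // must run even if f() throws
    };

    void f();     // defined elsewhere; the compiler must assume it can throw

    void g() {
        Lock l;   // with exceptions enabled, the compiler emits unwind tables
        f();      // and a cleanup path that runs ~Lock() if f() throws --
    }             // paid for in code size even on runs that never throw

    // A rough way to see the difference on GCC or Clang:
    //   g++ -O2 -c unwind.cpp                && size unwind.o
    //   g++ -O2 -fno-exceptions -c unwind.cpp && size unwind.o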

It's also a failure as an application language; the joke that there are many usable sublanguages comes to mind. C++ today is more complicated than Ada ever was!

I've never understood why there was so much uptake in embedded. That's a domain where simplicity counts for a lot, and small code size is good. C++ stopped being a fit for that long ago.

I certainly don't share that bias about scripting languages. If you're willing to throw very complex JIT optimization at the problem you can make scripting languages pretty darned fast. That comes at a substantial cost in runtime complexity, and most of the examples of such optimizers rely on GC - which still comes at a heap cost. All that said, there are real advantages to those languages.

c++ is a problem.

The biggest drawback of C++ IMO is that it's been designed with a complete and total disregard for the size and complexity of the library header files. Not only are they enormous and complex, they are frequently interdependent in ways that mean you can't have a small project that doesn't pull in hundreds of thousands of lines of header code, which the compiler then has to process over and over, thirty times in the course of a build.

There's a tipping point that gets crossed, where making the project a *little* bigger makes the time spent processing headers a *lot* longer. And that gets worse, not better, as the project grows more complex. So somewhere north of a million lines, C++ gets unusable. People hit 'compile' and then have to come back hours later.
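
To put rough numbers on the header problem, here is a small experiment anyone can run. The file name and the counts are illustrative; the exact totals depend on the compiler and standard library version:

    // bloat.cpp -- a ten-line program (hypothetical file name)
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> words{"hello", "world"};
        for (const auto& w : words)
            std::cout << w << '\n';
    }

    // One way to see how much header text the compiler actually processes
    // for those three includes (GCC or Clang):
    //   g++ -std=c++17 -E bloat.cpp | wc -l
    // On recent standard libraries this prints on the order of tens of
    // thousands of lines -- all of it re-parsed for every translation unit
    // that includes the same headers.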

The second biggest drawback of C++ IMO is a runtime model complicated by destructors and exceptions. Destructors create unsafe conditions that are hard to detect, and exceptions unwind the stack, calling destructors automatically. This creates a disproportionate amount of complexity and "surprise" behavior. Destructors are a good idea, but given my choice I would BAN exceptions.
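
One classic example of the kind of "surprise" I mean, as a deliberately small sketch:

    #include <stdexcept>

    struct Flush {
        // Destructors are noexcept by default since C++11; this one opts
        // out so that it can throw at all.
        ~Flush() noexcept(false) {
            throw std::runtime_error("flush failed");
        }
    };

    int main() {
        try {
            Flush f;
            throw std::runtime_error("original error");
            // Unwinding begins, ~Flush() runs and throws a second exception
            // while the first is still in flight...
        } catch (const std::exception&) {
            // ...so this handler never runs: the runtime calls
            // std::terminate() instead and the program aborts.
        }
    }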

Pointers (and pointers to pointers, and so on) were one idea, and one idea gives rise to only one family of combinations. Pointers, smart pointers, and references are three expressive ideas, but they give rise to nine combinations, then twenty-seven, and so on, each with its own subtly different behavior and its own ways to surprise the user. This was a bad idea.
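
A few of those combinations spelled out, just to show how quickly the spellings multiply. std::shared_ptr stands in here for "smart pointer", and this is only a sample; some combinations (a reference to a reference, a smart pointer to a reference) don't exist at all, which is its own kind of surprise:

    #include <memory>

    struct T {};

    // Each spelling below has its own ownership, lifetime, and rebinding rules.
    void sample(T* p,                        // raw pointer
                T& r,                        // reference
                std::shared_ptr<T> sp,       // smart pointer
                T** pp,                      // pointer to pointer
                T*& rp,                      // reference to pointer
                std::shared_ptr<T>* psp,     // pointer to smart pointer
                std::shared_ptr<T>& rsp,     // reference to smart pointer
                std::shared_ptr<T*> spp)     // smart pointer to raw pointer
    {
        (void)p; (void)r; (void)sp; (void)pp;
        (void)rp; (void)psp; (void)rsp; (void)spp;
    }

    int main() {}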

Objects were a good idea but, aside from destructor calls, C++ didn't really make them easier. It hung nicer syntax on them, but the objects that result from using it are rarely easier to use. There are some kinds of objects that are easier in C++ (when you want destructors called automatically), and some kinds that are easier in C (when you want implicit downcasting on calls to specialized methods, or want to subclass incrementally with assignments directly to method pointers).

You can do both things in both languages, of course.

The easiest way to do implicitly-downcasting objects in C++ is to do them in C. But you can manage it with C++ syntax with lots of ugly cast operators and dispatch methods in the superclass with big ugly switch statements specialized to every possible subclass and so on.

And you can manage automatically calling destructors in C, with lots of ugly header files and preprocessor operations resulting in calls to automatically generated procedures explicitly calling destructor methods, as demonstrated by CFront.
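
To make the C-style pattern above concrete, here is a rough sketch of that kind of object. The names are invented, and the code compiles as either C or C++:

    #include <stdio.h>

    /* "Base class": data plus an explicit method pointer. */
    typedef struct Shape {
        double (*area)(struct Shape *self);
    } Shape;

    /* "Subclass": the base struct is embedded first, so a Circle can be
       used through a Shape* and the method can recover the derived type. */
    typedef struct Circle {
        Shape  base;
        double radius;
    } Circle;

    static double circle_area(Shape *self) {
        Circle *c = (Circle *)self;          /* the cast the C++ type rules
                                                would make you spell out loudly */
        return 3.14159265358979 * c->radius * c->radius;
    }

    int main(void) {
        Circle c;
        c.base.area = circle_area;           /* "subclass" by assigning
                                                directly to method pointers */
        c.radius = 2.0;
        Shape *s = &c.base;                  /* use it through the base type */
        printf("area = %f\n", s->area(s));
        return 0;
    }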

When I look at C++, mostly I see nicer syntax for things that weren't really all that hard anyway, a lot of unnecessary complexity, and counterintuitive and overly restrictive type and casting rules. And then I remind myself that syntax has enormous importance to a lot of regular users, and those type and casting rules are there to allow people to prevent types from being used wrong or cast wrong by other people who don't understand them. I tend not to see these things as particularly useful because I tend not to write code with large groups of people. So I'm probably not the one to judge on those points.

c++ will have modules

Modules are coming to C++, so in another few decades that might change.
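
For reference, a minimal sketch of what the modules version of "hello" looks like under C++20. File extensions and build flags vary by compiler, so treat this as an outline rather than a recipe:

    // hello.cppm -- a module interface unit
    module;                    // global module fragment: ordinary #includes go here
    #include <iostream>

    export module hello;       // this file defines the module 'hello'

    export void greet() {      // only exported names are visible to importers
        std::cout << "hello from a module\n";
    }

    // main.cpp -- the consumer: no textual inclusion of headers at all
    import hello;

    int main() {
        greet();
    }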

Conflicting Goals

IMHO ...

C++ suffers from conflicting goals.

And it has historically suffered from conflicting goals.

Its creator imagined that he could combine all of the capabilities of C, Algol, Simula, Smalltalk, and probably Lisp and Scheme as well, without any trade-offs whatsoever. Any time someone asked, "can C++ do ___?", the answer was to make sure that "___" was immediately added.

The last 10+ years have been kinder ... a lot of things seem to have been cleaned up. But the old baggage can't be dropped.

That's a bit strong

Since I was there, I can assure you that Bjarne was not that ambitious. Smalltalk, LISP, and Scheme were never design criteria for C++, and the resistance to creeping features in the early years was much higher than it later became. There was very early recognition that garbage collection wasn't on the table.

Bjarne did a truly impressive job of resisting feature pressure from all sides for a very long time - and I say this as one of the people he was resisting. My group at Bell Labs did the first commercial product built in C++, and I think there's a fair case to be made that we were one of the main motivations for overloading * and -> (which is to say: smart pointers). The discussion about adding those overloads took several years.
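
For readers who weren't around for that argument, this is the feature in question in miniature. It is a sketch in present-day C++ (templates, deleted copy operations), not the per-type code we actually wrote back then:

    #include <cstdio>

    // A bare-bones smart pointer. The point of overloading * and -> is that
    // Ptr<Widget> can be used with the same syntax as Widget*, while giving
    // the author a hook (here just a trace) on every access.
    template <typename T>
    class Ptr {
        T* p;
    public:
        explicit Ptr(T* raw) : p(raw) {}
        ~Ptr() { delete p; }
        Ptr(const Ptr&) = delete;            // keep the sketch simple: no copies
        Ptr& operator=(const Ptr&) = delete;

        T& operator*()  { std::puts("deref"); return *p; }
        T* operator->() { std::puts("arrow"); return p;  }
    };

    struct Widget { void poke() { std::puts("poke"); } };

    int main() {
        Ptr<Widget> w(new Widget);
        w->poke();       // goes through operator->
        (*w).poke();     // goes through operator*
    }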

Part of the resistance was due to cfront. As C++ came to have real compiler implementations, change became much easier and the complexity of the language exploded.

In any case, rewriting history isn't constructive here. Or needed. If you want to win an argument about the worst PL design choices, template metaprogramming is pretty much the nuclear option. :-)
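
For anyone who hasn't had the pleasure, the canonical toy example looks something like this; real uses are far hairier:

    // Recursion via template specialization, so the compiler computes the
    // value before any code runs.
    template <unsigned N>
    struct Factorial {
        static const unsigned long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {                  // the "base case" is a full specialization
        static const unsigned long value = 1;
    };

    static_assert(Factorial<10>::value == 3628800,
                  "evaluated entirely at compile time");

    int main() {}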

Template Metaprogramming.

shap mentioned that Template Metaprogramming is a nuclear option in worst PL design choices.

That is true. In fact C++ demonstrated so many awful things about template metaprogramming that I realized it's a bad idea even in Lisp and Scheme. (Where it is expressed as macros that create other macros, or macros that result in code that contains other macros). What I realized is that no matter what you want to express, there are better ways to express it in any 'reasonable' language and you can't comb a hairy ball smooth.

And that pretty much meant giving up on Lisp Macros as being fundamentally flawed in ways that get worse with scale. Every time someone makes the macro landscape more complicated, they make maintenance more complicated for everybody forever. There's a point of diminishing returns and programmers hit it *fast* when program size goes up.

After some experimentation with dynamic scope and extensions to first-class procedures, I decided that the fully-parenthesized prefix notation in the absence of macrology didn't serve a purpose, which I think puts me now firmly out of the Lisp camp.

So I can credit C++ with shining the light on the awfulness of template metaprogramming, so bright that the reflection illuminated fundamental flaws in Lisps.

Kind classes

Kind classes in Haskell have a little bit of the same character.

The ability to run Towers of Hanoi in the type system seems like a misfeature.

Oh, I'm not against scripting languages at all

I am against the term "scripting language", because I think most people use it as a derogatory term pretty much in the sense I gave above.

At Scale, in Single Piece Quantity

I agree that "at scale" means "in large quantities".

"In single piece quantities" means that the production flow is optimized for a run length is one. Given most things are now made with digital tooling, there is no reason that two sequential items coming off of a production line need to be identical, and there is no reason that has to be any slower than mass production methods. The turnaround time is often faster because there is no batching, the amount of finished inventory is zero, and the customer has purchased before production starts.

This contrasts with mass production, where the goal is to make a very large number of exactly the same thing in the incorrect belief that this is cheaper per unit. The amount of finished inventory is consequently large. Typically no customer has paid for those products at the time they are produced - which means they are all speculative. And it turns out that 30% of retail finished goods (on average) are scrapped because they never find a customer. In high fashion that runs as high as 50%.