C is Manly, Python is for “n00bs”: How False Stereotypes Turn Into Technical “Truths”

Jean Yang & Ari Rabkin C is Manly, Python is for “n00bs”: How False Stereotypes Turn Into Technical “Truths”, Model-View-Culture, January 2015.

This is a bit of a change of pace from the usual technically-focused content on LtU, but it seemed like something that might be of interest to LtUers nonetheless. Yang and Rabkin discuss the cultural baggage that comes along with a variety of languages, and the impact it has on how those languages are perceived and used.

"These preconceived biases arise because programming languages are as much social constructs as they are technical ones. A programming language, like a spoken language, is defined not just by syntax and semantics, but also by the people who use it and what they have written. Research shows that the community and libraries, rather than the technical features, are most important in determining the languages people choose. Scientists, for instance, use Python for the good libraries for scientific computing."

There are probably some interesting clues to how and why some languages are adopted while others fall into obscurity (a question that has come up here before). Also, the article includes references to a study conducted by Rabkin and LtU's own Leo Meyerovich.


There's your problem right there...

Many of us like to think of the software industry as a meritocracy, rewarding those with the best skills who work the hardest.

Hence many people are inclined to look for self- and social-validation in the languages they choose or reject. Language choice proves something about the chooser?

Many of us like to think of the software industry as a meritocracy

God forbid we should admit it has become about hacking power relations in all of society.

Many of us like to think of the software industry as a meritocracy, rewarding those with the best skills who work the hardest. To truly achieve this, we need to remove the hidden biases that can cause us to exclude great programmers.

Homogenizing workers who have programming skills doesn't make the industry a "meritocracy". The very opposite. It is part of a relentless process aimed at making one programmer as good as any other, even if that means standardizing on a fairly low common denominator of skill and self-/situational-awareness.

The last thing "industry" wants is programmers who are anything more than a particular variety of technician.

For once, I find myself in entire agreement with you

 

Not The Whole Picture

I don't disagree that large parts of the industry want to commoditize software development, but in my opinion this is a mistake. A good developer, that is, someone who is genuinely interested in programming and has good communication skills, can be significantly more productive. By structuring the company to give developers a good working environment and treating them like adults vested in the success of the business, you can create superior software with a smaller team. The key to motivation is to give people autonomy, mastery, and purpose, and the key to a successful business is to have the right people in the right roles. So we want self motivated programmers who aspire to mastery, are motivated by the purpose of the business, can work as part of a team, and can give critical feedback.

To bring it back to the topic, we mostly use Python because that's what the first developers in the company used, and it makes sense to recruit people who can work on each other's code. I don't find that Python is for n00bs, but, generalising (which is not a good thing to do), Python developers are more scientific and mathematical, and we have several Python developers that I would properly call software engineers.

re whole picture

By structuring the company to give developers a good working environment and treating them like adults vested in the success of the business, you can create superior software with a smaller team. The key to motivation is to give people autonomy, mastery, and purpose, and the key to a successful business is to have the right people in the right roles. So we want self motivated programmers who aspire to mastery, are motivated by the purpose of the business, can work as part of a team, and can give critical feedback.

Businesses often discuss their more or less interchangeable, unskilled or low-skilled workers in exactly those same terms.

Throughout the higher-profit product lines of the computing systems industry, the tendency is for a very small number of programmers to serve a very large source of revenues. By Amdahl's law, improving the productivity of programmers quickly hits a point of steeply diminishing returns.
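
To read the Amdahl analogy literally, as an illustration (p and s here are just made-up symbols, not figures from any study): if programmer effort accounts for a fraction p of everything that produces those revenues, and better languages or tools speed programmers up by a factor s, the overall gain is

    \[ \text{overall gain} \;=\; \frac{1}{(1 - p) + p/s} \;<\; \frac{1}{1 - p} , \]

so with p = 0.1, even an unboundedly large s caps the overall improvement at roughly 1.11x.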

Looking at the culture in Silicon Valley, employers seem to place the highest emphasis, when hiring programmers, on cultural, sex, and age homogeneity. Having hired programmers, they emphasize non-conflict, compulsory performance of "enthusiasm", and very long hours.

The general theme of "good working environment" perks is kind of infantilizing (free food, alcohol, under-the-table speed, and so forth) and aimed at longer work hours.

There is something narcissistic in the way today's industry programmers in the SV tech firms tend to talk about themselves. Perhaps this is true elsewhere as well. There is much preening around self images like:

self motivated programmers who aspire to mastery, are motivated by the purpose of the business, can work as part of a team, and can give critical feedback

and I see the language debates which are subject of the top post in that light.

What is amazing is how many millions of words this class of professional programmers can expend about who they are and what they do without saying much of anything at all about the public users of their software and the social impact of industrial tech. That one of the supposed main subsectors is nothing less than "social network" computing in one guise or another must be some kind of new record in irony and lack of self-awareness.

No conspiracy

The reason most of your diatribes don't resonate much with me is that you describe "industry" as if it was just a small cabal of Dr. Evil types. It seems to me there's more of an evolutionary process. There are widely varying ideas about how to run a software company out there, yours included, but there's also a selection process (the market). You seem to mostly be complaining about who the winners have been and what practices they've used to win.

conspiracy?!?

you describe "industry" as if it was just a small cabal of Dr. Evil types.

I have no idea what "Dr. Evil type" means.

"Cabal" is not really the right word and neither is "conspiracy"[*].

The condition of modern tech workers is determined by a relatively small group of people. They operate mostly in the open. Below I've provided a link to a Harvard Business Review article that might help paint the picture. What they do in tech is only superficially different from what they do in other sectors.

Corporate employees these days live under a more developed version of the kind of management historically associated with Taylorism or Fordism. Taylor famously brought forth the intensive use of metrics to tweak industrial processes. Ford recapitulated those ideas but added a new emphasis on standardizing his workforce by means of a kind of whole-life management.

I don't know if that's what you mean by "Dr. Evil" but, anyway, it is just the real direction corporate standardization of workers went.

Here's the HBR article that notably cites Google as just another variation of the kind of system used for, for example, Starbucks store employees. The main distinction of tech, in this area, is that the budget per employee is much higher.

https://hbr.org/2010/10/competing-on-talent-analytics

Something to notice in that article, perhaps: note Google's emphasis on customizing interventions for the bottom 5% of performers (by its metrics). It's sort of a paradox or illusion in the sense that, to an employee, such intervention appears completely individualized and personal. Yet, at the same time, that intervention is a mass-produced commodity, manufactured and consumed internally.

Here is some background on Henry Ford's earlier generation of intervention in worker lives:

http://jalopnik.com/when-henry-fords-benevolent-secret-police-ruled-his-wo-1549625731

No "conspiracy" or "cabal": this is just the banal condition of workers today.

The reason I harp on this in a context like Yang and Rabkin's article is that these management practices deeply relate to workers' views of themselves, their worth, and their role in society. And yet, consistently, in tech, these management practices distract workers away from their role in society vis-à-vis users and power over users. And at the same time, these management practices evidently confuse workers as to just how standardized and commodified their own lives actually are to modern managers.

note:

[*] Although I say "no conspiracy" that isn't entirely true. For example, several of the Bay Area's largest tech employers have famously been caught in a wage fixing conspiracy.

Fair enough

You're right, you didn't claim collusion. Here's the part I still find unconvincing:

Throughout the higher-profit product lines of the computing systems industry, the tendency is for a very small number of programmers to serve a very large source of revenues. By Amdahl's law, improving the productivity of programmers quickly hits a point of steeply diminishing returns.

Basically, I think that you're saying that these companies are using sub-optimal software development practices but it doesn't matter because software development isn't the bottleneck. I don't think that's true. If you had a software development methodology that got you an order-of-magnitude productivity advantage over the competition, I think that would be a game changer. I think the reality is more that "homogeneity" is currently more important to large scale software development than whatever advantages you can get through "heterogeneity", whatever that means.

Group dynamics are tricky

Compare startups vs big tech companies vs finance companies vs ..., and we see different engineering practices across them and within them. Picking just one factor, like using the language as a filter for long hours, is a gross over-simplification. When there's a lot of legacy, yeah, the same languages get used, but devs get paid to write new stuff too, especially when there are platform-level shifts like mobile and big data. Even when there's old stuff, there are agents of change, like performance, compliance, and difficulty hiring.

If I had more time, I'd love to measure how languages spread across teams in big orgs. Top-down vs. bottom-up orgs, Dunbar's Number, rapid growth vs M&A, generational shifts, etc. Developers possess a lot of agency in practice, and there are a lot of environmental factors!

Fordism

You have Ford all wrong. Ford was actually responsible for a dramatic improvement in working conditions. Ford's metrics showed that worker efficiency dropped off, and that it was more efficient to swap in another shift, hence the reduction from a six-day working week to five. He also doubled the going rate of pay:
http://www.history.com/this-day-in-history/ford-factory-workers-get-40-hour-week

I would be interested to know what you would do if you were leading a software company.

re Fordism

Ford was actually responsible for a dramatic improvement in working conditions.

While that is a popular story, it is not true.

Ford gave no inch to workers except as circumstance forced him. At every turn he sought to control as many aspects of the workers' full lives as possible. At every turn he pitted worker against worker.

The famous five-dollar day came at a time when Ford needed to intensify the work on his assembly line: extract tedious, hard work from each body faster than previously. He doubled the pay of some workers but only in exchange for (a) more than double the output at a faster pace; (b) submission to whole-life surveillance and moral instruction by company agents.

Soon after, fearing his immigrant workers might turn out to be nascent communists, he began bribing Black clergy to supply him with Black workers who, in exchange, were expected to work against unionization.

Eighteen years later, when wages had reached $7 per day, Ford reacted to the Depression by slashing them to $4, below even what competing manufacturers were doing. Please note that this move boosted Ford's profits; that was the only reason for him to do it. Workers had little choice but to accept the deal, with unemployment so high and so few jobs available.

Ford resisted unions with spying and physical violence right up until, at long last, the federal government forced him to accept a union contract. After that, he continued to spy on and use violence against workers anyway.

The condition of workers from, say, 1914 to 1941 worsened considerably. Ford's plants were no exception. Ford and people like him were not "actually responsible" for a handful of fondly remembered advances like spots of higher wages and a general shortening of hours. Those changes were forced upon men like Ford, no matter how much they bragged otherwise after the fact.

That is why, by 1941, Ford's workers were taking beatings from Ford's thugs: they wanted to stand up to the chief thug himself.

What Would You Do?

That's very interesting, and something I didn't know about Ford. Here in the UK the closest we have to someone like Ford was probably Cadbury; he was a Quaker and believed all human beings should be treated equally, so it would seem he was a better model for a business than Ford. He had some of the same tendencies as Ford concerning the whole life of employees, effectively creating a new town, Bournville, for workers to live in. You could view such things positively or negatively, as either being overly controlling of employees, or a genuine concern for health and wellbeing.

It is easy to criticise others when you have not made your own position clear; everything you have said has been a criticism of how other people run their businesses. How would you run a software development company?

Igalia?

I can't answer for Thomas Lord, but Igalia is pretty interesting as a software development company. That said, it mostly improves on the question of the work conditions; being a consultancy business, they may have less leeway in terms of controlling the social impact of their work -- although their work on Accessibility, for example, seems quite cool.

re How would you run a software development company?

Here in the UK the closest we have to someone like Ford was probably Cadbury

I know less about Cadbury but there are some similarities arising from similar business needs. Cadbury's efforts to stabilize and standardize their workforce included sex-segregated jobs, the firing of women upon marriage, the introduction of the individualized performance record, and the enforcement of temperance and strict sexual mores.

Bournville itself was motivated by the need to house a workforce at a site inconveniently distant from the city, but on cheap rather than dear land. Workers variously purchased or rented housing from the Cadburys themselves, a company-town methodology for not sharing profits with private rentiers of workforce housing.

So, yes, a lot of the same elements of workforce standardization are present at Cadbury as at Ford. Looking just now, I see there is a little bit of scholarship that tries to identify the common genealogy of these patterns of workforce discipline, but I did not look any further into it than that.

There is some clear similarity to Bay Area tech. Firms have explored trying to build company-town housing but have generally been thwarted by local jurisdictions. In its place is a kind of simulation: private transportation is provided to employees to encourage them to live in concentrated pockets in various regional cities. At the same time, some executives and investors in these firms buy up housing real estate in the markets where that concentration of workers drives up prices.

And, of course, the surveillance, record keeping, and disciplinary intervention in workers' lives is both far more advanced and far less overtly brutal than in the late 19th and early 20th century.

Here is the point for programming language theory: When you look at these large tech employers (and not so much at their rhetoric) the empirical evidence is of a really intense, industrial-style standardization of the workforce. At the same time, compared to other sectors of the economy, wage costs are a much smaller fraction of revenues at those firms.

Consequently, potential productivity gains from PLT advances will be largely irrelevant to those firms unless, as Matt points out, the margin of increase is very substantial. (It might be interesting to figure out for sure whether, and how, Go (the language) passes that threshold. Or whether the various frameworks/languages some firms adopt over plain JavaScript do.)

The "false stereotypes" that turn into "technical 'truths'" about languages represent, at least in part, a non-recognition of the really quite standardized role of labor in software development.

How would you run a software development company?

If you mean a company like the big firms in Silicon Valley, as an engineer, I generally agree with Eben Moglen that those firms should be regarded as and treated as huge environmental disasters.

What instead?

Imagine you have the opportunity to found a new software company that develops products to market and sell. How big would it be? What would its values be? You have still only criticised others. If you are not part of the solution, you are part of the problem. What would your solution look like?

re What instead?

You have still only criticised others.

Stop trolling.

Questions != Trolling

Would you have answered the question if I had left out the part about criticism?

re questions

No.

Trolling

So other readers will easily be able to see which one of us is trolling and which is engaged in a dialogue (hint: a dialogue goes both ways).

GETTING OFF TOPIC AREN'T WE?

there are 3 kinds of people in the world:
1) those who don't even know there is any suck.
2) those who can see the suck, but don't know how to / aren't able to fix it.
3) those who are like the 2's but also have ideas on how to address it.

e.g. when it comes to programming languages, most people in the world are 1's, some (like me) are 2's and even fewer (although a concentration of them are found on LtU which is why i love it) are 3's.

Same goes for talking about what is right/wrong with: politics, company ethics, consumer choice, environmentalism, etc.

non-scientific notion of empirical

You're abusing the notion of 'empirical'. When you edge towards a hard universal claim about software engineering and languages in Bay Area etc. companies, it's easy for me to point out counterexamples from the same set of companies. 'Standardized' is a hard label to apply when comparing Netflix (30+ languages; siloing 200-person teams) with Adobe (a series of acquisitions) with Google (invent their own languages) with Facebook (giant PHP app w/ custom VM), and then again with the many smaller firms.

The economic argument doesn't mesh with my interviews and analyses, and I admit I lost the thread. Early tech companies are largely engineering and prototype driven, and use technical leverage (e.g., AWS, big data frameworks, web scripting) to quickly iterate as a small team. As they find fit, they introduce large, accretive sales & marketing orgs. Engineering either freezes there, so a company taps out at ~$100M, or they advance to focusing on scale (cloud orchestration, distribution/encapsulation, ...), leveraging economies of scale (VM automation, elastic compute, ...), and side verticals (reuse, generalization: platforms). Technical advantage is huge in both the early and mid/late stages. Likewise, the industry has been recognizing this, evidenced both in changes to engineering orgs and in institutional support of dev-tool markets that dwarfs that of the '90s and 2000s.

re comment non-scientific notion of empirical

it's easy for me to point out counterexamples from the same set of companies

You haven't done so and so I think you are trying to refute a claim I didn't make. Your counter-examples:

'Standardized' is a hard label to apply when comparing Netflix (30+ languages; siloing 200-person teams) with Adobe (a series of acquisitions) with Google (invent their own languages) with Facebook (giant PHP app w/ custom VM), and then again with the many smaller firms.

Taking these one by one:

Netflix (30+ languages; siloing 200-person teams)

Here is the widely cited talk I think you are referring to:

https://youtu.be/WvpmiGURFYQ?t=8m33s

I've set the "start at" point for that video link to the juicy bits. What it describes is an advance in the division of labor and commodification of programmers around the managerial units of the "small team" and the "micro-service". (In a car plant, you might think of a small team responsible for installing engines, another small team responsible for wiring electrical systems, another small team working on painting the exterior.)

One of the chief virtues of this arrangement (for Netflix in this case) is that the firm as a whole resists any sort of dependence on any particular member or members of these small teams. If one team simply left the workforce one day, for any reason, they could be swiftly replaced with only minimal disruption.

Adobe (a series of acquisitions)

The combination of stocks of commodities is a larger stock of commodities.

Google (invent their own languages)

Shouldn't that count as a failure of contemporary, academic PLT? :-)

Taking "Go" as an example here: isn't it an effort by Google to, much like Netflix, standardize as commodities some of its internal coinage? In other words, to again homogenize its workforce and reduce transactional costs among internal groups?

Lastly, in your last paragraph:

The economic argument doesn't mesh with my interviews and analyses, and I admit I lost the thread. Early tech companies are largely engineering and prototype driven, and use technical leverage (e.g., AWS, big data frameworks, web scripting) to quickly iterate as a small team. [....]

And the premise of VC investing is that investors buy up those small teams by the dozens and by the hundreds because there is so little intrinsic to any team that has to do with its ultimate success or failure.

Partial mea culpa

I had believed, as part of the Ford-ism discussion, that you were describing mass mono-lingualism, e.g., in "the empirical evidence is of a really intense, industrial-style standardization of the workforce". So, I may have read too much into that.

We're pretty agreed that there have been pushes to standardization, though we may be on opposite sides of our opinions on that being good for collaboration and thus collective output. (E.g., I *like* languages that help collaboration.) At the same time, it's pretty clear that companies and individuals disagree widely on how that should work, especially when it comes down to particular tools and methodologies.

RE: so little intrinsic value to a team, I may be misinterpreting. Team + technology + traction + market are the basic components for a valuation. The more holes, e.g., earlier the company, the more team has to make up for that.

re I *like* languages that help collaboration.

I *like* languages that help collaboration.

Aside from very clear judgments such as those against Henry Ford having thugs bludgeon organizers, or any capitalist engaging in such asymmetric whole-life surveillance and interference -- aside from that kind of thing -- I am mainly just being descriptive. I have tried to explain clearly why so much academic PLT has so little traction.

As it happens I like languages that help collaboration, too.

If I am to be prescriptive, I would say that I think it is a tragic, socially regressive, even fascistic trend to define "collaboration" within the confines of wage labor. In other words, I think it is evil that Netflix controls (by means of doling out subsistence) a fleet of "small teams". I would rather see collaborating "small teams" just cranking out code in their free time, associating with whomever they please rather than with who their boss tells them to work with.

In fact, once you get to "collaboration" of the form where a boss is telling various teams how and when to interact, I think "collaboration" is a bad choice of words. It's mainly just following orders.

Thinking along these lines, BASIC might be a virtuous language in the sense that just about anyone can pick it up and get at least simple things done with it, if they put in a little effort.

Earning a Living?

I would rather see collaborating "small teams" just cranking out code in their free time, associating with whomever they please rather than with who their boss tells them to work with.

Why should programmers be any less entitled to earn a living writing code than any other trade or profession? Or do you think plumbers should fix my drains in their free time?

Small is good

I tend to mostly agree with Tom here, except for his use of the term "free time" which does seem to imply software development wouldn't be a professional activity. One of the advantages of a market economy is supposed to be the freedom that it affords its participants. This may be getting a little off topic for LtU, though.

sociopoliticoeconomics

Indeed, we're on the edge of the general problem of how to organize a society, which involves the inseparable pair of economics and politics; not that I'm not deeply interested, but it's not LtU fodder unless firmly rooted in how it affects PL design.

Programming languages for professionals

I am not sure there is anything wrong with programming languages being chosen for socioeconomic reasons. Programming is done by a team, with a history, and that will determine what is used. I would not use a language where there are only a small number of developers, because it makes it too hard to recruit people. I like Haskell, and I am starting to like Rust, but it would not make sense to force the other developers in the company to use something they don't want to, as they were recruited as Python developers. I have written the odd thing in Node, but then it is more or less down to me to maintain it.

Language Structure and Organisational Structure

I wonder if there is some similarity between the macro-structure of languages and the structure of the organisation that uses them? I would expect large teams to need modularity to divide up the work, but smaller teams may prefer functional. As such there may be merit in discussing which languages work better for what kinds of organisations, and where there might be gaps in the market for new languages to suit organisations that don't fit well with any of the current languages.

verily

Conway's law. I brought this up last year when trying (unsuccessfully) to explain to the then-executive director of the WMF why the various software initiatives being pushed by the centralized, top-down Wikimedia Foundation are totally inappropriate for the grassroots, bottom-up wikimedia sister projects such as Wikipedia.

language modularity support

What, then, are the kinds of language modularity that can be supported? A lone programmer working on a small program can benefit from modularity features (as a neophyte programmer in the 1970s, after writing a 600-line program in BASIC I went to my mother, who had been programming since the early 1950s, and said, "tell me about structured programming"). A lone programmer working on a medium-sized project (say, a few thousand lines of code) can benefit from somewhat larger-scale modularity features. All those things would seem more important for a team than an individual, but perhaps there are qualities of modularity support that apply to a team that would never apply to an individual? (I suppose by "functional" you mean Haskell-ish rather than Lisp-ish.)

speaking even as an individual programmer

i hate oppression too, BUT i don't want to lose any of the advances in componentization and modularity that we have gained over the years. Anybody who wants to maintain their own personal Big Ball of Dung is welcome to it, but PLEASE nobody throw my baby out with the bathwater.

The four life stages of companies

  1. Engineering-driven: "We make good product."
  2. Sales-driven: "We sell product, good or not."
  3. Finance-driven: "We make money."
  4. Survival-driven: "We keep going, even if we lose money every year."

Overly Simplistic

This is overly simplistic as it ignores market segmentation. There are many companies that focus on making money and having great engineering, like Rolls-Royce, McLaren, Ferrari, and many more from other sectors.

the point is

one of those things is probably, if forced to choose, the real primary driver. As in, the engineering (and/or luxury) for the companies you mentioned; otherwise they couldn't charge a premium.

Telling

The most telling trend, IMHO, is the classification of programmers as "software engineers". Software development is an art form, like mathematics. It is not engineering. It's not even remotely related to engineering.

Another tell is the continued existence of "OO shops" and "OO languages" despite the fact OO is completely and utterly debunked and proven unsound.

Finally, for all the progress made in academia, almost every system is moving backwards in time. Dynamic languages are getting more popular, and the few statically typed languages around have failed to incorporate decades-old ideas. The only exception I know of is Apple's Swift (which has pattern matching and real sum types).

I think the problem is simple. Industry wants processes they can manage. But by definition a REAL computer programmer necessarily NEVER repeats any pattern enough times for it to be manageable by industrial management processes, for the simple reason that if they're a good programmer using good tools, they should be automating that pattern. In other words, they're managing their processes themselves.

Forward into the mud

While I agree with much of that, I'm not on board with the characterization of dynamic as a step backward. On the contrary, I see static languages as part of the mindset that makes programming a form of engineering rather than a form of mathematics.

Seems to me mathematics (this is a point I'm developing a blog post about, though as usual it's taking me ages) — mathematics is fluid in contrast to the awful regimented static type systems of computer science because mathematics is a conversation between sapient mathematicians whereas programming is a futile attempt by the human programmer to have a "conversation" with the non-sapient computer. The non-sapience of the computer is part of what makes it valuable, but treating it as if it were a person is a guaranteed fail. The ball-of-mud quality of Lisp works because it minimizes the programmer's conversation with the computer, and dynamicity is part of that.

It's no accident, I think, that dynamic programming came from academia, where sapient academics were already engaged in these sorts of intellectual conversations with each other. Of course, Haskell has also come from academia, but I see that as a simple misreading of why programming lacks the fluidity of mathematics: if you think programming lacks the fluidity of mathematics because it's not dealing with the same concepts, you might well try to create a PL that explicitly works with more traditional mathematical concepts and end up with Haskell; whereas I reckon where programming gets into trouble is in trying to "explain" concepts to a computing device at all.

Mathematics isn't dynamically typed

I empathize with your comment that static typing disciplines can obstruct the fluidity that exists in mathematics, but certainly mathematics isn't anything like dynamically typed. In mathematics, there is no run-time. Even if there aren't explicit typing annotations, a mathematician can tell you what possible values inhabit each expression.

Standard mathematics is certainly unityped

certainly mathematics isn't anything like dynamically typed

(emphasis mine).

I know what you mean: the standard foundational formalism of mathematics (first-order logic and ZFC set theory) is in fact unityped both statically and (arguably) dynamically. Mathematicians typically (but not always) have the sense not to exploit that, and some complain enough that they bother using different foundations.

All variables have the same "sort" (set), hence the logic is statically unityped. Those variables then range over sets, whose elements are sets, and so on recursively. I'd say that while there is no "runtime" in mathematics, one can substitute variables with values.

Everything else is encoded as a set, but formally "0 belongs to 1" is a well-formed proposition that can be true or false, depending on how 0 and 1 are encoded into sets. But exactly as John Shutt wrote, mathematicians typically don't ask these questions.
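
To make the encoding-dependence concrete, the usual von Neumann construction of the naturals (one standard encoding among several) is

    \[ 0 := \varnothing , \qquad 1 := \{0\} , \qquad 2 := \{0, 1\} , \qquad n + 1 := n \cup \{n\} , \]

under which "0 belongs to 1" comes out true and, more generally, m < n holds exactly when m ∈ n; under Zermelo's alternative encoding, where n + 1 := {n}, the proposition "0 belongs to 2" is instead false.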

True

You're right. I guess you can view ZFC as dynamically typed. As you say, substitution works. I had more in mind the practice of mathematics and was responding to, e.g., this:

mathematics is fluid in contrast to the awful regimented static type systems of computer science because mathematics is a conversation between sapient mathematicians

Almost every branch of mathematics that comes to mind starts by defining / constructing a space of values under study (topological space, measure space, metric space, ...). Then each theorem starts by naming assumptions about what values belong to what space. It feels quite statically typed to me.

There are stories of mathematicians who write things like "m element of n" in place of "m < n", but that just feels like a failure to encapsulate the implementation details of an abstraction. I haven't met anyone who considers it good practice.

The uni-typed foundation has advantages in terms of being able to "roll your own" type system as needed. But I'd guess that this isn't done very frequently. Mostly the needed "types" are standard and already constructed elsewhere. I'd argue that the fluidity of mathematics has more to do with the implicit casts (little proofs that mathematicians can insert implicitly) than with a lack of types.

(But point taken: it's certainly never a good idea to use the word certainly the way I did.)

Good quote

"The ball-of-mud quality of Lisp works because it minimizes the programmer's conversation with the computer, and dynamicity is part of that."

Minimizing the programmer's conversation with the computer, i.e. putting in ONLY what makes the program work, and not a line more to check the correctness of the result, might seem counter-intuitive, but there is a different kind of reliability and creativity that comes from having uncluttered, easier-to-read, easier-to-change code.

I know it sounds funny calling Lisp "easy to read" but short is easy, and that's my point. The language could be Python or anything else dynamic and still make my point.

errrrrr... tell that to

e.g. the Mars Climate Orbiter etc.

to boldly go

Mistakes are a consequence of trying to do something new; you can substantially eliminate them by never doing anything that isn't already thoroughly understood. As I recall (from someone from NASA speaking many years ago at Stellafane, where they didn't have to dumb down their technical explanation), the Hubble primary was ground with incredible precision to the wrong conic section.

Software development does

Software development does have processes that can be managed (that's one among many), but these processes involve much more overhead to get results predictable enough to qualify as engineering. Given there's no regulatory burden mandating this overhead, less rigorous approaches have outcompeted the engineering approach in time to market, and have been sufficient to gain momentum despite the lower quality.

+1ish

When people say there is no such thing as software engineering, it drives me utterly batty, because one result of that is confirmation of all the utterly stupid, bad, self-defeating habits programmers demonstrate day in, day out in just about every org I've ever been in or worked with. There are engineers out there who measure and test and project and simulate and sketch and try to boil it all down etc., and they should be held up as good examples, rather than told they are no true Scotsman.

process

If there was one thing I had to pick that really differentiates software engineering from other engineering, it'd be the bizarre obsession with "process". Open a mechanical engineering textbook and you find information about how to design mechanisms. Open a "software engineering" textbook and you find chapters devoted to process and management that most other engineering disciplines relegate to a project management textbook.

Interesting observation.

Interesting observation. Perhaps it's partly because other engineering disciplines have obvious physical limitations that limit the expressible solutions. Beyond inherent incompleteness, that doesn't seem to be the case with programming/computation, so perhaps "process" is a way of enforcing reasonable limits on the unaware?

limitations

Limitations I've faced (off the top of my head):

  • Memory
  • Bandwidth
  • Disk-space (or nonvolatile storage in general)
  • Maximum allowable processing time
  • ... (probably others I can't think of right now)

The fact that "space/time tradeoff" is a classic software design consideration implies that there are limits that we work within when developing software (just as, for example, mechanical engineers make strength/weight tradeoffs)
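
As a toy illustration of that tradeoff (a generic Python sketch, not tied to any particular system), caching a function's results spends memory to avoid recomputation:

    import functools

    @functools.lru_cache(maxsize=None)   # unbounded cache: spend memory...
    def fib(n: int) -> int:
        # ...to save time: naive recursion is exponential, memoized it is linear
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))  # instant with the cache; effectively never finishes without it

Drop the decorator and the same definition sits at the opposite corner of the tradeoff: minimal memory, exponential time.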

My own suspicion is that "process" became popular because software lacked the kind of analytical predictive models that other disciplines used to mitigate the risk of generating incorrect products (if you can't control the product, then try to control the process). Funnily enough, the reaction against that--the agile movement--has resulted in other engineering disciplines thinking more about how they can respond to change.

The fact that "space/time

The fact that "space/time tradeoff" is a classic software design consideration implies that there are limits that we work within when developing software (just as, for example, mechanical engineers make strength/weight tradeoffs)

Certainly there are desiderata in terms of space/time, but these are rarely hard-and-fast limits, and where they do matter a great deal, software development tends towards engineering, i.e. designing a DBMS, big data, hard real-time, etc. Most software development isn't in this category.

I suppose what I was getting at by appealing to "expressible solutions" was that the number of ways of structuring programs with the same space/time tradeoffs seems significantly greater than the ways of structuring circuits that achieve the same functions with the same circuit cost. Circuit design languages are so restrictive as compared to general purpose languages for good reasons!

So a focus on "process" is perhaps to tame this expressiveness into a smaller, more consistent, more auditable set of core functions?

The trade off I'm most interested in is

programmer hours. One might sacrifice speed or memory in order to get a solution out the door faster.

why "process"

I think an archaeological examination of the focus on "process" in academic and quasi-academic computer science would lead back to the NATO declaration of a "software crisis".

That action then led to a few decades, at least, of lots of public and some private spending for research aimed at identifying, diagnosing, analyzing, and curing the alleged crisis.

Pointers at early history:

https://en.wikipedia.org/wiki/Software_crisis

Here they are in 1984:

https://en.wikipedia.org/wiki/Software_Engineering_Institute

and 1987:

http://guppylake.com/nsb/WarSpy/ ("A Spy in the house of war", Nathaniel S. Borenstein, Bulletin of the Atomic Scientists, April 1989)

Around that latter period emerge new lines of thinking that are initially still in search of a formal "process" but which are beginning to make that "process" lighter weight:

https://en.wikipedia.org/wiki/Agile_software_development#Relationship_to_DevOps

Anecdotally, as I recall, at both software start-ups and established big firms in the valley, process was regarded as a kind of pure cost -- a necessary, often inefficient routine of coordinating groups -- the rules of process to be honored in their breach as much as anything else. People were not interviewed and hired with respect to their familiarity or skill level with "process". More typically, in the first few days of a job, people would hand you slightly out-of-date printouts of general information about which revision control archives to use, how to check out a tree, how to check in changes, what time the weekly meeting was, and so on.

The modern trend towards "devops" and siloed small teams seems to me to reduce process further AND to shift the focus of what remains towards keeping the areas of responsibilities of programmers narrowly confined. In other words, it is much more a tool of personnel management, less some strategy for solving a hard technical problem.

The process racket and strong typing

Come to think of it, one of the major premises of decades of academic fascination with typing systems seems to be that eliminating errors from programs is a critical, hard problem --- a premise that is straight out of the NATO software crisis agenda.

Process can be personal.

At one place where I've worked, the "software management process" was that every major module was "owned" by its lead and assisted by two to five juniors depending on how big it was. On average we'd be the Lead for one module and the Junior for two or three others - closely related to the one you were lead on if possible. For example my office-mate Simon got senior for UI development, junior for design, and junior for documentation.

New programmers came on as junior in two modules, after a while spent getting familiar picked up another module, and after another while, if they were any good, they were offered the lead on something. The best coders got lead on two things and junior on one.

I got lead for natural-language analysis and quality assurance (QA wasn't really a module, but we kind of treated it like one), and junior for ontology development. In practice that meant I spent most of my time on language modules because at this shop QA was a breeze; the code quality was really good. Ontology (which was mostly database design about how to organize knowledge for agent access) rarely had any work for me because its lead was completely on top of it. In fact he spent most of his time working on his other lead because ontology was usually quiescent.

It was very personal, and it worked very well. People weren't very interchangeable, and felt real ownership of their modules, right down to the fact that if they were sloppy or made bad design choices it was themselves, and not anybody else, who'd feel the eventual pain. If one of the devs quit or had to scale back for personal reasons (such as childbirth or convalescence after an injury) one of the juniors for that dev's module or modules would usually move up to lead for the duration. Most people took junior for the modules they were most interested in, which was kind of like apprenticeship to become the lead for that module at some point in the future.

The engineers were not interchangeable, but I felt like we were more productive than I'd ever been elsewhere. We certainly had better code to work with than I've ever had at any other workplace. But somebody quitting or even scaling back due to personal life issues (like childbirth, or convalescence after getting hit by a truck) was a major event that had to be considered carefully and managed - it could be a crisis, but usually wasn't because most modules were fairly bug-free at any given point in time - we were implementing new features more often than we were bug-squashing.

That place is my shining counterexample to the drive to commoditize programming - and it really worked, for that shop. But there weren't a whole lot of devs - only 22 of us, I think, when the company was finally bought out.

re process can be personal

shining counterexample

I appreciate that you liked that job, but all the details you give of it point to a commodification of programmer labor. You say that programmers weren't "interchangeable" but you describe that they were, and that the transaction cost of swapping one out was a definite but finite amount of careful consideration and management. The same is true if you want to replace a carpenter or a sous chef (both commodity professions that tend to be highly social and are often personally fulfilling).

so what?

i haven't tried to go back and figure out, but has anybody claimed commodification is either good or bad or neither or both?

re so what?

has anybody claimed commodification is either good or bad or neither or both?

My claim is that the commoditization of programmer labor implies a fairly low bar for how "sophisticated" or hard-to-master a successful programming language can be, at least if "success" is meant to be some kind of large commercial impact. The academic enthusiasm for ever more dizzying and subtle heights of abstraction may well prove permanently, more or less, beside the point.

(Of course there are always, at the margins, exceptions to that kind of rule.)

For me, the commodification of labor in general is a historic, long-established fact and there is little point to deciding if it is good or bad. There is no going back. The important thing is that it has a kind of internal logic to it that can help to understand the form and function of human society and perhaps some of its general trajectories.

No abstraction

Java seems to be designed to totally avoid abstraction and context, so that one can understand a snippet of code without context (except for the referenced classes), not much less so than in Fortran.

I think that's the whole point - large scale programming is tedious because they don't trust programmers with abstraction or context.

How many modern languages have macros? How many have anything more than accidental meta-programming?

Companies are avoiding abstraction and language designers are catering to that lack of abstraction.

re no abstraction

Interesting observation.

About this part:

large scale programming is tedious because they don't trust programmers with abstraction or context.

I don't know that I would frame it as a question of trust or even as a conscious choice the companies make. The specific history of Java strongly suggests otherwise (see below).

If you want to be reductionist about it I think it would make more sense to look at the complicated competition among rival firms.

Java:

Java came out of a research lab at Sun and in its earliest incarnation was called Oak. Oak was originally conceived as a tiny, fast, garbage-collected, object-oriented language for embedded systems. Its conservative approach to abstraction related to two design goals for an embedded system programming language: 1) It had to be fast on slow and simple CPUs and able to run in tight memory; 2) It had to be "crash proof" in the sense that run-time failures due to programming errors should always be detected and handled as exceptions, never allowed to cause random data corruption or executing corrupted code.

I don't know the motive of Gosling et al. for setting themselves those goals. Anecdotally, during Gosling's day at CMU, a joke that was going around the CS department, about the upcoming networked world of embedded systems all around us, was "If a thermostat isn't Turing complete, I don't even want to touch it." When I first saw Oak, I did kind of assume without proof that Gosling was riffing on that sensibility from CMU.

My impression from quiet inter-company meetings near Sun was that Oak had pretty much stalled. I got the sense the research group at Sun had faint existential worries, but a lot of swagger to push past them. Larry Augustin had just quietly and correctly proved to the satisfaction of investors that the new Linux kernel, combined with the GNU user-space, combined with PC hardware's rise-from-below, would probably kill Sun's hardware business. The Valley had put a lot of sweat and treasure into anti-trust actions against Microsoft, scoring some mostly Pyrrhic victories. Mosaic was in circulation. Netscape was a very well funded start-up that had not yet released its first products.

The inter-company discussions in the Valley started to congeal around a vague plan to (a) Somehow make browsers into a portable platform for desktop applications; (b) Thus displace Microsoft's dominance on the desktop; (c) Use the web server to renew Silicon Valley's strong suit of client-server computing; (d) Leverage the web to finally actualize the long-standing dream of moving consumer transactions to the Internet, helping Silicon Valley expand into financial industries; (e) Standardize around a semi-proprietary "ubiquitous" platform -- a handful of programming languages and OS features, hardware independent, running on your wrist watch, your thermostat, your laptop, your browser, your web server, and who knows where else.

You can see they were very good at predicting the big themes of where the industry would go, but not especially good at leading and leveraging those changes according to plan.

Java, famously, early on, was conceived as the language and run-time system for "hardware and OS-independent" desktop apps.

From what I heard, it was for that reason more than any that the plug was pulled on an early internal-to-Netscape effort to inject Scheme into browsers as an extension language. The directive came down that the extension language would have to "look like" and have to somehow complement Java.

At the same time, the plan to create a ubiquitous platform and to spread to the back-end and to embedded systems led to work on Java compilers, mainly (as I recall -- take this with a grain of salt) at a company called Cygnus and at IBM.

It was mainly IBM that moved to try to become early leaders in middle-ware, consistent with their general strategy of being system integrators. As I recall, though I did not track this part closely, I think IBM was the leader that brought Java to early prominence in the emerging field of middle-ware. Sun was very accommodating of IBM's lead.

To get to the point about industry motives for picking Java:

(a) There were no other ready-to-go options for those decision makers. That was a big advantage to Oak: it already had years of work behind it and a working implementation by the time it was reborn as Java. What other choice was there?

(b) None of the business decision making in that history had time, really, to ponder such lofty ideas as the appropriate level of abstraction to allow factory-line Java programmers. Instead, there was a lot of pressure from above for people to converge on Java because the cross-company talks envisioned a ubiquitous platform. And Java was the choice there because the Sun researchers talk a good game and because Oak was on the shelf.

Hardware Debugging is (was) Very Costly

The idea that software written in a specific language for embedded devices should fail gracefully when a bug is encountered isn't new, but is more a remnant of that time.

It is only during the last decade that debugging of code on an embedded device has become simple, and often you're still stepping through compiled machine code. In the old days, not so: your device would freeze and you were back to blinking LEDs to see what went wrong.

So, that requirement, at least, makes a lot of sense.

expressiveness

Most software development isn't in this category.

Really? It seems to me that most modern software is interactive, and therefore at least soft real-time in nature. And while desktop and server systems have effectively unlimited memory and storage, mobile devices--which are a huge market these days--are a lot more constrained.

So a focus on "process" is perhaps to tame this expressiveness into a smaller, more consistent, more auditable set of core functions?

But you can do that without resorting to "process". Software developers just choose not to. For example, hardware designers typically focus on synchronous logic models, because asynchronous logic is harder to design due to its nondeterministic nature. In contrast, software developers happily use wildly nondeterministic asynchronously interleaved threads. For whatever reason (presumably historical differences in the development of hardware and software design as disciplines), software developers deliberately choose more expressive, but consequently more complex, design spaces to work within.
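
As a toy illustration of that nondeterminism (a deliberately exaggerated Python sketch; the sleep(0) merely widens the window between read and write so the race shows up reliably):

    import threading
    import time

    counter = 0

    def bump(times):
        global counter
        for _ in range(times):
            value = counter      # read shared state
            time.sleep(0)        # yield; another thread may interleave here
            counter = value + 1  # write back a possibly stale value

    threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # "should" be 4000, but updates get lost and the result varies per run

A synchronous-logic mindset would instead funnel every update through one owner (a lock, a single thread, a clocked step), trading expressiveness for determinism.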

Also, to be clear, I don't think that having some kind of process in software development is a bad thing. It can be an important part of engineering a useful system. I just think it's weird that software engineering as a field got so obsessive about it, when other engineering disciplines just treated it as sort of a background matter.

Really? It seems to me that

Really? It seems to me that most modern software is interactive, and therefore at least soft real-time in nature.

I don't think this qualifies under what we were talking about. The interactivity needed in a UI isn't a hard property to achieve: you just run the UI on a separate thread and send it messages. This isn't in the same category of problem as designing a program around data structures and algorithms to meet strict resource limits in space or time.
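
A minimal sketch of that structure in plain Python (standing in for whatever GUI toolkit is actually in use): the UI loop only drains a message queue, so it stays responsive no matter how slow the worker is.

    import queue
    import threading
    import time

    messages = queue.Queue()

    def worker():
        for i in range(3):
            time.sleep(0.5)                      # simulate slow work
            messages.put(f"progress {i + 1}/3")  # never touch the UI directly
        messages.put("done")

    threading.Thread(target=worker, daemon=True).start()

    # The "UI thread": handles messages as they arrive and nothing else.
    while True:
        msg = messages.get()
        print("UI update:", msg)
        if msg == "done":
            break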

And while desktop and server systems have effectively unlimited memory and storage, mobile devices--which are a huge market these days--are a lot more constrained.

Mobile devices are just as constrained as desktops of 10 years ago, which is to say, "not very". Microcontroller development is much closer to engineering when a certain quality result is desired, but this is hardly a common software development activity.

I just think it's weird that software engineering as a field got so obsessive about it, when other engineering disciplines just treated it as sort of a background matter.

Given your previous point that programmers chose the more expressive but complicated non-deterministic asynchronous models to build upon, it doesn't seem strange. If they're not willing to constrain the model to make it tractable without a process, it seems process would be the only way to tame it.

process redux

The interactivity needed in a UI isn't a hard property to achieve: you just run the UI on a separate thread and send it messages. This isn't in the same category of problem as designing a program around data structures and algorithms to meet strict resource limits in space or time.

And not every mechanism is designed under the strict resource constraints required by, for example, launching into space. As a consequence, the amount of "process" applied to engineering those mechanisms varies depending on the constraints under which it is designed. The same applies to software.

If they're not willing to constrain the model to make it tractable without a process, it seems process would be the only way to tame it.

I think that my fundamental objection to that line of argument is that, at least in my experience, engineering "process" (in software or in other disciplines) is not about "taming the space of expressible solutions". Its focus is on making sure that the requirements of the system are understood, that the design will actually meet the requirements, and that the delivered product is what was designed (and actually meets the requirements). So it's mostly about taming human fallibility when it comes to describing what people want or need, and when it comes to creating a design that does what's intended (that latter part might include your "space of expressible solutions", but I think it covers more than what you're talking about).

If you wanted to make an argument about the focus on "process" in software development, I suppose you could claim that software systems have much more nonlinear input/output behavior than other engineered systems, and are consequently more susceptible to human error in the specification of requirements and design. Perhaps that's what you're getting at when you talk about the "space of expressible solutions". I don't think it's got anything to do with a lack of physical constraints though. It's simply the nature of discrete systems (whether realized in software or hardware).

As an aside, I think that one of the reasons that Agile approaches have become so popular is that describing (let alone implementing) the desired behavior of discrete systems is something people struggle with. So building in a rapid feedback loop to fine-tune that behavior can be a good way to "tame the human fallibility" in explaining what a system should do.

Engineering

I've always understood engineering as a process to make sure that people don't make the same mistake twice. When you scale that up to a large number of people, and a large number of mistakes, then you get the heuristic families of design techniques that make up an engineering field. It is still just making sure that we've tested for the problems encountered before.

Software is weird though. How do we sit down and design a novel piece of software? The only approach that I've ever seen work is to burn through the first few prototypes quickly. Keep tearing them down and starting from scratch with a fresh perspective.

Teaching that to people is hard. It's easier to teach them to do what they're told, and then invest resources in avoiding the development of new software. Process is simply a technique for controlling (avoiding) novelty. Anyone can sell that to management: a way to sustain output and minimise risk by rehashing things that almost worked before. The very simple truth about the software development process is that it exists to prevent programming.

design

How do we sit down and design a novel piece of software?

How do we sit down and design any novel system?

The very simple truth about the software development process is that it exists to prevent programming.

Given that there isn't a single, well-defined "software development process", that seems like an over-broad generalization. Engineering process mostly exists as a way to ensure that, when creating a novel system, the system does what it's intended to do. What prevents it from doing what it's intended to do is human fallibility (the engineers--or programmers--didn't understand what the system was supposed to do, or understood but made a mistake in implementation). Some processes are more efficient than others at catching and eliminating mistakes. The useful ones are by no means about "avoiding novelty".

How do we sit down and

How do we sit down and design any novel system?

By understanding it well enough that it loses its novelty.

Some processes are more efficient than others at catching and eliminating mistakes.

Yes, and discussing them in general does not seem to be an over-broad generalization. Identifying, understanding, and thus removing the unknown from a system is common to those that are efficient at catching and removing mistakes.

software engineering

My point was that software is not really any different than any other kind of engineering design with regard to either novelty or process.

Understanding MEANS building.

By understanding it well enough that it loses its novelty.

Exactly. That is exactly why you have to sit down and build prototypes. You do as much design as you are confident of for each prototype, but building prototypes (and observing what's wrong with them or what problems you run into or things you can't do with that design) leads to a full understanding of the problem much faster and with much greater certainty than discussion and planning. Discussion and planning only help when you are talking about something simple that can be built out of standard and familiar parts.

It is only once you have a full understanding of the novel system that you can undertake a serious design effort for the version you're going to live with. And a full understanding requires building prototypes.

Novelty

Yes, and my point is this: if we understand a novel system well enough that it loses its novelty, then (if we're decent programmers) we *encode* that understanding in generative software. That in turn is a novel problem, but the original, well-understood problem is never coded by hand again, since we have automated it.

Is this not the whole point of higher order type systems, to have a unified language that can automate *any* pattern that is repeated often enough to comprehend?

And if we do this systematically, how is it possible to consider software development anything but an art form? At every step where we could be engineering, we eliminate the engineering by delegating it to software whose construction is itself novel.

engineering & art

So the engineering moves to ever-higher levels of abstraction. That's not the same as saying that it doesn't happen, or that art is what's being done instead. Of course, without understanding what you view as the difference between art and engineering (i.e., what the one does that the other does not), it's hard to say whether we're disagreeing, or just talking past each other.

I've just heard a talk by

I've just heard a talk by Walter Tichy on empirical evaluations of software processes. He observed that, in empirical evaluations, most processes fail to produce a measurable benefit (hence are typically not published, sadly). He conjectured that developers are already working at maximum productivity anyway, but I'm not sure what the baseline was, whether "no process" or "a standard process".

Contradictory

If, as you say, programming is a craft bordering on an art form, then there is no such thing as an "unsound" technique.

I happen to like OO. It encourages programmers to organize their programs around APIs. The public part of an interface is the public API, and that's separate from the private implementation.

A class is a module that also has the advantage of encouraging each instance to be as independent as possible.

Useful.

Calling it "unsound" makes it seem as if you found some less-than-important mathematical property of OO and you pretend that it's the only thing that matters..

Modularity encourages

Modularity encourages organization, not OO specifically. OO is simply a particular kind of modularity, and one not well suited to many scenarios.

Can you give me an example where

modules would work but classes wouldn't?

"To Work"

Depends on your definition of "to work", but here are a few of the more obvious indicators that other modular techniques would probably handle your case better than OO or classes (the double-dispatch case is sketched just after the list):

  • the presence of friends,
  • the use of double dispatch,
  • resorting to the visitor pattern,
  • the occurrence of F-bounded generic binary methods,
  • "dependency injection".

reaction

My solution to the friend class problem is:
Don't have the internal representation hidden by the language; instead, name its methods and variables in a way that marks them as implementation-dependent. I personally name them with a leading underbar.

Double dispatch is a problem in general with languages where the types aren't known at compile time. Though I don't know a language which solves it this way, I'd favor a guard-clause syntax in such a context if types really aren't available. I always fall back on Prolog as my example of a language where types aren't known statically (or even named) and yet which seems to handle the problems arising from that cleanly.
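
Something in that spirit can at least be approximated in today's Python, as a library hack rather than real guard-clause syntax (all the names below are made up): handlers are tried in order and the first one whose guard accepts the runtime arguments wins.

    _handlers = []

    def when(guard):
        # Register a handler together with the guard that must accept the arguments.
        def register(fn):
            _handlers.append((guard, fn))
            return fn
        return register

    def combine(a, b):
        for guard, fn in _handlers:
            if guard(a, b):
                return fn(a, b)
        raise TypeError("no handler for %r, %r" % (a, b))

    @when(lambda a, b: isinstance(a, int) and isinstance(b, int))
    def _both_ints(a, b):
        return a + b

    @when(lambda a, b: isinstance(a, str))
    def _string_first(a, b):
        return a + str(b)

    print(combine(1, 2))      # 3
    print(combine("x", 3))    # "x3"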

I don't know what to say about the visitor pattern.
1) Java's closing of classes always offended me.
BUT
2) migrating functions back to the root class and other primitive classes in Smalltalk and Ruby always seemed to me a library-hell disaster waiting to happen, and possibly a result of not supplying raw functions or macros... i.e., OO is good but it's not sufficient.
3) I rather enjoy working with free-form languages that let me treat individual objects as slots and access fields from outside, add new fields and add new methods. Such anarchy can't give you the most efficient execution mechanism, but with a smart tracing JIT it doesn't have to be deadly. And it would be nicest to have multiple sorts of mechanisms.
4) I have used the letter/envelope pattern in C++ a lot of times, sometimes to speed up compilation! Boilerplate offends me, but I admit that it works well enough.

My feeling about types (which solve the double dispatch problem, if only in static cases) is that they're useful, but they make a compiler much less lightweight. A single person can implement a wonderful language that's not statically typed. It takes much more work to make a typed system, and there's rigidity at all levels waiting to trip you up.

***I don't know what F-bounded generic binary methods are, please explain.***

I've never knowingly used the "dependency injection" pattern and so don't know what's wrong with it.

resorting to the visitor

resorting to the visitor pattern,

Had to look this one up to check what the SE jargon actually meant: it looks like an OO solution for the Expression Problem? Would that be quite a tongue-in-cheek reading of "to work", or are there nice ways to solve this that I can plug into a real compiler today?

classic but with problems

You read it right. It's a classic OO pattern (Gang of Four), but it's clearly ugly: void return types are a symptom of abstraction failure.

void return types

I don't believe that procedures returning void necessarily signal an abstraction failure. They signal procedural as opposed to functional abstraction. That is, abstracting verbs as opposed to nouns.

Procedural abstraction is not failure. It is in fact a very reasonable paradigm when the program is completely centered around the manipulation and transformation of a very large and carefully designed data structure.

You may argue that such a program has no abstraction, but that's only because you're not counting procedural abstraction. Procedural abstraction is about making new verbs, not new nouns.

At some point when it's developed, if a larger program wants to use its structure as a noun, sure you can wrap it up in an object and export an interface. But when dealing with a large structure, you need a lot of verbs that say what action to do with it next, in order to have readable and understandable code. Otherwise you wind up with single procedures that are thousand-line monstrosities doing the same manipulations at many different points in the body.
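
A toy sketch of that style in Python (the structure and the verbs are invented for illustration): the procedures return nothing useful and exist purely as named verbs over one carefully designed structure.

    # The single, carefully designed structure: the only "noun" in the program.
    ledger = {"accounts": {}, "log": []}

    def open_account(name):
        ledger["accounts"][name] = 0
        ledger["log"].append(("open", name))

    def deposit(name, amount):
        ledger["accounts"][name] += amount
        ledger["log"].append(("deposit", name, amount))

    def close_account(name):
        del ledger["accounts"][name]
        ledger["log"].append(("close", name))

    open_account("alice")
    deposit("alice", 100)
    close_account("alice")
    print(ledger["log"])

Readable code here comes from the vocabulary of verbs, not from wrapping the ledger in an object.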

Basic "Visitor" doesn't solve the expression problem

You're almost right, but not quite. The basic "Visitor" doesn't solve the expression problem, though it is related.

As you probably know, the expression problem has two standard "non-solutions": the object-oriented and the functional decomposition. In the object-oriented one you can easily add variants, in the functional one you can easily add functions processing the variants.

The visitor pattern just allows encoding the functional decomposition in an OO language: you can easily add functions processing the variants, since they are encoded as separate visitors, but each visitor lists all the variants it handles and inherits from one interface enumerating those variants. And that specific interface is mentioned by the objects themselves.
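
A minimal Python sketch of that encoding (names invented for illustration): the visitor's set of visit methods fixes the variants, so new operations are easy to add as new visitors, while a new variant would force a change to every existing visitor. In a statically typed OO language the two visitors below would also implement an explicit Visitor interface listing visit_lit and visit_add; Python's duck typing lets the sketch omit it.

    class Lit:
        def __init__(self, value):
            self.value = value
        def accept(self, visitor):
            return visitor.visit_lit(self)

    class Add:
        def __init__(self, left, right):
            self.left, self.right = left, right
        def accept(self, visitor):
            return visitor.visit_add(self)

    class Eval:                      # one "function over the variants"
        def visit_lit(self, lit):
            return lit.value
        def visit_add(self, add):
            return add.left.accept(self) + add.right.accept(self)

    class Show:                      # another one, added without touching Lit or Add
        def visit_lit(self, lit):
            return str(lit.value)
        def visit_add(self, add):
            return "(%s + %s)" % (add.left.accept(self), add.right.accept(self))

    e = Add(Lit(1), Lit(2))
    print(e.accept(Eval()), e.accept(Show()))   # 3 (1 + 2)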

I know there are visitor-based solutions to the expression problem, but they are not trivial. I think the first one is in this paper (Sec. 4, Fig. 6 and 7; the claim to novelty is in Sec. 1.4).
http://www.cs.au.dk/~eernst/tool08/papers/ecoop04-torgersen.pdf

Interesting

I may have read that first time around without really appreciating it. It certainly looks interesting now. Thanks for the pointer.

I think the relative

I think the relative advantages of objects vs. abstract data types have been hashed out many times at this point. Each yields a type of modular extensibility, but which is ill-suited to scenarios where its dual excels.

contact me for snake oil franchises

And there are almost as many times when somebody professes to have found a solution to it. Does anybody believe there really has been a good solution yet? Or does it just push the "it depends on your context" answer up another level on the teetering stack of abstractions and paradigms?

Unsound technique

Oil painting still requires technical knowledge; so does photography. Just because it's an art doesn't mean the underlying technology doesn't have rules.

at least half the time

one can't be a good artist before being good technically.

Heretical!

You keep saying that Objects are "Unsound" because you found a problem that the interface doesn't solve in the simplest (and frankly the most usable) object system.

And then you draw the untenable conclusion that using class systems, i.e. programming in OO languages, is proof of incompetence.

It's such an outlandish thing you're saying.

Basically, you're talking nonsense. Yes, you found one problem that would require a different sort of solution in a duck-typed language, something it doesn't do all that easily.

So what?

That doesn't make such object systems, overall, worse than the alternatives; I would say quite the opposite. YOUR PROBLEM ISN'T THAT IMPORTANT TO ME.

Is that clear?

A working definition.

The dividing line has been debated many times before, so I cannot offer any fresh insight on the subject. But in the context of this discussion it might be relevant to point out that while both arts and engineering are based on work within a technical domain, a salient difference is how their worth can be evaluated.

Engineering processes are valuable because they allow a series of locally-evaluable steps to arrive at a desired end-product. Art is valuable precisely because there are no locally evaluable steps, yet it arrives at a conclusion that can be seen to have value. One is the equivalent of carrying a flashlight, while the other embraces the joyful surprises resulting from exploration in the dark.

Not all programming is engineering. But some programming is. The dividing line runs somewhere through software development.

software engineering

Software development is an art form, like mathematics. It is not engineering. It's not even remotely related to engineering.

I'm not sure what you think "engineering" is. But my experience, having worked as a software engineer, an electronic engineer, and an aerospace engineer, is that engineering is (at its core) about solving design problems using a pragmatic mix of whatever art, science, mathematics, empirical experimentation, craft experience, and design heuristics are available to get the job done. Software development is no different than other engineering disciplines in that regard. But perhaps I'm missing something. What makes you say that software development is "not engineering" and "not remotely related to engineering"?

software engineering

Hear hear.

Programming languages are not chosen by programmers.

But programming languages are not chosen by programmers after all.

In the real world choosing a programming language for building a piece of software is a strategic decision that is made by evaluating different criteria.

How portable the programming language is, how much effort would be required to move the software to a different platform, how easy it would be for a new team to leverage the source code, etc.

In the real world programmers do not choose the programming language for building software.

It's the CTO who chooses the programming language and, then, the team of programmers for building the software (and yes, how easy it is to find a good team of programmers and how expensive it is to keep them happy are just more criteria in deciding on a given language).

Of course programmers may choose a programming language of their liking, and even become fanatics about it, and this will then shape how readily they will be hired in the future.

A strategic decision?

More like a whim or a fashion, much of the time. And this is true whether the choosers are developers or management.

People also matter

This thread started, as I understand it, from an observation about what programmers think of PLs, and quickly changed direction over the question of whether what programmers think of PLs has any effect on what language they end up programming in. This has led to a variety of most interesting discussions, so I count it a success; but I would also suggest that what programmers think of PLs inevitably does matter, in the long run. Whatever authority-driven system may be in place, I guarantee it isn't pure; if it were it would fall apart, as human flexibility is always the safety net for such rigid systems. The ambiance from the peripheries works its way in toward the center through the cracks. (Look how holistic ideas from Naturphilosophie have worked their way into physics, from "fringe" status to occupying the high ground. At each paradigm crisis, those ideas were on scene looking for opportunities to gain ground. Now try googling "These are your father's parentheses". :-)

Accepting for a moment the hypothesis that programmer perception of PLs matters one way or another, what as a programmer does one want in a PL? For my part, I'm not interested in machismo per se, but I would really like a programming language such that programs I write in it will still work fifty years from now. I don't personally have any programs fifty years old (although I think my mother has some around somewhere), but I do have one about forty years old; it's on paper tape (a seriously rugged archival medium), written in a 1970s custom BASIC that could probably be converted without much trouble to any other traditional BASIC... if you can find a working traditional BASIC in 2016. I've programs thirty, twenty, and even fifteen years old that would be very difficult for me to run. The ones I'd be most likely able to resurrect were written in... C.

QBasic

Remember the QBasic that came with old DOS (6.2, if I recall correctly)? Well, here is a reincarnation that runs inside the browser :)

QBasic

Well, the lambda calculus is

Well, the lambda calculus is over 50 years old and still going, so that's probably a good foundation.

I like the lambda calculus a lot.

I like lambda calculus with gradual typing. I want to be able to tag any variable reference with one of four kinds of type annotation.

throw <type> meaning "if it isn't this type throw an exception."

assert <type> meaning "halt if it isn't this type or fail at compile time if you can definitely prove it ever won't be. It is permissible to issue a warning at build time if you can't prove it will be."

prove <type> meaning "fail at compile time unless you can prove it's of this type"

assume <type> meaning "Believe this will always be of this type because I say so. Fail at compile time if you can prove I'm wrong. It is permissible to issue a warning if you can't prove I'm right. But whether you can prove it or not use it as an axiom in further type proofs and optimizations. If the resulting program crashes it's because I screwed up."
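
Not the proposed annotations themselves, of course, but the two dynamic flavours can at least be approximated in today's Python to make the intent concrete (the helper names below are invented; "prove" and "assume" are inherently static, so they would need a checker that isn't shown here):

    def throw(ty, value):
        # throw <type>: raise an exception if the value isn't of this type.
        if not isinstance(value, ty):
            raise TypeError("expected %s, got %s" % (ty.__name__, type(value).__name__))
        return value

    def halt(ty, value):
        # assert <type>: halt (here, via AssertionError) if the value isn't this type.
        assert isinstance(value, ty), "expected %s" % ty.__name__
        return value

    def area(width, height):
        w = throw(float, width)    # a caller error becomes an exception
        h = halt(float, height)    # an internal invariant failure halts
        return w * h

    print(area(2.0, 3.0))   # 6.0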

ship it!

it would be fun to try to use.