Big questions

So, I've been (re)reading Hamming's /The Art of Doing Science and Engineering/, which includes the famous talk "You and Your Research". That's the one where he recommends thinking about the big questions in your field. So here's one that we haven't talked about in a while. It seems clear that more and more things are being automated, machine learning is improving, systems are becoming harder to tinker with, and so on. So for how long are we going to be programming in ways similar to those we are used to, which have been with us essentially since the dawn of computing? Clearly, some people will be programming as long as there are computers. But is the number of people churning code going to remain significant? In five years - of course. Ten? Most likely. Fifteen - I am not so sure. Twenty? I have no idea.

One thing I am sure of: as long as programming remains something many people do, there will be debates about static type checking.

Update: To put this in perspective - LtU turned fifteen last month. Wow.

Update 2: Take the poll!



Open questions in functional programming?

This is a topic I discussed with Edward Yang, more than a year ago, in the more specific context of "functional programming". He planned to do a blog post about it at the time, but I guess he hasn't gotten around to writing it yet, so someone here may be interested in, or amused by, the list I had prepared.

One thing that I find perplexing is that many of the questions in this list are not *formal* questions (in the sense in which mathematicians have formal questions; although many open questions in mathematics are actually informally formulated, because we don't know how to make them precise). Some other fields, some even very closely related to functional programming, have lists of rather formal open problems (the TLCA list of open problems), but I often feel that they are relatively "small" problems (except maybe "what is a good computational model of Univalence?") -- not counting "P = NP?", which is not really PL-related.
Some researchers also have a knack for hitting perplexing open problems that are easy to formulate formally, and it often comes from being very demanding about using the most rigorous tool in the most expressive way possible, even when the particular need at hand could be met by a quicker, less principled (compromise, workaround) solution.

The question about the K axiom has since been solved by Jesper Cockx.

# High-level questions

- Can we make it practical to write programs along with their specifications, and *guarantee* that they match?

- Can we make proof assistants so usable that all mathematicians could use them?

- Can we extend industrial-strength automated reasoning technology to higher-order logics, instead of encoding our problems into SAT/SMT solvers?

- Can we produce a usable effect-typing system?

- Can we produce a usable separation logic?

- Why are we consistently failing to deliver satisfying module systems to actual programming languages?

- Can we use functional programming technology to produce a good low-level programming language?

- Can our language theories accommodate live code update, variations (architecture-, configuration-, language-version-dependent code), and code distribution?

- What *are* the "right" presentation*s* of equality in dependently typed languages?

# More specific questions

- Can we regain principal type inference in realistic programming languages with dependent types? (This may require layering the type system.)

- What is the structure of the subclass of dependent pattern-matching that can be written without using the K axiom?

- Can we make reasoning about parametricity practical?

- Can we design practical *pure* languages, with no ambient non-termination effect?

- What is the right proportion of nominal typing / generativity in a type system for usable programming languages?

# Solved questions

- Can we produce functional programming languages that are competitive in performance with other languages for most tasks?

- Can our language theories accommodate and explain object-oriented programming?

- Can we integrate the main ideas of functional programming into the everyday practice of users of mainstream languages?

- Can we design programming languages with no ambient effect (except non-termination), and implement them as (non-derivable) libraries?

- Can our language theories account for incomplete programs with holes?

# Maybe-out-of-scope-maybe-underestimated questions

- Why do some languages/systems see wide-ish adoption while others don't? Besides expressivity, what are the success factors?

- What are good surface syntaxes to accommodate users' tastes, varied tooling, and convenient library-level/domain-specific abstractions/sublanguages?

- What are good programming systems to teach programming?

- Can we make developing good tooling easier?

- What are the right user interaction models for collaborative program construction and maintenance? Do programming languages and theories need to be adapted to remain pertinent with respect to new interaction models (mashups, code wikis, etc.)?

Some speculation

Can we extend industrial-strength automated reasoning technology to higher-order logics, instead of encoding our problems into SAT/SMT solvers?

The SAT community is already working on quantifiers from the other end, but these are quantifiers over finite domains. In SMT the issue is that you run into undecidability, but a lot of techniques to handle quantifiers have been developed. I think the best approach is to have a SAT/SMT solver as a tactic, integrated with the language so that if you don't manually specify a proof term, the SMT solver is invoked automatically (giving you the same streamlined workflow as a pure SMT approach, but with the ability to drop into a manual proof if necessary).
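
As a concrete illustration of that workflow, here is a minimal sketch in Lean 4, assuming its built-in omega decision procedure for linear integer arithmetic as a stand-in for an SMT backend (the theorem names are mine, chosen only for illustration):

-- The same statement, once discharged automatically by a decision procedure,
-- once by an explicit, manually written proof term.
theorem auto_version (n : Nat) : n < n + 1 := by
  omega

theorem manual_version (n : Nat) : n < n + 1 :=
  Nat.lt_succ_self n

The point is only the shape of the workflow: the automated route is the default, and the manual route stays available when automation fails.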

Why are we consistently failing to deliver satisfying module systems to actual programming languages?

I would say it's not entirely a failure: Scala has modules (objects).

Can our language theories accommodate live code update

I'm not sure if languages can or need to do much here. It's mostly a question of application architecture. Techniques like substituting one code pointer for another in a running program are never going to work well. The application needs to handle this in a sensible way, for example serialize all state, then convert the data representation to version n+1, then start up the new version (at least conceptually, the conversion could be done lazily when the new program accesses pieces of the data to minimize downtime). Even better is an event sourcing architecture so that you could run two versions of the code side by side for a while.
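
A minimal Haskell sketch of that "serialize, migrate, restart" idea, with entirely hypothetical state types (StateV1, StateV2, and migrate are illustrative names, not anyone's actual API):

-- Old and new in-memory state representations.
data StateV1 = StateV1 { counterV1 :: Int }
data StateV2 = StateV2 { counterV2 :: Int, labelV2 :: String }

-- The upgrade step: convert version n state to version n+1,
-- supplying defaults for fields that did not exist before.
migrate :: StateV1 -> StateV2
migrate (StateV1 n) = StateV2 n "migrated"

-- Conceptually, an upgrade is: serialize the old state, shut down,
-- start the new binary, and resume from (migrate oldState).
main :: IO ()
main = print (counterV2 (migrate (StateV1 41)))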

What *are* the "right" presentation*s* of equality in dependently typed languages? [...] Can we make reasoning about parametricity practical?

The answer to the first question is most likely HoTT. The answer to the second question is most likely internalized parametricity. An interesting question is whether these fit into a common framework. I think they do. Both are about functions that preserve something. In HoTT all functions preserve equality as defined in HoTT, with parametricity all functions preserve relations. They are both of the form if F x y then F (f x) (f y) where F depends on the type. Even in an ordinary dependently typed language you could define a special function space that preserves F:

A ~> B = (f : A -> B, F A x y -> F B (f x) (f y))

i.e. a function paired up with a proof that it preserves F, which is indexed by the type. There are two further tweaks needed:

(1) You want the function space of the second component to be the ~> function space too: F A x y ~> F B (f x) (f y)

(2) You want to support dependent functions (x:A) ~> B x.

So you end up with something like:

B : A ~> Type
(x:A) ~> B x = (f : (x:A) -> B_0 x, (e : F A x y) ~> F (B_1 e) (f x) (f y))

Note that B itself has ~> function space.
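
To make the shape of this ~> space concrete, here is a small sketch in Lean 4 notation of the *non-dependent* version only (the names Pres and eqPres are mine, F is kept abstract, and neither the dependent version (x:A) ~> B x nor the requirement that the proof component itself live in ~> is captured here):

structure Pres (F : (A : Type) → A → A → Prop) (A B : Type) where
  fn   : A → B                                  -- the underlying function
  prsv : ∀ x y, F A x y → F B (fn x) (fn y)     -- proof that fn preserves F

-- The HoTT-flavoured special case: every function preserves (ordinary) equality.
def eqPres {A B : Type} (f : A → B) : Pres (fun _ x y => x = y) A B :=
  { fn := f, prsv := fun _ _ h => congrArg f h }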

You can put a locally cartesian closed structure on this function space, and then HoTT and internalized parametricity overload normal lambda calculus syntax to work with this function space. Perhaps you can make this overloading parametric in F, i.e. the thing that's being preserved. You might even be able to support multiple such function spaces in the same language.

There are other F's that are interesting. HoTT uses equivalences for F on Type, i.e. two types A, B are related if there are f : A->B, g : B->A with f (g b) = b and g (f a) = a. This results in a groupoid structure. Maybe if you only require f (g b) = b you get a category structure.

If you leave both out, so you only require the existence of f : A->B, g : B->A, you get exactly the invariant functors. Given a Q : Type ~> Type, and given f : A->B, g : B->A, you would get f' : Q A -> Q B and g': Q B -> Q A. Of course this won't be locally cartesian closed, but it will be cartesian closed.

If you only require f : A->B, then you get the usual functors.

This is even related to type classes. If you have the List type constructor, then this preserves Eq-ness. If we have Eq A then we have Eq (List A).
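
A small Haskell sketch of the weaker F's just mentioned (the class name Invariant and the helper eqList are my own; base already ships an Eq a => Eq [a] instance, this only spells the idea out):

-- Invariant: only forward and backward maps are required (no laws stated here).
class Invariant q where
  invmap :: (a -> b) -> (b -> a) -> q a -> q b

-- The usual Functor needs only the forward map; lists are an instance of both.
instance Invariant [] where
  invmap f _ = map f

-- "List preserves Eq-ness": from equality on a we get equality on [a].
eqList :: (a -> a -> Bool) -> ([a] -> [a] -> Bool)
eqList eq xs ys = length xs == length ys && and (zipWith eq xs ys)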

Lots more things could fit into this framework, like unit systems (functions that preserve certain symmetries), continuity (functions that preserve closeness), differentiability (functions that preserve small perturbations), etc.

A more general mechanism to locally overload the syntax could be very useful in other cases too.

What is the right proportion of nominal typing / generativity in a type system for usable programming languages?

Generativity is basically implicit effects in the type system, so just like any other effect it should be (1) used as sparingly as possible and (2) explicit in the type. Explicit effects like F : Type -> SomeEffect Type, sure, but implicit ones with F : Type -> Type, no. F x should be equal to F x no matter what.
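
For the nominal-typing half of that question, a tiny Haskell illustration (my own example; the generative type-constructor case described above has no direct Haskell counterpart): a type synonym is purely structural, while a newtype introduces a fresh nominal type.

type Meters = Double                   -- structural: interchangeable with Double

newtype Kilograms = Kilograms Double   -- nominal: a distinct type

addMeters :: Meters -> Double -> Double
addMeters m d = m + d                  -- accepted: Meters is just Double

-- The analogous definition with Kilograms is rejected unless we unwrap it first:
-- addKg :: Kilograms -> Double -> Double
-- addKg k d = k + d                   -- type error: Kilograms is not Double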

HoTT as *an* equality rather than *the* equality

I was rather careful to formulate my question allowing several important notions of equality.

HoTT might be unsuitable as a sole notion of equality for programming, because the fact that equality conversion may incur arbitrary computations is not something I would be comfortable with, at least if it were implicit (and if all uses of equality must be explicit, well, life is hard). It may be possible to give good guarantees when manipulating h-sets, and to find a good language design that helps people ensure that the types they are programming with are indeed h-sets.

It has been suggested to consider a two-level system with a "rich" HoTT equality and a weaker, "strict" equality. In any case, this would be at the level of propositional equality. There remains the question of what is a good definitional equality for the system, and which operations it allows. It might be the case that programming can be done mostly using definitional equality, with propositional equality used in proofs.

the professionals are self destructing

But is the number of people churning code going to remain significant? In five years - of course. Ten? Most likely. Fifteen - I am not so sure. Twenty? I have no idea.

Programmers are already increasingly useless. This is inevitable.

To see why, consider agricultural labor:

I have read that in 1800, just shy of 90% of the U.S. working population worked in agriculture.

In 1900, only about 40% worked in agriculture.

In 2000, only 2.6% worked in agriculture.

This shift is the result of the blind process of capitalist competition and nothing more.

Socially, this shift is, in some ways, a disaster. Our food system is petroleum-dependent. Our produce is, on average, of poor nutritional quality. We have decimated the soil. We have emptied ancient aquifers. We have poisoned the Gulf of Mexico. We are killing off our pollinators. We stand just a few crises away from mass starvation.

Perhaps worst of all: there is essentially nothing individuals can do to escape this situation and reverse the problems.

Sure, a few lucky, eccentric people can escape to private paradises and hope nothing intrudes but the point is this: At every step of the way, our society has not only displaced smaller-scale, often individualistic agriculture -- our society has made community food security impossible for communities to secure.

We've not just stopped taking care of our own subsistence while industrial centralization does its thing. We have given up our freedom and capacity to take care of our own subsistence.

Why would computing be any other way?

At every step of the agricultural disaster, colleges and universities trained new experts to concentrate on ever more esoteric, ever more "sophisticated" problems.

Those professionals had an economic imperative, at every step: get rid of farm workers.

Get more food commodities out of the workers you can't get rid of.

Figure out how to mass produce and mass market industrial food.

Every step of the way the professionals competed to see who could jump the highest when told to jump. That's where the money and the acclaim were. That's what the competition among the professionals required. If you wanted to be a professional but would not help accelerate the agricultural disaster? You would not find work.

In the 1970s there was a brief dream that we might expand the capacity of communities to build computing systems and program for their own needs. This was naive.

The vast resources of industrial capital, and the experts who aspire to it, were deployed to eliminate as much labor as possible from computing, and to sell the greatest possible amount of computing commodities, at the highest profit, using the least amount of labor possible.

Our devices are locked. Source code is unavailable. Computing is increasingly centralized on massive industrial server farms. Socially, we are only a crisis or two away from a complete disaster when these systems fail.

As in agriculture, we are utterly dependent on precarious, ill-conceived, highly centralized production -- and we have sacrificed our liberty and our capacity to get by without those systems.

Just highlighting key

Just highlighting key points:

Our devices are locked. Source code is unavailable. Computing is increasingly centralized on massive industrial server farms. Socially, we are only a crisis or two away from a complete disaster when these systems fail.

This is such a pessimistic

This is such a pessimistic view of the world. It's like saying self-driving cars will fail, when they already exist and are poised to change the world. Where PL fails us, ML is picking up the slack.

We have been on the course of industrialization since the neolithic revolution. Turns out people want to specialize rather than be hunter gatherers; it has worked well for us, and the impending disasters predicted every decade have failed to materialize.

re this is such a pessimistic

This is such a pessimistic view of the world.

The hell you say. It implies that capitalism must collapse under its own weight as a direct consequence of its internal rules combined with its positive effects of developing production.

Was it pessimistic to say "Hey, the feudal lords are doomed!"?

We have been on the course of industrialization since the neolithic revolution.

Wrong. We have been on a course of technological advancement for all of history.

"Industrialization" is not merely technological advancement. It is technology combined with (1) The invention of governments that impose capitalistic property laws (the revolutionary overthrow of feudalism); (2) The eviction of the peasants from the land from which they drew subsistance (the enclosure of all productive land under the jurisdiction of capitalistic property law); (3) The invention of mandatory wage labor for the propertyless (the assumption of the power of life and death by capitalist governments). "Industrialization" is currently in the process of dying because capitalist competition continuously tries to eliminate factor (3) -- wage labor. (IMO it's a bit of a horse race whether civilization or capital collapses first.)

Turns out people want to specialize rather than be hunter gatherers;

History suggests that hunter gatherers succeeded well enough to destroy their lifestyle by over-exploitation, whereupon they were materially compelled to take up slave-based agriculture and animal husbandry. History begins around that point.

impending disasters predicted every decade have failed to materialize.

You should read more news.

Was it pessimistic to say

Was it pessimistic to say "Hey, the feudal lords are doomed!"?

The feudal lords were never doomed. They evolved into an aristocracy, then into capitalists, or in communist countries into high officials.

This is a bit political, but the detachment of land from subsistence was necessary in achieving industrialization and urbanization, which is a lot more efficient than subsistence farming. You only have to look at the countries that have not yet undergone that transformation to see that. Or heck, just look at China, where farmers still have lots of land that is used quite inefficiently...the agri-corps or at least large-scale farmers would be a huge improvement here.

"Industrialization" is currently in the process of dying because capitalist competition continuously tries to eliminate factor (3) -- wage labor. (IMO it's a bit of a horse race whether civilization or capital collapses first.)

Human beings are incredibly versatile and have been adept at avoiding doom for the last 100,000 years or so. Hey, Asia is crowded...well, let's travel over that land bridge to America!

What is happening right now is that automation is eliminating the need for lower-skilled jobs, and lessening the need for even high-skilled jobs. We are approaching a pivotal moment in our history where work is much less valuable. They are not working to eliminate labor; labor is just dying out naturally. The huge question is what will replace it, which is something that we will have to start tackling in the next decade.

History suggests that hunter gatherers succeeded well enough to destroy their lifestyle by over-exploitation, whereupon they were materially compelled to take up slave-based agriculture and animal husbandry.

Citation? Hunter-gatherer populations were naturally kept in check by their inability to exploit the land very well. They were never able to over-exploit it, just like squirrels will never eat all the nuts off the trees in your city. It was only with agriculture that a sedentary lifestyle was even possible.

You should read more news.

News tends to emphasize the negative because that is what is interesting to read/hear about (makes sense: problems need fixing, so focus on the problems). But the truth is, we are living in an unprecedented period of peace and prosperity.

re feudal lords

The feudal lords were never doomed.

That's why there are so many literal serfs in England! Wait....

They evolved into an aristocracy, then into capitalists, or in communist countries into high officials.

As far as I know, the rise of mercantilism compelled the end of the feudal system, which first adapted as an aristocracy and then collapsed (often with violence) into the simplified, two-class society of liberal capitalism. (Titular aristocracy notwithstanding.)

This is a bit political, but the detachment of land from subsistence was necessary in achieving industrialization and urbanization, which is a lot more efficient than subsistence farming.

Well, duh. That is not political, it is tautological. Capital is a revolutionary force that rapidly accelerated the development of society's productive capacity. (Remember, I started off by pointing out a decline of agricultural employment from around a 90% share in 1800 to about a 2.5% share today.)

Or heck, just look at China, where farmers still have lots of land that is used quite inefficiently...the agri-corps or at least large-scale farmers would be a huge improvement here.

By aspiration, if nothing else, Chinese socialist government is attempting to rapidly progress through capitalist development, including with policies that encourage urban migration and agricultural modernization.

You said this following:

Human beings are incredibly versatile and have been adept at avoiding doom

You said that in response to a statement that the industrial age is ending because capitalism is very far along the way of eliminating wage labor.

I hope you are right that humans will adapt to the death of capitalism. I think it is a horse race.

In any event, just as feudalism was doomed, now capital is doomed.

Hey, Asia is crowded...well, let's travel over that land bridge to America!

There is no frontier we can open up anymore, not even space, that can restore the utility of wage labor.

What is happening right now is that automation is eliminating the need for lower skilled jobs,

All jobs. One of the striking features of high-tech firms is how little labor they consume relative to their revenues.

We are approaching a pivotal moment in our history where work is much less valuable.

We hit that wall in 1929. Then we blew up a very large share of the world's factories and fields and slaughtered tens of millions of workers. Then we rebuilt better for a few decades and hit the same wall again around 1970. Then we stopped using commodity money for trade and emphasized financial capital. Now we have skyrocketing permanent unemployment, mass incarceration, environmental ruin, environmental and economic refugees on a mass scale, ongoing and growing water crises, impending food crises, the collapse of political states at the periphery of the developed world, escalating threats of a hot version of WWIII, ....

They are not working to eliminate labor; labor is just dying out naturally.

I don't know who "they" is supposed to be. Capitalists work to eliminate labor through blind competition. Workers actually do the heavy lifting of making themselves obsolete. Everyone in capitalism, top to bottom, contributes to bringing about the increasing obsolecense of wage labor.

The only real problem is that as wage labor becomes useless, workers become poorer and poorer, even while the capacity for wealth creation goes through the roof.

The huge question is what will replace it, which is something that we will have to start tackling in the next decade.

I hope you are right. The elimination of wage slavery is the elimination of class-based domination and exploitation. They are one and the same. It entails the end of the state and the end of exchange-based production.

Citation? Hunter-gatherer populations were naturally kept in check by their inability to exploit the land very well.

That appears to be false.

I think one of the seminal works in this area (not a work I'm personally that familiar with) is "The food crisis in prehistory. Overpopulation and the origins of agriculture" Mark Nathan Cohen, 1977, Yale University Press.

The transition to agriculture is associated with a great decline in quality of life (more, harder work), lifespan, and nutrition. The techniques of agriculture were apparently understood well before it became dominant. From this we can conclude that the transition to agriculture was materially compelled, not a "lifestyle choice". Finally, ancient agriculture could not operate without slavery and general hierarchy of domination -- it both required and enabled class-based society.

They were never able to over-exploit it, just like squirrels will never eat all the nuts off the trees in your city.

Animal populations without a balance of predators starve themselves by over-exploitation all the time. Why do you think we moderns hunt deer?

It was only with agriculture that a sedentary lifestyle was even possible.

There was nothing sedentary about it. It was much harder work than the pre-neolithic era. And for a poorer life, at that.

News tends to emphasize the negative because that is what is interesting to read/hear about (makes sense: problems need fixing, so focus on the problems). But the truth is, we are living in an unprecedented period of peace and prosperity.

Uh... yeah. I didn't mean "watch more tv". We are not living in an "unprecedented period of peace and prosperity," no matter what Stephen Pinker wants to tell a TED conference.

Uh... yeah. I didn't mean

Uh... yeah. I didn't mean "watch more tv". We are not living in an "unprecedented period of peace and prosperity," no matter what Stephen Pinker wants to tell a TED conference.

You're in a minority then. I can only charitably interpret you to be considering absolute numbers rather than the statistics for your definition of "peaceful period". Unfortunately, that's not a meaningful metric of peace.

re "peaceful period"

Some decent critique of the claim that we are in a peaceful period is given by John Gray, earlier this year, in the Guardian:

"A new orthodoxy, led by Pinker, holds that war and violence in the developed world are declining. The stats are misleading, argues Gray – and the idea of moral progress is wishful thinking and plain wrong"

John Gray: Steven Pinker is Wrong about Violence and War.

To a lot of people, what this article says is plainly obvious by direct experience.

So Gray criticizes Pinker's

So Gray criticizes Pinker's explanations of the statistics, discusses a number of irrelevant tangents about incarceration and proxy wars, all to paint a poetic dystopian narrative that doesn't discuss any concrete numbers at all. I'm not sure how this is supposed to be convincing to anyone who's even remotely familiar with crime and war statistics and the economic growth of nations and how this correlates with overall quality of life.

re criticizes Pinker's explanations of the statistics,

criticizes Pinker's explanations of the statistics,

I believe that is a fundamental misunderstanding. It is true that Gray criticizes Pinker's inferences, but he also faults Pinker's choice of statistics.

Let me try to lay out the form of the critique a little differently. I think I am faithful to Gray here but if I stray, it is only into my own very similar critique.

1. Pinker's argument begins with an exercise in selecting data sets which, he asserts, measure the violence of society. Pinker's selection is cramped and arbitrary. As you noted, for example, Pinker ignores the skyrocketing incarceration rate in the U.S. as evidence of state violence.

Here is another example: Pinker uncritically repeats FBI rape statistics when he presents his thesis. Per FBI reporting, incidences of rape in the U.S. have markedly declined since the 1970s.

Defending his choice of data sets, Pinker writes, in his FAQ:

I had two guidelines. The first was to use data only from sources that had a commitment to objectivity, with no ideological axe to grind, avoiding the junk statistics commonly slung around by advocacy groups and moral entrepreneurs. [....]

Anyone even modestly familiar with the provenance of FBI statistics on crime knows that Pinker is already off the rails. The FBI statistics are anything but objective and are buffeted by ideology at every step. Why?

Well, local, county, and state law enforcement agencies independently collect and report these statistics. The process goes on with no real oversight. The statistics are used in public policy debates to make or break ideological arguments, and to "prove" or "disprove" the success or failure of policing strategies. They are some of the most politicized and uncontrolled data sets out there!

That is why police departments are often found to lie in this reporting. In the case of rape, there is evidence that the supposed precipitous decline in rape since the 1970s (Pinker's graph) is fake and that in fact the fake statistics mask a rape crisis. "How to Lie with Rape Statistics: America's Hidden Rape Crisis" Corey Rayburn Yung, Iowa Law Review V.99, n.1197, 2014

In short, Pinker has simply asserted that his statistics measure human violence, but even a cursory examination calls that assertion into doubt. In the case of rape, for example, Pinker is not using a measure of violence but rather a measure of indirect crime reporting to the FBI -- a critically important difference.

2. Why does Pinker treat the numbers so sloppily? In particular: what is he doing with his avalanche of junk statistics?

Here, Gray points out -- and I agree -- that Pinker is advancing an ideological argument based on pure faith. Gray spends some time getting to the particulars of this argument and pointing out we need to be more critical and sceptical of Pinker's cartoonish "enlightenment". I won't go into that here.

You (naasking) wrote:

I'm not sure how this is supposed to be convincing to anyone who's even remotely familiar with crime and war statistics and the economic growth of nations and how this correlates with overall quality of life.

I am familiar with crime and war statistics. I understand what is popularly called "economic growth" very well.

I do not know what you, personally mean by "quality of life."

I know that in the United States, especially where I live, it is a racist and classist dog whistle that is used to rally people in support of state violence directed against the destitute and the low-income non-white people.

None of that state violence appears in Pinker's statistics.

Certainly some of the data

Certainly some of the data sets are highly politicized, but Pinker's claims don't depend upon only a single data set, they are trends seen around the world. Even if the rates in a single place increased, that doesn't necessarily significantly affect the overall trend.

State violence wasn't covered by Pinker's analysis, but even here, you seem to have a US-centric view. The US indeed has an incarceration crisis, but again, the rest of the world doesn't. Furthermore, you'd be hard-pressed to convincingly argue that the significant reduction of non-state violence has been matched or exceeded by state violence.

By "quality of life", I mean access to basic needs like shelter, food, education, etc. The US has certainly slid backwards on some of these metrics since the 70s, but not on all of them, and that backwards trend again does not apply to the rest of the world.

re Certainly some of the data

The problem is not that Pinker's data sets are "politicized" but that they measure something other than the level of violence in society.

With reference to the modern world, Pinker is studying various administrative categories, not violence itself. With respect to the ancient world, he is studying modern administrative categories projected onto a hypothesized past (without much regard for the human meaning of those modern categories in ancient societies).

Where violence does not take a form that meets the gaze of sovereigns, Pinker defines it away. To his FAQ again:

Ditto for underpaying workers, undermining cultural traditions, polluting the ecosystem, and other practices that moralists want to stigmatize by metaphorically extending the term violence to them. It’s not that these aren’t bad things, but you can’t write a coherent book on the topic of “bad things.”

So far as I know, the items on his list (deprivation, repression of communities, environmental poisoning, etc.) are not simply "metaphorical" violence. They are not condemned by analogy but because and when they harm or kill.

The Bhopal disaster comes to mind as one example that is, for Pinker, not "violence" but merely a "bad thing" outside of his story. Similarly the invisible-to-law violence and deprivation that characterizes the lives of many migrant farm workers.

Personally, I think Pinker is wrong to even begin to claim he might quantify the degree of violence in society. There is no overcoming the kinds of definitional problems he runs into, not just on one or two data sets but across the board.

If he had confined his conclusion-drawing to something much more literal -- i.e., rapes reported to the FBI have declined a lot since 1970 -- the work would be unassailable and also boring. It wouldn't get him onto the NYT best-seller list or make him an invited speaker or interview subject.

He has used the data to spin an ideological yarn around some not-too-coherent concept of technocratic enlightenment. A certain audience just eats that kind of implicit reassurance right up.

Analogy does not hold

I don't think that your analogy with farming is relevant to the computer industry. Farming has a meaningful split between the ownership and exploitation of resources. The dominant resource in farming is fertile land: all other relevant resources are fungible. The owner of the land will reap the lion's share of any exploitation, regardless of who carries out the work. Two farmers who are to collaborate need to share the resource: they must be physically co-located on that resource. This creates clear benefits for scaling in size.

Why would the computer industry behave the same way? I can't see any dominant resource that is required for exploitation. Talent is not a commodity, even if the owners of capital wish it to be so. I would ask you to consider the games industry as a more relevant example. This is an industry that has arisen entirely within our lifespans, and look at the evolutionary path that it has followed. In the 80s the Ferrari lifestyle was an aspiration across the programming industry, yet almost all development was carried out by small-scale development teams that struggled financially.

Even when the industrial base was distributed in a pattern close to cottage industry, there was a winner-takes-all dynamic that caused a power-law distribution in financial returns. The result is Electronic Arts and the other conglomerates that dominate the AAA slice of the industry. But their rise has not eradicated the cottage industry, and this is unusual in the development of an industrial sector. In any industry where there are low barriers to entry, talent will dominate capital. The "indie" game market is just the latest name for that constant bottom-up rebirth of the industry.

Something new is happening in the games industry, specifically because it is not dominated by owning slices of a non-expandable resource: there is a small-scale bottom-up industry co-existing with an effective cartel of large-scale top-down conglomerates. That is far more interesting as a pattern of industrial development than agriculture and the serfs moving to the factories. I think it tells more of what we will see in the future of programming. There would seem to be more than one equilibrium in the patterns of activity required to exploit this area. Most interestingly, they seem to exhibit scaling effects: hitting the level of polish required on a $2.5B game title requires the combined efforts of 1000 people and the level of capital to fund their 3-year development cycle. Aiming at a lower level of complexity in the product means that 2 guys in a bedroom can develop more efficiently than a large team (Fez?). This is a genuinely new pattern of industrial development: what other industry has cottage-like efficiency in low-complexity producers competing / co-existing with industrial-scale efficiencies in high-complexity producers?

re analogy does not hold

This is a genuinely new pattern of industrial development: what other industry has cottage-like efficiency in low-complexity producers competing / co-existing with industrial-scale efficiencies in high-complexity producers?

Agriculture. (See, e.g., farmer's markets where the sellers include boutique producers.)

Something new is happening in the games industry, specifically because it is not dominated by owning slices of a non-expandable resource

Anyone can make the big time in Hollywood.

there is a small-scale bottom-up industry co-existing with an effective cartel of large-scale top-down conglomerates. That is far more interesting as a pattern of industrial development than agriculture and the serfs moving to the factories.

There is nothing very unique there.

One symptom of the rising disutility of labor is that capitals substitute stringers for regular employees. The so-called sharing economy shows this most directly. It appears as "zero-hour contracts" in the U.K. It appears as casting calls like app-stores and V.C. goat rodeo pitch shows. Next up: Hunger Games!

Agriculture is a low-skill

Agriculture is a low-skill (commodity labour) market of huge size (absolute number of people employed in the industry). Programming is a high-skill, small market. Why would the effects of capitalism on the former predict the effects of capitalism on the latter with any accuracy?

Can anyone make it big in Hollywood? I don't know the figures, but I used to spend enough time at the Watershed in Bristol to see that indie-films were not rolling in money.

Agriculture is NOT a low-skill market of any size.

You obviously don't have any understanding of agriculture if you think it is a low-skilled field. It has both low-skilled and high-skilled aspects. Programming is no different; it has low-skilled and high-skilled aspects.

I have seen many low-skilled workers producing programs for many years. It is why there are those of us who are paid to fix up all the stuff-ups made by low-skilled programmers.

Many of the incidents that make the news media demonstrate the low level of skill being used in the development of programs. Buffer overflows are a simple example of a common problem that should no longer exist but is still a major source of attacks. SQL injection is another that is due to a lack of understanding of how to properly design and implement validation code. My latest foray into seeing such problems was contacting a bank through its web-based customer interface, where a comment field was not properly validated and translated.

Farming involves much more than most people have any understanding of. I have my own orchard, my own stock and gardens, and there is much skill required even for the little I have (that's not to say that I have mastered these skills as yet).

Web developers in an

Web developers in an enterprise shop aren't highly skilled, but they get the jobs they are assigned done. I've heard the term "code farmer" more often than I feel comfortable with (disclaimer: I live in a developing country where the farmers mostly haven't industrialized yet).

Sure

I can accept that I know very little about farming, but tell me: if farming is high-skill, does this mean that only certain people have the natural talent to be farmers and we have an education crisis in attempting to teach the other 60% of the population? Roughly speaking, that is a measure of the skill requirement in programming. So as a comparison, is farming really that high-skill?

Skills are found in every field.

I think you are making the common mistake of assuming that high skills are only to be found in high technology. There are those who have a natural talent for some field and there are those who, through much effort, gain the skills for those fields. There are also many who, in their selected skills, become arrogant about their abilities and forget that they are ignorant about many areas.

With regard to farming, we have a situation today where multinational companies are attempting to totally control what farming is and reduce it to their products, processes and equipment. This is really no different to what is happening in the IT world. When we look at companies like Microsoft, ORACLE, Google, IBM, the game is to try and develop a monoculture environment.

To give you an example of a complex skill in the farming environment (so to speak): that of butchering a carcass into the various and many different kinds of cuts. I have a wonderful video of the butchering of a lamb. He describes the cuts and then performs them, showing how to do each cut. I know just how difficult it is to do this. It takes understanding, knowledge, training and development of skills to do this properly. My efforts with the three sheep I'll be killing later this year will be nowhere near this level. But I want to learn, and one has to do it to develop the skills.

As far as farming is concerned, it takes a great deal of skill, knowledge and understanding to be successful. Whether it is ensuring that the right conditions are managed for crops, flocks or herds, or ensuring that when adverse conditions occur you can actually maintain your farm (and the associated crops, flocks or herds), you have to understand feed management, disease control, asset management, infrastructure management, sales, product development, etc.

I suppose what frustrates me about comments such as yours is that all such arguments are, at their core, ignorant of the skills required for fields with which one is not acquainted but which on the surface appear to be easy.

Try doing a good weld, or plane a surface flat, or sand a surface smooth, paint a wall, propagate seedlings, sweep a floor properly. Each of these tasks actually requires a great deal of skill that takes time to learn properly (unless you are one of those few who have an innate talent for it).

My own experience says that many who are in high technology jobs (including computing) are there only because there is supporting technology that reduces the skills required to do their jobs. I know that in my country, many of the programming jobs are moving off-shore to programming farms that produce less than adequate code simply to make the bottom line in staff costs lower.

Finally, to answer your question, it doesn't take natural talent to do a good job. It can certainly help. But it does take a desire to do the best and a desire to learn. Programming, like most jobs, can be done poorly with some training.

To say we have an education crisis in developing the necessary programming skills is missing the mark completely. We have an education crisis because we don't teach people to think clearly or to look at problems from different angles. There are many in the programming field who, because they are over the age of 40, are no longer considered adequately skilled in modern technologies, yet will run rings around those who are much younger.

Programming can be taught to anyone, just as cooking and farming and cleaning and engineering can be. The first and major requirement is that there is a willingness to learn and develop the skills needed. The crisis comes when our education systems fail to develop our young people with the desire to develop themselves.

Our education systems are the failure because, on the whole, society has given up caring about the future. Instead of being positive, society is all about the negative. We care more about people's rights than we do about people. We care more about our pleasures than we care about the consequences of those pleasures on future generations.

We care more about our position on type-checking in programs than about seeing that each side has some valid points, even if we don't agree with their stance, and that appropriate use is possible. We care more about dogma than about what is the best tool for a specific problem set.

That's enough for now, I want to go and eat my wife's chocolate pudding. Her skill far surpasses mine and I am privileged to partake of her skills in cooking.

Words are difficult as a form of communication.

In particular, while we all associate clusters of meaning with particular words, it is often the case that they only overlap weakly with other people's clusters. Rereading what I wrote (and also gasche's fair points below), I can see that I've been clumsy in two regards: "natural talent" has overloaded connotations that I was unaware of, and I've mashed together a relative comparison with an absolute one. Both of these are a result of lazy thinking on my part. Let me try again to frame the comparison that I was making.

Referring to an activity as low-skill or high-skill is a gross simplification of many variables. One of these would be the amount of experience necessary to attain a particular level of result. Another variable would be what proportion of people can convert the experience into knowledge and improvement. Even that is a gross simplification, as ability in learning is dependent on acquiring many previous skills, such as thinking clearly about a problem and exploring it from different angles. Even something labelled as a predisposition, such as curiosity, is really a manifestation of previous training and expectations from other domains.

The relative comparison that I was aiming for is the average rate of return (in advancement of a skill) per unit of effort expended. I would label an activity high-skill when this average is low, yet some individuals show a rate of return far from the average. Whether these individuals are demonstrating an innate talent, or expressing unseen returns on previous learning, is speculative at best and downright dogmatic at worst. Programming shows a sharp bimodal distribution in this regard, and many studies have found this distribution remains constant across different populations of students. I won't dig out the references to the literature now, as your point that this is missing the mark is fair.

Let me conclude by cutting down my previous post to the accurate part: "I don't know anything about farming", and wishing you the best experience in chocolate pudding eating.

Meh natural talent meh

I don't believe much in "natural talent" and I think this convenient idea prevents us from thinking deeper about issues. I am far from a specialist in cognitive sciences, but I have never encountered a real-life situation where skills couldn't be explained by a favourable growth environment and lots of (possibly indirect) practice.

(I would be ready to buy some "natural" determinism for extreme physical activities such as international-level sports, although I wouldn't be surprised if that was over-rated in this area as well. I am strongly sceptical, on the contrary, of anything "natural" about skills in technology, or even about harder scientific fields (eg. maths).)

Also I think that thinking of skills as something acquired, rather than something "natural", gives us a more positive view of people. I certainly couldn't properly teach programming if I believed that it was a "natural talent" -- and I suspect there is some correlation between poor teachers and people holding that belief.

Ability = Talent + Work.

I think you must be good at everything then? I certainly am not. I think people's brains are genetically different, which predisposes them to certain skills. Yes, you can train in your weak areas, but it is harder and takes longer to gain skill in those areas. For example, I could train to be a sprinter, but I am never going to win the Olympics, as I don't have the physique for it (stride length, body proportions, etc.). To be truly great at something you have to work hard in an area you are naturally talented at. If you are weak in an area, you can work harder than everyone else and only be mediocre. We only have a limited time available to us, so those that stand out work hard in areas where they already have natural (genetic) ability. Performance is a result of both nature and nurture.

For programming I think there is not a single talent, but several different ways of learning it. I definitely see differences in the ways that people approach problems. Some people think visually, others linguistically, etc. I think this means that if you adapt the teaching, many differently talented people can be good at programming, but they can't all be rock stars. I am sure in teaching you have come across people who get the concepts the first time they are explained, and others who have to go over them several times to understand. Given our limited lifespans, the one who picks it up the first time will be able to achieve higher levels of excellence, although I am sure a good teacher can get them both to the level needed to pass a qualification (which is usually graded for minimal competence).

More meh

I think you must be good at everything then?

I think that I could become good at many things given good teaching and many hours of learning and practice -- and so could anyone else. A barrier may be that we may be better at learning things during youth (that would make sense to me), meaning that once a certain fast-growth, fast-learning period is over we are slower to learn radically new skills. In particular, I think that physical activity during youth accounts in large part for the development of the "physique" required to become an Olympics-level athlete (although that is the one area where I would believe in some influence of development).

I think people's brains are genetically different, which predisposes them to certain skills.

This argument has been brought forward many times in history, and it has never paid for its costs. Many people have made terribly wrong deductions from it, and as far as I know there has been no social value generated from ideas inspired by it. I'm certainly interested in scientists researching (as they have continuously been doing for centuries) ways to verify it, but I would find it silly to make lifestyle choices based on it, given that it's mostly a choice of belief so far, whose odds of impact are largely negative.

I definitely see differences in the ways that people approach problems. Some people think visually, others linguistically etc.

All of that can equally be explained by environmentally acquired dispositions. (Maybe your parents used to sing lots of different sounds when you were a child, and you have a good auditory memory.) There are extreme cases (e.g. autism) where genetic causes are being considered, but then there are also known extreme cases of cognitive function changes caused by the environment (e.g. the psychiatric literature is replete with examples of patients who, after some trauma, have a particular symptom that would intuitively seem to be explained by physical aspects of brain development).

Finally, I find all this talk of "Olympics", "rockstar programmers" and "excellence" rather boring. Some (usually privileged) people approach life with the goal of becoming "the world's best at X", but I doubt that is how you generate social value or even happiness in life. The most interesting comments in this sub-thread are by Bruce Rennie, explaining how to sweep a floor. It's not about which parental matches we should seek to produce the world leader of floor sweeping.

Autism is a great example.

Autism is a great example. Extreme forms of autism make learning difficult, of course, but mild functional forms of autism (Asperger's) seem to be more prevalent in engineering and programming. Separating cause and effect is difficult; it could just be the appeal of limited social interaction. But that is also interesting: some people might choose to do A because they have some innate limitation in doing B. They are not necessarily good at A; it is just the best option for them to spend their time on.

Heritability of Interests

Twin studies are the classic psychological technique for studying this:
http://atavisionary.com/wp-content/uploads/2014/07/Heritability-of-interests-a-twin-study-Lykken-bouchard.pdf

Certainly some studies show enough correlation to make being dismissive of the genetic element hard. Recent studies have generally shown genetics contribute to things more than we thought. I find that a balanced view is that we are both nature and nurture.

As for excellence, often what you are interested in and enjoy doing is what you are good at (and achieve success in). People are happy when they do for a job whatever they would do even if they were not getting paid for it. I don't think it has got anything to do with privilege; find something you are good at and enjoy doing and you can make a success of it. I think happy people doing fulfilling jobs is a much better ideal than some synthetic concept of social value.

I am not advocating any kind of parental matching; that's just crazy, and misreading my position to a deliberately unrealistic extreme. I am talking about each individual finding the best path for themselves. Some people might be very good doctors, but terrible programmers. They won't be happy, or get promoted, as a programmer, so why force them to do it? Let them be a great doctor, teacher, or whatever... my view is not that some people are absolutely more talented than others, but that everyone has talents; they just have to find out what they are, and it's society's job to give them enough opportunities in schooling to do that.

Doing what you like

With light edits,

People are happy when they do for a job whatever they would do even if they were not getting paid for it. [...] Find something you are good at and enjoy doing and you can make a success of it. I think happy people doing fulfilling jobs is a [good] ideal.

I fully agree, but I don't see why this would require "natural talent" (whether this notion actually has a significant impact on stuff or not). Note that you don't need to be "the best" at what you do for this to work (as in "rock star" or "Olympics"), just to be good enough to be able to make a living out of it. This is an observable quality that is not in any way related to the way people acquired those skills they enjoy and which can sustain them.

We can agree with "everyone has talents" as long as we don't pre-conceive a notion of talent of the kind "either you get it or you don't", but are flexible with the idea that the talents people have may be influenced by their life.

(On the other hand, if someone was thinking of technical education as a job of detecting which students have an innate gift for computer stuff and which don't, believing they are justified in this view by research on the IQ tests of twins, then I would strongly disagree with that person and suspect that they make un-scientific extrapolations from specific results to sustain a subjective belief in how life works.)

Re. privilege: many people live in an environment where they cannot afford to develop their talents and find a job they enjoy to sustain themselves. Having this chance is being privileged -- something, in that circumstance, that should be cherished, but also understood not to be a generality.

We should try harder to give people the opportunity.

Well I think "talent" means something inherent (inherited genetically), which is why I used the word "skill" to differentiate something learned. You can learn new skills, you cannot learn new talents.

See the above paper, interest in a subject may also be inherited, so finding something you enjoy might be more or less the same as finding something you are good at (and enjoying it motivates you to study harder).

I think it is for the student to understand what they are good at and choose subjects accordingly. Some students may deliberately choose to study things they are not good at to get a more rounded education, but when we focus on grades, people don't do that; they choose whichever subjects they think they can get the best grades in.

I don't think giving people the opportunity to develop their talents is a privilege; I think it is something we as a society should strive to give to every student. I don't think it's right to rest until every student has that opportunity (if they wish to take it). It is not a privilege, and I think that way of thinking allows society to excuse any inadequacies in the education system.

It is both about the

It is both about the hardware we are given and how we learn to use it. It is not as simple as everything is nature or everything is nurture.

You can't teach anything if you don't buy that nurture has a role, a dominant role, to play; otherwise you wouldn't be teaching (unless you just needed the money)! But in reality, there isn't much scientific evidence to guide us, since these kinds of things are so hard to study. But what is clear is that nurture plays some role, so it's worth doing.

Programming is a practical art, so it really depends on the student's willingness to practice it, more than anything else.

Time spent doing something

Time spent doing something is a large factor, and that is very much influenced by interest, but you do need a certain level of intelligence. If you think anybody can do college level math you are very much mistaken. I used to think as you do until I taught some classes at a high school, which prompted me to look into the science. It is a well established scientific fact that intelligence is partly or even mostly heritable. The IQ of kids who are adopted at birth is much better predicted by the IQ of their biological parents than the IQ of their adoptive parents. In fact the correlation of the IQ of somebody and their adopted sibling is ~0.0 whereas the correlation of the IQ of somebody and their biological sibling is 0.6 (1.0 would be perfectly correlated, and -1.0 perfectly anti-correlated). The correlation between the IQ of two identical twins raised by separate families (!) is ~0.86. Of course there are many different studies with slightly different results, but overall this is widely accepted.

Yep, but not all programming

Yep, but not all programming is highly "mathy", and most programming teaching could be enormously improved*. Better teaching could produce many more programmers.

In other words: programming ability is also limited by intelligence, but I believe the bottleneck is teaching. With current teaching, learning to program requires more intelligence than strictly needed.

*Simply the amount of time and attention a proper course needs to pour in is amazing — see How to Design Programs. And we've had much less experience teaching programming compared to other subjects.

I don't think heritability

I don't think heritability is limited to mathy things. IQ tests are after all not very mathy as far as I am aware. The focus should absolutely remain on better teaching, tools, and languages regardless of whether aptitude for programming is heritable. That said, I think it's more important to be actually correct than politically correct. The idea that everyone is born with equal potential for every occupation is harmful when politicians and teachers and parents and children make decisions based on this belief.

I have observed "natural

I have observed "natural talent" on many occasions. It is the start of the story in some cases and in others there is not much talent just a lot of interest.

but I have never encountered a real-life situation where skills couldn't be explained by a favourable growth environment and lots of (possibly indirect) practice.

Though you may not have, I certainly have, and I have seen examples where the starting apprentice has already exceeded the master craftsman. These are certainly rare, but they do exist.

I myself have been involved in certain activities where under testing my own results were removed from the overall group results because the tester felt that they would significantly skew what they were testing for. Not that I cared, I still got paid for being a test subject.

I have also seen great natural talent wasted because of the choices made by the individual.

I have seen average natural talent blossom into extraordinary talent through sheer doggedness and hard work (practice, practice, practice).

Skills are acquired, but in some cases the person has a natural ability/affinity for the subject. There are more factors involved than just natural talent, and if you cannot get the students interested in the subject matter then that is a failure by you as the teacher and completely useless for them.

Also I think that thinking of skills as something acquired, rather than something "natural", gives us a more positive view of people.

If you don't have a positive view of people whether they have "natural" skills or not, then I don't expect you to have a positive view of people anyway. Everyone is different, just because they have some skills shouldn't affect that view. I have come to the view that everyone you meet can show you something that you never knew before (that even includes politicians, lawyers and used car salesmen).

end of programmers?

I don't think so. Agriculture is not the same as information technology. Application of technology to agriculture -- and of course to offices -- drastically reduced the number of low-level workers required in both industries (how many managers have a typist these days?). But agriculture is still big business involving a lot of people; it's just that they're not all farm hands. Some are helicopter mechanics (in Australia at least!).

In programming, few people write sort routines these days. But the challenges continue to grow and there is plenty of room for more and more automation yet. Perhaps when 90% of all the solar energy we can possibly collect on Earth and in near-Earth orbit is used for computing there will finally be a limit. Oh wait, but there is now a serious attempt to get a huge Tokamak fusion reactor running.

So exactly where will the need for software development end? Every part of the process that is done manually and follows a pattern can be automated, but then there will just be new higher level patterns to automate. So long as our gross computing power continues to rise globally there will be not only a need for programmers, but an increasing need.

re end of programmers?

But agriculture is still big business involving a lot of people; it's just that they're not all farm hands. Some are helicopter mechanics (in Australia at least!).

The employment statistics count all agricultural job categories. It used to take the vast majority of all available labor power to keep everyone fed -- only 200 years ago. Today, fewer than 3 in 100 people contribute any effort towards feeding us.

So exactly where will the need for software development end?

The human need? I don't think that will end.

The capitalist need? It is collapsing already.

The actual human practice, regardless of human need? It seems on the ropes to me but time will tell.

Do kids program anymore?

Do kids program anymore?

They program minecraft.

They program minecraft. Which is to say, not much at all, but if minecraft was extended with a richer programming experience that added value to the game, I'm sure they would.

what would programming look like?

How would we be able to tell if kids program anymore? :-) Some high schools have classes. But I found one 2014 article just now, reporting such classes were still a rarity. When I asked my younger son if his high school had programming classes about three years ago, he investigated, but the end result was that only writing html pages was taught (and this was described as programming). Seemed a little odd for a San Jose school in an okay neighborhood.

Kids still compete, and there's plenty of encouragement to engage in cut-throat competition, but how much this translates into programming is hard to say. My sons would write games if that was easy to do, but I told them what it took to achieve results (like those they like in triple-A games) would strike them as fantastically complex. The younger son's girlfriend is in CS at a UC school, but evidently they don't talk about it, as programming is not a fun topic.

When we learned how to

When we learned how to program it wasn't 'cause it was easy!! (Boy, do I sound like a grumpy old man...)

I guess that kids that in previous decades might have been into programming are now into arduino and the like.

Maybe my generation is after

Maybe my generation is after yours, but programming WAS easy when we learned via BASIC on a TI/99, C64, Adam Coleco, etc... We didn't have very high expectations, however.

99/4A it was... our

99/4A it was... our expectations were low in a sense, sure, but the point wasn't that it was easy or that our expectations were low. Rather it was that making a computer do your bidding was an amazing thrill, and once you got hooked it became increasingly more challenging to realize all the ideas we had.

Try the raspberry pi foundation (re 99/4a)

The Raspberry Pi Foundation folks are trying to keep alive and expand and reinvigorate learning opportunities like what you describe re 99/4a.

p.s.: I think something like a $25 Mozilla phone will also have a huge positive impact.

+1

+1

I agree with that. The

I agree with that. The problem is that computers are now capable of so much more, so the simple programs of the past are not interesting to kids these days. Our programs have become so sophisticated and advanced that "controlling the computer" has a much higher bar now.

Well, that isn't completely true; see minecraft, which is very simple compared to what the computer can do, but this simplicity enables creativity.

Microsoft Windows/Apple/Amazon killed hobbyist programming?

so the simple programs of the past are not interesting to kids these days.

That's become somewhat of a truism. But frankly I don't feel this can be right. I have an iPad sitting on the desk next to me, and if I could just get it to do something not pre-programmed by someone, I'd be delighted. I think we need to think more about how devices became less hackable, and how this relates to warranties and licenses. It seems that to have just a wee bit of fun these days requires jailbreaking an expensive bit of consumer electronics belonging to mom and dad (or, even more often, on lease from some telco). Computers should be cheap enough that you can give one to your kids to play with (I think we are past the days of making them build one themselves from a kit; though I think the arduino/pi stuff shows that kids [at heart] will do that too).

Kids really don't care about

Kids really don't care about that kind of hacking, at least the impressionable 9-15 set. They want fun; that is why games are the injection method. And you'll find lots of programming-based games on the App Store, not to mention games with creation and even programming components like minecraft and little big planet.

Now give them a unique goal that they accomplish by hacking some hardware, and they will be all into it. But blinking lights and hello world don't count. Mindstorms is much more accessible (and satisfying) to a kid in that age range than an Arduino/Pi.

So what changed? That's not

So what changed? That's not how I remember myself at that age.

I think we should be careful to distinguish between "kids" in general, and the sort of kids that were drawn to programming in previous decades.

The thing is, computers were

The thing is, computers were mysterious things back then, you were lucky to have one, and they didn't do much, so making them do more was "fun." Why do you think they booted into BASIC in the first place? But now computers are just so common and mundane, and they do enough that you know making them do more will require lots of effort. There is just so much for geek kids to fill their time with these days that they are more likely to start a YouTube channel showing off their minecraft creations.

So you are saying that it's

So you are saying that it's not that computers are less inviting as platforms but rather that they are just so powerful out of the box. Of course to some extent it is both, but I still think the former is no less of an issue. Just think of all the things you can get the computer to do for you. Invent stuff! It doesn't have to be complicated. Just goofy stuff is fine as well. Think of all the puzzles we programmed computers to solve for us; think about Lego Mindstorms and robots and how you can make them do tricks; think of the things you could do with all the libraries available (submit your English report in iambic pentameter!). I dunno, I think the reason can't be just that computers have great graphics and multimedia capabilities out of the box.

But why play with real

But why play with real expensive tangible Legos when I can just go play minecraft with my friends? The computer is actually filling up all their time with distractions, ironically enough, so that they don't even think of these kinds of things.

why play with real

why play with real expensive tangible Legos when I can just go play minecraft with my friends?

Because it's cool?

so many things, so little time

Lots of cool things to do, only 16 hours in a day to do them.

remote dialup terminals

I learned mid-70's, because my math teacher was an enterprising type who found resources and started his own new class. Probably he would never be a school teacher today, given other options. I was the oddball who wrote more programs than all other students combined, in all assignments, because it was fun. But this was a place and time when there was nothing to do, in the midwest, unless you liked truly awful network television. My father sold desktop calculators on commission to shop keepers for $500 apiece, which plummeted in price to ten bucks in about four years. I sold some TI-99s a few years later when I was between school gigs.

My guess is that being satisfied with minuscule feedback, in the form of trivial amounts of text, is hard for kids to tolerate now, given what they are used to experiencing in immersive UI graphics tech. My older son complains "pointers are weird", and doesn't find low-level mechanics interesting in themselves. A taste for low-feedback gnarly puzzles is a help when learning to code. Presenting code tasks as puzzles may help uptake with some kids, while repelling others. An adventure world where you build a machine is likely a good context to learn programming, if little feedback exists other than machine building.

Hey, pointers are weird! The

Hey, pointers are weird! The only place they finally made sense is C. Which, frankly, is a weird language in its own right...

They do

My nephews (16 and 12) both program. The older is building a discrete chip-based system. The younger quite happily hacks away on his games machines (including in hex). The older is involved with developing a language translation system (German - English) and writes programs for a company that belongs to one of his father's friends.

There are many who write robotic control code and do various hacking of control systems using the various small scale (as of today) systems based on the Raspberry Pi and Arduino, etc.

The computing technology of today is only more capable than previously available in the amount of memory and the clock speed. Otherwise, it is essentially no further along than what we had 30, 40 and 50 years ago.

The flip side of this is that the courses available in schools are very limited and are designed (in many cases) for the lowest common denominator. I think the more advanced classes are in things like robotics, etc., which are NOT classified as IT/Programming classes.

I had the opportunity some years ago to talk with the senior (grade 12/year 12) programming class at the school that my sons had attended. The attitudes of the students varied quite a bit, and (from my perspective) most would have been better off doing some other subject. They were not really into programming. The curriculum for the course (as developed by the State Education Department) was not very interesting. Nor was the teacher trained in programming as a part of his professional development - he did have an interest in it, though.

The way that young people get into programming these days is via alternative avenues, in much the same way many of us did when we were younger. Though some of us didn't get access until our undergraduate engineering days.

Path from CyberLocalism to CyberTotalism

Edward Snowden at IETF 93 characterized the path from CyberLocalism to CyberTotalism as follows:

“idea of a simple core and smart edges -- that's what we planned
for. That's what we wanted. That's what we expected, but what happened
in secret, over a very long period of time was changed to a very dumb
edge and a deadly core.”

Research directions

It's hard to state concrete questions in a field like programming languages. Asking the right concrete question can go a very long way to providing the solution, unlike mathematics where you have questions like "are there infinitely many twin primes?" which are easy to state but incredibly hard to answer. Instead I'll mention the research directions that I think will be important:

  1. Making dependent & linear types practical
  2. Improving type classes / implicits / tactics / "code inference"
  3. Making a high level language with predictable high performance
  4. Domain specific libraries and sub-languages (collections & data processing & querying, GUIs, parsing, scientific computing & machine learning).
  5. Tooling (editors, debugging, version control)

#4 and #5 are very general; however, (1) where a PL perspective has been tried in those areas it has been quite successful, and there's low-hanging fruit left, and (2) if we apply Hamming's principle, then rather than working on e.g. some type system extension of Haskell/ML/Java which will soon be made obsolete by dependent types anyway, why not work on something with lasting impact?

About #2: in particular, is it possible to unify type classes & implicit arguments & implicit type arguments & tactics into a coherent whole? Secondly, is it possible to make this more integrated with the dynamic semantics of the language, instead of the current approach of an extra layer on top of the base language?
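To make the "code inference" reading of #2 concrete, here is a minimal Haskell sketch (mine, not from the comment above, and using only standard type classes; the names are made up): asking for showTable at a concrete element type makes the compiler assemble the needed Show dictionary out of smaller instances, i.e. it quietly synthesizes a small program.

    -- Instance resolution as code inference: the compiler builds the Show
    -- dictionary for (Int, [Bool]) from Show Int, Show Bool, Show [a] and
    -- Show (a, b).
    showTable :: Show a => [a] -> String
    showTable = unlines . map show

    example :: String
    example = showTable [(1 :: Int, [True]), (2, [False, True])]

Implicit arguments (Scala, Agda) and tactic-driven proof search do something analogous, but with different search strategies and a different relationship to the core language, which is part of what makes unifying them interesting.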

ML and dependent

If we apply Hamming's principle then rather than working on e.g. some type system extension of Haskell/ML/Java which will soon be made obsolete by dependent types anyway.

I hear people say that from time to time, and I think it is both antagonistic and wrong. You are in good company -- this is what Harper says about GADTs -- but it may still be wrong.

I have worked with both dependent systems and what you disparagingly call "some type system extensions of Haskell/ML/Java", and in my experience the latter is not subsumed by dependent types: they tackle problems that dependent type systems have not tackled yet, or are also struggling with.

For example, the idea that ML module systems would be subsumed by dependent records is an oversimplification that omits that the really hard things about module systems (the ones that make them convenient to use) are not realized by dependent records alone. The idea that GADTs are "just dependent elimination" neglects the fact that GADTs at their core are about type equalities, and that the status of equality between types is also one of the big unknowns of dependent type theories (so that would be a case of "clean up your own backyard"). The work on type classes has been massively reused by dependent type systems.
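To make the point about type equalities concrete, a minimal Haskell sketch (mine, not the commenter's): each GADT constructor packages an equation about the type index, and pattern matching brings that equation into scope, which is what lets eval return a plain a.

    {-# LANGUAGE GADTs #-}

    data Expr a where
      LitI :: Int  -> Expr Int                       -- carries a ~ Int
      LitB :: Bool -> Expr Bool                      -- carries a ~ Bool
      Add  :: Expr Int -> Expr Int -> Expr Int
      If   :: Expr Bool -> Expr a -> Expr a -> Expr a

    eval :: Expr a -> a
    eval (LitI n)   = n                  -- the equality a ~ Int is in scope here
    eval (LitB b)   = b                  -- likewise a ~ Bool
    eval (Add x y)  = eval x + eval y
    eval (If c t e) = if eval c then eval t else eval e

A dependently typed encoding has to say what such equalities between types mean, which is exactly the unresolved question mentioned above.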

I agree that work on non

I agree that work on non-dependent type systems may often be transferable to dependent type systems. Of course, the usefulness of research isn't solely defined by how applicable it is to dependent type systems either. The poll at the top indicates that in 25 years or less we won't be programming any more anyway (very optimistic if you ask me, but certainly qualitatively true), so in the long run almost all research is short-term focused.

However, if we assume that usefulness in dependent type systems is the only criterion, which isn't far from the truth if you think that dependent types are around the corner, isn't it then more efficient to investigate the issues directly in a dependently typed setting? E.g. extending dependent records to be powerful enough for modules, or investigating whether inductive families can be encoded in a similar way as GADTs with HoTT's equality?

Hard to tell

I don't believe that always picking the most general and difficult setting to conduct one's research is the right way to optimize research resources.

Early in my understanding of the space of type system and programming language designs, I used to be (wrongly) skeptical about the work on meta-programming that added specific type system features to reason about bindings (nominal types, contextual type systems, etc.). In my view, we already had very powerful type system features around the corner (dependent types), the Right Way¹ was obviously to leverage them to provide binding-like types as libraries (which was nicely done in Nicolas Pouillard's thesis, for example).

Later I understood that we need both people trying to (1) express this static reasoning about binders as libraries in an expressive system and people trying to (2) seriously use static disciplines to manipulate syntax with binders and see what worked and what didn't. If task (1) had been completed before (2) started, we could have used said libraries from the start, but (1) was not (and is still not, I'm afraid) usable enough for (2) to be effective in this setting. The work on hard-coding binders in the type system that allowed people to effectively work on (2) in parallel with (1) proved invaluable.

And something else happened that I still find very surprising: it turned out that one line of work that started in the "hardcoding binders" category, namely "nominal X" (for X in types, sets, and type theory), developed into a beautiful theory of its own, drawing fascinating connections with topos theory and, now, cubical type theories.

Bottom line: sometimes taking shortcuts is the right way to move forward in research. And it's hard to predict what outcomes will result from one particular brand of research. If some people are earnestly interested in pursuing a research direction, it's almost always a bad idea (and rather pretentious in any case) to try to stop them.

¹: another possible option would be to find out that dependent type theories could not faithfully encode static reasoning about binders, but that would be problematic. Are our foundational type systems really foundational if they need tweaking already for the first problem-domain we attack? I felt sure at the time that they would not need tweaking -- that was also my opinion when mathematicians started suggesting things (in Munich in 2009) that later became higher inductive types. (They started using our beautiful inductive types a few months/years ago and already want to change them? they should work harder!)

Nominal types aren't

Nominal types aren't overlapping with dependent types in an immediately clear way, and despite this, hasn't research on dependent types eaten nominal types, as in nominal types are "just" a higher inductive type that equates alpha equivalent terms? This turns 'evaluation respects alpha equivalence' from a metatheorem into a theorem inside the language.

I wouldn't want to stop anybody from researching anything, and I am not under the illusion that my idea on what's important is even nearly as likely to be correct as that of an actual PL researcher. This just happened to be the idea that formed in my mind when I read this thread's intro about Hamming's "you and your research".

The future of PL

There has been a singular focus on the "L" in PL, but this has failed to create programming experiences that don't absolutely suck. The big open problem is how to create experiences that don't suck, or rather, admit that this is a problem and focus on it. Programming experiences must be considered holistically.

Another problem: ML is on the rise and will probably supplant programming in 20-40 years (when machines will be better at programming than humans); there is no point in avoiding that. But in the meantime, can we leverage ML in PL to augment how programmers write code? We have a huge corpus of existing code that goes to waste because we cannot figure out how to "reuse" it directly, but this is ideal for training something that can identify when the wheel is being re-invented and guide the programmer to writing a new solution based on the N existing solutions already written. No one knows how to do this yet, though plenty are working on it.
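There is no learned model in the following, but as a hedged toy illustration of the kind of signal such a system might bootstrap from, here is a crude near-duplicate check over token trigrams in Haskell (a much simpler stand-in for the learned approach described above; the names are made up):

    import qualified Data.Set as Set
    import Data.Char (isAlphaNum)

    -- Crude lexical fingerprint of a code snippet: its set of token trigrams.
    shingles :: String -> Set.Set [String]
    shingles = Set.fromList . windows 3 . tokens
      where
        tokens = words . map (\c -> if isAlphaNum c || c == '_' then c else ' ')
        windows n xs
          | length xs < n = []
          | otherwise     = take n xs : windows n (drop 1 xs)

    -- Jaccard similarity of two snippets; values near 1 suggest a reinvented wheel.
    similarity :: String -> String -> Double
    similarity a b
      | Set.null sa && Set.null sb = 1
      | otherwise = fromIntegral (Set.size (Set.intersection sa sb))
                  / fromIntegral (Set.size (Set.union sa sb))
      where
        (sa, sb) = (shingles a, shingles b)

A real system would presumably learn an embedding rather than count trigrams, but the workflow is the same: fingerprint the new code, retrieve the N most similar existing solutions, and surface them to the programmer.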

Related to that, PL is currently suffering from a lack of popularity in funding and uptake by new students: they are gravitating towards areas that are "hot" with new technologies that are rapidly progressing (ML...cough). Can we make PL hot again, rather than fixating on the glories of a past that are well...in the past?

Interim results

So far (~20 replies in), it seems most respondents think programming will remain a widespread human activity for at least 30 years. That's certainly reassuring. Though I think people on LtU are a bit biased on the topic...

Look at programming 25 years

Look at programming 25 years ago, and extrapolate that into the future. Machine learning is making progress, but general intelligence is still very very far off, and I think for automatic programming you're better off looking in the automatic theorem proving literature than in the machine learning literature. That area hasn't made an unusual amount of progress. Moore's law is slowing down.

How we are programming will change in 25 years, but we will still be programming if things continue as they have been going.

Question is, will we be

Question is, will we be programming in much the same way (e.g., text-based languages that anyone who programmed in Fortran/COBOL would recognize as programming languages)? :-)

expect pareto principle to continue in spectrum of tools used

I expect the mix of old and new will change, with old coding style hanging on very perniciously for several generations, in some problem categories. A sudden break in better quality of new coding style seems likely, with newer end of the spectrum having a large sea change due to some tactic or other making outcomes better, cheaper, and more flexible.

The way old coding style is formulated might change, fitting new tools better. It might look completely different, while being structurally almost the same in terms of semantics. But I expect very slow progress in AI code making judgments about whether an algorithm works, and what mix of tactics will best suit a situation.

It would be much easier to write AI that verifies whether rules are followed with utter consistency, as opposed to AI that is clever about making up things from whole cloth. I don't see general intelligence happening in software for a long time, like not in this century. Progress in quality, as opposed to volume of data, seems to be very slow.

We've advanced more in the

We've advanced more in the past 5 years than we did in the 20 years before that. When the rapid progress stops, then we will be able to make better predictions about the future. But if you had asked me 10 years ago if self-driving cars would be a thing in 2015, I would have said no. Oops.

Automatic theorem proving is the PL mindset's answer to ML, and it is comparatively stagnant. If you believe the future of programs is all logical and well formed by construction, then of course you wouldn't consider ML, with its opaque outputs and error margins, and theorem proving would be the only option. Perfect is the enemy of good enough.

How much of the self- in

How much of the self- in self-driving car is programmed and how much is learned?

The easy parts are

The easy parts are programmed, the hard parts are learned. There are just some things that we can't program very well, control systems that have to accommodate arbitrarily complex conditions being one of those things.

Of course, a lot of programming still goes into the learning process.

How do you envision

How do you envision programming via ML? It's clearly an AI-complete problem, unlike self-driving cars, which require almost no intelligence but rather very good perception of the environment around the car. Until we've reached the stage of human-level AI, automatic theorem proving remains the main contender for any significant progress in automatic programming.

Personally, the thing I like

Personally, the thing I like about programming is that the computer does what I tell it to do, rather than it deciding on its own... You know, the reason we prefer them over, well... people.

The computer only does what

The computer only does what you tell it to, which is why it is not as effective as...well...people. We increasingly want our software to be smarter than what we can program manually...we want it to be like people in its ability to handle situations that the programmer did not anticipate, which means that more and more systems will have to be machine learned.

Seriously, don't ignore the huge investments that Microsoft, Google, Facebook, Amazon, ... are making in ML right now, and what they are using them for. PL researchers must at least have an idea of what their competition is up to.

I am deeply interested in

I am deeply interested in ML (as I was in AI when I was a kid). I am just not sure if in the present context (future of programming) it is really as relevant as you suggest. I used to think as you do, but this morning I am in a contrarian mood.

Put differently: will next generation learning systems be programmed? And if so, will they be programmed using similar tools as today? I think the answer is Yes to both questions for the time period I am comfortable forecasting.

Does the market for new

Does the market for new compilers alone justify graduating thousands of programmers a year?

Not sure I follow.

Not sure I follow.

In the future, just because

In the future, just because some programming is still involved doesn't mean programming is really still a thing that more than a few people do. And eventually, even building training tech will rely on....trained systems.

just because some

just because some programming is still involved doesn't mean programming is really still a thing that more than a few people do


Sure, that might happen. Question is when, so we can decide how much to invest in programming related technologies.

30 years I hope, I can live

30 years I hope; I can live with 20 for my own career. But who really knows?

Supposedly, Engelbart's mother of all demos back in '68 was not well received by many academics because they thought real AI was just around the corner...what good is a mouse and graphical display when the computer will just do everything for you! However, they were off by about 100 years.

Until then, I do see a nice future for PL being integrated more closely with ML (super smart code completion), and maybe debugging. There are plenty of mid points between completely manual and completely automated programming to consider! If I were a new student in PL today, I would look that way, but I've already sunk too much effort into experience.

There are really two

There are really two questions. One about where to invest time if you are a computer scientist, the other about where to spend money if you are a commercial entity. If you see programming becoming obsolete in fifty years you won't give lots of money to projects that will take that long to reach their goals (not that anyone funds things with that time frame).

Right now, you put your

Right now, you put your money where you see visible advancement, so that means ML and not PL.

The biggest problem facing PL today is the lack of meaningful progress. We have become the new theory field, and really...we don't want to be there.

However, they were off by

However, they were off by about 100 years.

Or 200 years ;-)

p.s. Where's my flying car?

I now have …

… "The Future" by the Tinklers running through my head. Thanks for that.

Oddly, though, although it covers flying cars, meals in pills, Esperanto and living on the moon, it doesn't mention better programming languages. It should.

I am on the pessimistic side. I strongly suspect we'll still be debugging stack corruptions and off-by-one loop errors way into the 22nd century.

"FOR statement considered harmful"

Time for an anecdote

When I was a small and measly child of about 11 years I had to undergo the British tradition of a "career advice day". This involves filling out a wildly pseudo-scientific questionnaire about personal strengths and weaknesses. Character-building stuff. After analysis (by an expert system no less, anyone remember those?) we had an interview with a "career guidance counsellor".

Expert: So the computer suggests the garbage disposal industry.
Me: That's nice, I'm going to be a programmer.
Expert (muffling laughter): Of course you are, what makes you think that you are cut out for that line of work and yet our expert system has missed it?
Me: Because I already know languages X,Y and Z and I've written programs to do...
Expert (with a look of sincere sadness in his eyes): That's great kid, but you may as well forget about all of that now. The computers will be programming themselves by the time you enter the job market. Those things you know are only 4th generation languages, and they're just getting the kinks out of 5th generation languages right now. Sorry kid, do you know much about handling garbage?

Sadly all true, and barely paraphrased at all. The joy of an English comprehensive education, that. So if I am a one trick pony, might be nice to ride it for another 30 years :)

That sounds horrible even by

That sounds horrible even by American standards. No wonder we revolted.

Funny (and heart breaking at

Funny (and heart breaking at the same time).

BTW, I always had a "soft spot" for the language generation thing (i.e. I hate it), which seemed like a non-rigorous but meaningful scheme up to the fifth-generation thing, with the famous Japanese project etc.

Once upon a time,

I told a fellow undergrad I meant to specialize in programming language design, and he told me there was no point because in ten or twenty years all computers would be optical computers that program themselves. I countered that if the computers were programming themselves they'd want a really good programming language to do it in. (I think I've mentioned that incident before on LtU; but, well, it's awfully relevant here.)

I suspect programming will change in different ways than we expect it to. Yet another application for the comment attributed to Mark Twain, "History doesn't repeat itself, but it rhymes."

We'll know computers are

We'll know computers are intelligent when they start programming themselves in functional languages. Surely it will take them a few more decades until they grok monads, but that will be after the singularity for sure.

So it's confirmed

That not even super-intelligent computers will master category theory...

I read it the other way:

the super-intelligent computers, being on the other side of the singularity, would be the ones to master category theory. (The relationship between category theory and god-like intelligence puts an interesting twist on the adjunction between mathematics and religion.)

Isn't well known that

Isn't it well known that category theory mavens are travelers from the future trying to lead us astray?

I think its a bit arrogant

I think it's a bit arrogant to think that they will need to work with monads. They will be much better at reasoning about state than us, and the "programs" that they write will be completely opaque to us.

Truthfully, I'm skeptical

Truthfully, I'm skeptical about the whole super-intelligence thing. It's certainly possible to be more intelligent (in one or another way) than the average human, since some humans are; but I think there are some fundamental limitations on intelligence. I don't know quite what they are, of course; but there seem to me to be some actual, bona fide trade-offs involved. Albert Einstein's intelligence was of a different, er, flavor than Leonhard Euler's; both first-rate of their respective kinds, but I don't think either of them could have done what the other did. And at a less elite level, I think there's a functional reason why human short term memory isn't much bigger, or much smaller, than it is. So, maybe they'll be better at reasoning about state than we are; but then again, maybe that could only be achieved at the price of, say, lesser general intelligence. I suspect there's some kind of trade-offs involved, and sooner or later perhaps someone will figure out what those trade-offs are.

It doesn't take a genius to

It doesn't take a genius to write code. Actually, much of the code written today is sort of routine and done in a mostly automatic (to the human writing it) manner. If it's just a matter of "finding" code that solves some problem through stochastic gradient descent, then (a) the computer will be able to do that without achieving anything close to real "intelligence" and (b) the code that it comes up with will be completely bizarre and non-modular from our point of view; it won't be an elegant Haskell solution.

If its just a matter of

If it's just a matter of "finding" code that solves some problem...

That's a big "if". I'm not so sure programming is such an easy problem. Various tasks that were expected to be "easy" now look likely to require full sapience (e.g., for translation between natural languages, there seems no substitute for actually knowing what the text means). Questions are harder than answers, so that in this case, knowing what problem to solve is a big part of the problem.

We must somehow make code

We must somehow make code differentiable, a challenge to be sure, but not an impossible one.
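As a toy sketch of what "differentiable code" can mean (my example, not the commenter's): replace a hard branch with a smooth blend, so that the result varies continuously with the condition and gradients can flow through it.

    sigmoid :: Double -> Double
    sigmoid x = 1 / (1 + exp (negate x))

    -- A hard conditional gives no useful gradient with respect to c...
    hardIf :: Double -> Double -> Double -> Double
    hardIf c t e = if c > 0 then t else e

    -- ...so relax it: the branch becomes a weighted average, and small
    -- changes to c produce small changes in the result.
    softIf :: Double -> Double -> Double -> Double
    softIf c t e = let g = sigmoid c in g * t + (1 - g) * e

Whether this kind of relaxation scales from a single branch to whole programs is precisely the challenge being pointed at.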

The Skype Translator today is actually quite useful...not as good as having a human translator but good enough for many tasks. It turns out that, no, you really don't have to grok meaning to get 80-90% of the way there, and we might find that we can even get almost 100% of the way there without intelligence.

Translating a human language

Translating one human language into another is a very different problem than translating some desiderata into a program that implements them. Two human languages are very similar. The words have to be translated and the order changed a bit, and then you're done. This is not at all the case for programming.

I only mentioned it because

I only mentioned it because the parent did; I happen to know a researcher working on Skype Translator, so it gets drilled into me a lot.

Programming is definitely different from machine translation and I hope I didn't appear to suggest otherwise.

My experience of translation

My experience of translation is mainly from the context of verifying Wikinews synthesis articles. To verify facts for a news article from reliable sources in another language you need a whole lot more than 80–90% comprehension. You need to know what everything means, especially including nuances. In my experience, working out a good translation for a single sentence can be extremely difficult on those occasions where it's possible, and there are nuances and even first-order meanings you'd never get without advice from a native speaker.

Although superficially I'd of course agree that natlang translation and programming are different, I suspect that ultimately they're akin in that you need full general-purpose intelligence for both. I don't think programming can ever be fully reduced to selecting programs (unless you simply give up on doing things right), because the act of selecting can never be separated from thinking about what you're doing.

DOCTOR: All elephants are pink. Nellie is an elephant, therefore Nellie is pink. Logical?
DAVROS: Perfectly.
DOCTOR: You know what a human would say to that?
DAVROS: What?
TYSSAN: Elephants aren't pink.

Not only are elephants not

Not only are elephants not pink, they don't play chess either. Neither task requires full blown intelligence, because they are much more about search and pattern matching/recognition than we consciously think they are. The logicians are going to lose PL like they lost AI, because elephants don't play chess, and it turns out programmers don't do formal proofs either.

battleship

I'm inclined to agree logicians will lose PL, though (as in the quoted exchange) I see it as because the task requires intelligence.

Long ago, as I was learning some of my first few programming languages (think Turbo Pascal 2.0, MBASIC), I used to find it interesting to write programs to play battleship against the user. I didn't try to optimize the computer's tactics because I was more interested in comparing the modularity of the program in different languages; the computer would place its ships randomly (subject only to the constraint they couldn't overlap), and fire randomly in unexplored areas unless it was following up unresolved information from a hit or near miss. Statistically, any human would mostly win against the computer because their tactics would be better than random guessing. Playing against the computer was boring. It didn't take much consideration to see that one could probably program the computer with tactics that would be statistically unbeatable, and it seemed clear that that too would be boring to play against. This led to a major aha moment for me: the thing that makes playing battleship interesting (to the extent that it is) is interacting with the other human player. After that long-ago insight, I was much less interested in later years to hear about a computer beating a human master at chess, or a computer beating human players at Jeopardy! (although that one may be somewhat interesting for what it says about Jeopardy!, in that apparently the appearance of knowledge from doing well in that game is somewhat illusory).
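For concreteness, here is a minimal Haskell reconstruction of the firing tactic described above (not the original Turbo Pascal/MBASIC code; the names are mine and a 10x10 board is assumed): follow up an unresolved hit if there is one, otherwise fire at a random unexplored cell.

    import qualified Data.Set as Set
    import System.Random (randomRIO)

    type Cell = (Int, Int)

    neighbours :: Cell -> [Cell]
    neighbours (r, c) = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

    onBoard :: Cell -> Bool
    onBoard (r, c) = r >= 0 && r < 10 && c >= 0 && c < 10

    -- Choose the next shot, given the cells already tried and any hits
    -- not yet resolved into a sunk ship.
    nextShot :: Set.Set Cell -> [Cell] -> IO Cell
    nextShot tried unresolvedHits =
      case [ n | h <- unresolvedHits, n <- neighbours h
               , onBoard n, n `Set.notMember` tried ] of
        (c : _) -> pure c          -- follow up an unresolved hit
        []      -> randomUntried   -- otherwise fire blindly
      where
        randomUntried = do
          r <- randomRIO (0, 9)
          c <- randomRIO (0, 9)
          if (r, c) `Set.member` tried then randomUntried else pure (r, c)

As the story says, a human playing with even modest tactics beats this handily, which was rather the point.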

Programming, I suggest, is more like battleship than like chess.

Programming is neither like

Programming is neither like battleship nor chess. It is a lot like medicine, where we regurgitate from a huge amount of knowledge and experience to solve similar problems over and over again. There is no logic or thought out strategy to it, just experience and remembering what incantations were used previously, or if something new, googling up a new solution. To top that off, we spend much of our time debugging code and tweaking it until it is good enough.

Programming is more like Jeopardy.

One does wonder how the

One does wonder how the different ways people's minds work are likely to affect the way they perceive a complex task, such as programming.

Well, it's good that at least we've reduced our differences of perception about programming to something really fundamental and important, like competitive sports. <is there an emoticon for struggling to keep a straight face?>

[Note: I do find these analogies helpful in understanding where different perspectives are coming from; thanks. Though I'd want a human being involved in diagnosing me.]

Actually, I was talking

Actually, I was talking about this with an ML researcher. He is convinced that we are really just pattern matchers and searchers, and that what we see as carefully orchestrated thinking is just lots of fuzzy matching. The only difference is scale; even naive ML methods work when data and compute power are vast enough.

Hm

Interesting. I, on the other hand, don't think scale can ever achieve something qualitatively akin to sapience; I see both as useful but for different purposes.

Those who study this don't

Those who study this don't share that opinion, mostly. But if you hold that belief, then it makes sense why one would think a computer could never program, win at chess, or drive a car by itself.

What's that about chess? I

What's that about chess? I never said that. I said I wasn't particularly interested in a computer's ability to beat a human at chess. I never claimed it couldn't.

Btw, although I'm certainly

Btw, although I'm certainly interested to hear what those in the field believe, I note on the skeptical side that those who choose to specialize in it would naturally self-select to be people enthusiastic about the technology.

Experience suggests that he is right.

I've heard the same from neurobiologists: after a few decades they have not found any magic located in the brain. It is only 4 or 5 simple statistical functions replicated 100 billion times. It is quite sad that the scruffy approach will probably win eventually. Where is the interesting pattern for us to discern in that result?

On the other hand, people

On the other hand, people who spend a long time trying unsuccessfully to cope with a very hard problem may tend to ascribe the solution to something they don't have access to at the moment. Such as truly massive scale. Likely, once they have the scale there'll be a controversy over whether or not they've solved the problem, leading to some researchers — but not others — looking elsewhere for the solution.

Necessary vs sufficient

It seems like the classic confusion between whether the scale is a necessary or a sufficient condition. I imagine that scale alone will not lead a system to program itself. But it does seem plausible that evolution has done a reasonable job of optimising the software to reduce the hardware resources.

Scale is probably integral

Scale is probably integral to the way the human mind achieves sapience, yeah. May be necessary to all ways of achieving it, though I wouldn't rule out some other means being available. As for nature optimizing, though, I'm not so sure. Optimization within the close neighborhood in search space, sure. But there's been a lot of difficulty over the question of why only homo sapiens developed sapience. It's downright embarrassing to folks trying to be humble about our place in the world by ridiculing nineteenth century thinkers who viewed us as the pinnacle toward which evolution climbed. Emphasis has been placed on the need to find a continuous path of gradual development by which sapience arose, and that makes the uniqueness thing more perplexing. I suspect the explanation is that sapience, while it's favored by evolution once it happens, can only be reached by a small (and perhaps unlikely) step from a certain direction, and it's very uncommon for a species to be situated at the point from which that small step is possible. So, we may be optimized within this little evolutionary pocket we've wandered into, but it was almost impossible to find our way into here, and if there are other greatly different ways to achieve sapience there's a good chance they're not reachable by a small step from anywhere evolution would ever get to on its own.

We are not special. What we

We are not special. What we call sapience is just the result of better brain capacity and a capability for language. We were first, but other animals can evolve there, and some are quite close already; it's just a matter of a few tens of thousands of years, which seems like an eternity to us but is just a blink in the history of life.

It is the "we are special" mindset that keeps getting destroyed by science. We are not special. We are just capable.

I don't claim we're special,

I don't claim we're special, I claim we're the beneficiaries of a wildly unlikely accident.

The attractiveness of the "we are not special" mantra is what makes our uniqueness so embarrassing. The idea that

We were first, but other animals can evolve there, and some are quite close already; it's just a matter of a few tens of thousands of years, which seems like an eternity to us but is just a blink in the history of life.

seems to me to weaken under scrutiny, exactly because a few tens of thousands of years doesn't seem like an eternity to me, rather it seems like (as you aptly put it) a blink in the history of life. You're describing a scenario in which life spends billions of years evolving on this planet without developing a sapient species, then suddenly a bunch of species are all developing it practically simultaneously. (I set aside, for now, the variant scenario that sapience has happened here many times before.) In the alternative scenario I've described, advent of sapience is a wildly unlikely event, requiring in effect that a species follow an evolutionary path that navigates a maze (how complicated a maze is unclear, but sufficient to make the event wildly unlikely); in this scenario, any perception that other species are quite close to achieving sapience is merely a result of underestimating what it takes to evolve sapience. If we were altogether wiped off the face of the planet (which might actually be quite difficult to accomplish as the whole species may be a lot harder to get rid of than our current civilization), I suspect it could be some hundreds of millions of years before another sapient species emerged. Or longer.

We are not more evolved than

We are not more evolved than other animals, let's just get rid of that thought right away. We are adapted for our niche, and that includes a set of features unique to us, but all animals have a set of features unique to them! We are no better than an elephant, dolphin, parrot, and arguably a colony of ants that exhibit intelligence collectively if not individually.

I said other animals could evolve our abilities, not that they will; that is not how evolution works. We didn't evolve a feature called sapience overnight, and it was hardly any more unlikely than any other feature that has evolved. Life goes on around us, and is just different from what we recognize as "sapience", not any less special and unique! What that means is that any feature is no harder to replicate than any other; we just need enough hardware and time to support that feature.

We are not more evolved than

We are not more evolved than other animals, let's just get rid of that thought right away.

You're trying to get rid of something that, not only have I never said, but I've repeatedly refuted. It seems we've a total breakdown of communication here.

I didn't say you said that.

I didn't say you said that. I just wanted to eliminate it from the discussion and make my position very clear.

Perhaps a confusing factor

Perhaps a confusing factor here is that, on careful reflection, while I don't consider us "special", there is a sense in which I do consider sapience special. I'd better expand on that. Each species you mentioned has something "unique" about it. But of these uniquenesses, sapience is qualitatively different from all the others. Reasoning makes it possible to discover strategies and tactics that evolution itself cannot directly access. It's not just a niche, it's a meta-niche; we can reason out new niches for ourselves that don't have to be reachable by gradual migration from anything else viable. I mean, of course, gradual genetic migration; evolution is to some extent able to handle abrupt jumps in phenotype. The phenotypical flexibility of genes has been remarked on, in recent years at least — a small change in genes producing a profound change in phenotype, with at least the seeming of an expanding genetic database of phenotypical tricks that needn't be in use by current species. But the proximate driving force is still random; the blind watchmaker is still blind. Reasoning has a different shape; search-tree pruning is focused locally rather than globally (sorry, I've never tried to put this distinction into words before). One person can potentially reason out, in what might as well be zero time from an evolutionary perspective, a solution that evolution couldn't ever arrive at. There's no escape by saying "evolution led to sapience, so sapience is just a way for evolution to access those solutions", either, because that doesn't diminish the qualitative uniqueness of sapience unless there's also some other way for evolution to access those solutions. If sapience is the way for evolution to access those solutions, it's still singular.

Sapience is as different as

Sapience is as different as the others are different, however, which leverage hardware budgets for different activities and optimizations. We are quite self centered and think our reasoning capabilities put us above the rest of the animals, but in reality it just frames us with respect to each other. Our technology is great, sure, but ants are still outcompeting us. Other animals build things, use tools, and communicate using language in ways that we just can't comprehend. We are just slightly different apes who can talk.

So, with that being said, if you believe human reasoning was a grand, unusually unlikely accident, then it could be a very long time before computers can emulate and surpass it. If you believe that sapience is as likely as any other feature, then it is very clear we will get there if we can get to the points before it that involve less hardware (doing an ant brain, rat brain, dog brain puts us on the way).

Still, automatic programming doesn't require sapience, so the differences in belief are quite moot to the question at hand.

Sapience is as different as

Sapience is as different as the others are different

My point was that it's different in a different way than the others; I did then elaborate some on why/how.

We [...] think our reasoning capabilities put us above the rest of the animals

I hope you're not trying to put me in that box. I don't see that "above" is a well-defined concept in this context.

Other animals [...] communicate using language in ways that we just can't comprehend.

There's reason to think this is not so. Have you read Deacon's The Symbolic Species?

Still, automatic programming doesn't require sapience, so the differences in belief are quite moot to the question at hand.

Seems to me these differences of belief bear on whether or not it's true that automatic programming doesn't require sapience. Making them quite salient to the question at hand. It seems that if one believes sapience is just a large-scale application of unremarkable elementary search tactics, then one is likely also to believe it is unimportant for tasks to which it is applied, whereas if one believes there's a trick to it, something that makes it more than the sum of its parts, then one is likely also to believe it brings a qualitatively different perspective to the tasks to which it is applied. (The latter view is more subtle than one technique doing tasks "better" than the other; brute force is likely to excel quantitatively on its favored ground, sapience qualitatively on its favored ground.)

You're describing a scenario

You're describing a scenario in which life spends billions of years evolving on this planet without developing a sapient species, then suddenly a bunch of species are all developing it practically simultaneously.

That's the way evolution works. Major advances don't just happen in one place; the machinery for them to happen generalises to the point where it's not only inevitable that it will happen once, but more than probable that it will happen over, and over again.

I very much doubt we're the only sapient species to have developed. We may be the first where being sapient has been enough of an advantage to help our survival. We may be the last, but that'll be down to our behaviour, not evolution.

That's the way evolution

That's the way evolution works. Major advances don't just happen in one place

Sometimes; the definition of "major advance" is tricky. Who says sapience is a major advance? A sufficiently specific, idiosyncratic development doesn't have to be likely; and if it's located up a sort of cul-de-sac in evolutionary search space, and its benefits don't accrue from nearby genetic configurations, then that development could in fact become something wildly unlikely, happening only very rarely and by accident.

I very much doubt we're the only sapient species to have developed. We may be the first where being sapient has been enough of an advantage to help our survival. We may be the last, but that'll be down to our behaviour, not evolution.

Heh. Re our behavior, there's nothing distinctive about it; we may be individually morally outraged at some of it, but the ability to take that individual perspective is a luxury of sapience. Standing back from the individual perspective, our collective behavior is what it evolved to be, and I doubt another sapient species would behave significantly differently.

The possibility of sapience having evolved before is rather interesting. There's the suspicion that it's enough of an evolutionary advantage that once one gets into the sapient mode, it's exceedingly likely to lead eventually to a neolithic revolution and subsequent explosive memetic evolution to a high-tech civilization. To make the repeated emergence of sapience work as a theory, one ought to explain why one chooses to reject that suspicion.

[...] and I doubt another

[...] and I doubt another sapient species would behave significantly differently.

Neanderthals were, by all accounts, less aggressive than us. Had we not evolved, their behaviour likely would have been pretty different.

I also think your statement is easily falsified given how significantly even human cultures differ. I doubt very much that the Native Americans would have ever developed like the Europeans or the Japanese.

The Europeans ran roughshod

The Europeans ran roughshod over the native Americans. That's not just some random coincidence, it's natural selection at work. The more aggressive culture dominated. The final chapter has not yet been written, of course, but I suggest the brutality in the earlier chapters may at least have been necessary plot development.

(Btw, re how much we actually know about the Neanderthals, have you read David Macaulay's Motel of the Mysteries?)

Being quoted twice...

...probably indicates you said something rather interesting in this sentence:

You're describing a scenario in which life spends billions of years evolving on this planet without developing a sapient species, then suddenly a bunch of species are all developing it practically simultaneously.

Actually I would only find this unlikely if I expected the feedback events in the process to be statistically independent from one another. If I did not know that then I would suspect that there was a catalyst prompting a phase transition from one sort of population to another.

Sure. It could be that this

Sure. It could be that this batch of life is just on the verge of blooming into sapience, and we happened to be first. My intuition suggests to me this probably isn't so, because I'm inclined to doubt some element necessary to sapience is now cropping up broadly across diverse species in the biosphere that hasn't already been around for some time. But I do acknowledge the possibility; mammals are a fairly recent development, after all. In which case, if we disappeared altogether, another sapient species might come along in perhaps a few million years rather than hundreds of times that long. (Keeping in mind that we were probably running the wetware for a couple of million years or more before our memes underwent their neolithic phase-change.)

I don't claim we're special,

I don't claim we're special, I claim we're the beneficiaries of a wildly unlikely accident.

This isn't an easily tested hypothesis. Even if it were common, any better adapted sapient species would quite possibly (even likely!) kill off any other sapient species, just like we did with the Neanderthals, so you're inferring a probability given evidence subject to significant bias.

Yes, "claim" is an overly

Yes, "claim" is an overly strong term. The sort of shorthand terminology one is apt to somewhat inattentively drop into as a timesaver, and then spend more time defusing later than one saved in the first place. I'm suggesting a hypothesis, even one I'm favorably inclined toward; one considers such a thing along with alternative hypotheses, comparing and contrasting, considering differences in implications and predictions, etc.

Engineering battleships

It is a lot like medicine, where we regurgitate from a huge amount of knowledge and experience to solve similar problems over and over again.

Sounds suspiciously like engineering to me.

Well, and everything else at

Well, and everything else at some level. Trial, error, experience, replicated success. We see it more purely in fields where we understand less, like medicine.

Math and theory are really exceptional in this regard. It will be a very very long time until a computer is creative enough to come up with new theory and math. It's just that for most programming, that isn't required.

I've been mocked on LTU for saying this before

but I'm unimpressed with mathematicians pontificating on what is good programming. Programming is not mathematics.

Speak for yourself

Sean McDirmid wrote:

There is no logic or thought out strategy to [programming], just experience and remembering what incantations were used previously, or if something new, googling up a new solution. To top that off, we spend much of our time debugging code and tweaking it until it is good enough.

Speak for yourself, pal.

You just think you are in

You just think you are in control and not just pattern matching very quickly based on experience and knowledge. Well, that is the general illusion of sapience.

In part true, in part false

We want computers to do what I mean, not what I say. We don't want them to go beyond what we want. They must always show us in the best light, and when anything arises that causes that image to fracture then they must provide an avenue to direct the blame elsewhere.

Seriously, the only reason that these companies are making their investments is to increase their market share and hence increase the total number of currency units they control and their control of their environment (political, financial, etc.).

Any benefits that accrue to the rest of us are incidental.

ML != AI, the notion of AI

ML != AI, the notion of AI complete doesn't really apply (if it ever did even to classical AI).

All we have to do is figure out how to make code differentiable and annotate our corpus correctly with the language we want to describe our problems in, then it's just a matter of using one of many off the shelf algorithms. Now it is true that we are horrible at describing problems, but being able to evaluate a solution to a bad problem description allows the user to iterate on the problem description as well (could you call that programming? Perhaps, but programming is very much a task of just figuring out what you really want!).

We already know how to get character level DNNs to write plausible code without having any goal at all, theorem provers have no such capability to learn from a huge code base...they are similar to constructed AI with a fancy reasoning algorithm. Automatic theorem proving also has the drawback that you have to express your problems formally, which is of course ludicrous since we often aren't even very sure of the problems we are trying to solve.

I guess that's a good test

I guess that's a good test of general AI: translate a requirements document into code that implements it.

But a programmer would

But a programmer would translate it with bias, filling in details that weren't specified. Writing the requirement document itself is half the battle! With ML, people will just half ass the requirements, see what the computer comes up with, refine it, see what the computer comes up with, wash rinse repeat. It is the ultimate agile, and why human programmers won't be able to even compete at that point.

The point of self driving cars is not to free people from driving, but to optimize the use of expensive infrastructure (roads). People often just think the former and miss the big picture, the real disruption will be that in a city like Beijing, we will actually be able to breathe again and get to work on time. Likewise, the point of automated programming is not to eliminate programmers, but to allow stakeholders to actually get what they wanted rather than what they had to settle for. It won't replace programmers so much as it will fundamentally change our relationship with software.

Interim results

With ~60 votes, ~70% think we will be programming in much the same way in 30 years. Well, if that's what you think then investing in programming language research makes sense!

I wonder

Can we take a machine learning system (with deep learning, of course...) and have it learn to behave by observing a prototype or older version, thus eliminating the need to further maintain the original code?

"Programming language professionals: programming is here to stay

With 100 votes in, 71% think programming will remain the same sort of activity it currently is for at least 30 years. Pretty complacent, if you ask me. This is either totally myopic or entirely realistic. Only time will tell.

Wikicode

An interesting case study in the future of programming is now unfolding on the wikimedia sister projects (the most familiar of them being Wikipedia).

Wikis took off in the first place because wiki markup is easy to use and easy to learn — when you edit a wiki page, if you're doing something very simple you probably either don't need to know any markup techniques, or if you do there's probably an example right there that you can imitate; and as you edit such code routinely, you're exposed to nearby examples of other simple markup techniques, so that by the time you want to use them you probably already know how to do them. There's a rudimentary mechanism provided for extending the markup language: "template" pages, which are naive macros. I disapprove of naive macros, of course; I maintain that not only do they not provide good abstractive power, but they actively interfere with later attempts to improve abstractive power; but one thing that can be said for wiki templates is that they provide a continuous learning curve, following within the pattern of wiki markup generally: users can gradually pick it up by osmosis.

A fundamentally conflicting model of how wikis should work is being pushed, at great expense, by the WMF (Wikimedia Foundation, the nonprofit that operates the servers for Wikipedia and its sister projects). For years they've been laboring to reduce the ability of volunteer contributors to access the levels at which wikis are controlled. They're imposing a pseudo-WYSIWYG interface on the wikis, hiding wiki markup from users and thus depriving them of the constant exposure that has traditionally provided a continuous path to increasing technical expertise. There's now an extension that allows template pages to transform code, under the hood, using Lua code; I acknowledge that Lua is a nifty little language, strikingly easy to use for what it's good for — but in a wiki context, procedural programming has a terrific impedance mismatch with wiki markup and very high administrative overhead alien to wikis, and the effect is to further block ordinary wiki users from any continuous path of self-improvement. All this seems to me to be a classic instance of Conway's law: a centralized organization arranges things to maximize central control of software, strongly favoring restriction of programming to an elite, preferably of people in their employ. Which, of course, fundamentally undermines the core reasons for success of a movement based on grass-roots volunteerism.

There's an alternative model for moving forward, in which facilities are wiki-markup-driven; it's all about empowering the users rather than disempowering them. There are editor designs around (the comments systems on some blogs, iirc) that combine writing markup with instantly seeing its visual consequences, which might allow users to enjoy the sort of editing aids the Foundation wants without losing the continuous learning path of wiki markup. Trickier but perhaps also possible, wiki markup itself could be carefully augmented in a way that lets users do easily within wiki markup, and within the continuous learning path, what the centralized model would shunt off discontinuously into procedural languages such as Lua. It's a big question whether that can happen, though, because the Foundation generally treats wiki markup as an evil to be eliminated. Go figure.

I've put some deep thought into what programming would look like if it were done the way wikis grow, by some form of radically open crowd-sourcing. It's safe to say it would require some deep evolution of the wiki model, and fundamental shifts of emphasis in the programming process. If successful, it might produce something that would startle both the current Wikipedian community and the current professional programming community. And it might be very hard to say whether or not the result was "programming" in the traditional sense.

User diversity

For years they've been laboring to reduce the ability of volunteer contributors to access the levels at which wikis are controlled.

I disagree somewhat. My reading is that the WMF pushes WYSIWYG on its users because Wikipedia ended up attracting only "geeks" and rejecting other users, while the WMF wants Wikipedia to be for all humanity — and I approve of the latter goal. I agree some non-programmers will slowly learn Wiki markup, but for many others markup will be a barrier. That's why the WMF wanted WYSIWYG by default (having it only for logged-in users seems a bad idea, for this goal).

I like your ideas, but given how bad humankind is at technical education*, I think WYSIWYG-by-default is more important if you want user diversity. If you can reconcile additional goals with that, great; but I'm skeptical. Of course, one could debate (elsewhere?) how much diversity Wikipedia should actually aim for.

*On technical education: I take the discredited "The camel has two humps" as evidence of how bad programming education often is, together with the amount of work it takes for a proper introduction to programming using, say, How to Design Programs. OTOH, Wiki *is* easier than programming, I just wonder to what extent.

What I always find

What I always find remarkable is that we consider our arcane icons and conventions intuitive simply because we were indoctrinated to consider them WYSIWYG. Read some Iverson, damn you! (This outburst is not directed at the post above!)

The WMF kid themselves into

The WMF kid themselves into believing their plan is justified based on a fictional population of potential users. For the success of the projects, the people they actually have to attract are in fact geeks, because those are the people who will go on to become more serious contributors. A wiki cannot actually be supported entirely, nor even mainly, by drive-by contributions, important though drive-bys are to recruitment and breadth of perspective. (It's also relevant to the character of the folks they attract, that students are key to the community, because they have the time to get drawn in and are at an impressionable age so that, once drawn in, some of them will continue contributing after they graduate.) Ease-of-use matters anyway; but, just as I don't think the WMF has a realistic model of their potential contributors, I don't think they really understand geeks, either, nor the nature of the technical activities needed on the projects. (I wasn't sure at first if that was getting off topic, but on reflection I think it's actually still precariously on-topic.)

Re technical education, I note that instruction manuals are a poor way to pass on knowledge (not to belittle the importance of literate versus oral society). Nobody reads them. It's long since become a joke; "when all else fails, read the instructions"; and, on the other side of the expertise threshold, "RTFM". Wikis, such as Wikipedia, have gone in very heavily for instructional documents describing their internal function for the simple reason that creating documents is what wikis are especially good at. An alternative is to tweak wiki markup, just slightly, by adding a few simple-but-potent primitives for interactivity. Then the wiki community, working entirely within wiki markup, can build interactive "wizards" for performing various tasks on-wiki that require expertise. Crowd-sourcing the development of such "software" is really the only way to do it, because the wiki community are the only ones who know what's needed (the WMF, and the programmers it employs, can never efficiently acquire that knowledge by liaising with the community, just as they can't take over writing the wiki content themselves and merely liaise with the community to learn what content is wanted). At that point, wikis cease to be merely documents and become something like "software", and the process by which the "software" grows is downright alien to a professional programmer, to the point where it's hard even to recognize what is happening as "programming".

Censorship and exclusion are larger problems on Wikipedia

Wikimedia Foundation is fiddling with formatting instead of addressing Wikipedia's more fundamental censorship and exclusion issues. For example, Wikipedia Administrators are now busy trying to censor mention of Inconsistency Robustness and exclude its discussion.

Better technology for "programming" Wikipedia could help eliminate censorship and exclusion to make it a more inviting place and thereby expand the community.

You might also be interested

You might also be interested in research on end-user programming — http://web.engr.oregonstate.edu/~walkiner/teaching/cs609-wi14/ — in particular the surprise-explain-reward paradigm — see new forum topic.

This wizard idea sounds interesting, and could enable such interactivity.

On the WMF, you have a self-fulfilling prophecy: let "geeks" build Wikipedia, and only "geeks" will contribute to the resulting community. But lots of evidence suggests (though not conclusively) that geek communities, left to themselves, will end up discouraging people who could give useful and important contributions*. In particular, Wikipedia also needs, say, archaeology geeks, who might not be computer geeks (so that markup is an obstacle), but could still be contributors. How sure are you that such people don't exist?

*For instance, people will flame you to a crisp if you point out (intelligently) that you can't select smart programmers by checking if they are fluent with shell scripting. Here, at least, most will agree there are legitimate complaints against bash. But I still recommend Philip Guo's excellent essay, or even better his The Two Cultures of Computing.

All geeks are computer geeks, Nellie is a geek, therefore...

Not all geeks are "computer geeks". But they do have properties in common. I suspect that while it's a losing strategy to try to make wikis appeal to non-geeks, one can make them appeal to a broader class of geek — or, viewing the same process from a different angle, to make computer geekdom appealing to a broader class of geeks. I'm reminded of a report from someone-or-other in recent times, that (iirc) they'd been testing out the WMF's WYSIWYG interface on a bunch of librarians with no wiki experience, and had trouble getting the librarians to use the WYSIWYG interface because the librarians found wiki markup itself so easy to use (to the librarians' surprise because they'd been told wiki markup is hard to use). I do agree that communities tend to close themselves to outsiders; a community really has to be designed (whether by self- or external will) to remain open.

Re: The Two Cultures of Computing

It seems the two-fold distinction in the essay should be closely related to the one Ehud makes below.

Looking at the two fold culture

users with guis vs. unix with text programs, pipes and byte streams...

I think it's high time we admitted a few display technologies into programming.

People are already gravitating toward xml as a gui description language (all other approaches are disappearing).

We might as well have our source in xml with guis allowing us to embed pictures and controls.

But as they're saying in the feedback thread, having knobs sounds more powerful; something more than static description languages is called for.

yeah

the '2 cultures' to me is more an indictment of all our approaches to programming than anything else. We need to be in the Star Trek future. So I would urge everybody to at least profusely apologize to all those new kids in class that our environments are not more consumer-esque. HyperCard, take me away!

two cultures and graphics (btw, xml must die)

Philip Guo's essay on two cultures of computing is a contrast of user culture (i.e. glossy iphone graphics) and programmer culture (i.e. shell command man pages, etc). An older traditional meaning of the phrase "two cultures" is a reference to C.P. Snow's Two Cultures, which was basically an art-vs-tech opposition, characterized by humanities degrees versus science (stem) degrees in colleges. (As a literary topos, it was repeated tediously in the 70's and 80's, until eternal September in the 90's helped wash away any persistent historical perspective in internet venues.) It was originally a "you don't speak my language" meme, caused by the different world views, which Guo is likely re-purposing to contrast graphics vs text in user interfaces.

To get a less text-oriented programming interface only requires pursuing that as an objective, making that "what you want" in the development phase of creating an IDE (integrated development environment). This is an example of what knob tuning is unlikely to manage, because someone must program knobs to encompass the scope of potential graphics layout, behavior, and behavior binding. Of course, once you have some parameterized graphics knobs, you can probably tune them within an expected design range. However, graphical programming languages are not very dense in terms of amount of semantics you can pack within a bit of screen real estate. (I worked on one in '93, running on NeXT stations, but I was struck by how few parse tree nodes you could pack into editable areas.)

It would not be terribly hard to make something like HyperCard run in a browser. But because the number of ways you can do it is so very large, you can expect several competing intermediary design goals to emerge in the form of inter-developer wars of an almost religious nature, about such issues as language, thin-vs-thick, persistence, messaging, OS integration, and processing models like how threads and fibers are used, and even whether sync or async is the default orientation. If you made them all split up into different groups, with a little competition and sharing of components, at least one ought to survive. But not if they waste all their time fighting.

As I see it, the single biggest force preventing such efforts at cooperative invention is a habit and practice of competition, as learned and encouraged in school systems. There's a correlation between folks talented enough to be good programmers and folks who do well in school systems, and cut-throat competition might be instilled at the same time coding is learned. As you move into the business world, it only gets worse, because any product that is not the one you are trying to sell now (even if it's an old product you want to mothball) can only shrink your potential market. So a there-can-be-only-one incentive is definitely present, rewarding any zero-sum tactic to undermine competitors, even your own models from last year.

That said (and I hope this sounds funny), I find xml a horrible kludge that needs to die. Since that's not what I imagined using, it could only get in our way, sapping our precious resources. (Insert Dr. Strangelove reference here.)

On a serious note, you can treat an app like a browser as a dumb terminal, where your programming environment runs in another process (or group of processes). Then you can have a mixture of text and graphics interfaces, depending on what you want. A library of standard stuff can be used to bootstrap newcomers until they learn enough to replace parts that start to constrain them.

monotonic

Two views:

It easily takes 10-20 years for PL research to go mainstream. Most of the design research today adds types to the same old code, or is more-computer-assisted-yet-still-manual. That makes me sad.

The other is that the number of people coding is increasing. They increase at the top of the stack faster than at the bottom, so the relative weight will shift. That seems good. Whether what they do to achieve 'coding' is what we consider coding.. I don't think so. ML + synthesis + VPLs (ex: Tableau) are amazing, and libraries enable stuff like IFTTT. Ex: if cat then that

More like 20-40

It easily takes 10-20 years for PL research to go mainstream.

So far, it has been more like 20-40. (Which leads people with short attention span to mistake PL research as irrelevant.)

Still waiting for those

Still waiting for those parentheses to go mainstream.

programming languages that don't use parsers

seem like kludges by people who don't know how to program simple things. That's why Forth is in reverse Polish.

And if lisp macros are so powerful why are they never used to make the language readable?

Perhaps one answer to that is that the idea that you can add anything significant to a pre-existing language is an illusion, because anything you add won't be compatible with other programmers' code, or won't be considered prestigious enough that programming shops will use something non-standard.

Some forms of language

Some forms of language extension are self-defeating; rather than cleanly moving things forward, they twist the language into a mess from which the only way to make further progress is to start over from scratch. Macros are like that. TeX provides macros for extension, and when the LaTeX macro package needed to be extended further, it required a massive project spanning decades. But just because some forms of language extension are like that, doesn't mean all are. Fexprs lack the foundational defects of macros. Seems they may have the potential to support smooth extension. Or at least be a step in the right direction.

In themselves they don't get rid of the parentheses, though. The parentheses are a separate issue: they exist because Lisp has natively no notation at all for control structure; it only has a notation for data structure. Some Lisp dialects try to pretend otherwise, but only make things messy thereby because they're denying the nature of the language. For a Lisp syntactic strategy to stand a chance of success it has to recognize that and work with it, rather than fight it.

McCarthy never intended S

McCarthy never intended S expressions to be the canonical way of writing LISP code; it just sort of wound up like that. It makes me wonder if there might be a better path to homoiconicity that just never got explored.

There is, prolog

You can match on structures the way you type them, thus decomposing or substituting, you can also compose them from lists or decompose them into lists with =..
http://eclipseclp.org/doc/bips/kernel/termmanip/EDD-2.html

Since Prolog allows you to specify new operators at run time, =.. works fine with arbitrary operators too. I'm not sure how it handles macros, but you can apply macro phases by hand.

That kind of extensibility

That kind of extensibility lives on in unapply/extensible pattern matching.

Never heard of it

where can one find "unapply/extensible pattern matching"

Maybe check out Burak's

In any case

if you weren't aware of this I'd like to make it clearer.

You can match your way through structures in Prolog as easily as you can walk lists in Lisp. Though it may actually require a nonstandard extension in one case.

a(1,2+3) = a(B,C) matches the structure and sets B to 1 and C to 2+3, for instance (note that 2+3 is itself a structure, also written as +(2,3)).
Note that you can break things down further in a single match: a(1,2+3) = a(B,C+3) matches C to 2.

But a(1,2+3) = A(B,C), where the functor position is itself a variable, relies on a nonstandard extension (supported, by the way, by the Prolog I linked to above).

In any case it could be done in a standard way with
a(1,2+3) =.. [A,B,C]

Note you can break a term into car/cdr with =.. too
a(1,2+3) =.. [CAR | CDR]

Also to help you talk about it, =.. is called "Univ"

I see no reason for lisp to have lists as the only input format. Prolog demonstrates that there is nothing wrong with parsing to a list rather than requiring the input to be a list. You don't lose any generality, and the transformation is bidirectional and trivial.

Edit: In case you don't know Prolog: upper-case letters start variables and lower-case letters start atoms.
Also, "=" means unification, which is a bidirectional pattern match; if it can't succeed, execution backtracks.

I've implemented my fair

I've implemented my fair share of Prolog interpreters so I'm pretty clear about how unification works. Advanced pattern matching brings some of that to the table, with unapply acting as a surrogate for what otherwise occurs through backward chaining.

McCarthy never intended S

McCarthy never intended S expressions to be the canonical way of writing LISP code; it just sort of wound up like that.

Yup, in effect they blundered into it. Imo, that outcome was pretty much inevitable, because S-expression Lisp is (up to minor variants) a point of resonance in design space, profoundly more powerful than anything else in its neighborhood. Anyone who hit on it, even accidentally, would create something that would continue to resonate down through the decades (though it's too early yet to say "down through the centuries", however much one might suspect it).

But the best syntax is no

But the best syntax is almost no syntax at all? That's a weird conclusion to reach, and the vast majority of programmers don't find it very usable. Of course, such hyper-regularity is great for programmatic manipulation of code, but is it worth the cost?

Not every specific

Not every specific characteristic of S-expression Lisp is necessary to the resonance effect. Some characteristics really do just happen to have fallen out the way they did. Some eventually change. Some stay as they are by tradition. Some stay as they are because those who have tried to do better have discovered the hard way that it's actually remarkably difficult to do better. I'm rather fascinated myself by the syntax issue; from all those failed attempts to reform Lisp syntax, one doesn't have to conclude that it can't be done, but one certainly ought to conclude that it can't be done the way people have been trying to do it.

The "is it worth the cost?" question is well-traveled territory within, as well as without, the Lisp community. Lots of answers have been offered.

Lisp has jokingly been called "the most intelligent way to misuse a computer". I think that description is a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts. — Edsger W. Dijkstra, The Humble Programmer, 1972 Turing Award Lecture.

Great quote! To connect this

Great quote!

To connect this subthread to the original question -- if programming (not the "knob turning" variety) becomes less wide-spread, might we not see less pressure on readability by non-experts and a move toward more expressive, if obscure to the uninitiated, notations? APL anyone?

clojure

for sufficiently nuanced values of 'mainstream'.

It was a JOKE... but yeah.

It was a JOKE... but yeah.

so was mine

but yeah. ;-)

Luddite the Ultimate

Looking at the poll results. Just because you don't want something to happen in the future doesn't mean it won't.

I wonder

I wonder how many selected the last option because it seemed the most fun ....

I would have gone for

I would have gone for "longer than 25 years", but I wasn't comfortable with the excess baggage of the last option. Then again, I almost never participate in surveys because I almost always have objections either to the way they're designed or, failing that, to the way they're likely to be interpreted. The last time I recall participating in a nontrivial survey was, let's see, about 25 years ago, and from things said afterward I concluded the people analyzing the results had an agenda that would prevent them from interpreting my answers correctly. (No, this wasn't a political survey; I've never participated in those. It was about how well people understand statistics, with those doing the study having a strong bias to interpret the results as supporting their thesis that people are clueless about statistics. Ah, irony.)

I think this was just meant

I think this was just meant as fun, not a serious survey. Though I really do wonder what people think the answer is. I think a lot of PL people would think that the need for programming would last forever. It is difficult to not think this way, since it is one of the hardest things we do. On the other hand, computation is something that we have been consistently pushing up the chain since the activity started (a bunch of secretaries sat in a room doing math by hand).

Of course it was for fun. I

Of course it was for fun. I think the answers do tell a story, and the discussion so far has been great, but the poll? Really? Cold dead hands and everything? I thought the Onion-like headline was a dead giveaway.

Are we having fun yet?

Sorry if my serious remark about surveys struck a sour note. Even when having fun I tend to stay firmly tethered to the serious side of things. In my childhood, my mother would sometimes call me "Eeyore". In graduate school, I knew this guy who'd grown up in Ukraine. Interesting perspective, he had. I remember he remarked once he was always impressed by American supermarkets because there was stuff on all the shelves. He also once remarked that I reminded him of a character from a children's story that was popular in Ukraine, though he wasn't sure if we had it in English, the character's name being "Eeyore".

The last option simply means

The last option simply means people will be programming longer than our lifetimes (~50 years). That's not so unreasonable.

re The last option means

Heh. I picked the last option because we should resist the centralization of control over computing.

noticing, framing, deciding, and planning

I have a big question related to yours (about how long we'll keep programming in ways we are used to), which delves into phases that compose programming, under a premise that phases will evolve at different rates. I will leave the number of phases as an exercise for the reader. :-) But a couple of initial phases are important and perhaps difficult to target with automation:

  • Recognize a problem exists.
  • Decide what to do about the problem.

Depending on the problem-solving ideology one subscribes to, this may gloss over a lot of internal steps involved in massaging a problem statement and framing an acceptable solution budget and time-frame. How much of noticing and shaping the problem statement is programming? Actually trying to code up a planned solution is just one part of programming, and perhaps not the principal part of it.

McDirmid notes earlier that math and theory creation are things software addresses very little (and which presumably require strong AI). How much of programming is ideation about theory? About causes and effects? Even statements of fact? If we want automation to turn a crank and generate code for us, presumably we must indicate a result we want, expressing a plan about inputs and outputs in a way capturing an agreeable solution to a problem. We don't expect software to notice the problem by itself and create a plan, do we?

It seems clear that more and more things are being automated, machine learning is improving, systems are becoming harder to tinker with, and so on.

Systems might be easier to tinker with if actually designed to be easy to alter, instead of merely being the most expedient thing generated by earlier coders' development process. In fact, the growing body of old software is one of the things making it harder to program, because it increases the difficulty of perceiving the exact nature of problems, and appropriate means of addressing them.

We can probably automate a lot of the plan-execution phase of coding, but it may be very hard to notice problems, shape plans, and make decisions. The part of programming that amounts to formulating relevant issues may stay with us a long time.

Systems might be easier to

Systems might be easier to tinker with if actually designed to be easy to alter

My point was that they are now designed to be harder to alter.

The part of programming that amounts to formulating relevant issues may stay with us a long time.

For me, this requires programming just as much as building solutions. Programming is an expressive medium, not just something that comes after understanding requirements. (Come to think of it, this is one reason why waterfall models fail.)

If programming becomes just

If programming becomes just turning some knobs until you get what you want, is it still programming? Right now, the feedback loops are impossibly long, which agile tries to address at the human engineer level. But imagine if you commanded a team of programmers who were really quick and who weren't expensive. Then you could be really agile about requirements and just try out many solutions until one was adequate. Is that programming on your part? Or just getting what you want?

taste to guide successive trials must come from somewhere

I'm okay with rapid feedback, turning knobs, and affordable programmers. My focus on having someone understand doesn't require a long runway (like a waterfall model), just that a cogent idea of current cause-and-effect relationships is roughly on target. Spontaneous and changing plans ought to work, if tools are forgiving and fast, if something like a clear plan for system behavior exists at least informally in one or more minds.

I can imagine a team creating a mess when they don't collectively have a common agenda, or at least grasp what one person wants. When they twiddle knobs and you don't get what you want, how do you correct course without knowing what happens now? Maybe that's not a problem if a valid "you are here" description always pops out of the tools. But I expect chaos or limbo is a result of random walks in digital effects.

There's a lot of ceremony and process in current development that's completely unnecessary -- a total waste of time -- sometimes in the name of following a plan ("doing stuff" in order to "provide value"). Tools to short-circuit that sound fine. It's just that computing tech acts like an amplifier much of the time, and it's just as easy to amplify poor objectives as good ones. Faster doesn't need to yield smarter.

People will create a mess

People will create a mess when they don't have a common agenda. Agile only partially solves the problem of not having good requirements, you actually still need good requirements, good developers, time for them to work...that code is expensive and you can't just throw it away.

But if a machine is writing the code, then it is cheap and disposable, you can just have it redo the program with refined requirements until you get what you want. It is completely different model than how we build software today. It is not automating programming, it is making software engineering much more fluid and viable. The process would just be totally different from what we recognize today.

Automated programming without a tight feedback loop is stupid: the programming system isn't going to magically know what you want, and likely you won't either! But it gives us a chance to iterate: I might not know what I want, but I sure as hell know what I don't want! So it produces something, and you tell it what is wrong about it rather than telling it what should be right about it! This is actually how much of the real world works. We are the weak link in the process, and this would allow us to be much more effective.

re: what you want

(Yes, I reply only to clear the acid after-taste of genocide politics, so my questions are largely rhetorical, and don't need answers unless yours are remarkably good ones with useful insights.)

Will there be modules and replaceable parts? Or just a big organic lump of spot-welded monolithic executable? Jules Jacobs recently noted the importance of being able to check whether parts obey contracts at module boundaries. Will interactive knobs also include module boundary identification and characterization?

One problem with interactively tuning code until you like it is that it doesn't exercise the whole state space, so what might have happened with other inputs (or another configuration) may not be visible at all, encouraging an out-of-sight-out-of-mind complacency.

Surprising things can hide in complex code, even if it seemed to behave the last time you looked. For example, one hard thing to see is internal processing that happens more times than necessary, because it's idempotent and extra repetitions have no effect beyond burning a little more fossil fuel. Doing something twice instead of once is hard to see in profiling, even if a thousand times would show up (if you looked). In the 90's I would only find these walking through code in a debugger, when I was sure I had done it exactly right.

A common sort of bug I have to find these days is when nothing happens. You send a message, or an event, or do whatever it is that trips a reaction, and it's just ignored as if it never happened. Often there are up to dozens of places where input can be rejected as noise, as invalid, as not fitting the config, as being inconvenient just then, or some other kind of stimulus not expected in the current runtime state. A user just sees silent unresponsiveness, which is not conducive to diagnosis and correction.

If getting what you want is limited to observing what is visible in a user interface, undesirable things can lurk under the surface. They can happen (or not happen) without drawing attention, or lurk until a different combination of future conditions is achieved. It's hard to turn empirical observation into an accurate gauge of future potential without a theory about internal mechanisms. So automated machinery ought to have a transparent mode where you look inside at the wheels turning, to see which gears jam and which don't turn at all at surprising moments. Realistically there are too many of them for a human to grasp without some kind of hierarchical breakdown and summary.

natural modularity

Knowledge is already modular. The process of programming is entangling knowledge with other knowledge to implement something that actually does something. So if the computer was programming, the modules would be in its training set, but the output code would be an optimal tangled mess of concerns! Think of the original goals of AOP: to separate concerns via meta-programming. Well, if you have a tool that is generating code from modular higher level descriptions, no point in making the generated code modular...in fact the whole point is to have it tangle knowledge so you don't have to!

I think it was only with great effort and interpretation that the Google researchers were able to identify a "cat neuron" in their image recognition work using deep learning. My colleague who works on SR basically describes the resulting DNNs as pretty opaque; they are the ultimate black boxes. But think of it as an evolved system: you'll be able to identify subsystems (circulatory, nervous, respiratory) that emerge organically.

If there is one thing computers are good at doing, it is iterating methodically through a state space. In fact, this probably has more to do with generating the code in the first place! But at the end of the day, you'll only get a program with N% chance of being correct and optimal, where N is high but not 100. Uhm, but how is that different from the way development is done now?

I doubt that just because the computer is generating the code we will be able to stop testing on users. I mean, that is part of the knob-turning process. Oops, nothing happened, something was missed, have the computer do it again with additional liveness constraints! We could look inside (it would be much like medicine is today), but why bother if we could just rebuild it with additional requirements? We can't do that with life since...well, we are attached to living things, but programs are disposable since they are so easily recreated.

Skeptical

I am skeptical that programming can be reduced to knob turning; it is a complex, non-linear, multi-dimensional activity.

Testing only works remotely well because we can do corner case analysis. With a black box like a neural network everything is a corner case, and the failures are unpredictable. Why one image of a cat is recognised as a cat and another not is unpredictable.

I think a specification accurate enough to define the problem, for a search to find a program, is itself a program. You just move to writing the specification (a bit like programming in Prolog).

I think the best we can achieve would be something like a bank of algorithms, where you search for a solution to a problem specification using a Prolog-style search, each algorithm having a mathematical specification. The solution to the initial problem is effectively a proof search over the algorithm bank. I don't know how useful it would be, but it seems interesting.

(1) we will never absolute

(1) we will never, absolutely never ever, specify programs. The logicians have lost. You will just keep refining some set of vague requirements until you get what you want.

(2) computers are much better at corner cases than humans.

Logicians will win

Logicians will win because our artificial neural networks will eventually be smart enough to completely specify programs. Oh you wanted the programmers to be humans?

[insert bemused emoticon here]

The logicians have already

The logicians have already lost AI; NN's have nothing to do with logic. The logicians will also probably lose PL sometime in the future, though that hasn't happened yet.

Logicians and PL

I suppose I should read the thread more carefully, but there's a huge difference between "proofs about programs" and "programs that rely on provers or logic methods".

I.e., proving that your program is correct is torture. Using logic to run your program is a pleasure.

So I hope that logicians get more say at a direct level in programming.

I don't like languages that force programmers into a narrow paradigm, logic or functional. And I think that what's nicest about logic programming may have nothing to do with logic; it might be a matter of stealing the logicians' favorite algorithms and pointing out that they're useful even when they're not working in a logic domain.

NN's don't involve proofs or

NN's don't involve proofs or provers. They are as far from a logical method as one can get! There is only corpus and what is generalized from that corpus. So the unreasonable effectiveness of ML is what got the logicians out of AI...

As for whether the selected code in a corpus is based on logic, I don't think it matters to the computer, at least.

Right.

But fully specified programs are useful.

I'm saying that if we make NNs more reliable than humans with better memory and organization then THEY can write fully specified programs for us.

And who knows, maybe humans will make NN AIs and NN AIs can make logic AIs.

Or to put it another way

NNs require a different sort of architecture and much greater computing power than programs. So we will always need programs to solve problems using less silicon and watts.

But we still cannot (or will

But we still cannot (or will not) write specifications, so even if the computer could act on full specifications, it's not a very interesting capability.

Bayesian Propositional Logic

NNs are statistical machines, and can be reduced to Bayesian decision trees (you can extract the tree from the trained network). This is really just a probabilistic logic (the Bayesian interpretation of probability is an extension of propositional logic).

So the logicians have won, they just don't know it :-)

extracted tree

Does this mean that Google can train some ungodly huge network on every picture in the world or translations from Korean to Urdu, then convert that network into a tree, simplify it to take out the nonsense connections, and then convert that tree into a chip more efficiently than as a neural network?

Network Chips and Sequential Computers

Whilst I am sure you can simplify the chips, I think the parallel implementation is optimal. The Bayesian tree representation is more useful for execution on a sequential computer, and for reasoning about the decision making.

Predictability

My point was that a trained network will have a predictable failure rate, say 1/1000, but that is the limit of predictability; we have no way of knowing which cat will fail to be recognised.

With source code, we can see the corner cases (if x < 0 then...), we read the code and find all the decision points, then construct all the branching paths through the source code. We can then construct a small test suite to test all the code paths for correctness. We can thus be reasonably sure the code is correct.
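
As a minimal sketch of that point (in Prolog, to stay with the earlier examples; abs_val/2 and test_abs/0 are names invented purely for illustration): the decision point X < 0 is visible right there in the source, so a three-line test suite covers every path, including the boundary. Nothing comparable can be read off a trained network's weights.

    % abs_val(+X, -Y): Y is the absolute value of X; one clause per branch.
    abs_val(X, Y) :- X < 0, Y is -X.
    abs_val(X, X) :- X >= 0.

    % A tiny test suite exercising every code path, including the corner case.
    test_abs :-
        abs_val(-2, 2),   % negative branch
        abs_val(0, 0),    % boundary corner case
        abs_val(3, 3).    % non-negative branch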

Computers can analyse source code to find the corner cases, but they cannot do so with neural networks. Any simplification of the network becomes an approximation, so it is not predicting the exact failures.

When I write programs I start from requirements. Refining the requirements to a concrete specification is programming. This is an iterative process, and the requirements may change and not be well defined, but the process of coding that specification into a rigid formal language is what exposes all the inconsistencies and problems that need to be dealt with to have a reliable process.

With source code, we can't

With source code, we can't detect cats.

The computer is taking requirements and refining them into a specification, which just happens to be the program. And as in real life, this translation is where all the problems are exposed; i.e., you don't get what you want because you didn't really ask for the right thing. That doesn't change; just the feedback loop is tighter and the middle man is eliminated.

Anyways this is all futurology, so we really don't know how this will play out. We all have very different ideas about the nature of programming and even intelligence, which affect how we each see the future. I also get now why the PL field is so stagnant, as it seems to be a popular consensus that the future will be like the present and the past.