The Future of Programming according to Bret Victor

Bret Victor's The Future of Programming looks at the promising future of programming as it presented itself in 1973 and what we should expect it to be 40 years later, i.e., today. A lot of things that seemed crazy (GUI, Prolog, Smalltalk, the Internet) became reality, but we might still be held back today by the same skepticism over what constitutes programming as in 1973. At the same time, engineering seems to have carried us a lot farther than Bret Victor is willing to admit. Victor advocates four changes to move programming into the future:

1. from coding to direct manipulation of data
2. from procedures to goals and constraints
3. from programs as text dump to spatial representations
4. from sequential to parallel programs

If nothing else, this is an entertaining and well-produced video of his presentation.


Link to the worrydream note

Link to the worrydream note page for the talk:

http://worrydream.com/#!/dbx

A cheaper link

http://worrydream.com/dbx/

Bret does a lot of expensive JavaScript things with that # anchor tag.

Procedures

People haven't changed. People think in terms of procedures. Nothing proposed as the new paradigm of computing matches the fine degree of control that people think procedures give them. That's why C is still being used for new development; it's procedural.

Re: Procedures

Why do you believe that 'people think in terms of procedures'?

Have you tried to explain driving home safely with procedures? Or juggling? Or answering questions on a history test? Or poetry? How do procedures account for an on-the-fly decision to pick up some gas or groceries while driving?

It is my impression that procedures are very simplistic and rigid compared to how people think. Many tasks humans take for granted - like standing upright, walking, recognizing objects, or predicting the consequences of throwing a rock at someone's window - are very difficult to explain procedurally.

While C being procedural might explain part of its success, I would attribute that success more to the procedural model being a good fit for individual CPUs. Plus the network effect: many libraries in use today were developed years ago in the context of procedural languages.

I've seen "Foo is how people think!" claims again and again and again - for OOP, for procedural, for spreadsheets, for rules-based programming, for fuzzy logics, for soft constraint systems, etc.. The most common aspect of these arguments is the shallowness of the argument, the unspoken assumption that 'how people think' is good for programming, and the lack of empirical evidence. Get some empirical studies. Try to explain complicated human behaviors and relationships. Or, if you're unwilling or unable to have real knowledge, just don't make the claim.

Recipes and instructions

Recipes and instructions work well for describing tasks "precisely," which is why they are used when tasks both need to be and can be described that way. The examples you are pulling out are mostly hard to teach, require subjective interpretation by the learner, and are not very good for telling a dumb computer what to do!

And computers are not humans. Even if we figured out hard AI, procedures might still be desirable when precision is needed, for the same reason we resort to them when instructing fellow humans.

Get some empirical studies.

I was taking your argument seriously up until this point. If you are sure you know something about human nature that is so relevant to programming, the onus is on you to do the experiment, not complain about other people's failures to experiment on connections they did not see themselves. Empirical data on human behavior is hard to collect and extremely unreliable; it just can't be collected randomly.

Procedures can come in many forms

Yes, you're right: procedures can be very useful, especially in emergency situations. Take chapter 1 of Kowalski's book "Computational Logic and Human Thinking"
http://www.doc.ic.ac.uk/~rak/papers/newbook.pdf
where he looks at the emergency notice of the London Underground as a logic program and gives its computational interpretation:

Emergencies
Press the alarm signal button to alert the driver.
The driver will stop if any part of the train is in a station.
If not, the train will continue to the next station, where help can more easily be given.
There is a 50 pound penalty for improper use.

Kowalski's got a knack for explaining advanced logic programming without using any symbols. This book is the easiest read that still manages to deal with inductive, deductive, and abductive reasoning.
I think the PDF file is a draft and I've only read the published book.
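As a rough illustration of the idea (my own Haskell sketch, not Kowalski's actual formulation), the notice's conditional clauses can be read as an executable rule mapping the observed situation to the driver's action:

```haskell
-- Sketch only: the notice's clauses as rules, with "running the program"
-- reduced to evaluating a function from situation to action.

data Situation = Situation
  { alarmPressed  :: Bool  -- has a passenger pressed the alarm signal button?
  , partInStation :: Bool  -- is any part of the train in a station?
  }

data Action = Stop | ContinueToNextStation | KeepDriving
  deriving (Show, Eq)

-- "The driver will stop if any part of the train is in a station.
--  If not, the train will continue to the next station."
driverResponse :: Situation -> Action
driverResponse s
  | alarmPressed s && partInStation s = Stop
  | alarmPressed s                    = ContinueToNextStation
  | otherwise                         = KeepDriving

main :: IO ()
main = print (driverResponse (Situation True False))
-- prints ContinueToNextStation
```

Kowalski's point, as I read it, is that the English notice already has this logic-program structure; the encoding above just makes one possible reading of it explicit.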

Programming using Assertions, Goals, and Plans

Precision is orthogonal

Precision is mostly orthogonal to paradigm. Recipes and instructions can be varying degrees of precise or imprecise. So can other programming models.

The examples I'm providing are hard to communicate to a computer in terms of procedures even if you use imprecise procedures (pseudocode, etc.). Humans think in many ways that don't fit the procedural model (or modern CPUs). That's the point of the examples.

If you are sure you know something about human nature that is so relevant to programming, the onus is on you to do the experiment

And if Shawn H. Corey claims that "people think in terms of procedures", is not the onus on him? Or do you have double standards? Sure, the knowledge he claims is difficult to collect and unreliable. But shouldn't that make the burden of proof even greater?

My own observation of human nature most relevant to programming is that humans are adaptable. Humans can easily meet machines half way. For example: riding a bike, or driving. Humans don't naturally move around by turning a wheel and pushing pedals, but they adapt. This adaptability is aptly demonstrated in the context of programming by our ability to work with so many models and concepts - procedural, OOP, spreadsheets, recursion, dataflow, esoteric languages, etc.. - even in those cases where we initially find them awkward.

With such adaptability, why should a programming model seek to match human thought? It seems to me there are more useful, verifiable improvements to be had - e.g. to equational reasoning to support refactoring, local reasoning to support composition and decomposition, externalizing state to support extension and live programming. Pure functional programming is one of the few models where rarely does anyone claim it's "how humans naturally think"; functional idioms and intuitions are learned. Instead, its advocates point out how FP simplifies specific (and often shallow, syntactic) forms of reasoning, refactoring, and reuse.
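To give one small, standard example of what I mean by equational reasoning supporting refactoring (a Haskell sketch, not tied to any particular language design):

```haskell
-- The map-fusion law, map f (map g xs) == map (f . g) xs, justifies
-- collapsing two list traversals into one; the rewrite is defended by
-- the equation alone rather than by stepping through executions.

doubleAfterIncrement :: [Int] -> [Int]
doubleAfterIncrement xs = map (* 2) (map (+ 1) xs)

-- Refactored form, equal by the law above:
doubleAfterIncrement' :: [Int] -> [Int]
doubleAfterIncrement' xs = map ((* 2) . (+ 1)) xs

main :: IO ()
main = print (doubleAfterIncrement [1, 2, 3] == doubleAfterIncrement' [1, 2, 3])
-- prints True
```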

Empirical data on human behavior is hard to collect

If you focus on shallow behavior, it is not hard to collect. It is not difficult to watch people and find which mundane tasks are consuming their attention and energy - e.g. whether it be refactoring code, restarting a multi-process application, searching documentation, selecting libraries for download, or integrating independently developed frameworks. Why bother with deep concepts like humans' psychological nature or 'how people think' when we can achieve so much by simply looking at what humans do and seeking to automate, simplify, or optimize those tasks?

Precision is mostly

Precision is mostly orthogonal to paradigm. Recipes and instructions can be varying degrees of precise or imprecise. So can other programming models.

My point was that when humans want to instruct in an unambiguous, non-subjective way, they fall back to procedural/imperative language.

The examples I'm providing are hard to communicate to a computer in terms of procedures

Yes, because we do many things that are not easy to communicate procedurally. Take juggling as your example: even getting a computer to juggle requires machine learning; programming is useless here. Want to "program" a self-driving car? Think you can do it all with declarative rules? Nope: you need a machine-learned system coupled with a kernel of hand-written code (procedural or rule-based, probably both) to handle high-level control, input, and output.

Humans think in many ways that don't fit the procedural model (or modern CPUs).

Actually it is "humans CAN think in many ways that don't fit the procedural model." We obviously can think procedurally when necessary. We can also think in other models, thankfully, and our programming models can also be equally diverse (though they invariably hit a limit where machine learning must take over).

But shouldn't that make the burden of proof even greater?

I think his statement is too strong (missing a "can"), but your reaction is just as strong, if not stronger. Moderation was missing. Procedural programming was chosen at the beginning because it was easy to understand (a) how to instruct the computer and (b) how the computer could interpret these instructions. Whatever came later, we have yet to bake it into hardware, probably because procedural still makes sense at the lowest levels. It is not an "accident."

With such adaptability, why should a programming model seek to match human thought?

The bigger problem with a model is actually Norman's evaluation and execution gulfs: how can I instruct the computer to do what I want it to do, and how can I be sure the computer is actually doing what I wanted it to do? A higher-level programming model is no good if it is opaque in either direction. With procedural code, I have a high confidence in each direction even if the model is more difficult to work with...but in Haskell...do they even have a debugger yet? Now, we can definitely get to higher levels of abstraction in our models, but let's just not forget why procedural was successful in the first place.

If you focus on shallow behavior, it is not impossible to collect.

Fixed that for you. Actually, it is still hard to collect and analyze the data, but just in terms of time and money. Data on any more complex behavior will be too unreliable and expensive to collect empirically, and subjective methods must be used instead.

Or, if you're unwilling or unable to have real knowledge, just don't make the claim.

Ok, now this raises the question: how many scientific empirical studies on human subjects have you actually performed?

Shallow Semblance of Procedural

Consider: A planning system comes up with a plan. This plan is relatively linear, with just a few contingencies. Does this plan resemble a procedure? If you communicate this plan, might your language be imperative? Does this mean that the thinking leading to this plan was procedural?

Modification: During execution, the plan is continuously revised and refined based on observations that were not anticipated in the original planning phase (e.g. a phone call asking you to pick up some milk on the way home). How similar is the execution of this plan to what we traditionally call 'procedural programming'? Does such a refinement process exist in human behavior?

Consider: Procedural programs often consist of tens of thousands of lines or more, larger procedures being well defined in terms of sub-procedures. But when we try to shape human behaviors at the scale of tens of thousands of sentences, what kind of language do we use? Laws, contracts, trade, rules, obligations, expectations, and more. Very little of this seems to be procedural.

We obviously can think procedurally when necessary.

A skilled procedural programmer can think procedurally. A skilled functional programmer can think functionally. But to what extent do untrained humans naturally think procedurally? How do you distinguish actual procedural thinking from the shallow semblance thereof? How well does human thought match procedural programming at larger scales?

Norman's evaluation and execution gulfs: how can I be sure the computer is actually doing what I wanted it to do? A higher-level programming model is no good if it is opaque in either direction. With procedural code, I have a high confidence in each direction

My confidence with traditional procedural code becomes very low once I introduce concurrency, network disruption, timeouts, complicated callbacks, etc.. How about yours?

I agree this 'evaluation and execution gulf' is a relevant concern, but I think that matching untrained human thought patterns isn't the answer. I think that composable programming models are the answer, especially reactive models that can integrate frameworks without callbacks.

how many scientific empirical studies on human subjects have you actually performed?

Back in university, over a dozen, usually serving as a participant or assistant observer.

A typical experiment would involve multiple batches of fifty or so participants (due to limited room size), and assistants would be tasked with watching for specific kinds of behavior (rubrics, score sheets) and recording them. The ratio of participants to observers was typically 5 to 1. Participants would have a list of tasks to complete, might or might not be allowed to ask questions, and would be paid something for their participation and a little more for completion or performance. (College students are cheap. :)

Modern ML is getting pretty good these days, and could potentially replace the observers for a lot of behavior tracking.

After leaving university, I haven't participated in these studies, but I have on a few occasions worked with a group in the company that performs such studies. This group is how I learned that intelligent context-based menus are problematic, along with a bunch of similar useful little rules of thumb.

it is still hard to collect and analyze the data, but just in terms of time and money

Studies certainly aren't free. There is a cost in time, money, and facilities. But the alternative - operating with a few subjective opinions and assumptions, occasionally overhauling a design after a bad assumption, taking blows to reputation - is also not free.

There do seem to be some cultural issues. Budgeting for such studies is often frowned upon. Developers or engineers are expected to simply 'know' the sort of things you'd expect to learn. Worse, often they do know, the results are predictable, and therefore many studies seem to be wasted. Future studies become more difficult to justify. Further, it's those cases where you predicted wrongly that prove the worth, but wrong predictions can look a lot like incompetence.

The 'science' can be difficult to sell, depending on management.

expensive to collect empirically, subjective methods must be used instead

User experiences can be useful, but they aren't very reliable. The number of biases and blind spots a human has for his or her own behavior is appalling. I never trust a claim that a person "knows how he thinks" - they only consciously experience a small aspect of it, remember even less, and fabricate to fill the gaps the moment they're asked.

Consider: A planning system

Consider: A planning system comes up with a plan. This plan is relatively linear, with just a few contingencies. Does this plan resemble a procedure? If you communicate this plan, might your language be imperative? Does this mean that the thinking leading to this plan was procedural?

OK, I'll give you this: we probably don't think procedurally, but we often instruct procedurally, as this is an efficient way of transferring knowledge; procedural is more like a marshaling protocol, and at either end it is translated into some other kind of thought (or perhaps electrons).

Consider the recipe vs. the thought and experimentation that actually went into designing the dish. As the follower of the recipe, I'm definitely free to improvise to my tastes, and there are gaps in the instructions that I fill in with my own subjective thoughts. But until we've developed Vulcan mind melds, we'll still rely on procedural instructions for transferring many kinds of information.

My confidence with traditional procedural code becomes very low once I introduce concurrency, network disruption, timeouts, complicated callbacks, etc.. How about yours?

I can still debug the code, which is more than can be said for many declarative techniques where code just exists outside of time.

Back in university, over a dozen, usually serving as a participant or assistant observer.

That is better than most who call for empirical experiments. They are not cheap, especially when you have a couple of hundred questions to answer with data. The move toward data mining might save us - at least Google is playing with this - but many interactions can only be observed in person. But then, who knows, perhaps we can just mass-deploy sensors to measure everything; we are going there anyway with the NSA.

But the alternative - operating with a few subjective opinions and assumptions, occasionally overhauling a design after a bad assumption, taking blows to reputation - is also not free.

Designers rarely field empirical tests, and they seem to do well enough. Only the experimental psychologists do real empirical testing, and then it is only for something fundamental and shallow.

The number of biases and blind spots a human has for his or her own behavior is appalling. I never trust a claim that a person "knows how he thinks" - they only consciously experience a small aspect of it, remember even less, and fabricate to fill the gaps the moment they're asked.

Are you following this method in the design of RDP? I would be very surprised if you could afford to "know" without bias.

Designers rarely field

Designers rarely field empirical tests, and they seem to do well enough.

Sure. But success is relative, right? And you're comparing these designers to other designers who rarely field empirical tests. :)

Would empirical tests allow designers to do much better? Probably not; a good designer will predict most of the results correctly. Would empirical tests help designers avoid a major mistake? Possibly. If one's reputation is on the line, it might be worth the investment.

more than can be said for many declarative techniques where code just exists outside of time

Declarative techniques are not especially difficult to debug, but the idioms for debugging are different (and tend to be more declarative). Besides using QuickCheck to test ad-hoc properties with automatically generated inputs, I happen to like rendering things as the basis for debugging. One can render functions using graphs, fact sets using clustering algorithms, images or diagrams in a straightforward manner. Renderings can be interactive, setting some of the inputs with sliders or whatever.

Declarative techniques can be temporal, too, e.g. representing sounds and motions over time or as a function of user input. In these cases, time can be debugged using a spatial representation (time on one axis) or using a slider, or using color to highlight recent change, or a number of other ways.

But the 'step debugger' traditionally used for debugging procedural code isn't as useful since declarative expression doesn't really have 'steps' in that sense.
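For concreteness, the QuickCheck idiom mentioned above looks roughly like this (a minimal, made-up property, not anything from a real project):

```haskell
-- Declarative debugging/testing in the QuickCheck style: state a property
-- as an ordinary function and let the library generate the inputs.
import Test.QuickCheck

-- Invented example property: reversing a list twice yields the original.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
-- typical output: +++ OK, passed 100 tests.
```

When a property fails, QuickCheck reports a shrunk counterexample input, which plays roughly the role that a breakpoint plus inspection plays in a step debugger.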

Are you following this method in the design of RDP? I would be very surprised if you could afford to "know" without bias.

No. I don't rely on assumptions or knowledge about how people think. I am not confident that matching human thought will automatically lead to a better programming experience, and I am not confident that I could recognize 'matches human thought' when I encounter it. So I don't try.

The design method behind RDP focuses only on consistent formal properties that are useful for simplifying specific forms of reasoning, optimization, and interaction, or on patterns that I observed to be useful or desirable in earlier generations of systems I've studied. My hypothesis - unproven, but based on my observations of FBP, FP, spreadsheets, and other models - is that these objective properties will contribute very effectively to a good programming experience.

Sure. But success is

Sure. But success is relative, right? And you're comparing these designers to other designers who rarely field empirical tests. :)

Most designers work for companies and have limited resources. They are on schedules, just like programmers. Is it great to prove your code works? Maybe, but who can afford to do that in the real world unless they are working in a critical field?

If one's reputation is on the line, it might be worth the investment.

Usually, you have to do something, so you make the best choice possible, and if it turns out to be wrong through real-life experience, you correct it using real data.

One can render functions using graphs, fact sets using clustering algorithms, images or diagrams in a straightforward manner. Renderings can be interactive, setting some of the inputs with sliders or whatever.

This is basically an experimental probing method that I leverage also, especially great for when you want to treat the program as a black or gray box. But really, both techniques are orthogonal; you can't claim that B is better than A & B! The problem with declarative techniques is that they haven't figured out how to get "A" back yet...well, you can debug the declarative-to-procedural translator (interpreter), which is what I often end up doing.

I am not confident that matching human thought will automatically lead to a better programming experience, and I am not confident that I could recognize 'matches human thought' when I encounter it. So I don't try...My hypothesis - unproven, but based on my observations of FBP, FP, spreadsheets, and other models - is that these objective properties will contribute very effectively to a good programming experience.

Most of us follow the exemplar approach to design also. I just don't think it's very fair to throw the "empirical study" card around unless you are successfully using it yourself. Argument by example is a bit more productive.

Rendering can be useful for

Rendering can be useful for intermediate steps. I wasn't assuming a black box for the program. One can 'show the work' - the intermediate values, the search graphs, the derived facts. This relates to what Conal Elliott was doing under the name 'tangible values', or what Roly Perera is doing under the name 'self explaining computation'.

That subsumes your "A" techniques for declarative programming. Step debugging can be understood as a really inefficient and ephemeral way to 'show the work' in procedural.

I just don't think it's very fair to throw the "empirical study" card around unless you are successfully using it yourself. Argument by example is a bit more productive.

The rule for everyone is: Try not to make claims that you can't defend, and be prepared to defend or retract what you claim. It's a fair rule. Not every claim needs to be defended by empirical study, of course, but I don't believe there is any rational non-empirical defense for 'people think in terms of procedures'.

The position you seem to be suggesting - that we cannot ask for empirical studies without being involved with them - seems both unfair and elitist. Is that really what you intend?

I'm not fond of "argument by example". Examples chosen for such arguments are often biased, simplistic, not representative. Where examples serve well is communicating and explaining an argument, not proving it.

This relates to what Conal

This relates to what Conal Elliott was doing under the name 'tangible values', or what Roly Perera is doing under the name 'self explaining computation'.

All of these techniques apply to imperative computations also...though a bit of live programming might be necessary. You would have to argue that the A techniques are not applicable to procedural code.

The position you seem to be suggesting - that we cannot ask for empirical studies without being involved with them - seems both unfair and elitist. Is that really what you intend?

I am not in the category of people who could demand an empirical user study (or theoretical proof) to back up a claim. The quickest way I've found to shut down arguments for such is to ask if they do any of these studies at all. People who do empirical studies for a living actually shut these arguments down much faster than I can, as they are much more knowledgeable about what is involved and what they can show reliably.

It is possible to have reasonable arguments without resorting to the "I want scientific proof!" demand. Your examples were good enough to get me to weaken my claim (from "humans can think procedurally" to "humans can communicate procedurally").

No lab rats were harmed in this argument.

The quickest way I've found

The quickest way I've found to shut down arguments for such is to ask if they do any of these studies at all.

There is nothing wrong with shutting down an unsupported argument.

Of course, a person doesn't need to answer with his or her own studies. It's sufficient to point at a study someone else performed. (This is why I said "get some empirical studies" and not "do some empirical studies".)

Any claim about humans (how many limbs they have, average height at age 11, how they think, etc.) can only be supported based on empirical observations. There is no logical argument that can tell you the median number of limbs a human has; you must actually look at humans. If you make a claim about how humans think, you've put yourself in a much more difficult position because that sort of information is not casually observable.

It is possible to have reasonable arguments without resorting to the "I want scientific proof!" demand.

This is true. But the "I want scientific proof!" demand can also be part of a reasonable argument. Asking that a claim be defended is not unreasonable (though we can do so without the exclamation mark). Expecting a surprising claim to slip by without being questioned is unreasonable.

I don't need a study to make a claim

I don't believe that OOP is "natural". That is my belief. I don't need to commission a study to make that claim...geez.

If your claim is "I believe

If your claim is "I believe OOP is natural" or similar, then you don't need a study. You're just stating what you believe.

If your claim is "OOP is natural" or similar, then you should be prepared to put up (provide an appropriate defense) or shut up. Without sufficient evidence, such a claim should receive just as much respect as "God wants you to program in OOP."

The difference is pretty big. You are entitled to your own beliefs, not to your own facts. IIRC, there are modal logics that properly track the distinction between beliefs and facts.

Or shut up?

Maybe you need one of your self-imposed sabbaticals again. I don't know what you think you gain by your argument style except to piss people off. Do you communicate this way with your colleagues, or did they already send you down to storage room B with a can of Raid?

To everybody else, I apologize for the rudeness, but sometimes this guy needs it.

Speaking Vulcan

In vernacular English, I could say "Obama is the best candidate" without having scientific evidence of that fact. Surely, it is qualified with "I believe", but almost everything I say in an informal conversation is. Now if I say, "OOP is natural" in a scientific paper, I might have more problems, but even then...you are allowed some license to leave out the "I believe" qualifier, especially if it coincides with the beliefs of the community; i.e. saying "OOP is natural" might be ok for OOPSLA but will get one a beat down at ICFP.

Taking the Vulcan "everything must be stated as fact or explicitly as belief" approach makes having a civil argument quite inefficient.

Oh, and OOP is natural with respect to its resemblance to the noun focus of natural language.

One can easily clarify that

One can easily clarify that they meant 'believe'. And often this can be determined from context. What matters is the actual (intended) claim, not how it was phrased and spelled. There isn't much value in attacking how things were worded if you have reason to suspect something else was intended.

Shawn's claim goes on to explain the success of a language, etc. based on procedural being how we think. (This also implies a position that no other programming model is closer to how we think, else it would enjoy C's success.) AFAICT, he's presenting it as strong fact.

There are no strong facts,

There are no strong facts, just strong beliefs. I might believe my evidence is strong, and as far as I'm concerned, the fact is true. Even if I'm wrong, that doesn't mean I'm being intellectually dishonest.

I might believe my evidence

I might believe my evidence is strong, and as far as I'm concerned, the fact is true.

That's fine. Just be prepared to provide your evidence if your (what you believe to be or presented as) facts are questioned.

Even if I'm wrong, that doesn't mean I'm being intellectually dishonest.

I agree.

Of course, if evidence for a claim is obviously weak or non-existent, then either you'll have suspected this at the time of making the claim, or not. Only one of those conditions could be considered dishonest, but neither is flattering.

OTOH, if your evidence is strong but complicated or incomplete, such that it could be interpreted a few different ways... that's the domain of many interesting arguments.

Oh dear

Sean McDirmid: Oh, and OOP is natural with respect to its resemblance to the noun focus of natural language.

I assume this was tongue-in-cheek, but since this utterly ridiculous argument regularly comes up in PL "discussions", here is my tongue-in-cheek reply:

1. Do you have evidence for the "noun focus" in natural language?

2. In all of them?

3. Do you have evidence for the "noun focus" in practical OOP?

4. Why do I redundantly write x.GetHeight in OOP all over the place when I would write just Height(x) in FP and never use verbs? Does FP perhaps have an even stronger "noun focus"?

5. As a consequence, is FP perhaps even more "natural"?

6. What is your definition of "noun focus"?

7. Do you have any evidence that your definition of "noun focus" makes any sense?

8. Do you have any evidence that "noun focus" according to this definition has any cognitive relevance?

9. Do you have any evidence that the whole fuzzy argument of "naturalness" has any relevance at all?

Off topic and too nested

But happy to oblige.

1. In the languages (English, Chinese) I know? Sure, nouns mostly precede verbs :)

2. Well, it's best to take a look at Greenberg's linguistic universals. There are some dominant VSO languages, but they always have SVO alternates.

3. In OOP, we think about objects that do things. In FP, we think about...functions?

4. I would write x.Height (in C#, which supports real properties, unlike Java), which just means "the Height of x"; this is still noun-focused. "Height(x)" is not even sensical: "Height" is a property, not a function!

5. FP wasn't designed to be natural (in terms of natural language), it was designed to be math-oriented. How much more accurate can you get than proof by design goals?

6. Objects do things. Objects have things. Objects are things.

7. nope.

8. NA

9. We have dedicated hardware through an evolutionary fluke to acquire language and communicate with virtually no explicit training; the same is not true for learning math, which at any rate comes about 50K years after language in our evolutionary lifetime.

I believe someday, someone will acquire real empirical evidence that OOP is superior to FP in terms of how most of us think. But that won't be me, I don't really care that much; I am happy to exist in a world where people do not believe the same as me.

In the Real World (tm), I

In the Real World (tm), I perform some routine to determine an object's height; I don't query the object for it. Height is a function.

In case I've mistaken this for a serious discussion, please feel free to thwack me ;).

Are properties and

Are properties and attributes functions? Interesting philosophical question.
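For what it's worth, Haskell picks a side on that question: declaring a record field gives you an ordinary function. A tiny sketch:

```haskell
-- In Haskell a record field accessor is literally a function:
-- `height` below has type Box -> Double.
data Box = Box { width :: Double, height :: Double }

main :: IO ()
main = print (height (Box { width = 2.0, height = 3.5 }))
-- prints 3.5
```

So, at least in that corner of the design space, the property vs. function distinction is purely one of syntax.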

Imperative

As far as I can tell, the imperative form does not normally have a subject (in most western languages at least). And "real world" OOP is all about imperative statements, not assertive sentences. So how is its formulation "natural"?

There is nothing evidently natural about OOP. It's an ideology, nothing more.

Let's start a new topic

And I have some great fodder for the cannon.

Great Talk! But deployment of the ideas has stagnated :-(

Bret gave a great talk!

He presented three key ideas for the future of programming that have developed over the last forty years:
1) Direct interaction with displays, e.g., 3D displays with speech and 3D gestures
2) Higher level programming using goals, assertions, and plans, e.g., Direct Logic
3) Large-scale concurrent programming, e.g., Actors using ActorScript

Of the three, the one with the most impact over the last forty years has been Direct Interaction.

Unfortunately, progress on the other two has been stymied by the lack of funding to create proofs of concept that could be commercialized. Venture capitalists and large companies will not invest until prototypes are demonstrated. In the US, DARPA (which provided funding that bootstrapped US industry) dropped out, and the NSF funding structure has not been suitable. So far, other nations have not picked up the slack.

Programming productivity has not improved in decades. However, this stagnation cannot last.

Ah, there is more to the

Ah, there is more to the talk.

Tongue in Cheek?

I suspect the "Leaked transcript of censored Bret Victor talk" that you linked is a tongue-in-cheek addendum. It explains the status quo by alleging a conspiracy of hackers tries to hold back regular people from programming.

Well...uhm...

It was communicated by Jonathan Edwards...

Meaning what?

I guess I'm too much of an outsider, as the name Jonathan Edwards doesn't ring a bell here.

I am not certain, but I

I am not certain, but I suspect Sean McDirmid is having a bit of fun.

Internet humour is difficult :P.

Here is another one of

Here is another one of Jonathan's classic satires as a reference:

What if Smalltalk were invented today?

If you are interested in PL, you should check out some of his posts.

Assembler programs

It's curious to me to hear his description of the introduction of assembler programs, because it doesn't jibe with the account of that time I've had from my mother (who started programming around 1951). Probably a mixture of different parts of the elephant, and different emphasis in the accounts. But my understanding has been that programmers wrote in mnemonics, and then turned the mnemonics over to "coders" (not to be confused with programmers) who converted the mnemonics into machine code. This is not really the same thing as 'programming in binary'.

Interesting. I wonder if any

Interesting. I wonder if any of these mnemonics exist today? I'm curious what the "language" looked like when second-party "coder" humans were doing the translation vs. the compilers. Also, is it possible for the coders to not understand what the programmer wrote and get back to them for clarification?

I feel that a large part of our history is somehow left dark; I hope it is documented somewhere.

Oh (duh). Ask and ye shall receive.

Silly me. Yeah, now that you mention it, we don't have to rely on my slightly flawed memory of what my mother told me. At least some of it is documented. Someone interviewed her a few years ago (and used the material in a book that I take it came out late last year: Recoding Gender: Women's Changing Participation in Computing by Janet Abbate). The interview is archived here.

assembly mnemonics.

I wasn't there in the fifties, but I understood it to mean that the mnemonics probably looked a whole lot like a macro assembly language. And assembly language programmers are probably still using most of them.

The 'coders', if this is true, would be playing the part of macro expander/assembler in translating the mnemonics to pure binary. But that job was eventually automated....

At least that would be my guess.

The mnemonics thing is only

The mnemonics thing is only obliquely referred to in the above interview, alas. I really need to interview my mother about a bunch of this stuff.

One would think there'd be a qualitative difference between having one's mnemonics translated blindly by a machine, versus having them translated by a human being who understands what you're doing. (This is not unrelated to why I dislike the idea of computers driving cars: human drivers deviate from norms of behavior, but usually other human drivers successfully compensate because, as humans, they can. Computer drivers would probably not do well in a mixed human/computer driving environment, which would be blamed on the humans of course, and in a computer-driver-only environment, I'd expect things to go very smoothly most of the time, with the occasional 10,000-car pile-up and attendant massive loss of life.)