Why is there no widely accepted progress for 50 years?

From machine code to assembly, and from assembly to APL, LISP, Algol, Prolog, SQL, etc., there was a pretty big jump in productivity, and almost no one uses machine code or assembly anymore. Yet it appears there has been almost no progress in languages for 50 years.

Why is that? Are these the best possible languages? What is stopping the creation of languages that are regarded by everyone as a definite improvement over the current ones?

re: No Definite Improvement

The essential issue: machine code, such as x86, is such an AWFUL programming language (from the perspective of a human society) that it's TRIVIAL to improve on it in every imaginable way - safety, reusability, reentrancy and concurrency, consistency, composability, modularity, portability, debuggability, testability, optimizability, extensibility, etc. For a broader list of potential dimensions for improvement, see the "list of system attributes" and "cognitive dimensions of notation" on Wikipedia.

However, PL designers are usually pretty smart and not too rushed, and those from today aren't any smarter than those from fifty years ago. At best, we have a different perspective and different priorities (e.g. due to memory access becoming more expensive, concurrency becoming more widespread, access to FPGAs and GPGPUs and cloud computing networks, introduction of Open Source package distribution models vs. hide-everything-to-protect-IP, etc.). So, after a PL is designed, developed, and has matured a little through actual use (at which time we can assume there are no obvious incremental improvements), it becomes a lot harder to improve on it in EVERY way. Instead, different PL designs constitute different tradeoffs between system properties, or different visions of the system, or just the different aesthetics of PL designers.

Which isn't to say that finding a better-all-around language is impossible. But it will likely require a revolutionary vision of 'the programmable system' (e.g. the sensors, actuators, storage, and networks) and how programmers/users should extend and interact with it. The idea of PL and UI being separate things, or of applications and services being walled gardens, for example, are not essential.

I've got a little list...

Interesting. If I'd tried to make a list of imaginable ways to improve a programming language, readability would likely be at the top of my list; if, indeed, I went on to attempt a lengthy list rather than, say, just remarking on the desirability of readability. Thought-provoking, that readability wasn't on your list (sic: neither good nor bad; thought-provoking). To my mind, the purpose of a programming language is to express what we want done, hence lucidity is a core value.

re Readability

Readability is not a property that I would attribute to a language, because it's as much a function of the reader (familiarity with language, libraries, patterns, problem domain, etc.) as of notation or tooling.

But those cognitive dimensions of notation do contribute to readability. I mentioned one of them, consistency, explicitly, and then pointed to the set as a whole.

Aside: There is also much that tooling can do to enhance a human's ability to explore and understand a codebase, e.g. good fonts, syntax highlighting, jump-to-definition, progressive disclosure, in-place tutorials or spreadsheet-like unit-tests. How much should be under the umbrella of 'readability'?

Why is there no widely accepted progress for 50 years?

Because we're clueless.

Don't ask why until you answer whether

There's been plenty of progress on many topics related to PL evolution. It's quite silly to suggest that nothing has progressed in 50 years.

That said, achieving noticeable gains becomes harder in each generation, because the bar of entry gets raised fairly dramatically over time. It used to be that you could create a breakthrough language with amazing whiz-bang features, and the entire compiler and/or runtime would be a few thousand lines of code -- small enough that one person could hold the entire thing in their head.

No longer. Such a language today would be considered a toy, even by the low standards of a college student working on a term project.

Toy languages

Just taking one point there that I found thought-provoking: I suggest the perception that anything simple must be a toy may be a consequence of the paradigm we're imposing on the subject.

This aspect of scientific paradigms can be tricky, so pardon if I repeat familiar territory (from Kuhn's The Structure of Scientific Revolutions). Paradigms at best are immensely valuable because researchers within the paradigm can apply laser focus to exploring the research space within the paradigm, spending absolutely zero resources worrying about alternatives to the paradigm. A tremendous advantage at best, this inevitably suppresses alternatives to the paradigm even if they have merit. Iirc (can't instantly conjure it up), Kuhn remarked in the book that at the time a paradigm is adopted there is already ample evidence that it's wrong.

How useful is it to focus research this way? Seemingly, as long as it is advantageous to continue to explore the space of research within the paradigm, the paradigm should continue to be useful. One of the things that can go wrong with this pattern is that a paradigm may continue to look plausible from the inside, while standing back from it one may suspect that really we might do well to look for a better approach.

At the level where we perceive anything simple as a toy, it seems we may really have just one paradigm for programming languages, despite the many so-called "programming language paradigms". If we really need to break out of this paradigm (or mindset, whatever you call it), the thing we're looking for may indeed be extremely simple. (Recalling, notionally, (profundity index of idea) = (difficulty of discovery) times (difficulty of finding a really good explanation) times (obviousness once explained well).)

Finally, a chance to use the word profundity

I do believe you are onto something, and yes, I think that is what I was getting at. In some ways, our industry has repeatedly converted the profound to the obvious, and as we have accreted those advances, the cost of incremental profundity has grown exponentially.

What we've learned

In about the mid-1990s, I tried to make a list of ideas we'd come up with about how to use computers that seemed likely to still be perceived as important ideas in another century or two. I came up with two items:

  • The idea of an operating system.
  • The distinction between control and data.

Operating systems aren't really my thing; they did strike me as pretty fundamental, though, so I listed them.

Some folks like to try to collapse the control/data distinction, but I'm not convinced that helps. The distinction between action and participant recurs wherever you look: control and data, verb and noun, energy and matter. Closely related also to time and space. Yes, there are advanced theories that tamper with each of those distinctions, but in practice they're all useful distinctions.

I never did come up with a third thing for my list, though I struggled with it, and have revisited it from time to time in the years since. What else dominates the whole field? I share the sense that we're not making progress; if we're not, something that dominates the field might be holding it back. The other thing that occurs to me —and that I've never been sure enough of to put it on the list— is types.

I might add Composition and Environment

I might add:

  • Composition: hard to see how composition would ever go away. Build larger components by composing smaller components.
  • Environment/context: Every computation runs in an environment, and many recent advances have been about moving the environment closer to a first-class value, e.g. scoped bindings in languages, hardware/software virtual machines, delimited continuations. (A small sketch of the idea follows this list.)
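
To make the environment-as-a-first-class-value point concrete, here is a minimal OCaml sketch (mine, not part of the comment above; the record fields and names are invented for illustration): the context is an ordinary value that a computation receives, and a scoped override is just a function that runs its body under a modified copy, much like a scoped binding.

  (* The environment is plain data; the fields here are hypothetical. *)
  type env = { verbose : bool; prefix : string }

  (* A computation is simply a function from the environment to a result. *)
  type 'a comp = env -> 'a

  (* Run [body] under a locally modified environment: a scoped rebinding. *)
  let local (adjust : env -> env) (body : 'a comp) : 'a comp =
    fun env -> body (adjust env)

  let log msg : unit comp =
    fun env -> if env.verbose then print_endline (env.prefix ^ msg)

  let program : unit comp =
    fun env ->
      log "starting" env;
      (* The override is visible only for this one call. *)
      local (fun e -> { e with verbose = false }) (log "suppressed") env;
      log "finishing" env

  let () = program { verbose = true; prefix = "demo: " }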

FCVs

Thought-provoking additions to the list. I think I agree about environment/context, which, though its roots may belong to logic, seems to have something of the stamp of programming upon it. Honestly unsure about composition, which might be deemed of an earlier vintage, so not exactly a programming notion. Tracing the origins of foundational ideas can be tricky. E.g., our modern notions of function and variable go back, afaik, to Frege in the latter half of the nineteenth century and belong to logic rather than computation, though admittedly refined by Church who is more of a transitional figure (a former student of his later described him as "logic incarnate"). Now that you've directed attention to these sorts of ideas, another with clear PLT origins that I've lately come to suspect may be crucial far more widely (in mathematics both pure and applied) is first-class value.

Ah, you're looking for

Ah, you're looking for concepts that originated from computer science. First-class value is definitely a good candidate then. I think computing merely refined our understanding of the other concepts I mentioned. Other ideas:

  • Computational power: the computational power of a system being designed or used as a black box can be a meaningful property, i.e. whether it's domain-specific/purpose-built or general purpose/Turing complete.
  • Encapsulation: some distinction between inside and outside views of an entity seems inescapable, although there is some relationship with environment/context here. Every computer clearly has an internal view that we can only interact with via an external facade.
  • Computation's close connection with logic, inference and synthesis: we have type inference and program synthesis (extracting OCaml programs from Coq proofs), and so on. I expect inference/synthesis to become increasingly important over the next century.
  • Induction and Coinduction: induction has received a lot of attention and originated outside of CS, but coinduction is only just starting to receive attention and arguably came from CS. Having an abstraction and its dual seems pretty important for expressiveness. (A small sketch contrasting the two follows this list.)
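
As a concrete illustration (my sketch, not the poster's; it assumes OCaml 4.14+ for Seq.unfold and Seq.take): an inductive datatype like a list is finite and is consumed by a fold, while a coinductive structure like a Seq is defined by how it unfolds and may be infinite, so we only ever observe finite prefixes of it.

  (* Induction: finite data, built from constructors and consumed by a fold. *)
  let sum_list xs = List.fold_left ( + ) 0 xs

  (* Coinduction: possibly infinite data, defined by how it unfolds one step
     at a time; nothing is computed until a prefix is demanded. *)
  let nats : int Seq.t = Seq.unfold (fun n -> Some (n, n + 1)) 0

  let () =
    Printf.printf "sum = %d\n" (sum_list [ 1; 2; 3 ]);
    Seq.iter (Printf.printf "%d ") (Seq.take 5 nats);  (* prints: 0 1 2 3 4 *)
    print_newline ()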

It isn't what you don't know ...

If it were easy or obvious, I expect we would have seen more progress. I would guess that it isn't what we don't know that is holding us back, but what we think we know that isn't true.

Unknown unknowns

Seems likely. This also seems related to my predilection for exploring exotic hypotheses that challenge conventional wisdom, and perhaps to why my list of fundamental things we've learned is short.