Almost everything happened in the Golden Age, right?

When writing CTM I was struck with how many of the good ideas in programming languages were discovered early on. The decade 1964-1974 seems to have been a "Golden Age": most of the good ideas of programming languages appeared then. For example:

  • Functional programming: Landin's SECD machine (1964)
  • Object-oriented programming: Dahl and Nygaard's Simula (1966)
  • Axiomatic semantics: Hoare (1969)
  • Logic programming: Elcock's Absys (1965), Colmerauer's Prolog (1972)
  • Backtracking: Floyd (1967)
  • Capability security: Dennis and Van Horn (1965)
  • Declarative concurrency: Kahn (1974)
  • Message-passing concurrency: Hewitt's Actor model (1973)
  • Shared-state concurrency: Hoare's monitors (1974)
  • Software engineering: Brooks's mythical man-month (1974)

It is a sobering thought that not much new stuff has come since then. Hindley-Milner type inference in 1978, constraint programming in 1980, CCS (the precursor of the pi-calculus) in 1980. What revolutionary new ideas have come since 1980? Most of the work since then seems to have been consolidation and integration (combining the power of the different ideas). Right?
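
(To make the type inference example concrete, here is a tiny Haskell sketch of my own, not taken from any of the cited work: the definitions carry no type annotations, yet a Hindley-Milner checker such as GHC's reconstructs their most general polymorphic types, up to renaming of type variables.)

    -- Hindley-Milner inference: no annotations written; the most general
    -- types are reconstructed by the compiler (shown in the comments).
    module Infer where

    compose f g x = f (g x)
    -- inferred: compose :: (b -> c) -> (a -> b) -> a -> c

    twice f = f . f
    -- inferred: twice :: (a -> a) -> a -> a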

Another good idea

Please add this one:

  • Relational databases: Codd's relational model (1970)

I agree wholeheartedly

You could say that the university system of publish-or-perish is broken and encourages many researchers to duplicate effort in order to find something "new." And that the current educational-economic system encourages you to graduate (and beyond, to MSc and PhD), thus creating a mass of information that most computer scientists simply can't digest.

This all leads to (a) CS people not knowing about the brilliant achievements in past decades, and (b) recent variations of old ideas receiving public funding, while old ideas don't get that funding anymore (even though it would be interesting to see those ideas implemented in modern technologies from time to time).

And I should add that IMHO a lot of CS research isn't nearly fact-based enough and is turning more wishy-washy by the year. The focus is shifting more and more towards just teaching people Java instead of computing *science*. (At least in Germany this is partly a problem of inflation in the education system: today's high school grads simply don't cut it anymore for many businesses, because almost everybody makes it through school. And of course, publicly funded education also means that instead of setting up their own Java training programmes, businesses will use the public education system, simply because it's free; yet they demand that this public system change according to their wishes and focus more on "practical" "science".)

To offer a constructive comment as well: I really like modern textbooks for this reason. They offer a neat way to teach the interested CS student (student as in "likes to learn stuff") what decades of research papers contain, but in a digestible form.

Yes, and no...

I believe we have to accept that certain fields have reached a certain level of maturity while others haven't. The above list could also be extended with operating systems, for instance. Rob Pike gave a good talk on that topic and the slides are available here:

http://herpolhode.com/rob/utah2000.pdf

In it he expressed the view that all concepts relevant to systems engineering have been invented (mostly in terms of UNIX) and new things are not on the horizon. And I do share some of his concerns.

However, if you take the last example in the above list, software engineering, then I wholeheartedly disagree. A lot of things have changed or are new since Brooks. Software is not written in terms of structured programming (M Jackson) anymore, or from scratch each time, or by the same ad-hoc means as 30 or so years ago.

We have now come a long way with design patterns, code generation, refactoring, and great IDEs like Eclipse, and we are moving towards verification. Take model checking, which is a good example of what has happened since the 1980s: twenty years later the technology is more usable than ever, and research on it is anything but dead. We're seeing great tools emerge (NuSMV, SPIN, MiniSAT, MathSAT, etc.) and an active community.

Secondly, a lot of new research will concern parallelism, now that multi-core CPUs are becoming available. Most of the languages and paradigms cited in the above list support these parallel hardware architectures only marginally, if at all.

Software engineering

Rob Pike's talk is a classic, IMHO.

I don't quite agree on the software engineering side, though. Design patterns and code generation seem oriented more towards working around problems in current languages (specifically, Java and C++) and churning out code than towards reuse or modular programming. If anything, design patterns make us NOT design better languages, and code generation makes us NOT develop better abstraction and reuse mechanisms.

All the better code reuse we see (including the great library ecosystem in Java) is mostly something that comes from the open-source culture and constant refactoring/rewrites. Libraries are usable and being used because of open licenses and because people constantly maintain them with lots of effort.

I totally agree on verification and model-checking; there's lots of work in that area. My diploma thesis is also going in that direction :-)

functional programming and parallelism

There was an awful lot of work done on parallel architectures in the 1970s. That was actually one of the big reasons for the interest in functional languages.

Not really

Two points:

1) Calculus can only be invented once. In that sense, no decade has ever surpassed the 1660s in science and technology. Of course, that's not really true. Where were computers, concretely, in 1972? Where are they now? What do we know now about, say, relational database performance, or strictness analysis for lazy languages?

2) Intellectual / academic / scientific progress often takes time to trickle down to the concrete computing world. How long did it take for some form of OO to be massively adopted? It could be argued that the discovery -> application lag is increasing in computing, because platform effects grow as the user base grows -- network externalities, switching costs, etc. But, as you said: type inference? 1978. Lazy evaluation? 1980s. The Bird-Meertens formalism? 1980s. Monadic programming? Late 1980s. Theorems for free? Early 90s. More general categorical programming? 1990s. Maybe it will take longer for these things to have a concrete effect, partly because in the late 60s/early 70s universities drove a lot of technological adoption and now we basically depend on what Microsoft does (it *is* embracing FP, though), and partly because some new ideas, like categorical programming, require a stronger background to grok.

As far as I'm concerned, there has never been a better infrastructure for writing programs that can be formally reasoned about. Golden age, schmolden age: it doesn't hold a candle to the 1660s, when Newton invented the world.

It's true

I've been reading through the Haskell literature (or at least the highlights), and I've been impressed by the work that occurred between the late 1980s and the current day.

But it's hard to sell people on the importance of this work, because so little of it has appeared in mainstream software. In 20 years' time, though, I bet we'll be looking back fondly on the 1990s. :-)

$0.02

I hope this is not absolute junk, but verification is _essential_ for wide-scale FP adoption. Unless we are able to automatically prove theorems about programs, inertia will have its way. I love Haskell, and I would very much like to see 'Total Functional Programming' support in it. Then comes the question of a general approach to whole-program optimization based on the theorems I prove.

Most innovations around Haskell have been on the type front. That's a good thing. But a Golden Age of FP is still a real possibility, and it lies in the future. What is really needed is to forget practicality (as in: C# can adopt as many functional constructs as it wants, but purity is what pays). Exploit purity! That is the way.

And to the OP: I think the great ideas of the '90s are yet to be packaged and delivered.

Which great ideas are those, you ask? Well, the '60s didn't know how much they had achieved; this 'Golden Age' is more of an afterthought. CS is only ~50 years old, you see. ;-)

PS: Simon Peyton Jones _did_ say that laziness wasn't Haskell's greatest achievement; it was purity and static typing. That's my bet too. It's so obvious that we don't appreciate it now, but look at the '60s, '70s, '80s: purity and typing weren't so dominant.
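
(A small, hedged sketch of what "exploit purity" can buy: because Haskell functions have no side effects, equations such as map fusion hold unconditionally, and GHC even lets you register them as rewrite rules. The RULES pragma below mirrors the kind of rule GHC's own libraries ship with; the surrounding module is just my illustration.)

    module Fusion where

    -- Purity turns this into a genuine equation, valid for every f, g, xs:
    --   map f (map g xs) = map (f . g) xs
    {-# RULES
    "map/map" forall f g xs. map f (map g xs) = map (f . g) xs
      #-}

    -- Both sides compute the same list; a compiler applying the rule
    -- traverses the input only once.
    doubledLengths :: [String] -> [Int]
    doubledLengths = map (* 2) . map length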

macros

Syntactic macros (as in lisp or scheme) are a great idea from the golden age. They were invented in 1963.

Not quite

Firm theories get developed at some point in a scientific field, and if those theories are very good they usually stay on as the foundation of further research. Look at Newton's three laws of motion: even though they break down for things travelling at very high velocities or at the quantum level, the laws are so good that they are still used today, and I imagine they will be used for a very long time.

Some look at the macroeconomics

Anyone interested in working with this economist on researching the relationship between the R&D/economic environment and programming ideas? I'm thinking that the quoted Golden Age ends with the 1973 oil crisis and resumes with a great spark in the 1990s boom, with full categorical programming.

Of course, we'd have to figure out objective metrics for R&D. Maybe something related to citation density over time -- great papers are still cited decades later, while less transcendent ones are forgotten.

Looking for a partner ; )

Golden age of the golden age

The sixties and seventies? As everyone knows, everything interesting about computation was discovered in the 30s and 40s by Church, Kleene, Curry, Turing, von Neumann and others... ;-)

But seriously, I think we can draw two general conclusions from this apparent golden age:

  • Computer science is completely independent of technology: the good ideas can exist before the technology to use them does, and it can take a while for the latter to catch up.
  • When looking back in time and conveniently forgetting all the bad or so-so ideas that have fallen by the wayside, every age is a golden age.

My favorite

Regular Expressions (Kleene 1950s)
Pipeline/UNIX-style programming (McIlroy, Thompson, Ritchie et al., early 1970s)

that's a depressing outlook

Kinda like saying "nothing worthwhile has happened in engineering since bridges were invented". We're less than two centuries into informatics, and we're bemoaning the fact that we're still consolidating our foundations? I personally find that rather reassuring.

Getting a handle on data

Getting a handle on data abstraction and parametricity leaps to mind as one of the big ideas of the 1980s. It's kind of hard to pin down as a single event, though, because it's a line of work containing Reynolds's 1983 paper, Mitchell and Plotkin's 1988 paper on existential types, Dave MacQueen's 1986 work on using dependent types (i.e., strong sums) to model modules with type components, and Cardelli and Leroy's 1990 integration of the two.
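
(A hedged Haskell sketch of the Mitchell-Plotkin slogan that abstract types have existential type: the representation of the counter below is existentially quantified, so clients can only go through the packaged operations. The names are mine, not from the papers.)

    {-# LANGUAGE ExistentialQuantification #-}
    module Abstract where

    -- An abstract counter: the state type s is hidden by the existential.
    data Counter = forall s. Counter s (s -> s) (s -> Int)

    -- One concrete implementation; once wrapped, clients cannot see that
    -- the representation is an Int.
    intCounter :: Counter
    intCounter = Counter 0 (+ 1) id

    -- Clients can only combine the packaged operations.
    runTwice :: Counter -> Int
    runTwice (Counter s tick get) = get (tick (tick s))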

In the nineties, the Felleisen-Wright approach to proving syntactic type soundness seems like a big deal to me. It means that you don't have to have a denotational model to prove the soundness of a type system, which means that you can actually prove type soundness for languages like Java (the thought of whose denotational semantics honestly fills me with dread).

At the end of the nineties, O'Hearn and Reynolds breathed new life into Hoare logic with the invention of separation logic. IMO it's the first formal treatment of aliasing that doesn't suck.
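
(For readers who haven't met it: the signature rule of separation logic is the frame rule, which is what makes local reasoning about aliased heaps possible. In the standard presentation, with * the separating conjunction,)

    \frac{\{P\}\ C\ \{Q\}}{\{P * R\}\ C\ \{Q * R\}}
    \qquad \text{provided } C \text{ modifies no variable free in } R

Here P * R holds of a heap that splits into two disjoint parts satisfying P and R respectively, so facts about the part of the heap that C never touches carry across for free.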

Consider the infrastructure of the time...

When looking at this "golden age" and wondering why it happened when it did (as well as looking for other important breakthroughs), it is important to consider the context in which these advancements were made. Put a timeline of important CS advances next to a timeline of hardware/networking advances and you will see a lot of correlations. Before the 1950s most advances and discoveries were in the realm of pure mathematical insight, because that was all there was: the hardware to actually test and tinker with did not exist. Once reasonably bright associate professors had access to PDP-11s and the like, there was a surge of insights and discoveries based on that access.

Looking at what started to become available in the 80s and beyond, I would guess that the "good ideas" we are looking for will be found in areas related to supercomputers (and large datasets) and distributed systems. Distributed data structures, synchronization and concurrency, and advances in large-scale/ad-hoc networking are the sort of things someone looking back on the past 10-20 years would point to as the good ideas of this particular age of computing...

Software Engineering

Andreas Bauer made an interesting point about the advances in Software Engineering that we've seen in the past 25 years. Now that the mainstream is entering the stage of OO/FP hybrid languages via C# 3.0, what does that mean for things like design patterns? Are we going to see a slew of new books covering how to design software in an OO/FP hybrid way?

I'm also wondering if we need more research into the cognitive aspects of software development. Given a clean slate, what is the most natural way for people to approach programming? And can we have tools that allow different programmers to work with their preferred view of a software model, without breaking the conceptual view another programmer holds, while keeping the bits on the machine unified? Maybe Intentional Programming is working on some of these issues. A paper like "Psychological Representation of Concepts" might give some insight to future language and tools designers.

STM

Software Transactional Memory is only from the mid- to late-nineties. From what I understand, it'll be making its way into Java and C# shortly.
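
(For what it's worth, GHC's Haskell already ships an STM library; here is a minimal hedged sketch of the programming model it gives you: an atomic, composable transfer between two accounts.)

    import Control.Concurrent.STM

    type Account = TVar Int

    -- The reads and writes below happen atomically; 'retry' blocks the
    -- transaction until the source balance is sufficient.
    transfer :: Account -> Account -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      if balance < amount
        then retry
        else do
          writeTVar from (balance - amount)
          modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO b >>= print   -- prints 40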

STM in early 1980s

GemStone Smalltalk had STM in the early 1980s.

Barking up the same tree

I largely agree with peterwong1228 - the pioneers in a field have the greatest opportunity to discover the broadest theories in the field, and those of us who follow later must be content with working on subtler, but much less expansive problems and answers.

However, there is still the problem of why this field is so dang hard. Why is there still such a gap between our expectations and our ability? Why, for example, can competent programmers and designers still find tricky logic even in something as seemingly simple as a shopping cart?

I hope there is opportunity for great leaps here. I just wish I had an idea of where to start jumping. I can only assume that the solution is changing our understanding of the problem, not finding a better answer to what we think the problem is.

Perpetual crisis

"Why is there still such a gap between our expectations and our ability?"

Because our expectations are wrong? The reality is what it is, which leaves only our expectations to be at fault here. Thus, a shopping cart is apparently not so simple.

Our expectations are also at fault when we diagnose crises in software engineering: "contemporary software sucks by definition", "the next big [language, data exchange format, OS] had better not suck, or we're in trouble". I don't think either of these is true.

Sketchpad introduced objects and constraints

Ivan Sutherland built SKETCHPAD, a vector drawing system, in 1963. With it, he invented direct graphical interaction. That's not really relevant for this discussion, but a couple of important PLT concepts came along for the ride:

- He scooped Simula. His system allowed you to draw "master" drawings which could be "instantiated". Changing the "master" (in modern parlance, class or prototype) would change all the instances. Alan Kay was specifically influenced by both Simula and Sketchpad when he designed Smalltalk.

- He introduced dynamic constraints. You could specify that, say, two lines are parallel or meet at a certain point, and the system would solve for that constraint and then maintain it throughout further editing.
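
(A tiny hedged sketch, in Haskell, of that second idea -- not Sutherland's actual algorithm, which used relaxation over many kinds of constraints; this just illustrates re-solving a single "parallel" constraint after an edit, by rotating one line onto the other's direction while preserving its length.)

    type Point = (Double, Double)
    data Line  = Line { lineStart :: Point, lineEnd :: Point } deriving Show

    direction :: Line -> (Double, Double)
    direction (Line (x1, y1) (x2, y2)) = (x2 - x1, y2 - y1)

    lineLength :: Line -> Double
    lineLength l = let (dx, dy) = direction l in sqrt (dx * dx + dy * dy)

    -- Maintain "b is parallel to a": keep b's start point and length,
    -- but rotate it onto a's direction. Re-run after every edit.
    keepParallel :: Line -> Line -> Line
    keepParallel a b = Line (sx, sy) (sx + ux * len, sy + uy * len)
      where
        (sx, sy)   = lineStart b
        (dxa, dya) = direction a
        norm       = sqrt (dxa * dxa + dya * dya)
        (ux, uy)   = (dxa / norm, dya / norm)
        len        = lineLength b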

Odd that relational databases didn't make it to your list

As a practitioner, the single biggest invention for me is the relational database. Languages come and go, but the relational database is the real workhorse of the information age. Your list unwittingly demonstrates the academic bias of the CS world, except for Fred Brooks's book & Codd's relational database (both from IBM). What in the world is axiomatic semantics? (I play with scripting language design & compilation on the side, which kind of explains why I visit here.)

If you want a modern day wonder, the technologies behind internet search should surely take their rightful place. Large scale mining of textual data was born almost as an accidental byproduct of the world wide web.

Sridhar

Databases...

Svembu, I think axiomatic semantics refers to Hoare triples: every program statement has a pre- and a postcondition. There are special rules for function calls and loops, and step by step you can verify properties of whole programs (general properties, such as that a variable is always positive, or that a variable always decreases so that the program terminates).
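
(Concretely, in the usual notation a triple {P} S {Q} reads: if precondition P holds and statement S terminates, then postcondition Q holds afterwards. A standard toy example, plus the rule for while loops that the special-rules remark refers to, where P is the loop invariant:)

    \{\, x \ge 0 \,\}\ \ y := x + 1\ \ \{\, y > 0 \,\}

    \frac{\{P \wedge B\}\ S\ \{P\}}{\{P\}\ \mathtt{while}\ B\ \mathtt{do}\ S\ \{P \wedge \neg B\}}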

It's just very hard and tedious to do (though I'm not aware of software tools that actually ease the process much). Probably most program verification nowadays takes place in specification languages like Z, or in functional languages with advanced type systems.

As to databases: yes, relational DBs are cool, but IMHO the idea is very simple. While that is exactly what makes it so great, maybe that same simplicity means many people do not recognize it as a great invention.

Foundational Proof Carrying Code

Uses Hoare logic to minimize the trusted computing base of a system to merely the proof checker, which can verify arbitrary program properties. Very important for future growth of CS IMO.

http://flint.cs.yale.edu/flint/publications/

reminded me of the Stephen Jay Gould essay

where he maintained that the greatest diversity in an ecosystem appeared at the start of the ecosystem, and the longer the system stayed stationary, the more rigid the niche specializations became.

While a lot of current evolutionary theorists don't love Gould (I believe that essay has been ripped to shreds many times -- Gould was looking at one fossil upside down, confusing legs with spinal protrusions on the beastie's back), this one idea also fits nicely with Dawkins's thesis in "The Extended Phenotype".

Dawkins' idea is that mutations which survive are mostly the ones that work "within the system". A mutation that makes a gazelle stand up against the lion will not survive, but a mutation that lets gazelles warn each other better which way to run will survive.

The parallels are interesting .... C to C++ to Java being the running gazelles, Scheme or Haskell or Forth being the gazelle that tries to gore the lion ...

More recent work

I wonder if this phenomenon is more an illusion than reality. It is natural that the first endeavours in a new field will be in the areas of most common interest. Many people today work with relational databases, or program using OO techniques, or have read The Mythical Man-Month, so the development of these things stands out as a major achievement.

On the other hand, the compilers of today get much better results than the compilers of yesterday because of what goes on behind the scenes. There is a whole world of optimisations that can be performed on code in SSA form, which only really got started around the late 1980s. We have developed the concepts of run-time virtual machines and just-in-time compilation over the past decade or so to the point that programs written with high-level language features can run at speeds previously reserved for the lowest-level languages. These sorts of advances are much more subtle -- if you like, they're part of the implementation rather than the interface -- but the programming world is much richer for having them.
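
(A hedged aside on SSA form, since it may not be obvious why it helps: each "version" of a variable gets a fresh name and is assigned exactly once, which is essentially the let-bound style below -- a correspondence with functional programs that has been pointed out before. Single assignment is what lets the optimiser freely share, reorder, or eliminate bindings.)

    -- Imperative straight-line code:      SSA form:
    --   x = a + b                           x1 = a + b
    --   x = x * 2                           x2 = x1 * 2
    --   y = x + c                           y1 = x2 + c
    --
    -- In a functional notation the SSA version is just nested lets.
    ssaExample :: Int -> Int -> Int -> Int
    ssaExample a b c =
      let x1 = a + b
          x2 = x1 * 2
          y1 = x2 + c
      in  y1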

I do agree about the programming language concepts, though: we don't really seem to have advanced much over the past couple of decades. I suspect things will be much more interesting over the next 5-10 years, as the increasing dominance of multi-core chips forces the industry to realise that working with practically effective but theoretically weak tools (i.e., almost all mainstream programming languages today) isn't going to cut it much longer. This may be the catalyst we've needed for a while, to move the mainstream to radically different programming languages with a sounder theoretical base (= fewer possible classes of programmer error, quicker development times, ...) but without the awkwardness of many of the academic favourites today.

infrastructure

I've noticed this too. There was a serious burst of discovery in the 1960s and 1970s, but there has not been an awful lot new since then. As a friend of mine put it, "Every new project seems to be a memory exercise." I'll go with evgen's statement about infrastructure. Before the 1950s, computer science was largely theoretical. There weren't very many computers to play with. Starting in the 1960s, there were lots of computers for experimentation, so people discovered things. (For example, it was well known that people would communicate more openly when typing to a computer than when typing or talking to a human).

There was a similar discovery effect with particle accelerators. They were very low energy until the 1950s, but grew in power through the 70s. That's when the standard model was developed and confirmed. Since then a lot of gaps have been filled, but existing equipment hasn't been able to find any new data to challenge the theory. You can probably find similar effects with steam engines and thermodynamics, microscopes and cell biology, gene sequencers and genetics, telescopes and planetary theories and so on.

Why have things stalled out in computer science? The hardest part of computer science is the specification problem, figuring out what we want the computer to do. This is a human problem more than a technical problem, though the easier we make it to convert human wishes into computer actions the easier this problem becomes. Consider how few fundamentally new movie plots have been developed since the invention of motion photography.

I'll offer three hopes for progress. 1) We'll move forward when we design a fundamentally new type of computer. That's a long shot. 2) We'll move forward when computers are good enough for people to study what they are actually doing. There is an opening for a systems approach here that might prove fruitful in the medium run. 3) We'll move forward when everyone is a programmer and doesn't know it, and someone tries to figure out how we did it.

Need fuels progress

Programming languages today are good enough for people to fully express themselves. What need do we have now, in programming languages, that would benefit greatly from a significant breakthrough? (Not a rhetorical question... I'm genuinely interested.)

Correctness, Robustness,

Correctness, Robustness, Efficiency, Modularity, Maintainability, Modifiability, Efficiency of Coding, Verifiability, Usability

These are all things that can definitely stand to be improved. I don't know why you feel the need for "breakthroughs." What would a breakthrough significantly change? There's no way of predicting (otherwise it probably wouldn't be a breakthrough).

Too much computer power leads to sloppy thinking

I think we're suffering through a period of marketing innovation that substitutes for real innovation. This combines with the effects of Moore's Law, and means you don't need to squeeze every last ounce of performance out of a system.

This ultimately leads to a situation where sloppy thinking is 'good enough' and nobody is as concerned about efficiency as they could be.

When a person or organization gets too comfortable, complacency follows.

Part of the reason the Apple I and Apple ][ had such elegant designs was that Woz (Steve Wozniak) designed the systems in his head, several times, optimizing them before committing them to hardware.