An interactive historical roster of computer languages

A very interesting site dedicated to the history of programming languages.

The navigation is a bit clunky, and I am still not sure I understand all the features of the site, but the database is impressive (use the "search" button on the right-hand side of the window to open the search frame).

The site includes amusing statistics (e.g., which decade produced the most programming languages?)

It would be interesting to see what LtU readers think about the taxonomy used here. It's quite fine-grained.

or not...

From this web site: "I am using facilities that IE has which other systems don't, to maximise the amount of information I can get on a screen."

Which on my computer (Mac OS X, Safari browser) means zero information.

Favorites

If you click through the favorites link, you can see part of the database in Safari.

Glyphic Script

Aren't you the Mark who invented Glyphic Script?

(Belated) updates

I've updated the taxonomy keyset page, so it's now appropriate to the dataset.

Browsers

To people who don't want to use the same browser I am currently using, sorry. How about going to a cybercafe where they have a suitable browser, grabbing a cup of coffee, and looking at it there instead?

The material has come first for me, and when I have finished the rest of the data input (18,000 more references, 1,700 more languages) I'll fix it up some. (And the pages are more or less equally buggy across platforms.)

Well...

I'm a total taxonomic junkie (my mom's a mycologist; I've been collecting programming language curiosa since I was in high school), but I'm also a web browser collector, and of the four browsers I can use, I can't even get around the site in any of them. I get not wanting to get wrapped up in any flamewars, but I have everything except Windows... it's a little frustrating, you know? Especially when what I can see looks so tantalizing...

Frustrating

It is frustrating in the extreme to have spent so much effort gathering these resources, only to have the comments presented here refer to clunkiness, invisibility, ...

I avoided publishing the site's existence here and other places precisely for this reason. Shall I just hide it again?

Frustration

Readers are frustrated because they can't get their hands on the data, so they raise their concerns. Obviously, if they can't see the site, they can't give more meaningful comments.

I suggest trying to publish some of the information in simpler form (even text files). You should realize that many people don't have any access to IE (it came as a surprise to me too, but there are such people).

It's not a personal attack.

Site size and frustration

The entire archive is 67 GB, which is too much for text, as is the 38 MB database, which is normalised to 85 tables (plus lookups). Denormalised and dumped into a list it comes to about 9 MB, but it is then largely meaningless when it comes to interconnectivity.

I also don't want the entire thing to be published as a list until I am happy with it. It is after all first and foremost my ongoing research database.

And the taxonomy keyset loads directly from your link.

Majority don't necessarily use IE

"I suggest trying to publish some of the information in simpler form (even text files). You should realize that many people don't have any access to IE (it came as a surprise to me too, but there are such people)."

While IE may be the browser with the biggest single market share overall, its share of visitors to any given site is often significantly less than 50%. This means that targeting IE specifically may be a barrier to a majority of potential readers.

Alternative entry point for the website

I have prepared an alternative entry point at http://hopl.murdoch.edu.au/backdoor.html

I have added a "click here for simple version" reference at the top of every language display page to enable you to see the details in one long page.

I have made a simplified version of the examples to show on their own separate page.

This should enable the complainants to see the information without frustration.

Thanks.

I've not got very far into it yet, but I have a couple of comments.

Prompted by a very recent thread on LtU, I was looking up the relationship between Haskell and Miranda. The Haskell page starts out by stating that Haskell was an extension of Miranda, and there is also a Miranda page, but no link is given between the two on either page (other than in the explanatory text).

Speaking of Miranda, the Amanda compiler (which is mentioned on your page) is available at ftp://bells.cs.ucl.ac.uk/functional/, though the compiler never really got beyond the experimentation phase.

Also, I notice that your pages give some samples of the languages taken from the "one problem, many languages" type of pages. There are some additional ones you could use, if you were so inclined, like Pixel's, Doug Bagley's Shootout, or my OOP one. (There are others like these as well; that's just off the top of my head.) And while I'm mentioning my stuff, I also put together a list of PL inventors.

Some more inventor pictures

I notice you are missing some pictures of language inventors - there are some more here for the ones you have missed.

Many thanks

Looked all over the place for Radin (PL/I) and Schwartz (JOVIAL), and couldn't find them.

CLIST

The CLIST page doesn't include the fact that it's an IBM language. It does say TSO, but if you search the notes for IBM, CLIST does not appear on the result page.

Amended with thanks

All amendments gratefully received - any missing languages as well. I am particularly keen on tracking down languages and matching references from countries besides the US and the UK.

Prolog descendants and multiparadigm languages

Since you asked for more references, here are some. For Prolog-like languages up to 1993, check out this survey article:

1983-1993: The Wonder Years of Sequential Prolog Implementation, Journal of Logic Programming, Elsevier, Vol. 19/20, May/July 1994, pp. 385-441.

You should also check out the other articles in that issue of JLP; see the Table of Contents for more information. For example, the area of constraint programming has many interesting languages, up to the present day.

Another article with some history is "Logic Programming in the Context of Multiparadigm Programming: The Oz Experience", Theory and Practice of Logic Programming (TPLP), November 2003, pages 715-763. This has two detailed history sections: one on multiparadigm languages and one on Oz itself and its ancestry.

Where does Oz fit in the taxonomy?

I am curious where the Oz language would fit in your taxonomy. Should the taxonomy be a graph instead of a tree? Oz was designed to bring together on equal footing concepts which are usually treated on unequal footing.

The Oz design exposes interesting new axes for the taxonomy. One axis is different approaches to concurrent programming (declarative, message-passing, shared-state). Another axis is different approaches to data abstraction (object style, ADT style). A third axis is the semantics used for reasoning, which depends on which concepts you choose to program with (denotational, operational, axiomatic, logical). All the members of these axes are well-represented in Oz.
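
One way to read these axes is as orthogonal dimensions rather than branches of a tree. Here is a minimal Haskell sketch of that reading; the type names and the example entry are invented for illustration, not taken from the site's taxonomy or from any Oz documentation:

-- Illustrative axis enumerations following the three axes named above;
-- the entry at the bottom is invented for the example.
data ConcurrencyStyle = DeclarativeConc | MessagePassingConc | SharedStateConc
  deriving (Eq, Show)

data AbstractionStyle = ObjectStyle | AdtStyle
  deriving (Eq, Show)

data ReasoningStyle = Denotational | Operational | Axiomatic | Logical
  deriving (Eq, Show)

-- A language's position is a set of coordinates along each axis, not a
-- single branch of a tree, so "multiparadigm" just means wider coverage.
data Position = Position
  { concurrency :: [ConcurrencyStyle]
  , abstraction :: [AbstractionStyle]
  , reasoning   :: [ReasoningStyle]
  } deriving Show

-- Hypothetical entry: a language covering every member of every axis.
ozLike :: Position
ozLike = Position
  [DeclarativeConc, MessagePassingConc, SharedStateConc]
  [ObjectStyle, AdtStyle]
  [Denotational, Operational, Axiomatic, Logical]

main :: IO ()
main = print ozLike

In this reading, a multiparadigm language is simply one whose coordinate sets cover more of each axis, which is awkward to express as a single parent node in a tree.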

For these reasons, Oz was a good choice for our new programming textbook. For a quick overview of Oz, check out the talk on Teaching Programming.

A partial answer...

Ok, after thinking for a bit I'll propose a partial answer to my own question. In my view, an important part of the taxonomy should be based on programming concepts (in a precise sense that is made clear in my textbook). Languages can then be classified as to how they treat the concepts. Usually, languages will make it easy to use certain subsets of concepts, for example because they simplify mathematical analysis (like functional languages) or because they simplify building modular programs (like object-oriented languages). A language that emphasizes a certain subset of concepts over other subsets can be given a label, such as "lazy functional language". Not all possible subsets are useful; experience shows that some subsets are more useful than others. And the subsets are overlapping, of course. When you finally pick up the pieces, this gives a lattice structure where each node is a useful subset, and languages live on this lattice.
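
To make the lattice reading concrete, here is a minimal Haskell sketch with invented concept names and classifications (the precise concept set is the one defined in the textbook, not this one): a language is classified by the subset of concepts it emphasizes, the lattice order is subset inclusion, and meet and join are intersection and union.

import qualified Data.Set as Set
import Data.Set (Set)

-- Invented concept labels for illustration only.
data Concept = HigherOrderFns | Laziness | MutableState | Concurrency | MessagePassing | Objects
  deriving (Eq, Ord, Show)

-- A classification is the subset of concepts a language makes easy to use.
type Classification = Set Concept

lazyFunctional, messageConcurrent :: Classification
lazyFunctional    = Set.fromList [HigherOrderFns, Laziness]
messageConcurrent = Set.fromList [HigherOrderFns, Concurrency, MessagePassing]

-- The lattice order is subset inclusion; meet and join are
-- intersection and union of concept subsets.
below :: Classification -> Classification -> Bool
below = Set.isSubsetOf

meet, join :: Classification -> Classification -> Classification
meet = Set.intersection
join = Set.union

main :: IO ()
main = do
  print (lazyFunctional `below` join lazyFunctional messageConcurrent)  -- True: both sit below their join
  print (meet lazyFunctional messageConcurrent)                         -- the concepts the two share

Running it prints True (each classification sits below the join of the two) and the shared concepts of the two example classifications, which is the kind of "useful overlapping subset" the lattice is meant to capture.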

The nature of cultural taxonomy

Well, monothetic classification schemes serve properly to classify; polythetic ones map instead. If we want to group languages polythetically (which we might), then we need to have an acceptable set of values which apply equally to all (not something monothetic systems need worry about).
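
For readers who haven't met the terms, here is a rough Haskell sketch of the distinction, with invented attributes and an arbitrary threshold: a monothetic class demands every defining attribute, while a polythetic grouping only asks that a member share enough attributes from a common pool, no one of which is individually necessary.

import qualified Data.Set as Set
import Data.Set (Set)

type Attribute = String

-- Monothetic: membership requires every attribute in a fixed defining set.
monothetic :: Set Attribute -> Set Attribute -> Bool
monothetic defining attrs = defining `Set.isSubsetOf` attrs

-- Polythetic: membership requires "enough" attributes from a shared pool;
-- no single attribute is individually necessary or sufficient.
polythetic :: Set Attribute -> Int -> Set Attribute -> Bool
polythetic pool threshold attrs =
  Set.size (pool `Set.intersection` attrs) >= threshold

main :: IO ()
main = do
  let lang = Set.fromList ["higher-order functions", "static typing", "laziness"]
  print (monothetic (Set.fromList ["laziness", "purity"]) lang)                     -- False: lacks "purity"
  print (polythetic (Set.fromList ["laziness", "purity", "static typing"]) 2 lang)  -- True: shares two of three

The second test passes even though the language lacks "purity", which is exactly why polythetic groupings map family resemblances rather than classify by definition.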

Then, as we know from taxonomy and explanation theory, we can't deduce the structures used in a bootstrap fashion. Declaration of an intent to merge approaches may be genuine, but that does not necessarily mean that the language created is a genuine hybrid. (They do exist - the classic ones are the merges between discrete and continuous simulation systems. However, they always have a comfort zone for one or the other.) The question is one of applicability, or of prime appositeness, rather than absolute ordination.

And I worry about the mixing up of values, as category errors. It may (again) be the case that distinctions between syntax, structure, level, etc. are separable, but that has to be argued for, not stipulated or presumed.

Some of you have suggested a mathematical/cladistic approach - again, it has to be argued that the same rules for the distribution of features in the biological realm apply to things in the cultural realm. This is an ongoing debate in prehistory and the history of ideas, but it has to be argued for.

I haven't finished writing up the justification for the taxonomy yet (I was a bit surprised by the overwhelming response), but I am trying to approach the task after the fashion of sociocultural taxonomies, by considering context, telos and inheritance, as well as documentation, form and intention. I am also trying to follow the art-historical way of investigation, proceeding with these as art-cultural objects.

Main reference for systematic cultural taxonomy: Dunnell, R. C., Systematics in Prehistory, Free Press, 1971.

So in summary, I think a graph or a network only works for influence chains, rather than for typology - there is certainly no natural equivalent (pre-bioengineering, anyway). And a lattice doesn't work for locating (and would still have to be weighted mathematically).