Two books

While downtown doing something else entirely, I managed to find myself in a bookshop. One of the few bookshops not belonging to a chain; in fact, one established in 1908. They had some used books, and I found two that were both interesting and cheap (around $6 each): Pascal User Manual and Report by Jensen and Wirth (Springer, 1974; alas, only the third edition, from 1985) and Performance and Evaluation of Lisp Systems by Richard Gabriel (MIT Press, 1985).

Here's a taste from both.

Wirth and Jensen:

Upon first contact with Pascal, some programmers tend to bemoan the absence of certain "favorite features." Examples include an exponentiation operator, concatenation of strings, dynamic arrays, arithmetic operations on Boolean values, automatic type conversions and default declarations. These were not oversights, but deliberate omissions. In some cases their presence would be primarily an invitation to inefficient programming solutions; in others it was felt that they would be contrary to the aim of clarity and reliability and "good programming style."

Gabriel:

Benchmarking is a black art at best. Stating the results of a particular benchmark on two Lisp systems usually causes people to believe that a blanket statement ranking the systems in question is being made. The proper role of benchmarking is to measure various dimensions of Lisp system performance and to order these systems along each of these dimensions.

Gabriel includes a pertinent quote from Vaughan Pratt: "Seems to me benchmarking generates more debate than information." How true...

I enjoyed the discussion of the various Lisp implementations in chapters 1 and 2. The Tak, Stak, Ctak, Takl and Takr series of benchmarks is enlightening. It shows how easy it is for benchmarks to measure "overheads" you didn't intend to measure, and how to engineer benchmarks to overcome this fundamental problem.
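The core Tak function (due to Ikuo Takeuchi) is tiny; a Python rendering of it follows (Gabriel's versions are in Lisp, naturally). Roughly speaking, the variants re-express the same computation through different mechanisms (special variables, catch/throw, list operations, many separate function bodies), so comparing their timings isolates the cost of each mechanism:

```python
# The Tak function, the basis of Gabriel's Tak benchmark series.
# It is almost pure function-call overhead: deep recursion with
# trivial integer arithmetic at each step.
def tak(x, y, z):
    if y >= x:
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))

print(tak(18, 12, 6))  # the classic benchmark call; evaluates to 7
```

Timing this call says little about a system's arithmetic or memory subsystem, which is exactly the point: each variant in the series shifts the load onto a different part of the implementation.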


Scientific approach to analysis

Christophe Rhodes (a Lisp implementor and physicist) gave an interesting talk about analyzing benchmark results to understand his implementation better.

As I understand it, he was acting as a scientist rather than a traditional CS theoretician; he used benchmarks as little programs which gave some arbitrary quantification. While the numbers themselves weren't interesting, the correlations between them were. This allowed him to use statistical techniques to determine similarities, as well as letting people use their eyes to detect patterns (eyes being perhaps too good at that job).
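As a rough illustration of the idea (not Rhodes's actual method; the implementation names and timings below are made up), one can treat each implementation as a vector of benchmark timings and compare implementations by correlation, where the absolute numbers matter less than how they co-vary:

```python
import math

# Hypothetical timings (seconds) for three implementations over the
# same four benchmarks; only the co-variation is of interest.
timings = {
    "impl_a": [1.0, 2.0, 4.0, 8.0],
    "impl_b": [1.1, 2.1, 3.9, 8.2],   # tracks impl_a closely
    "impl_c": [5.0, 1.0, 6.0, 0.5],   # behaves quite differently
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A high correlation suggests two implementations share a performance
# character; a low one suggests they stress different mechanisms.
print(pearson(timings["impl_a"], timings["impl_b"]))  # near 1.0
print(pearson(timings["impl_a"], timings["impl_c"]))  # much lower
```

With many benchmarks and many systems, the resulting correlation matrix can be clustered or plotted, which is where both the statistics and the eyeballing come in.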

This may pay off when codebases become too large or ad-hoc to analyze with conventional static techniques. It may also provide engineering answers, like "How often does fixing bugs decrease performance?" (It would be nice if it increased performance, but the cynical part of him opined it didn't. And I'm reminded of how many Fortran/Cray/multiprocessor environments are said to sometimes trade off correctness for performance, with the idea that the programmer must accept a tradeoff if certain kinds of correctness are needed.)

Here are his slides, and his paper. The talk was given at the recent Lisp/Scheme workshop in Oslo.

RPG's book online

And Richard Gabriel recently posted a PDF version of Performance and Evaluation of Lisp Systems on his website.

Not everyone agrees...

With Wirth.