Misc Books

Practical Common Lisp

Practical Common Lisp by Peter Seibel was mentioned here in the past, but not on the home page if I am not mistaken.

You can download all but three chapters from the website, and seeing as Lisp is an important and somewhat unique language, you might want to do just that.

The chapters I read were well written and funny at times. What's not to like?

The OO chapters offer a nice intro to CLOS, which might interest those with OO experience seeing as CLOS doesn't resemble your average OOPL.

I must say that it's nice to see "practical" how-to books written for non-mainstream languages.

Google Print (and Computable Functions)

LtU-ers should be aware of Google's new Google Print initiative to digitize books and support full-text searches.

To get a feel for the new service, and for the sake of the subject matter itself, you might want to take a look at the book Computable Functions by Nikolai Konstantinovich Vereshchagin and Neal Noah Madras.

Goedel's Theorem and Theories of Arithmetic

Logic is the cornerstone of computer science in general and much of programming language theory in particular. Goedel's results are fundamental for any real understanding of modern logic.

This book by Peter Smith might serve as an introduction to Goedel's incompleteness results. Twelve chapters are online, and seem quite readable.

Derrida's Machines

PDF Link

Came across this in my referrers log and thought it might be of interest to others. It reminds me of GEB in scope, meandering through Number Theory, Category Theory, Combinatorial Logic, Metaphysics, AI and the Semantic Web, touching on programming language issues along the way. There are lots of loose ends and the flow is uneven, but various gems are scattered throughout (I thought the section that explains CT was good).

What have we learnt on our trip around the fascinating perspectives and problems of a Dynamic Semantic Web? It is all about dynamics and structures. This brings us back to the central topics of DERRIDA'S MACHINES: Interactivity between structures and dynamics, that is, to the interplay of algebras and co-algebras, ruled by category theory and surpassed by the diamond strategies leading to polycontexturality and kenogrammatics.

Guess I'll have to work on my vocabulary as I've never heard of polycontexturality and kenograms.

Haskell Functional Programming Bookstore

John Meacham has set up a print-on-demand bookstore for Haskell-related books at CafePress. There are only a few books there at the moment, but they tend to be things that are otherwise out of print, for instance Simon Peyton Jones and David Lester's Implementing Functional Languages. In this particular case, Amazon has one used copy for $204, but you can get a brand new one from the Haskell bookstore for $18.50.

Two books

While downtown doing something else entirely, I managed to find myself in a bookshop, one of the few not belonging to a chain; in fact, one established in 1908. They had some used books, and I managed to find two that were both interesting and cheap (around $6 each): Pascal User Manual and Report by Jensen and Wirth, 1974 Springer (alas, only the 3rd edition from 1985), and Performance and Evaluation of Lisp Systems by Richard Gabriel (1985, MIT).

Here's a taste from both.

Wirth and Jensen:

Upon first contact with Pascal, some programmers tend to bemoan the absence of certain "favorite features." Examples include an exponentiation operator, concatenation of strings, dynamic arrays, arithmetic operations on Boolean values, automatic type conversions and default declarations. These were not oversights, but deliberate omissions. In some cases their presence would be primarily an invitation to inefficient programming solutions; in others it was felt that they would be contrary to the aim of clarity and reliability and "good programming style."

Gabriel:

Benchmarking is a black art at best. Stating the results of a particular benchmark on two Lisp systems usually causes people to believe that a blanket statement ranking the systems in question is being made. The proper role of benchmarking is to measure various dimensions of Lisp system performance and to order these systems along each of these dimensions.

Gabriel includes a pertinent quote from Vaughan Pratt: "Seems to me benchmarking generates more debate than information." How true...

I enjoyed the discussion of the various Lisp implementations in chapters 1 and 2. The Tak, Stak, Ctak, Takl and Takr series of benchmarks is enlightening. It shows how easy it is for benchmarks to measure "overheads" you haven't intended to measure, and how to engineer benchmarks to overcome this fundamental problem.
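For the curious, here is a rough sketch of the Takeuchi-style recursion at the heart of Tak, transcribed from memory into Haskell rather than Lisp, so treat the details as approximate. If memory serves, the variants wrap this same recursion in special variables (Stak), catch/throw (Ctak), list arithmetic (Takl), and many separately defined copies of the function (Takr), each exposing a different implementation overhead.

    -- A sketch of the Takeuchi function behind the Tak benchmark.
    -- Gabriel's originals are in Lisp; this Haskell transcription is mine.
    tak :: Int -> Int -> Int -> Int
    tak x y z
      | y >= x    = z
      | otherwise = tak (tak (x - 1) y z)
                        (tak (y - 1) z x)
                        (tak (z - 1) x y)

    main :: IO ()
    main = print (tak 18 12 6)  -- the argument triple traditionally used in the benchmark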
