The Little Books in Oz

Translating code from one programming language to another is a black art. Even if you succeed in capturing the functionality, each PL has its own styles, idioms and community mores. Automated translation (which I have done) yields more misses than hits. Doing it manually gets you closer, but it can take an inordinate amount of time to get it just right. Even so, PL translation is something I personally enjoy, as it is particularly instructive about the strengths and limitations of each language in expressing different concepts (though I usually catch flak for violating the social values of the target language).

My latest postings into this gray area are translations of the remaining Little Books to Oz - The Little Schemer, The Seasoned Schemer, The Little MLer and A Little Java, A Few Patterns (see the previous LtU post on The Reasoned Schemer in Oz). The Little Books are the antithesis of recipe books. There's not much code here that can be plugged into a project; the aim is to systematically teach programming thought processes. The books are useful for those wanting to learn Scheme (or ML), but the lessons carry over even if those are not your language(s) of choice. Such didactic material may not be everyone's cup of tea, but it does represent a unique way to teach (and I'm still hoping for The Little Haskeller).
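To give a flavor of what such a translation looks like (this is just an illustrative sketch of mine, not lifted from the translations themselves), a Little Schemer style definition like rember comes out in Oz as a small recursive function over lists:

  declare
  fun {Rember X Lat}
     case Lat
     of nil then nil
     [] H|T then
        if H == X then T
        else H|{Rember X T} end
     end
  end

  {Browse {Rember banana [apple banana orange]}}   % displays [apple orange]

The question-and-answer rhythm of the books survives the move; what changes is the surface syntax and the pattern matching that Oz gives you in place of cond and car/cdr.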

Along a similar line, I've started in on Introduction to Algorithms in Oz. Previously, I made a weak attempt at Knuth, which I'll get back to one of these years, but found that translating MIX to higher-level languages was tedious and time-consuming. The CLRS book is a bit easier to translate, but the pseudocode they chose to express algorithms in doesn't map exactly to any known programming language in the universe. The language is concise, which was their aim, but it takes some shortcuts and has some peculiarities. Also, like Knuth's, the algorithms are very much oriented around mutable state. (Purely Functional Data Structures is in my queue.) Anyhow, I find it interesting that the authors of the two best-known books on algorithms chose to invent their own language rather than use an existing PL.
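As a rough illustration (my own sketch, not code from the book or from the finished translation), CLRS's INSERTION-SORT lands in Oz on a mutable array, with the pseudocode's inner while loop turning into a small recursive helper since its loop-and-assign style has no direct Oz counterpart:

  declare
  proc {InsertionSort A}          % A is a mutable Oz array
     Low  = {Array.low A}
     High = {Array.high A}

     % Shift elements greater than Key one slot to the right and
     % return the index at which Key belongs.
     fun {Shift I Key}
        if I >= Low andthen {Array.get A I} > Key then
           {Array.put A I+1 {Array.get A I}}
           {Shift I-1 Key}
        else
           I+1
        end
     end
  in
     for J in Low+1..High do
        local Key = {Array.get A J} in
           {Array.put A {Shift J-1 Key} Key}
        end
     end
  end

  local A = {Tuple.toArray unit(5 2 4 6 1 3)} in
     {InsertionSort A}
     {Browse {Array.toRecord sorted A}}   % displays sorted(1 2 3 4 5 6)
  end

Even in a toy like this the mismatch shows: the indices, the in-place updates and the loop bounds all translate, but the shape of the code drifts away from the pseudocode on the page.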

Proviso

My translations (including CTM and SICP) are not intended to supplant the various books. Rather, they are supplemental material that may be useful for those who have the books; the code is not meaningful without reference to them. These translations are mostly for my own benefit, though they might give others a different perspective by showing the code expressed in another programming language.

I put the code on a Wiki with the thought of basically open-sourcing the translations, hoping others might correct or refine them. But few have the fortitude for such (tedious) translation work. Mostly it's just a hobby of mine, an escape from the day-to-day drudgery of the more mundane programming languages I use in my day jobs.

It's The Little Haskellist

It's The Little Haskellist, not Haskeller...

Syntax error

I'll probably never grok the syntactic rules for word endings in the English language - I want to see the EBNF notation so I can get an objective proof. So what are the -er versus -ist rules? Or should it really be The Little Haskellate?

Rules?

Are there rules? I was just going by what sounded better to my ears (come to think of it, that's also the method I use most often when judging PL semantics...)

Programming Languages and Thought

Being in a semi-reflective mood... :-)

In answer to my question about why the two main algorithm books invented their own languages (MIX and the pseudocode of CLRS)... (OK, I didn't actually frame it as a question...).

If I had an infinite number of clones (and no resource drains such as kids, jobs, girlfriends, etc.), I could probably translate a book like SICP into almost any language. The results wouldn't be completely satisfactory, but Turing equivalence means it's probably feasible. Yet if I were to go back in time and force Abelson and Sussman to use another programming language (say ML), the book would have come out with a totally different emphasis and a much different arrangement. Although the principles in SICP are fairly universal, the programming language very much affects the method and manner of the text.

In the same way, the programming language we choose to express our code in has a dramatic effect on the results. More often than not, we take the path of least resistance laid out by the syntax and semantics of the language. If you are constantly trying to express models where you have to fight the language at each step, it will be a losing battle. Now, we can fight being a slave to a PL in a couple of different ways. First, we can study a lot of different languages, which helps us get better control over the one we choose; that is, by studying more languages, we get a less myopic view. Unfortunately, this can lead to expressing ideas from other languages in our code where they are out of place (writing Lisp in Java). Or it can make us frustrated that we don't get to use a better language than the one we're stuck with. (Or, for the cynical, it can lead to the conclusion that all languages suck.)

The other way to fight being a slave to a language is to write your own, either a full-blown language or a DSL. The language then becomes a slave to whatever thought you happen to be trying to express. Which leads me full circle back to the question about the algorithm books: Knuth developed MIX because he wanted to express algorithms in the most concise language possible. In this sense, the algorithm is independent of the language representation... but the language should be in the form that is most amenable to expressing it.

Anyhow, just a thought...

"Looping"

So, in general, a person chooses or invents a language based on personal style and biases, and languages in turn influence programmers. The feedback here is more complicated than simple positive feedback, though, and it is one of the driving forces behind language communities and their sociology.