Icon Language Implementation and Unicon News

Icon has always impressed me as the most elegant and useful string-processing language; in fact, I still use it, and its speed impresses me to this day. Recently the out-of-print book The Implementation of the Icon Programming Language became available online as a PDF download. Better yet, it has been released into the public domain.

The successor language Unicon has published the first issue of The Generator, a journal devoted to that language. Unicon is an open-source project, and the code from the Icon implementation book lives on in Unicon's public code repository. What Unicon adds to Icon is object orientation, a POSIX interface, and other goodies.

My own plea is for language folks to study (Un-)Icon's string scanning mechanisms. How I wish they were more common. No matter what my task or language du jour, I always find myself longing for Icon when it comes to strings. Doug Bagley offers some OCaml libraries which imitate Icon string techniques.
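For those who haven't seen it, here is a rough sketch of what Icon-style scanning feels like, modelled in Haskell (a toy of my own, not Bagley's OCaml code): a scanning environment pairs the subject string with a cursor position, and matching primitives like move and many advance the cursor or fail. Real Icon adds goal-directed backtracking on top, which this sketch omits.

    import Control.Monad.State
    import Data.Char (isDigit)

    -- Toy model of Icon's scanning environment: an implicit subject
    -- string and cursor position, threaded through every matcher.
    type Scan a = StateT (String, Int) Maybe a

    -- Rough analogue of Icon's move(n): advance the cursor by n
    -- characters, returning the spanned text, or fail.
    move :: Int -> Scan String
    move n = do
      (s, i) <- get
      if i + n <= length s
        then do put (s, i + n)
                return (take n (drop i s))
        else lift Nothing

    -- Rough analogue of tab(many(c)): advance over a nonempty run
    -- of characters satisfying p, or fail.
    tabMany :: (Char -> Bool) -> Scan String
    tabMany p = do
      (s, i) <- get
      let run = takeWhile p (drop i s)
      if null run
        then lift Nothing
        else do put (s, i + length run)
                return run

    -- Icon: "12abc" ? { ns := tab(many(&digits)); rest := move(3) }
    example :: Maybe (String, String)
    example = evalStateT scan ("12abc", 0)
      where scan = do digits <- tabMany isDigit
                      rest   <- move 3
                      return (digits, rest)   -- Just ("12", "abc")

What makes the Icon original so pleasant is that the subject and cursor are implicit in the language itself, so scanning expressions compose without any of this plumbing.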

A Methodology for Generating Verified Combinatorial Circuits

A Methodology for Generating Verified Combinatorial Circuits. Oleg Kiselyov, Kedar N. Swadi, and Walid Taha.

This is the final version of a paper accepted for the 2004 Embedded Software Conference. The paper doesn't show any actual circuits, but the straight-line C code it generates should be straightforward to turn into circuits.

There is a significant difference from FFTW: the authors don't do any intensional code analysis; the generated code is a black box that can't be processed or manipulated any further. Moreover, generated code can't even be compared for equality. Oleg tells me that the paper is somewhat out of date: it says that they approach FFTW in the number of operations in the generated code. That is no longer true; the generated power-of-two FFT code has exactly the same number of floating-point operations as the corresponding FFTW codelets.

Abstract interpretation is used to fix several problems in the generated code. This makes some optimizations possible (e.g., avoiding code duplication).
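As a rough illustration of why that matters (a toy of my own devising, not the paper's MetaOCaml code): a generator that binds every intermediate result to a fresh variable can splice a result into the output twice without duplicating the computation in the emitted straight-line code. This is the essence of let-insertion.

    import Control.Monad.State

    -- Toy generator of straight-line code: the state carries a
    -- counter for fresh names and the bindings emitted so far.
    type Gen = State (Int, [String])

    -- Bind an expression to a fresh variable and return the name,
    -- so later uses refer to it rather than re-emitting the code.
    bind :: String -> Gen String
    bind rhs = do
      (n, code) <- get
      let v = "t" ++ show n
      put (n + 1, code ++ [v ++ " = " ++ rhs ++ ";"])
      return v

    -- Using the bound name twice costs nothing extra: the output is
    -- t0 = x + y; t1 = t0 * t0;  rather than  (x + y) * (x + y).
    squareOfSum :: String -> String -> Gen String
    squareOfSum x y = do
      s <- bind (x ++ " + " ++ y)
      bind (s ++ " * " ++ s)

    demo :: [String]
    demo = code ++ ["return " ++ r ++ ";"]
      where (r, (_, code)) = runState (squareOfSum "x" "y") (0, [])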

Multi-stage programming fans, enjoy!

Derrida's Machines

PDF Link

Came across this in my referrers log and thought it might be of interest to others. It reminds me of GEB in scope, meandering through Number Theory, Category Theory, Combinatorial Logic, Metaphysics, AI, and the Semantic Web, touching on programming language issues along the way. There are lots of loose ends and the flow is uneven, but various gems are scattered throughout (I thought the section that explains category theory was good).

What have we learnt on our trip around the fascinating perspectives and problems of a Dynamic Semantic Web? It is all about dynamics and structures. This brings us back to the central topics of DERRIDA'S MACHINES: Interactivity between structures and dynamics, that is, to the interplay of algebras and co-algebras, ruled by category theory and surpassed by the diamond strategies leading to polycontexturality and kenogrammatics.

Guess I'll have to work on my vocabulary, as I've never heard of polycontexturality or kenograms.

Grady Booch on software archeology

Booch's Rational User Conference presentation on software archeology is available online.

This is related to our previous discussions about reading code, mining code for patterns, and the history of programming languages and software systems.

Obviously, many of the issues mentioned in the presentation (e.g., SLOC counts) are programming-language dependent. The major case study is Eclipse, which should make this interesting to IDE buffs as well.

See here for more on Booch's work on documenting software architecture.

Haskell Functional Programming Bookstore

John Meacham has set up a print-on-demand bookstore for Haskell-related books at CafePress. There are only a few books there at the moment, but they tend to be things that are otherwise out of print, for instance Simon Peyton Jones and David Lester's Implementing Functional Languages. In this particular case, Amazon has one used copy for $204, but you can get a brand new one from the Haskell bookstore for $18.50.

It's the language, stupid. Or is it?

In a forum dedicated to programming languages it is easy to get carried away and forget that choosing a programming language for a project is not just about finding the best or most expressive language possible; often the choice depends very much on the platform for which the software is developed.

An interesting blog post about web applications, AlphaBlox and Oddpost should help drive this point home.

It describes the struggle of developing applications with rich user interfaces for web browsers, especially early versions of IE. JavaScript was the programming language used.

Along the way, browsers evolved (as did the browser-language interface, the DOM), the language matured, and new programming techniques were discovered.

So, yes, it is the language. But it is good to keep in mind that things are not always as simple as they may seem at first sight.

The C++ Source Journal

via Digital Mars C++

C++ [is] indisputably the most powerful programming language in the world. For reasons understood by many, it has, however, become a very complex language, and many a novice has followed after the siren song of lesser languages.

So is launched The C++ Source, the authoritative voice for the C++ community on the web. The board of editors is a veritable Who's Who of C++, and articles are peer-reviewed. There is not much content yet, but it looks like they offer live feeds. The journal's first article, "The Spirit of C," offers a potted history of C.

Like BCPL, B was a typeless language with a rich set of operations on machine words... How well does B embody The Spirit [of C]? Almost perfectly, in my opinion. "Trust the programmer" and "Don't prevent the programmer from doing what needs to be done" are obvious characteristics of a typeless language. [Meanwhile, however, the] C++ language is very nearly completely type-safe, which is a good thing, as its type system is arguably the most complex of any language.

Put that in your pipe and smoke it.

Streaming Representation-Changers

Jeremy Gibbons (2004). Streaming Representation-Changers. To appear in Mathematics of Program Construction, July 2004.

Unfolds generate data structures, and folds consume them. A hylomorphism is a fold after an unfold, generating then consuming a virtual data structure. A metamorphism is the opposite composition, an unfold after a fold; typically, it will convert from one data representation to another. In general, metamorphisms are less interesting than hylomorphisms: there is no automatic fusion to deforest the intermediate virtual data structure. However, under certain conditions fusion is possible: some of the work of the unfold can be done before all of the work of the fold is complete. This permits streaming metamorphisms, and among other things allows conversion of infinite data representations. We present the theory of metamorphisms and outline some examples.

More origami work from Gibbons.

This paper is related to several previous papers by Gibbons that were mentioned here (e.g., spigot pi, arithmetic coding), and it could be said that it summarizes the main results. I don't think this paper was linked to directly, so here goes.
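The key combinator is small enough to sketch; the following Haskell is my reconstruction of the idea, not code lifted from the paper. Streaming is sound only when production commutes with consumption, and on finite input a real implementation also needs a flushing phase for the leftover state, which this sketch omits.

    -- The streaming combinator: try the producer step f first; only
    -- when it cannot emit is the consumer step g fed the next input.
    stream :: (s -> Maybe (b, s)) -> (s -> a -> s) -> s -> [a] -> [b]
    stream f g s xs =
      case f s of
        Just (b, s') -> b : stream f g s' xs
        Nothing      -> case xs of
          x : xs' -> stream f g (g s x) xs'
          []      -> []

    -- A simple metamorphism satisfying the streaming condition:
    -- a delay line that emits each element n steps after it arrives.
    -- Unlike the unstreamed  unfoldr f . foldl g s, it works on
    -- infinite input: take 5 (delay 2 [1..]) == [1,2,3,4,5].
    delay :: Int -> [a] -> [a]
    delay n = stream f (\s x -> s ++ [x]) []
      where
        f s | length s > n = Just (head s, tail s)
            | otherwise    = Nothing

The paper's motivating examples, such as radix conversion and arithmetic coding, fit the same shape: outputs can be committed before all the input has been read.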

Routine Maintenance

Later today we are going to do some routine maintenance. During this period, posting new items may be disabled. If all goes well, the window in which posting is disabled will be short.

Wobbly types

Wobbly types: type inference for generalised algebraic data types

Simon Peyton Jones, Geoffrey Washburn, and Stephanie Weirich. July 2004. (Postscript)

Generalised algebraic data types (GADTs), sometimes known as "guarded recursive data types" or "first-class phantom types", are a simple but powerful generalisation of the data types of Haskell and ML. Recent works have given compelling examples of the utility of GADTs, although type inference is known to be difficult.

It is time to pluck the fruit. Can GADTs be added to Haskell, without losing type inference, or requiring unacceptably heavy type annotations? Can this be done without completely rewriting the already-complex Haskell type-inference engine, and without complex interactions with (say) type classes? We answer these questions in the affirmative, giving a type system that explains just what type annotations are required, and a prototype implementation that implements it. Our main technical innovation is wobbly types, which express in a declarative way the uncertainty caused by the incremental nature of typical type-inference algorithms.
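For readers new to GADTs, here is the canonical example (mine, not the paper's): a typed expression language whose evaluator needs no runtime tags. Note that the signature on eval is precisely the sort of annotation the paper is concerned with, since matching on a GADT constructor refines the result type in a way plain Hindley-Milner inference cannot guess.

    {-# LANGUAGE GADTs #-}

    -- A typed expression language: each constructor fixes the type
    -- index, so ill-typed terms simply cannot be constructed.
    data Expr a where
      IntL  :: Int  -> Expr Int
      BoolL :: Bool -> Expr Bool
      Add   :: Expr Int  -> Expr Int -> Expr Int
      If    :: Expr Bool -> Expr a   -> Expr a -> Expr a

    -- The signature on eval is required: matching on, say, IntL
    -- refines a to Int, and wobbly types track exactly where such
    -- refinements may and may not be assumed during inference.
    eval :: Expr a -> a
    eval (IntL n)   = n
    eval (BoolL b)  = b
    eval (Add x y)  = eval x + eval y
    eval (If c t e) = if eval c then eval t else eval e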

Edit: Made the title into a hyperlink, as the "Postscript" link could easily get lost in a sea of blue text...

Edit 2: Quoted the article with a plain-ol' <blockquote> instead. Better?