Irresistible programs

Here's the deal: Let's design a short booklet of irresistible programs, sure to intrigue and delight programmers and geeks. The programs should be short (one page), real (not made up for this occasion) and preferably of historical value, and must look interesting (bonus points to those suggesting programs written in amusing glyphs).

Here are a couple of examples: McCarthy's Lisp (1960), Quicksort in Haskell (and, for comparison, Hoare's definition of Partition), and the first winners of the Obfuscated C Code Contest.

Now it's your turn. Which programs do you find simply irresistible?


David Turner's solution of the Hamming problem

This is, I think, widely recognised as a classic:

ham = 1 : foldr1 merge [mult 2 ham, mult 3 ham, mult 5 ham]
      where
      mult n x = [n*a|a<-x]
      merge (a:x) (b:y) = a : merge x y,     if a=b
                        = a : merge x (b:y), if a<b
                        = b : merge (a:x) y, if a>b

From David Turner's example Miranda scripts. An inessential variant appears in the Wikipedia article on Haskell.
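For readers who don't speak Miranda, the same idea can be sketched imperatively. This is my own Python rendition, not Turner's code: the three lazily multiplied streams become three indices into the output list.

```python
def hamming(n):
    # First n Hamming numbers. Turner's three merged streams
    # (mult 2 ham, mult 3 ham, mult 5 ham) become three cursors into h.
    h = [1]
    i2 = i3 = i5 = 0
    while len(h) < n:
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        # advance every stream that produced nxt; this is Turner's merge,
        # which drops duplicates when two streams agree
        if nxt == 2 * h[i2]: i2 += 1
        if nxt == 3 * h[i3]: i3 += 1
        if nxt == 5 * h[i5]: i5 += 1
    return h
```

The min-of-three-cursors step is exactly what the three-way comparison in merge does.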

probably not a classic

but I prefer the lucid version,

     h = 1 fby merge(merge(2 * h, 3 * h), 5 * h);
     merge(x,y) = if xx <= yy then xx else yy fi
        where
           xx = x upon xx <= yy;
           yy = y upon yy <= xx;
        end;

Hamming problem redux

Xref: a previous lengthy discussion of the Hamming problem.

Haskell's little fib

The infinite Fibonacci sequence in Haskell is where I decided I just had to know Haskell. Others I've talked to said the same thing.

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

Duff's device

I learned C very young (16 or so). As I started writing interesting programs I started feeling cocky. I thought I *knew* that language and its library. Then I saw Duff's device for unrolling loops and realized I didn't know squat. Here's the original which sends the contents of an array to a register for IO.

	send(to, from, count)
	register short *to, *from;
	register count;
	{
		register n = (count + 7) / 8;
		switch (count % 8) {
		case 0:	do {	*to = *from++;
		case 7:		*to = *from++;
		case 6:		*to = *from++;
		case 5:		*to = *from++;
		case 4:		*to = *from++;
		case 3:		*to = *from++;
		case 2:		*to = *from++;
		case 1:		*to = *from++;
			} while (--n > 0);
		}
	}


Much as people say "well, can a language do what lisp can do?", I tend to judge procedural languages on whether they can do Duff's device. It's a pretty short list.... :)
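For anyone who finds the fall-through hard to follow, here is the control flow Duff's device achieves, spelled out in a Python sketch of my own (no fall-through available, so "jumping into the middle of the loop" becomes an explicit first-pass count):

```python
def send(dest, src):
    # Copy len(src) items in passes of 8, as Duff's device does:
    # the switch jumps into the first pass to soak up count % 8 leftovers,
    # after which every pass through the unrolled body copies a full 8.
    count = len(src)
    n = (count + 7) // 8          # passes through the unrolled body
    first = count % 8 or 8        # items copied on the first pass
    i = 0
    for _ in range(n):
        for _ in range(first):
            dest.append(src[i])   # "*to = *from++" (to is a device register)
            i += 1
        first = 8
```

The point of the original, of course, is that the C version pays for none of this bookkeeping inside the loop.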

A different kind of "irresistible"

Oh, I don't think Duff's Device is an irresistible feature for language designers. But it is an irresistible puzzle to a programmer - at least it was to this programmer back when. How could it possibly mean anything?

The intriguing part to me is that you can generally understand things in an abstraction "black box" way or an implementation "white box" way. Normally the former view is more useful for composing things because you can ignore irrelevant details. But Duff's Device seems like an exception. When you think of switch and do/while as black box control flow operators Duff's device appears to be gibberish. "Case 1 falls through to a 'while' without having gone through a matching 'do'...WTF?" Yet when you think of switch and do/while as syntactic sugar for certain patterns of conditional jmp then it snaps into clear focus.

That might be why Duff said

I feel a combination of pride and revulsion at this discovery.

The quote I like is “Many

The quote I like is “Many people (even bwk?) have said that the worst feature of C is that switches don't break automatically before each case label. This code forms some sort of argument in that debate, but I'm not sure whether it's for or against.”

Keep in mind that I wasn't asking just for language design examples. And in fact the device surely makes one curious about language semantics.

game of life

Game of Life in one line of APL.

It's what made me want to learn array/vector-based languages.

Cellular automata

Conway's Game of Life isn't just irresistible for the opportunities to play golf in APL, but also because it lets you create irresistible "programs" in the game - initial configurations that lead to beautiful long-running or infinite executions. Gosper's Glider Gun is particularly irresistible.

Hacker's Delight

I have a soft spot for bit twiddling, especially bit-parallel SIMD-within-a-register implementations of Life.
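A sketch of what SIMD-within-a-register Life looks like, using Python integers as bitboards. This is my own toy version, not lifted from Hacker's Delight: each row is one word, and the eight neighbour counts for every cell in a row are accumulated at once with bitwise full adders.

```python
def life_step(rows, width):
    # One Life generation; bit x of rows[y] is cell (x, y).
    mask = (1 << width) - 1

    def add(ones, twos, fours, nb):
        # Bitwise full adder: fold neighbour bitboard nb into a per-cell
        # 3-bit counter spread across the planes ones/twos/fours.
        c1 = ones & nb
        ones ^= nb
        c2 = twos & c1
        twos ^= c1
        fours |= c2              # saturating: we only need "count >= 4"
        return ones, twos, fours

    out = []
    for y, row in enumerate(rows):
        above = rows[y - 1] if y > 0 else 0
        below = rows[y + 1] if y + 1 < len(rows) else 0
        ones = twos = fours = 0
        for nb in ((row << 1) & mask, row >> 1,
                   above, (above << 1) & mask, above >> 1,
                   below, (below << 1) & mask, below >> 1):
            ones, twos, fours = add(ones, twos, fours, nb)
        three = ones & twos & ~fours    # exactly 3 neighbours
        two = ~ones & twos & ~fours     # exactly 2 neighbours
        out.append((three | (row & two)) & mask)
    return out
```

A whole row of cells is updated per pass, with no per-cell branching at all.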

Prolog's append

When I started college I knew C. At the time, Pascal was the school's primary language for undergrads. I was too new to programming to understand that Pascal and C are virtually the same language, so I started getting cocky again as I learned this "new" language very quickly.

Then we took six weeks of Prolog and I realized, yet again, that I didn't know squat.

One irresistible bit of Prolog is "append"

    append([],Z,Z).
    append([X|Y],Z,[X|W]) :- append(Y,Z,W).

What I find irresistible about logic programming is that you get so many "functions" for the price of one.

Is the concatenation of [1,2,3] and [4,5] equal to [1,2,3,4,5]?

    ?- append([1,2,3],[4,5],[1,2,3,4,5]).

What do I get when I append [4,5] after [1,2,3]?

    ?- append([1,2,3],[4,5],A).
    A = [1,2,3,4,5]

What do I need to append to [1,2,3] to get [1,2,3,4,5]?

    ?- append([1,2,3],Y,[1,2,3,4,5]).
    Y = [4,5]

What do I need to prepend to [4,5] to get [1,2,3,4,5]?

    ?- append(X,[4,5],[1,2,3,4,5]).
    X = [1,2,3]

What are all the possible pairs of lists that, when concatenated together, yield [1,2,3,4,5]?

    ?- append(X,Y,[1,2,3,4,5]).

    X = []
    Y = [1,2,3,4,5] ? ;

    X = [1]
    Y = [2,3,4,5] ? ;

    X = [1,2]
    Y = [3,4,5] ? ;

    X = [1,2,3]
    Y = [4,5] ? ;

    X = [1,2,3,4]
    Y = [5] ? ;

    X = [1,2,3,4,5]
    Y = [] ? ;

When it comes to Prolog and

When it comes to Prolog and examples, I prefer the Sequence Program discussed by Apt here. However, I’d prefer to have examples that come from specific historical context, and that tell something about the relevant era.


N queens certainly admits elegant Prolog realisations, and has a history, but I don't know enough to cite a particular piece of code as classic.

Google search throws up an implementation in Yield Prolog that is certainly concise, but I think can be made more elegant.

Come to think of it: the

Come to think of it: the classic here would be Wirth's version from Program Development by Stepwise Refinement (1971).

Prolog challenge

Prolog is a challenge. The language as a whole has historical significance and creates a rich vein of intriguing programs for obvious reasons, but since logic programming has lived so far off center stage it's hard to think of Prolog examples that are "classic" (in the sense of well known and archetypal) other than the standard pedagogical intro of father/ancestor/sibling, but that's just not historically interesting.

Maybe there's something in language processing.



Sequence Program

Ehud missed the link in the parent post. I think he means the example on p20 of The LP paradigm and Prolog.

There's certainly some appeal to the example, but I think it falls short of being arresting.

True and true.

True and true.

Prolog and Difference Lists

Your Prolog example really took me back to my days at Uppsala University, where our professor, Sten-Åke Tärnlund, was mainly known among the students as one of the "inventors" of "difference lists" (d-lists). (Apparently, the use of incomplete data structures in Prolog was common, but Clark and Tärnlund were the first to publish this particular example.)

The difference list is a really neat programming trick that uses unification of logic variables in Prolog to achieve a non-recursive, constant-time append:

append(A-B, B-C, A-C).

That's it!

A difference list A-B represents a list A followed by a tail B. For example:

[a, b, c | []]-[]

would represent the list [a, b, c]. If we use a logic variable for the tail part:

[a, b, c | X]-X

we can unify the logic variable X with another list to append the two.

Prolog's logic variables are kind of used to implement pointer manipulation, but with a logical interpretation.

As an example

append([a, b, c | Z1]-Z1, [d, e | Z2]-Z2, X-Y).

would result in X-Y being [a, b, c, d, e | Y]-Y

Prolog meta-interpreter with difference lists

Difference lists are no doubt of historic significance for Prolog: It was previously unclear how to implement DCGs efficiently. A very nice use of difference lists is the following meta-circular evaluator ("meta-interpreter") for Prolog:

mi([]).
mi([G|Gs]) :-
        mi_clause(G, Rest, Gs),
        mi(Rest).

Here is a simple example program in the required representation, making use of list differences to efficiently join remaining goals:

mi_clause(natnum(0), Rs, Rs).
mi_clause(natnum(s(X)), [natnum(X)|Rs], Rs).

The interpreter can evaluate this program:

%?- mi([natnum(X)]).
%@ X = 0 ;
%@ X = s(0) ;
%@ etc.

The interpreter's own code can also be represented:

mi_clause(mi([]), Rs, Rs).
mi_clause(mi([G|Gs]), [mi_clause(G,Rest,Gs),mi(Rest)|Rs], Rs).

mi_clause(mi_clause(G,Rs,Rs0), Ls, Ls) :- mi_clause(G, Rs, Rs0).

And now the interpreter can be applied to itself. Here is an example of the interpreter, interpreting itself as it evaluates the example program:

%?- mi([mi([natnum(X)])]).
%@ X = 0 ;
%@ X = s(0) ;
%@ X = s(s(0)) ;
%@ etc.

Contrast this with the comparatively verbose meta-circular evaluators of other languages.

Difference lists are a very

Difference lists are a very cool trick, no doubt. Can someone supply a historical reference for difference lists? (were they introduced in a paper?)

Reference to Difference Lists

I haven't found any online source, but this seems to be the reference to the original Clark / Tärnlund paper:

K. L. Clark and S.-Å. Tärnlund. A first order theory of data and programs.
In B. Gilchrist, editor, Proceedings of the IFIP Congress, pages 939–944, Toronto, Canada, 1977. North Holland.

The catch

The catch is that you cannot append to the same list twice, right?

Functional difference lists

The catch is that you cannot append to the same list twice, right?
With this representation, right.

Difference lists are known in FP, using lambda binders, so in scheme we have:

 (lambda (x) `(a b c d e . ,x))

which can be reused.
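To make the reuse point concrete, here is my transliteration of the Scheme encoding into Python: a difference list becomes a function from a tail to the full list, and append becomes function composition.

```python
# [a,b,c|X]-X as a function from the tail X to the whole list
abc = lambda rest: ['a', 'b', 'c'] + rest
de = lambda rest: ['d', 'e'] + rest

def dappend(d1, d2):
    # appending two difference lists is just function composition
    return lambda rest: d1(d2(rest))

abcde = dappend(abc, de)
```

Unlike the unification-based version, abc can be appended to as many times as you like.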

With higher-order unification, we can do this in LP too. In lambda-Prolog, Dale Miller has talked of "functional difference lists"; cf. section 11 of his Collection Analysis for Horn Clause Programs from 2006.

The many uses of append/3

Other uses of append that we don't want to miss out on...
Is 3 a member of [1,2,3,4,5]?

    ?- append(_, [3|_], [1,2,3,4,5]).

Enumerate the members of [1,2,3,4,5]

    ?- append(_, [X|_], [1,2,3,4,5]).

...also, since lists and unary numbers share the same structure (a number n as a list of n markers), we can use append to perform arithmetic.

X = 2+3

    ?- append([s,s], [s,s,s], X).
    X = [s,s,s,s,s]

X = 5-4

    ?- append([s,s,s,s], X, [s,s,s,s,s]).
    X = [s]

Find the pairs of numbers that add up to 4

    ?- append(X, Y, [s,s,s,s]).


While append in prolog is

While append in prolog is magical, it is more the uses of the predicate than the definition that are irresistible. Second, I'd prefer to use code that had significant impact on programming, programming culture etc.

Great suggestions! So here

Great suggestions!
So here is the current list: McCarthy, Obfuscated C, David Turner's solution of the Hamming problem, Duff's device, the Game of Life in one line of APL (and Gosper's glider gun), and a Prolog example. That's six. Can we come up with four more? Perhaps things that come from "real code", not from puzzles or examples?


Quines are irresistible and historically important, and even live in "real code" - viruses. :-)

Pick your language.

I thought of quines. Heck, I

I thought of quines. Heck, I love quines. But historically important?

Self reference

Maybe I overspoke. It's more accurate to say that the general concept of indirect self-reference is important (huge), and quines are one example of it.

Or I could just say that computer viruses are historically significant so I'm covered there as well. :-)

Quines are the most historically important programs of all!

If "quine" means "an automaton using its own source code as input", then Gödel used a quine, more or less, to prove his incompleteness theorem; Turing used one to prove that the halting problem was uncomputable; Thompson used one to show that access to the source code of all of your software and compilers is insufficient to find backdoors in it; Steve Russell approximately invented functional programming languages by encoding McCarthy's Lisp quine into (709?) assembly; and John von Neumann predicted that the structure of self-reproducing life forms in general would turn out to be quines, and was proved right with the discovery of DNA.

So if quines are responsible for the existence of life, Gödel's incompleteness theorem, the proof of the uncomputability of the halting problem, and the existence of functional programming languages, I nominate them as the most historically important category of programs of all.

Probably OT but still interesting

I don't think it totally fits what you're asking, but this DDJ article blew my mind the first time I read it:

Too bad the full article is too long, but the optimised code for the Game of Life is nice, no?

A favorite from the world of crypto

While recent attacks have caused it to lose a bit of its lustre, one of my favorite crypto algorithms is RC4. It's clean and simple enough to commit to memory, and straightforward enough to compute with pencil and paper, while also being quite efficient on a computer. I don't know if it "looks interesting", I'm guessing it might look cooler in something like K or Q (anyone feel like taking a stab?) than it does in the OCaml below, but beauty is in the eye of the beholder, and some of those Obfuscated C entries... yech!

Anyway, here's a sorta clean, kinda practical implementation:

external get_byte : string -> int -> int = "%string_unsafe_get"
external set_byte : string -> int -> int -> unit = "%string_unsafe_set"

class rc4 =
  object (self)
    val state = Array.init 256 (fun i -> i)
    val mutable x = 0
    val mutable y = 0
    method private swap i j =
      let t = state.(i) in state.(i) <- state.(j); state.(j) <- t
    method private mask =
      x <- (x + 1) land 0xff;
      y <- (y + state.(x)) land 0xff;
      self#swap x y;
      state.((state.(x) + state.(y)) land 0xff)
    method init key =
      let state_index = ref 0
      and len = String.length key in
      for i = 0 to 255 do
        state_index := (!state_index + state.(i) + (get_byte key (i mod len))) land 0xff;
        self#swap i !state_index
      done
    method crypt src =
      let dest = String.copy src in
      for i = 0 to String.length dest - 1 do
        set_byte dest i ((get_byte dest i) lxor self#mask)
      done;
      dest
  end
The implementation of

The implementation of semaphores is also a classic (especially the published one that had a bug...)

Talking about a classic with a bug

I remember one sort program from Programming Pearls which was used to introduce loop invariants and the like, because many people would get the sort wrong; but the 'sort' was slightly buggy itself: to find the middle of [I,J], it used (I+J)/2, and in many languages (I+J) can overflow!

I was quite amused when I found this.

I don't get this one

If we were to use a standard of rigour for which the possible overflow of (I+J)/2 is considered a bug, then I can hardly think of a paper that isn't buggy. Certainly in the field in which I work (3D graphics) almost every expression computed in almost every single paper I have seen would be considered buggy because it will cause overflow for some input.

In the process of taking an idea in a paper to production code a programmer ought to handle potential overflows. But I don't see that it is the job of the paper author to highlight the potential problems of every particular programming language in this case, especially when it might clutter an otherwise clear idea. As far as I can tell, it is standard procedure to consider an idealised machine when presenting academic work, unless the paper is specifically about non-idealised machines.

Of course my calibration may be off because the standards in graphics publications are considerably different from those in PLT.

The thing is simply that one

The thing is simply that one of the benefits of formal methods is that you catch these "extreme" cases. In this case it could be an unreasonable criticism, but there are definitely cases where it is not. My impression is that most programmers would not even realize that there was this error case without having it pointed out and wouldn't immediately know how to rewrite it (though they could probably figure out a way quickly enough.) Combined with the fact that overflow errors cause real problems, the criticism is probably not unreasonable.

In this case, i + (j-i)/2 is one way to avoid overflow.
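The failure mode is easy to demonstrate by simulating 32-bit wraparound in Python (Python's own integers don't overflow, so the wrap is done by hand; the concrete indices below are just illustrative big values):

```python
def mid_naive(i, j, bits=32):
    # (i + j) // 2, with the sum wrapped to a signed fixed-width integer
    m = 1 << bits
    s = (i + j) % m
    if s >= m // 2:
        s -= m                  # wraps around to a negative value
    return s // 2

def mid_safe(i, j):
    # i + (j - i) // 2 never leaves [i, j] when 0 <= i <= j
    return i + (j - i) // 2
```

With indices around two billion, the naive midpoint comes out negative while the safe one is exact.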

i + (j-i)/2

I think it can overflow if i and j have opposite signs and both are big.


Positions in a list (i and j here) are natural numbers, not integers.
(They can't be negative :) )

Depends where

While (I+J)/2 is reasonable in math/research papers, Programming Pearls is a book about teaching programming, so here it's not adequate IMHO: the author should have used the opportunity to introduce I + (J-I)/2 to avoid potential overflow (or use a Middle function to hide it during the main explanation, and only at the end show that Middle's correct implementation in a C-like language is not the obvious formula).

Overflow isn't a bug if it

Overflow isn't a bug if it gives the right answer, or even a right answer. If you're using saturating arithmetic, pixel value overflow may look just fine. But if you get a negative array index and it crashes your program, or just an incorrect unsigned array index, and it fails to find a key that was present in the array, then it's a bug.

Assembly Gems

I'm a sucker for assembly language. Many of these are cool even as x86 code.

Assembly is a good place to

Assembly is a good place to look for irresistible code, but can you point to a specific example worthy of inclusion in our list? Remember that it would be best if it comes from a specific system and has historical value.

Two of my favorites are

Two of my favorites are KarL/NoooN's 40-byte sinus generator and Pascal's 20-byte starfield. If you followed the demoscene back in the day, these two names should ring a bell.

Unconditional snippets

In an assembly language, it is sometimes possible to express a certain computation unconditionally, although doing the same would seem to be impossible in most other languages. Such things are often both impressive and efficient. Here are several examples in the i8086 assembly language, from my personal collection, about 20 years old now. Most if not all are, I believe, well known idioms.

Decrement by 1 the content of ax if positive but keep it 0 if already so (∸, dot minus, monus, or floor subtraction of 1):

    sub  ax,1
    adc  ax,0

In addition, the following is true after the execution of the above:

  • CF (the carry flag) is set iff ax was already 0;
  • ZF (the zero flag) is set iff ax is 0 after.
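The first trick is easy to check by modelling the 16-bit register and the carry flag in Python (a toy simulation of mine, not real assembly):

```python
def monus1(ax):
    # sub ax,1 : the borrow (CF) is set iff ax was already 0
    cf = 1 if ax < 1 else 0
    ax = (ax - 1) & 0xFFFF
    # adc ax,0 : add the carry back in
    ax = (ax + cf) & 0xFFFF
    return ax
```

So 0 stays 0 (the borrow is immediately added back), and everything else is decremented, branch-free.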

Increment ax by 1 iff currently the content of ax is less than that of bx (increment ax preserving ax≤bx):

    cmp  ax,bx
    adc  ax,0

Absolute value of an integer in ax (cwd sign-extends ax into dx first):

    cwd
    xor  ax,dx
    sub  ax,dx

Signum (-1, 0, or 1) of an integer in ax:

    cwd
    add  ax,-1
    mov  ax,dx
    adc  ax,dx

The following is perhaps the most impressive, as it makes use of a BCD-arithmetic instruction, DAA (Decimal Adjust after Addition), in a rather unanticipated way. Given a number 0–15 in al, convert it to its ASCII hex representation, i.e. {0,1,…,15} –> {'0','1',…,'F'} (note that while 0,1,…,15 is a contiguous (integer) sequence, '0','1',…,'F' is not contiguous in ASCII):

    add  al,90h
    daa
    adc  al,40h
    daa

How about 6502 figForth?

It's a little long, but I think the figForth Forth interpreter for the 6502 qualifies as "irresistible".

Scheme in 90 minutes

Scheme in 90 minutes qualifies, IMHO.


Two candidates in Icon – the Fibonacci generator:

(i:=j:=1) | |((i+:=j):=:j)

and the 8-queens program from the Icon book:

procedure main()
  every write(q(1), q(2), q(3), q(4), q(5), q(6), q(7), q(8))
end

procedure q(c)
  suspend place(1 to 8,c)
end

procedure place(r,c)
  static up,down,row
  initial  {up := list(15); down := list(15); row := list(8)}
  if  /row[r] & /down[r+c-1] & /up[8+r-c]
    then  suspend row[r] <- down[r+c-1] <- up[8+r-c] <- r
end

I thought about Icon, but

I thought about Icon, but felt that the historical significance was probably not high enough. On second thought, perhaps the significance of backtracking and Icon’s role in the development of ideas about backtracking might make this example worthy of inclusion.
A few unique ways to express fib may also be a nice idea.

Indeed, Icon is one of the very

Indeed, Icon is one of the very few languages with constructs for explicit backtracking.

Icon's other genuine contributions to programming are its generators and co-expressions. A generator is an expression which – in a suitable context – produces a (possibly infinite) sequence of values rather than just one. A co-expression is (a form of) a lazy expression, and is particularly useful when that expression is a generator.

The Fibonacci generator from above can be turned into a co-expression, and then any number of calls of this co-expression can be made, themselves being generators if necessary. (The following program makes three such calls.)

procedure main()
  fib := create (i:=j:=1) | |((i+:=j):=:j)
  every  write(|@fib) \20
  every  write(|@fib) \10
end

In the realm of functional programming, I believe Strachey's 1966 Cartesian product function in CPL:

let CProd[L] = Lit[f,List1[NIL],L]
  where f[k,z] = Lit[g,NIL,k]
    where g[x,y] = Lit[h,y,z]
      where h[p,q] = Cons[Cons[x,p],q]

should be considered a classic, and is irresistibly elegant. Lit is what in ML, Haskell, etc. is called foldr, so in Haskell the same function is:

cprod = foldr f [[]]  where
  f as zss = foldr g [] as  where
    g a yss = foldr h yss zss  where
      h zs xss = (a:zs):xss

Perhaps this is the earliest example of higher-order programming using a ‘fold’. Or am I mistaken?
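The same right-fold structure carries over directly to Python once we define a foldr. This is my quick transliteration of the Haskell version, not Strachey's code:

```python
from functools import reduce

def foldr(f, z, xs):
    # foldr f z [x1, ..., xn] = f(x1, f(x2, ... f(xn, z)))
    return reduce(lambda acc, x: f(x, acc), reversed(xs), z)

def cprod(lists):
    # Cartesian product as three nested right folds, as in the CPL original
    def f(axs, zss):
        return foldr(lambda a, yss:
                     foldr(lambda zs, xss: [[a] + zs] + xss, yss, zss),
                     [], axs)
    return foldr(f, [[]], lists)
```

The empty product being [[]] (one empty tuple) falls out of the fold's base case, just as in Strachey's List1[NIL].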

The SECD machine

The SECD machine, of course.

Fast inverse square root.

This small piece of C is well known. It has been made famous by the release of the Quake 3 source code, but its roots are much older.

The most interesting bit is the magic initial guess. Someone even wrote a paper about it.

float InvSqrt(float x){
   float xhalf = 0.5f * x;
   int i = *(int*)&x;          // store floating-point bits in integer
   i = 0x5f3759df - (i >> 1);  // initial guess for Newton's method
   x = *(float*)&i;            // convert new bits into float
   x = x*(1.5f - xhalf*x*x);   // one round of Newton's method
   return x;
}

[BTW, hi all, first post after years of lurking]
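You can play with the trick without a C compiler; here is a Python rendition of the same bit-level game, using struct to reinterpret the float's 32-bit pattern:

```python
import struct

def inv_sqrt(x):
    # reinterpret the 32-bit float as a signed integer
    i = struct.unpack('<i', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)            # the magic initial guess
    y = struct.unpack('<f', struct.pack('<i', i))[0]
    return y * (1.5 - 0.5 * x * y * y)   # one round of Newton's method
```

Even with a single Newton step the result is accurate to a fraction of a percent.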

Bresenham's line and circle plotting algorithms

I marvelled at the Bresenham circle drawing algorithm at the tender age of 14. Links:

  1. J. E. Bresenham. Algorithm for computer control of a digital plotter. IBM Systems Journal, Vol 4, No.1, pp. 25-30, January 1965.
  2. J. E. Bresenham. A Linear Algorithm for Incremental Digital Display of Circular Arcs (ACM paywall). Comm. ACM 20(2), 1977. Algorithm described, together with some C code, in Wikipedia's Midpoint circle algorithm article.
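For reference, the line algorithm fits in a dozen lines of Python. This is the standard all-octant integer-error formulation (as presented in the Wikipedia article, not Bresenham's 1965 notation):

```python
def bresenham_line(x0, y0, x1, y1):
    # Integer-only line rasterisation: err tracks the scaled distance
    # from the ideal line; each step moves one pixel in x and/or y.
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points
```

No multiplication or division in the loop, which is the whole point on a 1965 plotter.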

Bitwise tricks

Bitwise tricks - ranging from simple elegance to gross inscrutability.
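Two of the simplest and most elegant, in Python for concreteness: x & (x-1) clears the lowest set bit (giving a one-line power-of-two test), and x & -x isolates it.

```python
def is_power_of_two(x):
    # a power of two has exactly one set bit, so clearing it leaves 0
    return x > 0 and x & (x - 1) == 0

def lowest_set_bit(x):
    # two's-complement negation flips every bit above the lowest set bit
    return x & -x
```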

Hacker's Delight

The standard reference on this sort of thing is an amazing book called Hacker's Delight, by Henry S. Warren Jr. of IBM, who also wrote an important early theoretical paper on this topic (Functions realizable with word-parallel logical and two's-complement addition instructions, Comm. ACM 20(6):439-441, June 1977.)

I totally forgot about this.

I totally forgot about this. And how could we fail to mention HAKMEM?!


Combinator reduction, and compiling the Lambda Calculus to combinators.

Concurrent agent using dataflow and fold

Here's a little program in Oz that I find irresistible (section 5.2.1 of CTM). Calling the function NewPortObject creates an active concurrent agent with an internal state that is transformed by the messages it receives. The initial state is Init and the state transition function (state x msg -> state) is Fun. The sequence of messages received by the agent is Sin. When this sequence terminates, the final state appears in Out.

  fun {NewPortObject Init Fun}
  Sin Out in
     thread {FoldL Sin Fun Init Out} end
     {NewPort Sin}
  end

This harmoniously combines concurrency, state threading using fold, and asynchronous communication using ports with dataflow synchronization.

A more cryptic example: contract net protocol

To get the bonus points for cryptic glyphs, I propose the following program fragment in Oz that implements a contract net protocol in three lines:

PRs={Map Providers fun {$ P} P#{Send P query(price $)} end}
Pb#Rb={FoldL PRs.2 fun {$ Pb#Rb P#R} if Rb>R then P#R else Pb#Rb end end PRs.1}
for P#R in PRs do {Send P if P==Pb then ok else no end} end

The full program is explained here. This program also combines asynchronous communication with dataflow synchronization and uses functional combinators as concurrency patterns.

META-II? The modern

META-II? The modern OMeta-in-itself equivalent? (It's 55 lines, but including the compiler backend that compiles it to JavaScript brings it to 83 lines.) The eForth model? (It's a few pages, sigh. My StoneKnifeForth is less than two pages, but not historically significant and not quite ready for publication.) Perl golf entries? Entries in the the5k DHTML competition?

Original J interpreter prototype and some Forth

Original J interpreter prototype.

A screenful of Forth - short implementations of a BNF generator and object orientation in Forth.

Original J

I was looking for that! Thanks.

More J and Forth

I also found the source for an early version of J.

And another couple of Forth possibilities:
Colorforth IDE driver
FSMs in Forth (you essentially 'draw' the table)

Binary matching and comprehensions in Erlang

How about this:

parse_ip_packet(<<Version:4, IHL:4, ToS:8, TotalLength:16,
                  Identification:16, Flags:3, FragOffset:13,
                  TimeToLive:8, Protocol:8, Checksum:16,
                  SourceAddress:32, DestinationAddress:32,
                  Data/bits>>) when Version == 4 ->


uuencode(BitStr) ->
  << <<(X+32):8>> || <<X:6>> <= BitStr >>.

uudecode(Text) ->
  << <<(X-32):6>> || <<X:8>> <= Text >>.
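The bit-syntax comprehension is doing real work there. An equivalent Python sketch (my approximation; like the Erlang generator, it simply skips a trailing group of fewer than six bits) makes the regrouping explicit:

```python
def uuencode_bits(data):
    # regroup the input's bits into 6-bit chunks and add 32 to each
    bits = ''.join(f'{b:08b}' for b in data)
    n = len(bits) - len(bits) % 6        # drop an incomplete final group
    return bytes(int(bits[i:i+6], 2) + 32 for i in range(0, n, 6))

def uudecode_bits(text):
    # inverse: each byte yields back one 6-bit group
    bits = ''.join(f'{b - 32:06b}' for b in text)
    n = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i+8], 2) for i in range(0, n, 8))
```

What takes a pair of explicit regrouping loops here is a single binary comprehension in Erlang.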