Programming (language) puzzles

Like math puzzles and physics puzzles, there are many different kinds of programming puzzles. One way to classify programming puzzles is by the skills they exercise or the concepts they illustrate: knowledge of a language, designing a new algorithm, understanding a specification, etc.

On LtU, I'm interested in puzzles that are not specific to a language yet rely on a PL notion. Here's one attempt of mine, inspired by my student Jun Dai:

Your stair-climbing robot has a very simple low-level API: the "step" function takes no argument and attempts to climb one step as a side effect. Unfortunately, sometimes the attempt fails and the robot clumsily falls one step instead. The "step" function detects what happens and returns a boolean flag: true on success, false on failure. Write a function "step_up" that climbs one step up (by repeating "step" attempts if necessary). Assume that the robot is not already at the top of the stairs, and neither does it ever reach the bottom of the stairs. How small can you make "step_up"? Can you avoid using variables (even immutable ones) and numbers?
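For concreteness, here is one way to read the interface in Haskell types (just one reading; the puzzle itself is language-neutral, and the bodies below are placeholders):

-- The robot's primitive: attempt one step as a side effect.
-- True = we climbed one step up; False = we fell one step down.
step :: IO Bool
step = error "provided by the robot (or stubbed out by a test harness)"

-- To be written: return only once the robot is one step higher
-- than where this call started.
step_up :: IO ()
step_up = error "to be written"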

What do you think? What is your solution? How are the solutions related to each other? (If "step" fails with a fixed probability, then how many times does "step_up" expect to call "step"?)

More importantly, what is your favorite PL puzzle that is not (terribly) language-specific?

step up

redacted. I don't want to be a spoiler.

Solutions, and another puzzle

What do you think? What is your solution?

Edit: I originally posted my solutions and analysis here, but took James's lead and decided to stick them somewhere else. Here they are.

How are the solutions related to each other?

If you CPS-convert and then defunctionalize the second program, I think you'll see that the continuation data type is isomorphic to the natural numbers. So the two programs should be roughly isomorphic from this point of view. In more operational terms, you could say that the chain of activation records is being used as an implicit counter: a non-tail call (frame push) corresponds to a counter increment, and a non-tail return (frame pop) corresponds to a counter decrement.
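To make the correspondence concrete, here is a rough Haskell sketch (mine, not the linked programs; it assumes a step :: IO Bool primitive) of what the defunctionalized continuation looks like:

-- The defunctionalized continuation is either "done" or "one more step
-- still owed, then continue" -- i.e. exactly a unary natural number.
data K = Done | More K

stepUpK :: IO Bool -> K -> IO ()
stepUpK _    Done     = return ()
stepUpK step (More k) = do
  ok <- step
  stepUpK step (if ok then k else More (More k))

step_up :: IO Bool -> IO ()
step_up step = stepUpK step (More Done)

Erasing the K argument and letting the call stack carry it implicitly gives back the doubly recursive solution; replacing K by an integer gives the counter loop.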

If "step" fails with a fixed probability, then how many times does "step_up" expect to call "step"?

Let the success probability be p and q = 1-p be its complement, the failure probability. The probability of succeeding in 1 step is p. You cannot succeed in 2 steps. The probability of succeeding in 3 steps is q*p^2. You cannot succeed in 4 steps. The probability of succeeding in 5 steps is q^2*p^3. The general pattern (easily provable) is that succeeding in 2n steps has probability p*(p*q)^(2n). So we have to sum the series sum 2n*p*(p*q)^(2n). First, let's rewrite this as 2p sum n*((p*q)^2)^n. Setting r = (p*q)^2, this reduces to sum n*r^n. Since r is constant with respect to n this can be summed using the usual differentiation trick for geometric series taught to freshmen.

Edit: Made some silly mistakes in this analysis. Even assuming my overall analysis was correct, the terms would be (2n+1)*(stuff) rather than 2n*(stuff). But I missed something else too: a factor to account for the different permutations. Once you put that in, the series gets messy, so I withdraw my analysis. You can find a solution with Google by looking for a derivation of the "mean return time" for random walks. This isn't an instance of the canonical random walk problem, but it's pretty close.

More importantly, what is your favorite PL puzzle that is not (terribly) language-specific?

I wouldn't say it's my favorite, but Danvy often talks about the problem of testing the balance of a Calder mobile in a single traversal. Not very difficult, but it's kind of cute.

Balancing a Calder mobile

Is this code I just wrote a solution to "the problem of testing the balance of a Calder mobile in a single traversal", or did I miss something else? If the tagging and untagging with Just is undesired, one can Church-encode the Maybe type into continuation-passing or exception-throwing code...

I'd like to see even more puzzles.
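Since the code the comment links to isn't reproduced here, this is only my guess at what such a single-traversal check looks like in Haskell (the Mobile type and the function names are mine):

data Mobile = Obj Int | Bar Mobile Mobile

-- Just w  = this sub-mobile is balanced and weighs w
-- Nothing = some bar below is unbalanced
weigh :: Mobile -> Maybe Int
weigh (Obj w)   = Just w
weigh (Bar l r) = do
  wl <- weigh l
  wr <- weigh r
  if wl == wr then Just (wl + wr) else Nothing

balanced :: Mobile -> Bool
balanced m = maybe False (const True) (weigh m)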

Mobiles

Yes, that looks about right. I said it was easy! The relevance to programming languages is that there are some nice ways of deriving and relating the different solutions that involve continuations.

I believe Danvy has discussed this problem in a number of places. This note (in French) is dedicated to it: Sur un exemple de Patrick Greussay, http://www.brics.dk/RS/04/41/BRICS-RS-04-41.pdf.

The expected number of calls

The expected number of calls to step() satisfies the recurrence

   X = p*1 + (1-p)*(1 + 2X)

Solving for X, you get

   X = 1/(2p - 1)

This looks right, since X = 1 when p = 1, and it goes to infinity as p falls to 0.5.
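For anyone who wants to check this numerically, here is a small simulation harness (my own, in Haskell; the value of p and the trial count are arbitrary). For p = 0.8 it should print something close to 1/(2*0.8 - 1) ≈ 1.67:

import Control.Monad (replicateM_)
import Data.IORef (modifyIORef', newIORef, readIORef)
import System.Random (randomRIO)

main :: IO ()
main = do
  let p      = 0.8 :: Double   -- success probability of step
      trials = 100000          -- number of independent step_up runs
  calls <- newIORef (0 :: Int) -- total number of calls to step
  let step = do
        modifyIORef' calls (+ 1)
        r <- randomRIO (0, 1)
        return (r < p)
      stepUp = do
        ok <- step
        if ok then return () else stepUp >> stepUp
  replicateM_ trials stepUp
  total <- readIORef calls
  print (fromIntegral total / fromIntegral trials :: Double)

(This only checks the closed form by brute force; it isn't meant as a solution to the puzzle itself.)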

Transforming a recurrence equation to direct style?

The original problem was to compute this X, which I'll denote with f(1), because I use f(n) to denote the expected time to move n steps up. Now, my first attempt at computing f(1) was to specify
f(0) = 0
and
f(n) = 1 + p * f(n-1) + (1-p) * f(n+1)
for all n > 0, but these two equations don't have a unique solution. To my surprise, it helps to notice (as you did) that
f(2) = 2f(1).
But I still don't understand: what is missing from the equation corresponding to the CPS code that makes it harder (or impossible) to solve uniquely than the equation corresponding to the direct-style code?

Modeling error?

In your solution, I believe you are counting paths where the robot goes up two steps and then falls back one in order to get up a single step. But any implementation would have stopped already! So you get infinitely many solutions, because you are looking at all those possible futures.

My favorite...

This is clearly one of my favorites:

You have a simple linked list with more than 15 elements and should return from it the 15th element. And don't even dare thinking about iterating the list twice!

15th element from end?

Do you mean the 15th element from the end of the list? If so, is it ok to keep a running buffer of the last 15 elements seen so far?

15th last element

Eric Bodden wrote:
``You have a simple linked list with more than 15 elements and should
return from it the 15th element. And don't even dare thinking about
iterating the list twice!''

As others noted, you probably mean the 15th element from the end. The
problem doesn't seem to be too hard; it can be solved in constant
space. I guess the trick is to traverse the same list using two
pointers, one of which is suitably delayed.

-- Nth element of the list, from the end, assuming the list is longer
-- than n.
lastN n lst = loop lst (drop n lst)
 where loop (h0:_) [] = h0
       loop (_:t0) (_:t1) = loop t0 t1


test1 = lastN 15 (reverse [1..30])
test2 = lastN 15 [1..100]

This is just the generalization of Haskell's standard function `last'.

Simplification

Your function loop is equivalent to \xs ys -> last (zipWith const xs ys).

Not Quite.

You are off by one.
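To spell out the off-by-one (my example, not from the thread):

lastN :: Int -> [a] -> a
lastN n lst = loop lst (drop n lst)
  where loop (h0:_) []     = h0
        loop (_:t0) (_:t1) = loop t0 t1

viaZip :: Int -> [a] -> a
viaZip n xs = last (zipWith const xs (drop n xs))

-- lastN 1 [1,2,3]  == 3   -- the last element
-- viaZip 1 [1,2,3] == 2   -- one element too early
-- Using drop (n - 1) instead makes the zipWith version agree with lastN.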

One of my favourites...

You have a labeled tree, which (if my Haskell is not too rusty) is defined by:

data LTree a = Leaf String a | Branch String [LTree a]

That might correspond to a normal file system, ignoring . and .., where the leaves are files and the branches are directories. You are to produce an outline view of this tree, showing only those leaves which satisfy some predicate function and the branches necessary to reach those leaves. You should assume that the tree is wide and shallow.

I find this problem interesting, in part because it is practical (my original implementation was to produce a view of an Apache configuration file showing only directives which pertained to a given module), but mostly because the nature of the solution seems to vary considerably between languages. For example, the C solution (or at least a C solution) requires no heap allocation.

I was musing on the issue of language-specific solutions while watching Chris Rathman translate SICP into various languages, including Lua. Lua does not have a primitive datatype for linked lists; instead, it has a primitive mapping type (tables). In Lua, the 15th last element problem above would be trivial:

function nthlast(t, n) return t[#t+1-n] end

which rather avoids the elegance of the intended solution. The (well, my) Lua solution to the filtered tree problem uses a single temporary table and thereby avoids a certain amount of list reversing which seems to show up in solutions in other languages.

Transform laziness to stack allocation?

I'd be interested to see whether/how a Haskell solution like the following can be transformed into a C-like solution that performs only stack allocation:
module Prune where

data LTree a = Leaf String a | Branch String [LTree a]
    deriving (Eq, Show)

-- A simple solution
prune :: (a -> Bool) -> LTree a -> Maybe (LTree a)
prune pred tree@(Leaf _ a)
    = if pred a then Just tree else Nothing
prune pred (Branch label branches)
    = if null pruned then Nothing else Just (Branch label pruned)
    where pruned = [ branch | Just branch <- map (prune pred) branches ]

-- For testing
binary :: Int -> Int -> LTree Int
binary n k = if k > 0 then Branch bit [binary n' k', binary (succ n') k']
                      else Leaf bit n
    where bit = show (n `mod` 2)
          n' = n + n
          k' = pred k

Solution to pruned tree outline

This is indeed an interesting problem; it is also an excellent
illustration of `stack marks', or the stack-traversal facility,
described in the paper `Delimited dynamic binding' (see, in
particular, Section 6.2 of the paper and the `nub' example). The
OCaml code of the solution is available here:

http://pobox.com/~oleg/ftp/ML/pruned-tree-outline.ml

assuming the Dynvar library introduced in that paper, whose source
is available from that web site. The source code contains a few tests
and their output. Posting a lot of source code at LtU doesn't seem
suitable (or appropriate).

It is easy to see that the problem can indeed be solved without any
heap allocation, requiring only O(tree-depth) stack space -- provided
that the Dynvar library (actually, the underlying shift/reset
implementation) were a bit smarter, like the one in Chez Scheme.

Two (+1 (+1)) K Solutions

I've posted two K solutions to this clever little problem here:

Subtree where leaf x is f x

Solution 1 is what I would call "semi-recursive". Here's the idea. We represent a tree with a nested dictionary, which is one of K's recursive datatypes (the other being a general list). Then (recursive part) we compute a list of paths to the leaves. From here we proceed non-recursively to (i) extract the data at the leaves as a vector, (ii) apply the predicate to the vector (an array operation, which returns a boolean vector), and (iii) construct a new dictionary consisting of just those paths to leaves which satisfy the predicate.

Solution 2 is non-recursive, and uses a more economical representation of a tree as three equal-length vectors: T, where T[i]=j means that node i is a child of node j (T[0] = 0 = the root of the tree); D, where D[i] = the data at leaf i (if i is not a leaf, then D[i] is NaN); and N, where N[i] = the label of node i.

Solution 2 is presented as "open code": t, d, and n together represent the input tree, f is the predicate, and T, D, and N represent the output subtree. (Wrapping the code in a function is trivial, but I assume that readers who execute the solution will want to examine the intermediate values.)

Solution 1 is shorter, and probably easier to understand. (well ...) It was certainly easier to write, being quasi-idiomatic, and familiar to all seasoned k programmers. Probably no more than 10 minutes work, most of which went towards the nasty-looking (recursive) pretty-printer (and the comments).

Solution 2 contains a few subtleties, and required more time and testing to get right. But solution 2 scales quite well. I invite others to test their implementations on larger trees having varying width/depth. e.g. 500,000 nodes, 10,000 of which are leaves.

Note added. I've included a fully recursive solution for comparison:

st:{(./)._n,+{:[5=4:z;,/_f'[x,/:!z;y;z[]];y z;,(x;:;z);()]}[();y;x]}

st descends until it finds a leaf z. Where y is the predicate, if y z is true, st returns (x;:;z), where x is the path to z. If y z isn't true, st returns null. The result is amended to produce the desired subtree.

Note added. I've also included a non-recursive K4 solution, which is half the code of the K2 version.

And finally, a solution in the Q dialect of K4:

select from t where n in 
  raze({exec p from t where n in x}scan)each 
  exec n from t where f d

Further details in the paper linked above.
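For readers who don't read K, here is a rough Haskell transcription (mine, under the representation described above) of the flat three-vector encoding and the non-recursive idea behind solution 2: keep exactly the leaves that satisfy the predicate, plus all of their ancestors.

import Data.List (nub)

-- T, D and N from the description above, as parallel lists:
data FlatTree a = FlatTree
  { parents  :: [Int]      -- T: parents !! i == j means node i is a child of j; parents !! 0 == 0 (root)
  , leafData :: [Maybe a]  -- D: Just x at a leaf, Nothing at a branch
  , labels   :: [String]   -- N: the label of each node
  }

-- Indices of the nodes that survive the pruning.
keepNodes :: (a -> Bool) -> FlatTree a -> [Int]
keepNodes p t = closure hits
  where
    hits = [ i | (i, Just x) <- zip [0 ..] (leafData t), p x ]
    closure ns
      | ns' == ns = ns
      | otherwise = closure ns'
      where ns' = nub (ns ++ map (parents t !!) ns)

The output subtree is then just the three vectors restricted to keepNodes; the ancestor closure terminates because the root is its own parent.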

Josephus Flavius problem

I gather that this particular puzzle is older than dirt, though it's new to me. I first came across the Josephus Flavius problem in the CTM book. Actually, I like it not so much for the problem itself, but rather for the declarative solutions in Oz and Alice ML.

(I stumbled across the cut-the-knot website, which looks to be an interesting resource for puzzle pontification.)

My solution

Here's a cute little Haskell solution:

josephus init nth = last alive
   where
   alive = init ++ kill (length init) alive
   
   kill 0 line = []
   kill n line = take (nth-1) line ++ kill (n-1) (drop nth line)

Then you can solve the canonical problem instance by calculating, e.g.

josephus [1..40] 3

Haskell solution

Here is a link to my solution in Haskell. I could shorten it by making it less readable (and I might just do that as an annotation).

http://paste.lisp.org/display/31376

Another solution in Haskell

http://paste.lisp.org/display/31383

That looks like one pass to me.

For 'erl'

A quick answer, for the Erlang emulator. :)

"Step" gives 4/5 chance of success, "Step_up" returns a history of attempts. Try "Nontrivial_steps(Steps(100))".

http://paste.lisp.org/display/31419

java solution


void step_up() {
    if (!step()) {
        step_up();
        step_up();
    }
}

Of course...

...when the going gets tough, this'll blow the stack. (Cue "The Importance of TCO" from Anton...)

It's non-deterministic by

It's non-deterministic by nature of the problem, but if the first recursive call succeeds, it's just as likely the second one will too. So, your point was?

My point

As the success probability of step() goes down, the expected number of times we'll need to call it goes up (see the recurrence analyses above). In this implementation (as in others), step() is called once per call to step_up(), so the expected number of calls to step() is related to (though not identical to) the expected stack depth. (Another interesting exercise might be to precisely characterize this relationship.) In any case, if the success probability is low enough, this Java code will certainly result in a StackOverflowError.

The point of this was that the equivalent Scheme code (linked from one of the first comments above) is guaranteed not to suffer from this problem (since Scheme implementations are required to execute tail calls in constant space). However, now that I think about it more carefully, I realize that I was wrong about this. TCO doesn't matter in this case, as the first recursive call is not in tail position. So the Scheme version will also likely run out of space sooner than a version using a loop and explicit counter. Perhaps this is what you meant... I'm having a little trouble understanding your comment.

Big stacks are big good

TCO is a very useful tool. Another excellent and underappreciated tool is a very big stack, i.e. being able to shove a heap worth of stuff onto your stack when it suits you.

For example, here is the standard non-tail-recursive map function in Erlang's lists module:

map(F, [H|T]) ->
    [F(H)|map(F, T)];
map(F, []) when is_function(F, 1) -> [].

and here's mapping over a million-element list:

6> L = lists:seq(1, 1000000), ok.        
ok
7> lists:map(fun(X) -> X + 1 end, L), ok.
ok

No fuss no muss. But try it in your language!

No problem

My language Kogut handles large stacks just fine. They are silently resized on overflow, silently shrunk by GC, and they aren't scanned as a whole by minor GC.

What I meant

Perhaps this is what you meant... I'm having a little trouble understanding your comment.

Yes, that's what I meant. Even in a language without tail call optimization, if the first recursive call - which is not in tail position - succeeds, it's just as likely that the second recursive call - which is in tail position - will succeed too.

Converting self-tail calls to loops :)

void stepup()
{
    while (!step()) {
        stepup();
    }
}

Now 50% better!

Doesn't work

If it falls 1 step, it should go up 2 steps.

It's correct.

Notice the while loop. Yes, I know, it's not natural!

It's more than 50% better

This version consumes stack space proportional to the number of steps it needs to go up, as opposed to the number of steps it attempts. This is a substantial difference if the probability of success is only slightly greater than 1/2.

This solution seems unstable...

Unfortunately, sometimes the attempt fails and the robot clumsily falls one step instead.

Every time step() returns false, the number of successful calls to step() still needed goes up by one.

procedure step_up;
var
  level: integer;
begin
  level := -1;
  while level < 0 do
    if step then
      inc(level) else
        dec(level);
end;

or with standard tail-recursion:

procedure step_up_from(level: integer);
begin
  if level = 0 then exit;

  if step then
    step_up_from(level+1) else
      step_up_from(level-1);
end;

I would generally not assume that the robot will never fall twice in a row, since the problem does not say that it never will. There's no guarantees of the sort in real life.

Uh?

pkhuong's code is correct, even if the robot steps down twice in a row.

As for the stack consumed, I would say that it is proportional to the "maximum" number of failed steps outstanding.

My bad,

I didn't realize he used a while() instead of an if().

Java redux

void step_up() {
    while (!step()) {
        step_up();
    }
}

Yes, stack overflow is a problem, but then so is battery life, so forget about it.

A practical example

I'm rather partial to the problem of forward references as it has a very slick solution using laziness and circular programming. Usually, I use it as an example using a simple program that does the following:

The program takes lines of the form
define <var> ([*]<var>)+
e.g.
define y a *y
The intent is that y is defined as the literal 'a' followed by the contents of y, i.e. y = a a a a a a a ... in this example. A very simple version can be written in about 5 lines of typical Haskell code. A somewhat more involved version and some discussion are located at http://www.haskell.org/hawiki/TyingTheKnot. One way to make this more of a "puzzle" is the following: you can change the straightforward backward-references-only solution into the full solution with just a very minor change... if your language is lazy.
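For reference, here is roughly the "5 lines of typical Haskell" I have in mind (my sketch, parser omitted; Ref may point to a definition that only appears later, or to the definition being made, and laziness ties the knot):

import qualified Data.Map as M

data Tok = Lit String | Ref String

expand :: [(String, [Tok])] -> M.Map String [String]
expand defs = env
  where
    env = M.fromList [ (v, concatMap tok body) | (v, body) <- defs ]
    tok (Lit s) = [s]
    tok (Ref v) = env M.! v   -- forward (or self) references resolve lazily

-- e.g. take 5 (expand [("y", [Lit "a", Ref "y"])] M.! "y") == ["a","a","a","a","a"]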

Use loeb

I have a nice solution to this using the function 'loeb' I defined here. 'loeb' is a cycle fixing function that can be used with all kinds of containers. (The type of 'loeb' corresponds to that of Loeb's theorem in the modal logic GL). So here's a solution (minus the parser):


import Data.Maybe (fromJust)

loeb :: Functor a => a (a x -> x) -> a x
loeb x = fmap (\a -> a (loeb x)) x

data Dict a = Dict { unDict :: [(String,a)] } deriving Show

instance Functor Dict where
  fmap f (Dict ((a,b):abs)) = Dict ((a, f b) : unDict (fmap f (Dict abs)))
  fmap f (Dict [])          = Dict []

f +++ g = \x -> f x ++ g x

get a (Dict b) = fromJust $ lookup a b

test = loeb (Dict [("a",const "x"+++get "b"),("b",const "y"+++get "a")])

Note how the argument to 'loeb' isn't circular, so it's easy to produce from a parser. In this case I'm solving:


define a x *b
define b y *a

(The cool thing is that I'd been hunting for an application of 'loeb' and now I have one, a practical one even.)
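(For the record, if I trace the definitions correctly, take 6 (get "a" test) evaluates to "xyxyxy" -- a's expansion is the infinite alternation x y x y ...)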

Loeb in OCaml

Cool! Your post inspired me to see if it's possible to encode the same solution in OCaml. Indeed it is, provided you have a (lazy enough) implementation of lazy lists. In this case, Run is a naive implementation of catenable lists, with thunked constructors. Encoding Haskell type classes as ML Functors:

module type FUNCTOR = sig
  type 'a t
  val map : ('a -> 'b) -> ('a t -> 'b t)
end

module Loeb (F : FUNCTOR) = struct
  let rec loeb x = F.map (fun a -> a (lazy (loeb x))) x
end

module Dict = struct
  type 'a t = (string * 'a) list
  let map f (d:'a t) : 'b t =
    List.map (fun (a, b) -> a, f b) d
  let get x (d:'a t) = List.assoc x d
end

(* define var = lit (lit|var)+ *)
let parse d =
  let compile l env =
    List.fold_right (fun e r ->
        let tail () = r in
        match e with
        | `Lit c -> Run.cons c tail
        | `Var w -> Run.append (fun () -> Dict.get w (Lazy.force env)) tail)
      l Run.Nil
  in
  let module L = Loeb(Dict) in
  L.loeb (Dict.map compile d)

As you remarked, the compilation proceeds from an absolutely flat representation. Note that I had to control the depth of evaluation in Loeb, and force the promise in the application code, as required. The result:

# let test = parse [
  "a", [`Lit 'x'; `Var "b"];
  "b", [`Lit 'y'; `Var "a"];
]  ;;
val test : char Run.t Dict.t =
  [("a", Run.Cons ('x', <lazy>)); ("b", Run.Cons ('y', <lazy>))]
# Run.take 6 (Dict.get "a" test) ;;
- : char list = ['x'; 'y'; 'x'; 'y'; 'x'; 'y']

in scheme ... just for the record.


(define (climb N)
  (if (> N 0)
    (if (step)
      (climb (- N 1))
      (climb (+ N 1)))))
(define (stepup) (climb 1))

A lazy list solution ...

If we can assume an lcons that evaluates its car and cdr lazily, here is one that can work (in pseudo-but-nearly-valid-scheme) -

; A step function which fails 50% of the time.
(define (step) 
    (if (= 0 (rand 2)) 
        (begin (display "DOWN") #f) 
        (begin (display "UP") #t)))
 
; Gets the second element of a list s,
; forcing the value of the first element if
; the list happens to be lazy.       
(define (forced-next s) (first s) (first (rest s)))

; (steps) generates a sequence of
; successful upward steps. To climb N steps,
; just take N elements from (steps).
(define (steps)
    (lcons (if (step)
                #t
                (forced-next (rest (steps))))
           (steps)))

I don't think this is stack safe, but what the heck, laziness is cool :)

R5RS lcons

It is not terribly difficult to implement lazy-cons in R5RS, using syntax-rules and lambdas...

delay - force

I should be able to express it using delay and force, but ... I'm too lazy :)

A rather short solution

Interesting what you can stumble upon on the web. Two years later, here is my solution. In C:

void step_up() {
    while (!step())
        step_up();
}

This provides an alternative complexity analysis:

P(loop terminates) = p
P(loop terminates after n calls to step) = p(1-p)^(n-1)
E{n} = 1/p

Since each failed test results in a call to step_up, we expect 1/p - 1 recursions. So the recurrence relation is X = 1/p + (1/p - 1)X, which solves as X = 1 / (2p - 1) - exactly what kenhirsch got.

the smartaleck solution

Thanks for resurrecting the thread. I hadn't seen it and it's neat.

I'm not sure that you can really call for a solution "not using variables or numbers." By a diagonalization argument you can show that the problem can't be solved by any finite-state automaton -- you need at least a stack machine. You have to encode an arbitrary non-negative integer in mutable (or infinitely extensible) state, one way or another.

So, I would hope a student who gave a well presented diagonalization or similar argument could get nearly full credit, with extra credit if the student also gives the "but this is the answer you were looking for" solution.

-t

Two Solutions

Recursive:
stepUp() { while ( ! step() ) stepUp(); }

Iterative:
stepUp() { for ( int i = 1 ; i > 0 ; i -= step() ? 1 : -1 ); }

simple constant-space Cat solution


-1 [step [1 +] [1 -] if] whilenz

Haskell solution to the original puzzle

So, two years and a half after the article was posted, in Haskell:


import System.Random (randomRIO)

-- I implemented this function so I would have something to test it with.
step :: IO Bool
step = putStr "step " >> randomRIO (True, False) >>= putDirection
       where putDirection x = (if x then putStrLn "up" else putStrLn "down")
                           >> return x

-- This is the actual function.
stepUp :: IO ()
stepUp = step >>= (\x -> if x then return () else stepUp >> stepUp)

In a less practical spirit

Maybe step_up() could be given by this BNF:

S ::= A true
A ::= false A true | [empty]

using step() as the lexer.

How about

How about:

S := true | false S S

So that FFTTFTFTT is also possible.
Direct transliteration of the Java if-based code.
The while-based code would be:

S := true | { false S }

Sorry, that's wrong, should have been:

S := { false S } true

Mechanical equivalence

I tried to transform (1) into (2) and back:
(1) S ::= t | f S S
(2) S ::= { f S } t

From (1) to (2) we can "rewrite" the last S:
(3) S ::= t | f S t | f S f S S
(4) S ::= t | f S t | f S f S t | f S f S f S S
The last S will eventually rewrite to t, so we can express it as:
(2) S ::= { f S } t
This can actually be "done" mechanically, and I guess that's how tail recursion is transformed into a loop.

The (2) to (1) way, if we apply the definition of { }:
(5) S ::= A t
(6) A ::= eps | f S A
If we substitute the A in (5):
(7) S ::= t | f S A t
But the last "A t" in (7) is what S in (5) rewrites to, so we can, hmm, write it as:
(1) S ::= t | f S S
This way doesn't seem to be so "mechanizable". But then, who wants to transform a loop into a recursion anyway?
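If it helps to see the two grammars as code, here is a small Haskell rendering (mine): each function consumes exactly one successful climb from the history of step() results (True = t, False = f) and returns the unread rest of the stream.

-- (1)  S ::= t | f S S
parse1 :: [Bool] -> [Bool]
parse1 (True  : rest) = rest
parse1 (False : rest) = parse1 (parse1 rest)
parse1 []             = error "history ended mid-climb"

-- (2)  S ::= { f S } t
parse2 :: [Bool] -> [Bool]
parse2 = go
  where
    go (True  : rest) = rest
    go (False : rest) = go (parse2 rest)
    go []             = error "history ended mid-climb"

-- parse1 [False,False,True,True,False,True,True] == []   -- FFTTFTT is one complete climb
-- parse2 [False,False,True,True,False,True,True] == []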

my java solution in two minutes

void stepup() {
    while (!step()) {
        stepup();
    }
}

I see a lot of people came to the same solution :)