Escape from Zurg: An Exercise in Logic Programming

Escape from Zurg: An Exercise in Logic Programming by Martin Erwig. Journal of Functional Programming, Vol. 14, No. 3, pp. 253-261, 2004.

In this article we illustrate with an example that modern functional programming languages like Haskell can be used effectively for programming search problems, contrary to the widespread belief that Prolog is much better suited to such tasks.


The example that we want to consider is a homework problem that we gave in a graduate-level course on programming languages. The problem was one of several exercises to practice programming in Prolog. After we observed that many students had trouble manipulating term structures in Prolog (after already having learned to use data types in Haskell) and spent a lot of time debugging, the question arose whether it would be as difficult to develop a solution to this problem in Haskell. This programming exercise worked well, and we report the result in this paper.

(Haskell source code)

This is fun!

This is fun! (Which reminds me to remind everyone that LtU has a "fun" department)

I came to the same conclusion...

...when comparing solvers for the knight's tour written in each of Prolog and Haskell. Prolog code is, even with fairly liberal use of cut, shorter and easier to read, but it is harder to write code that (i) is free of search-space leaks and (ii) makes use of such optimisations as exploring promising goals first, without which the tour will not terminate on physical hardware. Hence if we joke that Perl is a write-only language, perhaps Prolog is read-only...

I did find my coding of knight accessibility attractive, though (note that the code uses the nonstandard plus/3 predicate).

Mercury is a version of

Mercury is a version of Prolog with a rather Haskell-like type system -- I've written an implementation of this exercise here:

Changing the search method

The nicest thing about the Haskell code is that it is very simple to change the search algorithm. Search in Prolog is inherently depth-first. When you try to implement BFS or another search algorithm, all the elegance and succinctness of Prolog is lost.
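To make that point concrete, here is a minimal list-based sketch (my illustration, not the paper's type-class formulation): the only difference between depth-first and breadth-first search is which end of the frontier newly generated states join.

```haskell
-- Generic search over a successor function: `step` produces the next
-- states, `done` recognises goal states.  The only difference between
-- the two strategies is where fresh states join the frontier.
dfs, bfs :: (a -> [a]) -> (a -> Bool) -> a -> [a]
dfs step done s0 = go [s0]
  where
    go []     = []
    go (x:xs)
      | done x    = x : go xs
      | otherwise = go (step x ++ xs)   -- new states in front: depth-first
bfs step done s0 = go [s0]
  where
    go []     = []
    go (x:xs)
      | done x    = x : go xs
      | otherwise = go (xs ++ step x)   -- new states at the back: breadth-first

main :: IO ()
main = do
  -- Tiny asymmetric tree: 1 -> {2,3}, 2 -> {4}; goals are states >= 3.
  let step 1 = [2, 3 :: Int]
      step 2 = [4]
      step _ = []
  print (dfs step (>= 3) 1)  -- the deep goal under 2 comes out first
  print (bfs step (>= 3) 1)  -- the shallow goal comes out first
```

Replacing the frontier list with a priority queue gives best-first search in the same handful of lines, which is exactly the flexibility the Prolog version lacks.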

Enter Oz

Through the Looking Glass? :)

Alice ML is a functional programming language based on Standard ML, extended with rich support for concurrent, distributed, and constraint programming.

Relevant to this thread is:

Constraints: solving combinatorial problems using constraint propagation and programmable search

Just in case, Derek meant this Oz, of course (hope he won't get mad at me for ever stealing links from him :) ).

Search and Prolog

Prolog is a very old language. As Derek and Andris mention, it would be better to compare the Haskell solution with Oz or Alice, both of which support programmable search and constraints. The Oz/Alice form of search is fundamentally different from Prolog's: whereas Prolog does generate and test (ugh!), Oz and Alice propagate and distribute. This is much more efficient, since it prunes the tree of possible solutions before every search step. Using the right constraint domain (such as finite domains instead of rational trees) gives you even more efficiency. That's why I have to chuckle, e.g., when I see people proudly present Sudoku solvers using naive search algorithms written in their favorite language. A real constraint language, like Oz or Alice, is much more powerful.
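The pruning half of that claim is easy to see even without a constraint system. As an illustration only (propagation proper also narrows the domains of all remaining variables, which this sketch does not do), compare generate-and-test with prune-as-you-go on n-queens, written here in Haskell to match the thread:

```haskell
import Data.List (permutations)

-- A board is a list qs where the queen in column c sits in row qs !! c;
-- permutations already guarantee distinct rows and columns.
diagOK :: [Int] -> Bool
diagOK qs = and [ abs (q1 - q2) /= c2 - c1
                | (c1, q1) <- zip [0..] qs
                , (c2, q2) <- zip [0..] qs
                , c1 < c2 ]

-- Generate and test: build every complete candidate, then check it.
naive :: Int -> [[Int]]
naive n = filter diagOK (permutations [1 .. n])

-- Prune as you go: extend a partial placement only with rows consistent
-- with it, so whole subtrees are cut before they are ever generated.
pruned :: Int -> [[Int]]
pruned n = go n
  where
    go 0 = [[]]
    go k = [ q : qs | qs <- go (k - 1), q <- [1 .. n], ok q qs ]
    -- q clashes with an earlier queen q' at column distance d iff they
    -- share a row or a diagonal.
    ok q qs = q `notElem` qs
           && and [ abs (q - q') /= d | (d, q') <- zip [1 ..] qs ]

main :: IO ()
main = print (length (naive 6), length (pruned 6), length (pruned 8))
```

Both enumerate the same solution set (4 boards for n = 6, 92 for n = 8), but the pruned version visits far fewer nodes; a real propagator would cut the tree further still.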

Hacking Prolog

Of course, it is possible to generate & prune in Prolog, using setof/3 and similar tricks. The resulting code is less unreadable than you might expect, though this is clearly a less ideal approach than that of Oz/Alice.

Also, most Prolog systems

Also, most Prolog systems ship with libraries for constraint programming which can propagate a lot. These libraries typically also provide several search strategies, such as first-fail and most-constrained, in the case of constraint logic programming over finite domains. Pruning is often quite easy in Prolog; see my solution below, which prunes much earlier than the one in the paper and is several times faster as a result.

Language vs. library

I should probably point out that Alice ML is not a "real constraint language" - constraint programming is completely factored out, in the form of a binding to the Gecode constraint programming library. This library is in C++, so apparently, a "real" constraint language is not necessarily needed to do constraint programming (nor does it buy you much).

Btw, I would bet that a half-decent hand-tailored algorithm for solving something as comparatively simple as Sudoku will easily outperform any heavyweight constraint programming solution. CP cannot play to its real strength on well-understood problems like this.


This library is in C++,

You are mixing levels of abstraction. The Gecode library effectively gives you a new language for expressing problems, a constraint language. I'm not talking about syntax. I'm talking about semantics. Alice is not a constraint language, but Alice+Gecode certainly is.

half-decent hand-tailored algorithm for solving something as comparatively simple as Sudoku will easily outperform any heavyweight constraint programming solution.

This is a pretty cryptic comment. I think I can clarify it a bit. Real constraint languages use a concept similar to computation spaces. Typical implementations of computation spaces (like in Oz) work better for big problems than for little problems, because they use cloning instead of trailing to implement search. Prolog uses trailing. Cloning outperforms trailing on real-world constraint problems, even though trailing is more efficient for toy problems (a result shown by Christian Schulte). More recent implementations support both cloning and trailing, so they get the best of both worlds and your comment is no longer true.


You are mixing levels of abstraction.

Rather, I feel confused by your flexible use of the word "language". In particular, it seems inconsistent with your earlier statement in the context of the "paradigm" chart, where you explicitly say that "it is not enough that libraries have been written in the language to support the paradigm". According to that chart, Alice is a constraint language. If so, why isn't C++? Or Java? I'm confused.

This is a pretty cryptic comment.

Looking at the less naive Sudoku algorithms flying around, they essentially implement specialised propagation. That is not hard, because the domains are very limited. If you engage the much more general CP machinery of something like Oz or Gecode, then you will certainly pay a significant operational overhead that is unlikely to pay off for this simple problem. And unless you happen to have exactly the right propagators at your disposal, even the propagation itself might be less powerful.

Plus ça change, plus c'est la même chose

Prolog's roots are old but Prolog adapts well. Several Prologs have optional search features that provide alternatives to the default depth-first search. Consider Ciao-Prolog, which becomes more compelling with each enhancement. Ciao-Prolog's optional computation rules currently include

  1. Breadth-first,
  2. Iterative deepening,
  3. Depth-first search with limited depth,
  4. Fuzzy LP,
  5. "And-fair" breadth-first,
  6. Andorra (in beta),
  7. Tabling (in beta).

See "[Ciao-Prolog's] Alternative Computation Rules" for further details.

The thing is Oz computation

The thing is, Oz computation spaces let you define new search strategies without having to fiddle with the implementation.

Defining distribution strategies

For realistic (big) constraint problems, you need to tweak the distribution strategy. It's not enough to pick from a small set of computation rules. You need to specify more complicated heuristics. For example, here is the documentation for the library support for distribution heuristics in Mozart. You can also program your own heuristics with computation spaces if these are not good enough. Ce n'est pas la même chose!

Improving the Prolog code

The author seems not to be too familiar with Prolog. My own solution is here:

By the way, search in Prolog is not "inherently" depth-first. Prolog's basic execution mechanism traverses the proof tree depth-first, but that doesn't prevent you from implementing any search strategy you like within Prolog.

Solving Escape from Zurg in Ruby


I've been a long-time lurker here, and this topic finally inspired me to post :-)

I've been deeply immersed in Ruby on Rails for the last couple of years and, although the language is far from perfect, it's not bad! I got to wondering if I could come up with a nice solution to the Zurg puzzle in Ruby. It turns out that I could - and the result is, I think (you may disagree :-) a good demonstration of the power and flexibility of the language.

Anyway, the solution is here:

I'd be very interested in your comments/feedback!

And a different solution in Ruby :)

I first discovered this puzzle through Paul B. above, so I thought I'd have a go with a different approach. The final version is below the UPDATE in the linked article.

I'm hoping to crank out a Lisp version this weekend; it will stretch my newbie neurons.

Brian Adkins

A direct solution in Haskell

If one is interested in solving only this particular riddle, the search need not be factored into a type class, and knowledge about the particular problem can be exploited to code a solution more directly. I have added a link to such a direct implementation at the bottom of this web page.
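For readers without the paper to hand: Buzz, Woody, Rex, and Hamm must cross a bridge within 60 minutes; at most two toys cross at a time carrying the one flashlight, a pair moves at the slower toy's pace, and their individual times are 5, 10, 20, and 25 minutes. A direct formulation in that spirit (my own sketch, not the implementation linked above) simply enumerates move sequences with list comprehensions, pruning on the remaining time budget:

```haskell
import Data.List (delete)

-- Crossing times from the puzzle: Buzz 5, Woody 10, Rex 20, Hamm 25.
toys :: [(String, Int)]
toys = [("Buzz", 5), ("Woody", 10), ("Rex", 20), ("Hamm", 25)]

time :: String -> Int
time t = maybe (error t) id (lookup t toys)

data Move = Over String String | Back String deriving (Eq, Show)

-- All unordered pairs of a list.
pairs :: [a] -> [(a, a)]
pairs []       = []
pairs (x : xs) = [ (x, y) | y <- xs ] ++ pairs xs

-- go left right t: every legal move sequence that brings all toys on the
-- left bank across within t minutes.  Two toys cross together at the
-- slower one's speed; unless the left bank is then empty, one toy must
-- carry the flashlight back.
go :: [String] -> [String] -> Int -> [[Move]]
go left right t =
  [ Over a b : rest
  | (a, b) <- pairs left
  , let t'     = t - max (time a) (time b)
        left'  = delete a (delete b left)
        right' = a : b : right
  , t' >= 0
  , rest <- if null left'
              then [[]]
              else [ Back c : more
                   | c <- right'
                   , let t'' = t' - time c
                   , t'' >= 0
                   , more <- go (c : left') (delete c right') t'' ]
  ]

solutions :: [[Move]]
solutions = go (map fst toys) [] 60

main :: IO ()
main = mapM_ print solutions
```

Running this yields the two known five-move solutions (Buzz and Woody cross first; one of them returns, Rex and Hamm cross, the other returns, and Buzz and Woody cross again).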

Solving Escape from Zurg in Scala

I've written an implementation here: