## Playing the Minesweeper with Constraints (MOZ 2004)

Peter linked to the MOZ 2004 papers earlier.

This presentation by Raphael Collet provides a nice example of constraint programming, a paradigm we don't discuss often enough.

### I used to hunt mines...

The presentation links back to his website for the program, http://www.info.ucl.ac.be/~raph/minesweeper/.

I really like the way his system handles symmetric subsets in the probability engine; that is the more common case.

His solution is interesting, and I have only one suggestion for his probability solver: in some cases the set of candidate mine locations can be partitioned into disjoint subsets that share no constraints, and solving each subset independently yields a substantial combinatorial win.
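A sketch of how that partitioning might look (my guess at the refinement, not Collet's code): treat each clue as a set of cells it constrains, and merge clues into groups whenever they share a cell. Each resulting group can then be enumerated on its own.

```python
# Split frontier cells into disjoint groups: no constraint spans two
# groups, so each group's mine configurations can be counted
# independently and the totals multiplied. Cell names are invented.

def components(constraints):
    """constraints: list of sets of cells touched by each clue.
    Returns disjoint cell groups with no constraint crossing groups."""
    groups = []
    for cells in constraints:
        cells = set(cells)
        # Pull in every existing group that overlaps this constraint.
        merged = [g for g in groups if g & cells]
        for g in merged:
            cells |= g
            groups.remove(g)
        groups.append(cells)
    return groups

# Two clues touching {A, B} and {B, C}, plus one independent clue on {D}:
groups = components([{"A", "B"}, {"B", "C"}, {"D"}])
print(sorted(map(sorted, groups)))  # [['A', 'B', 'C'], ['D']]
```

Enumerating each group separately turns one search over the whole frontier into a product of much smaller searches.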

If I ever update my old code that solves Minesweeper, I will add both of those refinements (and perhaps rewrite it in a better language). If anyone besides me and Raphael Collet is interested in the details of how such programs find solutions, there is a very short description of my old code available.

### Constraint programming

I always wondered why it is called constraint *programming*. It has very little to do with programming as we normally understand it, and I don't see much reason to discuss it here on lambda.

Is this flame bait? I guess I'll find out...

### I think of programming as an application of logic

That logic can take several different forms: functional, constraint, asynchronous, procedural, etc. (corresponding to the lambda calculus, predicate calculus, pi calculus, etc.). Theoretically, as I understand it, these forms do not exist in isolation, since each can be expressed mathematically in terms of the others.

The challenge is to come up with languages that express logic precisely while also serving as an effective form of communication.

### Constraint solving vs. functional programming

Roughly, the difference between fp and constraint logic programming is that they have different correspondences to logic.

In a functional language, a program corresponds to a proof of the proposition corresponding to its type. Evaluating a program corresponds to proof normalization (ie, eliminating uses of the modus ponens rule).

In a logic language, a program corresponds to a logical theory (a collection of logical constants and equations), and a query corresponds to a theorem to prove. Evaluating a query corresponds to proof search, in which the computer tries to deduce the proof.
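A tiny sketch of that correspondence, in Python rather than a real logic language (the clauses and predicate names are invented): the "theory" is a set of ground Horn clauses, and answering a query is a backward-chaining proof search.

```python
# A logic "program" as data: each head maps to a list of clause bodies.
# An empty body means the head is a fact. Proving a goal = finding a
# clause whose head matches and proving every subgoal in its body.

RULES = {
    "mortal(socrates)": [["human(socrates)"]],  # human(socrates) => mortal(socrates)
    "human(socrates)": [[]],                    # a fact
}

def prove(goal, rules):
    """Backward-chaining proof search over ground Horn clauses."""
    for body in rules.get(goal, []):
        if all(prove(sub, rules) for sub in body):
            return True
    return False

print(prove("mortal(socrates)", RULES))  # True: the search finds a proof
print(prove("mortal(plato)", RULES))     # False: no clause applies
```

Real logic languages add variables and unification on top of this, but the evaluation-as-proof-search shape is the same.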

One way of thinking about mostly-implicitly-typed FPLs like ML and Haskell is to imagine that they have a static semantics that searches, constraint-style, for an explicitly typed program, and then you switch to a dynamic semantics that evaluates (ie, normalizes) the reconstructed program.
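A minimal sketch of that static-semantics-as-search view, assuming a toy type representation (lowercase strings are type variables, tuples are arrow types; none of this is ML's actual machinery): inference emits equality constraints between types, and unification searches for a substitution that solves them.

```python
# Toy constraint solving for type reconstruction. Lowercase strings
# stand for type variables; ("->", t1, t2) is a function type.

def walk(t, subst):
    """Follow variable bindings until reaching an unbound term."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Solve the equality constraint t1 = t2, extending `subst`."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.islower():       # t1 is a variable
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.islower():       # t2 is a variable
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} and {t2}")

def resolve(t, subst):
    """Apply the substitution all the way down a type."""
    t = walk(t, subst)
    if isinstance(t, tuple):
        return tuple(resolve(x, subst) for x in t)
    return t

# Constraints one might collect at an application site:
# a = Int -> b, and b = Int.
s = unify("a", ("->", "Int", "b"), {})
s = unify("b", "Int", s)
print(resolve("a", s))  # ('->', 'Int', 'Int')
```

Once the search succeeds, the reconstructed (explicitly typed) program is handed to the dynamic semantics for ordinary evaluation.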

What I like about this is that one can easily imagine adopting more aggressive term reconstruction strategies for functional languages akin to the ones they use in logical frameworks and theorem provers. I think that's one of the ways that languages will become more high-level over time.

### Constraint programming is programming

I think of "programming" as specifying how to solve a given problem in terms of "machine instructions". In the mainstream paradigm, this means defining classes for objects and algorithms to perform some computations.

Constraint programming really is programming, but in a very different style. The main difference is that you often stay closer to the problem description. The real programming task is to find an appropriate description of the problem, in terms of variables and constraints, and appropriate heuristics for solving it. Finding the "appropriate" elements is genuinely hard: the difference in performance is sometimes orders of magnitude! You need a clever understanding of the constraint system you're programming with.
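To make the style concrete, here is a toy illustration (not the presented system; the board fragment is invented): the unknown cells are 0/1 variables, each clue is a sum constraint, and the solver enumerates the assignments that satisfy every constraint.

```python
# A Minesweeper-flavoured constraint model solved by brute force:
# variables = frontier cells (0 = safe, 1 = mine),
# constraints = "exactly n mines among these cells".

from itertools import product

cells = ["a", "b", "c"]  # unknown frontier cells
# Clues: exactly 1 mine among {a, b}; exactly 2 mines among {b, c}.
constraints = [({"a", "b"}, 1), ({"b", "c"}, 2)]

def solutions(cells, constraints):
    """Yield every assignment satisfying all sum constraints."""
    for bits in product([0, 1], repeat=len(cells)):
        assign = dict(zip(cells, bits))
        if all(sum(assign[v] for v in vs) == n for vs, n in constraints):
            yield assign

for s in solutions(cells, constraints):
    print(s)  # the only model: {'a': 0, 'b': 1, 'c': 1}
```

A real constraint system replaces the brute-force loop with propagation and search heuristics, which is exactly where the "appropriate description" pays off.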

The constraint programming "paradigm" is one of the most powerful for solving hard problems (typically NP-complete ones). You can really concentrate on the problem representation and the heuristics.