Constraint Programming

I have been reading a bit of late on Constraint Programming and thought I'd dump some of the links that have helped me along the way. One problem I find with learning Constraint Programming is that I easily get bogged down in the details of implementation and theory. In basic terms, CP consists of a three stage process: (1) Declare the domain (range) of the variables; (2) Declare the constraints on those variables; and (3) Search for solutions. The 1st and 2nd stages can be combined without too much loss, but there seems to be a general consensus that search should be kept separate as much as possible, since it usually is the most expensive and the least declarative. Most of the resources concentrate on the details about how to go about defining programs in these three stages, as well as giving hints about limiting the combinatorial explosion of the search.
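The three-stage shape is easy to sketch in a few lines of Python. This is a toy generate-and-test solver of my own invention (not the API of any of the systems mentioned below), and a real CP system would interleave propagation with the search instead of enumerating blindly, but it shows how the stages separate:

```python
from itertools import product

# Stage 1: declare the domain (range) of each variable.
domains = {"x": range(1, 10), "y": range(1, 10)}

# Stage 2: declare the constraints on those variables.
constraints = [
    lambda a: a["x"] + a["y"] == 10,  # the two values must sum to 10
    lambda a: a["x"] < a["y"],        # and x must lie strictly below y
]

# Stage 3: search for solutions. This naive version enumerates the whole
# Cartesian product; real CP systems prune via propagation as they search.
def solve(domains, constraints):
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            yield assignment

print(list(solve(domains, constraints)))
# -> [{'x': 1, 'y': 9}, {'x': 2, 'y': 8}, {'x': 3, 'y': 7}, {'x': 4, 'y': 6}]
```

Keeping stage 3 behind a single `solve` call is the separation the consensus asks for: the combinatorial explosion lives entirely in that one function, and the declarations stay declarative.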

The best tutorial I've found is Finite Domain Constraint Programming in Oz, which gives a pretty good practical introduction to the subject. Constraint Programming in Alice is a related work in progress that follows the same general outline. For those who like slide presentations, the lecture notes of Christian Schulte and Guido Tack are good resources. Both Christian and Guido are working on the implementation of Gecode, a set of C++ libraries that seeks to take the model of computation spaces, as first realized in Oz, and extend its reach into other programming languages (Christian uses Java, while Guido uses Alice). For a more detailed look, Christian Schulte's PhD thesis on Programming Constraint Services is an in-depth treatise on the use of computation spaces for CP.

One book I've been eyeing is Constraint Based Local Search which uses COMET as the PL. My only hesitation is that I don't quite grok the concept of Local Search - a search method that is supposed to be quite efficient but not guaranteed to find a solution. Anyone care to hit me with a clue-by-four on Local Search?

Constraint Programming Literature

There is a whole library of literature on constraint programming; only introductions are cited here, and these point onward to further references.

A friendly introduction to the field is given in the web tutorial by Bartak, which also links to further resources such as related journals, conferences, web links, etc. There are several textbooks on the subject, e.g., by Krzysztof Apt, Thom Frühwirth and Slim Abdennadher, and Rina Dechter. The ECLiPSe literature can also be recommended, e.g., ECLiPSe: An Introduction.

CP Resources

I've skimmed Apt at my local bookstore, and although I've heard it's an excellent text, I didn't feel it was aimed at my level (i.e. complete neophyte on the subject). Will look into some of the other ones you mention.

On the personal side, I noticed your message on the Oz mailing list wrt your thesis the other day. Others hereabouts might find your work with CP interesting:

Constraint programming has gained much interest in computer-aided composition, because it allows one to express explicit compositional knowledge in a declarative way. In this field, the user states a music theory and the computer generates music which complies with this theory. Music constraint programming is style-independent and is well suited for highly complex theories (e.g. a fully-fledged theory of harmony). A theory is formulated as a constraint satisfaction problem (CSP) by a set of rules (constraints) applied to a music representation in which some aspects are expressed by variables (unknowns). A generic music constraint system predefines concepts shared by many theories (e.g. duration, pitch, note, chord) and that way greatly simplifies the definition of musical CSPs.

Strasheela

Thank you for your interest. My thesis is not yet available, but meanwhile I uploaded the software which is the offspring of my PhD: Strasheela (together with some documentation, examples and pointers to publications).

Wikipedia

Wikipedia also has a very nice entry on local consistency, which I'm still tearing through at intervals (helpful diagrams, too). I'm a CSP newbie, but this seems a good overview of some of the more important concepts behind the search process.

Local consistency

While the Wikipedia page on local consistency is very well written and has lots of helpful diagrams, it isn't very connected to how CP works in practice.

The different notions of local consistency are defined over a formal framework (CSPs with extensional constraints), which does not really correspond to the way a CP system works. For some basic intuition why, think of the space-complexity of defining (n-ary) constraints as the set of allowed tuples (i.e., O(d^n) where d is the domain-size).

In practice, CP systems typically use so-called propagators for implementing constraints. For some constraint c that references variables x1,...,xn, the propagator p will remove values from the domains of the xi variables that it knows do not belong to a solution of the constraint. Note that this means that this process usually never considers more than one constraint at a time (in contrast to, for example, path consistency).

Formally, you can think of a propagator as a monotone, narrowing function over the Cartesian product of the domains of the variables. From Knaster-Tarski, we can see that the greatest mutual fixpoint of these functions is uniquely defined, and this is what propagation computes. For the set-up to work, the propagators must be checking (if the variables are all assigned, the propagator must be able to check if the constraint it implements holds or not).
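As a toy illustration of that fixpoint view (plain Python of my own, not Oz or Gecode code), here are two propagators over explicit set domains and a loop that runs them until no domain changes:

```python
# Each propagator narrows the domains for one constraint; it may only
# remove values that appear in no solution of that constraint.
# (Empty domains, i.e. failure, are not handled here, for brevity.)

def less_than(doms):          # propagator for x < y (bounds reasoning)
    x, y = doms["x"], doms["y"]
    return {"x": {v for v in x if v < max(y)},
            "y": {v for v in y if v > min(x)}}

def sum_is_ten(doms):         # propagator for x + y == 10
    x, y = doms["x"], doms["y"]
    return {"x": {v for v in x if 10 - v in y},
            "y": {v for v in y if 10 - v in x}}

def propagate(doms, propagators):
    # Run the propagators until nothing changes. Since each is a monotone
    # narrowing function, the result is the unique greatest mutual
    # fixpoint, independent of the order in which they fire.
    while True:
        new = dict(doms)
        for p in propagators:
            new.update(p(new))
        if new == doms:
            return doms
        doms = new

start = {"x": set(range(10)), "y": set(range(10))}
print(propagate(start, [less_than, sum_is_ten]))
```

With x and y initially 0..9, this narrows x to 1..8 and y to 2..9. Note that the fixpoint still contains values that belong to no solution of the conjunction (x = 8 forces y = 2, violating x < y): each propagator only sees its own constraint, which is why propagation alone is incomplete.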

To learn about CP, my suggestion is to first start playing with some system to get a feel for how it works. Starting with either Oz or Alice and following the corresponding tutorial is a good recommendation. Later, you can then look more into the theory behind CP.

Local Search and CP

CP is often called a global/systematic search technique. It implicitly explores the search space by enumerating partial assignments and it uses constraints to rule out partial assignments that cannot be extended into a solution.

LS works on complete assignments and moves from one complete assignment to a neighboring complete assignment in order to obtain a solution or a (near-)optimal solution (in optimization problems). The moves can be simple (change the value of one variable) or more involved (change a subset of related variables). Typically, LS uses constraints to select which moves are desirable. A key aspect of local search is the ability to use local information to guide the search. A useful analogy is the way Newton's method uses local information to find the zero of a continuous function; LS may be thought of as a discrete counterpart working on combinatorial objects.
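To make those moves concrete, here is a min-conflicts style local search for N-queens, a standard LS heuristic; the code and all names in it are my own sketch in Python, not COMET:

```python
import random

def conflicts(board, col):
    # Queens attacking the queen in `col` (one queen per column; board[c] = row).
    return sum(1 for c in range(len(board))
               if c != col and (board[c] == board[col]
                                or abs(board[c] - board[col]) == abs(c - col)))

def min_conflicts(n, max_steps=200, restarts=50, seed=0):
    rng = random.Random(seed)
    for _ in range(restarts):                         # multistart from scratch
        board = [rng.randrange(n) for _ in range(n)]  # a complete assignment
        for _ in range(max_steps):
            bad = [c for c in range(n) if conflicts(board, c) > 0]
            if not bad:
                return board                          # no violated constraint
            col = rng.choice(bad)                     # pick a violated column...
            # ...and move its queen to the least-conflicting row (the "move")
            board[col] = min(range(n),
                             key=lambda r: conflicts(board[:col] + [r] + board[col + 1:], col))
    return None    # local search gives no guarantee of finding a solution

print(min_conflicts(8))
```

The restart loop is the multistart idea: rather than trying to dig the search out of a poor local minimum, simply begin again from a fresh random assignment. The constraints never prune anything here; they only score moves.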

So CP can be seen as a tree exploration, while LS is a graph exploration. Note that there are ways to make this graph complete and to guarantee that a solution will be found if one exists (see references on complete local search). In practice, most LSs nowadays use restart or multistart procedures which perform many local searches from randomized starting solutions, enabling extensive exploration of the search space and avoiding getting stuck in poor local minima.

There is considerable evidence that CP and LS are complementary and sometimes synergetic, depending on the nature of the problems/instances.

What the Comet book does is show that the 3-stage process you mentioned for CP may also be used for LS. Moreover, the first 2 steps are really the same, while the nature of the search process is rather different (as I tried to explain) but may also be expressed at a high level of abstraction. You may want to check our OOPSLA'02 paper to find some concrete examples of this.

Always good to hear from the author.

Thanks for the explanations. Good to know that complete local search is possible - the Pascal programmer in me still gets disoriented by concepts like nondeterminism.

I understand the primary motivation of the book is local search, but my interest has more to do with the style of the book. The examples seem clear from a notational standpoint, and being a book that has a PL as a backdrop, it looks to be more pragmatically based than the other texts I skimmed through. One of the advantages of using a new PL to teach concepts is that one learns the concepts as a side effect of learning a new language.

Great references!

I have been trying to understand CP myself. I have Fruhwirth & Abdennadher's "Essentials of Constraint Programming" in front of me; I think it will be a good read once I have some basic understanding of (what/how/why) CP :)

Is there a printable version of "Constraint Programming in Alice" available?

CP in Alice still a work in progress.

Guido gave me a warning about some of the code in the tutorial that doesn't work in the currently released version of Alice. It's not hard to work around these things once you stumble upon them, as they mostly amount to syntactic sugar. The paper itself will probably undergo quite a few more changes along the line. Still, it does make for some good background and interesting examples. Being in a state of flux, I doubt there is currently a printable version.

Along the same lines, the differences between Gecode and Mozart might be of interest. PvR has said that there is a project to integrate Gecode into Oz, so eventually these differences will likely be rectified.

incompleteness of propagation?

Section 2.5 of "Finite Domain Constraint Programming in Oz" shows situations where "[c]onstraint propagation is not a complete solution method." Section 2.1 says "A constraint is a formula of predicate logic."

I thought predicate logic was, indeed, complete - that's why a language like SQL is easier to optimize than Turing-complete languages.

I don't know why I can't get this completeness stuff through my head, am I missing something fundamental?

Constraint programming in Oz is indeed complete

You only picked up part of the story ;-)

There are three important ideas working together when a CSP is solved in Oz: partial information, local deduction/propagation (which is by itself incomplete), and programmable search (distributing/branching plus exploration of the resulting search tree). Together this yields a complete search method.

In terms of the prototypical N-Queens example

You first declare (1) the Domain of the solutions - that is, the positions on the chess grid where the queens may be placed; and (2) the Constraint propagators - the limitation that no two queens may attack each other. Even knowing this, however, you still have not found a solution. To get past this hurdle, you must do (3) a Search to find the various solutions.

The advantage that CP provides is that the search will not be a complete combinatorial search of all the candidates like it would be in a naive relational solution. The information from the domains and constraint propagators can immediately rule out certain search paths prior to the search being performed. Since the types of problems that CP is geared towards are those that are thought of as NP-complete, anything that can reduce the combinatorial search is what one focuses on.
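The interplay can be sketched in self-contained Python (my own code and naming, not Oz syntax): the domains are candidate rows per column, the attack constraint prunes those domains as queens are placed, and the search is a plain depth-first exploration of whatever survives:

```python
def queens(n):
    # domains[c] = set of rows still available for the queen in column c.
    def search(placed, domains):
        col = len(placed)
        if col == n:
            return placed                    # every queen placed: a solution
        for row in sorted(domains[col]):     # (3) search: try each candidate
            new = {}
            for c in range(col + 1, n):      # (2) "propagate" the attack
                new[c] = {r for r in domains[c]        # constraint by pruning
                          if r != row and abs(r - row) != c - col}
                if not new[c]:               # a domain ran empty: dead branch
                    break
            else:
                sol = search(placed + [row], {**domains, **new})
                if sol is not None:
                    return sol
        return None
    # (1) declare the domains: any row, for every column.
    return search([], {c: set(range(n)) for c in range(n)})

print(queens(8))
```

Whole subtrees vanish whenever a pruned domain runs empty, which is exactly the saving over enumerating all n^n placements.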

In the case of local search as used in COMET, the algorithm and manner of search are quite different: it only looks at its immediate neighbors in the graph, using the constraints as hints about which way to step next. On the other hand, CP is a tree exploration that typically does a depth-first search for solutions, based on the hints you provide on how to divvy up the search process.

Don't know if it helps, but I did manage to make a small dent in Chapter 12 of CTM. Also, Alice doesn't have a way that I've found to do the Prolog Choice type operation, so I cheated and used CP to solve a couple of problems in Chapter 9.

Propagation is incomplete

Propagation alone (i.e., removing inconsistent values from the domains of variables) is indeed incomplete. While there are many complete propagators, these are complete for one constraint only. For the combination of several constraints, however, propagation is incomplete. This is because propagation will remove values based on one constraint at a time.
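A minimal illustration in Python (my own sketch): three variables over {0, 1} that must be pairwise different. A disequality propagator can only prune once the other variable is fixed, so taken one constraint at a time nothing is removed, although the conjunction obviously has no solution:

```python
def neq(a, b):
    # Propagator for a != b: it can only prune when one side is fixed.
    def p(doms):
        new = dict(doms)
        if len(doms[a]) == 1:
            new[b] = doms[b] - doms[a]
        if len(doms[b]) == 1:
            new[a] = doms[a] - doms[b]
        return new
    return p

doms = {"x": {0, 1}, "y": {0, 1}, "z": {0, 1}}
for p in [neq("x", "y"), neq("y", "z"), neq("x", "z")]:
    doms = p(doms)

print(doms)  # unchanged: {'x': {0, 1}, 'y': {0, 1}, 'z': {0, 1}}
```

Search supplies the missing piece: once it assigns, say, x = 0, the propagators fire in turn, force y = 1 and z = 0, and then wipe out z's domain entirely, detecting the failure.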

If propagation were complete (i.e., if it removed all values that do not participate in a solution), we would have a backtrack-free method of finding a solution to the problem: after each propagation round, assign one variable, and then re-propagate. Since we can use CP to solve NP-complete problems, propagation would then also be intractable.

Using the common set-up, we get a reasonably time-bounded deductive process, combined with an exponential search-part.

the most popular constraint language

What do you think is the most widespread language that uses a constraint model?

Multi-paradigm would also be allowable, but I'm wondering more about straight constraint-based.

On Edit: To clarify, is Prolog the most widespread constraint-based language? Or has it been overtaken by any others? If not, does this say anything about Constraint programming itself?

What are you implying?

Constraint Logic Programming, the combination of Constraint Programming and Logic Programming (Prolog), is recognized as a very good, if not the best, match. The underlying principle, logic, is the same for both paradigms.

There are also ways to integrate CP into other programming paradigms, often as libraries, but these are less natural integrations where the CP part is more contained. Nevertheless, that is very worthwhile, as CP is probably still the most convenient approach for what it does.