Lambda the Ultimate

Book and Course in Constraint Programming and Reasoning
started 3/28/2003; 1:11:22 PM - last post 4/2/2003; 1:18:25 AM
Ehud Lamm - Book and Course in Constraint Programming and Reasoning
3/28/2003; 1:11:22 PM (reads: 1460, responses: 4)
Book and Course in Constraint Programming and Reasoning
(via comp.lang.prolog)

Course site related to a forthcoming Springer book.

The site includes lecture slides, labs, and related links.


Posted to Logic/Declarative by Ehud Lamm on 3/28/03; 1:14:19 PM

Tim Sweeney - Re: Book and Course in Constraint Programming and Reasoning
3/31/2003; 6:37:50 PM (reads: 639, responses: 0)
These languages seem very attractive because they make it look like you can specify a problem very concisely, and the language will automatically work out the solution for you. Several comments:

- This doesn't really work in practice. "Generate and test" is hopelessly exponential for problems of interesting size.

- However, constraints can be very useful as a feature in a type system in a more traditional language. Being able to express things like "the type of natural numbers less than 31" and have the compiler perform range arithmetic as part of type checking is very useful. This approach is the first step towards implementing real mathematics in type systems. By "real mathematics", I mean mathematics where natural numbers are a genuine subtype of integers, which are a subtype of rationals, and where common operations on numbers of certain types are closed as you'd expect (i.e. integer*integer=integer), without mathematically unsound casts and coercions.

- I dislike the lack of structure in the example languages described; most of the research is being done in a Prolog-style "everything is global" setting. Because of this, there is a huge gap between these R&D languages and any future implementation that might appeal to mainstream programmers. It's very interesting to explore these concepts in structured languages that include things like dependent-typed records.

- It's even more interesting to apply CLP-style constructs to a language that unifies types, sets, and nondeterminism into a single framework, such as McAllester's "Ontic" language. Here, constraints just work automatically as part of the type system. Also, you get first-class proof:proposition :: value:type, even when the constraints are intractable. For example, you can express the type of Pythagorean triples as:

t typedef {x:nat,y:nat,z:nat=sqrt(x*x+y*y)};

And whether or not the language is able to tell you anything useful about the type t, such as whether it's inhabited or what one or more of its solutions are, you can always say:

x:t={3,4,5};

And the language accepts it. Verifying that a deterministic element satisfies a (probably non-deterministic) set of constraints is a much easier problem than trying to solve the constraints in general, so it's reasonable to support logic constraints in a language even when the general case of typechecking/solving them is incomplete or undecidable.

Dominic Fox - Re: Book and Course in Constraint Programming and Reasoning
4/1/2003; 1:03:49 AM (reads: 622, responses: 0)

This seems to suggest that you could "smear" type-checking across the compile-time / run-time boundary. In the case of

x:t={3,4,5};

the values supplied can be checked at compile-time. In the case of

x:t={p,q,r};

where p, q and r are supplied at run-time, the checking would have to take place at run-time. In the case of

x:t={3,4,r};

the constraint affecting r (r=5) could be generated at compile-time from the constraint specified in the typedef, and then the narrowed/simplified constraint could be enforced at run-time. In effect, you would be generating a sub-type (to be checked at run-time) through partial specialisation (at compile-time) of an existing type.
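
A hypothetical sketch of what I mean, written in C++ for want of anything better (the template parameters stand in for the values known at compile-time; all names here are my own invention):

#include <cassert>

// X and Y are fixed at compile-time, so the constraint from the typedef
// narrows to a residual check on the one remaining component: r*r == X*X + Y*Y.
template<int X, int Y>
bool satisfies_residual_constraint(int r)
{
    return r * r == X * X + Y * Y;
}

int main()
{
    int r = 5;  // imagine r only becomes known at run-time
    bool ok = satisfies_residual_constraint<3, 4>(r);
    assert(ok); // the narrowed constraint, enforced at run-time
    return 0;
}

The compiler stamps out a separate check for each pair of known values, which is the partial-specialisation part; only the residual test on r survives into the run-time environment.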

To what extent could you do this kind of thing with C++ templates? I ask because I've just this morning been browsing through Alexandrescu on compile-time assertions. In both cases you're pushing computation back out of the run-time environment and into the compiler, so that the questions of what the language will accept and what programs will execute correctly increasingly blend into each other.
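
For reference, the compile-time assertion idiom I have in mind is roughly this (a from-memory sketch, not Alexandrescu's exact code):

// Only the true case is defined, so a false compile-time condition
// instantiates an incomplete type and the program fails to compile.
template<bool> struct CompileTimeChecker;
template<>     struct CompileTimeChecker<true> {};

#define STATIC_CHECK(expr) { CompileTimeChecker<(expr) != 0> check; (void)check; }

template<int X, int Y, int Z>
void require_pythagorean_triple()
{
    STATIC_CHECK(X * X + Y * Y == Z * Z);      // rejected by the compiler if false
}

int main()
{
    require_pythagorean_triple<3, 4, 5>();     // compiles
    // require_pythagorean_triple<3, 4, 6>();  // would be a compile error
    return 0;
}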

Tim Sweeney - Re: Book and Course in Constraint Programming and Reasoning
4/1/2003; 2:51:33 PM (reads: 578, responses: 0)
Dominic,

In the style of code I was writing, there are three cases:

- Some constraints can be verified at compile time. These simplify down to determinate terms.

- Some constraints can only be verified at runtime, so the compiler just marks them nondeterministic (i.e. the type may be inhabited by 0, 1, or many terms), and lets the runtime sort them out.

- Some constraints can't be verified, due either to limitations of the language or to undecidability in general. These result in a compiler error only if you insist that they be solved outright. For example, you can express Fermat's Last Theorem as "flt typedef {x:nat>0,y:nat>0,z:nat>0,n:nat>2,x^n+y^n=z^n}". The compiler has no idea whether that statement is inhabited or not, but it's happy to accept the definition. And if you supplied a deterministic value x, the compiler or runtime can tell you whether x is a witness to flt.

I have a compiler that accepts these kinds of expressions and can verify constraints. For example, with the above definitions, you can say:

x:t={3,4,r:nat}

And the term reduces to {3,4,5}. This is just a simple case of filling in the blanks and forward-propagating values around terms. This isn't an everything-is-global implementation like Prolog; everything can be arbitrarily nested in records, closures, arrays, etc.
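
If it helps to see the fill-in-the-blanks step spelled out in a conventional language, here's a trivial C++ sketch of forward propagation for that one record (purely illustrative, nothing like the real implementation):

#include <cassert>

// Given the known fields x and y, propagate the constraint z*z == x*x + y*y
// forward to fill in z; report failure if no natural number satisfies it.
bool propagate_z(int x, int y, int& z)
{
    int sum = x * x + y * y;
    for (int candidate = 0; candidate * candidate <= sum; ++candidate)
        if (candidate * candidate == sum) { z = candidate; return true; }
    return false;  // the narrowed constraint has no solution in nat
}

int main()
{
    int z = 0;
    bool solved = propagate_z(3, 4, z);
    assert(solved && z == 5);  // {3,4,r:nat} reduces to {3,4,5}
    return 0;
}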

The "=" operator performs unification. In the case of deterministic terms (the only thing you can express in languages like C++), it works just like you'd expect, but given non-deterministic terms a and b, "a=b" denotes the set of values that a and b have in common.

You can't express this sort of thing in C++, because there's a syntactic distinction between templates (which are purely compile-time) and functions (which are purely run-time). So you can't, for example, write a simple function f(x) and a template t and then refer to t<f(x)>. You can't write a single "assert" that functions both at compile time and at runtime. You can't write a matrix(n,m) class which supports both type-checked fixed-size matrices and matrices whose sizes are unknown at compile time.

So every C++ class library tends to implement stuff twice: once for the purely runtime case (e.g. variable-size matrices) and again using templates for the purely compile-time case (e.g. fixed-size matrices).
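
A cut-down sketch of that duplication (illustrative C++ only, not from any particular library):

#include <cassert>
#include <vector>

// Compile-time-sized version: the dimensions live in the type, so a
// mismatched addition simply fails to type-check...
template<int Rows, int Cols>
struct FixedMatrix
{
    double data[Rows][Cols];
};

template<int Rows, int Cols>
FixedMatrix<Rows, Cols> add(const FixedMatrix<Rows, Cols>& a,
                            const FixedMatrix<Rows, Cols>& b)
{
    FixedMatrix<Rows, Cols> result;
    for (int i = 0; i < Rows; ++i)
        for (int j = 0; j < Cols; ++j)
            result.data[i][j] = a.data[i][j] + b.data[i][j];
    return result;
}

// ...while the run-time-sized version repeats essentially the same code,
// with the dimension check demoted to a run-time assertion.
struct DynMatrix
{
    int rows, cols;
    std::vector<double> data;
    DynMatrix(int r, int c) : rows(r), cols(c), data(r * c, 0.0) {}
};

DynMatrix add(const DynMatrix& a, const DynMatrix& b)
{
    assert(a.rows == b.rows && a.cols == b.cols);
    DynMatrix result(a.rows, a.cols);
    for (int i = 0; i < a.rows * a.cols; ++i)
        result.data[i] = a.data[i] + b.data[i];
    return result;
}

int main()
{
    FixedMatrix<2, 2> a = {}, b = {};
    FixedMatrix<2, 2> c = add(a, b);  // dimensions checked by the type system
    DynMatrix d(2, 2), e(2, 2);
    DynMatrix f = add(d, e);          // dimensions checked only at run time
    (void)c; (void)f;
    return 0;
}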

These limitations are understandable for a language that evolved from C, a "processor-independent assembly language" from the 1970s. But it's very unfortunate that modern efforts like the Java generics extensions and the C# generic types proposal still go that route. Dependent types provide a natural and far more general solution than templates, allowing for checking at compile time when possible, and at runtime otherwise.

-Tim

Dominic Fox - Re: Book and Course in Constraint Programming and Reasoning
4/2/2003; 1:18:25 AM (reads: 571, responses: 0)

The other thing this puts me in mind of is Design By Contract (tm) - constraints of this kind could be used to express invariants, for instance.

As far as I remember, DBC as practiced by Eiffel works in a sort of Aspect-Oriented way, with preconditions being verified at the start of a method call and postconditions and invariants being verified when it returns. But I think Eiffel also does some compile-time checking, so that verifiable infractions are caught early.
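
Something like the following hand-rolled C++ sketch is what I have in mind, with the checks written out as plain assertions (Eiffel generates this kind of scaffolding for you; the example class is made up):

#include <cassert>

// A hand-rolled Design-By-Contract sketch: the precondition is checked on
// entry, the postcondition and the class invariant on exit.
class Account
{
public:
    explicit Account(int opening) : balance_(opening) { assert(invariant()); }

    void withdraw(int amount)
    {
        assert(amount > 0 && amount <= balance_);  // precondition
        int old_balance = balance_;
        balance_ -= amount;
        assert(balance_ == old_balance - amount);  // postcondition
        assert(invariant());                       // invariant re-established
    }

private:
    bool invariant() const { return balance_ >= 0; }
    int balance_;
};

int main()
{
    Account a(100);
    a.withdraw(40);      // fine
    // a.withdraw(100);  // would trip the precondition assertion
    return 0;
}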

Thanks for the Ontic examples; together with my own recent forays into Haskell, they've considerably enlarged my view of what type systems can encompass.