logic/constraint/relational programming?

Lately I have been coming across a set of ideas which seem to be related:
Logic/constraint/relational programming, unification, backtracking, business rules, logic variables, Gentzen, sequent calculus, and so on.

Is there a comprehensive introduction to all of the above? I would like to know the meaning of the terms mentioned above and, if possible, how to implement them.

Krishnamurthi's book Programming Languages: Application and Interpretation mentions Prolog, but doesn't show how to implement those ideas in Scheme. The Reasoned Schemer is not easy to follow (due to the style of the book); it is not the kind of book one can read once a week after work. There are a few constraint logic books on Amazon, but I obviously don't even know enough to judge whether they are good introductions.

There is a lot of talk of how functional programs operate at a higher level of abstraction (compared to imperative programs). Don't these logic/constraint programs operate at an even higher level of abstraction? Lisp defenders always make sure newbies understand that Lisp is not just for AI. Is the same true of Prolog? I have read that logic programs are not very efficient; however, if they are related to relational databases (as claimed somewhere), can't an 'index' be added to a list of 'facts' so that a logic program runs as efficiently as any other program?

I'm just trying to get a sense of the field, its benefits, its pitfalls and reasons why it is not so popular. Thanks.


Several of the things you mentioned rely on continuations for their implementation. I have an introduction to advanced flow control using continuations that I am working on for IBM's DeveloperWorks. I can mail you an early copy privately if you email me at johnnyb@eskimo.com.

Introduction to logic/constraint/relational programming

Logic/constraint/relational programming, unification, logic variables

See chapters 2, 9, and 12 of CTM.

There is a lot of talk of how functional programs operate at a higher level of abstraction (compared to imperative programs)...don't these logic/constraint programs operate at an even higher abstraction?

Yes, this is definitely true for constraint programs. Google and Wikipedia can point you in the right direction for constraint programming.

I have read that logic programs are not very efficient

This is not true. It's a confusion that originates because Prolog has depth-first search built-in. This search should be seen as a primitive, not a solution. Modern Prologs (e.g., SICStus) are quite efficient.
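To make the "search as a primitive" point concrete, here is a minimal sketch in Python (an illustrative analogy, not Prolog or any particular miniKanren; names like `walk`, `unify`, and `solve` are invented for the sketch) of the two mechanisms at the heart of logic programming: unification over logic variables, and depth-first enumeration of the facts that match a goal.

```python
class Var:
    """A logic variable; its identity matters, not its value."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

def walk(term, subst):
    """Follow bindings in the substitution until we reach a
    non-variable or an unbound variable."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution if a and b unify, else None."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def solve(goal, facts, subst):
    """Depth-first search: try each fact in order, yielding every
    substitution under which the goal unifies with a fact."""
    for fact in facts:
        s = unify(goal, fact, subst)
        if s is not None:
            yield s

# Facts: parent(tom, bob) and parent(bob, ann).
facts = [("parent", "tom", "bob"), ("parent", "bob", "ann")]
X = Var("X")
# Query: parent(tom, X)?
solutions = [walk(X, s) for s in solve(("parent", "tom", X), facts, {})]
```

The point of the sketch is that `solve` is just one search strategy; a different enumeration order (breadth-first, interleaved, constraint-driven) could be swapped in without touching `unify` at all.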

Continuations and Computation Spaces

Since both continuations and CTM have been mentioned in this thread, I was wondering about the relationship between continuations and computation spaces. If I understand correctly, computation spaces operate on a different model and don't really use continuations to perform the search heuristics. In effect, spaces mean that all the computation is performed on a go-forward basis. A space can fail, but it doesn't revert to a previous spot or state within the program; when a space fails, the program just tries another space.

Edit Note: Since it's mentioned in the original post, I guess I should also mention that I've been working on a translation of The Reasoned Schemer into Oz. I can't say that I've gotten very far, as I'm stuck on trying to figure out how to convert the Listo operator into Oz. The Run operation in kanren roughly corresponds to Solve in Oz; the unify function (==) corresponds to Oz's dataflow variables; and the CondE sort of corresponds to the Choice operation. That's about all I've figured out up to this point.

Actually I think they are

Actually I think they are not that far apart. Both are based on duplicating some of the evaluator's internal state and then doing experimental evaluation on one of the copies. If that evaluation fails, you go back to another copy and try something else.

Cloning a space vs. creating a continuation

Continuations and computation spaces are very different beasts, but there is one aspect that they have in common: they can both capture a computation state and make it visible in the language.

A computation space is a box that contains constraints of two kinds: primitive constraints (variable bindings) and propagators (threads that implement operational versions of constraints). You can control what happens in the box, for example by injecting constraints, by waiting until the space becomes stable (the propagators cannot add any more information), or by cloning it (making a copy). See chapter 12 in CTM or Section 12 of the Oz Tutorial for more information.
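As a rough illustration of "propagators add information until the space is stable", here is a toy constraint-propagation loop in Python (not Oz; `less_than` and `propagate` are names invented for this sketch): finite domains shrink monotonically until no propagator can remove another value, which is the analogue of a space becoming stable.

```python
def less_than(x, y):
    """Propagator for x < y over a dict of finite domains: prune
    values of x that cannot be below any value of y, and values
    of y that cannot be above any value of x."""
    def prop(domains):
        dx, dy = domains[x], domains[y]
        new_dx = {v for v in dx if v < max(dy)}
        new_dy = {v for v in dy if v > min(dx)}
        if new_dx != dx or new_dy != dy:
            domains[x], domains[y] = new_dx, new_dy
            return True   # this propagator added information
        return False      # nothing more to contribute
    return prop

def propagate(domains, propagators):
    """Run propagators to fixpoint: the analogue of waiting until
    a computation space becomes stable."""
    changed = True
    while changed:
        changed = any(p(domains) for p in propagators)
    return domains

# X < Y < Z, each initially ranging over 1..9.
domains = {"X": set(range(1, 10)),
           "Y": set(range(1, 10)),
           "Z": set(range(1, 10))}
props = [less_than("X", "Y"), less_than("Y", "Z")]
propagate(domains, props)
# At the fixpoint: X in 1..7, Y in 2..8, Z in 3..9.
```

Note that the loop only ever removes values: like propagators in a space, it computes forward monotonically and never "undoes" anything; search (cloning and trying alternatives) is layered on top of this.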

A continuation, in the terminology of CTM, is a procedure that encapsulates a semantic stack. This semantic stack is frozen, i.e., it is not scheduled as a runnable thread. Invoking the continuation means to create a copy of this semantic stack and make it runnable.

Cloning a computation space is similar to creating a continuation. In both cases, a computation state is captured and can be used later in the computation. Be careful of the differences though. A computation space contains many (simple) threads, whereas a continuation captures the state of a single thread. How can a computation space capture the state of many threads? Because the capture is done when the space is stable, and then the threads are all suspended. Another difference is that a continuation can be invoked as many times as you want, but a cloned space can be used only once (if you want to use it again, you have to clone it again).
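The one-shot vs. multi-shot distinction can be mimicked with plain state copying in Python (a loose analogy only; Python has neither real computation spaces nor first-class continuations, and the `Search` class here is invented for the sketch): a clone is a single copy that is consumed when you run it, while a continuation behaves like a procedure that makes a fresh copy every time it is invoked.

```python
import copy

class Search:
    """A stand-in for a captured computation state."""
    def __init__(self, path):
        self.path = path
    def step(self, choice):
        self.path.append(choice)   # mutates: running consumes this state

# "Cloning a space": each copy is usable once.
original = Search(["root"])
clone = copy.deepcopy(original)
clone.step("left")                 # this clone is now consumed
clone2 = copy.deepcopy(original)   # exploring "right" needs a fresh clone
clone2.step("right")

# "A continuation": a procedure that copies the captured state at
# each invocation, so it can be invoked as many times as you like.
def continuation(state):
    def resume(choice):
        fresh = copy.deepcopy(state)   # the copy happens on invocation
        fresh.step(choice)
        return fresh
    return resume

k = continuation(original)
a = k("left")
b = k("right")   # the same continuation, invoked a second time
```

The analogy is deliberately crude, but it shows where the copying happens: at clone time for spaces, at invocation time for continuations, which is exactly why the former is one-shot and the latter multi-shot.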

I would recommend Russell and

I recommend Russell and Norvig's Artificial Intelligence: A Modern Approach, a book that features detailed introductions to most of the topics you are interested in within its first few chapters. It is light on theory, with an emphasis on application and implementation. You might have to consult other references where the book does not fill in, but it provides plenty of those too.

It is one of the most enjoyable scientific books I have ever read, managing the near-impossible feat of staying entertaining even in the duller areas of the topic.

logic programming and relational dbs

Addressing your point re: speed of logic programming languages: I find SWI-Prolog to be generally fast enough for my purposes. Predicates are indexed, and you do have some control over the indexing, similar to a relational DB. However, I frequently need things like query optimisation and database-style LIKE text searches. I have my own ad-hoc solution to the former, and in-memory sequence scans for the latter are usually fast enough; but it would be great if more logic programming systems supported this kind of thing, either as libraries or, preferably, built into the engine.
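For readers unfamiliar with first-argument indexing, the idea can be sketched in a few lines of Python (a toy model, not SWI-Prolog's actual implementation): group the facts in a hash table keyed by predicate and first argument, so a query with a bound first argument scans only one bucket instead of the whole fact list.

```python
from collections import defaultdict

# A small fact base: parent(tom, bob), parent(tom, liz), parent(bob, ann).
facts = [
    ("parent", "tom", "bob"),
    ("parent", "tom", "liz"),
    ("parent", "bob", "ann"),
]

# Build an index on (predicate, first argument), analogous to a
# single-column index in a relational database.
index = defaultdict(list)
for functor, *args in facts:
    index[(functor, args[0])].append(tuple(args))

# Query parent(tom, X): only the matching bucket is scanned,
# not the entire fact list.
children_of_tom = [args[1] for args in index[("parent", "tom")]]
```

This is why single-predicate lookups in a decent Prolog are comparable to indexed table lookups; as the next paragraph notes, the harder part is optimising multi-predicate joins.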

The major win of an RDBMS here isn't the indexing on a single table; it's the optimisation of multi-table joins (together with disk-based storage and all the other stuff, like transactions, which you may or may not care about).

On the other hand, I get frustrated with the lack of deductive power of relational databases. The two leading open-source DBMSs don't even support standard SQL recursive queries, let alone any relational programming language (and no, PL/SQL doesn't count, as it's imperative).

It is possible to tightly bind your prolog predicates to tables in a relational db then do some clever query optimisation and gain some benefits of both systems (useful for when part of your database is too big for main memory, and can be outsourced to a RDBMS). I have some code based on Draxler's work to do this. This hybrid solution still isn't ideal.

However, despite the apparent similarities between the two kinds of systems, there are big differences regarding things like ensuring termination of queries. Even so, the theoretical gap isn't that large, and it would be nice if there were more practical systems that extended either technology in the direction of the other. Some of the important research on deductive DBs in the 90s seems to be slowly making its way into SQL-based technology.

The current interest in semantic web databases may help kickstart this but I haven't been too impressed by existing systems.

Business Rules

I wouldn't say that the techniques aren't popular, but they're not so well known. Since releasing ObjectiveCLIPS, a framework that adds business rules/constraints to CoreData applications on Mac OS X, I've had one serious adopter (that I know of) who actually shipped shrinkwrap software, and they saw a huge reduction in code and complexity. Lots of folks have downloaded it, but I'm afraid they just aren't "getting it".

There are some resources at the Business Rules Community website that you might look into. People are doing it, but not really talking about it as much as one might hope.