## Re-thinking Prolog

A recent paper by Oleg Kiselyov and Yukiyoshi Kameyama of the University of Tsukuba discusses weaknesses of Prolog and ways to improve on it.

From the abstract:

> Quite many computations and models are mostly deterministic. Implementing them in Prolog with any acceptable performance requires the extensive use of problematic features such as cut. Purity is also compromised when interfacing with mainstream language libraries, which are deterministic and cannot run backwards. Divergence is the constant threat, forcing the Prolog programmers to forsake the declarative specification and program directly against the search strategy. All in all, Classical Prolog is the exquisite square peg in the world with mostly round holes.
>
> The strong points of Prolog can be brought into an ordinary functional programming language. Using OCaml as a representative, we implement lazy guessing as a library, with which we reproduce classical Prolog examples. Furthermore, we demonstrate parser combinators that use committed choice (maximal munch) and can still be run forwards and backwards. They cannot be written in Classical Prolog. Logic variables, unification, and its WAM compilation strategy naturally emerge as a "mere optimization" of the Herbrand universe enumeration.

The paper mentions the strength of the approach used by miniKanren (which embeds logic programming, with a fairer search strategy than standard Prolog's, into Scheme) and Hansei (which embeds probability-based nondeterminism into OCaml, using delimited continuations to allow direct-style expression of monadic code).

After motivating some choices by studying the prototypical example of running append backwards, they cover running parsers with the "maximal munch" rule backwards, something that cannot be (declaratively) expressed in Prolog.
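For readers who want the flavor of the append example without a Prolog system at hand, here is a minimal Python sketch (our illustration, not the paper's OCaml code): running append "backwards" means asking, for a given result list, which pairs of lists append to it.

```python
# "Backwards" append: given the result zs, enumerate every pair
# (xs, ys) with xs + ys == zs -- the answers Prolog's backtracking
# would produce for a query like append(X, Y, [1, 2, 3]).
def splits(zs):
    for i in range(len(zs) + 1):
        yield zs[:i], zs[i:]

for xs, ys in splits([1, 2, 3]):
    print(xs, ys)
```

A Prolog system enumerates these same four answers lazily on backtracking; the generator plays the role of the choice point.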

A very interesting paper on logic programming! It also thanks Tom Schrijvers of CHR fame at the end.

### Since I'm not an editor

Since I'm not a contributing editor, would anyone like to move this from the forum to the blog for discussion?

Done.

### bait-and-switch?

From a quick skim, it seems like they're showing a clean library-level embedding of search primitives. (Which is awesome, and goes beyond typical amb etc.).

However, the (very real) problem presented in the abstract is that Prolog-style code quickly devolves into implementing search strategies, and it seems that this still happens here. As an example of the direction I was expecting: we ended up building DSL libraries that use program synthesis for computer-assisted design of schedules...

### isn't any style going to have search strategies?

Non-declarative style means I'm writing it all up myself from scratch.

Declarative could be seen in some sense as taking that non-declarative engine and putting a DSL in front of it. If the engine does what you want then great! But otherwise of course it won't work well as-is and will have to be manhandled. No surprise?

What could be claimed to be a Gordian-knot end-run around this seems-to-little-me fact of life?

Or: are there accidental complexities in Prolog that somewhat go away with this approach? Or with other approaches?

### generalization and automation

The paper seems to be, in part, about FP embeddings, which is a generalization. Approaches like supercompilation and SMT solvers get at optimized search for generalized domains like that. I agree automation here isn't easy; the synthesis community is tackling it, but experience so far suggests that hints and DSLs like sketching go a long way.

Indeed, from the synthesis mentality, most languages are equivalent (they compile language-of-the-week, including FPs, to constraints), so embeddings are almost an implementation/usability choice. The linguistic challenges are on efficient translations or primitives/DSLs for guidance (e.g., sketching).

### Synthesis mentality?

Compiling the language of the week to constraints sounds intriguing - any reference to somewhere I can read more about that?

### Rosette

http://homes.cs.washington.edu/~emina/

This is more about DSLs, but you can find surrounding literature from here working with C, SQL, Java, etc.

Thanks!

### search strategies

Implementing search strategies or tactics can be very nice to the extent they're easily composed, reused, and abstracted. I wonder if the approach presented here would be more amenable to such treatment.

### Previous Work?

Very interesting, maybe we can get useful language integrations going.

I remember also being disappointed with Prolog after I discovered that "normal" programming required odd tricks. There seems to be some "conservation of weirdness" law going on in programming.

Anyway, I was somewhat disappointed that the authors didn't consider previous work such as "Building a Backtracking Facility in Smalltalk Without Kernel Support". It would have been very interesting to see how the different base languages affect the result.

### Model checking with miniKanren?

From the brochure: "Applications of the system range from expert systems to polymorphic type inference and overloading resolution, to model checking and theorem proving."

Does anyone know of an application of miniKanren to model checking (or program verification generally)?

### It's normal, in a Smalltalk

It's normal, in a Smalltalk environment, for the call stack to be exposed as a linked list, and to be fully mutable.

LaLonde & Van Gulik basically (implicitly) used a direct implementation of delimited continuations, where #capture corresponds (more or less) to reset, and #succeed: and #fail correspond to (more or less) shift.

Which shouldn't be surprising if you've read Danvy & Filinski's Abstracting Control.

(It's very easy to formalise stack-hacking in Smalltalk by explicitly using delimited continuations (created just by storing chunks of the call stack).)
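The correspondence can be made concrete in any language with closures. Below is a small Python sketch (our own encoding; amb and the continuation names are illustrative, not LaLonde & Van Gulik's protocol or Danvy & Filinski's operators) of backtracking with explicit success and failure continuations:

```python
# Backtracking via explicit success/failure continuations, loosely in
# the spirit of the #capture / #succeed: / #fail protocol above.
# (Illustrative sketch only; the names are ours.)

def amb(choices, sk, fk):
    """Try each choice: sk(choice, retry) gets a retry thunk that
    backtracks to the next choice; fk() is called when exhausted."""
    def try_from(i):
        if i == len(choices):
            return fk()
        return sk(choices[i], lambda: try_from(i + 1))
    return try_from(0)

def solve():
    """Find all (x, y) with x in {1,2}, y in {3,4,5}, x + y == 5."""
    results = []
    def got_x(x, fail_x):
        def got_y(y, fail_y):
            if x + y == 5:
                results.append((x, y))
            return fail_y()          # keep backtracking for more answers
        return amb([3, 4, 5], got_y, fail_x)
    amb([1, 2], got_x, lambda: None)
    return results

print(solve())  # -> [(1, 4), (2, 3)]
```

The failure continuation is exactly the "rest of the choices" that a captured chunk of Smalltalk call stack would represent.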

[Edit: I'd meant this as a reply to marcel's comment. One day I'll figure out how to post properly!]

### forward chaining

I haven't written much Prolog code, but when I have, it feels a lot like writing search code in a functional programming language. Thus, I believe the abstract! Prolog seems better as a spirit than as a full-fledged programming language of its own.

To contrast, I have often found that forward chaining, as in Datalog, lets you write code that really is declarative. You have to accept the limitation that you are fully evaluating all your relations (optimizations aside), but in exchange you don't have to spend time coaxing the system into computing exactly the right subset of your relations. After all, if you're not computing a subset at all, you certainly don't have to specify which one.
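To make the contrast concrete, here is a naive bottom-up (forward-chaining) evaluator in Python for the classic transitive-closure program -- a toy sketch, not a real Datalog engine. It derives all consequences of the base facts by iterating to a fixpoint, with no query-directed search at all:

```python
# Datalog program:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).
# Forward chaining: keep applying the rules until no new facts appear.
def transitive_closure(edges):
    path = set(edges)                  # first rule, applied once
    changed = True
    while changed:                     # iterate to a fixpoint
        changed = False
        for (x, y) in edges:
            for (y2, z) in list(path):
                if y == y2 and (x, z) not in path:
                    path.add((x, z))   # second rule fires
                    changed = True
    return path

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# -> [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

All of path is computed whether or not any particular query needs it -- exactly the trade-off described above.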

### Forward chaining in Datalog?

In what way is Datalog forward-chaining?

### Datalog programs can be

Datalog programs can be solved via forward chaining, though they don't require it.

### Bottom up?

Do you mean bottom-up evaluation (e.g. with magic sets)? I thought that was distinct from forward chaining (as in Rete), where you derive all consequences of facts as they are added to the KB, rather than searching (forwards or backwards) for solutions to a particular query.

### Clarity

> They cannot be written in Classical Prolog.

This is an extraordinary claim to make of a Turing-complete language, and they fail to substantiate it. I presume they mean, rather, that these parsers cannot be expressed naturally in Prolog. But if it is OK to use a library to make this nicer in OCaml, why is that not also OK in Prolog?

### Prolog already handles what they want

> But if it is OK to use a library to make this nicer in OCaml, why is that not also OK in Prolog?

Indeed. And you don't even need a library to make it nicer. Unfortunately the examples in the paper are doing Prolog an injustice to the point that they are mostly strawman attacks.

In the first example, the authors are simply comparing apples to oranges. It is true that unrestricted search in Prolog will be unfair in enumerating solutions (page 3, right column), but they are comparing this to restricted search using reify_part with a certain limit to the depth of the search (page 6, bottom left). Obviously, restricting the search space with some domain-specific magic number is perfectly doable in Prolog as well:

    ?- between(0, 4, N), length(R, N), append([t,t,t], X, R), boollist(X), boollist(R).
    N = 3, R = [t, t, t], X = [] ;
    N = 4, R = [t, t, t, t], X = [t] ;
    N = 4, R = [t, t, t, f], X = [f].


(I would even argue that this is better because it uses a magic number to express a property of all desired solutions, which the programmer can observe, not of the search procedure, which is hidden.)

The second example doesn't actually have to have anything to do with search! The authors' first formulation of the many0 predicate is too general, and they specialize it in a way that is not a good choice in this case.

As long as you are working in a reasonably pure subset of Prolog, if your predicate is too general, this is because there is some clause that is too general. You can fix this clause by adding more goals in its body (making it less general) without touching any other clauses. You need not, as the authors do, throw the entire definition away and rewrite it with different control flow; all you need to do is add some conditions.

In this case, the unwanted generality comes from the fact that empty sequences are matched unconditionally, even if there might be a next element matching the goal. So we only need to change the clause that talks about empty matches to express more precisely what we actually mean. This is easy to do using the \+ negation operator:

    many1(P, S, S) :-
        \+ call(P, S, _).  % new goal in this pre-existing clause, rest unchanged
    many1(P, S, Rest) :-
        call(P, S, Srem),
        many1(P, Srem, Rest).


This more specialized variant of many0 can be used in both of the ways the authors consider:

    ?- many1(charA, [a, a, b], R).
    R = [b] ;
    false.  % no more solutions

    ?- many1(charA, S, []).
    S = [] ;
    S = [a] ;
    S = [a, a] ;
    S = [a, a, a] ;
    S = [a, a, a, a] ; ...


Negation in Prolog has its own problems, but often it's not as bad as adding cuts. It's unfortunate that the paper doesn't even mention negation except at one point in the Conclusions where it's simply lumped in with committed choice.

The paper's general point is true: you must often be aware of Prolog's search strategy and must be able to work with it. But it's not clear that any formalism for declarative programming over truly infinite Herbrand universes can avoid this. Datalog and Answer Set Programming have cleaner semantics than Prolog, but at the price of giving up infinite universes.

### Good illustration of the main points

Sorry about the late reply. Your comments stress the main points of the paper.

Gergo Barany wrote:

> In the first example, the authors are simply comparing apples to oranges. It is true that unrestricted search in Prolog will be unfair in enumerating solutions (page 3, right column), but they are comparing this to restricted search using reify_part with a certain limit to the depth of the search (page 6, bottom left). Obviously, restricting the search space with some domain-specific magic number is perfectly doable in Prolog as well:
>
>     ?- between(0, 4, N), length(R, N), append([t,t,t], X, R), ...
>     N = 3, R = [t, t, t], X = [] ;
>     N = 4, R = [t, t, t, t], X = [t] ;
>     N = 4, R = [t, t, t, f], X = [f].
>
> (I would even argue that this is better because it uses a magic number to express a property of all desired solutions, which the programmer can observe, not of the search procedure, which is hidden.)
Your argument in the last paragraph is a good one, especially in Prolog. In fact, I use a very similar example -- generating all lists of a given size -- in the talk. The example gives me a chance to introduce sharing and to discuss what in Hansei corresponds to a logic variable.

The argument in the paper however was different. I should stress that the search procedure is not at all hidden in Hansei. Just the opposite. The search is exposed; moreover, it is programmable, in Hansei itself. In your example, you add the 'between' and 'length' goals to the original goal -- you change the model. The point in the paper is that Hansei lets you use different search procedures on the very same model without changing it. One search procedure may restrict the space with some domain-specific magic number. Another one, applied to the same model, produces first N answers, if there are that many. If the search space is very large, one is better off with approximate search procedures relying on probabilistic sampling. Hansei has an importance sampling procedure that seems to work well in practice. One is free to write one's own search procedures.

A search procedure may also be non-deterministic. On one hand, Hansei separates models from search procedures. On the other, since search procedures are just regular OCaml code, they may themselves use Hansei primitives. Therefore a search procedure may also be a model. This nesting of inference was one of the main design goals of Hansei.
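The separation Oleg describes can be illustrated with a toy encoding (ours, not Hansei's API): a model is a thunk that either returns an answer or offers alternative sub-thunks, and each search procedure is an ordinary function over such trees, so the very same model can be explored in different ways.

```python
# A toy "model vs. search" separation (our own encoding, NOT Hansei's
# API): a model is a thunk returning either ("answer", value) or
# ("choice", [sub-thunks]); search procedures are plain functions.

def flip_list():
    """Model: all lists over {'t', 'f'}, as an infinite choice tree."""
    def go(acc):
        return ("choice", [lambda: ("answer", acc),
                           lambda: go(acc + ["t"]),
                           lambda: go(acc + ["f"])])
    return go([])

def dfs(model, depth):
    """One search procedure: depth-limited depth-first exploration."""
    tag, payload = model()
    if tag == "answer":
        return [payload]
    if depth == 0:
        return []                      # search cut off; model untouched
    return [a for t in payload for a in dfs(t, depth - 1)]

def first_n(model, n):
    """Another procedure, same model: breadth-first, first n answers."""
    queue, out = [model], []
    while queue and len(out) < n:
        tag, payload = queue.pop(0)()
        if tag == "answer":
            out.append(payload)
        else:
            queue.extend(payload)
    return out

print(dfs(flip_list, 2))       # answers reachable within depth 2
print(first_n(flip_list, 4))   # first four answers, breadth-first
```

The magic number (the depth limit or the answer count) lives in the search procedure, not in the model, which is the distinction being drawn above.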

Gergo Barany wrote:

>     many1(P, S, S) :-
>         \+ call(P, S, _).  % new goal in this pre-existing clause, rest unchanged
>     many1(P, S, Rest) :-
>         call(P, S, Srem),
>         many1(P, Srem, Rest).
>
> This more specialized variant of many0 can be used in both of the ways the authors consider:
>
>     ?- many1(charA, [a, a, b], R).
>     R = [b] ;
>     false.  % no more solutions
>
>     ?- many1(charA, S, []).
>     S = [] ;
>     S = [a] ;
>     S = [a, a] ;
>     S = [a, a, a] ;
>     S = [a, a, a, a] ; ...
>
> Negation in Prolog has its own problems, but often it's not as bad as adding cuts. It's unfortunate that the paper doesn't even mention negation except at one point in the Conclusions where it's simply lumped in with committed choice.
Your code doesn't affect the point, and it does not actually solve the problem. Yes, many1(charA, S, []) converges. But the closely related many1(charA, S, R), R = [] diverges. By the way, the latter corresponds to what Hansei actually does: it first parses and then checks whether the remainder of the stream is empty (or could be empty). Furthermore, the parser given to many can be truly arbitrary. Consider:
    manyplus(P, S, R) :- call(P, S, R1), many1(P, R1, R).

Then many1(manyplus(charA), S, []) diverges after producing the first solution, even with the last argument set to []. Such behavior is not at all surprising, given the relationship between the soft cut and negation (explained in SWI-Prolog's online manual itself). I agree negation and committed choice are distinct, and the cited Lee Naish paper does a good job of explaining it. The distinction was not relevant for the point at hand.

Elsewhere, Pseudonym wrote:

> It's about re-thinking Prolog, so the first thing I did was look at the citations for a mention of Mozart/Oz, Goedel, Mercury or, indeed, any previous rethinking of Prolog.

The paper stresses Classical Prolog, since it is Classical Prolog that captured the imagination, and still does. It is the ideal that remains ideal. There are many, many follow-ups, including the ones you cite, as well as XSB, lambda-Prolog, various Datalog systems and various ASP systems. Even Twelf is in a sense Prolog's successor. Perhaps they are more successful, but not that ideal.

By the way, it is considerably more complex to emulate even simple Prolog examples using delimited control. One has to be mindful of mutation elsewhere in the program: delimited control captures a part of the stack, but not a part of the heap. Furthermore, for lazy generators, mutation is essential, but not the usual, heap-affecting, non-backtrackable mutation.
By Oleg at Wed, 2014-09-24 07:53
 
### Multiparadigm

The paper reminds me a lot of Mozart/Oz, which also realizes the goals set out by the authors: separating the model specification from the search and non-determinism, and bringing them into a conventional functional-programming language. Except, maybe, that Mozart/Oz is not a conventional language.

By Nanitous at Sun, 2014-09-14 14:11

### I noticed that!

This struck me about the paper, too. It's about re-thinking Prolog, so the first thing I did was look at the citations for a mention of Mozart/Oz, Goedel, Mercury or, indeed, any previous rethinking of Prolog. Nothing.

By Pseudonym at Mon, 2014-09-15 08:06
 I'd always thought tags 

I'd always thought tags ended at the end of a comment. It seems not. I'm wondering whether an end-code tag  in a later message will do the trick.

[Apparently, yes.]