Logic Programming with Failure as an Exception

I have been struggling with negation in logic programming for a while. There is a fundamental problem with negation because failure to find a proof is not the same thing as a disproof. This makes the common 'negation as failure' approach unsound.

I have looked at many alternatives, like negation as inconsistency and three- or four-valued logics, and whilst some of these have merit, they make both the language and the implementation much more complicated. This makes logic programming lose some of its elegance.

I considered for a while that negation was unnecessary, but that results in redundant code for simple examples like 'member' and 'not member'.

I think the fundamental problem is the confusion between the 'proof' and a boolean.

:- x.

is misleading because we have proved 'x', but not really that 'x is true'. If we want to encode that x is true, we should instead write:

(x, true).
:- (x, X).
X = true, proved.

In this way we can write member, and negation of a boolean, in simple Horn clause logic, with only the addition of a disequality constraint, so that the order of clause execution is not significant:

not(true, false).
not(false, true).

member(X, cons(X, Tail), true).
member(X, nil, false).
member(X, cons(Y, Tail), Result) :-
    dif(X, Y),
    member(X, Tail, Result).

not_member(X, List, Result) :-
    member(X, List, IsMember),
    not(IsMember, Result).
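For illustration, here is how queries against these clauses might behave (a sketch, assuming dif/2 as in SWI-Prolog and the cons/nil list encoding above):

```prolog
% member/3 returns its result in the final 'out'-mode argument:
% ?- member(b, cons(a, cons(b, nil)), R).
% R = true.
% ?- member(c, cons(a, cons(b, nil)), R).
% R = false.

% not_member/3 negates the returned boolean, not the proof:
% ?- not_member(c, cons(a, cons(b, nil)), R).
% R = true.
```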

This style is of course not as neat or easy to read as the default way that 'not' is used. The problem appears to be the confusion between the program as a proof and the return value. That something is proved or not proved is not the same thing as a function return value at all. So what if we instead treat 'failure' as an exception? We would then return a value from a Prolog clause like a function, and could instead write:

not(true) -> false.
not(false) -> true.

member(X, cons(X, Tail)) -> true.
member(X, nil) -> false.
member(X, cons(Y, Tail)) -> dif(X, Y), member(X, Tail).

not_member(X, List) -> not(member(X, List)).

Hopefully it is obvious what is going on here. I have adopted a different symbol ('->' instead of ':-') to reinforce that we are returning a value, not the success/failure of the program. This keeps the language implementation simple, and avoids the whole negation-of-proofs problem. I would argue that disproofs are impossible; you can only prove something false, hence the idea of a negation of a proof is meaningless. With failure as an exception we keep the nice syntax of negation in logic programs, but also gain the ability to return more than true/false, moving the logic language a little bit towards functional programming.


What do you think of this approach?

I haven't been able to find anything like this approach looking for 'failure as an exception', has this been looked at under a different name, or is there any literature already out there about a similar concept?

Edit: To clarify my use of the term 'failure as an exception': unlike Prolog, where a clause can evaluate true or false, failure is not returned at all; however, it does still trigger backtracking. The change is mostly syntactic sugar for Horn clause logic with an 'out'-mode final argument.

Edit2: changed "/=" to "dif" to match the usage in Prolog.


Icon works the other way

That is, failure as negation. The backtracking system generates a stream of results, as in Prolog, but backtracking is confined to a single expression. Outside expressions, an empty stream is treated as false, whereas a non-empty stream is true, and there are no boolean objects. For example, < generates a single result (its right operand) if it is true, and no result (failure) if it is false.


Obviously soundness is not a property everyone is interested in, but that seems like it is still unsound. If the expression fails it means a proof was not found, and therefore negating this failure means equating proved-false with no-proof-found, when they are not the same thing.

Also running meta-interpreters is unsound, as you can define negation on failure by matching the empty list in the result from an expression.

In Martin-Löf's type theory

In Martin-Löf's type theory the empty type plays the role of the logical constant false.

In a new programming language I'm developing I plan to use the empty expression as an exception igniter and an error indicator, but that follows the philosophy of reducing a full set (type) to a subset (value) successively. If an empty set is reached, an error is thrown. Explicit @Null type or equivalent will be embraced by a symbol type and assigned to the symbol where the empty set is expected as a non-error value.

You are right, as some proofs have potentially infinite search space, we cannot consider non-existence of a proof as a negation of a proof (but who knows, if specific search space could be detected to be finite, and all proof branches are rejected at a proof calculation, there's an argument for something cool).

Unsoundness of negation as failure

Types can define finite spaces, but this is equivalent to using 'member' on a list of atoms, and proving some atom is not in the list, by comparing every atom in the list with the query atom.

In type theory:

not A = A -> _|_

Which seems to me to say if you can prove A, you get bottom (False), which seems slightly different to: if you can't prove A you get true.

The unsoundness of negation as failure is easily demonstrable though, and Oleg has a nice summary: http://okmij.org/ftp/Prolog/#impure

It might be about the

It might be about the context in which a proof is used. You can stitch existence of a proof to a function and say: If there is a proof of A then return X, otherwise return Y (or you can leave Y out of the function). X and Y can be boolean, integer, string, success, failure or anything else, in whatever combination.

So the question could be: what is being done with proofs and how are they interpreted?

Otherwise Problematic

I think 'otherwise' is a problem: because it catches failure, it leads to unsoundness in the same way as negation as failure. I think proofs behave like types, so you might have "A -> _|_", which is basically: if we have a proof of A, then throw an exception. What you cannot do is go from absurdity to a proof of A; with "_|_ -> A" we would be saying that anything which evaluates to false can be used to prove anything.

Actually, in constructive

Actually, in constructive logics absurdity can be used to derive whatever we want. It is equivalent to an imagination in the outer world.


I refer back to Oleg's examples of unsoundness. Such a thing is obviously unsound, you may not be concerned with soundness, but I am.

To put it another way, if you just want to write programs in Prolog, you don't care if negation is unsound.

If you are trying to prove things using a Prolog-like logic language, you do care about soundness.

Complex numbers

I would like to see some logics conclusions equivalent to those in math with complex numbers. I mean we can successfully drag an imaginary number over solving expressions, possibly cut it down with another imaginary number at some point of calculation, and get the real result at the end.


This is a proof that negation of a proof-of-A doesn't prove True. The proof is based on logic's resolution rule, which has been proven sound and refutation-complete. The rule is based on the following:

if we have "A | B" and we have "¬A | C", then we can conclude "B | C".

Of course, before applying this rule we have to convert all formulas to conjunctive normal form (clausal form). No conclusions can be derived that cannot be derived by the resolution rule, as has already been proved.

Statement: from "A -> F" and "¬A" we cannot prove "T"

The proof follows. So we have:

A -> F

which, after converting it to clausal form, turns into:

¬A | F

Just for fun, if we say: "A", we can conclude:

from ("¬A | F", "A") we conclude "F"

This is the essence - if we say "¬A" we cannot conclude anything new:

from ("¬A | F", "¬A") there are no new conclusions.


("¬A | F", "¬A") doesn't prove "T".

Case closed.

Which Logic?

Except proofs need to be handled using intuitionistic logic, so "a|~a = 1" does not hold for proofs.

The reasoning is: although we should never be able to prove both a and ~a (consistency), hence a & ~a = 0, there may be things for which neither a proof nor a disproof exists (incompleteness), hence we cannot assume a | ~a = 1.

Edit: we may also want to exclude a & ~a = 0 for inconsistency robustness. Obviously if we exclude negation we exclude both of these, and we get a logic that is both intuitionistic and inconsistency robust.

Resolution rule, as I

Resolution rule, as I learned, is successfully used for propositional logic and first order logic. I assume that it can be extrapolated to higher order logic, why not.

"a | ¬a" is actually a tautology. Why wouldn't it hold for proofs too? It is about judgements and judgement "a" could say: "there is a proof for XYZ" and "¬a" could say: "there is no proof for XYZ".


a|~a does not hold because there are open problems we can neither prove nor disprove. Incompleteness shows we cannot ever prove or disprove some things; it's not just that we haven't yet found the proof or disproof.

For example, no proof or disproof of the consistency of simple arithmetic exists within arithmetic itself.

Gödel? Hell of an

Gödel? Hell of an exception, but we don't need to enter those if we bring proofs to terms of propositional logic: "there is a proof" or "there is a disproof". Now the existence of a proof is sealed (and reasoned about) outside of our system, and our logic foundations don't break.

The bottom line is: if we have a proof for A, we can conclude F. If we have a proof for ¬A, we can't conclude T, unless we explicitly state ¬A -> T.

Can we have a proof? You got me there, I just learned something new. Anyway, it should be a logical conclusion, as infinite search spaces exist.

Fail if proof found.

I think I agree with that. The language outlined above does not include the ability to fail if something is proved true, and right now I don't think it is necessary. It could easily be added, but I think by returning a Boolean you can conditionally carry on dependent on a False being returned.

In this way I am regarding a failure as a real problem with the program. For example the 'member' predicate may return true if the argument is in the list, or false if it is not, it would only fail if the argument could neither be proved in the list nor proved not in the list.

Edit: In this case the logic has to match Prolog resolution, so we cannot suppose that "there is a proof or there is a disproof" holds, as we can easily have a non-terminating proof due to recursion. To have a sound system with variable binding you either have to lose recursion or negation. As negation is easily recoverable inside the system, but recursion is not, the choice of which one to lose is easy.


We should be careful about the axiom that says: from "X" we automatically get "X -> T", if I recall correctly. So from "¬A" we would also get "¬A -> T" which then gets us "T".

Also, aren't "T" and "¬F" part of, or wired below the axiomatic system? That way, stating "F" induces an exception of inconsistency.

Variable Binding

I think the problem is logic does not deal with variable binding. Failure causes backtracking and bound variables get unbound. Negating the failure does not work because we can't rebind the variables (so double negation elimination is unsound).

For example:

:- not(not(X = 0)), not(not(X = 1)).

Succeeds implying that X is both 0 and 1 at the same time, which is obviously unsound.
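This is easy to reproduce in standard Prolog, where the built-in \+/1 plays the role of not/1 (a sketch of the behaviour, not part of the proposal):

```prolog
% The inner goal X = 0 succeeds, so the inner \+ fails, so the
% outer \+ succeeds -- and the binding X = 0 is undone on the way out.
% ?- \+ \+ X = 0, \+ \+ X = 1.
% true.
% X is left unbound, so the contradiction X = 0 and X = 1 is never seen.
```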

A word I keep using

You have to do consistent narrowing over a WHOLE expression that uses negation. That doesn't match Prolog's model, though there might be some inefficient ways.

:- x_domain(X), not(not(X = 0)), not(not(X = 1)).

Now the expression will fail.

:- x_domain(X), not(not(not(X = 0))), not(not(not(X = 1))).

This expression succeeds with X = 2.

More Complexity

I see where you are coming from, as when you negate X=0 you can turn that into the constraint X /= 0. I have been down this road myself, but it makes the interpreter significantly more complex to implement.

The challenge for this approach is negating a predicate like 'member' as defined above.

backtracking, constraints and tested unification

I imagine that not/1 in your usage takes a boolean input and thus

not(X = 1) can accept true or false.

Thus it's acceptable for not(X = 1) to unify X with 1 and return true to not/1, which then returns false; but it's also acceptable for it to attach the constraint "X will never be allowed to unify with 1" and return false to not/1, which then returns true.

That would allow your sort of program to backtrack, remain consistent, and search the space of unification successes and failures.

Of course if X is pre-bound then it just checks the binding.

If I hadn't made it clear before, I like your idea which is why I keep adding to it.

One problem with having anti-unification constraints, i.e. a list of things that a variable can't be unified with rather than things it is unified with, is: how can you detect whether that list is consistent? If a variable implicitly has a type of boolean, then if you ended a calculation with "can't be unified with true" and "can't be unified with false" you should fail. But what about more complicated cases where it's anti-unified with partially unground structures? Is there a way to use these sorts of constraints without ever running into that problem, so that by the end of a calculation everything is bound or consistent?

Negation of proofs

There are two parts to negating a proof, if we want to allow it (note my original suggestion does not allow it, and instead has a successful proof return a value that can be negated; proof failure is failure. But negating proofs is interesting, if only to show how difficult it is).

First we negate the predicate, and use symbolic manipulation (mostly De Morgan's rules) to push the negation inward until we can eliminate negation by replacing unification with a disequality constraint and vice-versa. This will result in an and of or structure that we need to convert back into Horn clauses.

When running this predicate instead of a single answer we will get a disequality (a set of not-equals constraints). This can be solved if we know the type of the value to give the set of all possible values, or it could be left as it is.

I tried to do this step-by-step for 'member' last night and could not get it to work properly. I am not sure if this is because there is something about the process I didn't understand, or I made a mistake.

Edit: This is what I get when I try to negate member, and it doesn't work (for example, the first clause succeeds whether X is a member of Y or not).

not_member(X, Y, Z) :- dif(Y, nil), dif(Y, cons(X, T)), dif(Y, cons(H, T)).
not_member(X, Y, Z) :- dif(Z, false), dif(Y, cons(X, T)), dif(Y, cons(H, T)).
not_member(X, Y, Z) :- dif(Y, nil), dif(Z, true), dif(Y, cons(H, T)).
not_member(X, Y, Z) :- dif(Z, false), dif(Z, true), dif(Y, cons(H, T)).
not_member(X, Y, Z) :- dif(Y, nil), dif(Y, cons(X, T)), eq(X, H).
not_member(X, Y, Z) :- dif(Z, false), dif(Y, cons(X, T)), eq(X, H).
not_member(X, Y, Z) :- dif(Y, nil), dif(Z, true), eq(X, H).
not_member(X, Y, Z) :- dif(Z, false), dif(Z, true), eq(X, H).
not_member(X, Y, Z) :- dif(Y, nil), dif(Y, cons(X, T)), not_member(X, T, Z).
not_member(X, Y, Z) :- dif(Z, false), dif(Y, cons(X, T)), not_member(X, T, Z).
not_member(X, Y, Z) :- dif(Y, nil), dif(Z, true), not_member(X, T, Z).
not_member(X, Y, Z) :- dif(Z, false), dif(Z, true), not_member(X, T, Z).

dif is a predicate for /= and eq for =, any ideas where I went wrong?

Looks interesting.

Looks interesting. Does not look familiar. The LOPSTR crowd would be the people that would know. If failure is an exception, how would you express try...catch i.e. how would you implement findall in this style?

Failure is like a runtime type error

My thinking at the moment would be that failure is a bad program, or lack of a program to satisfy a hypothesis. I don't think you would want try/catch as that leads you back down the negation path, as catch would be like not, succeeding on a failure.

I think failure should be seen like a runtime type error, and should be unrecoverable.

Edit: thinking about it, throwing an exception if something is true should be sound, and is the equivalent of A -> _|_ in type theory. All the problems stem from catching the exception, which would be something like _|_ -> X, where X is any proposition. This seems to allow proving anything from absurdity, which is obviously a problem.

Edit2: I think I have been somewhat unclear in my thinking, in that you can of course 'catch' failure with a disjunction. So failure to find a matching clause results in backtracking as normal.



conjunction and disjunction

I like it

I think you need a logical and or etc on values instead of on proof too




foo(EXP,Foo_truth) :- bar(EXP,Bar_truth), zap(EXP, Zap_truth), and(Bar_truth,Zap_truth, Foo_truth).

could be:
foo(EXP) -> bar(EXP) and zap(EXP).

foo2(EXP,Foo_truth) :- bar(EXP,Bar_truth), zap(EXP, Zap_truth), or(Bar_truth,Zap_truth, Foo_truth).

could be:
foo2(EXP) -> bar(EXP) or zap(EXP).

That could even be recoded to be more efficient and shortcut in cases where we don't expect failure.

Since and(false, _, false), we don't need to bind the second argument if the first is false.

We could have short-circuit 'and' and 'or', say && and ||, that assume their second element can't fail.

Let's see, that would be (using -> ; as the Prolog conditional, not with your meaning):

foo(EXP,Foo_truth) :- bar(EXP,Bar_truth),
(ground(Bar_truth), Bar_truth=false -> Foo_truth=false; zap(EXP, Zap_truth), and(Bar_truth,Zap_truth, Foo_truth)).

could be:
foo(EXP) -> bar(EXP) && zap(EXP).

unground answers

The Prolog versions of this aren't actually RETURNING values, they're matching booleans.

By this I mean that if the answer is unbound, the system is free to TRY true or false or both in succession

Is that what you want?

Out Mode

As you can still use the arguments in the usual way in the in/out mode, the return value can be considered an out mode parameter that is optional (can be void).

Is it useful/complete for it to be in/out

If some of the variables are don't-cares or dependent, will your method always find all the answers by using in/out?

Is it useful, reliable, complete?

Do you have to be careful to code to make it that way?

In/out sounds useful if it does that.


Or maybe there are cases where you need narrowing.

Ie, you need to assert variable matches true or false so that it KNOWS what to try.

Or a smarter search engine could do some kind of narrowing search where the values are narrowed with nonlocal backtracking and/or more than one at a time.


The way this translates to Prolog with extra variables is just like the way Prolog does definite clause grammars with a macro.

Keean, do you have a macro for this?

If you don't, I'll try to alter the DCG macros to do this.

Quoting Structures

I have realised there is a problem: the Prolog negation predicate does not behave like other predicates. Normal predicates simply unify with their arguments, whereas negation evaluates its argument before matching on the result. This means we need some way of quoting arguments, so that we have control over whether they are evaluated before matching/unification.
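A small Prolog illustration of the difference (p/1 and q/1 are hypothetical names):

```prolog
% An ordinary predicate treats its argument as a term to unify with:
p(q(1)).
% ?- p(q(X)).    succeeds with X = 1; q/1 is never called as a goal.

% Negation instead *evaluates* its argument as a goal:
% ?- \+ q(X).    tries to prove q(X); since q/1 is not defined as a
%                predicate, this raises an existence error instead.
```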

Quoting, eww

Anything that causes you to start quoting things (as e.g. traditional macros do) is Bad News.

Doing it already

Prolog is doing it already for negation; it's just that, because it is a built-in, you don't see it.

I thought the problem with negation

was in the opposite direction, at least in some examples I've seen.

It's not that you quote to keep from evaluating, it's that variables have to know their domains before you can prove that no members of their domain leads to a valid proof.

I.e., it has to be able to search the whole space, so it needs to know what the space IS. Or in other words, all variables have to be available to be narrowed.


What does this evaluate to:

a(true) -> true.
a(false) -> false.

:- a(b(c)).

Normally in Prolog this would succeed, but if 'a' behaved like the Prolog negation predicate, it would evaluate 'b(c)' and fail. This reminds me of fexpr in Lisp.

Actually it probably does not need quoting if we allow different definition syntax, and make functions distinct from rule clauses. That would of course restrict it to being one or the other, and not having mixed arguments.

What about Bayesian reasoning?

If 0 is false and 1 is true then 0.75 could be probably true...

Calculating with probability measures combined with backtracking looks like quantum calculations - a sum of paths calculation.

I can think of cases where you would be calculating a bunch in parallel even without backtracking - say you're doing a poker game, you might have a calculation for every possible hand, or groups of hands with similar strength - in which case the sum needs to be adjusted by the probability of the various universes being considered: even more like quantum calculations. You might do player modeling in which case you have another set of weights.

Bayesian is interesting

Bayesian is interesting, but it would be inefficient for programming, as all branches need to be explored to calculate the probabilities. There are plenty of possibilities here, but not directly useful for my goals. It might be useful for game-playing as you say, and I am interested in that (I have written a Go player).

"Negation as failure" is standard and sound in closed-world

AFAICS from more knowledgeable people, negation as failure in logic programming is well-studied. IIUC, it's also known under what conditions it is guaranteed to be sound. If you are *not* under those conditions, fine, but I suspect looking at them would still be very helpful.

You need a closed-world assumption. That is, you need to assume nobody can sneak up and add facts to your knowledge base. That's like declaring an algebraic datatype, assuming nobody can extend it, and then deriving an exact description of its elements, or proving something can't be an element.

equating proved-false with no proof found, when they are not the same thing.

Ah, that's a different concern — but that can easily be sound. You need:
- a logic which is complete — everything that is true (in all models) is provable. First-order logic is such a logic, and Prolog's logic is weaker. Moreover, its proofs are always finite.
- a proof search that will never fail to find a proof if one exists. I remember related claims for Prolog's resolution, but I'm not sure enough of which kind of completeness it has. (A quick check on Wikipedia seems to confirm that resolution is powerful enough)
- a "complete" knowledge base as a starting point. If you say that Paolo is a person and Keean is a person, and nothing else, the semantics is that Dick is not a person — just like for algebraic datatypes. I agree that usually in logic you don't make this assumption — the theory from a knowledge base will only prove what is true in *all* models of your theory, not just in this "minimal" model.

Indeed, soundness of negation as failure (under those conditions) is also known from the '70s.

Disclaimer: even though I am not completely uninformed, I'm not an expert on logic programming, so I welcome corrections.

Adding to the database

Doesn't the closed-world assumption seem kind of weird? Adding to the database is routine even in a single-user system.

Implementation of Failure

Given not(not(X=1)), what is X? Negation as failure does not work here, as failure undoes the variable binding (so if X were unbound, it is still unbound), leading to the problems with double negation.

Are you interpreting failure as something other than unbinding variables and backtracking? If so how are you implementing failure?

Edit: perhaps it was not clear that this still applies if X is typed (closed). "not(not(X=false)), not(not(X=true))" still evaluates true even if you know X is a Boolean, because the variable X is unbound by the backtracking. This is not to say that you cannot have a declarative language where there is a sound interpretation of this; one example would be constraint solving, where the constraints are negated. But such a system does not have the efficiency of directed solving as in Prolog, as you have to collect the constraints and then solve them.

CWA is a lot stronger than that.

The Closed World Assumption has two parts:

  • Everything that is true is known to be true (is explicitly stated as a fact).

  • Everything that is not stated is false.

This assumption is a re-phrasing of the conditions under which negation as failure is sound - under these axioms the soundness of negation as failure is tautological. Your later description of a "complete" knowledge base is actually the CWA.
Edit: I should also mention that the inability to assert or retract is implied by these definitions: all facts are already stated for the CWA to hold.

I realize this is a useless side track

But doesn't that imply that what's wrong with "not(not(X=false)), not(not(X=true))" is that X is not bound.

If the CWA assumes that everything true is stated, maybe it also assumes that there are no unbound variables waiting to be calculated. I.e., X has to be already known.

So under CWA the example has to be
"oracle_what_is_the_true_value_of_x(X),not(not(X=false)), not(not(X=true))"

Boolean Satisfiability

Well you would be reducing the problem to Boolean satisfiability, so convert to CNF and solve. In other words you have two constraints on the value of X. In CNF, not(not(not(X))) /\ not(not(X)). Notice the variables in a SAT solver work in a different way, and we can no longer direct the search, but have to rely on the SAT solver search algorithm. There is no 'time' in the solution, either we have found the solution or we have not, there is no explicit sense of trying a solution and backtracking if it's wrong.

You will of course run into big problems if you try and implement 'member' in this way.

No Programs?

The inability to assert or retract would seem to preclude writing what we would consider 'programs' and reduce the functionality to that of a constraint solver (SAT solver for Boolean domain). This is what I was trying to get at when I said it would no longer be a directed solver.


Even with a CWA a program could still use assert/retract. The distinction would be that it could not introduce new information by doing so, that is each assertion/retraction would be some syntactic sugaring of the facts already present, e.g. filling out implications.

Prolog is just a constraint solver with side-effects, isn't it? If you allow another solver to also use side-effects, then it seems to me that you should get a language as powerful. This is the idea behind answer-set programming, where the resolution logic of Prolog is replaced directly with a solver over a boolean domain.

I've had a chance to mull over your idea and I think that I have seen it before (not 100% sure about that) and that it is quite interesting. I think it may be known as a "folklore" technique because when I worked with the guys who do ciaopp I vaguely remember them showing me how to do this to convert a prolog program into a form that one of their analyses would terminate on. But a very vague memory and it could be that they showed us a style that was similar but not quite the same.

What strikes me as particularly interesting is that the style that you propose can be converted to a strong assumption in the same manner as the CWA, but it ends up with a tri-valued model: things that are true, things that are false, and a set of unknowns. The set of facts in a program that explicitly disconnects the truth value of a proposition from its provability in this way can form a constructable subset of the true and false statements in a particular domain, leaving an implied set that has a third value by default: unknown/unproven. This seems highly reminiscent of Version Spaces in Machine Learning, and suggests that it would be interesting to try to bound difficult-to-express concepts by over/under-approximations.

Other Solvers

If you allowed side effects in other solvers, would it make negation unsound in the same way? As you said, Prolog is a constraint solver, so adding Prolog-like logic variables to other solvers would result in Prolog? The optimisations that make SAT solvers efficient would not hold in the presence of side effects? Certainly SAT solvers cannot cope with Prolog-like programs, and answer set programs have to have a single stable model.


If you allowed side-effects in another solver - it probably would make it unsound. It depends on what the side-effects do. My guess would be that there would be a pure subset of the language with the same soundness properties as the solver, similar to the pure kernel in Prolog.

Pure Prolog cannot do very much - any "program" in Prolog uses impure side-effects. The pure kernel does not allow any I/O, so programs within it are trivial, although pure fragments of larger programs are useful for analysis.

Unification with a Cut Construct

Prolog is just a constraint solver, with side-effects isn't it?

It's a language built around unification with an explicit cut rule, i.e. you have some control over the search.

Well. That's how I remember it.

quick and dirty

I set out to implement this as macros in Prolog, but then I realized that it's not really Prolog, and the interface with regular code would be hard.

I'm going to have to come up with a Prolog-like language to play with this, where the dialect is kept separate from the rest of Prolog.

It's been a long time since I coded in Prolog, so this might not be as easy for me as I hoped; we'll see.

Tried it.

So this intrigued me enough that I downloaded SWI-Prolog and started to write a meta-interpreter for your functional syntax. It did not work. The problem is that what you have written is not really syntactic sugar for the underlying Prolog. The simple facts are fine (e.g. not(true) -> false.), but the semantics of the compound calls don't really make sense. It would look something like this to be sugar:

member(X, cons(Y, Tail)) -> Result :- X /= Y, Result = member(X, Tail).

Otherwise the issue is that the mixture of Prolog-like (non-returning) predicates and function-like (returning) predicates doesn't seem to be well-defined. What are the semantics of ',' in the new language?

Then the actual call in the final not_member did not make any sense to me at all when I started to think about implementing it as a meta-interpreter. Have you hacked up any prolog code to execute the syntax that you sketched?

Void Returns

My initial idea was that equality/disequality returns void (but succeeds or fails as before, leaving the variables bound on success, and unbinding and backtracking on failure). It may be better to adopt a different syntax, as it is effectively acting as a pattern guard. So some alternatives:

member(X, cons(Y, Tail)), dif(X, Y) -> member(X, Tail).
member(X, cons(Y, Tail)) | dif(X, Y) -> member(X, Tail).
member(X, cons(Y, Tail)) -> dif(X, Y), member(X, Tail).

The semantics of the ',' would be to discard the result of the left hand side, and backtrack on failure, so the right hand side only gets evaluated if the left-hand-side succeeds.

If you restricted the comma to the left side of the '->' you might want to enforce its arguments do not return any values.

I think what you are referring to with the 'not' not making sense is what I was trying to get at when talking about quoting. There needs to be a way to determine if the arguments get evaluated or treated as a structure.

Perhaps the simplest answer is to leave the syntax alone and just write Prolog using this style, and remove the negation primitive. However I do think there is something interesting going on with regards to the crossover between functional and logic programming.

Starting from a functional perspective, if you added a literal structure syntax to a simple functional language with pattern-matching and guards, what you have would appear to be equivalent to a committed choice logic language. I am sure this is old news to most people, but it seems an interesting point in the design space to me.

An example syntax that fixes the quoting issue would be:

atom: a b c
variable: X Y Z
structure: s{a, b, c}
function: f(x, y, z)
function definition returning void: f(a, b, c).
function definition returning something: f(a, b, c) -> d. 

not(true) -> false.
not(false) -> true.

member(Key, nil) -> false.
member(Key, cons{Key, Tail}) -> true.
member(Key, cons{Head, Tail}) -> dif(Key, Head), member(Key, Tail).

not_member(Key, List) -> not(member(Key, List)). 

-> not_member(b, cons{a, cons{b, cons{c, nil}}}).

This would have to use backtracking semantics to work, as you have to backtrack on 'dif(Key, Head)' if you chose this clause first.
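For concreteness, one possible translation of that example into plain Prolog (result as an extra last argument, matching the version at the top of the thread; backtracking past dif(Key, Head) selects among the overlapping clauses):

```prolog
not(true, false).
not(false, true).

member(_Key, nil, false).
member(Key, cons(Key, _Tail), true).
member(Key, cons(Head, Tail), Result) :-
    dif(Key, Head),
    member(Key, Tail, Result).

not_member(Key, List, Result) :-
    member(Key, List, IsMember),
    not(IsMember, Result).

% ?- not_member(b, cons(a, cons(b, cons(c, nil))), R).
% R = false (b is a member of the list)
```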

If we instead went with something more like pattern guards, then this could be implemented as a committed choice language, and we could get rid of backtracking (or you would only have to backtrack the current guard), and any failure would terminate the program.

As structures are explicit now there are some other interesting possibilities:

returning a structure: f(a) -> cons{a, nil}.

more complex to implement:

defining functions in a structure: my_fn{n(t) -> f. n(f) -> t., t}.

Edit: changed /= to dif to make semantics clearer to Prolog users.

defining a language, clarifying if all of the ideas make sense

It seems to me that options for a function/predicate are the following orthogonal choices:

I) success:
a) must succeed (falling through the last clause signals an error)
b) vs. can fail

I know this was the original idea of the post, but I'm not sure it makes sense. It seems that you're saying it's an error if there are no other clauses to try, i.e. no semicolon. Is it an error if there are other clauses to match? I.e. in prolog one can rewrite clauses with semicolons to have multiple clauses and no semis - are you breaking that equivalence?

Maybe all you really want is a warning when "unproved" is the value of the whole program.

II) backtracking
a) multiple successes, can be backtracked into
b) vs. cut on first success

III) return type:
a) returns a typed value (truth is one type)
b) vs. returns an untyped value
c) vs. does not return a value

each individual clause also could have choices such as:

I) how the return value is calculated, and whether the return value is a guard:
a) returns the value of the last sub-expression
b) vs. returns true on success
c) vs. returns false on success
d) vs. the final sub-expression must return a specific value, such as true or false, otherwise the clause fails.
e) vs. the final sub-expression must return a specific value, such as true or false, otherwise it is an error, i.e. the whole PREDICATE fails

It's not clear to me when predicate failure should be an error. It seems to me that if the programmer expects failure then it's not an error.

I think, therefore, that NONE of this makes sense as automatic errors. If you want failure to be an error then you need a predicate like what other languages call "assert". Since assert has a different meaning in prolog, you need another name: error_on_fail/1.

It's an open question to me whether even in this case I like the idea of there being two kinds of failure, but prolog already has that:
You can "fail" and you can "!,fail" - cut fail fails the whole predicate.

So rewritten this is:

I) backtracking
a) multiple successes, can be backtracked into
b) vs. cut on first success

II) return type:
a) returns a typed value (truth is one type)
b) vs. returns an untyped value
c) vs. does not return a value

each individual clause also could have choices such as:

I) how the return value is calculated, and whether the return value is a guard:
a) returns the value of the last sub-expression
b) vs. returns true on success
c) vs. returns false on success
d) vs. the final sub-expression must return a specific value, such as true or false, otherwise the clause fails.
e) vs. the final sub-expression must return a specific value, such as true or false, otherwise the whole PREDICATE fails

The idea of guarded clauses suggests a more general mechanism of guarded EXPRESSIONS with arbitrary guards and possibly arbitrary functions applied to the return value.

This last bit comes from the fact that Prolog gives you the most general mechanism for return values, i.e. output/matching variables but no expressions. It can do ANYTHING, but not readably. And the proliferation of variables required to create expressions makes code very hard to follow. But as soon as we try making prolog syntax more compact it's tempting to add more and more syntactic sugar for every possible thing we might want. So none of this adds power, only readability, and we can argue forever about what is better aesthetics.

I'm a fan of adding almost anything that makes the number of NAMES in a program as few as possible. If you don't have to define multiple levels of predicates, that's a win. If you don't have to name more variables, that's a win.

Some Answers?

To address those points specifically, what I am looking at is:

1. can fail.
2. can backtrack (but committed choice is an interesting design; if you have guard clauses you effectively have one level of backtracking, but for the moment I am more interested in an underlying Prolog-like implementation).
3. returns an untyped value (note 'proof' is not returned; semantically this is the same as pure Prolog with no negation). Returning true or false has nothing to do with failure; failure is specifically when there is no matching clause. So a clause that returns false still 'succeeds'.

The program should be translatable to pure Prolog without negation. See the original example for the actual implementation of member and not_member in Prolog. Any questions about the semantics can easily be resolved by looking at the translation into Prolog.

website glitched

I entered a comment and it appeared twice.

Syntax for subexpressions

How do you syntactically distinguish between functions and term matching? For instance:

foo(bar(Baz)), which translates to the following prolog
I.e. nested functions

2) and that which translates to

I.e., just match terms?


The answer was in the post above, my preferred method at the moment is () for function application and {} for structure literals.


say the prolog is:

so that the call has mixed parameters
would that be


that seems to work as long as you assume that () is used on all functions of no parameters.

or did you think it was going to be
foo(...) for every parameter is a function
and foo{} for every parameter is a structure
in that case you would need a quote so it would be

or an unquote so it would be

those seem worse.

Unquotes are still useful in structures to fill in their parameters with return values.

Let's say that back-quote (`) is unquote.
I think you don't need quotes as long as {} works like in the first example


should be
a_clause(Degrees,RESULT) :- bar(R0), sin(Degrees,R1), foo(bax(R1,whooo),R0,RESULT).

interfacing to the rest of prolog

I think the best idea is:
functions have to be defined and seen by the compiler so they compile properly.

It has to know that baz is in your dialect rather than prolog in order to handle it properly.

Built-in prolog does not return values. This suggests that if you want to use an expression like:
builtin()->true;false (using -> as if) then you need a way to make inline expressions with if and , and ;
Also you need to stop using '->' because that means 'if' in prolog.

maybe => instead

also ? isn't used in prolog
maybe you could define ? as a preop that means
?(X,R) :- X-> R=true; R=false.

so you end up having
a_clause() => foo(? builtin()).
a_clause(RESULT) :- R1= ? builtin(), foo(R1,RESULT).

and for inline logical expressions reuse ()
a_clause() => foo( (builtin()->true;false) ).

This example has the same meaning.

Is there any problem with reusing () that way?

also we need ways to tell it about things that are already functions in prolog (ie the final variable is output).
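To make '? builtin()' parse and run in ordinary Prolog, something like the following might do (the operator precedence and the name 'reify' are my guesses; 'reify' is the '?' translation under a non-clashing name):

```prolog
% '?' as a hypothetical prefix operator so '? Goal' parses as ?(Goal).
:- op(200, fy, ?).

% reify(+Goal, -Result): turn the success of a goal into a boolean.
% Note this uses the very '->'/';' construct whose surface syntax is
% being replaced, which is fine in the generated code.
reify(Goal, Result) :-
    (   call(Goal)
    ->  Result = true
    ;   Result = false
    ).
```

The compiler would then rewrite a call like foo(? Goal) into reify(Goal, R1), foo(R1, Result).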

Model-theoretic semantics?

If I skip the pipe-notation as a purely syntactic change then the difference between these two forms is really interesting from a logical (rather than procedural) perspective:

member(X, cons(Y, Tail)), X /= Y -> member(X, Tail).
member(X, cons(Y, Tail)) -> X /= Y, member(X, Tail).

The procedural reading of these two forms seems similar: the X\=Y term is still being used as a test to guard execution. Either it is being used before the rule is chosen for execution, or afterwards and then backtracking allows the choice to be discarded.

But the logical reading of these two forms looks very different. Moving the term from the body to the head with a conjunction means the first version is no longer a Horn clause so any guarantees provided by a single positive term (Definite Clausal Logic) disappear. This raises questions about soundness and stability. The procedural semantics could be easy to work out (I agree they do look like pattern guards in a functional program), but the relationship to clausal logic would differ and the model-theoretic semantics may be more tricky.

If I rewrite the implication explicitly (and swap sides so the arrow reads more conventionally, i.e the arrow is implication and not the functional production):

member(X, Tail) -> member(X, cons(Y, Tail)) ∧ X ≠ Y
X ≠ Y ∧ member(X, Tail) -> member(X, cons(Y, Tail))

These both read strangely to me. They were locally true at the point in the original procedural interpretation they would have been called in. But trying to map them back to clausal logic produces something that is not globally true (the uniqueness property that was necessary to form disjoint formula now describes sets rather than lists, a weird syntactic glitch).

This relates to your previous discussion about quoting/unquoting and also bind/unbinding of variables on failure. The version that looks more like pattern guards does not have (the same) straightforward mapping into clausal logic so the model-theoretic semantics are a little unclear.

In the second form each rule is still a Horn sentence, so it is a Definite Clausal Logic. But the typing of terms according to their "return types" makes some formula illegal and thus the mapping back onto an underlying logic is not trivial.

Deterministic not and member.

I had another play with this. Here is a version that allows the boolean logic to be expressed in this style of predicates with deterministic modes. I've moved the "results" to the first argument as it is more standard that way. It allows the simple expression of member and not in a way that they can be used together. To get around the issue of the procedural optimisation breaking the logical semantics I've borrowed a trick from combinators over lists.

not(true, false).
not(false, true).



equals(false,X,Y) :- X\=Y.

member2([Res|Rest],X,[Head|Tail]) :-


It seems to be about as far as one can go in a pure style. I wrote another version that handles the functional application as a meta-interpreter. There is an interesting quirk inside to do with quoting, as it really needs a locally scoped assert/retract to handle binding properly. It's possible, but I didn't implement it as it would be awkward to simulate local names within compound terms asserted/retracted against single global names.
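The elided definitions above might look something like the following reconstruction (my guess at the fold-over-booleans trick, compressed so the or is folded directly rather than building the intermediate list; the actual code may differ):

```prolog
% Boolean helpers with the result in the first argument.
not(true, false).
not(false, true).

or(false, false, false).
or(true,  false, true).
or(true,  true,  false).
or(true,  true,  true).

% equals(Res, X, Y): Res is true/false for X = Y, usable in any mode
% thanks to the dif/2 constraint.
equals(true, X, X).
equals(false, X, Y) :- dif(X, Y).

% member2(Res, X, List): fold the per-element equality results with or.
member2(false, _X, []).
member2(Res, X, [Head | Tail]) :-
    equals(E, X, Head),
    member2(R, X, Tail),
    or(Res, E, R).

not_member2(Res, X, List) :-
    member2(R, X, List),
    not(R, Res).
```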

If you're interested in the functional/logical cross-over angle then there was quite a batch of research in the 90s around the definition and implementation of Escher. I'm not sure how that worked out - I think the details of the language semantics got a little bit too hairy. RELFUN is quite an interesting entry point to that area.

Why such an inefficient algorithm?

You convert the whole set into a list of bools, true if equal to the test member, then you zip up the whole new list with or.

If you have committed choice predicates and false isn't fail then wouldn't ANY member test be negatable?


I don't understand what's so special about this style that you wanted to do it this way.

Also that inefficient algorithm would be much easier to read in a functional style
member2(Res, X, List) :- Res = zip(or, map(curry(equal, X), List)).

Explicit model

What would you try to optimise for efficiency?

The cases are expanded out to make the determinism explicit for all modes. The optimisation that you seem to want to make trades resolution steps in one mode for a loss of determinacy in other modes. Determinism across different modes for the predicates is generally what is interesting about writing in a logical style instead of a functional one. If you lose that then why use Prolog?

Also that inefficient algorithm would be much easier to read in a functional style
member2(Res, X, List) :- Res = zip(or, map(curry(equal, X), List)).

Yes it would - it is an encoding of a fold. But to put it into a functional style would require a meta-interpreter to handle the application and I was curious what could be done without one.

:/ other modes don't all work

I'm surprised you said it worked in other modes, because I couldn't tell it was written for more than just a forward checking mode.

I just pasted the code into ECLiPSe 6.0 prolog with the items renamed.

Useful other modes don't work. What works and what doesn't is a surprise.

For instance:
member(R,X,[1,2,3]) doesn't generate 1,2,3 in X (it gives 1 then gives up).

But member(R,4,[1,2,3|Y])
R=false, Y=[]
R=true, Y=[4]
R=true, Y=[4,4]
R=true, Y=[4,4,4]

Which points out that this kind of prolog doesn't generate success before failure.

generates X=1 but never X is unbound.
I don't know why since X\=X on the command line fails properly

Maybe programs that run in multiple directions are largely too flaky. We need a better way.

Or am I confused and this is supposed to run in a committed choice interpreter and never generate multiple results? In which case the unfinished set example points out the question WHAT DO YOU DO WHEN THE ANSWER ISN'T DETERMINED?

Also what I would optimize is that member should stop as soon as it succeeds in finding a match.

The deeper moral is that committed choice is wrong because:
if it's given an underdetermined problem it picks the first answer it finds and doesn't tell you there are others.

Normal backtracking is kind of wrong in that the only way you know that the problem is underdetermined is to examine the answers you get.

Figured it out

If you change the definition of equals so that equal can try refusing to unify things that would unify:

Then it will try other unifications. But then it is also free to try refusing to equate ground terms that are equal.
such as allowing 1 /= 1

You could put in a ground test but that feels wrong - what about partially specified structures? But really in that case "being a member" isn't what's being tested... So really calling it on a list with partially ground members seems ambiguous and wrong.

You want
equals(false,X,Y):- nonground(X); X\=Y .

But now it needs a name other than equals because it's not treating the parameters the same.

That fixes it but feels wrong
Note that having holes in the member list can give extra but not wrong answers
?- not_member(R, X, [1, Y, 3]).
R = false
X = 1
Y = 1
Yes (0.00s cpu, solution 1, maybe more)
R = false
X = 3
Y = 3
Yes (0.00s cpu, solution 2, maybe more)
R = false
X = X
Y = X
Yes (0.00s cpu, solution 3, maybe more)
R = false
X = 3
Y = Y
Yes (0.01s cpu, solution 4, maybe more)
R = true
X = X
Y = Y
Yes (0.03s cpu, solution 5, maybe more)
No (0.03s cpu)


Now try:

?- not_member(R, 3, [1, Y, 3]).
R = false
Y = 3
Yes (0.00s cpu, solution 1, maybe more)
No (0.03s cpu)

Oops, now it stops on the first answer because 3 is ground. But if it were allowed to fail it would find the second one.


This is a general reply to this thread. This seems interesting, but in general it is hard to write multi-modal code in Prolog.

I think the advantage of Prolog is that algorithms can be expressed clearly and concisely taking advantage of the built-in tree literal structures, unification based pattern matching and backtracking. I don't think everything has to work for all modes.

However, if this kind of code is what we are aiming for, it would seem better to implement constructive negation, such that the result of a program is a constraint set (so as well as "X=2" being returned, "X/=1, X/=3, X/=4" would be an alternative).

If this were possible what answers would we expect for the different possible queries?

As I said

The important things that testing shows is:
1) if success can be "test failed, here's a false" and the problem is underdefined, it can return false as the first answer when you wanted true.
2) if you use committed choice and a problem is underdefined, then it can return false and then be unable to give you true even though true is what you wanted.

a) committed choice doesn't work well enough.
b) you have to collect answers till you find an acceptable one or fail. Since there's this extra step, you need a construct to say what answer you're looking for on each query.

While you COULD turn false back into failure, if you wanted that you'd be programming prolog.

so instead of
and X is true if you could prove foo()
You need


prove_member_true(X,E,L):- true_first(X),member(X,E,L).

or maybe

prove_member_true(X,E,L):- true_first(X),member(X,E,L),!.

What's the principle here?

Disequalities as Constraints

Disequalities need to be implemented using constraint propagation. I think the standard Prolog builtin for this is 'dif(X, Y)'.

not standard

constraints aren't part of the prolog standard, though there are common libraries that do it one way or another.

In ECLiPSe those are built on a bunch of primitives that aren't part of the standard either, so ISO prolog may be a little weak in the area of constraints.

In ECLiPSe prolog, variables can have a bunch of hidden fields, "attributes" and they're somehow used to implement different solvers than mere unification based ones. There's some level of parallelism with some clauses waiting in some solvers, and there's narrowing where other solvers pick new values for multiple variables at once and backtrack to their own chosen points I suppose.

I'm not an expert in this. For years there were constraint solving contests based on MiniZinc (which has a library in ECLiPSe).
My impression is that these contests didn't yield much success.

The advances were made by a switch to SMT-based solvers, and a new language for that. Prolog turned out to be inefficient.

There was an early paper where someone pitted Zinc based constraint propagation solvers against a simple binary SAT solver that found integer solutions as independent bits - and the SAT solver was clearly superior.

Then he did another paper with a smarter SMT solver and that beat everything.

But SMT and SAT throw away the whole model of prolog, while the constraint solvers fit in smoothly.

Later Standards

I thought CLP was added in the later Prolog-2 or Prolog-3 standards. In any case most of the well known Prolog implementations (SWI for example) include CLP and the dif predicate, and my own interpreter has native support for it, using the attribute variable method.

SMT and SAT are interesting, but they are not directed solvers, that is you have no direct control over how they explore the search space. Further they really don't cope well with recursion and variable length data structures. Most of the problems I am interested in are to do with compilers, and hence operate on trees.

I cannot easily see how to write a SAT or SMT problem that evaluates an expression tree like this for example:

add(z, X, X).
add(s(X), Y, s(Z)) :- add(X, Y, Z).

expr(lit(X), X).
expr(add(X, Y), Z) :- expr(X, XV), expr(Y, YV), add(XV, YV, Z).

:- expr(add(lit(s(s(z))), add(lit(s(z)), lit(s(z)))), R).


I have so far failed to understand smt. At one time I rented a book on it, but didn't get very far.

We should both read Knuth's new fascicle 6A

There's a free version on his site or you can buy an updated version for $25 many places.

links to both are here

Already read it

I have already read it, implemented code for the methods, and got a cheque from Knuth for finding an error :-) He may have got further since then, but it was all optimisation of the fundamental DPLL by then anyway. I didn't have much interest in the 'approximate' methods, but that's where the really amazing performance is for a certain kind of problem. When I read it, it was unfinished and I stopped at lookahead-solvers, as I realised that until I had worked out how to convert the problems I wanted to solve into CNF there was no point in implementing a faster and faster solver. Further, none of the later optimisations help the general case. You can see the code I implemented on GitHub https://github.com/keean/SAT-Solver/blob/master/sat.cpp#L6 Algorithms A, B and D all work; L (the lookahead) is unfinished. It also didn't deal with SMT as far as I can remember.

Interestingly efficient SAT and SMT is quite nice on top of Prolog and CLP: https://www.cs.kent.ac.uk/pubs/2012/3136/content.pdf

Edit: I bought the hard copy of the fascicle, as there is significantly more in it than when I last looked at it, including lots more interesting algorithms.

I will bother you later if

I continue to not understand how SMT maps to SAT or how SMT models can be combined.

Oh my, that paper looks enlightening!

SMT in one paragraph

SMT is basically this: given a theory involving inequalities on numbers, like "a < b /\ b < 5 |= a < 4", you rearrange it into a proposition "a < b /\ b < 5 /\ not(a < 4)" which we need to show is unsatisfiable. From this a boolean problem (a /\ b /\ ¬c) is derived (the skeleton). A SAT solver is used to enumerate all possible solutions to this, and for each possible solution a constraint solver determines if the conjunction of the constraints, inverted where the SAT solver produces a zero bit, is consistent. To prove the theory, all the possible SAT solutions must result in inconsistent constraints. Does that sound about right?


I don't have a full interpreter installed on this machine to check - but are you describing something that works in a similar way to CLP/fd? Something like:

X #> Y, X in 1..3, Y=1, label([X]).
X = 2,
Y = 1 ;
X = 3,
Y = 1.

Oops, poorly tested

Oh dear - clearly my testing sucked a bit. The problem that you spotted is because I've used the wrong unification test in this line:

equals(false,X,Y) :- X\==Y.

This variation of the unification check works the way that you are describing with respect to grounding. If I plug in your test query then it generates 1,2 and 3 as expected. The syntax is written for SWI so it works directly on http://swish.swi-prolog.org/

I think Keean is using a flavour of Prolog that I haven't seen before, as /= isn't a valid operator on flavours that I'm familiar with (SWI and Ciao), so I assumed it meant \=.

It is possible there are other bugs lurking in there - I am not the least lazy tester in the world. Let me know if it doesn't work on any other modes and I may be able to fix it.

Constructive Negation

This is supposed to be in response to CLP(fd)

- Unification represents an equality constraint. The interesting thing about equality constraints is we can solve immediately by unifying the variables even if the variables are ungrounded.

- The negation of unification is the disequality X /= Y (written "dif(X, Y)" in standard Prolog). This is not a test but a constraint. If either of the variables is ungrounded the constraint gets 'frozen', and it is 'thawed' and re-tested every time something attempts to unify with that variable. This is like CLP, but it is only the disequality on unification (we are not dealing with numbers and inequalities).
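A small illustration of the freeze/thaw behaviour, as a session in any Prolog providing dif/2 (SWI, for instance; output shown roughly as SWI prints it):

```prolog
?- dif(X, a), X = b.
X = b.

?- dif(X, a), X = a.
false.

?- dif(X, a).
dif(X, a).
```

In the first query thawing the constraint against b finds no conflict; in the second the attempted unification wakes the constraint, which rejects it; in the third the constraint cannot be resolved and remains in the answer.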

- given any clause we can calculate the constructive negation by 'negation elimination' (essentially the Sato and Tamaki negation technique). For example, given:

p(a).
p(b).

this could be rewritten

p(X) :- X = a; X = b.

and hence the negation:

not_p(X) :- not (X = a; X = b).

Except we cannot directly implement negation soundly, so apply De Morgan's laws:

not_p(X) :- not(X = a), not(X = b).

And as already discussed we know how to implement the negation of unification using CLP.

not_p(X) :- dif(X, a), dif(X, b).

:- not_p(a)
not proved.

:- not_p(c)
proved.

:- not_p(Y)
Y/=a, Y/=b.

Note in the ungrounded case we cannot resolve the disequality constraints, so they remain in the solution.


Are you familiar with Ciao? The functional library in the system is quite close to what you want and it integrates well with the constraint libraries. There is a nice overview of the language and the tooling in http://arxiv.org/pdf/1102.5497.pdf


I am more interested in understanding negation, and I am working with my own logic language implementation that provides disequality constraints natively using the attribute variable technique. So I have logically sound unification (rational tree unification with post cycle check), I have a complete search strategy (iterative deepening), and I am working on complete inference.

The original point of this post was to look at the 'positive' fragment of logic only because it occurred to me boolean algebra can be reconstructed within it, then there only remains the original syntax issue.

However I also think constructive negation is a viable way to add support for the negative part. I am undecided at the moment which way to go. It is certainly harder to implement constructive negation in the interpreter than only implement the positive fragment.

I had a quick look at http://clip.dia.fi.upm.es/papers/negation-gulp.pdf and it looks like the kind of thing I am interested in, but they are already optimising, whereas I would be happy with slow but working at the moment.


Looking at the standard Prolog builtins I would expect:

equals(false, X, Y) :- dif(X, Y).

To be clear I am using a prolog like logic language, which implements sound unification, and a complete search strategy (iterative deepening). Complete inference is a work in progress. I probably should use the standard Prolog names here though.

I see the difference now - I

I see the difference now - I had assumed earlier that you were actually working in Prolog with standard depth-first search and unsound unification. The example code above implements what you described (initially in the example at the top of the topic) in a cut-free implementation and a "reasonably" natural style. This suggests that the alternative syntax/semantics that you suggested originally should work reasonably naturally in a meta-interpreter over pure (cut-free) Prolog.

Using \== does not defer anything until later - it performs a single unification check without grounding the variables. Working with deferred constraints is interesting - are you planning to try alternative modes on the embedded functional terms, e.g. what inputs could create this output over a closed input domain?


My prolog only has dif in a compatibility library, undocumented so I don't know what it is except by context.

But I don't see it deferring, I see it declaring to a constraint solver that two values are different.

In the integer constraint library there is $\= for a not equal constraint
or #\= for both are integers and not equal

Not Closed Domains

At the moment I don't want to assume a closed domain, as that would introduce typing. My preference at the moment is to propagate the disequalities. So given:

:- not_member(X, [a, b, c]).
X/=a, X/=b, X/=c

whereas with a closed assumption and knowing that the list members have a type with a domain {a, b, c, d} you could conclude "X=d". This is however easily recovered.

:- not_member(X, [a, b, c]), p(X).
X = d, proved.

There are no types involved here, it's simply the result of the disequality constraint propagation.
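To make the recovery concrete, suppose p/1 simply enumerates the (hypothetical) domain:

```prolog
p(a).
p(b).
p(c).
p(d).

% ?- not_member(X, [a, b, c]), p(X).
% The propagated constraints X/=a, X/=b, X/=c reject the first three
% choices offered by p/1 on backtracking, leaving X = d.
```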

I missed the part

where you said it was your own language.

Can you explain what sound unification is, and what the algorithm is?

It's not just unification that's correct over self referring structures, right? That wouldn't be relevant in this example.

Sound Unification

As you say, I don't think it's relevant in this example; I was just pointing out other differences from standard Prolog. The unification algorithm is rational tree unification (so unification works on cyclic structures) with a post-cycle check (so unification fails if there are any cycles in the result). This keeps inference sound for a Herbrand universe, but also ensures unification terminates. I think it's faster than including an occurs check in the unification itself, because you only have to cycle check parts of the tree (where there is a repeated variable on either side of the unification).

returning values.

I have an embedding of an extended prolog in scheme (using continuations, it's really embedded and allows mixing scheme and prolog).

It can allow clauses to return values like functions, but it doesn't do what you want.

You want to be able to say (unify '(a b c) (a_clause 5)) where the unify can turn the "return value" of a_clause into an input (of '(a b c)).

In mine return values are just return values. It would be interesting to use macros to fix this... and a very painful library to write. My own library is a lot of macros, and macros aren't easy.

committed choice

I think if committed choice is the mode of this language then, like many examples I gave above, you can't use it to give you any proof and let it stop there, you have to request a proof that is predefined in all particulars and see if you get it. And if there are variables other than those in your answer, you have to simulate backtracking by permuting all the possible values on those.

As I said, I'm not seeing committed choice as a good strategy here.

Perhaps you can explain how I'm wrong?

Not Committed Choice

I prefer a backtracking implementation too; I was somewhat trying to separate concerns by indicating that the implementation was the same for backtracking and committed choice. I prefer non-overlapping clauses where possible.

Intensional Negation

The second form of negation discussed in this thread is intensional negation, or constructive intensional negation (not just constructive negation as I called it above).

Does anyone know of any freely available papers covering implementation techniques for this kind of negation?