Pure, Declarative, and Constructive Arithmetic Relations

Pure, Declarative, and Constructive Arithmetic Relations. Oleg Kiselyov, William E. Byrd, Daniel P. Friedman, and Chung-chieh Shan. FLOPS 2008. (source code)

We present decidable logic programs for addition, multiplication, division with remainder, exponentiation, and logarithm with remainder over the unbounded domain of natural numbers. Our predicates represent relations without mode restrictions or annotations. They are fully decidable under the common, DFS-like, SLD resolution strategy of Prolog or under an interleaving refinement of DFS...

[The] attempts to define decidable multiplication even for the seemingly trivial unary case show the difficulties that become more pronounced as we move to binary arithmetic. We rely on a finite representation of infinite domains, precise instantiatedness analysis, and reasoning about SLD using search trees.

So you've read The Reasoned Schemer and were excited that, unlike Prolog's built-in operations, its arithmetic relations (over binary numbers) are fully relational. For example, addition can also be used for subtraction, and multiplication for factoring numbers and for generating all triples of numbers related by multiplication. Now comes this paper to explain the motivation behind some of the more arcane definitions needed to implement arithmetic in a fully relational style, and to prove their properties formally. The paper develops unary and binary arithmetic relations in pure Prolog (with no cuts, negation, or introspection).
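
To get a feel for what "fully relational" means, here is a minimal sketch of the unary case in Prolog, using the list-of-u encoding of naturals that appears later in this thread ([] is 0, [u] is 1, [u,u] is 2, and so on); the paper's binary relations are considerably subtler:

add([], N, N).
add([u|M], N, [u|K]) :- add(M, N, K).

% ?- add([u], [u,u], Z).     Z = [u,u,u]   (addition: 1 + 2 = 3)
% ?- add(X, [u], [u,u,u]).   X = [u,u]     (subtraction: 3 - 1 = 2)
% ?- add(X, Y, [u,u]).       enumerates 0+2, 1+1, 2+0

The same two Horn clauses answer all three queries, with no mode annotations or instantiation checks.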

LtU readers will also be interested in yet another embedding of pure Prolog into Haskell that the authors offer. It is not meant to be the fastest or most convenient Prolog implementation (it wasn't even intended to be an implementation of a logic system); rather, it was explicitly designed to be easy to reason about, and so to help prove certain properties of SLD and similar evaluation strategies. The main difference between DefinitionTree and other embeddings of Prolog in Haskell has to do with the generation of fresh names for logic variables. In DefinitionTree, name generation is not an effect, and naming is fully decoupled from evaluation. The evaluator no longer needs to carry a state for generating fresh names, and hence is easier to reason about equationally.


Backtracking arithmetic in Prolog

The following three predicates are found in many dialects of Prolog:

  • between/3: (+,+,?) succeeds if the third value lies between the first two, inclusive, provided the first is not greater than the second;
  • succ/2: (?,?) succeeds if the second value is the successor of the first; it can trivially be defined in terms of addition;
  • plus/3: (?,?,?); exactly equivalent to add/3 in the paper.

(For those not familiar with Prolog's mode notation: something like (+,?,?) means that a 3-place predicate can generate answers for any of the places marked "?"; the predicate will neither succeed nor fail if a logic variable is passed in a position marked "+".)

These are very useful predicates in Prolog, and though they are easy enough to define in standard Prolog, their absence from the standard counts as something of a defect.
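
For instance, a minimal definition of between/3 in standard Prolog might look like this (a sketch only; the versions shipped with actual systems handle more cases):

% between(+Low, +High, ?X): X is an integer with Low =< X =< High.
between(Low, High, Low) :- Low =< High.
between(Low, High, X)   :- Low < High, Low1 is Low + 1,
                           between(Low1, High, X).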

The predicates other than add/3 defined in the paper do not seem to be so useful, but the implementation is clean, it's always nice to see code justified by correctness arguments, and I enjoyed looking over the code. I'm at a bit of a loss to see the point of the paper, though: what makes this count as a scientific communication? Does it lie in the fact that their code works for several different resolution strategies? Are they trying to promote a certain resolution-strategy-insensitive coding style? I don't get it.

Mode-oblivious arithmetic

I don't think plus/3 (as found in, say, SWI Prolog) is our add/3:

?- plus(A,B,C).
ERROR: plus/3: Arguments are not sufficiently instantiated
?- plus(A,B,5).
ERROR: plus/3: Arguments are not sufficiently instantiated
?- plus(5,A,B).
ERROR: plus/3: Arguments are not sufficiently instantiated

A broader point that we want to make in this paper is that arithmetic need not be treated specially in a logic programming system, and need not require a predicate to care about the modes of its arguments. This point addresses an instance of the question: why extend logic programming to constraint programming?

Good answers, thanks

The FLOPS 08 programme looks very interesting & has a strong PC. I should think whichever of the authors delivers the paper will have an interesting reception.

In your paper, you write:

In your paper, you write: "Unfortunately, this flexibility has a price: CLP restricts the arithmetic domain to be finite...", but this is not the case. For example, check out library(clpfd) in SWI-Prolog:


:- use_module(library(clpfd)).
?- A + B #= C.
A = _G1441{inf..sup},
B = _G1442{inf..sup},
C = _G1445{inf..sup}.
?- A + B #= 5.
A = _G1751{inf..sup},
B = _G1752{inf..sup}.
?- 5 + A #= B.
A = _G1752{inf..sup},
B = _G1755{inf..sup}.

The atoms "inf" and "sup" denote negative and positive infinity, respectively. Compile SWI Prolog with GMP to see that you can reason over arbitrarily large integers with library(clpfd). For example:


?- 5 + X #= 7^7^7, X #< 10.
fail.

Furthermore

In addition to Mat's comments: the example he gives terminates in CLP, whereas using the add predicate from the paper in the same way fails to terminate (as explained in one of the paper's propositions):

add(X,Y,Z), Z=[u,u].
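% Diverges under SLD: add(X,Y,Z) by itself has infinitely many
% answers, and the test Z=[u,u] can only prune each answer after
% it has been generated, so backtracking never terminates.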

Having read through the paper, the approach is much less powerful than it first appears, because it rules out any variable sharing and considers only single instances of arithmetic operations rather than conjunctions.

So it would be impossible to do something like:

add(X,[u],X).                 % Impossible, but doesn't terminate
add(X, Y, Z), add(X2, Y2, Z). % Can't enumerate X+Y=X2+Y2

So you have to be very specific about what you mean by arithmetic to claim that a pure subset of Prolog can do the same as CLP. It is much more limited than CLP over programs composed of several arithmetic operations, rather than just single operations. And even over single operations, I think X+1=X would fail finitely in CLP (I haven't got a CLP library installed here to check this right now, so please correct me if I'm wrong)...

The mention of using the multiplication predicate to factor is vaguely reminiscent of Torben Mogensen's "Semi-Inversion of Guarded Equations", which was capable of transforming between a multiplication predicate written in a specific way and an automatically generated division predicate.

Correspondence to known sets of axioms for the natural numbers?

Is this a new axiomatization of the natural numbers? Or does it correspond to other (decidable) axiomatizations of the numbers? Is it more powerful than, e.g., Presburger arithmetic?

Did you look at HOL-Light, Isabelle, or ACL2?

Did you look at the implementation of arithmetic in the logics of the HOL-Light, Isabelle, or ACL2 theorem provers? I doubt that any satisfy all of your goal properties, but I suspect that each of them may satisfy more than you'd guess.

Stress on purity

To answer several questions, I'd like to point out two premises of the
paper. It is easy to write all-mode arithmetic relations in general:
by using var/1, we determine which of the arguments are instantiated,
and so use addition, or subtraction, or generation as appropriate. In
the latter case, we may choose to propagate the addition constraint or
to residualize: suspend the evaluation of the addition, making it a
constraint to be re-evaluated when the variables become sufficiently
instantiated. Mercury, for example, allows us to write different
clauses for different modes of a predicate. There is a good reason why
such a predicate must be marked unsafe in Mercury, and why frequent
use of var/1 is discouraged. In our paper, we specifically disavowed
any use of var/1, negation, or other such (reflective) facilities. We
deliberately limited ourselves to Horn clauses, and nothing else.
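
For contrast, the disavowed mode-dispatching style might look
something like this in ordinary Prolog (a sketch of the technique
described above, over machine integers; it is not code from the
paper):

% Impure all-mode addition: dispatch on instantiatedness using
% nonvar/1, the complement of var/1; the cuts avoid redundant answers.
impure_add(X, Y, Z) :- nonvar(X), nonvar(Y), !, Z is X + Y.   % addition
impure_add(X, Y, Z) :- nonvar(X), nonvar(Z), !, Y is Z - X.   % subtraction
impure_add(X, Y, Z) :- nonvar(Y), nonvar(Z), !, X is Z - Y.   % subtraction
% With fewer than two arguments instantiated, further clauses would
% have to generate candidate values or residualize a constraint.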

There is no doubt that theorem provers or constraint solving libraries
offer very sophisticated procedures for solving arithmetic
constraints. In any such procedures, to my knowledge, we must know
where all our variables are. But in our paper, we specifically do not.
We have no idea which arguments of our goals are or contain variables,
and we cannot tell. We disclaim any and all intensional analysis. Not
that there is anything wrong with it; we merely wanted to see how much
we can do without.

Our use of SLD resolution and its simple variants -- as widely
implemented and quite efficient -- was another self-imposed
restriction. One may quite rightfully say that we deliberately painted
ourselves into a corner. On the other hand, purity and minimalism,
however difficult to live with, often offer deep and general insights.

As to the relation to Presburger arithmetic: the latter can decide
arbitrary formulas within its domain. We can fully decide only the
base predicates -- addition, multiplication, etc. -- but not their
conjunctions, or conjunctions with equality. Conjunctions of the
base predicates let us express arbitrary Diophantine equations, which
are not generally decidable.
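
For example, conjunctions of the base predicates can state polynomial
equations such as x*x + y*y = z*z; since arbitrary Diophantine
equations can be written this way, no complete decision procedure for
such conjunctions can exist (by Matiyasevich's theorem). Borrowing
the paper's relation names for illustration:

% x^2 + y^2 = z^2, written as a conjunction of base predicates:
pythagorean(X, Y, Z) :-
    mul(X, X, XX),
    mul(Y, Y, YY),
    add(XX, YY, ZZ),
    mul(Z, Z, ZZ).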

Finally, the goal of this project was not to be useful but to be
insightful (although, quite surprisingly to us, a similar development
for Curry proved to be practically useful and has become part of a
Curry implementation written in Haskell).

Purity and Presburger

Excellent answer to the question of motivation: I now see properly where the paper is coming from.

As to the relation to Presburger arithmetic: the latter can decide
arbitrary formulas within its domain. We can fully decide only base
predicates -- addition, multiplication, etc. -- ...
This is true, but perhaps risks misleading those not familiar with the logical theory underlying resolution based languages. So here follows a quick tour that might help with heckenpenner's question...

Theories like Peano arithmetic (PA) and Presburger arithmetic (Pres) are theories of first-order logic, and so have a deceptively sophisticated language of quantifiers, which Prolog lacks. PA sits in a nice place: it is strong enough to talk about all the operations of arithmetic; its limitation lies in that it cannot prove the totality of all actually total functions. Pres is much weaker: it has no idea of the existence of multiplication, let alone higher functions.

Prolog can be seen as weaker in some respects than theories such as Pres, in that it has no first-class quantifiers, but there is some metamathematical wizardry in the shape of skolemisation which tells us how to formulate semi-decision procedures for all formulae of first-order logic (FOL), and likewise for theories in FOL whose axioms we can translate into Horn clauses (with additional techniques, all first-order theories can be so translated). The indirect nature of this connection makes informal equivalences problematic to state (in the sense that they will fail to fix crucial details and so will tend to mislead), and I'm sorry to say I don't know of a good expository overview of the theory.

Further Analysis

Very interesting and insightful responses to the questions and comments so far, which certainly clear up the motivation for the paper. Although what you've done so far is interesting research, I'm intrigued by your description of seeing "how far you can go" with Horn clauses.

Possibly an open question, but given my examples above, do you think it would be possible to handle them by relaxing your stress on purity, but without introducing constraints through deferred evaluation? In particular, if you allow negation-as-failure, could you rule out (with finite failure) goals of the form add(X,[u],X)? Obviously you could rule out such a single instance of the infinite set of tuples that are not in the add relation, but what is your opinion on how far you could go in ruling out families of invalid tuples? I.e., can pure Prolog plus negation-as-failure model addition exactly? If it can, then is \+ a sledgehammer in this case, or the minimum needed to model the relation?

Retreating from pure Horn clauses

That's the big question. It appears unlikely that pure Horn clauses will lead to something comparable in expressive power to Constraint Logic Programming. So, we need more power. But can we still avoid introspection (var/1) and unsoundness (Mercury modes)? Options include negation, cut, or moving to a more powerful base language with quantifiers and/or lambdas.

Also, what subset of problems could a more complete solution handle, given the undecidability of the general case?

How far we can push the extensional approach

http://lambda-the-ultimate.org/node/2697#comment-40470
http://lambda-the-ultimate.org/node/2697#comment-40512

This is indeed a very good question. Negation-as-failure of course
helps -- but it helps too much, because Prolog's built-in negation
predicate immediately gives us var/1 (spelled vr/1 below), using the
well-known trick:

vr(X) :- \+ \+ (X=1), \+ \+ (X=2).
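% vr(X) succeeds exactly when X is an unbound variable: only an
% unbound X can unify with both 1 and 2, and the double negation
% succeeds without leaving any bindings behind.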

When arithmetic predicates are invoked with arguments that share
logic variables, we rather soon, in my experience, come across a
recursive invocation of a goal whose arguments differ only in the
names of free variables from the arguments of the original
invocation. That indicates that tabling will give us finite failure
in this case. I don't know if tabling will give us finite failure in
all the cases where sharing of variables across arguments would have
led to divergence, but it seems quite likely. Tabling and the XSB
system thus appear the most promising extension to look at.
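
To illustrate with the unary addition sketched earlier in the thread:

% ?- add(X, [u], X).
% The only applicable clause is add([u|M], N, [u|K]) :- add(M, N, K),
% which unifies X with both [u|M] and [u|K], forcing M = K and
% leaving the subgoal
%     add(M, [u], M)
% -- a variant of the original goal up to renaming. SLD loops here
% forever, whereas a tabled evaluation (as in XSB) detects the
% variant call and fails finitely.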

Of course, one should also check how much more can be implemented in
lambda-Prolog (where we do have quantifiers and more general
formulas), whose variable-scoping mechanism offers a general way to
prevent the unsoundness often occurring with negation (that is, when
the negated formula binds variables that were accessible outside the
negation). Alas, lambda-Prolog doesn't seem to be actively developed,
although the web page http://teyjus.cs.umn.edu/ does show a quite
recent release, after a two-year hiatus. Hopefully the development
will continue.

Bedwyr

Does this suggest, perhaps, that we should be paying more attention to Bedwyr?

Is logic programming macro-expressible?

I haven't (yet) read The Reasoned Schemer, which extends Scheme with logical constructs, or looked at the logic programming monads for Haskell. Do these libraries fully macro-express logic programming features in these languages, or do they only simulate them?

For example, Scheme can macro-express coroutines in terms of call/cc, so they don't have to be included as a language primitive. On the other hand, Haskell cannot macro-express call/cc, only simulate it locally using the CPS monad. Adding coroutines as primitives to Scheme would not increase expressiveness, but adding true first-class continuations to Haskell would increase expressiveness.

To gain the full expressiveness of logic programming in a functional language do you have to add primitives - as Mercury and Curry do - or do libraries suffice?

Yes, logic programming is macro-expressible.

In fact, there are three macros that support all the programs in the
draft second edition of "The Reasoned Schemer" (except for chapter 10,
which hasn't changed from the first edition); the code is available on
request, but the second edition is not. The three necessary macros
(along with a few helper macros, which can easily be written with
functions) are run (an interface operator, very similar to the one in
the first edition, but tighter), fresh (renamed for the moment to
exists), and conde, which is an interleaving variant of conde from the
first edition of "The Reasoned Schemer". Because of this interleaving,
not all lists of answers are identical, but generally corresponding
lists contain the same values.

Welcome to LtU, Prof. Friedman

Welcome to LtU, Prof. Friedman!

LtU owes its beginning to EOPL[*], still the definitive PL textbook in my opinion, so I am particularly thrilled when the authors of EOPL post here.

[*] Newcomers may not know that I began LtU when I was developing a PL course, which I decided to base on EOPL2.

Available online?

Some googling turned up this:
http://www.cs.indiana.edu/classes/c311/newestlatest/mk.scm
Is that the same code?

Almost

Hi Sjoerd!

The code you found includes disequality constraints, which are useful but complicate the implementation. You may wish to begin with this code:

http://www.cs.indiana.edu/classes/c311/newmkcode/mk.scm

Cheers,

--Will