Linear Logical Algorithms

Linear Logical Algorithms, Robert J. Simmons and Frank Pfenning, 2008.

Bottom-up logic programming can be used to declaratively specify many algorithms in a succinct and natural way, and McAllester and Ganzinger have shown that it is possible to define a cost semantics that enables reasoning about the running time of algorithms written as inference rules. Previous work with the programming language LolliMon demonstrates the expressive power of logic programming with linear logic in describing algorithms that have imperative elements or that must repeatedly make mutually exclusive choices. In this paper, we identify a bottom-up logic programming language based on linear logic that is amenable to efficient execution and describe a novel cost semantics that can be used for complexity analysis of algorithms expressed in linear logic.

In my last post, I linked to a paper by Ganzinger and McAllester about specifying algorithms as logic programs, and a) admired how concise and natural the programs were, and b) was sad that the logic programs used some "non-logical" operations, such as deletion.

So, what does it mean for an operation to be "non-logical", and why is it a bad thing? Roughly speaking, you can think of the analogy: non-logical operations are to logic programs what impure operations are to functional programs -- they are features which break some parts of the equational theory of the language. Now, the Curry-Howard correspondence for functional programs says that types are propositions, and programs are proofs. It turns out that a different version of this correspondence holds for logic programs: in logic programming, a set of propositions is a program, and the execution of a program corresponds to a process of proof search -- you get a success when execution finds a proof of the goal.

When you have nonlogical operations in your logic programming language, you've introduced operators that don't correspond to valid rules of inference, so even if your logic program succeeds, the success might not correspond to a real proof. Deletion of facts from a database is a good example of a nonlogical operation. Ordinary intuitionistic and classical logics are monotonic: deduction from premises can only establish new facts; it can never disprove things you already knew to be true. Since deletion removes facts from the set of things you know, it can't have a logical interpretation in classical/intuitionistic logic.

However, it turns out that not all logics are monotonic, and in this paper Simmons and Pfenning show that if you take the language of propositions to be a fragment of linear logic, then all of the operations that Ganzinger and McAllester use actually do have a nice logical interpretation.

Why delete facts?

Why delete facts? Is it the logical equivalent of garbage collection in functional languages?

Facts tend to be conditional...

...so when the conditions change, the fact no longer holds. There have been various strategies for addressing this observation: truth maintenance, circumscription, and transaction logic, to name but a few. In AI and logic programming, these all fall under the rubric of "non-monotonic" reasoning, which cognitive philosophers call "defeasible reasoning." These Logic Programming and Non-Monotonic Reasoning '07 slides (PPT) provide an excellent historical overview, while the work of John L. Pollock represents, AFAICT, the current state of the art in cognitive philosophy, particularly as brought to bear upon the questions around artificial intelligence as a practical endeavor.

A fact is

a claim from which, on the current evidence, it would be perverse to withhold provisional assent. The current evidence may change.

Deletion is needed for efficiency

So, the way a forward logic programming language works is that you start with a database of facts and a collection of rules of the form "if FOO, then BAR". The engine then nondeterministically selects a rule and checks whether its hypotheses (FOO) are in the database. If they are, it adds the consequences (BAR) to the database and repeats, until no more consequences can be added. Now, note that the engine has to take a rule and look through the database for matching facts. This means that as the database grows, the time it takes to fire a rule goes up, because the engine has to search through more facts before deciding whether a rule matches.
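
To make that loop concrete, here is a minimal sketch (in Python) of the kind of naive saturation engine described above. The representation of facts and rules is made up for illustration; a real engine would index the database rather than rescan it on every pass.

    # A naive saturation loop: keep firing rules until no new fact appears.
    def forward_chain(facts, rules):
        db = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                for consequence in rule(db):   # everything the rule can derive now
                    if consequence not in db:  # only genuinely new facts count
                        db.add(consequence)
                        changed = True
        return db

    # Example rule: transitive closure of an edge relation.
    def transitivity(db):
        return {("edge", x, z)
                for (p, x, y) in db if p == "edge"
                for (q, y2, z) in db if q == "edge" and y2 == y}

    edges = {("edge", "a", "b"), ("edge", "b", "c")}
    print(forward_chain(edges, [transitivity]))
    # saturated database: edge(a,b), edge(b,c), and the derived edge(a,c)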

If there are a bunch of redundant facts in your database, then you can end up doing a lot of useless work. If you can remove facts from the database as they become irrelevant, you prevent the engine from ever considering those useless paths.

For example, suppose we have a database with facts of the form "item(x)", and we want to write a program that collects all of them into a list (in some unspecified order). So we might add the fact that "list([])" holds, and then run the following one-rule program:

   item(x)
   list(xs)
------------------- 
   list(x::xs)
   delete list(xs)
   delete item(x)

So, the way to read this is that if we find "item(x)" and "list(xs)" in the database, we delete both of those facts and replace them with the new fact "list(x::xs)". This guarantees that the algorithm runs in linear time.
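
To see why this is linear, here is a small sketch (in Python, with made-up names) that simulates the database as mutable state: each firing consumes one item(x) fact and the current list(xs) fact, so every item is handled exactly once. Cons cells are represented as (head, tail) pairs, with None standing for [].

    def collect(items):
        db_items = list(items)      # the item(x) facts
        db_list = None              # the fact list([])
        while db_items:             # the rule fires while its premises match
            x = db_items.pop()      # delete item(x)
            db_list = (x, db_list)  # delete list(xs), add list(x::xs)
        return db_list              # quiescence: no item(x) facts are left

    print(collect([1, 2, 3]))       # one possible order, e.g. (1, (2, (3, None)))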

If we couldn't delete facts, then we'd have to write a more complex program:

   item(x)
   list(xs)
   not_member(x, xs)
----------------------
   list(x::xs)

So at each step we have to run a linear-time membership test, which means that even in the best case, where the search never picks the wrong list() fact, we still have quadratic complexity; in the worst case we end up generating all n! permutations of the list.
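
Here is a corresponding sketch of what saturating the no-deletion program does (tuples stand for lists; the names are again illustrative): every intermediate list(xs) fact stays in the database, the not_member test is paid on every firing, and the saturated database ends up containing every ordering of every subset of the items.

    def saturate(items):
        db = {()}                        # the fact list([])
        changed = True
        while changed:
            changed = False
            for xs in list(db):          # any old list(xs) fact can still be extended
                for x in items:
                    if x not in xs:      # the linear-time not_member test
                        new = (x,) + xs  # list(x::xs)
                        if new not in db:
                            db.add(new)
                            changed = True
        return db

    print(len(saturate([1, 2, 3])))      # 16: all orderings of all subsets of {1,2,3}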

If by efficiency, you mean termination

Another relevant point is that we're often interested in deductive databases where rules are run to saturation - computing the least set of facts closed under the rules. If we don't delete facts (or use linear resources) in the program above, we face a tradeoff. With the not-member test, we cannot describe lists containing duplicate elements (say, a list with two "3"s, six "1"s and one "2"), though there are ways around this problem. Without the not-member test, we face the possibility that saturation is never reached, whereas the program with deletion will reach "saturation."

I use scare quotes because, as described in the paper Monadic Concurrent Linear Logic Programming, the point where no more rules can fire is different in the presence of linear resources, and is referred to there as quiescence.

One possible reason

Some comments in the paper make it look like a nice way to handle mutual exclusion in a logic language. In Prolog, if you want to handle negation or if-then-else constructs, you need to use cuts to prevent part of the SLD tree from being searched.

The ephemeral facts look like pebbles (thinking of Cook's 3-SAT proof), so they define mutual exclusion. Each one has a cut wrapped up inside it. When you produce an ephemeral fact as the result of an inference, you are creating a resource. Different inference rules can then consume that resource, but they cannot share the pebble - consuming it is an atomic operation that locks in (commits to) the proofs that follow from that derivation.
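
As a toy illustration of that reading (the names and encoding are mine, not the paper's): treat a single ephemeral fact as a token that two rules compete for, roughly token -o chose_a and token -o chose_b. Whichever rule fires consumes the token, so the two outcomes are mutually exclusive without any explicit cut.

    import random

    def step(db):
        if "token" not in db:
            return False                    # quiescence: nothing left to consume
        db.discard("token")                 # consume the linear resource (the pebble)
        db.add(random.choice(["chose_a", "chose_b"]))  # commit to one alternative
        return True

    db = {"token"}
    while step(db):
        pass
    print(db)   # either {'chose_a'} or {'chose_b'}, never both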

The background logic in Section 3.2 looks more powerful than this, but this would be an interesting starting point for a language as it may make reasoning about programs with mutual exclusion easier.