future history of logic programming

A Possible Future History of Logic Programming [PDF, 98KB] by M.H. van Emden:

When the first edition of Senner's history came out in 2020, it was widely praised for its compelling view of the development of logic programming over many decades. Reviewers admired its broad perspective but deplored its lack of historical detail. Since then, several collections of papers have made their way from estates, via auction, to various libraries. Senner has taken this opportunity to incorporate these recent findings in a new edition.

What has not changed in the new edition, and this is what I take issue with, is that Senner presents the development of logic programming as a relentless forward march towards an inevitable outcome. Of course, it is true that not a decade went by without some of the building blocks being fashioned that we take for granted as part of the majestic edifice that now dominates the landscape of programming.

How did Prolog achieve world domination? (I thought Forth was going to win for sure?) Easy. Through the following contributions of the 1990s to logic programming:

  1. XML. The tree as universal data structure.
  2. Execution via virtual machine.
  3. Constraint programming (a small sketch follows this list).
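
To make item 3 concrete, here is a minimal constraint-programming sketch, assuming SWI-Prolog's clpfd library (the classic SEND + MORE = MONEY puzzle; my illustration, not van Emden's):

  :- use_module(library(clpfd)).

  % SEND + MORE = MONEY: assign distinct digits to the letters so
  % that the column arithmetic works out. The solver propagates
  % constraints before and during the search done by label/1.
  send_more_money([S,E,N,D,M,O,R,Y]) :-
      Vars = [S,E,N,D,M,O,R,Y],
      Vars ins 0..9,
      all_different(Vars),
      S #\= 0, M #\= 0,              % no leading zeros
                S*1000 + E*100 + N*10 + D
              + M*1000 + O*100 + R*10 + E
      #= M*10000 + O*1000 + N*100 + E*10 + Y,
      label(Vars).

The query send_more_money(L) yields the unique solution L = [9,5,6,7,1,0,8,2], i.e. 9567 + 1085 = 10652.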

What Senner's History of Logic Programming (MIT Press, 2020) fails to do is pinpoint the year in which revisionist historians of logic programming were decisively exposed for the frauds that they were. (Kidding.)


using WAM to represent XML

BTW, the above paper mentions Expeditious XML Processing [PDF, 227K, scholar.google.com] by Kelvin Yeow, Nigel Horspool, and Michael Levy:

The efficiency of an XML processor is highly dependent on the representation of the XML document in the computer's memory. We present a representation for XML documents, derived from Warren's representation of Prolog terms in WAM, which permits very efficient access and update operations. Our scheme is implemented in CXMLParser, a non-validating XML processor. We present the results of a performance comparison with two other XML processors which show that CXMLParser meets its high-performance goal.
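
To see the "tree as universal data structure" idea in miniature, here is one common way to render an XML fragment as a Prolog term, so that the document tree is literally a term tree (a sketch of the general idea, not CXMLParser's actual scheme; the element/3 shape resembles what SWI-Prolog's XML parser produces):

  % <book lang="en"><title>Foo</title><author>Bar</author></book>
  % represented as element(Tag, Attributes, Children):
  doc(element(book, [lang=en],
          [element(title,  [], ['Foo']),
           element(author, [], ['Bar'])])).

  % Querying the tree is just unification plus list traversal:
  title_of(element(book, _, Children), Title) :-
      member(element(title, _, [Title]), Children).

The query doc(D), title_of(D, T) binds T = 'Foo'.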

Time to become a contributing editor...

I wonder?

rights and obligations

With rights come responsibilities :)

I only post once in a blue moon... Wouldn't be able to handle the pressure of having to contribute regularly. Besides, I wouldn't be able to flame Paul Snively anymore if I were an editor. (I do have an interesting piece for him that I can't quite find the time to write up...)

Ha? 99% of the editors post

Ha? 99% of the editors post once in a blue moon (if that often), and nobody is allowed to flame around here as far as I remember...

There's no pressure

I've been an editor for over a year now and I've only posted four stories so far (an improvement of 200% over my pre-editor contribution rate ;).

Actually...

...when have you ever flamed me? I seem to recall you challenging some of my assumptions with extensive supporting citations and quotes, which I think is the whole point of being here.

As for regularity, I certainly don't regularly find things to post here, much to Ehud's chagrin, I think. Part of that is due to work demands, but part of it is also that I'm trying to spend more time doing than writing about it. Though you might believe it a waste of time :-) I'm still learning how to use Coq in general, and specifically how to write compilers using Coq.

I'm also intrigued, though, by logic programming, particularly models of action and state in logic: the Event Calculus, for example, or the combination of F-Logic, HiLog, and Transaction Logic found in Flora-2, which crucially depends on tabled Prolog as found in XSB.

Both seem quite powerful. Interestingly, it's become popular to implement the Event Calculus by making the closed-world assumption, reducing the first-order logic to propositional logic, and handing the whole world off to a SAT solver... and SAT solvers have become astonishingly efficient these days, particularly the GPU-based ones. So I am inclined to agree with the author of the future history that the people of 2020 will look back on this period as a watershed in logic programming.
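
For the curious, here is a drastically simplified Event Calculus fragment in Prolog (a toy illustration of mine, not from the thread). Negation as failure supplies exactly the closed-world assumption mentioned above; the SAT-solver route instead grounds such clauses over a finite time horizon and hands the propositional result to the solver.

  % A fluent F holds at time T if some earlier event initiated it
  % and no intervening event terminated it. The \+ (negation as
  % failure) is where the closed-world assumption does its work.
  holds_at(F, T) :-
      happens(E, T0), T0 < T,
      initiates(E, F),
      \+ clipped(F, T0, T).

  clipped(F, T0, T) :-
      happens(Ev, T1), T0 < T1, T1 < T,
      terminates(Ev, F).

  % Toy narrative: a light switched on at time 1 and off at time 5.
  happens(switch_on, 1).
  happens(switch_off, 5).
  initiates(switch_on, light_on).
  terminates(switch_off, light_on).

Here holds_at(light_on, 3) succeeds and holds_at(light_on, 7) fails.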

Concurrency Too

I'd like to add one more thing that I think will contribute to logic programming in the near future. It wasn't a contribution of the nineties, but it was in place by 2004: the need for concurrent execution.
Around 2004 the mainstream processor manufacturers reached the point where they could no longer squeeze more MIPS (or GHz) out of their processors. Moore's law was still working (in terms of transistors per mm^2), but not in the (twisted?) form of processing speed. Instead of speed, these manufacturers shifted to a "multi-core" approach, offering more parallelism instead.
That poses a problem for software developers, as their programming languages are not fit to handle parallelism (see this thread). Procedural/imperative languages (the kind most commonly used by those who seek performance) are inherently problematic to parallelize. Logic programming, on the other hand, has (at least) two natural approaches to parallelism: AND-parallelism and OR-parallelism (see the sketch below). This paradigm, alongside functional programming, can thrive under these circumstances. I think that concurrency, alongside other things, can help bring logic programming into the spotlight.
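
To illustrate the two forms (a sketch only; standard Prolog runs this sequentially, but parallel systems of the era, such as the OR-parallel Aurora and the AND-parallel &-Prolog, exploited exactly this structure):

  % OR-parallelism: the two clauses for route/2 are independent
  % alternatives, so separate workers can explore them at once.
  route(X, Y) :- flight(X, Y).
  route(X, Y) :- flight(X, Z), route(Z, Y).

  % AND-parallelism: the two subgoals below share no variables,
  % so they can be solved simultaneously.
  itinerary(P1, P2) :- route(amsterdam, P1), route(lisbon, P2).

  flight(amsterdam, paris).
  flight(paris, lisbon).
  flight(lisbon, madrid).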

Parallel logic programming

Parallel logic programming was a craze a long while ago, and many Prolog variants were introduced. My general impression was that people lost interest, and that other approaches to concurrency gained momentum.

It is amusing to recall that Erlang was prototyped as an interpreter written in Prolog. It's an interesting anecdote, if nothing else.

The Nineties, Again...

I did see that most of the papers about parallel Prolog were from (at the latest) the 1990s, and that there is (practically) nothing new in the field. I think one of the reasons is that the world has shifted, and people are using logic programming for different things than they did before. In the eighties and (probably) the nineties, AI was pretty strong in its classical approach of using Lisp and Prolog and such. Nowadays, logic programming is mainly used in server environments (agents and such), where parallelism is achieved by simply running several different tasks at a time...
Anyway, as a strong believer in the power of logic programming (and I hope to post some thoughts about this soon), I believe that what was true then is still true now, just waiting for the right application :-)

Trees & Logic Languages -- n00b Question

So, what does the tree as a universal datatype have to do with logic programming? It's been my impression that the tuple was the big dog in these languages, though fancy trees made an appearance in implementations.

How do logic languages allow direct interaction with trees?

Trees

Terms built from unevaluated function symbols are labeled trees. For example, the term p(r(s),q(t,u)) is:

 
  p
 / \
r   q
|  / \
s t   u

Unification can be thought of as querying trees. And unification without occurs-check is, as the author points out, unification over rational trees.
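
For example, at the toplevel of a typical modern system such as SWI-Prolog (using the term from the diagram above):

  ?- T = p(r(s), q(X, u)), T = p(L, q(t, _)).
  T = p(r(s), q(t, u)),
  X = t,
  L = r(s).

  % Without the occurs-check, unification happily builds a cyclic,
  % i.e. rational, tree:
  ?- X = f(X).
  X = f(X).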

An interesting spin on logic

An interesting spin on logic programming is to think about it in terms of the interpreter(s) needed to evaluate it. From this point of view, logic programming looks a lot like other forms of interpreters or evaluators. A few lines of code can be the difference between backtracking and ordinary functional evaluation. If one lets the interpreter drive one's thinking, instead of the logic theory, a lot of interesting and practical things become possible.
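
The classic "vanilla" meta-interpreter makes the point concrete (a standard textbook sketch, not the commenter's code): Prolog's backtracking evaluation fits in three clauses, and tweaking them, say to reorder clause selection or thread extra state, changes the evaluation strategy.

  % Vanilla meta-interpreter: solve/1 re-implements Prolog's own
  % evaluation, inheriting backtracking from the host. (In most
  % systems, clause/2 requires the queried predicates to be dynamic.)
  solve(true).
  solve((A, B)) :- solve(A), solve(B).
  solve(Goal)   :- clause(Goal, Body), solve(Body).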

I will go out on a limb and invoke a little computer science: the objective is to guarantee first-order behavior, and to limit second-order behavior to controlled and "provable", or at least "stable", trajectories, as in non-well-founded set theory. Much of this theory has been invoked right here on LtU before.

Also, there is another thread on this page that is relevant here. All programming can be seen as abstract interpretation, and logic programming is no different: it expresses semantics directly in a target domain. If all programming were done this way, the issues of provability, security, etc. would be much diminished, because the program itself would be clear and meaningful at its natural abstraction level. This is useful for modeling purposes (abstract interpretation), and it is also executable.

xref

Hewitt's History of Logic Programming (mentioned in the main post) is discussed here.