History of Logic Programming: What went wrong, What was done about it, and What it might mean for the future

Carl Hewitt is speaking tomorrow at Stanford's CSLI CogLunch on the history of logic programming.

A paper is here, so LtU readers can offer their perspectives on the argument.



Both links are the same. Is the second link meant to be this?

Yes. Sorry about that.


A much better rendition of

A much better rendition of the page is available here. The "What went wrong"/"What was done about it" boxes were missing.


I recently read the Wikipedia entry, which sounded a lot like Hewitt's page. I now see that there was a bit of an edit war going on.

Btw, Logic, Programming and Prolog (2ed) by Nilsson & Maluszynski is freely available as a pdf now.

Uniform provability

A point that needs clarification: Hewitt's argument for the computational limitations of logic programming in the "Is Logic Programming computationally universal?" section is a criticism of the Prolog paradigm, based on the computational limitations of uniform provability for Horn clauses. Other logic programming paradigms, based on different underlying conceptions of deduction, can stand in a different relationship to concurrency.

A good example of this is Dale Miller's observation that linear logic supports a notion of uniform provability, and his resulting design of Forum. Cf. the LtU story Uniform Proofs as a Foundation for Logic Programming, and Dale Miller's paper Forum: A Multiple-Conclusion Specification Logic.

Postscript: I should not forget the work done by Pfenning's group on Concurrent LF, which introduces the ability to reason about concurrency through a monadic abstraction. It deserves its own story.

LP vs FP

Hewitt's comparison of the achievements of LP and FP reminded me of Tom Schrijvers' A Wake Up Call for the Logic Programming Community, which compares the health of the Prolog community unfavourably to that of Haskell, and which was published as an editorial last winter in the ALP newsletter. Schrijvers seems optimistic that the problem is mostly a social one, and not a problem with Prolog.

Edit: Fixed Tom Schrijvers' name.

Schrijvers seems optimistic

Schrijvers seems optimistic that the problem is mostly a social one, and not a problem with Prolog.

What do others think?

He's wrong

I think that Prolog has lasting interest as a teaching language, and is one of the handful of PLs that everyone with a broad interest in programming should learn, but the limitations of its basic computational mechanism make it uncompetitive as a general PL. I find that Haskell, via the list-of-successes paradigm, is usually a better language than Prolog itself for writing the kind of programs Prolog is good at.
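The "list of successes" idea mentioned here (modelling a nondeterministic computation as a lazy sequence of answers, so failure is just an empty sequence) can be sketched with Python generators. This is only an illustration of the paradigm, not Prolog's actual resolution mechanism:

```python
# A sketch of the "list of successes" idea: nondeterministic search
# modelled as a lazy sequence of answers, using Python generators.

def pairs_summing_to(total, candidates):
    """Yield every (x, y) from candidates with x + y == total."""
    for x in candidates:
        for y in candidates:
            if x + y == total:
                # one "success"; backtracking is just the loops
                # resuming when the consumer asks for more answers
                yield (x, y)

solutions = list(pairs_summing_to(5, range(6)))
# Each element corresponds to one successful branch of the search tree.
```

Because the generator is lazy, asking for only the first answer explores only as much of the search tree as needed, much like Prolog's first solution.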

I think the LP community should be seriously looking for a successor. There's no shortage of candidates.

Towards a WAM Model for Lambda Prolog

Nadathur, Jayaraman (1989). Towards a WAM Model for Lambda Prolog. TR-CS-1989-11, Duke.

Issues concerning the implementation of the logic programming language $\lambda$Prolog are investigated. This language extends Prolog by providing for higher-order notions and incorporating a richer set of search primitives. Techniques must therefore be devised for implementing these two new aspects. Extensions to the Warren Abstract Machine (WAM) are proposed in order to deal with the several significant new problems that arise in conjunction with the former aspect. A representation for $\lambda$-terms is outlined that facilitates the examination of their internal structure while simultaneously permitting reductions to be performed easily. Mechanisms are discussed for implementing higher-order unification - an operation that is much more complex than first-order unification - within the backtracking paradigm of Prolog. Instructions for creating $\lambda$-terms and invoking operations within unification are also provided. The nature of compiled code is illustrated through examples, and it is argued that the enhancements to the WAM preserve its behavior over first-order programs. The ideas presented in this paper are the basis of an implementation effort that is currently underway.

This abstract suggests that lambda-Prolog would not be a painful direction for the LP community to take, though I have not read the paper (not available online), and I wonder about the phrasing "it is argued that the enhancements to the WAM preserve its behavior over first-order programs".

One of the authors, Nadathur, is involved in the current Teyjus implementation of lambda-Prolog.

I shall email the authors, to see if the paper is available electronically.

A Treatment of Higher-order Features in Logic Programming

Gopalan Nadathur pointed me to this sequel, namely Nadathur (2002). A Treatment of Higher-order Features in Logic Programming:

The logic programming paradigm provides the basis for a new intensional view of higher-order notions. This view is realized primarily by employing the terms of a typed lambda calculus as representational devices and by using a richer form of unification for probing their structures. These additions have important meta-programming applications but they also pose non-trivial implementation problems. One issue concerns the machine representation of lambda terms suitable to their intended use: an adequate encoding must facilitate comparison operations over terms in addition to supporting the usual reduction computation. Another aspect relates to the treatment of a unification operation that has a branching character and that sometimes calls for the delaying of the solution of unification problems. A final issue concerns the execution of goals whose structures becomes apparent only in the course of computation. These various problems are exposed in this paper and solutions to them are described. A satisfactory representation for lambda terms is developed by exploiting the nameless notation of de Bruijn as well as explicit encodings of substitutions. Special mechanisms are molded into the structure of traditional Prolog implementations to support branching in unification and carrying of unification problems over other computation steps; a premium is placed in this context on exploiting determinism and on emulating usual first-order behaviour. An extended compilation model is presented that treats higher-order unification and also handles dynamically emergent goals. The ideas described here have been employed in the Teyjus implementation of the Lambda Prolog language, a fact that is used to obtain a preliminary assessment of their efficacy.

The paper seems to be an evolution of the ideas in the earlier TR, though I have not looked closely enough to be clear. Gopalan says that the new architecture of Teyjus 2.0 represents a further evolution beyond this paper, which has not yet been written up.

Attempting to generalize Prolog led to 5th Generation failure

There is a section in my paper "Middle History of Logic Programming" on how the attempt to generalize Prolog was a principal cause of the failure of the Japanese 5th Generation Project.

but that the limitations of

but that the limitations of its basic computational mechanism make it uncompetitive as a general PL

I agree. In some sense this is a shame since the basic principle is so appealing.

To make the discussion slightly more controversial: I think one of the reasons is that the control structures are opaque (or implicit, if you prefer). While several paradigms claim that they can "save" programmers from the tedium and errors that come from dealing explicitly with control (yes, I am talking to you, FP-ers out there), the experience so far of many programmers, including people who have used several of these languages quite a bit, is that this raises various problems (including making some types of reasoning harder, even if others are made easier by this language design decision), is limiting in practice, and is hard on beginners.

Case in point: Laziness.

It seems that the right balance between this lofty goal and other demands is still to be found.

(I hasten to add that this does not mean that imperative languages as found today are "better" than FP languages as found today, only that both paradigms will need to continue evolving).

Prolog has sent out its sons and daughters...

compares the health of the Prolog community unfavourably to that of Haskell

The Prolog community has had an enormous influence on computer science. Constraint programming is one direct influence, but the main influence has been indirect. Many people who were once logic programming researchers have moved into other areas of computer science and made major impacts. Offhand, here are a few: Tim Lindholm (Quintus Prolog developer, was one of the key designers of the Java Virtual Machine at Sun), Ehud Shapiro (originator of concurrent Prolog, is a major figure in other areas such as DNA computing), Vijay Saraswat (originator of concurrent constraints, is a major figure in parallel languages), Ian Foster (designer of Strand concurrent logic language, is the originator of Grid computing), Pascal Van Hentenryck (pioneer of constraint logic programming, is now a major figure in combinatorial optimization), Seif Haridi (designer of AKL deep guard language, is a major figure in distributed and peer-to-peer computing), Koen De Bosschere (now working in parallelism and computer architecture), Manuel Hermenegildo (a pioneer in And-parallel Prolog and abstract interpretation for Prolog, now a key figure in security with abstraction-carrying code), Peter Van Roy (myself, developer of Aquarius Prolog, coauthor of CTM and also working in distributed computing). This list could be extended greatly; I just wrote down the first names that came to mind.

I think it can fairly be said that the logic programming community has influenced many areas of computer science out of all proportion to its small size.

I think it can fairly be

I think it can fairly be said that the logic programming community has influenced many areas of computer science out of all proportion to its small size.

No one has even hinted otherwise; that was not, and is not, the issue being discussed.

Logic programming paradigm vs Bob Kowalski

This is a rather long-winded presentation of the history of logic programming (the basis of Prolog) aimed at singling out Bob Kowalski as an important historical figure who today stands in the way of making Logic Programming a more general programming paradigm, in the manner of Functional Programming. See the third footnote, related to the already-mentioned Wikipedia edit war.

I would like to see the other side of the argument.

future history of logic programming

Hewitt's History has already been mentioned in the post on the Future History of Logic Programming.


I have to say that I found the paper long on polemics and short on suggestions as to how "logic programming," however defined, should advance. I think the Pure, Declarative, and Constructive Arithmetic Relations thread has some excellent suggestions: first of all, Kanren is, IMHO, important. As I commented here, I also think that XSB + FLORA-2 are worth attention. Oleg Kiselyov also points to XSB in the "Pure, Declarative, and Constructive Arithmetic Relations" thread linked above, where I also ask about the significance of Bedwyr as a possible successor to Lambda Prolog.

Let's also not forget that Oz is an excellent logic programming language, among other kinds of programming language. :-)

I still believe that there's gold buried somewhere in the materials linked to from this comment regarding the Basic Intuitionistic Logic of Proofs and the Reflective Lambda Calculus.

Finally, as a working programmer in addition to an amateur PLT theorist, let me just say that I see an increasing number of applications being developed where the key issue is reasoning about objects in the real world, with all of their properties, and how they change over time based on both user actions and events external to the system. I can tell you from direct experience that using the imperative, object-oriented paradigm to address such systems is excruciating and extremely error-prone. Logic programming, especially with some explicit support for modeling basic assumptions about the world as in the Event Calculus or framing state change within a logical context as in F-Logic/Transaction Logic as found in FLORA-2, has a great deal to offer here, and is often, IMHO, overlooked due to limited familiarity with the progress already made in logic programming since the development of Prolog in the 1970s. So I see both existing industrial opportunities and more theoretical opportunities to advance the current state of the art.

reasoning about objects in the real world, and how they change

Thank you for sharing your experience.
Object-oriented programming itself grew out of modelling and simulation; is it really useless in that context? What are the main issues with it?

This is a really good question...

...that I intend to address later. I just want to put this placeholder here for now. :-)

OO and simulation

One starting point: OOP is in many ways little to no help with continuous simulation. It's also not necessarily all that helpful with a discrete event sim if (for example) all messages really want to go back via an event queue - wouldn't you rather write recipient.message(parms) than queue.add(message(recipient, parms))?
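The awkwardness described here can be made concrete with a toy discrete-event simulator in which every interaction must be reified as a queued event rather than written as a direct recipient.message(parms) call. All names below are illustrative:

```python
# Toy discrete-event simulation: messages go through a central event
# queue ordered by simulated time, instead of being direct method calls.
import heapq

class Sim:
    def __init__(self):
        self.queue = []   # entries: (time, seq, recipient, message, args)
        self.seq = 0      # tie-breaker so equal-time events stay ordered
        self.log = []

    def schedule(self, time, recipient, message, *args):
        # The indirection under discussion: queue.add(message(recipient, parms))
        heapq.heappush(self.queue, (time, self.seq, recipient, message, args))
        self.seq += 1

    def run(self):
        while self.queue:
            time, _, recipient, message, args = heapq.heappop(self.queue)
            getattr(recipient, message)(self, time, *args)

class Machine:
    def finish(self, sim, time, job):
        sim.log.append((time, job))

sim = Sim()
m = Machine()
sim.schedule(5, m, "finish", "job-b")
sim.schedule(2, m, "finish", "job-a")
sim.run()   # events fire in time order regardless of scheduling order
```

The object boundary buys little here: every "method call" has to be routed through the queue anyway, which is the mismatch the comment points at.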

Which isn't to say that it hasn't been a valuable step on the way, or that approaches that're closer to being the Right Thing don't often bear a distinct resemblance to it. But we're talking about 30-40 year old ideas, and we've learned things in the meantime.

Personally I'm fond of approaches that let you define a 'metamodel' describing what amounts to a simulation's 'laws of physics' rather than forcing a general one upon you. But I'm a Haskell-writing EDSL addict, what else did you expect?

The future of logic programming

Prolog is a very old language. In 1972, the sweet spot that it hit intrigued a lot of people, and unfortunately that has slowed down progress (many logic programmers still stand firm by Horn clauses and the WAM, even today!). My view is that logic programming has to take its place inside a bigger framework, where it can be used when it is needed (and then it really shines), but where other paradigms can be used when it is not appropriate. A far cry from the original dream of programming everything in logic! That original dream is unrealistic, because logic is not the right way to express all computing. "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." Logic programming used to be full of Horatios and functional programming still is. Here is an article that explains this in more detail: Logic Programming in the Context of Multiparadigm Programming: The Oz Experience (Journal of Theory and Practice of Logic Programming, Nov. 2003).

Actors are the future of Logic Programming

In 1972, researchers at MIT diagnosed the limitations of Logic Programming (pioneered in Planner) and invented Actors as the way forward.

Unfortunately, Actors were not understood elsewhere for a long time. Kowalski thought that developing a backward-chaining-only subset of Planner was the answer and consequently participated in developing Prolog.

For the current state of the art see "Actor Model of Computation: Scalable Robust Information Systems" at


I read the article, and I too thought it seemed like an attack on Bob Kowalski. The fact that it was penned by Carl Hewitt, who seems to have spent the past three years or so in various attempts to elevate his stature in the CS pantheon, and whose research is (in some fashion) in "competition" with Kowalski's, makes it seem even more so.

Hewitt, after all, was banned from Wikipedia for relentless self-promotion there--including some rather absurd claims concerning the importance of the Actor model to physics(!). The ban initially applied only to articles related to himself and his work, but Hewitt has apparently tried to evade the ban using "sockpuppets" on numerous occasions; it's safe to say he's currently persona non grata at Wikipedia. See also this talk page for more about Hewitt.

At this point, I'd take anything Hewitt has to say on the subject with a large grain of salt. Stuff seems to be personal with him; he seems to view the work of Kowalski and others as heresy to be shouted down and destroyed, rather than as merely theory to be falsified. While a few of his observations are probably correct (I don't think that Prolog-style deductive languages are ever going to dominate CS; there are too many problems for which an inductive approach is far cleaner and simpler)--so what? Prolog and its progeny have proven quite successful in many application areas; that alone justifies the merits of the approach.

It's a damn shame, really, when a scientist of his stature sullies his work and reputation in this fashion, by acting like a three-year-old.

In fairness to Hewitt

here is his take on the whole Wikipedia affair; he alleges that Kowalski and others are trying to censor and bury his contributions to the field, and are using Wikipedia as a forum for doing so.

Read it for yourself and decide.

I should note that a) I'm entirely impartial in the debate between the Kowalski and Hewitt schools of thought, such as they are; I have no objection to Hewitt's ideas and research, from Planner to the Actor model to whatever else. However, b) I am an active participant on Wikipedia, and observed first hand Hewitt's numerous attempts to "spam" Wikipedia pages with references to his own work, far above and beyond their actual significance to the field. Gratuitous references to the Actor model and such were being inserted by Hewitt on page after page, in many cases in places where they weren't really appropriate. While Wikipedia does encourage scientists to edit subjects in which they are experts, including areas they might themselves be working on, it grants that latitude only so long as they remain objective, and it has clear policies for dealing with conflicts of interest among editors. Hewitt quite clearly crossed the line, numerous times after numerous warnings, which is why he was eventually topic-banned.

In Hewitt's response, cited above, he states that the CoI policy essentially prohibits him from citing his own work. However, the policy specifically addresses the issue:

Editing in an area in which you have professional or academic expertise is not, in itself, a conflict of interest. Using material you yourself have written or published is allowed within reason, but only if it is notable and conforms to the content policies. Excessive self-citation is strongly discouraged. When in doubt, defer to the community's opinion.

Wikipedia's policy on no original research adds further:

This policy does not prohibit editors with specialist knowledge from adding their knowledge to Wikipedia, but it does prohibit them from drawing on their personal knowledge without citing their sources. If an editor has published the results of his or her research in a reliable publication, the editor may cite that source while writing in the third person and complying with our NPOV policy.

Many scientists successfully contribute to Wikipedia in their fields of expertise, including citing their own research, with no issues whatsoever. Only a handful that I'm aware of have run into this sort of issue; and Hewitt is, unfortunately, one of the more flagrant offenders.

missing page!

"here is his take (http://wikicensored.info/) on the whole Wikipedia affair..."

That's a blank page. What did it say?

Alternative link

See here.

Wikipedia policy of Outlawry

Wikipedia operates by the ancient practice of outlawry. Outlawry means that those banned by Wikipedia lack rights -- suffering a form of Wikipedia death. To give them aid and support commits the Wikipedia crime of aiding and abetting, risking the danger of being banned oneself.

Professor John Harnad (who was blocked by Wikipedia) summarized as follows [Wikipedia Review 2008b]:

Wikipedia, on the contrary, is the enshrinement of contempt for learning, knowledge and expertise.
It is, for many, a diversionary hobby to which they are prepared to devote a great portion of their time, as others do to computer based video games.
Unfortunately, it has led also to an inner cult, shrouded in anonymity, with structures and processes of self-regulation that are woefully inadequate. 
Many of these tools and procedures are reminiscent, in parody, of those of the Inquisition:
secret courts, an inner “elite” arbitrarily empowered to censor and exclude all those perceived as a threat to the adopted conventions of the cult;
denunciations, character assassination, excommunication. 
An arbitrarily concocted “rulebook” and language rife with self-referential sanctimoniousness give a superficial illusion of order and good sense,
but no such thing exists in practice.

It is truly a “Tyranny of the Ignorant”. (emphasis added)

For more information see "Corruption of Wikipedia" at:


The Wikipedia angle is fascinating from a sociological point of view (hopefully some sociologist will write a paper on this episode!) but, given the focus of LtU, it might be more interesting to give references and informed opinion about the actual debate. What are the core issues of contention? What sources are available to decide them? What was the influence of each party on the field at large?

The underlying debate..

...near as I can tell, is an old one in knowledge systems research: the inductive/deductive debate. Hewitt is on the side of the "inductivists", and seems to fancy Kowalski et al as strict deductivists, standing steadfast in the way of progress by refusing to consider "procedural" solutions. Whether or not Kowalski and his peers are in fact enamored of such an absolutist position, I don't know.

As mentioned above, much of the debate seems to be a bit personal. Much logic programming research seems to center around Prolog and its descendants--a language with a mostly-deductionist approach, apart from a few "impure" features like cut. Hewitt's primary contribution to the field--Planner--is mainly relevant as a historical artifact. Much of the brouhaha on Wikipedia seems to concern the impact Planner had on Prolog--in other words, unfortunately, how much credit Hewitt is entitled to. He seems to feel he is entitled to far greater recognition than he receives, and in some cases--as on Wikipedia--has attempted to alter the record to better highlight his immense contributions.

As is often the case with flamewars among the scientific community, the debate surrounds a prescriptionist rather than a descriptionist issue--a question of how "should" something be done, rather than what something is or how something works. Should we admit or abolish procedures in logic programming? Many other similar debates are found in PLT, or have been in the past--should we admit side effects, should we use static or dynamic type systems, should we default to strict or lazy evaluation, OO-vs-not, structured-vs-goto, formal verification and methods, etc. In most cases, the correct answer is "it depends"--answering a "should" question, in many cases, is a systems design issue, and thus the province of engineers (and possibly policymakers), and not research scientists. Scientists do and should inform the debate, of course--and are far too often ignored by practitioners who instead base design decisions on personal or institutional prejudice--but the best course for a scientist studying knowledge systems is to do research and write papers on the effects of various choices; *not* to go around the country giving talks in which one of your peers is singled out as a boulder blocking the path of progress.

Even though I have, in three posts now, raked Mr. Hewitt over the coals for his petulant behavior--I'll have to agree with him that an absolutist deductive-only position is undesirable. There are many cases where a pure-deductive approach is satisfactory, and Horn clause solvers like Prolog have termination as a desirable property (modulo recursive definitions/"occurs check" violations)--but I think that the dichotomy is improper. To the extent that Hewitt thinks deductive-only approaches should be abandoned; there I do *not* agree.

Thanks to Messrs. Curry and Howard, we know that programs are proofs of some proposition. Program fragments, such as (imperative) procedures, are likewise proofs, as are algorithms in the abstract. If a mathematician is given a fourth-degree polynomial to solve, he doesn't attempt to derive the solution himself--he pulls out his CRC manual (or looks online at MathWorld or some similar resource) and takes advantage of 500 years of mathematical knowledge to plug-and-chug. (We'll ignore numerical approximations, and assume the exact formula is used.) To put it another way, the algorithm for the solution to a quartic polynomial is a theorem that up to four unique solutions exist (one which has the added benefit of stating what they are). Purely deductive approaches place a limit on the sort of theorems that may be introduced into the system.

In some cases, a procedure may constitute an additional axiom: one which cannot be derived from the existing axioms but which is consistent with the system. By allowing such axioms, the system is strengthened. In other cases, such as the quartic formula (which can be derived from elementary algebra, but is difficult to derive if you don't know the tricks involved), procedures may be "optimizations" of things which can be derived from within an existing system.

The cost of adding arbitrary theorems (procedures) is that it is harder if not impossible to exclude the introduction of axioms or theorems which are incorrect (produce an inconsistent system), or fail to terminate, or both.

I suspect that in most cases, the trade-offs involved are application dependent. Polemics for or against a particular position, when that position does have some useful application, are unhelpful. A common sin in CS debates is an insistence on "universality"--that a technique must be applicable to all relevant problems in order to be useful, and that techniques which are only relevant to a subset of problems are worthless and should be discarded. By portraying deductionists (or certain alleged deductionists) as "being in the way", Hewitt does the field no favors. Likewise, I can think of a few practitioners on the other side of the debate, who I won't name here (not being involved in the Hewitt affair), who do exhibit a hostile attitude towards inductive approaches; when they go overboard in their advocacy, they are equally unhelpful.

Thanks for the summary! Am I

Thanks for the summary!

Am I the only one who finds the deductive/inductive distinction made in this debate somewhat strange? I can understand the deductive nature of Prolog (within limits), but I am less certain about the notion of inductive reasoning used here.

An interpretation

American AI strongly favors "forward"-chaining systems, and planning systems that are also forward. This may be related to the fact that at the conclusion of the Planner project the notion of backtracking was rejected as "only a search" method, and focus moved on to the Actor model. Meanwhile, in Europe, development proceeded on backtracking and Prolog.

Edit: Using this link instead of the one above. On page 3.

What went wrong:
1. Although pragmatically useful at the time it was
developed, backtracking proved to be too rigid and

What was done about it:
1. Concurrency based on message passing was
developed as an alternative to backtracking.

Hopefully this helps to explain Hewitt's attitude towards backtracking, but personally I don't get it. Is there something wrong with backtracking? I for one am unable to reach that conclusion.
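For readers less familiar with the mechanism being debated: chronological backtracking is simply depth-first search that undoes choices on failure. A minimal sketch (an illustration of the idea, not Prolog's actual WAM machinery):

```python
# Chronological backtracking via depth-first search: each level tries
# every candidate in turn, and exhausting a level's candidates is what
# "undoes" the choice and returns control to the previous level.

def queens(n, placed=()):
    """Yield all placements of n queens, one per row (placed[r] = column)."""
    if len(placed) == n:
        yield placed
        return
    row = len(placed)
    for col in range(n):
        # keep only columns not attacked by any earlier queen
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            yield from queens(n, placed + (col,))
        # falling out of the loop with no yield = failure = backtrack

solutions = list(queens(4))   # the classic 4-queens problem
```

Whether this control regime is "too rigid" seems to depend on the problem, which is perhaps why the debate never settles.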


I'm also confused by the supposed dichotomy. To me, "inductive" approaches refers to things like machine learning, or Inductive Logic Programming. I'm not sure what connection it has to procedural knowledge representations. I'm also confused by the supposed characterisation of Robert Kowalski as committed to "deductive" approaches. Kowalski has done much research on non-deductive techniques, such as Abductive Logic Programming. Perhaps, given that neither party is a member of LtU (as far as I know), it might be better to not attempt to paraphrase their positions.

(Nit-picking something Scott said:)

Horn clause solvers like Prolog have termination as a desirable property (modulo recursive definitions/"occurs check" violations)[...]

As far as I remember, Prolog's operational semantics is Turing-complete, and reasoning with Horn clauses is undecidable. IIRC, Datalog is decidable.
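The decidability contrast can be seen in miniature: with no function symbols, a Datalog program can only derive finitely many facts, so naive bottom-up evaluation always reaches a fixpoint. A sketch (illustrative only, not any particular engine):

```python
# Naive bottom-up Datalog evaluation of transitive closure:
#   path(X, Z) :- edge(X, Y), path(Y, Z).
# Because the set of derivable facts over a finite constant domain is
# finite, the iteration below must terminate -- unlike full Prolog,
# where function symbols make the term universe infinite.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(facts):
    reach = set(facts)
    while True:
        new = {(x, z)
               for (x, y) in reach
               for (y2, z) in reach if y == y2} - reach
        if not new:           # fixpoint reached: nothing new derivable
            return reach
        reach |= new

paths = transitive_closure(edges)
```

Adding even one function symbol (e.g. successor, for arithmetic) breaks this argument, which is exactly where Horn clause reasoning becomes undecidable.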

Nitpicking the nitpick.

(Nit-picking something Scott said:)
Horn clause solvers like Prolog have termination as a desirable property (modulo recursive definitions/"occurs check" violations)[...]

As far as I remember, Prolog's operational semantics is Turing-complete,

Due to recursive definitions.

Sociology and Subject Issues of Wikipedia Conflict

The sociology issues of the Wikipedia conflict hark back to the issues between Galileo Galilei and the Roman Catholic Church.

On core subject issues of contention, see the references in "Middle History of Logic Programming" at

The future of logic programming

Abstract State Machines may at first seem to have nothing to do with logic programming, but take a closer look at these slides available with the ASM Book. Is semantics the future of logic programming?

State Machines are an inadequate foundation for concurrency

State machines are an inadequate foundation for concurrent computation.

For an explanation, see the following Microsoft Channel 9 video:

If only you guys could have

If only you guys could have included someone like Leslie Lamport in the discussion. From a systems perspective, this is a bizarre thing to claim (though, of course, the question itself is bizarre).

If I can't imagine it, then it can't be.

A common line of reasoning is: If I can't imagine it, then it cannot be.

I found this interesting

I found this interesting take on the debate in a long footnote in SICP on page 336. I am quoting only the conclusion:

Thus in assigning credit for the development of logic programming, the French can point to Prolog's genesis at the University of Marseille, while the British can highlight the work at the University of Edinburgh. According to people at MIT, logic programming was developed by these groups in an attempt to figure out what Hewitt was talking about in his brilliant but impenetrable Ph.D. thesis. For a history of logic programming, see Robinson 1983.

Knol: "Middle History of Logic Programming" by Carl Hewitt

A more complete history of the events discussed here has been published on Knol as Middle History of Logic Programming.

See the Google Knol "A historical perspective on developing foundations for client cloud computing" for conceptual development of Logic Programming from its origins to the present.

LP is inadequate even for small scale programming

Logic Programming is inadequate even for small scale programming because it does not support change. For example, LP cannot implement the following account:

Actor CreateEuroAccount[startingBalance:Euros]
    currentBalance := startingBalance
       // currentBalance is an assignable variable, initially assigned the starting balance of type Euros
    implements Account using
      GetBalance[ ] -> currentBalance
      Deposit[anAmount] -> Void afterward currentBalance := currentBalance+anAmount
         // return Void; the next message is processed with currentBalance reflecting the deposit
      Withdraw[anAmount] ->
        (anAmount > currentBalance) ?
          True  Throw OverdrawnException[ ]
          False Void afterward currentBalance := currentBalance-anAmount
             // the next message is processed with currentBalance reflecting the withdrawal
For an explanation of the syntax above see the following:
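For readers unfamiliar with that notation, here is a rough Python rendering of the same account as a sequential message-processing object. The class and method names are mine, chosen to mirror the Actor sketch; this is an illustration of the behavior being claimed, not Hewitt's ActorScript.

```python
class OverdrawnException(Exception):
    """Raised when a withdrawal exceeds the current balance."""

class EuroAccount:
    """Sequential sketch of the account: one message at a time,
    each message seeing the balance left by the previous one."""

    def __init__(self, starting_balance):
        self.current_balance = starting_balance  # assignable, initially the starting balance

    def get_balance(self):
        return self.current_balance

    def deposit(self, an_amount):
        # "afterward": the next message sees the updated balance
        self.current_balance += an_amount

    def withdraw(self, an_amount):
        if an_amount > self.current_balance:
            raise OverdrawnException()
        self.current_balance -= an_amount

acct = EuroAccount(100)
acct.deposit(50)
acct.withdraw(30)
print(acct.get_balance())  # → 120
```

The point of contention is the assignment to `current_balance`: the account's response to `GetBalance` depends on the history of messages received, which is exactly the kind of change a pure Horn-clause program does not express directly.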

That's just the closed world assumption

That's the closed world assumption. Endemic to most current implementations of LP, but not inherent.

"Pure" functional languages have the same basic problem, so they came up with monads as a way around it. One could use the same technique in LP.

Temporal logic programming

Temporal logic programming can model change easily enough. (See Dedalus for excellent examples.) But Ray's point still holds - LP tends to scale poorly.

There are ways to make LP scale a lot further. LP can be modularized, incrementalized, and stabilized, with careful restrictions on how we express things.

I've been interested in pursuing 'stability' as the basis for large scale reactive logic programming. Basically, if we can ensure that small changes in input data or constraints usually result in small changes in the output or solutions, then we can have large scale logic systems that perform minimal search to find a new solution after a change. Non-determinism helps a lot in achieving stability.

I'm also interested in a sort of logic/functional programming where each function actually describes a set of possible functions - a tactic for creating a function - but we can choose one function in the end based on static type information.

Even with time arguments, Prolog can't implement concurrency

Even with time arguments on events, Prolog cannot implement concurrent computation.

For example, Prolog can't implement the account above.

Logic Programming does not

Logic Programming does not mean Prolog. I'm not even fond of Prolog as an LP language.

Also, unless you're conflating 'concurrency' with 'non-determinism' or 'parallelism', I think your assertion is incorrect.

Prolog is more than expressive enough to represent an executable model of concurrent computations involving deposits and withdrawals. The Dedalus paper linked above offers good hints on how to approach such a problem with temporal logic. What is an "implementation of concurrent computation" if not an executable model?
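As a sketch of the Dedalus-style approach, here is the account modeled with explicit timestamps: the balance at time `t+1` is derived from the balance and the event at time `t`, and nothing already derived is ever mutated. The function names are mine; a real temporal LP system would express `step` as clauses over timestamped relations.

```python
def step(balance, event):
    """Derive balance(t+1) from balance(t) and the event at t."""
    kind, amount = event
    if kind == 'deposit':
        return balance + amount
    if kind == 'withdraw' and amount <= balance:
        return balance - amount
    return balance  # overdrawn withdrawals leave the balance unchanged

def balances(starting_balance, events):
    """The full history: balance(t) for every time t, as an append-only list."""
    history = [starting_balance]
    for event in events:
        history.append(step(history[-1], event))
    return history

history = balances(100, [('deposit', 50), ('withdraw', 30), ('withdraw', 500)])
print(history)  # → [100, 150, 120, 120]
```

The history is append-only: asking for the balance is a query over the timestamped facts, and "change" is just the derivation of a new fact at the next timestamp.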

Limited models...

I think Hewitt is arguing about a restricted case of LP that operates without the ability to contradict earlier assertions/derivations. This is typical of several LP languages, because it speeds things up by making it unnecessary to track justifications/proofs. We've been answering about a more general model of LP, which explains the disconnect.

The RETE algorithm, the Warren Machine, and Prolog have problems with any kind of change; essentially you have to withdraw all derived facts and start over from the new conditions. But Logic Programming is larger than those algorithms and that language.

With Temporal Logic Programming, each derivation is in a message queue which will notify it of any changes in the given premises (facts and rules) from which it was derived. If one of those facts (or rules) is withdrawn, then derivations based on it must find other support or withdraw themselves.

Temporal LP is capable of modeling any computation. It's only marginally harder to implement, not so much beloved of theorists because it models messy impure computations, and not so much beloved of practical programmers because it requires more computational resources to track justifications and message queues, making the combinatorial explosion of scaling in LP harder to cope with.
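One way to picture that bookkeeping is a justification network in which each derivation subscribes to the premises it was derived from. Here is a toy sketch (my own names; it omits the "find other support" step and the message queues, so every withdrawal propagates unconditionally):

```python
class Fact:
    """A fact or derivation, tracking the derivations that depend on it."""

    def __init__(self, name, premises=()):
        self.name = name
        self.withdrawn = False
        self.dependents = []            # derivations to notify if we are withdrawn
        for p in premises:
            p.dependents.append(self)   # subscribe to each premise

    def withdraw(self):
        """Withdrawing a fact notifies every derivation built on it; with no
        alternative support (as in this sketch), they withdraw in turn."""
        if not self.withdrawn:
            self.withdrawn = True
            for d in self.dependents:
                d.withdraw()

a = Fact('bird(tweety)')
b = Fact('flies(tweety)', premises=[a])   # derived from a
a.withdraw()
print(b.withdrawn)  # → True
```

The extra cost mentioned above is visible even in the toy: every derivation carries its subscription links, and a single withdrawal can cascade through the whole network.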

Temporal logic never

Temporal logic never contradicts earlier assertions/derivations. That is, the assertions with argument for time `T` are not contradicted or withdrawn when we derive assertions for time `T+1`. The past doesn't mutate. (If you're withdrawing facts, you're doing it wrong!) Old facts might be garbage collected once they are no longer relevant to deriving further assertions, but this isn't the same as withdrawal.

Prolog does not have difficulty modeling temporal logic. But Prolog does not have an understanding of temporal logic built into its garbage collector, so it is inefficient for implementing long-running applications with temporal logic.

The Bloom language derives from Dedalus, which derives from Datalog, and demonstrates that large scale (via modularization) temporal logic programming can be feasible and practical. It is part of the Berkeley Orders Of Magnitude (BOOM) project.

There are also interesting restrictions on temporal logic that make it highly usable while achieving predictable resource consumption: linear temporal logic, discrete event simulations, synchronous reactive programming. My reactive demand programming is also in this class of programming models.

I agree these are "not so much beloved of practical programmers" but I think the reason for that has more to do with network effects ("I've never heard of it in use, and if it's so great then everyone else would be using it and I'd have heard of it, right?") than its technical properties and merit.

What is the Temporal Logic Program for the account above?

What is the Temporal Logic Program for the account above that is only "marginally harder to implement"?

Even with Monads, Lambda Calculus can't implement Concurrency

Even with Monads, the Lambda Calculus can't implement concurrent computation.

For example, the lambda calculus can't implement the account above.