Limitations of Prolog-style clausal programs

Beginning in the 1970s, Japan took the DRAM market (and consequently much of the integrated circuit industry) away from the previously dominant US. This was accomplished with the help of the Japanese VLSI project, which was funded and coordinated in good part by Japan's Ministry of International Trade and Industry (MITI) [Sigurdson 1986].

MITI hoped to repeat this victory by taking over the computer industry. However, Japan had come under criticism for "copying" the US. One of MITI's goals for ICOT was to show that Japan could innovate new computer technology and not just copy the Americans.

ICOT tried to go all the way with Prolog-style clausal programs. Kowalski later recalled: "Having advocated LP [Logic Programs] as a unifying foundation for computing, I was delighted with the LP [Logic Program] focus of the FGCS [Fifth Generation Computer Systems] project." [Fuchi, Kowalski, Ueda, Kahn, Chikayama, and Tick 1993] By making Prolog-style clausal programs (which were mainly being developed outside the US) the foundation, MITI hoped that the Japanese computer industry could leapfrog the US. "The [ICOT] project aimed to leapfrog over IBM, and to a new era of advanced knowledge processing applications" [Sergot 2004].
Unfortunately, ICOT misjudged the importance of the Actor Model [Hewitt, Bishop, and Steiger 1973], which had been developed in reaction to the limitations of Planner. ICOT had to deal with concurrency and consequently developed concurrent program languages based on Prolog-style clauses [Shapiro 1989].

However, it proved difficult to implement clause invocation in these languages as efficiently as message passing in object-oriented program languages. Simula-67 originated a hierarchical class structure for objects so that message-handling procedures (methods) and object instance variables could be inherited by subclasses. Ole-Johan Dahl [1967] invented a powerful compiler technology using dispatch tables that enabled message-handling procedures in subclasses of objects to be invoked efficiently.
The combination of efficient inheritance-based procedure invocation together with class libraries and browsers (pioneered in Smalltalk) provided better tools than the slower pattern-directed clause invocation of the FGCS Prolog-style clause program languages. Consequently, the ICOT program languages never took off; instead, object-oriented languages like Java and C# became mainstream.
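The dispatch-table technique can be sketched in Python. This is purely an illustration, not Simula's or Smalltalk's actual machinery: the `make_class`/`send` helpers and the Shape/Square classes are invented for this example. A class is represented as a table mapping message selectors to procedures; a subclass copies (inherits) its parent's table and then overrides entries:

```python
def make_class(parent=None, **methods):
    """Build a 'class' as a dispatch table, inheriting the parent's entries."""
    table = dict(parent) if parent else {}
    table.update(methods)
    return table

def send(obj, selector, *args):
    """Message send: one table lookup, then a direct procedure call."""
    return obj["class"][selector](obj, *args)

Shape = make_class(
    area=lambda self: 0,
    describe=lambda self: "shape with area %d" % send(self, "area"))

# Square inherits Shape's table and overrides only the 'area' entry.
Square = make_class(Shape, area=lambda self: self["side"] ** 2)

sq = {"class": Square, "side": 3}
print(send(sq, "describe"))  # prints "shape with area 9"
```

Real compilers index such tables by position rather than by selector name, so a send compiles down to an indexed load plus an indirect call, which is the source of the efficiency advantage over pattern-directed clause invocation.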

Going beyond object-oriented program languages to concurrent program languages, ICOT encountered further difficulties dealing with concurrency, e.g., readers-writer concurrency. Concurrency control for readers and writers of a shared resource is a classic problem. The fundamental constraints are that multiple writers are not allowed to operate concurrently and that a writer may not operate concurrently with a reader. Below is an implementation of a reading-priority guardian that encapsulates a shared resource:

CreateReadingPriority.[theResource:ReadersWriter] ≡
 Actor with writing:=False, numberReading:(Integer thatIs ≧0):=0◊ 
   queues readersQ, writersQ◊
   implements ReadersWriter using
    Enqueue (writing or not IsEmpty writersQ) ¿ True ↝ readersQ; False ↝ NullQueue ?
        Prep numberReading:=numberReading+1◊
           RequirePre not writing◊
            Permit readersQ◊
       Hole theResource.Read[query]
                 always permit (IsEmpty writersQ)  
                                     ¿  True ↝ readersQ;
                                        False ↝ (numberReading = 1)  
                                                  ¿  True  ↝ writersQ; 
                                                     False ↝ NullQueue ? ?
                      also  RequirePre numberReading≧1◊ 
    Enqueue (numberReading>0 or not IsEmpty readersQ or writing or not IsEmpty writersQ) ¿
               True  ↝ writersQ;
               False ↝ NullQueue ?
        RequirePre not writing◊ Prep writing:=True◊
          RequirePre numberReading=0◊
            Hole theResource.Write[update]
               always permit (IsEmpty readersQ)   
                                       ¿  True ↝ writersQ; 
                                          False ↝ readersQ ?
                  also RequirePre writing◊ writing:=False◊§▮

Implementations of procedures of the above kind in the Prolog-style clausal languages used by ICOT were both verbose and inefficient.
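For comparison, the core discipline of the guardian above can be roughly sketched in Python with a lock and condition variable. This is only a sketch under simplifying assumptions: it enforces the core constraints (writers are exclusive; readers may overlap) but omits the explicit queue-scheduling and priority details of the actor version, and all names are illustrative:

```python
import threading

class ReadersWriterGuard:
    """Readers may overlap; a writer excludes both readers and other writers."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0        # plays the role of numberReading above
        self._writing = False    # plays the role of writing above

    def read(self, do_read):
        with self._cond:
            while self._writing:           # cannot read while a write is in progress
                self._cond.wait()
            self._readers += 1
        try:
            return do_read()               # reads may proceed concurrently
        finally:
            with self._cond:
                self._readers -= 1
                if self._readers == 0:     # last reader out wakes waiting writers
                    self._cond.notify_all()

    def write(self, do_write):
        with self._cond:
            while self._writing or self._readers > 0:
                self._cond.wait()
            self._writing = True
        try:
            return do_write()              # writes are exclusive
        finally:
            with self._cond:
                self._writing = False
                self._cond.notify_all()
```

The guardian in the actor version additionally decides, via its queues, whether a departing reader or writer should admit waiting readers or waiting writers first; this sketch leaves that scheduling to the condition variable.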

The technical managers at ICOT were aware of some of the pitfalls that had tripped up previous Artificial Intelligence (AI) researchers, so they deliberately avoided calling ICOT an AI project. Instead they had the vision of an integrated hardware/software system [Uchida and Fuchi 1992]. However, the Prolog-style clause program languages turned out not to be a suitable foundation because of poor modularity and lack of efficiency in comparison with direct message passing [Hewitt and Agha 1988].

Another problem was that multiprocessor machines found it difficult to compete, because at the time single processors were rapidly increasing in speed while connections between multiple processors suffered long latencies.

The MITI strategy backfired because Japanese companies refused to productize ICOT hardware.

However, the architects of ICOT did get some things right:
* The project largely avoided the Mental Agent paradigm
* The project correctly placed tremendous emphasis on research in concurrency and parallelism as an emerging computing paradigm.

The architectural reliance on Prolog-style clausal programs was a principal contributing cause of ICOT's failure to achieve commercial success.

For more information see Inconsistency Robustness in Logic Programs.


Prolog style languages?

The Prolog-style language (KL0) was used for system programming of the PSI, which was sequential. System programming for the PIM, which was parallel, was written in a sort of flat GHC (KL1). I personally wouldn't describe Committed Choice Non-Deterministic (CCND) languages as "Prolog style". Many applications were written in ESP, which was object-oriented, and heavy use of OOP techniques was made, if I'm not mistaken. Again, not really "Prolog style" in my book.

I think it's more correct/useful to talk about CCND languages and ESP, than Prolog-style languages.

Is there a better name for Prolog-style clausal languages?

I would put the committed choice clausal languages in the category of Prolog-style clausal languages.

I don't know.

Fair enough. I only really made the point as CCND languages differ from Prolog so much that I consider them to be different.

Guarded clauses are very similar to Prolog-style clauses

The guarded clauses used by ICOT can reasonably be considered to fall into the category of Prolog-style clauses since they have the following form:

goal :- guardedSubgoals | concurrentSubgoals

However, the rules for suspension and synchronization are very tricky :-(

The meaning of clause order is different

Apart from the commit operator, the major difference regarding clauses in Prolog and GHC is that the order of clauses in a procedure has a semantic meaning in Prolog that is not present in GHC (or PARLOG; I don't know about the other CCND languages). So, in GHC the merge procedure

merge([A|As], B, C) :- true | C=[A|Cs], merge(As, B, Cs).
merge(A, [B|Bs], C) :- true | C=[B|Cs], merge(A, Bs, Cs).
merge([], B, C) :- true | C=B.
merge(A, [], C) :- true | C=A.

which is used to merge two streams (arguments 1 & 2) into one (argument 3), makes no guarantee about the order of elements of argument 3 should the first two arguments be bound to non-empty lists. In Prolog, a similar program

merge([], B, C) :- true, !, C=B.
merge(A, [], C) :- true, !, C=A.
merge([A|As], B, C) :- true, !, C=[A|Cs], merge(As, B, Cs).
merge(A, [B|Bs], C) :- true, !, C=[B|Cs], merge(A, Bs, Cs).

would just append argument 2 onto the end of argument 1.
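The difference can be made concrete with a small Python sketch (illustrative only, not an implementation of either language): `prolog_merge` follows the clause order of the Prolog version, while `ghc_merges` enumerates every result a committed-choice scheduler would be free to produce:

```python
def prolog_merge(a, b):
    # Clause order as in the Prolog version above: the [] clauses are
    # tried first, then the clause that takes from the first list, so
    # with two non-empty inputs the result is simply a followed by b.
    if a == []:
        return b
    if b == []:
        return a
    return [a[0]] + prolog_merge(a[1:], b)

def ghc_merges(a, b):
    # In GHC either of the first two clauses may commit, so any
    # order-preserving interleaving of a and b is a possible result.
    if a == []:
        return [b]
    if b == []:
        return [a]
    return ([[a[0]] + rest for rest in ghc_merges(a[1:], b)] +
            [[b[0]] + rest for rest in ghc_merges(a, b[1:])])

print(prolog_merge([1, 2], [3, 4]))  # prints [1, 2, 3, 4]
print(ghc_merges([1, 2], [3]))       # prints [[1, 2, 3], [1, 3, 2], [3, 1, 2]]
```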

I got my GHC info from ICOT Technical report TR-103

Ignore me

I was posted incorrectly.

More to Logic Programs than Horn-Clause syntax

There is a lot more to Logic Programs than can be expressed in Horn-Clause syntax. In addition to the following (similar to above):

when ⊩ Merge[[A, ...As], B, C] → ⊩ C=[A, ...Cs], Merge[As, B, Cs]∎
when ⊩ Merge[A, [B, ...Bs], C] → ⊩ C=[B, ...Cs], Merge[A, Bs, Cs]∎
when ⊩ Merge[[], B, C] → ⊩ C=B∎
when ⊩ Merge[A, [], C] → ⊩ C=A∎

there are also the following:

when ⊩ Merge[A, B, []] → ⊩ A=[], B=[]∎
⊢ Merge[[], [], []]∎ 
when ⊢ Merge[A, B, []] → ⊢ A=[], B=[]∎
when ⊢ Merge[[], B, C] → ⊢ C=B∎
when ⊢ Merge[A, [], C] → ⊢ C=A∎

Support this proposal! Artificial Intelligence Q&A

Just figured out that 3 years ago there was a beta of an artificial intelligence site which failed. Hope the new one works.

It's flying now:

Preferable not to confuse assertions [⊢] and goals [||-] ;-)

It is preferable not to confuse assertions [⊢] and goals [||-] ;-)

CRCW, my favorite Model of Parallel Computation

Hewitt is again trolling LtU with some glorious statements such as "Concurrent systems can be axiomatized using mathematical logic but in general cannot be implemented." I guess he doesn't know the work of Nick Pippenger and others.

Especially the CRCW variant is not so difficult to realize effectively in logic programming. I did this 15 years ago with the help of some special modal logic. The modal logic had an operator with the following semantics:

M |=  A :<=> exists x in U (M[c/x] |= A)

So the modal operator is a kind of existential quantifier that acts not on some variable that is visible as a free variable in the quantified formula, but rather takes into account the meaning of a constant in the environment.

A vanilla interpreter for the modal logic is found in the appendix of the 1999 paper. The modal operator is implemented by the sweep predicate; it is one of the several modal operators needed.

The conflict situation in the CRCW model will lead to multiple possible futures. This is reflected in the non-deterministic behaviour of the interpreter at the meta level, which can be used for mathematical reasoning about processes.

On the other hand, one could pick one of the non-deterministic executions, either by drawing from a random generator or by having the process physically implemented, and would then get a logical explanation of a particular execution.

The limitations one will experience are not in logic programming for the execution, but in mathematical logic, if one starts to ask questions about the parallel program, especially those involving temporal quantifiers in their simplest form.

What happened to the PRAM?

Basic Model of Parallelism

<joke short>

<joke long> --Hewitt

volunteer moderator

I think "autistic" is inappropriate here. It gives this remark an insulting tone, which is not what we want as discussion standards on LtU (I would agree that "the Hewitt threads" have a screwed notion of discussion already, but that's not a reason to make things objectively worse), and orthogonally it is a medical comment that would be best avoided in informal discourse.



It may be a

It may be a not-my-native-language thing, but I actually don't see the problem with saying something like "you may be confused about ..." during a technical conversation. The actual form "do not confuse ..." was maybe on the patronizing side, but I wouldn't have guessed you found it insulting.

⊢Φ is standard notation to assert that Φ is a theorem

⊢Φ is standard mathematical notation to assert that Φ is a theorem [Frege 1879].

Seems likely Countably

Seems likely Countably Infinite is quite as aware as you or I of the historical derivation of the turnstile notation from Frege's Grundgesetze. Are you agreeing with CI that turnstile is used for consequence rather than assertions or goals? I note that for the usual modern sense of turnstile I'd be more likely to cite Kleene's Introduction to Metamathematics (my copy isn't on my bookshelf across the room; I've just pulled it out of a stack of papers and books about two feet from where I'm sitting), 1952 and last I knew still in print, rather than Frege.

[Edit: Oh, sorry, you're citing the Begriffsschrift rather than the Grundgesetze, aren't you. Though I don't think that lessens my point.]

Countably denied ⊢ can be used to assert theorems

Countably denied that ⊢ can be used to assert theorems in posting above.

Really? Not how I read it.

Really? Not how I read it. Countably said it's used for such-and-such, not so-and-so. You replied by talking about how it was used nearly a century and a half ago, and sliding in a slur on Countably's knowledge of the subject while you were at it. I was mainly pointing out that one would likely understand modern usage better by looking at Kleene, a mere half century ago. As for the slur — I don't approve of Countably's tit-for-tat dig (for which Countably was criticized, and Countably agreed it hadn't been good form), but really, you'd be raising the level of discourse here if you'd practice a bit less tat.

Unfortunately, Countably was incorrect about use of ⊢

Dear John,

Countably said "The turnstile is generally [not] used ... for assertions ... It is used for the consequence relation [with premises and conclusions]"

Unfortunately, Countably was incorrect. ⊢Φ is standard mathematical notation to assert that Φ is a theorem.


PS. I never slurred Countably's knowledge of Frege; I only provided a reference to Frege as the originator of the ⊢ notation.

Countably was making a valid

Countably was making a valid point, which I recommend spending more effort understanding than dismissing. Asserting that Φ is a theorem is not the same thing as asserting Φ. The former is a metamathematical statement that Φ is a consequence of the axiomatic system (whatever that system is, hence the generalization of turnstile as described in Kleene).

Did not dismiss Countably; just denied Countably was correct

I did not dismiss Countably's claim; I just denied that it was correct. Technically, asserting Φ means asserting that it holds in some theory T, which can be expressed as ⊢T Φ.

I conjecture that you don't

I conjecture that you don't realize how you come across to others. I'm looking at your original comment, and I have to say, you did dismiss him. Your denial of a slur doesn't stand up well to scrutiny either. This is worrisome both because it diminishes your likely impact with others (in some venues, anyway), and because if you are in fact undervaluing others' remarks (or, symmetrically, overvaluing your own) this would tend to diminish your own ability to gain insights.

It's preferable to write straightforwardly on LtU

Dear John,

It's preferable to write straightforwardly on LtU and not publish guesses (often incorrect) what other people might be thinking.

What I actually wrote was perfectly straightforward: "⊢Φ is standard mathematical notation to assert that Φ is a theorem [Frege 1879]." Do you think the quotation (made in direct opposition to the claim of Countably Infinite) is wrong?

LtU is a good but not perfect forum (added more points in editing):
1) Most posts are concise and to the point.
2) Occasional posts wander and seem over-long.
3) Most posts are welcoming and friendly.
4) Occasional posts deliberately and explicitly personally insult others.
5) Most posts are on point and either reinforce and agree with previous posts and/or respectfully disagree.
6) Occasional posts publish guesses about what others might be thinking that appear to be implicit personal accusations, e.g., accusations of undervaluing others contributions, overvaluing your own contributions, etc.

In general, the LtU forum works well provided that civility is preserved and ad hominem, personal attacks are avoided :-)


PS. From the LtU policy page: "Civility: first and foremost, we aspire to a high quality of discussion. This requires civility. It should hardly be necessary to explain this: be polite, avoid ad hominem attacks, etc." (added in editing)


(My tone of voice is intended to be calm but sure.)

From the LtU policy page:
Mentioning your own work: When it is relevant to a topic under discussion, it's fine to include links or references to your own work. Remember to acknowledge it as such. Avoid excessive self-promotion.

Three of the four "New forum topics" are yours, two are about the same article. You've been posting the same few links over and over again for months, often in comments that are no more than two lines long and not very enlightening.

Look at the recent threads that you have not taken part in. I think you'll find that the quality of discussion is much higher than on those you have taken part in.

People here have been insulting you and calling you a troll, even though this is usually a friendly, polite community. These insults are only happening to you, but you blame the community.

You say you have good intentions, but you are not showing it well. People have tried to tell you how to communicate more effectively here, but your communications have barely changed.

The patience you are being treated with amazes me. LtU is a great community.

criterion for when a subthread has outlived its usefulness

Your post thoroughly misrepresents what I've said in this subthread. I would be tempted to add some parting shot along the lines of "you even...", except I can't decide which misrepresentation (or point-missing) is most appalling.

[edit: jason stumpf has, of course, demonstrated in his above comment far more cool than I in mine. Bravo.]

Back to content

Regardless of the process stuff above, you seemed to have an objection to the program below asserting that proposition Φ holds in theory T:

   ⊢T Φ∎

Unfortunately, Kleene is somewhat obsolete for Computer Science

Unfortunately, the work of Kleene (although state of the art when it was published) is somewhat obsolete for Computer Science because:
1) Computer Science needs a standard mathematical foundation that enables computers to reason about their own reasoning processes.
2) Kleene didn't properly formalize Church's Paradox showing that mathematics cannot computationally enumerate the theorems of mathematics.
3) Kleene did not recognize that self-referential propositions lead to contradictions in mathematics.

I'd expect Frege to be

I'd expect Frege to be rather obsolete as well; but I interpreted the reference to Frege, like mine to Kleene, as addressing the historical origins of modern notation, in which regard I'd be inclined to count Kleene as less remote than Frege.

It is interesting to me that you are, apparently, embedded in a community that places theory T as a subscript of the turnstile rather than to its left; I've seen that convention only rarely; in my own experience, it's a somewhat obscure variant in computer science usage. There are, of course, other examples of disjoint subcommunities using disparate conventions.

Frege was the originator of ⊢ notation

Frege was the originator of ⊢ notation. His great work foundered when he received a letter from Russell announcing his famous paradox.

The notation Ψ⊢T Φ works well for the transitivity of ⊢T.

Mathematicians whose surnames' only vowel is 'e'

Well, I'm not aware of anyone here questioning Frege's priority for turnstile. I did suggest Kleene as a more immediate influence on modern usage. Multiple different notational conventions in parallel complicates things; but I'd hazard a guess Countably Infinite, and I, would be more directly downstream from Kleene.

Yes, the story of Russell's letter arriving just as the second volume was going to press is rather famous, perhaps because it so well captures the phenomenon of discovering at the last moment that a major effort has gone horribly wrong.

Set definition: restriction rather than abstraction from a pred

Unfortunately, I don't know how turnstile spread into modern usage :-(

The big lesson from the Frege debacle is that set definition proceeds from restriction by a predicate rather than abstraction from a predicate:

  ∀[s:Set,p:Predicate,x]→ x∈(s|p) ⇔ x∈s∧p[x]
  ∀[s:Set,p:Predicate]→ {x∈s|p[x]} ≡ s|p
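The distinction can be rendered in Python (the `restrict` helper is invented for this illustration): restriction always starts from an already-given set s, so the result is guaranteed to be a subset of s, whereas unrestricted abstraction ("the set of all x such that p[x]") has no ground set to start from, which is what opened the door to Russell's paradox:

```python
def restrict(s, p):
    """s|p : the elements of the given set s that satisfy predicate p."""
    return {x for x in s if p(x)}

# x ∈ (s|p)  ⇔  x ∈ s ∧ p[x]
evens = restrict(range(10), lambda x: x % 2 == 0)
print(evens == {0, 2, 4, 6, 8})  # prints True
```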

Unfortunately I don't know

Unfortunately I don't know how turnstile spread into modern usage :-(

Well, Kleene is the transmission vector I suspect; when a book stays in print continuously for half a century, one suspects somebody has been reading it.

Just standing back from it, your confident assertion about the big lesson from Frege... sounds controversial; at any rate, I have no confidence in my conjecture about what you mean. Would you care to explain this point a bit — in words?

Church and Jaśkowski were towering figures

Development of the ⊢ notation was influenced by fundamental work by Alonzo Church and Stanisław Jaśkowski, who were towering figures after Frege and Russell. Now modern Logic Programs are having an influence ;-) See history in “Mathematics Self Proves its own Consistency”.

Frege conceived his system based on the premise that there is a well-defined set for every predicate. However, Russell discovered a paradox based on Frege's premise. Furthermore, other similar paradoxes emerged. Consequently, a more constructive grounded approach is needed for sets in Computer Science. See above reference.

Frege conceived his system

Frege conceived his system based on the premise that there is a well-defined set for every predicate.

On that much, we can agree. However, you have to be very, very careful of where you go from there. A great deal hinges on what one means by "well-defined".

Computer Science foundations need both sets and types

Computer Science foundations need both sets (for collecting things) and types (for abstracting things). Both sets and types should be constructive, with types built out of types (including ℕ) and sets built out of types (using characteristic functions), with appropriate axioms to connect sets and types.

See LtU discussion Strong Mathematical Foundations for Computer Science

Depending on what you mean

Depending on what you mean by "type", I may... not exactly "disagree", but, let's say, decline to agree. There's a level at which the notion of type is inescapable, but complicated types as usually experienced in programming are more of a problem than a solution. The absence of structured types is, I think, one of the reasons so many Lisp languages have a large radius of abstraction.

Just curious — do you consider zero an element of ℕ?

Predicates, sets, and types have subtle relationships.

Predicates, sets, and types have subtle relationships. Russell introduced strict types into the foundations of mathematics addressing issues raised by the paradoxes. However, the Russell type system proved to be too restrictive. Trying to base foundations on constructive types (e.g. Coq) also proved to be too restrictive.

Types are a crucial part of any practical, effective program language. But it must be done "correctly" ;-) The Lisp 1.5 approach of just having a single (anonymous) type Any is not sufficient.
But it is important that a type system include the type Any.

There are no important foundational issues that depend on whether zero is included as an element of ℕ. In that sense, it is an arbitrary decision. Of course, deciding whether to start the numbering of structures (e.g. sets and arrays) from zero is a different question.

Why does it work?

The more I think about this, the more convinced I am that cognitively we use an algebraic system of logic. We probably recognize ambiguities because they produce multiple values, in which case we do a little more logic to select just one. Boolean algebra can be built in as functions. Negation logic is probably also possible as an extension?

All I am saying is that it isn't obviously wrong, something is going on because we can and do communicate. It is worth looking into.


Well, Kleene is the transmission vector I suspect; when a book stays in print continuously for half a century, one suspects somebody has been reading it.

The Begriffsschrift is still in print. From Frege to Gödel



Logic Programs for "It's raining." problem

Below are some modern Logic Programs for the "It's raining." problem, which is not very problematical:

when ⊢ "It's raining." → ⊩ "Countably has an umbrella."∎ 
  // When it is asserted that it is raining, 
  // set a goal that Countably has an umbrella.
  // A motivation for the above is the observation that when 
  // asserted that it's raining, Countably often has an umbrella. 
  // So in this case, it's worth setting a goal "Countably has an umbrella."
when ⊢ "Countably has an umbrella." → ⊩ "It's raining."∎ 
  // When it is asserted that Countably has an umbrella, 
  // set a goal that it is raining.
  // A motivation for the above is the observation that when asserted 
  // that Countably has an umbrella, it's often raining. 
  // So in this case, it's worth setting a goal "It's raining." 

The above were clarified in editing. Also, it would be possible to add additional Logic Programs to the above, depending on theories about Countably and his umbrella.

Note that the above says nothing about whether neither, either or both of the following (or their negations) hold:

"It's raining."
"Countably has an umbrella."

It's preferable for Logic Programs per se to be independent of theories of psychological agents ;-)



Long lines inside pre tags push out right margin

LtU page format suffers when we put long lines inside <pre> tags, because it forces out the right margin.

Thanks for noticing effect of long lines in <pre>

Thanks for noticing the effect of long lines in <pre>.

I shortened the long lines in my previous post :-)



Some forward chaining in a theory for Countably's Umbrella

Below are some Logic Programs for a theory Countably with forward chaining rules that say that Countably invariably has an umbrella when it is raining.

when ⊢Countably "It's raining." → ⊢Countably "Countably has an umbrella."∎ 
  // When it is asserted that it is raining, assert that Countably has an umbrella.
  // Of course, the above could have incorrect results, e.g.
  // it's raining but Countably forgot to have an umbrella.
when ⊢Countably "Countably has an umbrella." → ⊢Countably "It's raining."∎ 
  // When it is asserted that Countably has an umbrella, assert that it is raining.
  // Of course, the above could have incorrect results, e.g.
  // Countably could have an umbrella for a school play although it's not raining.

The above example illustrates how Procedural Embedding has been fundamental to Logic Programs ever since Planner.
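The operational reading of the two rules above can be sketched as a toy forward-chaining engine in Python (the rule representation and the `assert_prop` helper are invented for this illustration, not part of any of the systems discussed):

```python
# Each rule is (premise, conclusion): when the premise is asserted,
# assert the conclusion as well.
RULES = [("It's raining.", "Countably has an umbrella."),
         ("Countably has an umbrella.", "It's raining.")]

def assert_prop(db, prop, rules=RULES):
    """Assert prop into db and forward-chain to a fixed point."""
    pending = [prop]
    while pending:
        p = pending.pop()
        if p not in db:           # guard against re-deriving known facts
            db.add(p)
            pending.extend(conc for prem, conc in rules if prem == p)
    return db

db = assert_prop(set(), "It's raining.")
print(db == {"It's raining.", "Countably has an umbrella."})  # prints True
```

The fixed-point check matters here because the two rules chain in both directions; without the `if p not in db` guard, the engine would loop forever.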



Procedural Embedding stronger than Procedural Interpretation

Procedural Embedding is stronger than Procedural Interpretation of mathematical logic propositions, e.g., relational Horn-Clause syntax.

Kowalski advocated a bold thesis: "Looking back on our early discoveries, I value most the discovery that computation could be subsumed by deduction." However, mathematical logic cannot always infer computational steps, because computational systems make use of arbitration for determining which message is processed next by a recipient that is sent multiple messages concurrently. Since reception orders are in general indeterminate, they cannot be inferred from prior information by mathematical logic alone. Therefore mathematical logic alone cannot in general implement computation. Logic Programs (like Functional Programs) are useful idioms even though they are not universal. For example, Logic Programs can provide useful principles and methods for systems that are quasi-commutative and quasi-monotonic, even though the systems themselves cannot be implemented using Logic Programs.

Some last note


There is a small problem with your above post. I guess you misunderstood Skolemization, in that you didn't notice that it needs prenex normal form first. Namely, the formula you give above:

   forall x (p(x) <- exists y q(y, x)).

does have a different Skolemization: it will not introduce a new function symbol. The Skolemization is as follows, and it is the same as the clause Prolog would be using:

   p(x) v ~q(y,x).

The reason is simply that during Skolemization you have to move the quantifiers to the front before you invent the function symbols, so that you end up with what you started with as a formula.


If you don't believe me, check out Prover9. I get:

% -------- Comments from original proof --------
% Proof 1 at 0.00 (+ 0.00) seconds.
% Length of proof is 7.
% Level of proof is 3.
% Maximum clause weight is 0.
% Given clauses 0.

1 (all x (p(x) <- (exists y q(y,x)))) # label(non_clause). [assumption].
2 p(2) # label(non_clause) # label(goal). [goal].
3 -p(2). [deny(2)].
4 p(x) | -q(y,x). [clausify(1)].
5 -q(x,2). [resolve(3,a,4,a)].
6 q(1,2). [assumption].
7 $F. [resolve(5,a,6,a)].

See step #4. You can download a GUI version of Prover9 from here; it's available for Mac, Windows and Linux:

Horn-Clause syntax is inadequate foundation for LP

The bottom line is that Horn-Clause syntax is an inadequate foundation for Logic Programs.

For example, consider the following proposition:

∀[aPerson]→ Orthodox[aPerson] ⇐ ∀[anotherPerson]→ Orthodox[anotherPerson]⇐MaternalAncestor[anotherPerson, aPerson]

Unfortunately, the Horn-Clause syntax for the above proposition can be somewhat obscure because of the introduction of an awkward intermediate predicate to cope with the lack of expressiveness of Horn-Clause syntax as follows:

• ∀[x, y ]→ Orthodox[x ] ⇐  Predicate1[x, y]
• ∀[u, v ]→  Orthodox[u] ⇐  MaternalAncestor[u, v], Predicate1[v, u]

Without the restriction of being limited to Horn-Clause syntax, the following Logic Program more clearly procedurally embeds some back chaining for the above:

when ⊩t Orthodox[aPerson]→ ⊩t ∀[anotherPerson]→  Orthodox[anotherPerson]⇐MaternalAncestor[anotherPerson, aPerson]▮

The above Logic Program more clearly shows a method to back chain from a goal of the form Orthodox[…] than the corresponding Horn-clause syntax program above.

PS. Welcome back to LtU! In view of your edited comment below, I have edited mine to clarify the issues.

@Hewitt: Not Allowed Skolemization

Moderation seems to take some time; there are two pending posts from me now. So I will reuse some of my removed comments and respond to Hewitt's Skolemization here.

Here is a counter-example that shows that his Skolemization doesn't work:

forall x q(x,x).
forall x, y (p(x) <- q(y,x))
?- exists p(x)

The above problem usually yields a Yes when doing a refutation proof with the usual Skolemization. But if one uses Hewitt's Skolemization of a logically equivalent formula of the second line, i.e. if one sets the following clause:

forall x (p(x) <- q(f(x),x))

The above problem now yields a No in a refutation proof, since q(x1,x1) and q(f(x2),x2) do not unify. So your Skolemization does change the logic of the problem and is not allowed.
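The non-unifiability claim can be checked mechanically. Below is a minimal Python sketch of first-order unification with the occurs check; the term representation (variables as capitalized strings, compound terms as tuples) is invented for this illustration. Binding X1 to f(X2) forces X1 = X2 in the second argument, and the occurs check then rejects X2 = f(X2):

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Does variable v occur inside term t (under s)?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s):
    """Return an extended substitution, or None on failure."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if is_var(t2):
        return unify(t2, t1, s)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# q(X1, X1) vs q(f(X2), X2): fails on the occurs check
print(unify(("q", "X1", "X1"), ("q", ("f", "X2"), "X2"), {}))  # prints None
```

Note that standard Prolog omits the occurs check by default, so the failure here corresponds to sound unification as used in resolution theorem proving.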

For Skolemization you are not always allowed to take a subformula and replace the existential quantifier there by a Skolem function. In doubt, one has first to form the prenex normal form.

There is no fragility in this process. It doesn't matter which prenex normal form you take, since all the prenex normal forms of a formula are logically equivalent. It might matter for the execution, though, as concerns speed and memory, and there are techniques for mini-scoping etc.

But since resolution is complete for first-order logic, it again doesn't matter as concerns completeness. Since Hewitt's Skolemization is wrong, I cannot follow his chain of arguments concerning a difference between procedural embedding and procedural interpretation.

Of course, the more optimizations a system does on its own, the less it is possible to influence the execution of the given formulas by choosing logically equivalent formulations. For example, a deductive database typically does conjunction reordering, so that A&B and B&A execute the same way, whereas in Prolog they execute differently.

Hope this helps.

See "Computation is not subsumed by deduction"

Unfortunately, the Brachman & Levesque 2005 lecture that you quoted is now thoroughly obsolete with respect to the current state of the art.

With respect to Kowalski's slogan "algorithm = logic + control" (above), see the forum topic Computation is not subsumed by deduction (contra claim by Kowalski).



Some backward chaining in a theory for Countably's Umbrella

Below are some backward-chaining Logic Programs for a theory Countably.

when ⊩Countably "It's raining." → ⊩Countably "Countably has an umbrella."∎ 
  // When there is a goal that it is raining, set a subgoal that Countably has an umbrella.
when ⊩Countably "Countably has an umbrella." → ⊩Countably "It's raining."∎ 
  // When there is a goal that Countably has an umbrella, set a subgoal that it is raining.

PS. There is no "forcing" of bi-implication.



Planner was the first definitive LP language but ...

Planner was the first definitive Logic Program language to embody Procedural Embedding (in opposition to uniform proof procedures using resolution), but compromises were made because of the limited capabilities of available computers. In particular, because of limited memory and processing power, a backtracking control structure was adopted. The limitations of Planner were duplicated by Prolog, with the additional limitation that Prolog allowed only backward chaining.

See the following article for a history of Logic Programs and an explanation of the current state of the art: Inconsistency Robustness in Logic Programs



Unfortunately, Prolog derivatives didn't have both ⊢ and ⊩

Unfortunately, Prolog derivatives didn't meet the requirement to have both ⊢ and ⊩.

Kowalski attempted a rescue in his 1988 article "Logic-Based Open Systems" in response to my 1985 article "The Challenge of Open Systems." However, his response only further muddied the waters.

The proposal is that Logic Programs have a clear definition: programs in which computational steps are logically inferred. (edited for clarity)



Logic Program: each computational step is logically inferred

The proposal is that in a Logic Program each computational step (e.g., in the Actor Model of computation) is logically inferred (e.g., in Direct Logic). The proposal has consequences: for example, the "Guarded Horn Clause" (GHC) language used by ICOT is not a Logic Program language because, owing to the "commit" operation, computational steps in GHC are not in general logically inferred.

PS. It appears that ⊢ and ⊩ are suspiciously absent from your alleged examples ;-)



<No more jokes>

<No more jokes>, studying is now required.

"Tell" and "Ask" are poor substitutes for ⊢ and ⊩

"Tell" and "Ask" are poor substitutes for ⊢ and ⊩. Unfortunately, "Tell" and "Ask" have too much psychological agent baggage to be used as the foundation of Logic Programs.



⊢ and ⊩ are symbols in Logic Programs

⊢ and ⊩ are symbols in Logic Programs. It can be a challenge to provide succinct, accurate readings in natural language, e.g., the English renditions you attempted may not be accurate or readable.

dwelling on natlang translation

I find that excessive dwelling on the difficulties of translation into a natural language is apt to betray not enough appreciation of the dynamism of natural language, and not enough to say about the language being translated from.

Natural language can help people understand technical symbols

Using natural language can help people understand technical symbols. But it needs to be done well. Feynman was a master expositor :-)



⊢ and ⊩ are intelligible symbols in Logic Programs

⊢ and ⊩ are perfectly intelligible symbols that are fundamental in Logic Programs to perform Procedural Embedding. Unfortunately, the field of Logic Programs has a checkered development with many misunderstandings that persist to this day :-(

See background at the LtU topic Inconsistency Robustness in Logic Programs, which has references to several other papers that use ⊢ and ⊩ (and their predecessors).



Polya had a good intuitive understanding of ⊢ and ⊩

Polya had a good intuitive understanding of ⊢ and ⊩, e.g., see How to Solve It.

Understanding the how of "what" and the what of "how" is work in progress.


Using natural language can help people understand technical symbols. But it needs to be done well.

See xkcd 805 (note the mouseover).

But surely you jest: Logic Programs not *that* difficult ;-)

Very funny!

But surely you jest: Logic Programs are not that difficult ;-)

The history of programming

The history of programming suggests we're still in the process of realizing it's much more difficult than we thought.

One of greatest challenges: Inconsistency Robustness

One of the greatest challenges is Inconsistency Robustness, i.e., information system performance using pervasively inconsistent information.

Normal science progresses

Normal science progresses because it encourages individual researchers to concentrate, one at a time, on problems that are small enough for them to tackle. However, this narrow focus on particular problems (however large they might individually seem whilst one is focusing on them) may cause us to lose track of the scope of the larger problem.

Actually, I suspect the biggest challenge facing us is known, in its most visible aspect, by the relatively innocuous name "software bloat". Our hardware keeps improving, yet we really have no clue how to tell it what to do, and as we attempt to tell it how to do bigger and bigger things, our software sprawls chaotically almost as fast as our hardware improves.

Software growth is just getting started but currently stalled

Software growth is just getting started but is currently stalled by obsolete technology based on closed-world assumptions. Object-oriented programming was invented by Dahl and Nygaard to perform closed-world simulations. But closed-world assumptions are no longer appropriate between systems, or even within a large system. New paradigms are needed for software development to address massive concurrency and pervasive inconsistency.

About right, with a caveat

That sounds about right. With a caveat. It is quite notable that we still don't have a clue how to handle concurrency, a problem we've been aware of for decades and don't seem to me to have made much progress on; but I would add that we haven't altogether figured out yet how to tell a single processor what to do, either.

Actors are becoming the standard model for concurrency ;-)

Actors are becoming the standard model for concurrency ;-)

Any objections?


You have a COI judging the reception of Actors.

How do Actors stand as a model of concurrency?

How do Actors stand as a model of concurrency?

LtU doesn't need the "View from nowhere" ideology

LtU doesn't need the "View from nowhere" [Nagel] ideology with its alleged objectivity and associated censorship of knowledgeable contributors that has been one of the major downfalls of Wikipedia (and most recent establishment press coverage).

Wikipedia's approach to

Wikipedia's approach to neutrality doesn't work, because it requires the Wikipedian community to collectively make subjective decisions "correctly". That's a disaster, not least because it teaches people that neutrality is something that can't be achieved by individuals. It also tends, as you note, to lead to suppression of ideas outside the mainstream. Some of the most notorious examples of failures of the Wikipedian system — I mean, failures that the Wikipedian community itself recognizes in retrospect — are about material that obviously ought to have been treated as legitimate being steadfastly rejected as not mainstream.

In this particular case, though, I was using a quite different standard (that of Wikinews), and my point was simply that your view of the success of Actors should be taken with a grain of salt, by keeping in mind where you're sitting in relation to the thing you're looking at.

COI = conflict of interest.

COI = conflict of interest.

Sorry, should have specified

Yep. Sorry, should have provided that expansion. For journalistic purposes, COI implies you shouldn't be the one to write the article. (Your opinion may, of course, be reported in the article, without necessarily compromising the neutrality of the article, provided the opinion is attributed to you rather than being reported as fact.)

Everyone has to stand someplace (even if they pretend otherwise)

Everyone has to stand someplace (even if they pretend otherwise).

A strength of the Encyclopedia Britannica was that Einstein wrote the first draft of the article on General Relativity. The cult of the amateur and disdain for professionals is one of the greatest deficiencies of Wikipedia. Another weakness is the pretense that what Wikipedia anonymous Administrators impose is neutral "fact" as opposed to the mere "opinions" of professionals who are accountable (and therefore can be accused of having a "Conflict of Interest").

Do you see the difference

Do you see the difference? One is asking somebody to explain a thing that they have created. The other is asking someone not to write the evaluation of what they have created.

(You don't have to bring Wikipedia up, because we are not talking about Wikipedia.)

Avoiding an issue by attacking the messenger

Comments and remarks should be discussed on their own merits in terms of issues and arguments. Conflict of Interest (abbreviated COI on Wikipedia) is a standard personal attack that is used to avoid discussing issues and carry out censorship.

Of course, authors should stand behind what they are saying and the credibility of authors can be taken into account.

Conflict of interest is

Conflict of interest is abbreviated COI a lot more widely than that. And when you say it's a form of personal attack, that's nonsense. It's a reasonable concern routinely raised in discussion between reasonable people, and doesn't reflect badly on the party who has the COI unless they ignore the warnings of others and plow ahead with stuff that one oughtn't to do when one has a COI. If you get accused of it a lot on Wikipedia... I met a guy once who was deathly afraid of poison ivy because, he said, he was so allergic to it that he could get a bad case of it just from coming within a hundred feet of the stuff. In further conversation, it emerged that he didn't know what poison ivy looks like.


and doesn't reflect badly on the party who has the COI unless they ignore the warnings of others and plow ahead with stuff that one oughtn't to do when one has a COI.

So it's a tool for controlling others --- "shut your mouth or be shamed!"

He's not posting under a pseudonym, and I think we all know his connection to the topic under discussion. Instead of providing an argument, you're making snide comments. It's quite distasteful. You don't have to eliminate him from the discourse by these methods; you're free to ignore him.

I kindly suggest we go back

I kindly suggest we go back to the discussion of programming languages.

It is quite notable that we

It is quite notable that we still don't have a clue how to handle concurrency

I don't think that's true. We have quite a few clues! And I think map-reduce, Bloom, Paxos and concurrent revisions are decent progress on different aspects of this question.

Git is used to deal with difficult concurrency issues

Git is used to deal with difficult concurrency issues in software development even though Git itself doesn't offer much automation.

Inconsistency robust?

I think it is a mistake to use consistency to characterize ordinary information. In a sense, information defines ambiguity; we can't also expect it to be consistent. Four bits is more than two bits, but two isn't wrong because it isn't four. Ambiguity is normal and even desirable. Truth or certainty means more bits; usually, the more we know, the more certain we are.

At scale, more information means more inconsistency

According to Saki:

A little inaccuracy sometimes saves tons of explanation.

According to Rovelli:

There is a widely used notion that does plenty of damage: the notion of "scientifically proven". Nearly an oxymoron. The very foundation of science is to keep the door open to doubt. Precisely because we keep questioning everything, especially our own premises, we are always ready to improve our knowledge. Therefore a good scientist is never 'certain'. Lack of certainty is precisely what makes conclusions more reliable than the conclusions of those who are certain: because the good scientist will be ready to shift to a different point of view if better elements of evidence, or novel arguments emerge. Therefore certainty is not only something of no use, but is in fact damaging, if we value reliability.

I am objecting to the word

At scale, more information means more inconsistency.

I am objecting to the word "inconsistency" in this context because it has a well-defined meaning in mathematical proof as a priori true. Information theory is a process in time. Ordinary truth, right or wrong, develops out of the process in time. We need to separate this kind of truth from the mathematical kind. At scale, certainty can and should increase; the fact that it may not is a problem of another kind.

Contradiction: both a proposition and its negation asserted

A contradiction is manifest when both a proposition and its negation are asserted, even if by different parties, e.g., the New York Times said “Snowden is a whistleblower.”, but the NSA said “Snowden is not a whistleblower.”

Information not logic.

From an information point of view, we have two one-bit information sources. Formally they can be combined to give a probability of 50% that "Snowden is a whistleblower". I see it as information, not logic. Information has a logical perspective of its own that we could bring in, but it wouldn't involve contradiction as in proof by negation, since an embodied theory such as information is always true. It is the purpose of formal logic to determine "truth", according to Aristotle.

Argumentation is fundamental to Inconsistency-Robust information

Argumentation is fundamental to Inconsistency-Robust Information Systems. For inconsistency robustness, it is not sufficient to simply calculate the percentage of published opinions in favor of and opposed to a proposition. Instead, the question is: what are the arguments in favor of "Snowden is a whistleblower." and what are the arguments against?

Virtual Reality?

"Inconsistency-Robust Information Systems" In this expression there are too many abstractions going in too many directions for my taste. Information systems process information, not inconsistency. A court of law is the proper venue for a determination of logical truth.

Systems process information, which is pervasively inconsistent

In practice, information systems process information that is pervasively inconsistent, e.g., climate models and theories of the human brain. In other words, inconsistency is a pervasive property of information in practice.

That is your interpretation

That is your interpretation, not mine. Information systems process information. There are ordinary meanings of the word "consistency", but we all know that this is not Hewitt's meaning. Hewitt consistently uses the word in the same context as "paraconsistent", negation, and proof by contradiction. These concepts invoke the logic of Frege and Russell, which is incompatible with an "information system" as I define it. Information theory uses algebra and linear systems theory, which are compact and embodied in a process. At this point I feel that I am getting outside the LtU guidelines for proper posting, so I will leave it at that.

A proposition and its negation are contradictory

“Snowden is a whistleblower.” and “Snowden is not a whistleblower.” are contradictory propositions. Contradictions are manifestations of inconsistency.

Can't get much simpler than this ;-)


The problem is your initial translation of positions into logic. There would be no direct contradiction or inconsistency if you represented (e.g. using a modal logic) the origins of each proposition.

And modal logic is deeply related to information systems, e.g. for modeling the spatial-temporal properties and flow of information.


“Snowden is a whistleblower.” and “Snowden is not a whistleblower.” are contradictory propositions. Contradictions are manifestations of inconsistency.


Intension or extension?

In thinking a little more about this, there is one more point to make which should clarify both positions. Your language use is intensional while mine is extensional. I would clarify my position by saying that no one, not me or you, should ever use a word like "information" without being clear about its intended mode (intension or extension). The issue is too important given the awesome significance of information in contemporary society. In intension it is natural to talk about negation, but not in extension.

Edit: There is contemporary discussion of the concept of "disembodied information." This is tricky but important. It is about the meaning and semantics of media content. Is there any? If not, why not?

Information Inconsistency: both intension and extension

Information Inconsistency has both intension and extension.

An example of an information system having intension inconsistency is for it to manifest both the proposition "Snowden is a whistleblower." and the proposition "Snowden is not a whistleblower."

An example of an information system having extension inconsistency is for there to be some proposition for which the information system can infer both the proposition and its negation: both the proposition and its negation hold in the information system.

A contradiction implies

A contradiction implies non-existence; thus my point that negation does not extend. Extensions are existent things, not arguments.

Denial is not viable in face of existent pervasive inconsistency

Denial is not viable in face of existent pervasive inconsistency.

Raising issues of Inconsistency Robustness, Obama deleted the following statement from his 2008 campaign website:

Protect Whistleblowers: Often the best source of information about waste, fraud, and abuse in government is an existing government employee committed to public integrity and willing to speak out. Such acts of courage and patriotism, which can sometimes save lives and often save taxpayer dollars, should be encouraged rather than stifled. We need to empower federal employees as watchdogs of wrongdoing and partners in performance. Barack Obama will strengthen whistleblower laws to protect federal workers who expose waste, fraud, and abuse of authority in government. Obama will ensure that federal agencies expedite the process for reviewing whistleblower claims and whistleblowers have full access to courts and due process.

It may be that Obama's statement on the importance of protecting whistleblowers went from being a promise for his administration to a political liability. But there are existent contradictions between what Obama said then and what he is saying now.

Contradictions certainly

Contradictions certainly exist, but they don't extend into information. The reason information is measured in bits is due to its physicality. It is not mass/energy physics but thingness and/or existence.

This goes to my point about embodiment: as long as there is contradiction there is no information and no thing; it is nothing, empty and devoid of meaning, except that it has the meaning of being a contradiction.

If information carried contradiction it would not have a measure.

Meaningful information can have contradictions

There is a lot of meaning in information about the ongoing controversy about contradictory statements as to whether Snowden is a whistleblower.

Binary bits are just one mechanism to store, communicate, and process information. Neurons, sound waves, and marks on paper are other mechanisms used by humans to store, communicate, and process information.


The problem needs to be restated in a manageable way.

Neurons, sound waves, and marks on paper are other mechanisms used by humans to store, communicate, and process information.

In information theory they all reduce to bits. In my 1948 dictionary, information is defined as follows:

knowledge communicated or received concerning some fact or circumstance.

There are two issues here: knowledge (facts) and communication. The communication problem has been solved since Shannon. The knowledge, or epistemology, problem is more difficult. I follow C. I. Lewis.

Meaningful information is *not* in general reducible to bits.

Meaningful information is *not* in general reducible to bits.

Meaningful information can be processed (e.g. by humans and/or analog and digital computers), recorded as signs (e.g. on paper or on magnetic tape), or transmitted as signals (e.g. using facial gestures or sound waves). Some of the above can be digitized or a version performed digitally, but that does not reduce meaningful information to bits.

We are collectively processing, storing, and transmitting meaningful information as to whether Snowden is a whistleblower.

It is a theory Carl, theory

It is a theory, Carl; theory is as close as we can get within a science or mathematics. How do you propose to explain it? Scientifically, that is?

Meaningful information is *not* in general reducible to bits.

Of course not, theory is never more than a partial insight.

Meaningful information is *not* bits; although bits can be used

Meaningful information is *not* bits; although bits can be used to store, transmit, and process information. Consequently, theories about bits are not theories about meaningful information. On the other hand, Direct Logic is about making inconsistency-robust inferences using meaningful information, even if the information is inconsistent.

Platonic backhand

"Meaningful information" is an intentional expression roughly in the same class as "golden mountains" or "round square". And another thing, analog communication and computation doesn't support negation.

Meaningful information is not necessarily knowledge

Meaningful information is not necessarily knowledge. Opinion and fringe views can also be meaningful information.

Reminds me of a story.

Meaningful information is not necessarily knowledge. Opinion and fringe views can also be meaningful information.

Yes, meaningful info includes a lot of things such as conjecture, criticism, and heuristic model-fitting search paths. Anything that makes you come up with a different answer based on priming or starting point is at least partly meaningful. Lunatic views can be useful if they remind you of models you forgot to apply, along the lines of: "You're wrong, but I just noticed I missed something."

Sometimes I have no idea what you're going on about here, though. What train of thought are you pursuing in this conversation? It looks subtle so far. The rest of this is an anecdote about models, knowledge, and folks with strong opinions.

A few years ago Google tried to launch a knowledge-based service, which I gather was spearheaded by Udi Manber. I have no clear idea why it has since languished. Maybe you have some idea. I assume it's because knowledge offerings are a hard sell.

About ten years ago I interviewed at A9 when Udi Manber was the CEO, and he asked me what I thought was interesting. When I asked if he meant specifically or generally, Manber said he meant very generally. After I noted no one ever asked me that, I considered a moment, and finally replied, "Representational systems." At this point the interview crashed and burned, and I was figuratively thrown out, after Manber repeated those words twice in apparent mounting anger before going into wrath of god mode, assuring me my answer was not in the least acceptable and using many words to convey this. It was odd my idea of what is interesting was an instant fail. Maybe I should have said continuation passing.

Extracting knowledge from that story would be hard, wouldn't it? Nuance implied by story and format contains meaningful info, even if it's hard to reduce to propositions.

Meaningful information can be more than assertions

Meaningful information can be more than propositions. What about declarations, commands, goals, etc.?

One more idea with respect

One more idea with respect to extension and embodiment. Unless we use probability, only one of the above statements can be extended, because only one can be true. If we don't know which is true, then maybe we shouldn't say either. But we could say instead "Snowden might be a whistleblower". ;)

"Snowden might be a whistleblower." is too wishy-washy

"Snowden might be a whistleblower." is too wishy-washy. The issue for an information system is: what are the arguments in favor of "Snowden is a whistleblower." and what are the arguments against?




The intellectual history of the twentieth century is yet to be written. We are still too close to these events to be credible, because of the "conflict of interest" issue mentioned above. When this story is told, it will be unlike any history ever written. It will be a story of the human mind itself and how it evolved and changed in modern history. It is still difficult to talk about the most recent chapters of this tale because they arouse too much emotion and COI. But we must try, because the story is not over yet. The most important chapters are yet to be written. It is your life we are talking about. There are many places to start. Much of it has been posted on LtU. If you really want to know, look around, and connect the dots.

Important CoI: between those who speak and those who censor them

An important Conflict of Interest is between those who speak and those who attempt to censor them.



LP should not be restricted to Horn-clause syntax

Over the course of history, the term “Functional Program” has grown more precise and technical as the field has matured. “Logic Program” should be on a similar trajectory and should not be artificially restricted to Horn-clause syntax. Accordingly, “Logic Program” should have a precise general characterization. Kowalski has advocated Horn-clause syntax for Logic Programs. In contrast, our approach has been to explore Logic Programs defined by a general principled criterion, namely “each computational step is logically inferred” using inconsistency-robust Natural Deduction, as opposed to an (often undesirable) reduction to Horn-clause syntax.

PS. Thanks for noticing the duplicated sentences in Inconsistency Robustness in Logic Programs



Those who don't know history are doomed to repeat it

'Those who don't know history are doomed to repeat it.' — Edmund Burke



Inconsistency Robust Logic

For Inconsistency Robust Logic see the following: Formalizing common sense



Any *specific* objections to Inconsistency Robust Logic?

Do you have any specific objections to Inconsistency Robust Logic?



Unfortunately, Minimal Logic is *not* Inconsistency Robust

Unfortunately, Minimal Logic is not Inconsistency Robust, e.g., it has ¬Φ, Φ ⊢T ¬Ψ




the parent post was edited.

In Direct Logic, Φ⇒Ψ means Φ⊢Ψ and ¬Ψ⊢¬Φ

In Direct Logic, Φ⇒T Ψ means Φ⊢T Ψ and ¬Ψ⊢T ¬Φ. Consequently, your "bootstrap" of negation (above) is invalid.

Hint: It is necessary to use Direct Logic to prove something in Direct Logic.



Atomic Boolean Direct Logic is a relevancy logic

Atomic Boolean Direct Logic is a relevancy logic in the sense that if Φ⊢⊥Ψ, then Atoms[Ψ]⊆Atoms[Φ].

Of course, ⊥ is an inexpressive, inconsistency-robust, computationally decidable theory in Atomic Boolean Direct Logic, which allows only atomic propositions composed with conjunction, disjunction, and negation.

PS. In the context of Direct Logic, I have no idea what you mean by "->".

Concurrent logic

From what I can remember concurrent logic could handle shared resources quite well. Using streams (merged lists) it wasn't difficult to manage both concurrent reading and writing of shared resources.

ICOT languages strictly speaking not Logic Programs

ICOT languages were, strictly speaking, not Logic Program languages: they involved computational steps that were not logically justified. To see this, write out the logical axioms for merging lists.


If you read the operator '|' as conjunction, then you get the merged-list relation as a logical consequence of the formulae.

What computational steps are not logically justified?

ICOT Prolog-style clause languages are not logic programs

Because of the commit operator, ICOT Prolog-style clause languages are not, strictly speaking, Logic Programs; i.e., they cannot be implemented using mathematical logical inference.
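For concreteness, here is a sketch (Python threads; the names are my own, not ICOT's) of the kind of indeterminate merge that commit implements. Which input contributes the next output element is decided by arrival order and scheduling, a choice that is committed to rather than logically inferred from the program text:

```python
import queue
import threading

DONE = object()  # sentinel marking an exhausted source

def merge(*sources):
    """Merge several iterables into one list; interleaving follows scheduling."""
    out = queue.Queue()

    def pump(src):
        for item in src:
            out.put(item)
        out.put(DONE)

    for src in sources:
        threading.Thread(target=pump, args=(src,)).start()

    merged, remaining = [], len(sources)
    while remaining:
        item = out.get()  # commit to whichever source delivered first
        if item is DONE:
            remaining -= 1
        else:
            merged.append(item)
    return merged

result = merge([1, 2, 3], ['a', 'b'])
# Each input's internal order is preserved, but the interleaving can vary run to run
```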

Proper Logic Programs for Merge

Below are proper Logic Programs for merge in which each computational step is logically inferred:

when ⊩ Merge[[A, ...As], B, C] → ⊩ C=[A, ...Cs], Merge[As, B, Cs]∎
when ⊩ Merge[A, [B, ...Bs], C] → ⊩ C=[B, ...Cs], Merge[A, Bs, Cs]∎
when ⊩ Merge[[], B, C] → ⊩ C=B∎
when ⊩ Merge[A, [], C] → ⊩ C=A∎

There are also the following:

when ⊩ Merge[A, B, []] → ⊩ A=[], B=[]∎
⊢ Merge[[], [], []]∎ 
when ⊢ Merge[A, B, []] → ⊢ A=[], B=[]∎
when ⊢ Merge[[], B, C] → ⊢ C=B∎
when ⊢ Merge[A, [], C] → ⊢ C=A∎
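One way to read the first four clauses above (my own sketch, not Hewitt's semantics): each clause licenses an inference step, and the derivable answers are exactly the order-preserving interleavings of the two inputs. A small Python enumeration under that reading:

```python
def merges(a, b):
    """Yield every interleaving of lists a and b licensed by the Merge clauses:
    take the head of a, take the head of b, or finish when one side is empty."""
    if not a:
        yield list(b)     # Merge[[], B, C] with C = B
        return
    if not b:
        yield list(a)     # Merge[A, [], C] with C = A
        return
    for rest in merges(a[1:], b):    # Merge[[A, ...As], B, C] clause
        yield [a[0]] + rest
    for rest in merges(a, b[1:]):    # Merge[A, [B, ...Bs], C] clause
        yield [b[0]] + rest

print(list(merges([1], [2])))  # [[1, 2], [2, 1]]
```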

Anything else about concurrency and parallelism?

Let's try the left margin again, to ease up on deeply indented nesting. That long sub-thread appears to be about inconsistency, inaccuracy, ambiguity, etc. I'm interested in your point, but your responses can be so short I can't make out the angle you want to work at the moment. Below I hope I'm not rude in suggesting what I want to hear more about; please use your time as you wish.

Meaningful information can be more than propositions. What about declarations, commands, goals, etc.?

Sure, I'm very agreeable toward relaxed and fluid approaches to ontology—however you want to slice and dice categories and definitions. (In my early 20's, now long ago, I was obsessive about researching related ideas in language, ontology, models, psychology, and philosophy of science. I endeavor not to refer to this much because it's seldom constructive, but my worldview is flexible.)

I'm not very interested in inconsistency robustness in logic programs, so I've been sitting out. But if you have more to say about the following from your OP, I'm all ears when it comes to concurrency in technology:

... correctly placed tremendous emphasis on research in concurrency and parallelism as an emerging computing paradigm

If part of your thesis is about concurrency and parallelism, I usually want to hear more about those things. But if it's about logic and building knowledge databases, that seems like a dry well to me, because it's really easy to capture entropy in the form of self-defeating categorization with respect to problem solving. I'm curious regarding your intuitions about concurrent system behavior, but not so much about logic and math until such time as it blocks progress in tasks I see ongoing.

Parallelism and Concurrency in the Actor Model

ICOT (Japanese Fifth Generation Project) correctly placed tremendous emphasis on research in concurrency and parallelism as an emerging computing paradigm

See following link:
Parallelism and Concurrency in the Actor Model

followup in the other thread

I'll respond to the new discussion topic by-and-by, but I'm not looking forward to it, because I'll need to explain a bit about my motivation for context. It's not clearly programming language related, though parts of it are. Still, it's a good idea to keep to a PL agenda here. After I briefly explain the sort of thing I have in mind, I'll quote small parts from your paper and comment from the POV established by the context of the idea I summarized. If you respond to any of those, I'll keep doing it for other bits of your paper I see related.