Aggregations (e.g., sets) in Logic Programs

Prolog and its derivatives have lacked adequate ways to compute with aggregations, e.g., sets.

For example, suppose there is a ground-complete predicate Link[aNode, anotherNode, aCost]
that is true exactly when there is a link from aNode to anotherNode with cost aCost.

When ⊩ Path[aNode, aNode, aCost]→ // when a goal is set for a cost between aNode and itself
   ⊢ aCost=0▮           // assert that the cost from a node to itself is 0

The following goal-driven Logic Program works forward from start to find the cost to finish:

When ⊩ Path[start, finish, aCost]→ 
   ⊢ aCost=Minimum {nextCost + remainingCost
                     | ⊨ Link[start, next≠start, nextCost], Path[next, finish, remainingCost]}▮
          // a cost from start to finish is the minimum of the set of the sum of the
              // cost for the next node after start and
                   // the cost from that node to finish

The following goal-driven Logic Program works backward from finish to find the cost from start:

When ⊩ Path[start, finish, aCost]→ 
   ⊢ aCost=Minimum {remainingCost + previousCost 
                      |⊨ Link[previous≠finish, finish, previousCost], Path[start, previous, remainingCost]}▮
          // the cost from start to finish is the minimum of the set of the sum of the
              // cost for the previous node before finish and 
                  // the cost from start to that Node 

Note that the above Logic Programs work together concurrently providing information to each other.
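For readers who want something runnable, here is a hedged Python sketch of the same specification. This is not Hewitt's concurrent ActorScript semantics: the `links` dictionary, the function name `min_cost`, and the choice of Dijkstra's algorithm (which terminates for non-negative costs) are all assumptions made for this example.

```python
import heapq

def min_cost(links, start, finish):
    """Minimum cost from start to finish over directed links.

    `links` maps a node to a list of (next_node, cost) pairs, mirroring
    the Link[aNode, anotherNode, aCost] predicate. Dijkstra's algorithm
    stands in for the declarative Minimum {...} aggregation, which does
    not by itself prescribe a search strategy.
    """
    frontier = [(0, start)]          # the cost from a node to itself is 0
    settled = {}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in settled:
            continue
        settled[node] = cost
        if node == finish:
            return cost
        for nxt, step in links.get(node, []):
            if nxt != node and nxt not in settled:   # skip self links (non-moves)
                heapq.heappush(frontier, (cost + step, nxt))
    return None                      # no path exists

links = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)]}
```

With this table, `min_cost(links, "a", "c")` prefers the two-hop route of total cost 2 over the direct link of cost 4.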

For more information see Inconsistency Robustness for Logic Programs


relationship of classical direct and inconsistency robust logics

I'm confused about something.

Compared to Classical Direct logic, I gather that Inconsistency Robust Direct Logic tosses out "extraneous OR introduction" and "proof by contradiction".

I notice that I.R.D.L has ¬¬Φ ⇔ Φ which I believe is actually not an axiom or theorem of Classical Direct Logic. Is that correct? (CDL has Φ ⇒ ¬¬Φ but not the bi-implication.)

How are these two approaches to negation reconciled?

Classical Direct Logic is a Mathematical foundation for IRDL

Classical Direct Logic is a foundation for Inconsistency Robust Direct Logic. Both of them have Φ⇔¬¬Φ.

Excluded middle?


8 deduction rules are given for CDL and they don't include or prove excluded middle or the bi-implication Φ⇔¬¬Φ.

(referring to the direct logic paper (page 47))

I saw an earlier version of the DL paper that included the bi-implication but had assumed it was removed on purpose specifically to keep the deduction rules intuitionist.

De Morgan and intuitionist pseudo-excluded-middle

The 8 inference rules prove De Morgan:

  ¬(Φ ∨ Ψ)     ⓘ hypothesis

    Φ          ⓘ hypothesis
    Φ ∨ Ψ      ⓘ extraneous ∨ introduction

    (Φ ∨ Ψ) ∧ ¬(Φ ∨ Ψ)  ⓘ contradiction (via ∧ introduction)

  ¬Φ           ⓘ inference

    Ψ         ⓘ hypothesis
    Φ ∨ Ψ     ⓘ extraneous ∨ introduction

    (Φ ∨ Ψ) ∧ ¬(Φ ∨ Ψ)  ⓘ contradiction (via ∧ introduction)

  ¬Ψ          ⓘ inference

  (¬Φ ∧ ¬Ψ)    ⓘ ∧ introduction

¬(Φ ∨ Ψ) ⇒ (¬Φ ∧ ¬Ψ)    ⓘ Q.E.D. (via ⇒ introduction)

and we easily get an intuitionist version of pseudo-excluded-middle:

  ¬(Φ ∨ ¬Φ)    ⓘ hypothesis

   ¬Φ ∧ ¬¬Φ    ⓘ inference by De Morgan and a contradiction

¬¬(Φ ∨ ¬Φ)     ⓘ conclusion

but I see no rule that allows for double-negation elimination (which is a good thing, because otherwise we'd have excluded middle).

Inconsistency Robust De Morgan: inference and not implication

In Inconsistency Robust Direct Logic, De Morgan rules use inference and not implication:

¬(Ψ∧Φ)  ┤├T  ¬Ψ∨¬Φ
¬(Ψ∨Φ)  ┤├T ¬Ψ∧¬Φ

There is a bug in the article in HAL :-(


Does this mean De Morgan's rules are elevated to axioms in Direct Logic?

Inconsistency Robust De Morgan are axioms

Although classical De Morgan are theorems, inconsistency robust De Morgan are axioms.

That was a good clue.

In Inconsistency Robust Direct Logic, De Morgan rules use inference and not implication:
There is a bug in the article in HAL :-(

I turned to "Formalizing common sense reasoning for scalable inconsistency-robust information integration using Direct Logic™ Reasoning and the Actor Model" on arXiv and think I understand better now.

I'll post a separate comment about that.


I am starting to find the arguments for direct logic and the 'when' this 'assert' that method of logic programming quite nice. I have found the lack of sets, and the use of lists, in Prolog an "interesting" choice, but I assume that, as in Lisp, it was done for efficiency and implementation reasons.

In my own work (about which I make no claims of originality; I am slowly working from what I understand and considering how to improve it), I note that there is a correspondence between a set (of goals or assumptions) and first-order logic, and between a stack and linear logic.

I think it is interesting how the data structure, be it list, stack, set, ... (others?), determines the type of logic.

Do you still search for matches depth-first?

When a match fails do you still backtrack?

Logic Programs use concurrency instead of backtracking

In ActorScript, Logic Programs use concurrency instead of backtracking.

Breadth First

Does that mean you are doing a breadth first search? Do you return the complete answer set? What happens with infinite answer sets?

It seems concurrency (breadth first) and depth first with backtracking are just different search strategies.
In theory we can substitute one for the other without changing the results, but changing the memory profile. Depth first uses a lot less memory, but can get stuck. In my work I use depth-first with iterative-deepening, which doesn't get stuck, and should give identical results to breadth first.

It would be interesting to implement direct logic with iterative-deepening and backtracking for performance comparisons.
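As a hedged illustration of the comparison being made, here is a small Python sketch of depth-first search with iterative deepening over the same kind of link table. The names and representation are invented for the example; note that, like breadth-first search, it finds a path with the fewest links, not necessarily the cheapest one.

```python
def iddfs_cost(links, start, finish, max_depth=20):
    """Iterative-deepening depth-first search for a path cost.

    Re-runs a depth-limited DFS with a growing limit, so it cannot get
    stuck in loops the way plain depth-first search can, yet it keeps
    the small memory footprint of DFS. `links` maps a node to a list
    of (next_node, cost) pairs.
    """
    def depth_limited(node, depth, cost):
        if node == finish:
            return cost
        if depth == 0:
            return None              # limit reached on this branch
        for nxt, step in links.get(node, []):
            found = depth_limited(nxt, depth - 1, cost + step)
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):
        result = depth_limited(start, limit, 0)
        if result is not None:
            return result
    return None

links = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)]}
```

Here `iddfs_cost(links, "a", "c")` returns 4, the cost of the single-link route, because the shallower solution is found at a smaller depth limit than the cheaper two-link route.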

Not enough there to be a path finding search


Logic programs can be interpreted in a way that gives you a search, but what Hewitt gave can't get you to a finite constructive solution under any interpretation, let alone a meaningful search, and saying that it's concurrent rather than backtracking doesn't help either.

Also, specifying that self links are free not only doesn't help, it's pernicious - why would you allow/search non-moves?


[edit] Even with memoizing this doesn't get you to finite, because it allows loops an unlimited number of times. If you rewrote it to disallow loops and memoized, you'd still have a combinatorial explosion from trying every possible non-looping path.

is a perfectly normal fact that would fit in any logic language. How is this different from them?

Still Breadth First.

I still think concurrent equates to breadth first. For each choice point you launch a new thread that searches in parallel. You allow looping (just as in a breadth-first search, a loop is not pathological the way it is in depth-first) and you accept the first answer you find.

Breadth first guarantees that will be the shortest solution; perhaps parallel does not give you that guarantee, but if scheduling is relatively fair, you should get one of the shorter solutions. Note you don't need backtracking with breadth first, but the penalty is that you use exponentially more memory, as you have to remember all intermediate steps (if you want to be able to construct the proof/path of the solution found). If there are no solutions it may not terminate.

I don't see anything

that allows it to reject unfinished paths that are already longer than the shortest found. Do you see any way that it could know to terminate?

Terminate the thread group.

You can use a region like approach, so you have a thread group, and all the parallel threads created during the search are a member of the group. As soon as you find the first solution, you terminate all threads in the group, and return the solution and optionally its path/proof.

If you are using green-threads, you simply throw away all the continuations generated from this program when you find the first solution.
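The thread-group idea can be sketched in Python with `concurrent.futures`. This is only an illustration of the pattern described above; the function names and the branch stand-ins are invented for the example, and `Future.cancel` is best effort for branches already running.

```python
import concurrent.futures
import time

def branch(delay, answer):
    """Stand-in for one parallel branch of the search."""
    time.sleep(delay)
    return answer

def first_solution(branches):
    """Run every branch in a pool (the "thread group"), return the
    first result to complete, and cancel whatever is still pending."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(branch, d, a) for d, a in branches]
        done, pending = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in pending:
            f.cancel()               # terminate the rest of the group
        return next(iter(done)).result()
```

For example, `first_solution([(0.5, "slow"), (0.01, "fast")])` returns `"fast"`: the first branch to finish wins and the rest of the group is abandoned.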

Non negative costs

If the minimum is defined to shortcut rather than be a strict test then non-negative weights are enough to guarantee termination.


In this kind of logic program, do you still have a list of goals like Prolog, is it a set of goals? When forwards chaining don't you have a list/set of assumptions, instead of goals which are used in backwards chaining?

turnstiles, inference rules, and axioms

The general structure of inference and implication, in direct logic
and classical direct logic:

"Inference rules" in direct logic are sentences that contain free
variables of type "proposition". They are construction rules for
proofs out of propositions.

Inference rules are not themselves propositions because no
proposition contains a free variable of any type.

"Inconsistency Robust Direct Logic" is a name for a set of inference
rules which all theories share. These rules do not lead
to explosions in the face of a contradiction.

"Classical Direct Logic" is the inference rules of Inconsistency
Robust Direct Logic, plus some rules that do lead to
explosions in the face of a contradiction. These additional rules are
considered safe because we are very careful about what axioms to
include in Classical Direct Logic.

Classical Direct Logic also contains axioms for: types in general,
function types, discrete union types, subtypes, booleans, natural
numbers, type-based sets, etc. A bunch of basic, trustworthy stuff
which considerable experience tells mathematicians is consistent.

The officially (per Hewitt) most interesting axiom in Classical Direct
Logic is an axiom of countable, categorical induction. (I'm not
discussing that further here.)

My nomination for the second-most interesting axiom that applies to
all theories is:


  (⊢ Φ) ⇒ (⊢Τ Φ)

     ⓘ Simple and very well understood math is assumed to be
     ⓘ universally true.

That axiom says that every theorem of Classical Direct Logic
may be taken as an axiom of every theory.

Note very carefully however that the inference rules of
classical direct logic are not imported into every theory, only
the theorems.

The inference rules are not theorems because they are not propositions.

(If a theory wants to use more than just the inconsistency robust
logic rules, it must be an explicitly classical theory.)

Warning: this list of rules may have bugs.

Inconsistency robust inference rules:

    ⓘ These inference rules apply in every theory.

     ⊢Τ Ψ ⇒ Ψ

     ⊢Τ (⊢Τ Ψ∧Φ⇒ϴ) ⇔ ⊢Τ Φ∧Ψ⇒ϴ
     ⊢Τ (⊢Τ ϴ⇒Ψ∧Φ) ⇔ ⊢Τ ϴ⇒Φ∧Ψ

      ⊢Τ (Ψ⇒Φ∧ϴ) ⇒ ⊢Τ Ψ⇒Φ

       (⊢Τ Ψ⇒Φ, Ψ⇒ϴ) ⇒ (⊢Τ Ψ⇒Φ∧ϴ)
       (⊢Τ Ψ⇒ϴ, Φ⇒ϴ) ⇒ (⊢Τ Ψ∨Φ⇒ϴ)

    Implication implies inference:
     (⊢Τ Φ ⇒ Ψ) ⇒ Φ ⊢Τ Ψ

     (⊢Τ Ψ⇒Φ, Φ⇒ϴ) ⇒ ⊢Τ Ψ⇒ϴ

      ⊢Τ (Ψ⇒Φ) ⇔ ¬Φ⇒¬Ψ

    Implication infers disjunction:
      (Φ⇒Ψ) ⊢Τ Ψ∨¬Φ

    ∧ Introduction:
     (⊢Τ Φ, Ψ) ⇒ ⊢Τ Φ∧Ψ
         ⓘ proofs of Φ and Ψ comprise proof of their conjunction.

    ∧ Elimination:
     (⊢Τ Φ∧Ψ) ⇒  ⊢Τ Φ, Ψ
         ⓘ a proof of a conjunction proves both arms of the conjunction.

    ∨ Elimination:
     ⊢Τ ¬Φ∧(Φ∨Ψ) ⇒ Ψ
         ⓘ It is a tautology that "If not Φ yet either Φ or Ψ, then Ψ".

    ∨ Introduction:
     ⊢Τ Φ∧Ψ ⇒ Φ∨Ψ
         ⓘ It is a tautology that "If Φ and Ψ, then Φ ∨ Ψ".

    Disjunctive cases:
     ⊢Τ (Ψ∨Φ) ∧ (Ψ⇒ϴ) ∧ (Φ⇒Ω) ⇒ (ϴ ∨ Ω)
         ⓘ It is a tautology that "If Φ or Ψ, while Ψ would imply ϴ, and Φ would imply Ω, then ϴ or Ω".
         ⓘ A common special case occurs when Ψ and Φ are the same, and ϴ and Ω are the same:
         ⓘ      (Ψ∨Ψ) ∧ (Ψ⇒ϴ) ∧ (Ψ⇒ϴ) ⇒ (ϴ ∨ ϴ) ≡ Ψ ∧ (Ψ⇒ϴ) ⇒ ϴ

      ⊢Τ ((⊢Τ Ψ) ⇒ Ψ)
         ⓘ If a proposition is proved to be a tautology in a theory, it holds in that theory.

    Inconsistency Robust Proof by Contradiction:
      ⊢Τ (Φ⇒(Ψ∧¬Ψ)) ⇒ ¬Φ
         ⓘ It is a tautology that if a proposition implies a contradiction, then the negation of the proposition holds.

    Extraneous Assumption Introduction:
      (⊢Τ Ψ) ⇒ Φ ⊢Τ Ψ

Additional inference rules of Classical Direct Logic:

    ⓘ These inference rules also apply to any theory that uses classical logic.
    ⓘ Note: these inference rules lead to explosion in inconsistent theories.

    Classical Proof by Contradiction:

        (Φ ⊢ Ψ, ¬Ψ) ⊢ ¬Φ
        ⓘ Proofs that Φ leads to a proof of Ψ and to a proof of the negation of Ψ comprise a proof that ¬Φ holds.
        ⓘ This classical rule is not inconsistency robust, as follows:
        ⓘ Suppose that:  ⊢Τ P, ¬P
        ⓘ       P, ¬P            assumed proved contradiction
        ⓘ       ¬ϴ ⊢ P, ¬P       extraneous assumption introduction
        ⓘ       ⊢ ¬¬ϴ            classical proof by contradiction
        ⓘ       ⊢ ϴ              explosion

    Classical Deduction:
        (Ψ ⊢ Φ) ⇔ ⊢ (Ψ ⇒ Φ)
        ⓘ Φ can be proved from the assumption Ψ if and only if Ψ implies Φ.
        ⓘ This classical rule is not inconsistency robust, as follows:
        ⓘ       (Ψ ⊢ Φ) ⇒ ⊢ (Ψ ⇒ Φ) is not inconsistency robust:
        ⓘ       P, ¬P             assumed proved contradiction
        ⓘ       Ψ ⊢ ¬P            extraneous assumption introduction
        ⓘ       ⊢ Ψ ⇒ ¬P          classical deduction
        ⓘ       ⊢ ¬¬P ∨ Ψ         propositional equivalence
        ⓘ       ⊢ P ∨ Ψ           propositional equivalence
        ⓘ       ⊢ Ψ               explosion (since ¬P)

Pre Defined Symbols?

Which symbols have pre-defined meanings in these axioms for example:

(⊢Τ Φ∧Ψ) ⇒  ⊢Τ Φ, Ψ

it appears we have to already understand what implication "⇒" is, what inference in a theory "⊢Τ" is, and what conjunction of judgments "," is.

Then are all the other symbols, like "∧", "∨", "¬", being defined?

Thus starting with something like Horn clauses we can implement CDL using these axioms?

what is an axiom

Which symbols have pre-defined meanings in these axioms

Math doesn't work that way.

How does it work?

Looking at the other axiom:

(⊢Τ Φ, Ψ) ⇒ ⊢Τ Φ∧Ψ

It looks like you could express this as the following clause:

theorem(t, and(X, Y)) :- theorem(t, X), theorem(t, Y).

I suppose the conclusive answer to my question is to put all the axioms in, and try to solve some CDL problems. I think I have an idea where the problems in doing this will be, which is to do with accumulating assumptions and the ordering of lists. Rather nicely this ties back in with the original topic, as the missing thing might well be sets.
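To make the clause-encoding idea concrete without committing to Prolog, here is a hedged Python sketch of two of the rules above as forward chaining over a set of theorems. All names here, such as `close_under` and the tuple encoding of ∧, are invented for the example; ∧-introduction is restricted to atoms so the closure stays finite, and this is a toy, not a faithful encoding of Classical Direct Logic.

```python
def close_under(rules, facts):
    """Forward-chain a set of facts to a fixed point.

    The derived theorems live in a set, not a list, echoing the point
    that sets may be the natural aggregate here. Each rule maps the
    current set of facts to newly derivable ones.
    """
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts)
        if new <= facts:              # nothing left to derive: fixed point
            return facts
        facts |= new

def and_intro(facts):
    # (⊢Τ Φ, Ψ) ⇒ ⊢Τ Φ∧Ψ, restricted to atomic propositions
    atoms = {f for f in facts if isinstance(f, str)}
    return {("and", p, q) for p in atoms for q in atoms}

def and_elim(facts):
    # (⊢Τ Φ∧Ψ) ⇒ ⊢Τ Φ, Ψ
    return {part
            for f in facts if isinstance(f, tuple) and f[0] == "and"
            for part in (f[1], f[2])}

theorems = close_under([and_intro, and_elim], {"p", "q"})
```

Starting from the theorems `p` and `q`, the closure contains every conjunction of them in both orders, so commutativity of ∧ holds trivially in this representation.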

As an aside why is this not bidirectional implication:

(⊢Τ Φ, Ψ) ⇒ ⊢Τ Φ∧Ψ
(⊢Τ Φ∧Ψ) ⇒  ⊢Τ Φ, Ψ

(⊢Τ Φ, Ψ) ⇔ ⊢Τ Φ∧Ψ

re It looks like you could

It looks like you could ....

Stop guessing.

Looks worth investigating then.

If you can't think of a reason why this would not work, then I might give it a go, as it would be good to find out if it is possible. I implemented the axioms of a Heyting algebra like this:

% conjunction as meet
rule(meet(A, B), le(meet(A, B), A)).
rule(le(meet(A, B), A), meet(A, B)).
rule(meet(A, B), le(meet(A, B), B)).
rule(le(meet(A, B), B), meet(A, B)).
rule(le(C, meet(A, B)), pair(le(C, A), le(C, B))).
rule(pair(le(C, A), le(C, B)), le(C, meet(A, B))).

% truth as greatest element
rule(A, le(A, true)).
rule(le(A, true), A).

% disjunction as join
rule(join(A, B), le(A, join(A, B))).
rule(le(A, join(A, B)), join(A, B)).
rule(join(A, B), le(B, join(A, B))).
rule(le(B, join(A, B)), join(A, B)).
rule(le(join(A, B), C), pair(le(A, C), le(B, C))).
rule(pair(le(A, C), le(B, C)), le(join(A, B), C)).

% falsehood as least element
rule(A, le(false, A)).
rule(le(false, A), A).

% implication as exponential
rule(meet(A, impl(A, B)), le(meet(A, impl(A, B)), B)).
rule(le(meet(A, impl(A, B)), B), meet(A, impl(A, B))).
rule(le(C, impl(A, B)), le(meet(A, C), B)).
rule(le(meet(A, C), B), le(C, impl(A, B))).

% inference rules, transitive
% rule(A, B) :- rule(B, A).
rule(A, B) :- rule(A, C), rule(C, B).
rule(pair(A, B), pair(X, Y)) :- rule(A, X), rule(B, Y).
rule(pair(A, B), pair(Y, X)) :- rule(A, X), rule(B, Y).

re I implemented the axioms of a Heyting algebra like this:

If I understand your gesture correctly, the problem is that theorems under the rules of inconsistency robust logic do not necessarily form a lattice.

Not A Lattice

I see what you are saying, but I don't yet see the problem with it not being a lattice. This is what I have so far (and this is an experiment, I am not saying this is in any way correct):

% Exchange
rule(impl(meet(X, Y), Z), impl(meet(Y, X), Z)).
rule(impl(X, meet(Y, Z)), impl(X, meet(Z, Y))).

% Dropping
rule(impl(X, meet(Y, Z)), impl(X, Y)).

% Accumulation
rule(pair(impl(X, Y), impl(X, Z)), impl(X, meet(Y, Z))).
rule(pair(impl(X, Z), impl(Y, Z)), impl(meet(X, Y), Z)).

% Implication implies inference
rule(impl(X, Y), rule(X, Y)).

% Transitivity
rule(pair(impl(X, Y), impl(Y, Z)), impl(X, Z)).

% Contrapositive
rule(impl(X, Y), impl(not(Y), not(X))).

% Implication infers disjunction
rule(impl(X, Y), join(Y, not(X))).

% Meet introduction
rule(pair(X, Y), meet(X, Y)).

% Meet elimination
rule(meet(X, Y), pair(X, Y)).

% Join elimination
rule(join(not(X), meet(X, Y)), Y).

% Join introduction
rule(meet(X, Y), join(X, Y)).

What I could do with is a few things to prove to test the implementation. For example with the Heyting algebra I used distributivity:

:- rule(meet(a, join(b, c)), join(meet(a, b), meet(a, c))).

re: I am not saying this is in any way correct

this is an experiment, I am not saying this is in any way correct)

Then you show disrespect for the forum.

The most generous interpretation is that you are mindlessly using the forum as a kind of calculator that exists to read and think for you, and hand you answers to every random query you type.

Discussions with People

It's better to discuss with people than work in isolation. Do you not discuss your ideas with friends and colleagues? People ask questions because they are interested in the answers, and someone else may be able to spot an obvious flaw that you don't see yourself.

Keean, don't dissemble

People ask questions because they are interested in the answers, and someone else may be able to spot an obvious flaw that you don't see yourself.

Flaw in what? Random guesses and herp derp changes of topic?

Keean, this is not about *you* but your approach

For all the Keeans in the world:

I have no reason to believe you are not sincere, smart, and capable. (I have no reason to believe that you are, either, but that aside.)

Casual bs'ing about a purported theory until you "get it" on the basis of what interlocutors have said is taboo.

It is taboo because it leads to substantive exchanges being drowned out in noise.

It is OK to express mistaken notions about a theory if you demonstrate (a) real effort to understand the source material in relation to the discussion; (b) FORWARD PROGRESS.

Failure at (a) and (b) is what I mean by casual bs'ing about a purported theory.

Everyone who has ever become even vaguely competent (that's my classification: vaguely competent) at math has (I am pretty sure) gone through a painful period of lacking self awareness of when they are casually bullshitting.

Some evil types get stuck there and think that's a fun game. Boo to the evil types.

Meanwhile: casual bs'ing (in a context not meant for it) is taboo.

Since nobody's speaking up,

Since nobody's speaking up, and considering the moderation thread (I'd actually have done this via PM if we had that facility, but, as we don't, I'll do it publicly. Ehud - if you're unhappy with this, please feel free to nuke it)

Keean's asking questions.

You might want to look back at your replies to him over the last 24-48 hours and consider if they make you look anything other than spiteful. For example, and from where I'm sitting, the post to which I am replying, titled "Keean, this is not about you..." actually looks to be a direct attack on Keean in the "when did you stop beating your wife?" sense.

I considered saying something stronger

but I was afraid of scaring Thomas off and Keean seems to want to get information out of him.

I've been thinking for days that Keean has the patience of a saint.

Raising concerns in public

Raising concerns in public is often best. I have not followed this thread (people here know why), but from the little I read I think Simon is right.

Keean's asking

Keean's asking questions.

Keean's trying to implement a Prolog interpreter for direct logic, a subject it seems that few here but Thomas have understood or feel is all that compelling. If Thomas really believed it to be a subject worth discussing as he's claimed, I'd expect more encouragement and help. A direct logic interpreter in an existing logic language would clarify it for many people.

As for Thomas' tone, an avalanche of misunderstandings in other threads is clearly affecting this thread. Not at all a justification for rudeness or feeding someone he thinks isn't arguing in good faith though.


Tom's obviously frustrated, but flame mode isn't helpful. I don't think the flames are necessarily the biggest dysfunction in these threads, though.

If Thomas really believed it to be a subject worth discussing

If Thomas really believed it to be a subject worth discussing as he's claimed, I'd expect more encouragement and help.

Up to a point, I agree.

Can you show me any evidence Keean has looked at the papers to try to figure out himself if his implementation is faithful?

I have read the papers

The papers seem to be badly structured and cover multiple topics, for example the CDL paper also seems to go into ActorScript, physics and philosophy, and the part relevant to CDL seems short on details. There is no concise summary of the axioms (I found the summary you posted above to be the best I have found). Have these papers been accepted for publication anywhere? You seem to have got on with them much better than I have, so any help you can give me in deciphering them is appreciated.

At the moment I am trying to prove commutativity of disjunctions, which I am failing to do. Either there needs to be some additional axiom, or I am missing something obvious. I suspect it's the latter, but I cannot rule out the former.

Edit: actually the appendix of the CDL paper seems better than the main text, although there are a few errors.

I appreciate the kind of discussion you're trying to have

I have no idea what Thomas' problem is.



what Thomas' problem is

A small handful of commenters on Hewitt-related topics make a high volume of comments that:

1) Show no evidence of a commensurate effort to work through the (admittedly problematic, but actually substantive) papers. The impression one gets at first is of a commenter who is trying to learn a subject just by reading blog posts and comments.

2) Show no forward progress in gaining understanding of the materials, even after months. (Unsurprising, given (1).)

3) Through provocation and nagging, these relentless problem comments try to extract the time and attention of others in a forum where, normally, time and attention are freely given to one another's comments out of a sense of mutual good will and mutual benefit.

It is point (3) that makes me really suspicious about why this pattern persists, in spite of it being given a lot of gentle (and now not so gentle) negative feedback. These few commenters give me the persistent sense that someone is trying to use my good will against me.

Regardless of the intent or motives of anyone, engaging in behaviors like those I listed is trolling. If you want to disrupt a forum or particular members, that would be the way to do it. If you don't want to disrupt a forum like LtU, but instead make a positive contribution, then in my view you have a positive obligation to try to self-monitor and avoid persistently behaving in the three ways I described above.

What really set me off most recently was being presented with a bunch of random code, with no explanation of why Keean thought it faithfully reproduced Direct Logic. He included a statement that he made no claims as to its correctness. He did not even offer an informal argument as to the code's relevance beyond "I don't see why this wouldn't work". Then he asked others, including me, to debug it for him. The relation between his code and what I wrote in the comment he replied to is very sketchy, at best. No amount of dissuasion polite or abrupt leads to anything more than further digressions from Keean.

(This is almost as (not) fun as when someone, I think it was Josh but I'm not sure, berated me months ago for simply trying to ignore the commenters. So apparently I am damned if I do, damned if I don't, and evidently not entitled to expect an ounce of mutual respect and good will from these commenters.)

Imo it's at least partly a

Imo it's at least partly a matter of contrasting cognitive strategies: there are a couple of folks here whose experience of math is fundamentally different from most LtUers, and this can lead to deep mismatching of thinking. Notice that Thomas tends to complain that others aren't doing what's needed to understand mathematics — that's likely because most people here understand mathematics by doing something different than what Thomas does. (Alas, that my theory falls short of offering a solution.)

Or our goals are different

I'm not particularly interested in understanding theories nor am I interested in modeling apart from programming, I'm only interested in logic as it relates to problem solving.

I'll learn a theory if it will directly help me find useful solutions to practical problems.

Sure. But the

Sure. But the cognitive-type thing is largely orthogonal to things like theory-versus-practice. It's not what you do with your think-engine, it's how the engine works. Some people have... let's call it a gift for structure, or perhaps a gift for neat answers (two different ways of trying to describe the same thing). In programming, that gift translates into well-organized programs. Then some people have a gift for asking the right question; in programming that gift translates into adept testing and debugging. These things take slightly different (though closely related) forms when you're doing mathematics. But when a person happens not to have one of these talents, and sets about the same tasks as someone who does have them, the one without the gift will adopt a fundamentally different strategy for doing the same thing; and these people doing the same task in totally different ways are apt to dreadfully miscommunicate and get on each other's nerves.

math works like this

The meaning of math is usually discussed using informal meta-language, typically natural language, and is not formalized. Notation used to express formal math is convention, and need not follow any logical-positivism rules about decomposition into atoms of composable units. Notation is more like a pattern, with rules announced informally like "to express abc we write xyz" and the syntax of xyz can be anything without following rules about symbol decomposition. It's a kind of recognize-the-gestalt deal.

Smaller Informal Set

This I realise is true, but my question is about the smallest informal set of rules you can work with, and then define the other rules formally.

So given only basic Horn clauses (atoms, variables, inference, and conjunction), can you formally define CDL?

If you cannot, what is the smallest extra feature that needs to be added to Horn clauses to enable CDL to be formally defined in that system?

re how does it work

Inconsistency robust direct logic lays out some schematic rules of inference. These rules define what comprises inference (notationally: turnstiles) in inconsistency robust direct logic. These rules do not lead to explosion in the face of inconsistency.

Classical direct logic adds to those schematic rules, allowing in some rules that do lead to explosion in the face of inconsistency. The additional schematic rules define classical inference (notationally: the naked turnstile).

The schematic rules of inference have the general form of sentences formed by using some logic operators and containing free variables of type proposition. They are applied in an argument by substituting propositions for variables of type proposition.

The sentences giving rules for inconsistency robust and classical logics do not contain free variables of any type other than propositions and theories. Theory variables occur only as the subscript of a turnstile. Hence, the discussion of inference rules does not dwell on the types of the free variables they contain. (E.g. (⊢Τ Φ, Ψ) ⇒ ⊢Τ Φ∧Ψ is a sentence, not a proposition, with free variables Τ:Variable<Theory> and Φ,Ψ:Variable<Proposition>.)

Construction rules for sentences and propositions are given in the papers.

The construction rules for sentences and propositions in general require an axiomatic theory of types, boolean, naturals, etc. which may be figured out pretty straightforwardly from the papers (though the papers are weak in not laying all this out compactly somewhere).

The axiomatizations of types, booleans, naturals, etc. sufficient to define these rules of inference are very conservative and do not plausibly contain any contradictions.

The construction rules for terms, hence expressions and sentences, are left open-ended. Notably, a theory of computation may usefully include terms (of expressions, hence sentences) which are, by axiom, the denotations of programs.


I think its going to take me some time to understand all that. Various questions occur, which I am going to have to spend some time thinking through.

One that occurs is what happens if the only type is "atoms"? Presumably the axioms of CDL still hold?

re I am going to have to spend some time thinking through.

I am going to have to spend some time thinking through.

I look forward to the respite.

One that occurs is what happens if the only type is [herp derp]?

In that case: Herp Derp.

I think I can replace propositions with Prolog atoms.

If I want to prove things about propositions in DL, I can construct part of DL where everything is a proposition (and everything is in the theory T). In which case I am only concerned about the uniqueness of variables (which are all of type proposition). I can do that when trying to prove a theorem by using unique Prolog atoms for each unique proposition in the theory.

re: I think I can replace propositions with Prolog atoms.

All you have said there is that you believe you can model the inference rules in Prolog, choosing atoms as the data structure representation for otherwise undifferentiated proposition constants. You also invoke a potentially problematic assumption about uniqueness.

Not saying much

I agree that I am not saying much, but when I expressed it more concisely, you equated it with [Herp Derp]. What is problematic about uniqueness? Propositions are actors, and all actors are unique?

Excellent summary

Thanks Thomas!

Above is an excellent summary.

However, Direct Logic is somewhat more general than you indicate, in that the rules of Direct Logic are also propositions of Direct Logic.

rules of Direct Logic are also propositions of Direct Logic.

rules of Direct Logic are also propositions of Direct Logic.

Is that via the construction rule "If σ:Type, Π:Booleanσ and x:σ, then Π[x]:Proposition"? (If not, then I'm confused: via which construction rule instead?)

Rules of DL are propositions so that computers know the rules

The rules of Direct Logic are propositions so that computers (and humans) can concisely know and use the rules.

For example:

∀[Φ:Proposition]→ (⊢Φ)⇒Φ

demystifying Hewitt (re turnstiles, inference rules, and axioms)

I think the comment this is in reply to may help to demystify Hewitt's papers and expose why the work is interesting and why it can't be reduced to just another logic.

I hope that it will be given some consideration by people who want to take another look at the papers.

I found this one to be especially helpful:

Still mystifying

I'm wary of venturing into the actor ghetto at night, but scanning the above, I noticed the intro/elim rules for disjunction: You introduce disjunction by first demonstrating conjunction and then you eliminate that form in the presence of the negation of one of the terms from the conjunction? That seems nuts. Have you found an example of how that scheme might be useful?

introducing disjunction (re still mystifying)

I noticed the intro/elim rules for disjunction: You introduce disjunction by first demonstrating conjunction and then you eliminate that form in the presence of the negation of one of the terms from the conjunction? That seems nuts. Have you found an example of how that scheme might be useful?

That jumped out at me too but I did not look into it until your prompt.

Having read the papers I knew off the top of my head I could look for an answer in a 2011 presentation by Eric Kao.

I have not evaluated these slides but: Kao (and Hewitt) use "restricted V-introduction" in proof 2a in this presentation:

De Morgan maybe?

I skimmed, but didn't find anything really on point in those slides except a bullet claiming that DirectLogic's rules were reasonable. The only rule I see that would prevent one from modeling V with conjunction is de Morgan's law, which I don't see in your list above, but which Hewitt said is an axiom. So that's maybe a useful way to introduce (A or B) -- by proving ~(~A and ~B).

De Morgan maybe?

I skimmed, but didn't find anything really on point in those slides

Are you saying that the rule of restricted OR-introduction can be eliminated from proof 2a?


The only rule I see that would prevent one from modeling V with conjunction is de Morgan's law, which I don't see in your list above,

De Morgan appears on page 73 of the paper I linked as an allegedly provable propositional equivalence. There is a distinction between implication and inference at work here.

Are you saying that the rule

Are you saying that the rule of restricted OR-introduction can be eliminated from proof 2a?

I'm not sure. Could you look at the derivation from line 7 to line 8? Wasn't that first V supposed to be flipped to an /\ by the application of De Morgan? Or am I too tired to be posting?

I will note that proof 2a seems to use De Morgan in a crucial way. Hewitt indicated here that De Morgan was an axiom. I don't think it would be possible to derive it from the other axioms you've listed, because all of those seem to be true even if you read V as asserting conjunction. If I'm correct about that (and it's very possible I've missed something), then that would imply you can't prove a theorem which is not consistent with that interpretation of V.

re: De Morgan axiom?!?

Hewitt indicated here that De Morgan was an axiom.

What Hewitt said was more nuanced than that. He wrote:

Although classical De Morgan are theorems, inconsistency robust De Morgan are axioms.

  Classical Rule: ¬Ψ∨¬Φ ⊢ ¬(Ψ∧Φ)  
  Inconsistency Robust Rule: ¬(Ψ∧Φ)  ⊢Τ  ¬Ψ∨¬Φ



  ¬Ψ∨¬Φ ⊢ ¬(Ψ∧Φ)


	Assume: ¬Ψ ∨ ¬Φ
	Hypothesize: (Ψ ∧ Φ)

           Ψ              ⓘ ∧ elimination
           Ψ ∧ (¬Ψ ∨ ¬Φ)  ⓘ ∧ introduction
           ¬Φ             ⓘ ∨ elimination
           Φ              ⓘ contradiction (via ∧ elimination)

        Inference: ¬(Ψ ∧ Φ)

        (¬Ψ ∨ ¬Φ) ⇒ ¬(Ψ ∧ Φ)    ⓘ inconsistency robust proof by contradiction
       ¬Ψ∨¬Φ ⊢ ¬(Ψ∧Φ)      ⓘ Q.E.D. (via classical deduction)

  Inconsistency robust:

    ¬(Ψ∧Φ)  ⊢Τ  ¬Ψ∨¬Φ


       Assume: ¬(Ψ∧Φ)

       Hypothesize: Ψ
           Hypothesize: Φ
             Ψ ∧ Φ             ⓘ contradiction (via ∧ introduction)
           Inference: ¬Φ
       Ψ ⇒ ¬Φ                  ⓘ inconsistency robust proof by contradiction
       ¬Φ ∨ ¬Ψ                 ⓘ QED: implication infers disjunction


I misread his "classical De Morgan" as referring to classical non-Direct Logic mathematics and thought he intended that it was an axiom in Direct Logic.

I'm going to bow out of this subthread for lack of time. I don't have good intuition about how to do inconsistency robust reasoning.

Unable to Prove Commutativity of Disjunction

I can see how to prove commutativity of conjunction with conjunction elimination and introduction:

A /\ B => A, B => B /\ A.

How do you prove disjunction is commutative?

Disjunction is commutative by equivalences for ∨

In Inconsistency Robust Direct Logic, disjunction is commutative by equivalences for ∨.

Which axioms?

I understand Matt's proof below, which looks correct if long. I don't understand what you mean by equivalences.

Do you mean the summary of axioms above is missing:

A \/ B <=> B \/ A

I think you may be able to

I think you may be able to do it like this:

Assume:  A \/ B          -- 1

Hypothesis:  ~(B \/ A)   -- 2

  ~~(~B /\ ~A)           -- 3, De Morgan 2

  ~B /\ ~A               -- 4, Double Neg Elim 3

  ~B                     -- 5, /\ Elim 4

  ~A                     -- 6, /\ Elim 4

  B                      -- 7, \/ Elim 6, 1
  A                      -- 8, \/ Elim 5, 1

  B \/ A                 -- 9, \/ Intro 7 8, Contradiction of 2

  ~~(B \/ A)             -- 10, Negation of hypothesis 2

  B \/ A                 -- 11, Double Neg Elim 10

[Sorry, I'm leaving]

Prove De Morgan First?

Okay, that's longer than I was expecting, and I have to have already proved De Morgan (in classical logic, where it's not an axiom). This is helpful.

De Morgan: not allowed for proof by contradiction in IRDL

De Morgan does not produce implications for use in proof by contradiction in Inconsistency Robust Direct Logic.

That's thrown me

I wasn't expecting that. If De Morgan is an axiom in IRDL, why can't it be used in proofs like other axioms? Are there two types of axioms?

Don't confuse IRDL's "implication" with IRDL derivability

In IRDL, the ⇒ implication represents a more powerful kind of derivability than IRDL's own derivability, so we can't use it in IRDL modus ponens. (IRDL's modus ponens uses turnstile ⊢ instead of implication.)

In particular, in IRDL, we can do proof by contradiction under implication, but we can't do proof by contradiction at the top level.

|- vs =>

Yes, I was just noticing there is a difference, but I haven't fully worked it through yet. There is:

Implication implies inference:
     (⊢Τ Φ ⇒ Ψ) ⇒ Φ ⊢Τ Ψ

but not the other way around.

Is modus ponens missing from the summary of axioms posted above, or am I just missing it for some reason?

re IRDL's modus ponens uses turnstile ⊢ instead of implication.

IRDL's modus ponens uses turnstile ⊢ instead of implication.

Modus ponens of implication is a special case of disjunctive cases.

AFAIK, chaining of arguments is a consequence of soundness.

Disjunctive Cases and Associativity

In other logics "A /\ B /\ C" is considered not well formed, and one of the conjunctions would need to be bracketed like "(A /\ B) /\ C". There are already commutativity rules (from Hewitt's comment above), should there be an associativity rule too?

Associativity of ∨ is a consequence of equivalences for ∨

Associativity of ∨ is a consequence of equivalences for ∨.


Okay, I think I have found the list of equivalences p.73, "Formalizing common sense reasoning for scalable inconsistency-robust information integration using Direct Logic Reasoning and the Actor Model":

Substitution of equivalent propositions:
 1. (Ψ⇔Φ) ⇒ (¬Ψ)⇔(¬Φ)
 2. (Ψ⇔Φ) ⇒ ((Ψ ∨ Φ)⇔(Φ ∨ Φ))
 3. (Ψ⇔Φ) ⇒ ((Ψ ∨ Φ)⇔(Φ ∨ Φ))
 4. (Ψ⇔Φ) ⇒ ((Ψ ∧ Φ)⇔(Φ ∧ Φ))
 5. (Ψ⇔Φ) ⇒ ((Ψ ∧ Φ)⇔(Φ ∧ Φ))
 6. (Ψ⇔Φ) ⇒ ((Ψ ├T Φ)⇔(Φ ├T Φ))
 7. (Ψ⇔Φ) ⇒ ((Φ ├T Ψ)⇔(Φ ├T Φ))
 8. (Ψ⇔Φ) ⇒ ((Ψ⇒Φ)⇔(Φ⇒Φ))
 9. (Ψ⇔Φ) ⇒ ((Φ⇒Ψ)⇔(Φ⇒Φ))
10. (F⇔G) ⇒ (∀F⇔∀G)

(2 and 3) and (4 and 5) look the same, is this correct?

Propositional substitutions

The corrected version of allowed primitive substitutions is as follows where ≐ means substitutable:

 0. (Ψ=Φ) ⇒ Ψ ≐ Φ
 1. (Ψ≐Φ) ⇒ (¬Ψ)≐(¬Φ)
 2. (Ψ≐Φ) ⇒ ((Ө∨Ψ)≐(Ө∨Φ))
 3. (Ψ≐Φ) ⇒ ((Ψ∨Ө)≐(Φ∨Ө))
 4. (Ψ≐Φ) ⇒ ((Ө∧Ψ)≐(Ө∧Φ))
 5. (Ψ≐Φ) ⇒ ((Ψ∧Ө)≐(Φ∧Ө))
 6. (Ψ≐Φ) ⇒ ((Ψ├T Ө)≐(Φ├T Ө))
 7. (Ψ≐Φ) ⇒ ((Ө├T Ψ)≐(Ө├T Φ))
 8. (Ψ≐Φ) ⇒ ((Ψ⇒Ө)≐(Φ⇒Ө))
 9. (Ψ≐Φ) ⇒ ((Ө⇒Ψ)≐(Ө⇒Φ))
 10. (Ψ≐Φ) ⇒ ((Ψ⇔Ө)≐(Φ⇔Ө))
 11. (Ψ≐Φ) ⇒ ((Ө⇔Ψ)≐(Ө⇔Φ))
12. (F≐G) ⇒ (∀F≐∀G)

Edit: Fixed minor typo above.

rules of construction

Conway, in his call for a Mathematician's Liberation movement (the interlude part of On Numbers and Games) wrote:

Among the permissible kinds of construction we should have:

(i) Objects may be created from earlier objects in any reasonably constructive fashion.
(ii) Equality among the created objects can be any desired equivalence relation.

In this case, propositions are constructed from sentences with no free variables, and also from earlier constructed propositions using connectives such as AND and OR.

By a trivial but tedium-relieving equivalence relation, all n-ary disjunctions of the same terms can be understood as equal regardless of the bracketing or order of terms in a particular sentence.

I believe the list of permissible propositional substitutions implied by equivalence of propositions is intended to characterize such an equivalence relation over the construction rules for propositions.
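A hypothetical sketch of such a tedium-relieving equivalence (the tuple encoding of propositions is my own, purely illustrative): flatten every nested disjunction to the set of its terms, so bracketing and order disappear.

```python
# Sketch: treat all n-ary disjunctions of the same terms as equal,
# regardless of bracketing or order, by flattening to a frozenset of operands.
def flatten_or(p):
    """p is either an atom (str) or a tuple ('or', left, right)."""
    if isinstance(p, tuple) and p[0] == 'or':
        return flatten_or(p[1]) | flatten_or(p[2])
    return frozenset([p])

left_assoc  = ('or', ('or', 'A', 'B'), 'C')   # (A v B) v C
right_assoc = ('or', 'A', ('or', 'B', 'C'))   # A v (B v C)
swapped     = ('or', 'C', ('or', 'B', 'A'))   # C v (B v A)
assert flatten_or(left_assoc) == flatten_or(right_assoc) == flatten_or(swapped)
```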

2 and 3 still the same

Should that be:

2. (Ψ≐Φ) ⇒ ((Ө∨Ψ)≐(Ө∨Φ))
3. (Ψ≐Φ) ⇒ ((Ψ∨Ө)≐(Φ∨Ө))

Commutativity using Equivalences.

I'm still having trouble with this. How do I prove:

a /\ b <=> b /\ a

Using those equivalences? For conjunction I have:

A = B => (A /\ C) = (B /\ C)
A = B => (C /\ A) = (C /\ B)

re Using those equivalences?

Those are substitution rules, not equivalences.

The equivalence relation is not exhibited because it is tedious but obviously trivial.

You can backwards chain to the equivalence relation from the exhibitions in the text and common sense.


So there are:

 0. (Ψ=Φ) ⇒ Ψ ≐ Φ
 1. (Ψ≐Φ) ⇒ (¬Ψ)≐(¬Φ)
 2. (Ψ≐Φ) ⇒ ((Ө∨Ψ)≐(Ө∨Φ))
 3. (Ψ≐Φ) ⇒ ((Ψ∨Ө)≐(Φ∨Ө))
 4. (Ψ≐Φ) ⇒ ((Ө∧Ψ)≐(Ө∧Φ))
 5. (Ψ≐Φ) ⇒ ((Ψ∧Ө)≐(Φ∧Ө))
 6. (Ψ≐Φ) ⇒ ((Ψ⊢TӨ)≐(Φ⊢TӨ))
 7. (Ψ≐Φ) ⇒ ((Ө⊢TΨ)≐(Ө⊢TΦ))
 8. (Ψ≐Φ) ⇒ ((Ψ⇒Ө)≐(Φ⇒Ө))
 9. (Ψ≐Φ) ⇒ ((Ө⇒Ψ)≐(Ө⇒Φ))
10. (Ψ≐Φ) ⇒ ((Ψ⇔Ө)≐(Φ⇔Ө))
11. (Ψ≐Φ) ⇒ ((Ө⇔Ψ)≐(Ө⇔Φ))
12. (F≐G) ⇒ (∀F≐∀G)

Substitutions are not trivial (if you want capture avoidance), and it seems unusual to define substitution rules rather than axioms when defining a logic (as they are syntactical rather than logical rules). I am not quite sure how to handle these yet. Can they be replaced by logical axioms? What is the reason for using substitutions instead of axioms?
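For what it's worth, here is a sketch of why the quantifier-free fragment at least is mechanical (the encoding is illustrative; rule 12's ∀ case, where capture avoidance bites, is deliberately omitted): rules 1 through 11 just say ≐ is a congruence, so substituting one subproposition for another is a simple recursive rewrite.

```python
# Sketch: substitution in quantifier-free propositions is a plain recursive
# rewrite, since rules 1-11 make the substitutability relation a congruence.
def substitute(prop, old, new):
    """prop: atom (str) or tuple (connective, *args). Replace old with new."""
    if prop == old:
        return new
    if isinstance(prop, tuple):
        return (prop[0],) + tuple(substitute(a, old, new) for a in prop[1:])
    return prop

p = ('and', ('not', 'P'), ('or', 'P', 'Q'))
assert substitute(p, 'P', 'R') == ('and', ('not', 'R'), ('or', 'R', 'Q'))
```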


13. (Ψ⇔Φ) = (Ψ⇒Φ)∧(Φ⇒Ψ)        Logical Equivalence
14. ⊢T Ψ⇒Ψ                     Reiteration 
15. (⊢T Ψ∧Φ⇒Ө) ⇔ ⊢T Φ∧Ψ⇒Ө      Exchange 1
16. (⊢T Ө⇒Ψ∧Φ) ⇔ ⊢T Ө⇒Φ∧Ψ      Exchange 2
17. (⊢T Ψ⇒Φ∧Ө) ⇒ ⊢T Ψ⇒Φ        Dropping
18. (⊢T Ψ⇒Φ, Ψ⇒Ө) ⇒ ⊢T Ψ⇒Φ∧Ө   Accumulation 1
19. (⊢T Φ⇒Ψ, Ө⇒Ψ) ⇒ ⊢T Φ∧Ө⇒Ψ   Accumulation 2
20. (⊢T Φ⇒Ψ) ⇒ Φ ⊢T Ψ          Implication implies inference
21. (⊢T Ψ⇒Φ, Φ⇒Ө) ⇒ ⊢T Ψ⇒Ө     Transitivity
22. (Ψ⇒Φ) ⇔ ¬Φ⇒¬Ψ              Contrapositive
23. (Φ⇒Ψ) ⊢T Ψ∨¬Φ              Implication infers disjunction

Is that the complete set of rules for IRDL?

From the above numbered rules (0 ... 23), should I be able to prove the following propositional equivalences and inferences (a ... u):

a. Self Equivalence
b. Double Negation
c. Idempotence of /\
d. Commutativity of /\
e. Associativity of /\
f. Distributivity of /\ over \/
g. De Morgan for /\
h. Idempotence of \/
i. Commutativity of \/
j. Associativity of \/
k. Distributivity of \/ over /\
l. De Morgan for \/
m. Contrapositive for =>
n. Absorption of /\
o. Absorption of \/
p. /\ Elimination
q. /\ Introduction
r. \/ Elimination
s. \/ Introduction
t. Disjunctive Cases
u. Inconsistency Robust Resolution

The word "hold" implies that (a ... u) can be proved from the above rules (0 ... 23). Is that correct?
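One modest sanity check (classical and two-valued, so it says nothing about which IRDL rules suffice to derive anything): each listed equivalence should at least be a truth-table tautology, since IRDL aims to stay close to classical logic. A few of the entries, in an illustrative encoding:

```python
from itertools import product

# Classical sanity check only: every listed propositional equivalence ought
# to be a two-valued tautology. Necessary for a logic meant to stay close to
# classical logic, but not sufficient for IRDL derivability.
equivalences = {
    'b. Double Negation':   lambda a, b: (not (not a)) == a,
    'd. Commutativity /\\': lambda a, b: (a and b) == (b and a),
    'g. De Morgan for /\\': lambda a, b: (not (a and b)) == ((not a) or (not b)),
    'i. Commutativity \\/': lambda a, b: (a or b) == (b or a),
    'l. De Morgan for \\/': lambda a, b: (not (a or b)) == ((not a) and (not b)),
}
for name, f in equivalences.items():
    assert all(f(a, b) for a, b in product([True, False], repeat=2)), name
```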

Finally there are:

24. ⊢T ((⊢T Ψ) ⇒ Ψ)            Soundness
25. ⊢T (Φ⇒(Ψ∧¬Ψ)) ⇒ ¬Φ         IR Proof by Contradiction

As these are defined after the above propositional equivalences and inferences, is it correct that rules (24) and (25) are not needed to prove (a ... u)?

General structure pt. 2 re "Rules."

Is that the complete set of rules for IRDL?

It's unclear to me. The Kao paper from the 2011 workshop suggests it's really an open question what can be included.

is it correct that rules (24) and (25) are not needed to prove (a ... u)?

You certainly need those two for proofs. It's in some sense not an important question what minimum set of IRDL rules is sufficient to infer the rest of them. What's more interesting is how big and how close to familiar logic the IRDL rules can be made. Their independence or non-independence as axioms is not of central concern.

(Of pretty central concern is the goal that CDL differ from IRDL as little as possible.)

My suggestion for further reading is to see what you can figure out about Terms "evolving" (e.g. Terms as the incrementally produced output of programs.). (Just suspend disbelief temporarily and accept as hypothesis that this as something useful to do. You're more interested in how it would work than what the metaphysics of it might be.)

Next consider that inference is a total computable process.

Also that we can define a DL theory of Actor semantics (e.g. using ActorScript).

Close all the circles and, in those papers, is a foundational theory of Mathematics grounded in types and the process of construction. This theory, in principle at least, contains a compact meta-circular implementation of itself and a semantic theory of what the procedural aspects of the meta-circular stuff means.

The theory of computation in there, Actors, includes a physically realistic and general theory of communication. The meta-circular stuff describes a generalized theorem prover. A computational network can be modeled in this world as nodes running that generalized theorem prover, loaded up with domain-specific theories, and engaged in receiving and transmitting information with other similar nodes.

You have also asked a few times those annoying questions about what it would be like to code actors all-the-way-down.

Some of Hewitt's stuff that's out there contemplates the other direction: What kind of organizational structures are appropriate to large networks of this kind, especially when it must be assumed all of the sub-theories involved and the nodes themselves will contain inconsistencies (just like real world "wetspace" organizations).

It's a pretty Grand design.

One step at a time

I appreciate the plan, and I am interested. But I also know a bit about myself, and I know that I need to go slowly and understand the depths of each bit independently. For example CDL or IRDL can be studied as a propositional logic independently of the rest of the grand plan. At the moment I can't even prove commutativity of the basic conjunction and disjunction operators. I think I need to implement IRDL/CDL in the way I am trying (or understand why I can't) before I can move on.

Edit: having just read that classical logic is defined by the theorems not the axioms, which I did know, and have used as an argument for generic programming (generic functions = theorems, interfaces = axioms), I think I have an idea how to proceed. What I need to do is define IRDL in terms of tautologies, that enable all the theorems to be proved. So I now understand that I should be more interested in the complete set of theorems, rather than axioms.

The questions I now have: are the theorems above the complete list? Are there some specific theorems about inconsistency robustness that need to hold, so that I can be sure I am not just implementing standard propositional logic?

Inconsistency Robust Logic is a special case of Classical

Thomas is correct in that Inconsistency Robust Logic is a special case of Classical that attempts to be as close to the latter as possible.


So is it easier to say which classical theorems should not hold, and which (if any) additional theorems should hold, if we assume classical propositional logic as the starting point?

How do we know that it is possible to define an axiomatic system in which precisely the classical theorems we wish still apply, but not the ones we don't want.

I would assume that for some combinations of theorems it is not possible to define an axiomatic system?

Boolean propositions are a very special case of propositions

Boolean propositions (using just negation, conjunction, and disjunction) are a very special case of propositions.

What other kinds of propositions are there?

Surely a proposition is something that is true or false? Propositional logic is concerned just with statements consisting of propositions?

How does this relate to my question about theorems?

Beyond Boolean propositions: using types, numbers, ⇒, ⊢, ∀, etc.

Beyond Boolean propositions, there are propositions using types, numbers, ⇒, ⊢, ∀, etc.

It is very important not to confuse the type Proposition with the type Boolean.

The Type Proposition

So I have a proposition that is trivially true like "5 = 5". and another that is trivially false "5 = 6". I can then use these propositions in propositional logic like "(5 = 5) /\ (5 = 6)" and in manipulating these propositions, all the theorems of propositional logic should hold, like for example:

(5 = 5) /\ (5 = 6) <=> (5 = 6) /\ (5 = 5)

So we can replace the propositions with variables like:

A /\ B <=> B /\ A

We don't care whether the propositions consist of numbers, strings, or XML documents internally, they are things that are true or false. The proposition "A /\ B" is a proposition that is only true if both proposition A and proposition B are true.

It is probably better to distinguish the values for a Proposition from a Boolean, so I could use "proved" & "not proved", instead "true" & "false".

Of course, both Ψ and ¬Ψ can hold in a theory.

Of course, both Ψ and ¬Ψ can hold in a theory.

Inconsistency robust logic enables reasoning with inconsistent information.

Aggregation in Datalog

The Dyna language is interesting. It allows you to replace the usual nondeterministic choice | of Datalog with another operator like + or min. In normal datalog you use rules like this:

    ancestor(i,j) :- parent(i,k), ancestor(k,j)

The rule is a disjunction over k. In Dyna you can replace conjunction and disjunction with other operators, e.g. + and * to express matrix multiply:

    A(i,j) += B(i,k)*C(k,j)

This rule is a sum over k (because of += instead of :-).
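A sketch of that aggregation reading (the dictionaries here are illustrative, not Dyna's actual evaluation strategy): the shared variable k is joined on and summed over.

```python
from collections import defaultdict

# Sketch of the Dyna-style rule  A(i,j) += B(i,k) * C(k,j):
# the free variable k on the right-hand side is aggregated (summed) over.
B = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
C = {(0, 0): 5, (0, 1): 6, (1, 0): 7, (1, 1): 8}

A = defaultdict(int)
for (i, k1), b in B.items():
    for (k2, j), c in C.items():
        if k1 == k2:          # join on the shared variable k
            A[(i, j)] += b * c

assert A[(0, 0)] == 1*5 + 2*7  # ordinary matrix multiplication
```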

Shortest paths:

     shortestPath(i,i) min= 0
     shortestPath(i,j) min= edgeCost(i,k) + shortestPath(k,j)

This is with min and +.
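A naive bottom-up reading of those two rules, as a sketch (the graph and the fixpoint loop are illustrative, not how Dyna actually evaluates): iterate until nothing changes, aggregating with min instead of sum.

```python
import math

# Sketch of  shortestPath(i,i) min= 0
#            shortestPath(i,j) min= edgeCost(i,k) + shortestPath(k,j)
# as a naive bottom-up fixpoint with min-aggregation.
edge_cost = {('a', 'b'): 1, ('b', 'c'): 2, ('a', 'c'): 10}
nodes = {'a', 'b', 'c'}

# First rule: cost 0 from a node to itself; everything else starts unknown.
dist = {(i, j): (0 if i == j else math.inf) for i in nodes for j in nodes}

changed = True
while changed:                 # repeat the second rule until a fixpoint
    changed = False
    for (i, k), c in edge_cost.items():
        for j in nodes:
            if c + dist[(k, j)] < dist[(i, j)]:
                dist[(i, j)] = c + dist[(k, j)]
                changed = True

assert dist[('a', 'c')] == 3   # a -> b -> c beats the direct edge of cost 10
```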

Dyna: Extending Datalog For Modern AI