## Incompleteness Theorems: The Logical Necessity of Inconsistency

Some readers might be interested in this article which is published at

where a link to the video of a Stanford Logic colloquium on the article is available.

In the video, ⊢ is proposed as the "ultimate" :-)


### Carl,

Incompleteness Theorems: The Logical Necessity of Inconsistency

### What's the point?

I've read some of your material about Direct Logic each time you've posted here, and I'm sympathetic to your critique of early 20th century logic and mathematics (which still reigns supreme), but I'm still unclear what you hope to gain by formalizing incompleteness and inconsistency within your system.

An excellent practical example (and even better, on-topic for LtU) is the type system of Haskell. At a practical meta-level, we already tolerate incompleteness and inconsistency in the logic of the Haskell type system.

Consider the type of functions from some type A to some type B.

We accept incompleteness for this fragment of the logic because we know that there are going to be programs (i.e. "sentences") that, when executed would take something of type A and return something of type B, but that nonetheless these programs will not type check.

We accept inconsistency (of a somewhat restricted sort) because we know that a function that type checks with this type either will take something of type A and return something of type B or it will not return at all.
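Both halves of this observation can be sketched in a few lines of Haskell (the names here are illustrative, not from any particular codebase):

```haskell
-- A well-typed inhabitant of a -> b for *any* types a and b: it type
-- checks, but a call to it never returns. This is the restricted
-- inconsistency described above: the type "promises" a b that is
-- never actually delivered.
bottomFun :: a -> b
bottomFun x = bottomFun x

-- Incompleteness, sketched: a program that would behave safely at run
-- time can still be rejected by the checker. For example,
--   \b -> if b then 1 else "one"
-- is refused because the branches have different types, even if every
-- caller would handle either result correctly.
```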

In spite of this meta-tolerance of inconsistency and incompleteness, the type system still does useful work because it is internally consistent and complete, or at least attempting to be so.

I can't see how giving up the game completely and incorporating incompleteness and inconsistency from first principles, even locally and internally, helps to build a system that can make useful distinctions about a program at all.

⊢ is proposed as the "ultimate" :-)

I'm afraid "(Syntactic) Turnstile the Ultimate" doesn't quite have the same ring. ;-)

### I'm still unclear what you

I'm still unclear what you hope to gain by formalizing incompleteness and inconsistency within your system.

What about: Distribution, Partitioning Tolerance, and Concurrency within the system?

[I would suspect that by formalizing inconsistency, one can better control it.]

### Global vs. Local

What about: Distribution, Partitioning Tolerance, and Concurrency within the system?

My own assumption is that while these properties might introduce global inconsistency, you still want local consistency (and probably some kind of localized completeness too).

If you throw these properties out completely from the outset, I don't see how you can make even restricted claims that you are confident of.

### What systems do you design and work with?

Marc,

It would probably help David, Carl Hewitt and myself greatly to know what kinds of systems you design and work with for a living. If you want a narrowcasted explanation of an especially general mathematical theory, you need to work too, to help the presenter understand your vantage point.

The "I don't get it" hands-up high school math class gesture just frustrates people like Hewitt into not answering. To be fair, you are trying to be more concrete than "I don't get it", but your example (Haskell) seems to box you into a very small box. In other words, your problem comes across as "This is not Haskell therefore I don't get it."

### Logic pudding

Since this discussion has spread to another thread, I'll respond here.

It would probably help ... myself greatly to know what kinds of systems you design and work with for a living.

Utterly irrelevant. By and large my LtU interests and my professional interests intersect only in an abstract and indirect way.

Suffice it to say that I'm more than passingly familiar with distributed systems and logic, and have spent a reasonable amount of time thinking about logic for distributed systems with the express purpose of investigating PL possibilities for using them.

If you want a narrowcasted explanation of an especially general mathematical theory, you need to work too, to help the presenter understand your vantage point.

No, I want a generally mathematically-accepted presentation of a general mathematical theory.

It is normal when introducing a new logic to make some proofs to show what basic logical properties it has and what kinds of claims it can (and can't) make. In particular, you want to show some sort of soundness, i.e. that the whole thing "works" as intended and doesn't fall apart when you put all the pieces together.

When you propose somewhat unusual principles like "propositions that entail their own inconsistency", you probably have double duty on this front... Bold claims require strong evidence.

When proposing a logic to handle a specific domain, it would seem reasonable that you need to show examples of how this proposed logic will handle both basic and problematic cases in that domain. Moreover, if you want to claim that some pre-existing logic can't handle that particular domain, it is normal to show a specific case or cases where modeling with the inadequate logic breaks or fails to make an important distinction in that domain.

Lastly, if there is a claim that a particular PL embodies a particular logic in some way, it would be helpful to have it spelled out convincingly how this is so.

After reading a fair chunk of the linked material about Direct Logic, I still haven't found or grasped any of these elements for it.

If I just haven't found them or understood them, then I may have more work if I want to assess Direct Logic. If they aren't there, then it would be my opinion that Direct Logic isn't a logic, but a program to develop a logic with certain properties at some later date.

but your example (Haskell) seems to box you into a very small box. In other words, your problem comes across as "This is not Haskell therefore I don't get it."

I really don't know where you got that. An example is just that, an example. I could have picked any other Turing-Complete typed language and it would have the same usefulness as an example.

The point of the example is that some systems already exist with some of the characteristics that are touted for Direct Logic, but which are more conservative towards established logical principles.

One last little element, taken from the other thread:

I did not say whether Direct Logic was good, bad or ugly.

It would probably help David, Carl Hewitt and myself greatly to know

My apologies if I misconstrued this as you claiming to be one of the obvious "three wise men" to clarify and defend Direct Logic. You can see how with this mistaken reading, I took you to be endorsing it.

### As I understand it, Direct

As I understand it, Direct Logic aims to avoid (or perhaps control) introduction of new inconsistencies that weren't present in the source information... whereas classical logic allows you to prove that pigs fly and that the moon belongs to your wife from contradictory information that a stock was at 11.1 and 11.2 at a given moment in time.

How do you define 'local', anyway?

Tolerating inconsistency seems different than embracing it.

[Note: I'm not endorsing Direct Logic as a best approach for tolerating inconsistency.]

### Definitions

How do you define 'local', anyway?

Roughly, a property is local if it holds over a well-defined subsystem, but not necessarily over the whole system.

Tolerating inconsistency seems different than embracing it.

I agree, but it isn't entirely clear to me how this distinction is being enforced in this case.

### Your well-defined subsystem

Your well-defined subsystem depends on the whole system.

In other words, there is a bidirectional dependency between the whole system and the subsystem.

Hewitt's framework breaks this dependency; a subsystem is not wholly dependent on the schema of the whole system [edit: and the whole system does not have to check each well-defined subsystem for schema conformance, because in a dynamically distributed, dynamically federated system, it may not be able to].

### Dependent?

Your well-defined subsystem depends on the whole system.

In other words, there is a bidirectional dependency between the whole system and the subsystem.

I don't see why this would be so.

The way I phrased my definition, there are things that are true of the sub-system that are not true of the whole system and vice-versa.

That sounds pretty independent to me, in logical terms.

### Spatial or Context-Sensitive Properties

There are many properties true of a sub-system that depend upon the whole system. A physical example: magnetic field at a point. A logic example: attribute values in an attribute grammar. One might call these properties 'spatial', in that they vary over space, and perhaps can even be measured for a point or volume in space, but cannot be computed or predicted without context. (One might alternatively call these properties 'context-sensitive'.)

Your wording - "a property is local if it holds over a well-defined subsystem, but not necessarily over the whole system" - is ambiguous with regards to these spatial properties. What does "holds over a subsystem" mean? Do you mean properties intrinsic to the subsystem? Or do you also include properties defined for the subsystem with respect to the whole system?

I believe that 'consistency', e.g. of information, is among these 'spatial' properties. Information is provided by context, and is rarely intrinsic to a subsystem. Under this belief, 'local consistency' is (ambiguously) either a nonsense phrase (if 'local' excludes 'spatial') or a useless one (since we cannot locally compute or locally enforce spatial properties, though heuristics and Kalman filters might help).

### Clarification of terminology

There are many properties true of a sub-system that depend upon the whole system.

Yes there are systems with sub-systems where this is true. The kind I was discussing is not like that.

I'm using the word local in a fairly standard way.

I believe that 'consistency', e.g. of information, is among these 'spatial' properties. Information is provided by context, and is rarely intrinsic to a subsystem.

I'm really not quite sure what you mean by a "spatial" property here.
However, consistency has a specific meaning in logic, though this is entirely compatible with the notion of consistency of information, say in a database.

If you want an example of local consistency that you may have heard of, think of eventual consistency.

Let's say your birthday is recorded in two different nodes of a distributed database. It may be fine that each node has a different date for your birthday (which may be different again from your actual birthday), but you have much bigger problems if there are two different dates for your birthday in any one of those nodes.

So long as each node has only one date for your birthday, your birthday is locally consistent in each of those sub-systems, even if globally the values for your birthday are currently inconsistent. Eventual consistency says that at some point, the different locally consistent nodes will be reconciled, at which point the dataset will be globally consistent.
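The birthday example can be made concrete with a small sketch. The node representation and the merge policy here are assumptions made for illustration; they are not part of eventual consistency itself:

```haskell
import qualified Data.Map as M
import Data.List (nub)

-- Each node maps a person to exactly one birthday, so every node is
-- locally consistent by construction; nodes may still disagree.
type Node = M.Map String String  -- person -> birthday

nodeA, nodeB :: Node
nodeA = M.fromList [("alice", "1980-01-01")]
nodeB = M.fromList [("alice", "1980-01-02")]  -- conflicts with nodeA

-- Global consistency: every person has the same birthday on all nodes.
globallyConsistent :: [Node] -> Bool
globallyConsistent nodes = all ((== 1) . length) (M.elems merged)
  where merged = M.unionsWith (\a b -> nub (a ++ b))
                              (map (M.map (: [])) nodes)

-- One crude reconciliation policy (an assumption for this sketch:
-- the lexicographically largest value wins everywhere).
reconcile :: [Node] -> [Node]
reconcile nodes = map (const agreed) nodes
  where agreed = M.unionsWith max nodes
```

After `reconcile`, the locally consistent nodes agree, and the dataset is globally consistent as well.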

### Eventual Consistency and Information Hiding

an example of local consistency that you may have heard of, think of eventual consistency.

Eventual consistency is a model for tolerating inconsistency. The stronger forms of eventual consistency even place an epsilon on some metric for the amount of inconsistency at any given moment.

If you assume equal rights for replicating information among all nodes, and you assume that replication is the only means to access information, then eventual consistency may result in the sort of local consistency you describe.

However, that assumption doesn't hold in general, and certainly doesn't hold for the domains targeted by Direct Logic. Information must be protected for business and security reasons, and many nodes have only indirect access to information held by other nodes. This information is accessed by distributing the query or update, rather than by asking for a replica of the protected information. A distributed query can indirectly access different values for 'birthday' from different replicas, in which case the combined result will involve some inconsistency.

With 'eventual' consistency, we are promised that this combined result should eventually become consistent - at least if the information settles. But we cannot necessarily determine when it is inconsistent, nor can we assert local consistency as a consequence of eventual consistency.

A useful question is how effectively a developer can control when and where these momentary inconsistencies might appear. Some formalization will likely help in this endeavor.

I'm really not quite sure what you mean by a "spatial" property here.

Was my full paragraph describing it in the last post insufficient? What confused you?

### More clarification

Was my full paragraph describing it in the last post insufficient? What confused you?

I got the general idea, but it wasn't specific enough for me to be sure how I would define a new "spatial" property in a way that is relevant to a logic.

Eventual consistency is a model for tolerating inconsistency.

Of course. That's what we are talking about: tolerating global logical inconsistency, but requiring local logical consistency.

Thus, eventual consistency (by itself) does not offer any guarantees of local or logical consistency.

Think about what would happen if one of the nodes (possibly the "authoritative" one if that is how consistency resolution is done) is confused about (i.e. has multiple different dates for) your birthday. Or if all of them are. Any sane system has to have consistency somewhere, even if it is restricted in some way. Otherwise, how do you propose to resolve conflicts? Random selection?

Anyway it was supposed to be an example to illustrate the idea.

I guess I've gotta stop using examples: too many literal thinkers around. ;-)

### Any sane system has to have

Any sane system has to have consistency somewhere, even if it is restricted in some way. Otherwise, how do you propose to resolve conflicts? Random selection?

You assume the ability to recognize conflicts, which is generally hindered by information hiding and variations in propagation delay.

But when we do recognize conflicts, there are many techniques for resolution. Random selection (indeterminism) is one, and non-determinism (with backtracking) is another. Sometimes it might be reasonable to 'average' different information resources, perhaps a weighted average as for Kalman filtering. One could develop a voting system. You can arbitrarily prioritize one source of information above the other.
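Two of these strategies are easy to sketch; the types and function names below are illustrative assumptions:

```haskell
-- Weighted average of conflicting numeric readings, as in Kalman-style
-- fusion: each (weight, value) pair is one source's estimate.
weightedAverage :: [(Double, Double)] -> Double
weightedAverage ws = sum [w * v | (w, v) <- ws] / sum (map fst ws)

-- Arbitrary prioritization: sources are listed from highest to lowest
-- priority, and the first one that has an opinion wins.
prioritize :: [Maybe a] -> Maybe a
prioritize = foldr pick Nothing
  where pick (Just x) _ = Just x
        pick Nothing  r = r
```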

Sanity is whatever local reasoning you can perform when composing partially defined subsystems. This requires a good deal of predictable behavior, and logical consistency is certainly desirable! But I don't believe it necessary.

I guess I've gotta stop using examples: too many literal thinkers around. ;-)

And lateral thinkers happy to take rides on wild tangents. ;-)

Eventual consistency is a spatial-temporal property; to define it requires a definition of 'time'.

For logical consistency, it would be more relevant to focus on timeless examples, such as composing information from multiple partially-defined resources and attempting to perform local reasoning about logical consistency. And, if you want such consistency, what must you sacrifice to achieve it? (Security? Scalability? Modularity?)

### The map is not the territory

You assume the ability to recognize conflicts, which is generally hindered by information hiding and variations in propagation delay.

Remember, we are talking about a logic here. A major purpose in using a logic is to clarify, refine and prove desirable properties of certain kinds of systems. If the logic you propose is just as muddy and complicated as the domain you propose to reason about, why not cut out the middle man and use ordinary reasoning in your domain?

Even in that case you have to assume that you can, at least some of the time, tell the difference between P and ¬P, which is the essence of recognizing a conflict.

For logical consistency, it would be more relevant to focus on timeless examples

Just so. And shortly after I posted the previous example I thought of a more suitable (and, even better, more appropriate for LtU) example of local consistency: variable scope.

Choose your favorite PL with scoped variables. Assume that it has at least module (class, etc.) level variables and function level private variables.

Let's say I have two functions f1 and f2 in module M. In f1 I define a boolean variable p of value true. In f2 I define a boolean variable p of value false. In M, p is undefined.

So if I look at the combined symbol table for the whole module, I find that p is not consistent. But in every individual scope, p has a single unambiguous value. So p has local consistency across scopes.
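Rendered in Haskell (standing in for the "favorite PL" of the example), this looks like:

```haskell
-- f1 and f2 each bind their own local p with contradictory values,
-- yet each individual scope is internally unambiguous.
f1 :: Bool
f1 = p where p = True

f2 :: Bool
f2 = p where p = False

-- At the top level (the "module M" of the example) no p is defined, so
-- the inconsistent "combined symbol table" {p = True, p = False} never
-- exists as a single scope.
```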

Now let's say I drop that constraint (which most of the time we take for granted) and allow for inconsistent "scopes". Then I might have a scope where p == !p returns true, and the value of p becomes useless. This can quickly infect the whole system so that no distinctions can be made at all.

Now Direct Logic claims to be paraconsistent, which suggests some kind of scoping, i.e. local consistency, but it is unclear to me how this scoping is supposed to happen if you are explicitly allowing inconsistency everywhere in the system.

The very idea of "resolution of inconsistency" implies being able to create new scopes that are consistent, and so far I've been completely unable to understand how Direct Logic proposes to do this in light of some of its axioms.

To end this off, I'm still looking for evidence in this thread that anyone actually understands Direct Logic. And I'm afraid at this point that that includes Carl Hewitt. ;-)

### Re: Purpose of Logic

A major purpose in using a logic is to clarify, refine and prove desirable properties of certain kinds of systems. If the logic you propose is just as muddy and complicated as the domain you propose to reason about, why not cut out the middle man and use ordinary reasoning in your domain?

Various logics aim to be composable, modular, computationally tractable, incremental, stable, parallel, distributed or mobile, et cetera. Many logics have purpose in effective information processing, simulation, interactive simulation, animation, design, planning, AI development, natural language processing, development of control systems. A quick study will find all sorts of information about stochastic and bayesian logics, fuzzy logics, soft constraints, temporal or bitemporal reasoning.

I suspect you've some idealized 'purpose of logic' in mind, probably developed while working with some specific application of logic. How well does it hold up to the wide variety of logics and applications of logic? What does "ordinary reasoning in your domain" mean that so effectively distinguishes it from a logic?

[Tentatively, I'm willing to call a computation system 'logical' if it supports ad-hoc goal-oriented queries, backtracking, backwards chaining, and associative + commutative expression and organization of the rules.]

### Suspicious minds

I suspect you've some idealized 'purpose of logic' in mind, probably developed while working with some specific application of logic.

I'm afraid I'm confused by your suspicions, and some of the tone of this thread. They seem to be based on the belief that you are the expert and I'm a newb in need of enlightenment. You do know that I've been studying this kind of stuff for a long time, right?

### I'm quite aware you're more

I'm quite aware you're more advanced in the subject matter than I am. That means you should be a lot further along than I in understanding just how little you actually know. ;-)

I don't pretend to know 'a major purpose of logic' that isn't specific to particular applications or classes of logics. And I've presented plenty of applications of logic that would reject the purpose you presented. Naturally, you, who have "been studying this kind of stuff for a long time", should be at least as aware of the various applications of logic as I am.

So why, if not idealism or preference, did you present that particular 'purpose of logic' with the implication that it should apply to the purpose of Direct Logic and others?

### Agnosis

That means you should be a lot further along than I in understanding just how little you actually know. ;-)

If I didn't know there were many things I didn't know, I wouldn't have asked a question in this thread in the first place. ;-)

So why, if not idealism or preference, did you present that particular 'purpose of logic' with the implication that it should apply to the purpose of Direct Logic and others?

I can't seem to find a reference to "the purpose of logic" in my posts. However, any logic is supposed to embody a system of reasoning about something (often about reasoning itself).

To reason about something, you need to make distinctions about it. To make distinctions, there must be properties that either hold or don't hold, which is the essence of consistency.

For this reason, some people would require that a logic worth the name is consistent (and hence would reject a paraconsistent logic).

Even if you accept paraconsistency, there is still a problem that a little bit of inconsistency, like a virus, tends to spread to the rest of the system unless you take some real steps to stop it.
As you say, you may have to drop some inference rules, make some other restrictions on the structure of propositions, or something else.

This is one of the key parts of Direct Logic that I still don't understand from my reading of the material, and it confuses me even more to think about how deeply embedded inconsistency would be realized within a PL in any useful way.

### Supposition

any logic is supposed to embody a system of reasoning about something (often about reasoning itself)

I disagree. Logics are applied often and effectively to controlling systems. Further, logic models aren't always "about something"; they may be first-class entities rather than reflections of some outside system.

To make distinctions, there must be properties that either hold or don't hold, which is the essence of consistency.

The essence of consistency is that each property either holds or doesn't hold. To make distinctions, we only need some properties that either hold or don't hold. ;-)

For this reason, some people would require that a logic worth the name is consistent (and hence would reject a paraconsistent logic).

Those people probably don't work with or reason about distributed systems or large software systems.

Even if you accept paraconsistency, there is still a problem that a little bit of inconsistency, like a virus, tends to spread to the rest of the system unless you take some real steps to stop it.

I'll grant that inconsistencies can propagate across chains of inferences, but this is quite restricted compared to the principle of explosion. I'll also grant that mechanisms should exist to detect and control inconsistency.

If we can both tolerate and control inconsistency, then it is possible to manage inconsistency in a modular manner rather than handling it at every step in the reasoning.

As you say, you may have to drop some inference rules, make some other restrictions on the structure of propositions, or something else. This is one of the key parts of Direct Logic that I still don't understand from my reading of the material,

Direct Logic has rules for absorption and conjunction-to-disjunction (rather than disjunction introduction) and has four weak forms of self-refutation (to replace proof-by-contradiction). These are described on pages 11 and 13 of the paper.

it confuses me even more to think about how deeply embedded inconsistency would be realized within a PL in any useful way

The inconsistency already exists. A man with one watch always knows what time it is. A man with two watches is never sure. If you have two competing sources of information, they will generally be inconsistent. If you have two competing sources of commands, they will generally be inconsistent. Perhaps also review the CAP theorem.

The question is the degree to which the programs developed in the PL will automatically tolerate this inconsistency.

The reason you find this confusing goes back to that initial assumption of yours: that "any logic is supposed to embody a system of reasoning about something." Carl Hewitt doesn't make that assumption. He wants to use logics for constructing software, for controlling systems, ideally right from the (often inconsistent) specifications. Direct Logic is intended as a basis for general-purpose distributed logic programming. You are looking at Direct Logic and attempting to see something else, perhaps a tool for software verification.

### Suppository

I disagree. Logics are applied often and effectively to controlling systems. Further, logic models aren't always "about something"; they may be first-class entities rather than reflections of some outside system.

Nothing here is incompatible with what I said.

"Reasoning about reasoning" was intended to cover "studying logic for its own sake" or in your words "logic as a first-class entity".

If you are using a logic to construct a language or to construct a program, you are presumably reasoning about that language or program as you work. (If you disagree, assume that you aren't reasoning while you construct and see where that gets you... ;-) )

Those people probably don't work with or reason about distributed systems or large software systems.

This is neither here nor there. I made clear in my starting point that I accept the need for some kind of inconsistency-tolerance in logic for distributed systems. That idea is not in dispute.

What is unclear at this point is whether the inconsistency-tolerance described for Direct Logic actually accomplishes what it sets out to do. And I haven't been convinced of that yet.

### No Meta-level Reasoning

If you are using a logic to construct a language or to construct a program, you are presumably reasoning about that language or program as you work.

Sure, I am reasoning about that language or program. But I'm not necessarily using the logic language to reason about the program. Using a logic program to construct a program does not automatically grant me support for the meta-level reasoning needed to use the logic to reason about properties of the program.

You are 'presuming' a meta-level of reasoning that does not (necessarily) exist [in form of a concrete logical theory].

Which is why what I said above is, in fact, incompatible with what you had said earlier.

What is unclear at this point is whether the inconsistency-tolerance described for Direct Logic actually accomplishes what it sets out to do.

Direct Logic accomplishes at least half of what it set out to do, which is to prevent explosion and therefore allow open distributed logic programming.

The remaining concern is whether Direct Logic is expressive enough to cover interesting temporal-logic programs, or whether it is too expressive to support pluggable and modular composition.

### Ipso facto

You are 'presuming' a meta-level of reasoning that does not (necessarily) exist.

Clearly this is true.

### Variable Scope Example

I agree that a logical inconsistency does exist if, at a given moment, we must constrain a single variable by contradictory assertions. However, different variables with scoped names are still different variables. Asking me to "drop that constraint" is causing the same sort of dissonance as asking "what would be a pump if it wasn't a pump?" or "now let's assume that the water isn't wet".

If we wish to observe inconsistency for variables, perhaps we should consider a case where a single variable is constrained by inconsistent assertions. For example, a single variable representing wrench effort for a vehicle might be assigned a value of ≤0.10 and ≥0.30 at a given moment by different concurrent agents (perhaps in a concurrent constraint language). This is the sort of logical inconsistency that exists, on a regular basis, in the real world.

Often it is preferable to resolve these inconsistencies. Soft constraints (i.e. assertions with different priorities, aka 'preferences') can help. Biases can also help resolve inconsistencies: a 'bias' is for reasoning to fail in a predictable direction. In the above case, the desired bias might be towards slower movement. (The rule of the road is: if you aren't sure what to do, slow down!)
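A minimal sketch of the wrench-effort example, assuming interval constraints and "slower is safer" as the bias (the names and the fallback policy are illustrative assumptions):

```haskell
-- Each concurrent agent asserts an interval constraint (lo, hi) on
-- the wrench effort.
type Interval = (Double, Double)

-- Consistent constraints intersect to a nonempty interval.
intersectAll :: [Interval] -> Maybe Interval
intersectAll cs =
  let lo = maximum (map fst cs)
      hi = minimum (map snd cs)
  in if lo <= hi then Just (lo, hi) else Nothing

-- When the constraints are inconsistent, apply the "slow down" bias:
-- fall back to the smallest effort any single agent would accept.
resolveWithBias :: [Interval] -> Double
resolveWithBias cs = case intersectAll cs of
  Just (lo, _) -> lo
  Nothing      -> minimum (map snd cs)
```

The inconsistent assertions from the text, "≤ 0.10" and "≥ 0.30", become the intervals (0, 0.1) and (0.3, 1): their intersection is empty, so the bias resolves the conflict in the slow direction.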

But it is also useful to tolerate inconsistencies, such that they don't cause significant trouble when they exist.

### Paraconsistent Logics

you have to assume that you can, at least some of the time, tell the difference between P and ¬P, which is the essence of recognizing a conflict [. . .] The very idea of "resolution of inconsistency" implies being able to create new scopes that are consistent

I apologize for misleading you by walking that tangent. You asked earlier about resolution of conflicts to consistency, and so I answered, but I did not note that the question was tangential.

It is true that resolution of conflicts, whether automated implicitly or explicitly, requires recognizing them. But the mistake is assuming the need to resolve conflicts. If we can tolerate conflicts, then we should not need to resolve them. And paraconsistent logics are all about toleration of inconsistency.

To tolerate inconsistency means we lose the rule 'from a contradiction, we can conclude anything'. This requires tweaking the inference rules - weakening them. Where intuitionistic logics lose double-negation elimination (¬¬P ⇒ P), the paraconsistent logics lose something else. Depending on the logic, it might lose the contrapositive rule ((A → B) ∧ ¬B ⇒ ¬A), or at least weaken it.

Paraconsistent logics tolerate logical inconsistency, thus, by making fewer inferences and fewer conclusions.
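Under Curry-Howard, the rule that paraconsistent logics give up, ex falso quodlibet, corresponds to the function out of the empty type:

```haskell
import Data.Void (Void, absurd)

-- "From a contradiction, conclude anything": a proof of falsity (a
-- value of the empty type Void) yields a proof of any proposition b.
-- Paraconsistent logics weaken their inference rules so that this
-- conclusion is no longer derivable from an inconsistent theory.
explosion :: Void -> b
explosion = absurd
```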

For Direct Logic in particular, you could review the paper whose link was buried on that website Carl Hewitt referenced in the OP. I'll see if I can find the exact inference rules Direct Logic drops.

For Direct Logic in particular, you could review the paper whose link was buried on that website Carl Hewitt referenced in the OP. I'll see if I can find the exact inference rules Direct Logic drops.

Yes, reading the paper was my first step before posting here. In fact, I also had read the ActorScript paper. And some other stuff linked from his site.

I took it for granted (as an axiom, so to speak) that anyone who was responding to me in this thread had also read the paper (and presumably understood it, if they presumed to tell me I just hadn't worked hard enough yet).

Unfortunately, in LtU threads ex falso quodlibet does hold, so I guess my end of the discussion is reduced to ⊥ now.

At this point, if anyone could explain even a toy problem and show how Direct Logic would be used successfully to reason about it, I would be in a better position to determine whether it is worth pursuing further for the areas that interest me.

### Insane Assumptions

I took it for granted (as an axiom, so to speak) that anyone who was responding to me in this thread had also read the paper (and presumably understood it, if they presumed to tell me I just hadn't worked hard enough yet).

Well, that wasn't very rational or logical-minded of you. And you've been on this LtU forum for longer than I, too! I could make another comment about bad habits and idealism interfering with reasoning... but I probably shouldn't; after all, in relative terms, I'm the newb.

I responded to: "I'm still unclear what you hope to gain by formalizing incompleteness and inconsistency within your system." A reasonable response to this doesn't require reading the paper. Experimenting with effective ways to tolerate, control, and resolve logical inconsistency in distributed systems has been a subject I've studied for a very long time. And, as I noted earlier, the same motivations apply for Direct Logic. I need not read the paper to answer a question about motivations.

Most of my knowledge about Direct Logic comes from reading an older Actor Script paper. I don't recall much, other than that I was not impressed (albeit, I was certainly more put off by the presentation than the content), so I've been reluctant to expend effort reading this new paper unless someone said something interesting about it.

Unfortunately, in LtU threads ex falso quodlibet does hold, so I guess my end of the discussion is reduced to ⊥ now.

And I say pigs fly.

### They're coming to take me away

Well, that wasn't very rational or logical-minded of you.

If I were 100% rational and sane, I would have retired from LtU a long time ago. ;-)

"I'm still unclear what you hope to gain by formalizing incompleteness and inconsistency within your system." ... Experimenting with effective ways to tolerate, control, and resolve logical inconsistency in distributed systems has been a subject I've studied for a very long time.

"Tolerating inconsistency" and "formalizing inconsistency into your system" are two very different things.

That's why I gave the Haskell example, to show that I understood the utility of a system that tolerated inconsistency, but which still relied on a consistent basis to make its interesting distinctions.

Your Haskell example doesn't demonstrate logical inconsistency, i.e. conflicting assumptions and information. The notion that A→B is really shorthand for (A∨⊥)→(B∨⊥) is not inconsistency.

"Tolerating inconsistency" and "formalizing inconsistency into your system" are two very different things.

I agree. Of course, once you formalize inconsistency, you have much better chance to recognize it, control it, avoid it, or find ways to tolerate it.

### Does too

The notion that A→B is really shorthand for (A∨⊥)→(B∨⊥) is not inconsistency.

There was a great post on a Haskell mailing list a few years ago that explained exactly why lifted types break consistency, but my search-fu has failed me.

To explain fully why lifted types are a form of inconsistency is probably beyond the scope of this thread, but it would involve at least the dreaded Curry-Howard, and maybe even the abominable topoi, and I haven't finished digesting my breakfast yet.

### Lifted types can violate

Lifted types can violate parametricity in cases such as Maybe A. But ⊥ has no observable properties, and so does not have this problem.

### A better answer

The notion that A→B is really shorthand for (A∨⊥)→(B∨⊥) is not inconsistency.

As a community service (as in "does this count towards my community service, your honour") I think I can explain this without going too far afield.

First, that really should be A → (B ∨ ⊥) (lifting is general only on the right).

This will be equivalent to (A → B) ∨ (A → ⊥).

Next you have to know that A → ⊥ is the same as ¬A.

Now let's say I "prove" A by giving you a token (i.e. value) of type A, and so A holds in this system.

Next I "prove" A → ⊥ by giving you a function that takes an A and then doesn't return. Since I've previously proved A, I have now shown that ¬A holds as well, and we have a contradiction, i.e. an inconsistency.

The system still makes useful distinctions because of how we interpret non-termination as a special case in the semantics rather than as a first-class member of the logic.
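For concreteness, the argument above can be sketched in a few lines of Haskell (my own illustration; the type `A` and all the names are made up):

```haskell
-- A made-up token type; a value of it is a "proof" of A under Curry-Howard.
data A = MkA

proofOfA :: A
proofOfA = MkA

-- A "proof" of A -> ⊥ (i.e. ¬A): a well-typed function from A that never
-- returns. It type checks at any result type b, even though A is inhabited.
notA :: A -> b
notA a = notA a

main :: IO ()
main = case proofOfA of MkA -> putStrLn "ok"  -- we never actually call notA
```

Both `proofOfA` and `notA` type check, which is the contradiction being described: the type system "proves" A and ¬A at once, and only the semantics of non-termination keeps this from being observable.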

### really should be A →

really should be A → B (lifting is general only on the right)

That's a matter of choice, I believe. Laziness allows for A to be appropriately meaningful on the LHS. It is possible to have functions that exchange A→B, if they make no observations on A.

you have to know that A → ⊥ is the same as ¬A

That seems quite questionable.

The system still makes useful distinctions because of how we interpret non-termination as a special case in the semantics rather than as a first-class member of the logic.

I see. So the logical theory you are using to decide this inconsistency (in the logical sense) is, itself, inconsistent (in a second sense) with the semantics of the system for which it is deciding inconsistency?

### Questionably questionable

you have to know that A → ⊥ is the same as ¬A

That seems quite questionable.

In fact, it is quite standard in both classical and intuitionistic logic.

### Curry-Howard Equivocation

I question that the ⊥ of intuitionist absurdity or negation you are using for that assertion is the same as the ⊥ of Haskell non-return that you were using for the function descriptor.

### Moe-Howard Correspondence

I question that the ⊥ of intuitionist absurdity you are using for that assertion is the same as the ⊥ of Haskell non-return that you were using for the function descriptor.

And I question why you are pulling out every sophistry you can in this discussion.

I've pretty much stuck to standard (if simplified) accounts of this stuff.

You are welcome to reject the standard account if you like, but if your purpose is to understand my point as it relates to Direct Logic, I don't see what that accomplishes.

If instead you are playing some weird drinking game that makes somebody drink every time I respond, surely they must be on the floor by now.

### You claim malign

You claim malign 'sophistry'? Haskell non-termination is related to indecision (of termination), whereas negation is another concept entirely.

To the best of my understanding, your point does not relate to Direct Logic. But I won't argue it any further; if you believe your point valid, do your own work and relate it by line and figure to the paper Hewitt provided.

### Benign sophistry

You claim malign 'sophistry'?

"Overzealous hair-splitting" then, if you find that a less malign accusation.

if you believe your point valid, do your own work and relate it by line and figure to the paper Hewitt provided

The onus is on Hewitt to prove how his approach differs from and improves upon the standard ones. My original point was that he has not done that, or maybe has but I can't find it. No one has said anything since to change that impression.

Who deputized you (and for that matter Z-Bo) to speak for him anyway?

### "Overzealous hair-splitting"

"Overzealous hair-splitting" then, if you find that a less malign accusation.

That's much nicer.

Who deputized you to speak for him anyway?

I've not spoken for Hewitt; any statements I make are my own. You've asked questions, i.e. about the utility of logics and systems that tolerate inconsistency, that are of personal interest to me.

the onus is on Hewitt to prove how his approach differs from and improves upon the standard ones. My original point was that he has not done that, or maybe has but I can't find it

It's in the paper, or at least the 'differs from' aspect is. I thought you read it?

(I'll grant that I needed to read very carefully. Penetrating the presentation of the paper and premature introductions of features (such as reification, abstraction, modular theories) was quite painful.)

### Bad hair day

It's in the paper, or at least the 'differs from' aspect is.

I saw assertions and programmatic rationales, which you have been repeating.

I want proofs and practical examples showing the benefits and abilities of the system.

This is pretty standard, basic stuff when presenting a new logic.

Despite the claim of being a "minor adjustment" to classical logic, there are some things in this paper that anyone familiar with classical logic is going to find very odd. There need to be pretty detailed explanations and proofs showing that they in fact accomplish what the paper asserts, if Hewitt wants to make a claim on the attention of such people.

I've not spoken for Hewitt.

Such a passionate defense both before and after having read the paper might look to an outside observer like you were.

### Hewitt's presentation leaves

Hewitt's presentation leaves much to be desired, I agree. He doesn't effectively explain the problem being solved. I understand only because I've spent much time studying similar problems, independently, and thus understand his vague references and I see where Direct Logic would fit.

Despite the claim of being a "minor adjustment" to classical logic, there are some things in this paper that anyone familiar with classical logic is going to find very odd.

I agree. Direct Logic™ adds a lot of things you don't typically see in descriptions of classic logic - reification, abstractions, modular theories. But the core or essence of Direct Logic™ is to take Classic Logic and weaken a couple of inference rules (disjunction introduction, general proof-by-contradiction).

The other additions could just as easily be added to Classic Logic. I'll grant there may be concerns about whether these additions allow ex falso quodlibet.

a passionate defense both before and after having read the paper might look to an outside observer like you were

That only means I'm passionate about the subject. I didn't speak much about the paper before reading it.

### Other Examples

Perhaps clearer would be an example in concurrent constraint programming. Inconsistency is quite obvious: binding a variable to two incompatible values. For example, (x ≥ 10),(x ≤ 15) is 'consistent', whereas (x ≥ 15),(x ≤ 10) is 'inconsistent' because no solution exists. A consistent concurrent constraint program would be one that never performs a tell that would introduce an inconsistent constraint. This can be related to logical consistency, with agents working together to build a theorem.
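As a toy illustration of that consistency notion (my own sketch; the interval representation is hypothetical), constraints of the form (x ≥ lo),(x ≤ hi) on a single variable are consistent exactly when the intervals intersect:

```haskell
-- An interval (lo, hi) represents the conjoined constraints x >= lo, x <= hi.
type Interval = (Int, Int)

-- A set of interval constraints is consistent iff some value satisfies
-- them all, i.e. the largest lower bound is at most the smallest upper bound.
consistent :: [Interval] -> Bool
consistent cs = maximum (map fst cs) <= minimum (map snd cs)

main :: IO ()
main = do
  print (consistent [(10, maxBound), (minBound, 15)])  -- x >= 10, x <= 15
  print (consistent [(15, maxBound), (minBound, 10)])  -- x >= 15, x <= 10
```

The first pair admits solutions (any x in [10,15]); the second admits none, which is the 'inconsistent' case in the example above.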

In general, recognition of inconsistency in a concurrent constraint program is at least a hard problem (likely undecidable, in general), and that assumes you have authority to view every variable, every concurrent agent, and every constraint in the entire system. When we start talking 'pluggable/modular/mobile' concurrent constraint systems, where some constraints are provided as input from external systems, the necessary analysis becomes impossible. Since you cannot prevent inconsistency, you have little choice but to manage or tolerate it.

That said, you can often identify and prevent a very useful subset of inconsistencies: those that have no dependency upon external inputs. This is similar to your 'local consistency', except that it accounts only for a subset of consistency concerns.

There are also some interesting variations on constraint/logic systems where, instead of a solution, the main decision is: 'does a solution exist?'. In that case, one is using 'consistency' of a composed system as the distinguishing property, the basis for queries and control. One must be able to predict and control consistency in order to reason about composition, so those issues are even more important than usual. This is a possibility I've been considering for generative grammar based computation.

### Inconsistent 'inconsistent'

For example, (x ≥ 10),(x ≤ 15) is 'consistent', whereas (x ≥ 15),(x ≤ 10) is 'inconsistent' because no solution exists.

This isn't the same meaning of 'inconsistent' that is relevant to logic.

Under a normal interpretation of what those constraints mean, a system that said that there existed a value that fulfilled the second constraint would be inconsistent, but the sentence itself is just a sentence, and therefore neither consistent nor inconsistent.

### Quibbling 'Inconsistent'

a system that said that there existed a value that fulfilled the second constraint would be inconsistent, but the sentence itself is just a sentence, and therefore neither consistent nor inconsistent

And a logical theory is just a bunch of letters strung together until you put it in context. Indeed, nothing is ever inconsistent without a system that interprets it as such.

This isn't the same meaning of 'inconsistent' that is relevant to logic.

Just because the sentences were written with infix notation doesn't mean they aren't logical sentences. Given a few rules for partial ordering, one can easily derive from the pair that GT(x,12) ∧ ¬GT(x,12).

### Distribution doesn't require inconsistent logics

Since you cannot prevent inconsistency, you have little choice but to manage or tolerate it.

This is just false. If you're going to use a logic to implement constraint solving to sort out the positions of elements in your UI, where constraints can be subject to inconsistencies that need to be resolved heuristically, let me recommend that you keep that possibly inconsistent logic isolated from the fragment that does type checking and mathematics. If you have a widget that you can "prove" is both width 10 and width 20, it's not a very good idea to resolve that by adding the fact 10 = 20.

To me, the idea of dealing with buggy programs by working in an inconsistent logic is ... not a good one. Responding to Gödel's incompleteness theorems by adopting an inconsistent logic is like responding to news that "most accidents occur near home" by moving.

### Distribution does require logical inconsistency

Distribution does require logical inconsistency - unless you plan to reject a number of properties important to most distributed systems in-the-large (open system, decentralized authority, security, modularity, pluggability, and scalability).

This isn't to say that the logics themselves are 'inconsistent'. Rather, it is information that is inconsistent. If you plan to use logic programming to develop and construct these distributed systems, the theories described in these logics will be logically inconsistent.

If you have a widget that you can "prove" is both width 10 and width 20, it's not a very good idea to resolve that by adding the fact 10 = 20.

Agreed. But I don't see why you state this as an objection. The ability for a logic to not immediately infer the fact 10 = 20 seems to fall under 'managing or tolerating' inconsistency.

To me, the idea of dealing with buggy programs by working in an inconsistent logic is ... not a good one.

That's fine. Paraconsistent logics aren't inconsistent logics. Paraconsistent logics are consistent logics that tolerate inconsistent theories.

### Does not

As long as your logic is just proving things of the form "Hypothesis |- Conclusion" (e.g. "Width=10, Width=20 |- 10=20") there's no problem, and there's also no inconsistency. That is, you can just explicitly track possible theories, then if you find contradiction between some hypotheses, you can use that to update your beliefs. Describing this as "working in an inconsistent / paraconsistent logic" is misleading and sensational. I gather that DirectLogic actually is inconsistent (in the sense of P and not P) and I'm quite skeptical of such an idea.

### Direct Logic is a subset of Classical Logic

I gather that DirectLogic actually is inconsistent (in the sense of P and not P) and I'm quite skeptical of such an idea.

You are wrong - unless Classical Logic is also inconsistent. The set of valid inference rules in Direct Logic is a proper subset of those in Classical Logic.

you can just explicitly track possible theories, then if you find contradiction between some hypotheses, you can use that to update your beliefs

Aye. But the question is: by which rules may you use this contradiction to update your beliefs? In classic logic, you can use a contradiction to add any belief you want. In direct logic, this is not the case.

### I don't know much about

I don't know much about Direct Logic, but from the link Z-Bo provided above (without the Time Cube style font shifting):

Because the incompleteness theorem is self-proved, every theory in Direct Logic is self-provably inconsistent, e.g., ⊢_T Uninferable_T and ⊢_T ¬Uninferable_T, which is an inconsistency. The proof of inconsistency goes as follows: [...]

That doesn't sound like classical logic to me.

Regarding your second question, I don't think there is one right way to update your beliefs when you've found a contradiction. If the contradiction arises from some combination of things that are all supposed to hold, then you're essentially at an assertion failure. How you deal with such failures should depend on the architecture of your application. I don't want "just carry on, nothing's consistent anyway" baked into the core of my software.

### That doesn't sound like

That doesn't sound like classical logic to me.

It is classical logic, with a few of the classic inference rules traded for weaker forms. (Those weaker forms can be proven from more general inference rules in classical logic, but must be primitive in Direct Logic.)

Carl Hewitt, however, made several mistakes in his presentation of Direct Logic that it took a while to muddle past. For example:

• Hewitt dives right into reification of theories and 'round-tripping' between Logic and a serialized XML format. This should be separated into another paper, especially since it can be applied independently of Direct Logic.
• Hewitt uses subscripts on the inference turnstile in order to modularize his theories. This allows different theories - each with different axioms and assumptions - to be developed separately and interact. However, introducing these subscripts to people who are still attempting to understand the basics of Direct Logic was premature. If you are working with just one theory, you are free to drop the subscripts.
• Hewitt presents abstraction and higher-order logics a bit early. To progressively 'build' the higher-order Direct Logics, with reference to the same in classic logic, would have been nicer.
• Hewitt uses way too many quotes, and they distract. I was treated to half of Through the Looking Glass when reading his old Actors paper. At least this one has more variety.

Any contradiction you infer in Direct Logic you would also infer in Classic Logic. (The converse is not true.) [I'm assuming the same choices for higher-order logic and reification.] Direct Logic avoids explosion ('tolerates' inconsistency) because it is possible to derive a contradiction but not derive some other property. I.e. it is possible to infer A and ¬A, yet fail to infer B (all within the same theory).
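A crude way to see "a contradiction without explosion" (my own toy, nothing like Direct Logic's actual inference rules) is a fact base whose entailment relation simply refuses ex falso:

```haskell
-- Literals: an atom or its negation.
data Lit = Pos String | Neg String deriving (Eq, Show)

-- A deliberately weak entailment: only facts actually in the base are
-- entailed, so Pos "A" and Neg "A" together do not entail anything else.
entails :: [Lit] -> Lit -> Bool
entails kb q = q `elem` kb

main :: IO ()
main = do
  let kb = [Pos "A", Neg "A"]       -- an inconsistent theory about A
  print (entails kb (Pos "A"))      -- the contradiction is visible...
  print (entails kb (Neg "A"))      -- ...on both sides...
  print (entails kb (Pos "B"))      -- ...but B does not follow from it
```

A real paraconsistent logic supports far more inference than this, of course; the point is only that "infer A and ¬A, yet fail to infer B" is a coherent behavior for an entailment relation.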

I don't think there is one right way to update your beliefs when you've found a contradiction.

Sure. But you answer the wrong question. The right questions are: Do you believe there is at least one wrong way to update your beliefs when you've found a contradiction? Can a logic reflect this?

I don't want "just carry on, nothing's consistent anyway" baked into the core of my software.

Nor do I. Nor, from all evidence, does Hewitt.

With classic logic, either 'everything' is consistent or 'nothing' is consistent. With a paraconsistent logic, there are shades of gray where only some things within a theory are inconsistent. The modularization of theories offer points of control - the ability to construct a consistent theory that depends upon an inconsistent one.

without the Time Cube style font shifting

Read the paper. It's a bit more thorough, and the fonts look nicer in a PDF viewer.

### I'm assuming the same

I'm assuming the same choices for higher-order logic and reification.

It sounds like you're taking "Direct Logic" to mean just the system of basic inference rules, and not whatever axioms of abstraction and reification there are that lead to contradictions. Fine, I don't care what "Direct Logic" means.

Do you believe there is at least one wrong way to update your beliefs when you've found a contradiction?

There's nothing wrong with the reasoning Direct Logic eliminates. If I poke a hole in my time deprivation chamber and notice that there's no traffic on the expressway, I can reason that it's not rush hour.

Anyway, this conversation looks to be heading all over the place. I'll reiterate my position that there is no need to embrace inconsistency in concurrent applications. A well designed application can distinguish between the things that are known to hold and the things that are approximate or merely probable.

### Distribution vs. Concurrency

I'll reiterate my position that there is no need to embrace inconsistency in concurrent applications.

I'll remind you that this is about distributed applications, which adds some challenges beyond those of concurrency. If we have a carefully confined concurrent app, I'll grant that there are ways to avoid observable 'local' inconsistencies.

I doubt your assertion about 'well designed applications' for real-world applications. But distributed applications will deal with inconsistency whether or not they are well-designed.

There's nothing wrong with the reasoning Direct Logic eliminates.

Similarly, there is also nothing wrong with the reasoning that Intuitionist Logic eliminates. The elimination isn't because the reasoning leads to incorrect conclusions, but rather because the reasoning is computationally problematic.

you're taking "Direct Logic" to mean just the system of basic inference rules

He did state, several times, that Direct Logic was the minimal fix. ;-)

Anyway, this conversation looks to be heading all over the place.

Granted. I suppose we can let it terminate.

### I meant distributed

I meant distributed applications. I've done a fair amount of distributed programming and understand the challenges, but don't think inconsistent or inconsistency tolerant logics are going to offer much in the way of a solution.

### Whether logic programming

Whether logic programming will prove an effective basis for distributed application development remains an open question. Hewitt certainly hasn't closed the lid on it. [I imagine the ability of developers to reason about performance and security will be big issues that resist the adoption of distributed logic programming.]

An inconsistency intolerant logic would be a poor basis for an attempt, especially if we wish to support the same sort of open composition, security, and scalability achieved with message-passing systems. It would be difficult to maintain a logic-programming 'style' with open composition if any contradiction (even one involving information totally unrelated to the current inference) could 'explode' on you and render your observations useless. Preventing ex falso quodlibet greatly reduces the set of contradictions (from composed theories) that one might need to detect and resolve heuristically.

It isn't a matter of inconsistency tolerant logics themselves providing a solution. You'll need some extra features for a complete solution. But, assuming your goal is open distributed logic programming, you'd be wise to select a logic that tolerates inconsistency.

### Direct Logic is for Cloud Computing

From Hewitt on LtU:

Direct Logic has an important advantage over classical logic in that it provides greater safety in reasoning using inconsistent information. This advantage is important because information about the data, code, specifications, and test cases of cloud computing systems is invariabl[y] inconsistent and there is no effective way to find the inconsistencies.

The focus here is certainly cloud computing systems and software for it - specifications, code, data, test cases.

### Video of Stanford Logic Colloquium on Direct Logic

Folks,
The video of a recent Stanford Logic Colloquium on "Wittgenstein versus Gödel on the Foundations of Logic" is available here:

http://wh-stream.stanford.edu/MediaX/CarlHewittEdit.mp4

Cheers,
Carl

### A larger task

Thanks for your perceptive comment.

Haskell is an excellent start on implementing functional reasoning. However, it suffers from not being able to do concurrent reasoning. You might be interested in ActorScript(TM), a superset of Haskell (including its type system) that can also do explicit concurrent reasoning using Logic Programming, published as arXiv:0907.3330, available at the following link:

http://arxiv.org/abs/0907.3330

Regards,
Carl

### Toy Problems

Marc says: if anyone could explain even a toy problem and how Direct Logic would be used successfully to reason about it, I would be in a better position to determine whether it is worth pursuing further for the areas that interest me.

Reasoning in Direct Logic is very much the same as reasoning in classical logic, with two exceptions to prevent explosion:

• disjunction introduction (P ⊢ P ∨ Q) is lost. Instead there are a few forms of absorption, and a rule to trade conjunction for disjunction (p13).
• general proof by contradiction (P, ¬P ⊢ Q) is traded for four restricted forms of self-refutation (p11). These are called:
• self infers opposite
• self infers argument for opposite
• argument for self infers opposite
• argument for self infers argument for opposite
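To see why these two rules are the targets, here is the textbook classical derivation of explosion (the standard argument, not taken from Hewitt's paper); step 3 is the disjunction introduction that Direct Logic drops:

```latex
\begin{array}{lll}
1. & P          & \text{premise} \\
2. & \neg P     & \text{premise} \\
3. & P \vee Q   & \text{disjunction introduction, from 1} \\
4. & Q          & \text{disjunctive syllogism, from 2 and 3}
\end{array}
```

Blocking step 3 is enough to stop an arbitrary Q from following from a contradiction, which is exactly the "tolerance" under discussion.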

So, to see a difference in expression, we need toy problems that classically involve a proof-by-contradiction or a disjunction introduction. Unfortunately, I'm blanking on good classical problems. Perhaps you could find one?

I'm a bit curious as to whether an intuitionist direct logic would be sufficient for most computation.

### Direct Logic seems to

Direct Logic seems to generalize a logical system developed in many AI textbooks, such as Russell and Norvig's. It is simply predicate logic using modus ponens. Modus ponens eliminates proof by contradiction because there is no "NOT" symbol, and disjunction introduction seems to equate to multiple rules with the same THEN clause. This can be avoided in favor of multiple-IF syntax (see my link below). Modus ponens is incomplete according to Russell and Norvig, but not inconsistent (?) by my "logic", because there is no "NOT" symbol.

I have used this system on a practical level for at least twenty years, and have always found it to be, well, practical. I also find that one-to-one unification is good enough and prevents many cases of non-termination; also, Boolean logic can be embedded within predicates via functions. There are many other "practical" hacks and features one can find here.

I have also been told with good authority that Lewis is an ASM language although it was developed without any knowledge of the ASM school.

### Catch-22 in Direct Logic

Quoting from "Common sense for concurrency and inconsistency tolerance using Direct Logic™ and the Actor Model" at http://arxiv.org/abs/0812.4852:

Inconsistency tolerant theories can be easier to develop than classical theories because perfect absence of inconsistency is not required. In case of inconsistency, there will be some propositions that can be both proved and disproved, i.e., there will be arguments both for and against the propositions.
A classic case of inconsistency occurs in the novel Catch-22 [Heller 1995] which states that a person "would be crazy to fly more missions and sane if he didn't, but if he was sane he had to fly them. If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to. Yossarian was moved very deeply by the absolute simplicity of this clause of Catch-22 and let out a respectful whistle. 'That's some catch, that Catch-22,' he observed."
In the spirit of Catch-22, consider the following axiomatization of the above:

• 1. p: Able[p, Fly], ¬Fly[p] ⊢_Catch-22 Sane[p] (axiom)
• 2. p: Sane[p] ⊢_Catch-22 Obligated[p, Fly] (axiom)
• 3. p: Sane[p], Obligated[p, Fly] ⊢_Catch-22 Fly[p] (axiom)
• 4. ⊢_Catch-22 Able[Yossarian, Fly] (axiom)
• 5. ¬Fly[Yossarian] ⊢_Catch-22 Fly[Yossarian] (from 1 through 4)
• 6. ⊢_Catch-22 Fly[Yossarian] (from 5 using Self Infers Opposite)
• 7. p: Fly[p] ⊢_Catch-22 Crazy[p] (axiom)
• 8. p: Crazy[p] ⊢_Catch-22 ¬Obligated[p, Fly] (axiom)
• 9. p: Sane[p], ¬Obligated[p, Fly] ⊢_Catch-22 ¬Fly[p] (axiom)
• 10. ⊢_Catch-22 Sane[Yossarian] (axiom)
• 11. ⊢_Catch-22 ¬Fly[Yossarian] (from 6 through 10)

Thus there is an inconsistency in the above theory Catch-22 in that:

• 6. ⊢_Catch-22 Fly[Yossarian]
• 11. ⊢_Catch-22 ¬Fly[Yossarian]

The above situation is a disaster for Classical Logic because (among other things) the following immediately follows in Classical Logic:

• ?. ⊢_Catch-22 MadeOf[Moon, GreenCheese]

Fortunately, the above inference does not hold in Direct Logic :-)
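For what it's worth, the propositional skeleton of this derivation can be checked mechanically with a naive forward-chaining closure (my own encoding, not from the paper; it starts from axioms 4 and 10 and reaches the same contradictory pair, though it bypasses the Self Infers Opposite step used at line 6):

```haskell
import Data.List (nub)

type Fact = String
type Rule = ([Fact], Fact)  -- (premises, conclusion)

-- The Catch-22 axioms, instantiated for Yossarian.
rules :: [Rule]
rules =
  [ (["Able", "NotFly"], "Sane")
  , (["Sane"], "Obligated")
  , (["Sane", "Obligated"], "Fly")
  , (["Fly"], "Crazy")
  , (["Crazy"], "NotObligated")
  , (["Sane", "NotObligated"], "NotFly")
  ]

-- Repeatedly apply every rule whose premises hold, until a fixed point.
close :: [Fact] -> [Fact]
close fs =
  let fs' = nub (fs ++ [c | (ps, c) <- rules, all (`elem` fs) ps])
  in if fs' == fs then fs else close fs'

main :: IO ()
main = do
  let derived = close ["Able", "Sane"]        -- axioms 4 and 10
  print ("Fly" `elem` derived)                -- line 6
  print ("NotFly" `elem` derived)             -- line 11: contradiction
  print ("GreenCheeseMoon" `elem` derived)    -- explosion does not occur
```

The closure derives both Fly and NotFly but not the unrelated GreenCheeseMoon, since this Horn-clause fragment has no ex falso rule.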

### And then?

Thus there is an inconsistency in the above theory Catch-22 in that:

• 6. ⊢_Catch-22 Fly[Yossarian]
• 11. ⊢_Catch-22 ¬Fly[Yossarian]

I'm not sure what we've gained by this. We know exactly what we knew when we started out: that Yossarian is hosed.

Moreover, we could have formulated this inconsistent theory in any logic formalism. The difference being that most other formalisms would tell us that this is a broken theory, and that if we want it to be a system that works, we need to make changes to the premises.

Since, in the construction of a software system, a contradiction would manifest itself as a deadlock, an infinite loop, or some other infelicity, it is unclear to me how globally accepting inconsistency helps us to build more reliable systems.

In other words, what does Direct Logic allow us to do that we couldn't already before, especially when it seems to give up the most useful thing we already had?

As an aside, the whole point of Catch-22 is that the system is designed to make it essentially impossible to get out of flying.

I would have been much more impressed with a logic system that could detect this and conclude consistently:

• ⊢_Catch-22 Fly[Yossarian]

### Inconsistency is Recursively Undecidable

Detecting inconsistency in theories like Catch-22 is recursively undecidable in the worst way. We don't even know how to start.

### ...so why not make the whole system undecidable?

Detecting inconsistency in theories like Catch-22 is recursively undecidable in the worst way. We don't even know how to start.

In the general case, of course it is. In a specific case such as this, a person can spot it pretty well, and probably could prove it, perhaps with some reformulations.

Other people have done work on these kinds of "Liar's Paradox"-type problems, notably Jon Barwise with, in my opinion, some success.

Regardless, it still isn't clear to me how embracing unrestricted undecidability enhances our ability to reason about systems.

### Recursive Undecidability follows from High Expressiveness

Direct Logic has high expressiveness in order to make arguments (inferences) concise and understandable. Recursive undecidability follows directly from high expressiveness.

### Baby and Bathwater

Recursive undecidability follows directly from high expressiveness.

Yes, of course.

For example, many useful programs can be written in a decidable language, but some of those are much more conveniently expressed in a non-decidable language.

We accept the risk that we might write a program that is undecidable for the convenience of expressiveness, but we don't then conclude that we are best off treating every program like it is undecidable. Many specific programs are provably decidable by simple inspection.

A logical system that can make claims like "either x is undecidable or P(x)" for some property P is quite different from one where we say "for all x, x is undecidable".

### Inconsistency Recursively Undecidable for *Classes* of theories

It's not that Inconsistent[T0] is recursively undecidable for some particular theory T0. Instead the predicate Inconsistent is recursively undecidable for some *class* of theories. For example, consistency is recursively decidable for the class of Boolean theories of Direct Logic. However, the Boolean theories of Direct Logic are not very expressive.

### From theory to practice

Inconsistency Recursively Undecidable for *Classes* of theories

Let's take another, more concrete and practical approach.

Since DL is supposed to be for reasoning about distributed systems, can you give me a simple example of a problem from distributed systems where reasoning about it in DL (especially using its inconsistency-tolerant features) will help to make some useful judgment about a particular system?

Put another way, can you demonstrate a simple theory of a simple system in DL that helps me to make that system better in some way?

### Practical payoff is in larger systems

The practical payoff for the use of Direct Logic comes in larger systems that have thousands of (known and unknown) inconsistencies among specifications, use cases, and implementation. The unknown inconsistencies are particularly treacherous in using Classical Logic.

### Liberating Constraints

what does Direct Logic allow us to do that we couldn't already before, especially when it seems to give up the most useful thing we already had?

A paraconsistent logic allows you to reason about different flavors and degrees of (partial) consistency. In classical logic a system is either 100% consistent or 100% inconsistent (due to explosion). A paraconsistent logic can make useful decisions even regarding inconsistent systems.

Large software systems consist of hundreds of independently developed, independently maintained, interacting components. If you must assume a 'consistent system' for your logic to work, you cannot reason at all about large software systems. Paraconsistent logics don't support as many inferences about small systems, but they enable many more inferences about large systems.
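The contrast with explosion can be seen in a toy sketch (a naive forward-chainer, not an actual paraconsistent logic): it only propagates literals it can actually derive, so a contradiction about one atom does not license arbitrary conclusions about others, whereas classical entailment from an inconsistent set would derive everything.

```python
def consequences(facts, rules):
    """facts: set of literals; rules: list of (premise, conclusion) pairs.
    Naive forward chaining: only literals actually derived are reported,
    so an inconsistent pair of facts does not cause explosion."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"p", "not p", "q"}   # inconsistent about p
rules = [("q", "r")]          # q implies r
result = consequences(facts, rules)
print("r" in result)          # the conclusion about r still goes through
print("s" in result)          # an unrelated s is NOT derivable: no explosion
```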

in the construction of a software system a contradiction would manifest itself as a deadlock, or an infinite loop or some other infelicity, it is unclear to me how globally accepting inconsistency helps us to build more reliable systems

Even with a logic that tolerates inconsistency, we can still selectively detect inconsistencies and target them for elimination. Just because I am willing to tolerate inconsistency doesn't mean I'm going to embrace it!

### Round and round the mulberry bush...

A paraconsistent logic allows you to reason about different flavors and degrees of (partial) consistency.

You keep saying this like you're telling me something I don't know. I'm really not sure what you think you are adding to this conversation by repeating this.

Just because I am willing to tolerate inconsistency doesn't mean I'm going to embrace it!

To reiterate what I've been saying since the very first post in this thread, it is unclear to me whether Direct Logic gives you the choice whether to embrace inconsistency or not.

As near as I can tell, this paper asserts that Direct Logic is paraconsistent rather than proving it. That it is paraconsistent in any meaningful way is made unlikely by the fact that it tolerates explicitly inconsistent theories.

The principle that something is sound so long as you only make sound inferences isn't a principle of soundness likely to appeal to most people interested in logic, and it is not clear how it would be useful in practice.

I'm open to being shown otherwise, but telling me over and over what I had grasped from the first really doesn't help with that.

### Monkey and the Weasel

It is unclear to me whether Direct Logic gives you the choice whether to embrace inconsistency or not.

If I'm forced to accept an inconsistency that does not affect any conclusions of interest, that is (for all computational purposes) the same as not having any inconsistency. Your question - the one you've been reiterating since the first post - is not necessarily a relevant one. A value of paraconsistent logics is that not all inconsistencies are relevant.

Hewitt claims that all Direct Logic theories are 'inconsistent' in that one can prove incompleteness (using reification and abstraction). However, if Direct Logic is paraconsistent, this will not affect other conclusions of interest.

Whether Direct Logic (including reification and abstraction) is paraconsistent, I'll leave to Hewitt to prove.

That it is paraconsistent in any meaningful way is made unlikely by the fact it tolerates explicitly inconsistent theories.

Huh? That strikes me a bit like saying "that it is water resistant in any meaningful way is made unlikely by the fact that it tolerates explicitly wet conditions".

The principle that something is sound so long as you only make sound inferences isn't a principle of soundness likely to appeal to most people interested in logic, and it is not clear how it would be useful in practice.

Could you explain how this objection applies to paraconsistent logics?

telling me over and over what I had grasped from the first really doesn't help with that

It isn't clear to me that you've really grasped it, and your words continue to suggest otherwise. You recently asked the question: "can you demonstrate a simple theory of a simple system in DL that helps me to make that system better in some way?" Your question makes as much sense to me as looking at a seatbelt or airbag and asking, "How does this prevent collisions?" Making the system 'better' could be achieved in any logic that can detect [and prevent] inconsistency. The advantage offered by a paraconsistent logic involves reasoning in an inconsistent (imperfect, real-world) system that might be beyond your authority to make 'better'. We don't need any seatbelts if everyone has 20/20 foresight and drives safely. We don't need any inconsistency tolerance if everyone shares the same assumptions and rational reasoning.

### Pop goes the weasel

If I'm forced to accept an inconsistency that does not affect any conclusions of interest, that is (for all computational purposes) the same as not having any inconsistency.

This is almost exactly the principle of "local consistency" that you argued against so strenuously earlier in this thread.

For a given system, you have to show that this property holds, i.e. that you have some way to decide that any given inconsistency can't affect some purpose you are interested in.

Could you explain how this objection applies to paraconsistent logics?

I'm not sure what you are asking here, but to give you an idea what a useful paraconsistent logic would look like, take Relevant Logic. Basically there are clear (i.e. decidable) restrictions on what premises can go into a derivation, so irrelevant inconsistent premises "can't be seen" by conclusions where they don't apply. So even if the global theory being worked with is inconsistent, the local theory supporting a conclusion need not be.
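A rough sketch of that restriction (a hypothetical filter in the spirit of the variable-sharing property, not an actual Relevant Logic implementation): admit into a derivation only premises that share an atom with the conclusion, so an irrelevant contradiction never enters the local theory.

```python
def atom(lit):
    """Strip negation markers: '~p' and 'p' share the atom 'p'."""
    return lit.lstrip("~")

def relevant_premises(premises, conclusion):
    """Keep only premises sharing an atom with the conclusion."""
    goal_atoms = {atom(conclusion)}
    return [p for p in premises if atom(p) in goal_atoms]

premises = ["p", "~p", "q"]   # globally inconsistent (p and ~p)
# For the conclusion 'q', the contradiction about p is irrelevant
# and never enters the local theory supporting that conclusion:
print(relevant_premises(premises, "q"))   # ['q'] -- a consistent local theory
```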

This is another case that demonstrates what I was earlier calling "local consistency".

Support in making the system 'better' could be achieved in any logic that can detect inconsistency.

Ah, we agree completely at last.

If you can show me how DL can and does detect this inconsistency, I will know what I've been trying to find out.

Given that Carl Hewitt is emphasizing in this very thread that you can't detect inconsistency, and given that inconsistency is "baked right into" DL, I'm left with the feeling that I started with: that maybe it can't.

### Validity of arguments is recursively decidable in Direct Logic

Given any alleged argument (proof) for a theory in Direct Logic, its validity is recursively decidable.

A huge problem with Relevance Logic (AKA Relevant Logic) is that inference is extremely restrictive and awkward to the point that mathematicians don't use it. For example, many of the standard natural deduction rules and Boolean equivalences do not hold in Relevance Logic.

Of course, as in every proper logic, validity of arguments (proofs) is also recursively decidable in Relevance Logic. However, it is more complicated in Relevance Logic than in Direct Logic.
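Why proof validity is decidable is easy to see for a Hilbert-style fragment (a sketch; the actual rules of Direct Logic and Relevance Logic differ): checking a finite proof is just a finite scan verifying that each step is a premise or follows by modus ponens from earlier steps.

```python
def check_proof(premises, steps):
    """Each step is a formula; an implication A->B is encoded as
    ('->', A, B). A step is valid if it is a premise or follows by
    modus ponens from two earlier steps. The check is a finite scan
    over a finite proof, hence plainly decidable."""
    seen = []
    for s in steps:
        ok = s in premises or any(
            a == ("->", b, s) for a in seen for b in seen)
        if not ok:
            return False
        seen.append(s)
    return True

p, q = "p", "q"
proof = [p, ("->", p, q), q]                   # p, p->q, therefore q
print(check_proof([p, ("->", p, q)], proof))   # True: a valid argument
print(check_proof([p], [q]))                   # False: q doesn't follow
```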

### Proof?

The validity of arguments (inferences) is recursively decidable in Direct Logic.

Is there some proof of that that I have just somehow missed?

### Validity of arguments is recursively decidable in Direct Logic

See page 15 of the paper.

### Localizing Inconsistency?

almost exactly the principle of "local consistency" that you argued against so strenuously earlier in this thread.

I disagree. There is no structurally defined subsystem one can point at and say: 'this subsystem is logically consistent'. The property I described does not involve any well-defined subsystem common to the set of arguments. And a subsystem consistent for one argument might be inconsistent for another.

That property is thus much weaker than local consistency, if not entirely distinct. However, it might be strong enough to support your goals. I suspect you could achieve your local consistency (for relevant inferences) if you apply some self-discipline when expressing and composing the theories.

you have to show that this property holds, i.e. that you have some way to decide that any given inconsistency can't affect some purpose you are interested in.

In DL, you can use names and atoms for this purpose. There are no inference rules in Direct Logic that allow you to introduce a new name (i.e. for a proposition, theory, or object) on the RHS of the turnstile that wasn't already present somewhere on the LHS of the turnstile.

Thus, by controlling the namespace, you can control where inconsistencies might propagate. (You might also be able to apply some object capability model principles.)

[Addendum:] That said, you use the word "can't" to connote static analysis of effect of potential inconsistency. I used "does not" because more dynamic applications of logic may be just as concerned about actual inconsistencies in a system rather than merely potential ones.

If you can show me how DL can and does detect this inconsistency, I will know what I've been trying to find out.

You can test both (⊢T A) and (⊢T ¬A). If both hold, you've discovered (detected) inconsistency in T. You will only find the inconsistencies you test for because inconsistencies not relevant to your tests will not be discovered. You may be able to test for broad classes of inconsistency via tests involving existential or second-order propositions.
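A sketch of that detection strategy (toy string encoding, not Direct Logic syntax): given a procedure that decides ⊢T A on the fragment being tested, check each candidate sentence against its negation.

```python
def detect_inconsistency(provable, candidates):
    """provable: a predicate deciding whether T proves a sentence
    (on the fragment being tested). Returns the candidate sentences A
    for which both A and its negation are provable."""
    neg = lambda a: a[1:] if a.startswith("~") else "~" + a
    return [a for a in candidates if provable(a) and provable(neg(a))]

# Toy theory: membership in this set stands in for provability.
theorems = {"p", "~p", "q"}
found = detect_inconsistency(theorems.__contains__, ["p", "q", "r"])
print(found)   # ['p'] -- the inconsistency about p is detected; q and r pass
```

As the comment above notes, such tests only find the inconsistencies you actually probe for, and for expressive theories the provability predicate itself may be undecidable.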

Given that Carl Hewitt is emphasizing in this very thread that you can't detect inconsistency

Some tests will be undecidable, of course. And others will be impossible to detect due to modularity concerns. You can't find all actual inconsistencies without limiting expressiveness. (It isn't difficult to find all potential inconsistencies, but the question is how many false positives you hit.) These issues aren't unique to DL.

[Edit:] I think what Hewitt was saying above is that there is no general and decidable way to answer "Given arbitrary T, is there a non-trivial A such that (⊢T A) and (⊢T ¬A)?". (Where a 'trivial' A is one that holds independently of T, such as Hewitt's argument that incompleteness is a form of inconsistency.) You cannot decide 'inconsistency' in this general sense, though you might be able to decide it if you carefully restrict T.

### Reset

Maybe we can get somewhere starting from a different tack, which for me would be:

How does Direct Logic differ from Justification Logic?

### Baseline

How does Direct Logic differ from Justification Logic?

I was actually able to grasp what JL is, how it works, and some example problems it handles better relative to other solutions just by reading the paper.

Other than that, I can't really tell. ;-)

### Is it logic or is it model theory?

This applies to both Justification Logic and Direct Logic.

I have been reading about model theory lately and notice a pervasive mixing of model theory and classical logic throughout the computer science and AI literature. This is serious because consistency and completeness are defined differently in the two theories. If you have time, take a look at chapter one of "An Introduction to Good Old Fashioned Model Theory". Is the title a play on "Good Old Fashioned AI" (GOFAI)?

Much of Direct Logic reads like model theory. The three basic ideas: direct argument, no proof by negation, and no disjunction introduction, read like model theory. But here is the problem. Please correct me if you think I am wrong. The consistency and completeness definitions being used are from proof theory, not model theory! Using model theory, Direct Logic is probably both consistent and complete. I say this because that is the case with the modus ponens system I mention above.

### So far, only proof theory for Direct Logic

So far, we only have a proof theory in "Common sense for concurrency and inconsistency tolerance using Direct Logic™ and the Actor Model" at http://arxiv.org/abs/0812.4852

However, it does include a proof theoretic account of soundness.

Cheers,
Carl