Types are fundamental to both logic and computation

Types are fundamental to both logic and computation:
* Types are fundamental to avoiding contradictions in logic. For example, proper use of types avoids all the known paradoxes. See Types in logic.
* Types are fundamental to computation. For example, in the Actor Model it is necessary to know the type of Actor in order to send it a message. See Types in computation.

The issue of types arose in another LtU discussion, where it seemed tangential. Since the topic is fundamental, it is worth discussing in its own right: there seems to be a great deal of confusion and controversy among both theoreticians and practitioners.


With faith?

What do you mean? With faith, I suppose. At some point you either assume a 'textual' formalism or take it on faith. Since there is no textual formalism here, I assume people use some faith to accept that it's correct.

Simpler than that.

I mean I get the Peano axioms. What does axiom 2 as written above say? To me it reads: for all i, its successor is not zero.

So where exactly does it define what the successor of any i is?

It doesn't

Looks like people assume some generative property not explicitly stated. Usually, they'll write something like 'Nat is the smallest fixed point given by the following two generators' (see the sketch below).
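
A minimal sketch of that generative reading (in Haskell, my own illustration, not anything from the thread): an algebraic data type is exactly 'the smallest set containing Zero and closed under Succ'.

  -- Nat as the least fixed point of two generators.
  -- (Haskell's laziness technically admits extra values,
  -- but the intended reading is the least fixed point.)
  data Nat = Zero | Succ Nat
    deriving (Eq, Show)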

There is no induction scheme

As far as I can see at a glance, there is no induction scheme, so you simply cannot prove very much. You can probably do what you did without rendering the system inconsistent, but I am not sure.

Seems like it.

I think this is what I have been trying to point out. These are not exactly the Peano axioms as I read them. I read this system as:

- Nat is a type that contains zero.
- successor is a function from Nat to Nat.
- The successor of any Nat is not zero.
- If i is the same element of Nat as j, then the successor of i is the same as the successor of j.

I read this system as the Peano axioms

Looks pretty close to me. I'll compare Hewitt's notation and your gloss thereof to Peano's Axioms on MathWorld:

0:ℕ

Your version: Nat is a type that contains zero.

MathWorld's version: Zero is a number.

Successor:[ℕ]↦ℕ

Your version: successor is a function from Nat to Nat.

MathWorld's version: If a is a number, then the successor of a is a number.

∀[i:ℕ]→ Successor[i]≠0

Your version: The successor of any Nat is not zero.

MathWorld's version: zero is not the successor of a number.

∀[i,j:ℕ]→ Successor[i]=Successor[j] ⇨ i=j

Your version: If i is the same element of Nat as j, then the successor of i is the same as the successor of j. (Note that you've gotten the implication backwards.)

MathWorld's version: Two numbers of which the successors are equal are themselves equal.

∀[P:Boolean]→  Inductive[P]⇨ ∀[i:ℕ]→ P[i] 
  where
    ∀[P:Boolean]→
      Inductive[P] ⇔
        (P[0] ∧ ∀[i:ℕ]→ P[i]⇨P[i+1])

Your version: (You seem to have chosen to forget that Hewitt posted this axiom.)

MathWorld's version: If a set S of numbers contains zero and also the successor of every number in S, then every number is in S.

Guesswork

Yes, I guessed that that other axiom was somehow involved, but it wasn't clear. This axiom uses things not defined in the other axioms, like '1' and addition. It looked separate because it was not using the successor function to 'add 1'.

+1

This axiom uses things not defined in other axioms like '1' and addition. It looked separate because it was not using the successor function to 'add 1'.

Yeah, that's true. The articles actually use Successor[i], but Hewitt's posts here use i+1. I think this was just an attempt at making the axiom look more self-evident to anyone who isn't looking at the other Successor axioms.

Sure confused me.

Okay, so if I can read that as 'successor of i' there, it makes a whole lot more sense.

Heading to the second page

You wrote this:

successor(z) = a -- Note: equals a
successor(a) = b
successor(b) = a -- Note: equals a 

His last axiom is this:

∀[i,j:ℕ]→ Successor[i]=Successor[j] ⇨ i=j

Since succ z = succ b, we must have b = z. I was assuming that you intended z, a and b to be distinct, so this is a contradiction. If I wasn't allowed to assume that, then note that we have succ a = b = z, contradicting the axiom that zero is not the successor of any natural.
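
Spelled out step by step (my reconstruction of the argument above, writing z for zero):

  1. successor(z) = a and successor(b) = a   (your definitions)
  2. successor(z) = successor(b)             (from 1)
  3. z = b                                   (injectivity axiom)
  4. successor(a) = b = z                    (your definition and 3)
  5. Contradiction: zero is the successor of a, violating ∀[i:ℕ]→ Successor[i]≠0.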

From some of the other things you've said, I don't think you're thinking about this style of axiom the right way. This is a logical presentation of the semantics of symbols zero, succ, and Nat. We don't have definitions of these symbols that we're reasoning from. In Peano arithmetic, the axioms are all we're allowed to know about these symbols.

forall i . i:Nat /\ successor[i]

I have no idea what that might mean. It looks like you're using a natural number as a proposition.

The first three axioms

I see what you are saying, but it still does not make sense to me. Not in the way the standard presentation of the Peano axioms does.

Questions:

- Zero is defined in Nat, but no other objects are declared in Nat.

- Successor is defined as a function from Nat to Nat.

At this point the only object in Nat is zero.

- The successor of any object in Nat is not zero.

At this point successor is not valid: if zero is the only object in Nat, there is no function from Nat to Nat that can return a non-zero value.

I guess there is no point in going further until I understand where I am going wrong.

You assume generative properties whereas you have axioms

At this point the only object in Nat is zero.

This is simply not true. A number of axioms were stated, without generative side remarks, so you know _nothing_ about the cardinality of the set Nat. The only thing you know is that 0 is a member (one axiom), and that applying succ to a member gives you another, possibly the same, member (another axiom).

Other elements

So we can assume that Nat contains all known objects (one of which is zero).

In this case my suggestion that some object 'a' or 'b' is in Nat cannot be refuted. So the successor of a could be b (as b is not the object zero).

So we introduce the fourth axiom, which says that if two objects are the same, their successors are the same. So if the successor of a is b, and the successor of b is a, this cannot be refuted by these axioms.

Yah.

That's what I said. The third and fourth axioms aren't sufficient to exclude 'strange' elements from the set Nat. You either explicitly make clear that it's the smallest fixed point as generated by the two generators, or you state induction (see the sketch below).
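
To make the 'strange' elements concrete, here is a small Haskell sketch (my illustration, not from the thread): a structure satisfying the zero, totality, non-zero-successor, and injectivity axioms, but containing a detached two-element cycle that only induction (or the least-fixed-point reading) rules out.

  import Numeric.Natural (Natural)

  -- The standard chain from zero, plus junk elements A and B
  -- forming a cycle under the successor function.
  data M = Std Natural | A | B deriving (Eq, Show)

  zeroM :: M
  zeroM = Std 0

  -- succM is total and injective, and never returns zeroM,
  -- so the first four axioms hold; yet A and B are not
  -- reachable from zero.
  succM :: M -> M
  succM (Std n) = Std (n + 1)
  succM A       = B
  succM B       = A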

Induction already stated

or you state induction

Hewitt stated an induction axiom above. When Keean Schupke failed to uncover the other four axioms in Hewitt's articles, Hewitt finally posted "Of course, other Peano/Dedekind axioms (beyond the categorical induction axiom) are as follows."

Ah

Oh, I see. It seems I stepped in halfway through the conversation.

Axiomatic semantics

You're taking a more constructive approach, but that's not how the axiomatic approach works. We start with a universe of values that we don't know anything about. The symbols denote elements of this universe.

The only thing we know from looking at the syntax is that Nat is a set of values in the universe, zero is a value in the universe, and succ is a function defined on some part of the universe.

Then we start to learn things about these symbols through the axioms. One of the axioms tells us that the value zero belongs to the set Nat. One of the axioms tells us that zero is not the successor of any element in Nat. etc.

Missing Axioms

In this case there must be some axioms missing because I cannot make Hewitt's axioms add up to natural numbers. See my example above where two objects in Nat can be in a cycle so that succ a is b, and succ b is a. None of those four axioms would reject this.

Edit: actually, I see that if you assume successor is a total function it does work, because if we start with any pair of objects that are the same, their predecessors must be the same all the way back to zero.

See my last post

No, a cycle in succ violates the injectivity of succ (succ n = succ m only if n = m). See my previous post to you.

Responding to your edit: you don't have to assume succ is total. That's one of his axioms - succ: Nat ^ Nat.

Terminating the recursive definition of equality.

Don't we need an axiom for "0 = 0" to terminate the recursion? We know that the successors of two objects being equal implies the two objects are equal. We recurse on the definition of equality, but we need to stop at "0 = 0".

We don't need 0 = 0. That's built into equality.

We haven't discussed the axioms for equality, but you should assume that equal can be substituted for equal.

As Ross noticed, you might have misread the injectivity axiom. It doesn't say that i = j --> succ(i) = succ(j). That direction follows immediately from equality - equal can be substituted for equal. Injectivity is the other way: succ never maps two distinct values x =/= y onto the same value, i.e., succ(x) = succ(y) implies x = y.

Recursion and termination aren't fundamental concepts in this setting. The only tricky axiom is induction. It's the one that rules out 'infinite naturals' [perhaps of the form succ(succ(succ(... forever)))]. In some sense (with a first-order interpretation), it doesn't even rule out these non-standard naturals. But it lets you prove everything you'd want to prove about the Naturals, so that even if there are nonstandard naturals, they don't cause a problem. The other axioms are straightforward.
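
As an aside (my sketch): in a lazy language you can literally write down such a non-standard 'infinite natural', which is exactly the kind of element the induction axiom is meant to exclude.

  data Nat = Zero | Succ Nat

  -- A fixed point of Succ: succ(succ(succ(... forever))).
  -- By definition inf = Succ inf, yet inf is not reachable
  -- from Zero by finitely many applications of Succ.
  inf :: Nat
  inf = Succ inf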

Nat Test

So given an arbitrary object X, how do I test if it is a nat, given only the above axioms?

Category error

In the classical axiomatic approach, you could use the axioms to prove that a certain expression belongs to the set of Naturals. There generally aren't any axioms that would allow you to prove that an expression isn't a Nat. So for example, false could be the same value as zero. You can't prove that it is or isn't. It's syntactically valid to ask whether Nat[false] but probably not very sensible since it won't be addressed by any of the axioms.

Note: I have very little idea how Hewitt's system works. He has some combination of types and sets but I haven't ever seen it documented well. I'm not going to spend any more time trying to decipher it from what he's made available, but I won't tell you what to do.

Equality in the definition.

I am wondering, given we know only that if succ X = succ Y then X = Y, how we use this in the case where we only have one object we wish to test.

To me it looks like these axioms control the generation of the set of Natural numbers. We would then use ordinary equality to check if an object is in the set. This seems very inefficient. There must be a better way?

"One object we want to test"

This phrasing loses me. We don't have objects. We have expressions. Is that what you mean? And what do you mean by "test"? We use the axioms to prove things about expressions. They don't provide an algorithm to test anything.

Types

When we say X:Nat we are saying X is a Nat, and we imply a predicate to prove X is a Nat. The same as the Prolog predicate Nat.

Prolog is a logic plus an

Prolog is a logic plus an algorithm that drives inference. Here, we're just discussing the logic. We have a set of legal inference rules but no algorithm under consideration that drives which should be applied in a given situation. There is an implicit "test" for whether a given expression is a Nat - iterate through all possible derivations until you find one that proves the expression is a Nat - but it's only a semi-decision procedure (it never answers "no"; it just diverges).
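
A toy Haskell rendering of that semi-decision flavour (my sketch, with a hypothetical term type, not an actual proof-search implementation):

  data Term = Z | S Term

  -- Peel successors until we reach Z. On any finite numeral
  -- this answers True; on an infinite term it diverges,
  -- never answering "no" - the behaviour described above.
  isNat :: Term -> Bool
  isNat Z     = True
  isNat (S t) = isNat t

  loop :: Term
  loop = S loop   -- isNat loop never terminates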

Separate logic from Prolog

The discussion was about Peano axioms and logic, so I think it's best to discuss them in *that* context. The Peano axioms that Hewitt mentioned are the standard 5 ones (if you count the induction one, which he talks about in the text before the list).
The fact that first-order logic admits non-standard models is part of a good (university-level) introduction to logic — which doesn't make it trivial.

But first, you should realize that the Peano axioms are (per se) in classical logic, and they are not constructive. If P is a proposition, it doesn't mean you can run a program to test if it's valid. That's especially true once you start talking about infinite objects.

- You need to distinguish between the program that computes equality, and the concept of mathematical equality. For instance, you can talk about the equality of infinite objects like real numbers, even though you can't test equality of reals by comparing all the digits. Similarly, you can't inspect the set of all naturals, which is also infinite. What is true is that `x \in N` is equivalent to `\exists y \in N (x = y)`, but none of this is runnable.

- Similarly, you need to distinguish between a set as an abstract collection of elements for which there's a membership predicate, and anything on a computer. You can't test the membership predicate.

(In fact, you can also define "constructive Peano numbers", and they're also inefficient).
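
To see the inefficiency (my sketch): constructive Peano numerals are unary, so the representation of n takes n constructors, and even addition is linear in its first argument.

  data Nat = Zero | Succ Nat

  -- Unary addition: walks the whole first argument.
  add :: Nat -> Nat -> Nat
  add Zero     m = m
  add (Succ n) m = Succ (add n m)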

Now, how do you know, formally, if X is a number? That's not a well-formed question per se, and it can be formalized in different ways. Unfortunately, I'll have to handwave my way a bit.

If you used type theory, there would not be a set of arbitrary things put together that contains the naturals. If you used standard first-order set theory and picked a model of Peano arithmetic, mapping the set of naturals to a set N*, you can form the proposition x \in N*, and figure out whether you can prove it from your assumptions on x — for an arbitrary x, of course you can't. Moreover, since first-order logic isn't categorical, your N* could be uncountable — essentially because you can't state the full induction axiom and have to approximate it with a first-order schema.

Finally, if you formalize naturals in second-order logic, I think you could have a separate sort of naturals — where "sorts" work intuitively like types. So a variable has a sort, either the sort of naturals or some other one. (First-order logic also has sorts, but in the usual mathematical foundations, there's only the sort of sets, because everything's a set).

Is second order logic sound?

Using second-order logic we can prove things like: every nonempty set of real numbers with an upper bound has a least upper bound. However, is there a problem? We don't have a system for proving the consistency of arithmetic (mathematical proofs rely on arithmetic to prove the consistency of other systems). In the end arithmetic is an appeal to intuition: it is consistent because it appears intuitively consistent, but that's not provable.

On this basis, how can second-order logic prove that such sets of reals have a least upper bound? There could be an inconsistency in arithmetic (as we only intuitively know it's consistent), so are second-order proofs really proofs? If anything, they depend on the assumption that natural-number arithmetic is consistent. Perhaps we are better off with constructive first-order methods?

Translating SOL to FOL.

See: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.2975&rank=1

So this seems to provide a translation into first-order logic of things like the Peano axioms. Other sources cite the Peano axioms as an example of something which cannot be expressed in first-order logic. Obviously the soundness of second-order logic could depend on this result (or at least my understanding of it).

Standard first-order logic

Enough of these questions arise already in usual systems (like first-order logic) and can be discussed in that context, and I think that's what we should do, before studying a new system.

Higher order logic or Type Theory

I think most of my questions can be addressed in higher order logic, or type theory.

Agreed...

... and they get very different answers — see the other reply I just posted. (In type theory, there's usually no universal type, so you can't even pick an arbitrary x).

"it doesn't even rule out these non-standard naturals"

Induction would guarantee the well-foundedness or Noetherianness of the set/universe you're talking about, so this isn't true.

You could prove a membership predicate with induction which would clearly state that each member is 'generated from' a finite number of applications of succ to 0.

The 'smallest fixpoint' and induction statement should imply each other.

Edit: Possibly your post is unclear? It seems you first state induction is sufficient and then not. I guess I don't get your post.

First order interpretation

I have a parenthetical there that mentions first order interpretation. You can interpret second order logic by its first order encoding, in which case induction doesn't get rid of the non-standard models.

Fine

I don't know that manner of interpretation, I stand corrected.

"A cycle in succ violates the injectivity of succ"

It doesn't; this isn't true. Without induction, you could have succ a = b and succ b = a; even a trivial element inf with inf = succ inf would suffice. It just wouldn't be grounded in 0: a 'loose' element apart from the other members, which you would think of as numbers.

Injectivity simply isn't enough. You need the 'smallest set generated by' or induction.

True

Or without the zero axioms, you could have succ a = b and succ b = a. That's injective. I must have meant that it violates injectivity in the presence of the other axioms. :)

Limiting to FOL

I deliberately chose to restrict to first-order arithmetic in my dissertation, because there are so many choices about how to add 'higher-order reflection' to the framework I gave, and I did not want to complicate the thesis, which I wanted to keep attractively simple. Briefly, the ways of getting higher-order that I considered are:

  1. The Church route, by making FOA into a logical framework that can represent itself and other logics, as is done in HOL;
  2. Adding Martin-Loef style predicative universes, as is done in Agda;
  3. Adding System F-like impredicative quantification over propositions, yielding a system a bit like the Calculus of Inductive Constructions;
  4. Adding a type of sentences to the language, the way that Tarski pioneered, together with a T-scheme (the details look quite like Martin-Loef style predicative universes, in fact)
  5. Using one of the many ways of adding modalities to allow reflection, e.g., the way that is done in provability logic

All of these are known ways to move from intuitionistic arithmetic to something recognisably higher-order, and all interfere type-theoretically with the inference rules for classical logic in hairy ways. And all differ in the details.

Also note that the system I describe in my dissertation is richer, although a conservative extension of Heyting/Peano arithmetic, in that it has quantification over higher-type functionals; i.e. it is like HA^omega.

Categorical axiomatization of N is crucial for Computer Science

Categorical axiomatization of N is crucial for Computer Science because nonstandard models are not allowed.
Unfortunately, the methods that you considered for your thesis didn't work out for Computer Science :-(
However, as illustrated above, types do the job nicely.

It's all type theory

All of the above techniques are done in Martin-Loef type theory.

Types have been a work in progress ever since Russell.

Gradually making progress, types have been a work in progress ever since Russell.

See Types in Classical Direct Logic

"crucial for Computer

"crucial for Computer Science" is a statement so vague as to be devoid of meaning. First-order logic is weak, yes, but all you can prove with it is true for standard naturals, and people have been using it. Maybe they were using kludges to work around its limitations, but they managed.

Now, I also don't like first-order logic, and also logicians are unhappy about the choices they have to make, but they've accepted it as a fact of life, because of various tradeoffs.

In particular, because of Gödel's theorems, the proof theory of second order logic is not complete. You claimed that Gödel's proof didn't apply; on that, see here: http://lambda-the-ultimate.org/node/4784?from=200&comments_per_page=200#comment-86181. So I'm not gonna buy that.

For some reason, which I don't quite get, mainstream logicians object to having an incomplete proof theory — that is, in second order logic there are sentences that are true in all models, but can't be proven. If you like different tradeoffs, you ought to acknowledge the downsides and explain why they don't matter.

First order logic is fine

The difference between first and second order logic is just in the way that we interpret the symbols and quantifiers. I usually don't respond to his claims about the necessity of second order logic, but ZFC, a first-order one-sorted theory, is the de facto standard basis for mathematics and incidentally can prove the consistency of the second-order version of arithmetic. So the claim that it's unsuitable for "computer science" seems questionable at best. When you're using a formal system to prove things in a machine-checked way, why do you care about the possibility of non-standard models?

But shame on you for resurrecting this thread. :) [Edit: Oh wait, that was your other post -- never mind]

Infinite integers in models of FOL axioms are highly undesirable

Infinite integers in models of FOL axioms are highly undesirable in CS, e.g., model checking.

Also, important properties of concurrent systems cannot be proved in first-order logic, because first-order theories are compact, which is contrary to the unbounded nondeterminism of the Actor Model.

Limitations of first-order logic

A Computer Science challenge is to usefully formalize the induction axiom:

(P[0] ∧ ∀[i:ℕ]→ P[i]⇨P[i+1])⇨ ∀[i:ℕ]→ P[i]

First-order logic has a complicated kludge involving schemas :-(

However, types provide an elegant solution:

∀[P:Boolean]→(P[0] ∧ ∀[i:ℕ]→ P[i]⇨P[i+1])⇨ ∀[i:ℕ]→ P[i]
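
As a loose programming analogy (my sketch, not Hewitt's system): under the Curry-Howard reading, the non-dependent shadow of this typed induction axiom is the fold over Peano naturals, where the rank-2 'forall p.' plays the role of the quantifier over all predicates.

  {-# LANGUAGE RankNTypes #-}

  data Nat = Zero | Succ Nat

  -- Given a base case (P[0]) and a step (P[i] implies P[i+1]),
  -- produce a result for every natural.
  foldNat :: forall p. p -> (p -> p) -> Nat -> p
  foldNat base _    Zero     = base
  foldNat base step (Succ n) = step (foldNat base step n)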

SOA can formalise induction

Second-order arithmetic (SOA) can also formalise that induction axiom, and SOA is a two-sorted first-order theory with Henkin semantics.

How does second-order logic formalize Peano/Dedekind induction?

How does second-order logic formalize Peano/Dedekind induction?

Quantification over propositions

SOA uses the language of SOL but it is not SOL, since it has the Henkin semantics. It has quantification over properties of natural numbers, which allows induction to be formalised as you stated it.

Quantifying over all uncountably many predicates using types

Types provide an elegant solution to quantifying over all uncountably many predicates:

∀[P:Boolean]→(P[0] ∧ ∀[i:ℕ]→ P[i]⇨P[i+1])⇨ ∀[i:ℕ]→ P[i]

Other TLA logics have adopted obscure kludges that are evidently too embarrassing to state explicitly ;-)

Bottom (⊥) doesn't mean divergence here

Bottom (⊥) doesn't mean divergence here. It just means the least element of a partially ordered set (which, incidentally, is also how divergence came to be named ⊥ -- as the least element in a domain theory model). In this context, the ordered set is the set of propositions, of which false is considered the least. In short, bottom here just means false.

⊥: arbitrary diverging computation or arbitrary contradiction

⊥ has two distinct uses:

computation: an arbitrary diverging computation 

logic: an arbitrary contradiction

There is no ordering on propositions in Inconsistency Robust logic, in which ⊥ makes no sense.

ActorScript the Ultimate

OK, so you did mean to deny that false is a useful proposition. Then Charles' response to you was more on point. I should have guessed that the context had implicitly switched to inconsistency robust logic.

What happens if you define "false" in your inconsistency robust logic as forall[Φ:Prop]. Φ? Does this not produce an embedding of intuitionistic logic?

⊥ is nonsense in Inconsistency Robust logic

⊥ is nonsense in Inconsistency Robust logic.

The following proposition (which is useless) says that every proposition holds:

∀[Φ:Proposition]→ Φ

The above proposition is not equivalent to the following proposition:

MadeOfCheese[Moon] ∧ ¬MadeOfCheese[Moon]

⊥ was introduced to designate an arbitrary contradiction; not to denote an explosion.

Intuitionistic logic is not inconsistency robust :-(

Well, you've used your

Well, you've used your version of 'not' there. I meant you define another function 'not' on predicates by:

false = ∀[Φ:Prop]→ Φ
not P = P ⇨ false

Does that give an embedding of intuitionistic logic in your system?
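
For comparison (my sketch of the standard Curry-Howard encoding in Haskell, not a claim about Direct Logic), this is the usual way the same two definitions are rendered:

  {-# LANGUAGE EmptyCase #-}

  -- 'false' as an uninhabited type; 'not P' as P implying false.
  data Void

  type Not p = p -> Void

  -- Ex falso quodlibet: a proof of Void yields anything.
  absurd :: Void -> a
  absurd v = case v of {}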

"False" is not a proper proposition; it doesn't say anything

False is not a proper proposition because it doesn't say anything.

Furthermore, False is meaningless in inconsistency robust logic.

Also the following function is obscure:

Not[p:Proposition]:Proposition ≡ (p ⇨ ∀[Φ:Prop]→ Φ)

Why not just write the following:

Not[p:Proposition]:Proposition ≡ ¬p

Again, intuitionistic logic is not inconsistency robust.

Why not just write the

Why not just write the following: [...]

Because I was asking about embedding intuitionistic logic. Consider:

P ∧ Not[P] ⇨ False

This would seem to hold in your logic under the definitions of Not and False given above. Wouldn't it?

"False" is nonsense in inconsistency robust logic

The following does not hold in Inconsistency Robust Direct Logic:

(MadeOfCheese[Moon] ∧ ¬MadeOfCheese[Moon])⇨ ∀[Φ:Proposition]→ Φ

In fact, even the following much weaker proposition does not hold:

(MadeOfCheese[Moon] ∧ ¬MadeOfCheese[Moon])⊢ ∀[Φ:Prop]→ Φ

Of course, that the above does not hold is absolutely necessary for inconsistency robustness.

Last try


Absurd ≡ ∀[Φ:Prop]→ Φ
Not[p:Proposition]:Proposition ≡ (p ⇨ Absurd)

MadeOfCheese[Moon] ∧ Not[MadeOfCheese[Moon]] ⊢ Absurd

True that?

[Edit] and moreover:

MadeOfCheese[Moon] ∧ Not[MadeOfCheese[Moon]] ⊢ Ψ

Inconsistency robustness: counter to traditional logics

Inconsistency robustness goes counter to traditional logics, e.g., classical and intuitionistic.

What you have above is just a verbose way of expressing the following in Inconsistency Robust Direct Logic:

(MadeOfCheese[Moon] ∧ ¬MadeOfCheese[Moon])⊢ ∀[Φ:Proposition]→ Φ

Of course, the above proposition does not hold because it goes absolutely counter to inconsistency robustness.

Not ¬. Who's there?


Foo ≡ ∀[Φ:Prop]→ Φ
Bar[p:Proposition]:Proposition ≡ (p ⇨ Foo)

MadeOfCheese[Moon] ∧ Bar[MadeOfCheese[Moon]] ⊢ Foo

Is this version derivable?

Implication: *very* strong in Inconsistency Robust Direct Logic

In Inconsistency Robust Direct Logic, the following (from above)

Bar[MadeOfCheese[Moon]]

is just a verbose way of expressing ¬MadeOfCheese[Moon] because implication is very strong.

Modus Ponens

I thought you supported modus ponens. Something like this:

P ∧ (P ⇨ Q) ⊢ Q

No? It's hard for me to see implication as useful without modus ponens.

Modus Ponens holds in Inconsistency Robust Direct Logic

Modus Ponens holds in Inconsistency Robust Direct Logic in the following more general form:

Φ,(Φ⊢Ψ)⊢Ψ

Consequently,

Φ,(Φ⇨Ψ)⊢Ψ

Why can't I apply Modus Ponens, then?


Foo ≡ ∀[Φ:Prop]→ Φ
Bar[p:Proposition]:Proposition ≡ (p ⇨ Foo)

M ≡ MadeOfCheese[Moon]

M ∧ Bar[M] = M ∧ (M ⇨ Foo)  -- By definition of Bar
            ⊢ Foo             -- By Modus Ponens

Inconsistency Robust logic supports Garbage-in Garbage-out

Inconsistency Robust Direct Logic supports garbage-in garbage-out.
The simplest form is as follows:

Garbage⊢Garbage

If you put in really bad garbage like the following, then you will have toasted your theory:

∀[Φ:Proposition]→ Φ

A more indirect form is as follows:

Φ,(Φ⇨Garbage)⊢Garbage

Of course, to get Garbage out from the above it is necessary to put in both of the following:

Φ
Φ⇨Garbage

Garbage as absurdity

My point was just that you can encode intuitionistic logic in your system. People can put False and Not as defined above into a library, encode their propositions using those instead of your built-in false and ¬, and then work with an intuitionistic fragment of your system. This encoding isn't inconsistency robust, because it supports:

Φ, Not[Φ] ⊢ Ψ

It sounds like you think this is a bad idea, characterizing False above as garbage, but that's the idea of absurdity: it's a proposition you can't ever prove.

Intuitionistic logic is ***not*** inconsistency robust

Since intuitionistic logic is not inconsistency robust, it is not a fragment of Inconsistency Robust Direct Logic.

You cannot subvert users of Inconsistency Robust Direct Logic by posting some garbage propositions in libraries that they decline to use.

In addition to other reasons that intuitionistic logic is not a suitable foundation for Computer Science, it must be avoided in order to have inconsistency robustness.

Unfortunately, Lycke's logic: not a suitable foundation for CS

Unfortunately, Lycke's logic (linked above by Keean Schupke) is not a suitable foundation for Computer Science.
For example, the following evidently does not hold:

Φ⇨Ψ, Φ⇨¬Ψ ⊢ ¬Φ

Also of course, Lycke's logic is not strictly speaking intuitionistic logic :-(

Various explosions

I like the idea of avoiding both of the following kinds of explosion:

Φ, ¬Φ ⊢ Ψ (classical logic)
Φ, ¬Φ ⊢ ¬Ψ (intuitionistic logic)

Does Inconsistency Robust Direct Logic avoid the following kind of explosion?

(∀[Φ:Proposition]→ Φ) ⊢ Ψ

It seems to me that we might want this to be limited to implication instead, somehow:

⊢ (∀[Φ:Proposition]→ Φ)⇒Ψ

Warning about potentially toxic propositions

Thanks!

Unfortunately, the following proposition is extremely toxic:

∀[Φ:Proposition]→ Φ

It would be interesting to see what can be done about warnings for potentially toxic propositions.

Use your post above as a warning

If you figure out how to detect encodings of intuitionistic logic (sounds undecidable), I think you should use your text above.

c:\projects> directlogic myproof.dl
myproof.dl (7): Intuitionistic logic is ***not*** inconsistency robust. Since intuitionistic logic is not inconsistency robust, it is not a fragment of Inconsistency Robust Direct Logic. You cannot subvert users of Inconsistency Robust Direct Logic by posting some garbage propositions in libraries that they decline to use. In addition to other reasons that intuitionistic logic is not a suitable foundation for Computer Science, it must be avoided in order to have inconsistency robustness.

Upping the ferrous content ;-)

Upping the ferrous content ;-)

Heyting Algebra + Exponent.

That's not an axiom in Heyting algebra, and intuitionistic logic is a Heyting algebra with an exponent I think.

Edit: I have realised what is going on, there are two triples:

disjunction-introduction + disjunctive syllogism + transitivity
reductio-ad-absurdum + rule-of-weakening + double-negation-elimination

that lead to the principle of explosion.
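
For reference, here is the classic derivation of explosion from the first triple (my reconstruction of C. I. Lewis's argument, not from this thread):

  1. Φ      (premise)
  2. ¬Φ     (premise)
  3. Φ∨Ψ    (disjunction introduction, from 1)
  4. Ψ      (disjunctive syllogism, from 2 and 3)

Transitivity chains the steps into Φ, ¬Φ ⊢ Ψ; blocking any one of the three rules blocks the argument, which is why relevance and other paraconsistent logics typically drop disjunctive syllogism.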

Inconsistency Robust Direct Logic has a proof by contradiction

Inconsistency Robust Direct Logic has a form of proof by contradiction as follows:

(Φ⇒(Ψ∧¬Ψ))⊢¬Φ

Apparent contradiction with paper.

Quoting from "Formalizing common sense reasoning for scalable inconsistency-robust information coordination using Direct Logic™ Reasoning and the Actor Model":

Inconsistency Robust Direct Logic is a minimal fix to Classical Logic without the rule of Classical Proof by Contradiction
(Ψ⊢(Φ∧¬Φ))⊢ ¬Ψ

This seems to contradict the above?

IRDL has IR Proof by Contradiction; not the Classical version

Inconsistency Robust Direct Logic has Inconsistency Robust Proof by Contradiction instead of Classical Proof by Contradiction.

Consequently, the above quotation from the article is correct. (Please note the subtle difference between inference and implication.)

PS. The article can be found here:
Formalizing common sense reasoning