Mathematics self-proves its own Consistency (contra Gödel et al.)

The issue of consistency has come up in many LtU posts.

Some readers might be interested in the following abstract from Mathematics self-proves its own Consistency:

                   Mathematics self-proves its own Consistency
                          (contra Gödel et al.)

                             Carl Hewitt
                         http://carlhewitt.info

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.
In addition, Proof by Contradiction can be used to infer the consistency of mathematics 
by the following simple proof:
   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
      Then there is some proposition Ψ such that ⊢Ψ and ⊢¬Ψ.
     Consequently, both Ψ and ¬Ψ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is consistent.
The above theorem means that the assumption of consistency
is deeply embedded in the structure of classical mathematics.
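The uncontested core of the argument can be written out in a proof assistant (a sketch only: it formalizes the step from Ψ and ¬Ψ to falsity, not the disputed meta-level move from that refutation to "mathematics is consistent"):

```lean
-- From a proposition and its negation, falsity follows (ex contradictione).
-- This is the inference the self-proof appeals to; the contested step is
-- reading the resulting refutation as a consistency proof for the system.
example (Ψ : Prop) (h : Ψ) (hn : ¬Ψ) : False := hn h
```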

The above proof of consistency is carried out in Direct Logic [Hewitt 2010]
(a powerful inference system in which theories can reason about their own inferences).
Having a powerful system like Direct Logic is important in computer science
because computers must be able to carry out all inferences
(including inferences about their own inference processes)
without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent
because of contradiction with the result of Gödel et al. that
if mathematics is consistent, then it cannot infer its own consistency. 
The resolution is not to allow the self-referential propositions
that are used in the proof by Gödel et al.
that mathematics cannot self-prove its own consistency.
This restriction can be achieved by using type theory
in the rules for propositions
so that self-referential propositions cannot be constructed
because fixed points do not exist. 
Fortunately, self-referential propositions
do not seem to have any important practical applications.
(There is a very weak theory called Provability Logic
that has been used for self-referential propositions coded as integers,
but it is not strong enough for the purposes of computer science.) 
It is important to note that disallowing self-referential propositions
does not place restrictions on recursion in computation,
e.g., the Actor Model, untyped lambda calculus, etc.

The self-proof of consistency in this paper
does not intuitively increase our confidence in the consistency of mathematics.
In order to have an intuitively satisfactory proof of consistency,
it may be necessary to use Inconsistency Robust Logic
(which rules out the use of proof by contradiction, excluded middle, etc.).
Existing proofs of consistency of substantial fragments of mathematics
(e.g. [Gentzen 1936], etc.) are not Inconsistency Robust.

A mathematical theory is an extension of mathematics
whose proofs are computationally enumerable.
For example, group theory is obtained
by adding the axioms of groups to Direct Logic.
If a mathematical theory T is consistent,
then it is inferentially undecidable,
i.e. there is a proposition Ψ such that
¬⊢TΨ and ¬⊢T¬Ψ,
(which is sometimes called incompleteness)
because provability in T
is computationally undecidable [Church 1936, Turing 1936].

Information Invariance is a
fundamental technical goal of logic consisting of the following:
  1. Soundness of inference: information is not increased by inference
  2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance
even when information is inconsistent
using inconsistency robust inference.
But that mathematics is inferentially undecidable (incomplete)
with respect to ¬ above
does not mean incompleteness
with respect to the information that can be inferred
because it is provable in mathematics that ¬⊢TΨ and ¬⊢T¬Ψ.

The full paper is published at the following location:
Mathematics self-proves its own Consistency


No fixed points in the typed lambda calculus

There are no fixed points in the typed lambda calculus and consequently they cannot be used to construct self-referential sentences a la Gödel.
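For contrast, in an untyped language a fixed-point combinator is definable precisely because self-application x(x) is allowed; that application is what simple types reject, since x would need a type a with a = a → b. A minimal untyped illustration (Python, using the strict-evaluation Z combinator rather than the lazy Y):

```python
# Z combinator: a call-by-value fixed-point operator. Its core is the
# self-application x(x), which is exactly what simple types forbid:
# x would need a type a satisfying a = a -> b, and no such type exists.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion recovered without any explicit self-reference in the definition:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

Here fact(5) evaluates to 120 with no named recursion anywhere; a simply-typed calculus rejects the x(x) inside Z, which is the sense in which it "has no fixed points".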

applying Goedel's 2nd to derive a contradiction in DL

Below, I have tried to sketch how to apply Goedel's 2nd incompleteness theorem to derive a contradiction in DL.

I don't understand how the definitions given evade Goedel's 2nd theorem.

Which step in the following is wrong, and why:

I think it is possible in DL to describe a Goedel numbering of sentences. So I'd like to stipulate that we have:

Encode : Sentence ⟶ ℕ

such that Encode[s] gives a Goedel number for sentence s

Decode :  ℕ ⟶ Sentence

such that

(∀ s : Sentence) s = Decode[Encode[s]]

I believe there is no problem defining a model of provability in DL such that:

Provable : ℕ ⟶ {0, 1}

such that Provable[Encode[s]] is 1 iff s is provable.

Provable is, of course, a partial function.

I'm less sure but see no restriction preventing us from proving, in DL:

Theorem(?):

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉) ⇒ ⌈ s ⌉

If I understand the definitions correctly, I can define a schema for Goedel sentences this way:

GS ≡ « λ [X : ℕ] ¬ (Provable[X] = 1) »

meaning that if X is a natural number then

GS[X] ≡ « ¬ (Provable[X] = 1) »

so I can trivially define:

GSN : ℕ ⟶ ℕ

GSN[x] ≡ Encode [GS [x]]

I can compute a least fixed point for GSN. I don't think that DL's restrictions on syntax prevent this step but I could be misunderstanding something:

Gn ≡ LeastFixedPoint [GSN]

Now I have a full fledged Goedel sentence, G:

G ≡ Decode[Gn]

At a meta level we know with certainty that either DL can not prove G or else DL is inconsistent. DL doesn't claim to evade Goedel's first theorem. The issue for Goedel's 2nd is can we now use DL's theorem of self consistency, with G, to derive a contradiction within DL.

I'm not sure but I don't see why we can't also prove in DL:

Theorem(?):

⊢ ⌈ G ⌉ ⇒ ⌈ ¬ G ⌉

Using the theorem of consistency we then trivially have a theorem:

⊬ ⌈ ¬ G ⌉

and therefore:

⊬ (Provable [Encode [G]] = 1)

which is a proof of G and thus a contradiction.

Which step here doesn't actually work in DL and why? Can someone please walk me through it?

Are you sure DL allows

Are you sure DL allows pattern matching on Sentences as would be needed to write 'Encode' in step 1?

Are you sure DL allows

Are you sure DL allows pattern matching on Sentences as would be needed to write 'Encode' in step 1?

We do not need "pattern matching" to define Encode. Defining equality for the enumerable set of sentences is enough to define Encode.

To encode a sentence, S, start enumerating sentences from shortest to longer. Count them as you go. When you get to S' which is equal to S, the current count is the encoding.
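That procedure is easy to sketch (a toy illustration, not DL: the alphabet is made up, and every string is counted as a "sentence"; a real Encode would filter through a parser, which doesn't change the idea):

```python
from itertools import count, product

ALPHABET = "()¬⊢Ψab"  # hypothetical symbol set, for illustration only

def sentences():
    """Enumerate all strings over ALPHABET in shortlex order (shortest first)."""
    for n in count(0):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def encode(s):
    """Goedel number of s: its position in the enumeration.

    Total on strings over ALPHABET, since every such string appears."""
    for i, candidate in enumerate(sentences()):
        if candidate == s:
            return i

def decode(n):
    """Inverse of encode: the n-th string in the enumeration."""
    for i, candidate in enumerate(sentences()):
        if i == n:
            return candidate
```

decode(encode(s)) == s for every string s over the alphabet, which is the round-trip property stipulated above.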

You're right

By "pattern matching" I just meant some way to inspect them, but you're right that equality is good enough and he does seem to give that.

You hit trouble here, though

I believe that, while your construction of Encode by enumerating will work, he doesn't have an axiom that will let you prove that it works. So you can't prove Encode is total, as needed for this step:

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉) ⇒ ⌈ s ⌉

That troubling step

That troubling step is sure an interesting one and I think that if DL works as advertised then there has to be something that prevents that step from being valid.

What I don't see, though, is anything that prevents that step from being valid. I've stared at the definitions footnoted by footnote 28 and I guess I just don't understand what they mean well enough to see this (or else Hewitt has made an error).

Let's hope we hear from him or someone else who better understands DL.

Footnote 28

If I understand correctly, the scheme around that footnote just imposes a type discipline on parameterized sentences. So this syntax:

If f is a [σ]↦Sentence and t is a σ then f[t] is a Sentence

Just means this:

If  f: a -> Sentence  and t: a, then  f t: Sentence

And is just meant to rule out "monsters" like this:

f = \x. ~(x x)
Monster = f f = ~(f f) = ~Monster

Which is fine and good but does nothing to block Gödel's theorems.

Trivial?

Pardon possible ignorance on my part here, but why is this not a trivial theorem in light of DL's "Principle of Integrity" : (⊢ P) ⇒ P ?

I read that last implication

I read that last implication informally as "which implies" rather than as a trivial implication proposition.

Still don't get it

Given that "Integrity" appears as a valid deduction step on page 22 of the paper in the OP, surely you don't mean by "that last implication" the one in the principle of integrity?

And if you're referring to the last implication in the theorem to be proved in DL ... how else would one formalize, in DL, the informal "which implies"?

Sorry if I'm missing something obvious.

I read it like this

Tom wrote:

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉) ⇒ ⌈ s ⌉

I read the last arrow as "which implies" or, in other words, as this:

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉), 
and thus (by Integrity),
(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ ⌈ s ⌉

Thanks!

That's crystal clear now. Thanks!

Typed lambda calculus does not have fixed points.

The typed lambda calculus does not have fixed points. Self-referential sentences cannot be constructed because:
* The primitive sentences of mathematics are not self-referential
* Sentence constructors (logical connectives) do not construct self-referential sentences out of sentences that are not self-referential.
* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

Which of Tom's steps does

Which of Tom's steps does this comment address? Responding to serious inquiry with talking points may work well for presidential candidates, but doesn't seem to me to be good form for a scientist.

ad hominems

Responding to serious inquiry with talking points may work well for presidential candidates, but doesn't seem to me to be good form for a scientist.

Why don't you go ahead and ask him if he's stopped beating his dog, while you're at it?

Because I haven't seen him

Because I haven't seen him beat his dog.

re: Typed lambda calculus does not have fixed points.

* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

Does that mean that in DL this is nonsense:

Decode :  ℕ ⟶ Sentence

such that [....]

because natural numbers are not sentences, and therefore no such functional is permitted by the typing constraints?

And this is nonsense:

GS ≡ « λ [X : ℕ] ¬ (Provable[X] = 1) »

meaning that if X is a natural number then

GS[X] ≡ « ¬ (Provable[X] = 1) »

because, again, GS here has a forbidden type?

Those restrictions certainly throw a monkey wrench in Goedel's way of building up G within DL itself.

slowly over marblehead

And these draconian restrictions on sentence formation are justified precisely because it is when we conflate the meta level of syntax with the subject of the theory itself that we get into these strange word games like Goedel sentences. And a kind of meta-claim here is that these draconian restrictions on syntax don't actually get in the way of proving anything useful.

Is there a clean, concise, and rigorous document that simply lays out DL as a formal system, in carefully explained baby steps, comparable to the first chapter of books about other formal systems?

Something like the stuff on pages 9 and 10 of the paper which is the topic, but more fleshed out and standing on its own?

Good luck

Tom,

I realize that you think that DL is based on deep understanding of Godel, but I suspect that it is based on deep misunderstanding.

Carefully read page 6 of his document. Hewitt seems to believe that Godel used integers to encode statements in such a way that they were overtly self-referential. In other words, he seems to think Godel used an integer encoding to write down a proposition like this:

P = (P is not provable).

In other words, the right hand side explicitly references P rather than "godel number of P". And the attempt to bar this scheme thus focuses on preventing fixpoints in sentence construction. Again I cite page 6 as evidence of this. At the top of the page he states that this mechanism will get him past the 2nd incompleteness theorem. Then at the bottom you can see specifically the examples that the typing scheme rules out (Curry's paradox and Lob's paradox). I see no acknowledgment that Hewitt understands that this still leaves him vulnerable to Godel's technique.

Also, I think you're reading the type scheme wrong. I think it's as I suggested above. You can construct a function N -> Sentences just fine. The rules aren't very clear, though, from this document, so I could be wrong. For example, there doesn't seem to be a rule that says that an identifier is a term, but there is one that says a name is a term. Is that the same thing?

Whether there is anything deep going on here is a separate question to the one of whether DL actually turns out to be consistent. I don't think your proof goes through as is, but there may be ways to fix it (e.g. do the quantification on the arithmetic side where you know that every sentence has the correct form). But I'm sufficiently convinced that there's nothing deep going on here that I don't particularly care whether DL turns out to be consistent.

As you may have inferred from the tone of some of my comments, Hewitt's approach of making over the top claims, belittling the work of Godel and others, and then responding to inquiry with canned slightly relevant talking points rubs me the wrong way and I have difficulty not responding. Maybe it's not my place to complain about it, but oh well... too late :). I'm going to pipe these Hewitt threads to /dev/null for a while. Enjoy the quiet.

P.S. I would add "peer reviewed" to the front of your wish list for DL documentation.

the type restriction

You can construct a function N -> Sentences just fine.

Not internally to DL itself, no you can't. The type restriction is the only thing standing in your way and is in that sense draconian. It's justified because it prevents Goedel's 2nd theorem construction (and similar constructions) from going through yet does not in any way seem to impede doing everything useful you'd actually want to do.

The construction in Goedel's 2nd collapses meta-reasoning about a formal system with reflective reasoning within the system itself. Hewitt's syntactic restriction on sentences puts a very sharp limit on how far that kind of self-absorbed nonsense can go.

The paper is a really frustrating presentation but the idea it conveys is as plain as the face on which your nose resides.

Wrapping up

Could you explain how you're reading the rules? Here are the ones I find relevant:

1) Boolean, Term, Sentence, List, and Set are a Type.
2) If σ1, σ2  are a Type, then Lam[σ1]->σ2 and σ1 X σ2 are a Type
3) If s is a Sentence, σ is a Type, and i is an identifier, then 
    ≪lam[i : σ]→  s≫ is a Lam[σ]↦Sentence and i is a term.
4) If f is a Lam[σ]->Sentence and t is a σ then f[t] is a Sentence

So 1) says that these primitive types are types and 2) says that function and product types are types.

I read 3) as attempting to say that if s is a sentence with free variable i that ranges over a type, sigma, then we can form a lambda term and it has function type. The phrasing isn't good, but I think that's what it means.

And 4) just says that if you have such a typed function, you can apply it and get a sentence.

Are you thinking that the fact that Nat isn't in the list of primitives is somehow key? Even though we can form functions from Set and List? In isolation, I could entertain some other such interpretation, but look at the quotes like this everywhere:

[28]Consequently, self-referential sentences cannot be constructed because fixed points do not exist for sentences because of typing constraints.

What do you think that means? It seems clear to me that his goal is to allow functions with types like N -> Sentence without allowing general fixpoints in the creation of Sentences.

re: wrapping up

I don't think that Hewitt's write-up for the formal system is rigorous.

His recent comment to me clarifies what is meant (emphasis added):

* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

This makes it essentially impossible for DL to reason internally about the queer features of Goedel sentence G.

That comment makes perfect

That comment makes perfect sense to me as referring to blocking self-reference. See page 6, Curry's paradox. The type system blocks Curry from being a statement because you can't prove that Fix(Diagonalize) = Diagonalize[Fix(Diagonalize)] is a proper invocation until you've first proven Fix(Diagonalize) to be a Sentence.

The quote doesn't mean functions from e.g. Boolean aren't allowed. Clearly the type system is set up to explicitly allow functions from non-Sentence domains. He just means that if the function takes a Sentence argument, you can't pass in a garbage expression and then prove it to be a Sentence a posteriori. This rules out overt self-reference.

Matt, I think you have got it!

Matt, I think you have got it!

Cheers,
Carl

Sorry for the duplicate post

Sorry for the duplicate post. It looks like LtU inserted a page break in the middle of this post confusing both me and my browsers :-(

Reading this thread again,

Reading this thread again, because of a new thread on Hewitt.

LtU didn't figure that out with certainty AFAICS, but the restrictions might prevent Gödel's proof from going through.

However, this point is critical:

And a kind of meta-claim here is that these draconian restrictions on syntax don't actually get in the way of proving anything useful.

The beauty of Gödel's work is that he prevents this objection, because all his machinery is standard arithmetic. Intuitively, proof-checking is so simple you can encode it as a "very simple" program (technically, a primitive recursive one), which always terminates. Since the input, the program and the output are all syntactic things — that is, you could encode them in a computer, and then read off the memory the bit patterns as huge numbers — standard arithmetic can be used to talk about this program. In fact, Gödel even used more standard arithmetic (prime numbers) for his encoding, instead of computer memory, which makes this feel even more like "standard mathematics".
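Gödel's prime-exponent coding is simple enough to sketch directly (a toy version; symbol codes must be ≥ 1 so the factorization recovers the full sequence):

```python
def primes():
    """Generate 2, 3, 5, 7, ... by trial division."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(codes):
    """Encode a sequence of symbol codes (each >= 1) as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for p, c in zip(primes(), codes):
        n *= p ** c
    return n

def godel_decode(n):
    """Recover the code sequence by reading off prime exponents."""
    codes = []
    for p in primes():
        if n == 1:
            break
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        codes.append(c)
    return codes
```

Because both directions are plain arithmetic, statements about statements become statements about numbers, which is the whole trick.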

This works so well that even common type theories (such as Coq) allow encoding Gödel's proof, even though they use types, just as Gödel himself did.

Gödel's "self-referential" propositions are harmful for CS

Gödel's "self-referential" propositions are harmful for CS.

Gödel exploited a hole in his notation for mathematics to use fixed points to create the "self-referential" sentence "I am not provable."

However, using types, such fixed points do not exist. Consequently, Gödel's "self-referential" proposition does not exist.

There are no practical uses for "self-referential" propositions in Computer Science. Also, allowing such "self-referential" propositions leads to mathematical inconsistencies.

See Gödel's "self-referential" propositions are harmful for Computer Science

There are no practical uses

There are no practical uses for "self-referential" propositions in Computer Science. Also, allowing such "self-referential" propositions leads to mathematical inconsistencies.

No, it doesn't. Self-referential propositions give rise to incompleteness due to undecidability.

Practicality of self-referential propositions?

Mathematics is consistent if only because mathematicians will keep it that way. So Hewitt's favorite foundation, Classical Direct Logic, reflects social reality by having a theorem stating its own consistency.

By Gödel's second incompleteness theorem (though I'm not sure I really know the details), it would be tricky to add self-referential propositions to a system that already has a self-proof of consistency, at least without introducing actual inconsistency. And what would we gain? Hewitt claims there are no practical uses for self-referential propositions.

Self-referential propositions

As far as I can tell, Hewitt's approach to avoiding Godel's incompleteness theorem through types is based on a misunderstanding of how it works. He seems to think that Godel employed some kind of a loophole to construct a sentence that directly expresses "this statement is not provable." What Godel actually did almost certainly goes through just fine in Direct Logic, except we could never obtain a formal definition of Direct Logic to provide the details.

Preventing self-referential propositions is usually required to prevent inconsistency ("this statement is false"), not incompleteness. Incompleteness you can't avoid, by Godel.

Gödel's "I am not provable." used a loophole he created

Gödel's "I am not provable." used a loophole he created in his notation for mathematics that mistakenly allowed fixed points.
Such fixed points do not exist using types.

Not so

Godel's result would not be celebrated if it relied on overtly self-referential propositions that could be blocked with a type system. He observed that statements about arithmetic (none of them self referential) can themselves be encoded in arithmetic, as can proofs of statements. He then constructed a proposition of the form "Proposition #N is not provable" where N happens to be the very numeral of that proposition. You can't block this proposition with a type because it's well typed. The fixed point construction used to find the appropriate N just involves the standard rules of arithmetic and logic. You can block this construction but only by blocking the ability to do basic reasoning about numbers.
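The indirect fixed-point trick has a familiar programming analogue, the quine: apply a template to a quotation of itself, and the result describes (here, reconstructs) itself without any overtly self-referential syntax. A Python sketch of the analogy:

```python
# Diagonal-lemma analogy: the "sentence" is a program text obtained by
# substituting the template's own quotation into itself. Nothing in the
# syntax points at itself, yet the result rebuilds itself exactly,
# just as Goedel's sentence applies a formula to its own number.
template = 'template = {t!r}\nsource = template.format(t=template)'
source = template.format(t=template)

# Running `source` in a fresh namespace reproduces `source` verbatim.
namespace = {}
exec(source, namespace)
self_reproducing = namespace["source"] == source
```

No type discipline is violated anywhere here: every expression is well-formed before the substitution, which is why typing the syntax does not block the construction.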

inside and outside

With pen and paper, in principle, someone can construct Goedel sentences (aka compute certain fixed points) even in whatever Hewitt's system is. I am pretty sure that Hewitt is saying that, in his system, you can not deduce from such an actual self-referential sentence that it is self-referential. I.e.: Here is a Goedel sentence, X, but in the system you can't actually prove that X is a Goedel sentence. Because "types".

Whether he is correct about that or not I'm not sure because I've never seen any attempt to present the system in a lucid, rigorous way.

Why it might be important or interesting is a mystery to me.

Maybe

I know you've attributed some such position to Hewitt in several places in this thread, but I haven't ever seen him stake out such a position in a clear way. Everything I've read from him is consistent with him just misunderstanding the theorem. Feel free to point to a counterexample. I agree with your last point that even if you're right about what he has in mind, the importance needs to be explained.

re maybe

My guess as to Hewitt's intent is condensed around this old comment, I think:

comment from a couple years ago

The relevant quote from Hewitt:

* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

The problem types solve

Let

X : Sentence -> Sentence 
X = λx. (x x) is not provable

Then,

X X = (X X) is not provable.

would give a Godel sentence. This construction is not permissible, though. Why? "Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence."

This is what I think Hewitt means, but as I know you know, this misses the role of Godel numbering in making do without this kind of overt self-reference.

Afaics the result doesn't

Afaics the result doesn't need the system itself to be able to determine which sentence is a Gödel sentence; at heart it's a diagonalization, the beauty of which is that you can be sure the diagonal has an element in every row even if you don't know which row you're interested in. (I've been exploring this stuff lately, to get my thinking straight.)

Rosser's version, too

I agree that a simple way to approach Godel's theorem is by first noting that Arithmetic is strong enough to encode the question of whether a Turing machine halts. Then things become simple. Here's a nice blog post that shows an extension of this idea to Rosser's incompleteness theorem.

Cool

Thanks. :-)

Enumerating strings does not create Godel's "self-ref" sentence

Simply enumerating strings does not enable the creation of Godel's "self-referential" sentence.

From page 38-39 of Classical Direct Logic, there is the following explanation:

In Direct Logic, fixed points on propositions do not exist and consequently Gödel's proofs are not valid. Even in the closed theory ℕ, a “self-referential” sentence cannot be constructed by enumerating strings for sentences. Let SentenceFromStringWithIndex be a procedure that enumerates sentences that can be produced by parsing mathematical strings of the closed theory ℕ and IndexOfSentenceFromString be a procedure that returns the index of a sentence that can be obtained by parsing a mathematical string.

I_am_not_provable:Proposition ≡ ⌊SentenceFromStringWithIndex.[Fix[Diagonal]]⌋
where Diagonal.[i:ℕ]:ℕ ≡ IndexOfSentenceFromString.[⌈⊬ℕ ⌊SentenceFromStringWithIndex.[i]⌋⌉]
The fixed point operator Fix cannot be defined using the typed lambda calculus because of the following:
ℕtoℕ ≡ [ℕ]↦ℕ
Helper.[f:ℕtoℕ]:ℕtoℕ ≡ [x:([⍰]↦ℕtoℕ)]→ f.[x.[x]]
Fix.[f:ℕtoℕ]:ℕtoℕ ≡ (Helper.[f]).[Helper.[f]]
The missing type ⍰ does not exist.

Consequently, the above attempt does not construct a string S such that ⌊⌈S⌉⌋ ⇔ ¬⊢⌊⌈S⌉⌋

Furthermore, it is possible to prove in Classical Direct Logic that such a string does not exist.

That diagonal is wrong

I believe you're correct that this approach won't work, because Diagonal here is likely to be monotone increasing. It thus won't have a fixpoint (Tom, doesn't this comment apply to your previous proof sketch as well? Why does LeastFixedPoint [GSN] exist?)

For this reason, the approach Godel originally used was not to enumerate sentences, but to enumerate formulas. I think in your approach we can consider a formula to be a proposition with a free variable, which we can take to be a Natural number. In other words:

Formula ≡ [ℕ]↦Prop
IndexToFormula : [ℕ]↦Formula
FormulaToIndex : [Formula]↦ℕ

Now define

Diag[i:ℕ]:Prop ≡ ⊬ IndexToFormula[i][i]

Bad ≡ FormulaToIndex[Diag]

Diag[Bad] = ⊬ IndexToFormula[Bad][Bad]
= ⊬ Diag[Bad]

As you have done, we can restrict to the formulas that can be defined by parsing a mathematical string, since you consider your set of sentences to be uncountable. Then the definition of Diag should be included in that set, since I've just given its definition.
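The construction above can be made concrete with a toy (Python, with formulas as source strings and truth-evaluation standing in for provability; all names here are mine, not DL's):

```python
# Toy diagonalization: formulas are one-argument predicates represented by
# source text and indexed by registration order, standing in for the
# enumeration of parseable formula strings.
formulas = []

def register(src):
    """FormulaToIndex: record a formula's text, return its index."""
    formulas.append(src)
    return len(formulas) - 1

def apply_formula(i, n):
    """IndexToFormula[i][n]: evaluate the i-th formula at n."""
    return eval(formulas[i])(n)

# Diag(i) asserts that the i-th formula does not hold of its own index.
# Registering Diag's own text yields an index Bad with
#   Diag[Bad] = "formula #Bad does not hold at Bad" = not Diag[Bad].
bad = register("lambda i: not apply_formula(i, i)")

# With truth-evaluation standing in for the provability predicate, the
# self-reference surfaces as an infinite regress; Goedel instead used a
# provability predicate defined by arithmetic, so no regress arises there.
try:
    apply_formula(bad, bad)
    genuinely_self_referential = False
except RecursionError:
    genuinely_self_referential = True
```

Note that the enumeration only has to include Diag's own text; it need not be onto the formulas, let alone one-to-one.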

Since Proposition uncountable, no one-to-one [Proposition]↦ℕ

Since Proposition is uncountable, there is no one-to-one [Proposition]↦ℕ

Consequently, Proposition cannot be indexed (enumerated) by ℕ.

Feels like meta-theory confusion to me

I didn't follow this discussion but it looks like meta-theory confusion to me.

The point of Godel is that if you block his argument inside a theory, that doesn't imply you've proven your system consistent, since Godel then used a meta-theoretical argument. The argument still holds.

First-order dogma: non-existent theory/meta-theory distinction

Gödel's first-order dogma insists on a non-existent theory/meta-theory distinction whereas theories about theories are just more ordinary theories in the specialized field of meta-mathematics.

I might not like that an infinite number of primes exists

Well. I am following this discussion half-heartedly since there's nothing more interesting on LtU so I am going to state something which may be very uninformed.

Godel's proof is about a weak object logic which can do basic arithmetic, but the proof, I gather, is stated in ordinary mathematics; pretty much employing the conceivably strongest meta logic there is.

As an analogy to your argument: I might not like that an infinite number of primes exists. I therefore restrict an object logic so that it can only reason about the first 10,000 numbers or so. I cannot prove anymore, from within the object logic, that there are an infinite number of primes. I then state, within the object logic, that a finite number of primes exist; something which cannot be refuted from within the object logic.

What did I do? Did I give a complete, sound, object logic, or, through meta-logical reasoning, can I derive that the object logic is now unsound?

Looks to me like you did the second: the object logic is now unsound. Well, that's what I am betting upon. Your object logic doesn't have a model anymore.

Foundational bootstrapping maintenance

Godel's proof is about a weak object logic with can do basic arithmetic, but the proof, I gather, is stated in ordinary mathematics; pretty much employing the conceivably strongest meta logic there is.

Well, for something to be a foundation of mathematics, it would have to be suitable as a meta-logic for everything, including itself. Gödel's incompleteness theorems talk about FOA, and their proofs use no meta-reasoning that can't be expressed in FOA, which is why they're such interesting results for foundationalism.

But I do think your broader caution against confusing the meta-logic with the object logic is good. Consider it this way:

If Classical Direct Logic can talk about itself, but it can't come to all the same conclusions for a slight variation of itself, then it's hardly going to be a good framework as we study and innovate on the way we do mathematics. How would we construct slight variations of Classical Direct Logic within itself?

This reminds me of stories about how tough it is to modify a bootstrapped compiler.

If I try to reason about numbers, I throw away numbers

I think Godel is that harsh. If I try to think completely about numbers, then I throw away (the model of) numbers. Because there's always the meta-logical/mathematical Godel argument about arithmetic which only relies on some form of quotation/enumeration (and nobody doubts quotation as a perfectly valid manner of reasoning mathematically); and that argument renders the model invalid (trivializes it into nothing, if you want to be more correct) for any form of completeness for arithmetic.

As all the Wikipedia pages state, it's either soundness or completeness but not both.

I have no idea what Hewitt is doing, I didn't read the paper, but he either cannot reason about arithmetic, or he has simply proven that his logic doesn't have a model. That's my bet.

There is no way out.

Up to isomorphism, there is just one model of ℕ in Direct Logic

Up to isomorphism, there is just one model of the Peano/Dedekind axioms for ℕ in Direct Logic.

Unfortunately, in the first-order logic used by Gödel there are nonstandard models with infinite integers. Consequently, first-order logic is unsuitable for the foundations of Computer Science.

By the argument of Church/Turing, the Direct Logic axiomatization of ℕ is inferentially incomplete.

How do I calculate using this model?

As the 2nd order model of natural numbers is abstract, how do I write the number two? How do I add two numbers together?

Formulas are enumerable

If you can't enumerate formulas then your axioms haven't modeled them completely. I don't see anything novel or interesting about this limitation of your approach.

Strings are countable but propositions are uncountable

Strings are countable but propositions are uncountable because there is a distinct predicate for every real number that holds for just that number.
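The uncountability claim is an instance of Cantor's diagonal argument. As an editorial illustration (not from the post), modeling a "predicate" as a total function from int to bool, the diagonal construction defeats any proposed enumeration of predicates:

```python
# Sketch of Cantor's diagonal argument: a "predicate" is modeled as a total
# function int -> bool. The enumeration `enum` below is illustrative; the
# construction produces a predicate missing from ANY enumeration.
def diagonal(enum):
    """Return a predicate that differs from enum(n) at input n, for every n."""
    return lambda n: not enum(n)(n)

enum = lambda n: (lambda x, n=n: x < n)   # n-th predicate tests "x < n"
d = diagonal(enum)

# d disagrees with every enumerated predicate at its own index:
assert all(d(n) != enum(n)(n) for n in range(1000))
```

Whether Direct Logic's Proposition type actually supports this argument is, of course, the point under dispute in the thread.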

Ok

Ok. That's fine. But then why doesn't the above argument go through with strings? I'm mobile right now so I'm not going to attempt to mark it up, but with some casts to and from string it should still work.

There are no casts in math from propositions to strings

There are no casts in mathematics from propositions to strings.

Not required

If you study the above argument, you'll see that it doesn't require the ability to find the string corresponding to an arbitrary proposition. It requires only being able to encode the formula for Diag. You can do that manually if you want, outside of Direct Logic. So the map from naturals to formulas doesn't need to be onto (or 1-1, for that matter)... It just needs to include Diag in its image.

Types in Direct Logic mean that fixed points do *not* exist.

Types in Direct Logic mean that fixed points do not exist. For example,

Let SentenceFromStringWithIndex be a procedure that enumerates the sentences that can be produced by parsing mathematical strings of the closed theory ℕ, and let IndexOfSentenceFromString be a procedure that returns the index of a sentence that can be obtained by parsing a mathematical string.

I_am_not_provable:Proposition ≡ ⌊SentenceFromStringWithIndex.[Fix[Diagonal]]⌋
    where  Diagonal.[i:ℕ]:ℕ ≡ IndexOfSentenceFromString.[⌈⊬ℕ ⌊SentenceFromStringWithIndex.[i]⌋⌉]
The fixed point operator Fix cannot be defined using the typed lambda
calculus because of the following: 
   ℕtoℕ ≡ [ℕ]↦ℕ
   Helper.[f:ℕtoℕ]:ℕtoℕ ≡ [x:([⍰]↦ℕtoℕ)]→ f.[x.[x]]
   Fix.[f:ℕtoℕ]:ℕtoℕ ≡ (Helper.[f]).[Helper.[f]]
The missing type ⍰ does not exist.

Consequently, the above attempt does not construct a string S such that ⌊⌈S⌉⌋ ⇔ ¬⊢⌊⌈S⌉⌋.
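For contrast, here is an editorial sketch of the same Helper/Fix pattern in untyped Python, where the self-application x(x) needs no type, so the fixed point does exist. The point is that it is the typing discipline, not expressiveness, that blocks Fix:

```python
# Untyped fixed-point combinator: Helper(f)(Helper(f)). The self-application
# x(x) below is exactly what the missing type ⍰ would have to classify in
# the typed lambda calculus.
def fix(f):
    helper = lambda x: f(lambda v: x(x)(v))   # eta-expanded for strict evaluation
    return helper(helper)

fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

In a simply typed setting, x would need a type ⍰ satisfying ⍰ = [⍰]↦ℕtoℕ, and no such (finite) type exists.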

Ttyl

In some ways you seem like a nice guy, but your discussion style in this forum is reminiscent of a support center in Bangalore.

Technical discussions on LtU can be difficult & frustrating :-(

Technical discussions on LtU can be difficult and frustrating :-(

This particular topic requires extreme precision.

Comments and suggestions on how to improve the articles are greatly appreciated!

It would be great to have better tools.

True

This topic does require precision. Quotation (with eval) is known to be dangerous. Change the negated turnstile in my definition of Diag into negation and you might have consistency issues. If you ever publish details on Direct Logic beyond a grammar, we might be able to tell. Good luck.

Liar Paradox derives an inconsistency using fixed points

I think that you are alluding to how the Liar Paradox [Eubulides of Miletus] derives an inconsistency using fixed points:

Liar:Proposition ≡ Fix[f] // note that this fixed point does not exist in Direct Logic   
    where  f[Φ:Proposition]:Proposition ≡ ¬Φ
1) Liar ⇔ ¬Liar    // fixed point
2) ¬Liar           // proof by contradiction from 1)
3) Liar            // chaining 1) and 2)

That's one way to do it

With quotations, the fixed points can sneak up on you, though.

Types do an excellent job of making fixed points not exist

Types do an excellent job of making fixed points not exist.

There is no way for them to creep in :-)

Double post :-(


Formulas are not enumerable

If you can't enumerate formulas then your axioms haven't modeled them completely.

That's right! Hewitt's "mathematics is open" argument claims to show that if the theorems of Classical Direct Logic could be enumerated, there would be a contradiction. This suggests (though I'm not sure it actually proves) that the Proposition and [ℕ]↦Proposition types can't be enumerated either.

Still, I think your construction might work. Although Hewitt claims propositions are uncountable, most of the formal structure for proofs like "mathematics is consistent" or "mathematics is open" seems to stay entirely within a fragment that can be formalized the usual way. If a full Gödel-based inconsistency proof fits within that fragment, then you only need to index that fragment's formulas.

IMO, this is unfortunate. I hope no fragment is powerful enough for that.

I don't see anything novel or interesting about this limitation of your approach.

I have a system in progress that has the same limitation, which is partly why I find this so interesting. Namely, it's a module system, and while I could postulate that the number of modules installed is countable, I don't think that's a question module authors would care to ask. They might care whether the modules are consistent; at least I do.

Slightly different

If you can't enumerate formulas then your axioms haven't modeled them completely.

That's right! Hewitt's "mathematics is open" argument claims to show that if the theorems of Classical Direct Logic could be enumerated, there would be a contradiction.

Hmm. By 'theorem', he means proven theorem, whereas I just mean a proposition with a free variable -- a function definition, basically. Function definitions, being finite data, should be trivially enumerable. If you haven't modeled the formulas well enough to be able to treat them as ordinary data, then it is possible to "assert consistency" without inconsistency, but you haven't really avoided Godel because you haven't actually asserted consistency in a meaningful way. I don't think that approach is interesting.

How is this related to your system? Which limitation do you mean?

Function definitions, being

Function definitions, being finite data, should be trivially enumerable.

I think Hewitt would say something about how there's a sentence for every real number. In the articles (e.g. Inconsistency Robustness in Foundations), I think this is supposed to be justified by the more compelling idea that the notation itself is built of live Actors. (But I'm not sure this idea is actually needed in any of the proofs under discussion.)

In my module system sketch, a function definition may reference an import, and imports refer to authors, who are not necessarily enumerable. (I'm not sure this is crucial to any of the proofs either. The proofs would probably only need to cite a finite number of authors, which perhaps could be specifically included in the enumeration.)

If you haven't modeled the formulas well enough to be able to treat them as ordinary data, then it is possible to "assert consistency" without inconsistency, but you haven't really avoided Godel because you haven't actually asserted consistency in a meaningful way.

Aren't there systems in between, where a theorem of consistency at least has some use, but it doesn't lead to inconsistency? Hmm, after studying this all day, I get the impression most definitions of proof verification require at least as much induction as system IΔ0 to account for the inductive structure of proofs, and that system is already too powerful to prove its own Herbrand Consistency (in the sense of "Herbrand Consistency" defined in that paper, since the notion of consistency itself somehow changes at that level). Any weaker system would have a hard time being its own meta-logic, or even a meta-logic of anything, unless perhaps the structure of a proof was somehow not inductive. (Willard's self-verifying theories are apparently weaker, but paywalls keep me from knowing what meta-logic Willard uses.)

Anyway, all I'm hoping for with this is to find a place we can investigate top-level claims about mathematics, without being so fickle as to avoid claiming we're being consistent. Secondarily, I find proof by contradiction to be self-evident only because it's based on an implicit claim of consistency, so it would be fun to turn it around. If I simply don't get to enjoy either of those things, that's not too bad.

To state the obvious, adding

To state the obvious, adding in an uncountable infinity of additional sentences is irrelevant to whether the countable subsystem is consistent, unless the logic is nonmonotonic. I don't atm recall a claim of nonmonotonicity (although tbh the general trend has been to make more exotic claims rather than fewer).

The countable subsytem

Yes, but I was speaking up because I didn't think it was absolutely clear we could call anything "the countable subsystem" or that we could prove inconsistency from it.

At this point I'm mostly convinced we can (although the details are unclear), so I'm just explaining myself.

I don't atm recall a claim of nonmonotonicity (although tbh the general trend has been to make more exotic claims rather than fewer).

I think Inconsistency Robust Direct Logic is a higher-order extension of Ketonen and Weyhrauch direct logic, which is like relevance-logic-meets-classical-logic. That's non-monotonic by design, but any given lemma makes sure to state explicitly (with a turnstile) the non-monotonic assumptions it's using, and I've never seen Hewitt put uncountably many assumptions in a turnstile.

Classical Direct Logic is the same extension applied to classical logic rather than direct logic, so I do think that Classical Direct Logic is monotonic.

The self-proof of consistency is in Classical Direct Logic, and inconsistency would be a big dealbreaker there, so it seems like that's the system under suspicion.

I don't think it's inconsistent

I thought about it more and decided DL probably isn't inconsistent. I don't see any reason that you couldn't give erasure semantics to Direct Logic as follows:

- Replace ⊢_A B with A -> B
- Erase quoting. ⌈x⌉ becomes x, ⌊x⌋ becomes x
- Replace Sentence[t] and Expression[t] with t

I don't know the details of Direct Logic enough to say for sure. Maybe this wouldn't work with "inconsistency robustness". But the fragment I understand would support these semantics and therefore isn't inconsistent due to the erasable features above.

Erasing to Girard's paradox?

We're on different parts of this. Quoting is the part I understand the least of all, but I think I can tell you "inconsistency robustness" is unheard of in Classical Direct Logic.

Considering the Proposition and Type types, I wonder what consistent system you think it would erase to. I'd be worried about Girard's paradox.

I've only skimmed Charles Stewart's dissertation, but it's all about classical theories with propositions-as-types. Maybe erasing to one of those theories is an option.

Girard's paradox concerns the nonexistent universal type

Girard's paradox concerns the nonexistent universal type.

Charles Stewart's dissertation results are limited to first-order logic :-(

Oops

Girard's paradox concerns the nonexistent universal type.

Oops, good point. I forgot this system's polymorphism uses dedicated syntax for type parameters at the definition level, rather than using a Type type.

Since Proposition uncountable, no [ℕ]↦Proposition is onto

Since Proposition is uncountable, no [ℕ]↦Proposition is onto Proposition and instead picks out an arbitrary proper subset of Proposition.

Consequently, Proposition cannot be indexed (enumerated) by ℕ.

Still pondering this diagonalization

Formula ≡ [ℕ]↦Prop
IndexToFormula : [ℕ]↦Formula
FormulaToIndex : [Formula]↦ℕ

Now define

Diag[i:ℕ]:Prop ≡ ⊬ IndexToFormula[i][i]

Bad ≡ FormulaToIndex[Diag]

Diag[Bad] = ⊬ IndexToFormula[Bad][Bad]
= ⊬ Diag[Bad]

Although there were some problems with this (in particular the assumption that FormulaToIndex can actually be implemented), I still wonder whether it's possible to patch this up using a different Formula type.
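As an editorial toy model of the diagonal construction above (every name here is illustrative, not Direct Logic), "formulas" can be taken as Python source strings for functions from an index to a sentence string. Diag quotes the i-th formula applied to i rather than evaluating it, which is what keeps the construction total; FormulaToIndex is done "by hand", as suggested earlier in the thread:

```python
# Toy diagonalization: "formulas" are Python source strings for functions
# from an index to a sentence string. All names are illustrative.
formula_sources = [
    "lambda i: f'0 = {i}'",
    "lambda i: f'{i} < {i} + 1'",
]

def index_to_formula(n):                  # plays the role of IndexToFormula
    return eval(formula_sources[n])

# Diag[i] QUOTES the i-th formula applied to i instead of evaluating it.
diag_source = "lambda i: '⊬ IndexToFormula[' + str(i) + '][' + str(i) + ']'"
formula_sources.append(diag_source)

bad = formula_sources.index(diag_source)  # FormulaToIndex, done "by hand"

sentence = index_to_formula(bad)(bad)
print(sentence)  # ⊬ IndexToFormula[2][2] -- a name for Diag[Bad] itself
```

The open question in the thread is whether this manual encoding step can be reproduced inside Direct Logic's type discipline.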

It's not that I have any particular idea for how to patch it up. I just don't follow Hewitt's argument for impossibility: "The missing type ⍰ does not exist."

At this point my working assumption is that it is indeed impossible to patch it up while keeping the use of ⊬, but it possible to replace ⊬ with an elaborate formula that explicitly states every inference rule of Classical Direct Logic. This Gödel sentence would then fail to interact with the self-proof of consistency, so it wouldn't be a problem.

(This came up again when I mentioned my lingering doubt in this thread.)

Although strings are enumerable; sentences are not

Although strings are enumerable in Classical Direct Logic; sentences are not enumerable. Consequently, propositions are also not enumerable.

See pages 9-12 of Classical Direct Logic.

Philosophers made ideological commitment to "First-order Thesis"

Contrary to the practice of general mathematics, many philosophers made an ideological commitment to the "First-order Thesis" because it gave them employment by making it possible to prove certain results about first-order theories that are false for practical mathematics.

See pages 33-38 of Classical Direct Logic.

Philosophers celebrated Gödel's results: their jobs depended

Philosophers celebrated Gödel's results because their jobs depended on adherence to the first-order dogma. Since philosophers couldn't prove anything significant about practical mathematical theories, they cut them down to unrealistic first-order theories where results could be proved (e.g. compactness) that did not hold for practical mathematical theories.

In the famous words of Upton Sinclair:

“It is difficult to get a man to understand something, 
when his salary depends on his not understanding it.” 

Not helpful

Assuming that all of the mathematicians of the last century were part of a big conspiracy doesn't seem like a very useful thing to do.

First-order dogma is too limiting leading to many problems.

Mathematics is doing fine. However, the first-order dogma is too limiting, leading to many problems. Consequently, the mathematical foundations of Computer Science must reject the first-order thesis.

It's important to understand underlying motivations of philosophers to understand how they got trapped in the first-order dogma.

Skeleton of deriving inconsistency by Gödel

Preventing self-referential propositions is usually required to prevent inconsistency ("this statement is false"), not incompleteness. Incompleteness you can't avoid, by Godel.

I was talking about inconsistency, though. :)

What Godel actually did almost certainly goes through just fine in Direct Logic, except we could never obtain a formal definition [of] Direct Logic to provide the details.

Hm, right. Let's see if I understand what kind of formal definition you're looking for and how it would be susceptible to Gödel's theorems.

Hewitt's "mathematics is open" argument claims the theorems of Classical Direct Logic can't be computationally enumerated, since otherwise (if the proof is correct) there's a contradiction in the form of the Kleene-Rosser paradox. So we at least shouldn't expect the Proposition type to be classified by any closed, computationally enumerable formal system.

If we seek that kind of formal system for Classical Direct Logic anyway, we'll have to say the Proposition type has instances we can't construct. Whatever system we settle on, maybe it's better to call it a closed fragment of Classical Direct Logic. I think Hewitt is already showing us the rules for this fragment, but not necessarily in a well organized and fully fleshed out way. (Hewitt does mention something about uncountable syntaxes that identify real numbers or actors, but I don't see those being necessary for the argument at hand, so maybe we can take the fragment that excludes those.)

Gödel's first incompleteness theorem might go through for the closed fragment. If so, I expect it to show that the closed fragment is incomplete for its own theorems. The Gödel sentence P would be equivalent to saying that P cannot be proved in the closed fragment.

As trumpeted in this thread, Classical Direct Logic self-proves the consistency of all the provable propositions in its own Proposition type. If we can show that every fragment-provable fragment-proposition is a provable member of the Proposition type, then we can show that the closed fragment is consistent itself.

If Gödel's second incompleteness theorem goes through, we can take the proof of consistency for the closed fragment and use it to derive a contradiction. That would topple not only the closed fragment but the open interpretation as well.

I don't know enough to fill in the details, and some of my leaps of faith might be leaps to the wrong conclusions. Is this similar to the approach you're thinking of? You're pretty confident that some approach would work, successfully proving inconsistency?

Closed theory ℕ incomplete by argument of Church/Turing

The closed theory ℕ is inferentially incomplete by the argument of Church/Turing.

Since there are no "self-referential" sentences in the closed theory ℕ, Gödel's results are invalid.

See page 14-15 of Classical Direct Logic.

Gödel's "self-ref" prop leads to inconsistency [Wittgenstein]

Gödel's "self-referential" proposition leads to inconsistency by an argument of Wittgenstein.

See pages 6-7 of Classical Direct Logic.

Mistake?

The PDF you're linking to is titled "Inconsistency Robustness in Foundations" and has only 56 pages.

Page numbers in "Inconsistency Robustness" offset from HAL

Ross,

Thanks for noticing this.

The page numbers in "Inconsistency Robustness" are offset from the page numbers in the PDF in HAL.

I have corrected the page numbers in the postings above to correspond to the page numbers of the current PDF in HAL.

Hewitt, you contradict yourself, please explain

Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

You elsewhere wrote that, on the contrary, a function taking numbers to sentences was permitted.

Direct Logic even allows some functions from integers to sentences,

A number is not a sentence and so you have contradicted yourself.

Will you please clear up your self-contradiction (or admit and explain your mistake)?

Direct Logic allows *some* functions from integers to sentenc

The following is a type: Lambda[Integer]↦Sentence. However, you have to obey the rules to do anything with it.

re: Direct Logic allows *some* functions from integers to senten

The following is a type: Lambda[Integer]↦Sentence. However, you have to obey the rules to do anything with it.

In which step of the proof sketch do you assert that a rule is being violated, or do you agree that DL contains a self-proof of inconsistency?

It is not obvious that your post *exactly* corresponds to DL

It is not obvious to me that everything in your long post *exactly* corresponds to Direct Logic. I would be willing to take a look at a completely formal proof (including working through the fixed point operator) that you have constructed a self-referential statement in Direct Logic (without all the other stuff in your post).

re: It is not obvious that your post *exactly* corresponds to DL

It is not obvious to me that everything in your long post *exactly* corresponds to Direct Logic

"Direct Logic" being an incompletely specified system you haven't bothered to formalize for us.

Joke's on me. I took you seriously. Ha ha.

As I said, any examples of ambiguities in the grammar ....

As I said, any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

So the requirement for you to make a contribution stands: A completely formal proof (including working through the fixed point operator) that you have constructed a self-referential statement in Direct Logic (without all the other stuff in your post).

re As I said, any examples of ambiguities in the grammar ....

As I said, any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

The grammar of terms, sentences, and propositions does not fully specify a formal system of mathematics.

Grammar + inability to enumerate bar self-referential sentences

Grammar + inability to enumerate from within Direct Logic bar self-referential sentences.

Any ambiguities in the rules of Direct Logic are also welcome.

re: inability to enumerate from within Direct Logic

Grammar + inability to enumerate from within Direct Logic bar self-referential sentences.

Inability to enumerate what?

I think that you have contradicted yourself again.

Elsewhere you have written that given a DL theory it is possible to enumerate all of the proofs in that theory. Consequently it must be possible to enumerate all terms and all sentences.

Theorems of a math theory are enumerated in math

Theorems of a math theory are enumerated in math (closure). But the math theory does not have access to this enumeration and the enumeration function is not part of the language of the theory.

It's entirely possible for

It's entirely possible for someone to be good at some things and bad at others. Hewitt is, it appears, not good at thinking carefully about computability/Godel (which I find a more interesting connection than Curry-Howard; but I digress). We've been giving him considerable benefit of the doubt (even I have, though I placed a lower likelihood on this than you did), encouraging him to clarify what he's doing. He, though, has three or four points he falls back on alternately: self-referential sentences can't be expressed in DL, sentences can't be enumerated in DL, try to do that in DL and see what happens, and, occasionally, DL is inconsistency robust. On our side, things have pretty much stabilized for some time on "please clarify, to help us understand better what you're doing", so his set of fallback points is starting to sound quite repetitive. I fear he isn't at this time able to clarify further.

[I see he was writing a reply at the same time I was writing this. <gets out popcorn>]

re It's entirely possible for

John, I think I mostly agree with you.

His latest claim at least suggests something specific and meaningful might be lurking. He wrote:

Theorems of a math theory are enumerated in math (closure). But the math theory does not have access to this enumeration and the enumeration function is not part of the language of the theory.

If you look back at my sketch I use two functions that I didn't specify in detail: P_Encode for enumerating Terms, and S_Encode for enumerating sentences.

There's a technicality in the grammar that proofs are Terms while theorems are Propositions proved by proofs that establish some Sentence. So I need two encode functions: one for Terms, the other for Sentences.

So the generous interpretation here is that terms and sentences are "conceptually" enumerable and I could enumerate them in a model of DL, but that somehow DL prevents me from defining P_Encode and S_Encode in DL itself (about the sentences and terms of the self-same theory).

And supposedly this thing -- whatever it is -- that prohibits defining P_Encode and S_Encode is a syntactic restriction revealed in the grammar that the paper contains.

Supposedly, P_Encode and S_Encode, mapping terms and sentences to natural numbers, violates some syntactic type restriction. Yes?

Maybe I don't understand Hewitt's notation but I don't see any such type restriction in the grammar. Do you?

I'm a bit frustrated because I think it might well be possible to construct a robust system that would prohibit a type mapping natural numbers to sentences. That would block Goedel's construction even if the same system contained a self-consistency proof. In effect it would be a way to enforce a limitation that we have a semantic tower of models instead of more direct reflection. But I can't find this concept in the more formal parts of what Hewitt has actually presented.

How would you write Encode?

How would you write Encode? You previously proposed enumerating possible sentences until you found a match, but how do you know you'll ever find a match? To prove that, you'd need to know that every sentence is of one of the forms that you know about, with an induction principle to tell you that they're finite trees built up from those forms. I don't see anything like that in the rules, and since Hewitt has said that there are uncountably many Sentences, it seems unlikely that he'd have such a principle.
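The enumerate-until-match idea can be sketched as follows (an editorial illustration; the alphabet and names are made up). The sketch makes the termination worry concrete: the search halts only if every sentence really is some finite string over the alphabet.

```python
from itertools import count, product

ALPHABET = "()¬⊢01+=<x"   # illustrative finite alphabet

def all_strings():
    """Yield every finite string over ALPHABET, shortest first."""
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def encode(sentence):
    """Enumerate-until-match. Terminates only if every sentence IS a finite
    string over the alphabet -- exactly the exhaustiveness/induction
    principle questioned above. With uncountably many Sentences, some
    inputs would make this loop forever."""
    for i, s in enumerate(all_strings()):
        if s == sentence:
            return i

print(encode("("))   # 0: "(" is the first length-1 string
```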

If there were something like that, I could internalize the Integrity principle and make my alternate version of the proof go through as well (I think).

Note that axioms guaranteeing particular forms for sentences would also throw a monkey wrench in the interpretation I proposed to demonstrate consistency. That is, if there were axioms to pattern match on Terms / Sentences, then obviously there would be no way to get that information back out of a boolean value.

So I think the situation is:

1. Direct Logic is consistent because it doesn't allow inspection of Terms/Sentences and thus a statement of |- A can be interpreted trivially as just asserting A, A |- B as asserting A --> B, etc.

2. If you added patterning matching on Terms/Sentences, it would become inconsistent.

Hey, Matt M:

Do you have a proposal for a workable formalization of DL? Not merely an underspecified syntax but also core axioms?

Hewitt doesn't seem to but you have some ideas about what one would look like.

If you've got one, I could try to figure out if P_Encode and S_Encode work in *that*.

I don't mean to just burden shift back to you here. I just really no longer care to keep guessing about what DL officially is since no coherent story about it seems to be forthcoming from C.H.

Sorry that you are having trouble understanding Direct Logic

Sorry that you are having trouble understanding Direct Logic.

Suggestions for improvement are greatly appreciated.

re: Suggestions for improvement are greatly appreciated.

My suggestion is that you try to articulate a minimalist but robust formal definition of DL. A definition that has clearly explained notation, explicitly listed rules of inference, ... so forth.

The problem is not that "we" here on LtU are stupid. The problem right now is that *you* are too vague.

Direct Logic has grammar + rules

Direct Logic has grammar (page 9) and rules (pages 21-23) in Mathematics self-proves its own Consistency.
What more do you want?

It may be that vagueness is not the problem. Perhaps the problem is that a more tutorial presentation is needed as well :-)

Are you being serious?

I read page 9 as being informally stated well-formedness rules, not just a grammar. Aren't those rules part of Direct Logic? But you're missing all of the other rules, like ... beta-reduction, when things are equal, how one would ever establish that a term is a proof (are you using Curry-Howard?), how are sets formed, ... The list really isn't short.

I've been extrapolating based on what I think are reasonable assumptions, but Tom is right that everything is much too vague to do anything but speculate. (Tom: I don't want to undertake specifying Direct Logic, but if you have any particular rules that you'd like my guess on, let me know).

Direct Logic needs a tutorial book

Direct Logic needs a tutorial book à la Language, Proof, and Logic: 2nd Edition.

Mathematics self-proves its own Consistency is a scientific article addressed to professionals for the purpose of advancing the state of the art. As such, it doesn't reiterate standard material, e.g., the lambda calculus. However, it does state the axiom for sets, because Direct Logic does not use the first-order formulation (for reasons stated in the article) and instead uses the following, in which there are no restrictions on the predicate P:

       e∈{x∈D | P[x]} ⇔ (P[e] ∧ e∈D)
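As a reading aid (an editorial illustration, with D and P chosen arbitrarily), a Python set comprehension obeys exactly this membership equivalence:

```python
# The comprehension axiom above, read in Python: membership in
# {x ∈ D | P[x]} holds iff P(e) and e ∈ D.
D = set(range(10))
P = lambda x: x % 2 == 0

S = {x for x in D if P(x)}

for e in range(-5, 15):
    assert (e in S) == (P(e) and e in D)
print(sorted(S))  # [0, 2, 4, 6, 8]
```

Note that Python's comprehension, like the axiom as stated, is the separation form: it carves a subset out of a given domain D rather than permitting unrestricted comprehension.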

It also needs a formal definition

Certainly specific articles that don't go over standard parts of the system are fine but usually there's a comprehensive set of rules somewhere. Am I wrong? Most of the time I see a new formal system introduced they list all of the rules in the paper introducing the system, even if the text only glosses over what some of them are supposed to mean. That's quite different than a tutorial, which would also probably be helpful. And if this is all standard, can't you at least give references?

Article provides formal definitions for purposes of article

The article at the beginning of this forum topic provides formal definitions sufficient for the purposes of article. However, it does not rehash old well understood material (e.g. the lambda calculus) that are explained in the references of the article.

Of course, if anyone comes up with a genuine ambiguity then it will have to be addressed.

"a scientific article addressed to professionals"

I think you might be deceiving yourself.

Article of this forum topic is addressed to professionals

The article at the beginning of this forum topic is addressed to professionals. Now we are going to find out how many of them read the article ;-)

Hewitt, I'll volunteer a bit.

Hewitt,

First off, I'm not sure if it is polite in your book to call you "Hewitt". You sign your LtU stuff that way so that's what I use. I hope it is polite. If "Carl" or "Dr. H." or something else is better I'll adopt that. I'm comfortable with "Tom" or "Thomas" for myself.

Second, I think you can see from the comments on this topic that you have an interested but highly skeptical audience that is impatient for a clear and rigorous definition of DL, against which your extraordinary claims can be examined in detail.

Now maybe you don't care about this LtU audience but it seems like you do.

Would you be interested in trying to write down an explanation of DL for this audience, the skeptical one right here on LtU? Is there some way that you can see that me or any of us might be able to help?

I don't mean to sound silly about this but if you think you are writing for "professionals" who get all this better than the rest of us who are here.... can some of those pros step forward here and defend you a bit and perhaps help explain what you are trying to say a bit better?

I don't think that DL needs a tutorial exactly. It needs something more like Euclid's "Elements" or Conway's "On Numbers and Games".

Making a contribution

Thanks for your friendly offer!

Students usually begin by calling me "Professor Hewitt" but I encourage "Carl." However, as soon as there is another Carl, they revert to "Professor Hewitt" ;-)

Unfortunately, I don't have time to write something special for LtU :-( However, LtU is a useful forum and I am happy to participate. Unfortunately, none of my colleagues have shown up here :-( In order to encourage them to participate, it is best to keep insults to a minimum ;-)

I try to make articles as readable as possible and they all are under continual revision (made possible by the Internet). So if you can find a bug or have a particular concrete suggestion or question, I will be happy to consider it. Academic articles tend to be written with incredible conciseness, which is why I suggested a tutorial book.

There are also videos; some of which have been published, e.g., Stanford CS Colloquium June 2012. More videos will be published this year.

It is a challenge to keep everything up to date because there are continual small improvements in the state of the art by students and colleagues.

Attitudes like being skeptical are not the point. I encourage students and colleagues to think critically and precisely because what counts are the arguments. At the same time it helps to be curious and have an open (but non-credulous) mind.

What font encoding

What font encoding is LtU using, as this just looks like gibberish in Chromium?

LtU is garbling Unicode :-(

For some reason, LtU is garbling Unicode :-(

At first, the Unicode looks OK but later has been garbled.

It needs to be fixed.

I think it needs to be fixed as it makes unicode rich text almost impossible to decipher. It's also mangling Godel's name with the umlaut. Such things are what make me want to stick to ASCII characters for this stuff, and make me reluctant to use systems that don't.

Sentences of a math theory cannot be enumerated in Direct Logic

Sentences of a mathematical theory cannot be enumerated in Direct Logic even though the propositions that are theorems can be enumerated.

I'd suspect you'd agree that

I'd suspect you'd agree that the 2nd incompleteness theorem goes through without difficulty if the consistency sentence were taken to be (forall n:Nat, Provable[n] /\ Provable[coded_not(n)] -> 1 = 0), or something else along these lines. Then the question of forming Encode and Decode never enters into the proof. (There may be a gap in my understanding of the theorem. If I'm wrong on this I'd like to be corrected.) The only difficulty is in demonstrating, at the meta-level, that the Hilbert-Bernays conditions hold for Provable. This means showing that DL is strong enough to prove certain statements about arithmetic; syntax is still only relevant at the meta-level. There is no apparent reason to think that DL is not strong enough.

There is a question as to whether this consistency sentence is equivalent to Hewitt's. I was under the impression they are not equivalent, but your discussion with Matt M on whether Encode is expressible has raised doubts in my mind. In any event, if they are equivalent then it follows that Hewitt's system is inconsistent, and hence not suitable for doing mathematics as envisioned.

If, on the other hand, they are not equivalent, then we are left to wonder what significance the self-proof version of consistency has. I don't see it as so different from the Coq formalization I posted "proving" that there can be no contradiction, which merely reflected a built-in property of the CoC. It did not say anything like "no contradiction can result by applying the rules of CoC". Neither does Hewitt's theorem. I don't see that the presence of the turnstile makes much difference here.

Summarizing, the self-proof is unconvincing, but apparently formally valid. Nonetheless, either it is not a self-proof of consistency in the precise sense of the formal statement of the 2nd theorem (i.e., the theorem also finds it unconvincing), or it is and Hewitt's system is inconsistent.

Fixed points are not required for Godel's theorem to apply.

Gödel's disproof of completeness does not require fixed points. It requires only transforming axioms.

If you have statements whose truth values are known, and some rules that you can use to derive from them additional statements whose truth values are also known, gödelization can be applied. Whatever transforming axioms you have for statements can be encoded as functions whose range and domain are integers.

These functions transform an integer corresponding (according to some schema) to a statement, into integers corresponding (according to the same schema) to a related statement. It is not required that a statement ever be "converted" into one of these numbers; it is enough that we can say with certainty that such an integer exists.

This set of functions corresponding to the axioms of a mathematical system can be used, if the set of axioms is complete and consistent, to derive integers corresponding to every theorem that can be shown to be true according to the axioms of that system. The sequence of application of these functions correspond exactly to the steps of a proof of the theorem.
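This numbering idea can be made concrete with a toy sketch (my own illustration, not part of the original post; the mini-alphabet, the base-N `encode` scheme, and `successor_rule` are all hypothetical stand-ins for a real Gödel numbering, which used prime factorizations):

```python
# Toy Gödel numbering: fix a finite alphabet and encode each statement
# as a base-N integer, one digit per symbol.
SYMBOLS = "0S+=()"  # hypothetical mini-alphabet for toy arithmetic statements

def encode(statement: str) -> int:
    """The integer corresponding (under this schema) to a statement."""
    base = len(SYMBOLS) + 1
    n = 0
    for ch in statement:
        n = n * base + SYMBOLS.index(ch) + 1
    return n

def decode(n: int) -> str:
    """Recover the statement from its integer."""
    base = len(SYMBOLS) + 1
    out = []
    while n:
        n, digit = divmod(n, base)
        out.append(SYMBOLS[digit - 1])
    return "".join(reversed(out))

# A transformation rule becomes a function from integers to integers: here,
# a toy rule turning the number of "a=b" into the number of "S(a)=S(b)".
def successor_rule(n: int) -> int:
    lhs, rhs = decode(n).split("=")
    return encode("S(" + lhs + ")=S(" + rhs + ")")

assert decode(successor_rule(encode("0=0"))) == "S(0)=S(0)"
```

As the post says, no statement is ever "converted" into its number at the logical level; the point is only that such integers, and such functions on them, exist.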

The inconsistency of a mathematical system would be demonstrated if there exists a potential theorem S such that integers corresponding to both S and ¬S can be derived.

The incompleteness of a mathematical system is demonstrated if there exists a potential theorem S such that integers corresponding to neither S nor ¬S can be derived. This is what Gödel did with mathematics as it was then formulated.

Now, Gödel's observation was this. Consider a theorem which states that the set of theorems having indeterminate truth value is nonempty. It must definitely either be true or false. Nevertheless, neither the integer corresponding to that theorem nor the integer corresponding to its negation can be derived. Therefore that theorem is itself a disproof by counterexample of the completeness of ordinary mathematics.

It is important to note that while the theorem itself is a member of the set of theorems it is talking about, the theorem does not in fact refer to itself. It merely asserts the existence of a set of theorems having a particular property. The fact that it is also a member of that set is happenstance.

This proof rests on the fact that the theorem *cannot* refer to itself. If the proof of a theorem were permitted to use the theorem itself, then it would be an axiom, not a theorem. Mathematics, if it allowed theorems to refer to themselves in order to prove themselves, would be inconsistent rather than merely incomplete.

Because banning self-referential statements does not prevent a set of mathematical functions from existing in your system, it does not prevent mathematical functions that model your transformation rules from existing in your system. Because banning self-referential statements does not prevent any particular integer from existing in your system, it does not prevent integers corresponding to the axioms and theorems of your system according to some schema from existing.

Therefore I do not see any way you could be avoiding Gödel's incompleteness theorem and still have an interesting mathematics that permits operations on unlimited-precision integers.

Ray, you missed re fixed points not required

Gödel's disproof of completeness does not require fixed points.

Clean miss.

The topic is Goedel's proof (the 2nd theorem) that in some sorts of formal system, a self-proof of consistency implies inconsistency.

DL (with any set of consistent additional axioms you like) is incomplete and nobody disputes that. Goedel's first proof applies just fine. DL seems to be just as useful as any system to which Goedel's 2nd theorem applies yet, nevertheless, DL contains a formal proof of self consistency. DL's extreme restrictions on sentence formation prevent Goedel's 2nd theorem construction of a contradiction from being carried out in DL. That's nifty.

How about putting it this way

How about putting it this way. Godel says for a sufficiently powerful system, if it can prove its own consistency then it is in fact inconsistent. We're given that DL can prove its own consistency. So, is it inconsistent, or is it not sufficiently powerful? And if it isn't sufficiently powerful, what meaningful capability has been lost?

A theorem of consistency leads to a contradiction, using Goedel's construction, by constructing a sentence G such that the system shows that the Goedel number of G is not "provable" and the system shows that G asserts that its own number is not provable. If the system can do all those things it has proved the self-contradictory statement G. Hewitt's type restriction on sentence formation means that an internal-to-DL construction of G can't even get off the ground.

We can use Goedel's first theorem methods from outside and construct G using our meta-world knowledge. In meta-world we can prove that DL doesn't (or had better not) have a proof for G or its negation. DL itself can't even begin to express this, though. DL contains G. You probably won't find any proof of G in DL. But unless we leave DL for the meta level you won't prove, within DL, that DL can't prove G because the syntactic restriction prevents you from even beginning to express the facts of G's tricky nature.

Direct Logic is sufficiently powerful

Direct Logic is sufficiently powerful to do all of classical mathematics, but it has no self-referential sentences.

Ah. Thanks.

Ah. Thanks. It's seriously helpful to know how you see it. Not to quibble too much, but since classical mathematics does self-referential sentences, if DL doesn't (supposing, that is, the constraint works as intended) then seems it can't actually do everything classical mathematics can. But, taking the point poetically (which I'm quite willing to do, for getting the gist of things), if it's really as powerful as classical mathematics, it's an uphill battle to plausibility that with that power it wouldn't also have the "paradoxes" in some form, even though they might have to pop up in a different form due to the difference in what you're allowed to say.

Classical mathematics can't allow self-referential sentences

You are very welcome!

Classical mathematics can't allow self-referential sentences because Wittgenstein showed that self-referential sentences result in contradiction.

There is a tension here:
* Direct Logic attempts to have mathematics and mathematical theories that are very strong
* Direct Logic attempts to have mathematics and mathematical theories be consistent

The art and science lies in negotiating the above tension.

I see where we may be

I see where we may be talking past each other a bit, here, regarding what classical mathematics "can do". I would say classical mathematics most certainly can do self-referential sentences, and doing so often results in antinomies. It seems when you speak of what classical mathematics "can do" you mean what it can do without causing antinomies. But the difference between what it "can do" with versus without self-referential sentences is, in this case, the whole point. If inconsistencies arise from the deep property of how powerful the system is, rather than from the superficial property of the form of the sentences, then forbidding certain sentences without reducing the power can be fully expected to merely change the form in which the problem manifests itself, rather than eliminating the problem. From the many different forms of paradoxes I've seen over the years, my deep expectation (and you do invoke intuitive expectations right at the start of your discussion, when claiming justification for proof by contradiction) — my deep expectation is that the problems arise from the deep property of power rather than the superficial form of sentences.

Removed

Removed

Hmm

Could you double check your proof? You've got identity functions in there that look funny. Looks like this would work, though.

λf. λg. f (g (λx. f x x)) (g (λx. f x x))

Also, could you explain briefly why this is paradoxical?

Removed

Removed

How is it a paradox?

The form of the Curry paradox I'm familiar with starts by defining X = X -> Y. The set theory version you linked to is along the same lines using (z in z) to set up the loop.

The simply typed proof above seem to be missing that crucial aspect. How could you use it to obtain contradiction?

Removed

Removed

Wording

I guess I was just confused by the wording. I would have said that those naive comprehension axioms to set up the self-reference are a crucial ingredient in order to have Curry's paradox. The proposition (X -> (X -> Y)) -> ((X -> Y) -> X) -> Y is valid in the kinds of logics I'm familiar with, so it seems funny to me to call it a paradox. I think it's Hewitt's intent to avoid Curry's paradox by avoiding self-reference (see page 6 of his document).
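For what it's worth, that the displayed proposition is classically valid can be checked by brute force over truth assignments; a minimal sketch in Python (the helper names are mine):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Classical material implication."""
    return (not a) or b

def curry_prop(x: bool, y: bool) -> bool:
    """(X -> (X -> Y)) -> ((X -> Y) -> X) -> Y, with -> right-associated."""
    return implies(implies(x, implies(x, y)),
                   implies(implies(implies(x, y), x), y))

# Holds under every classical truth assignment, i.e., it is a tautology.
assert all(curry_prop(x, y) for x, y in product((False, True), repeat=2))
```

The paradox only arises once self-referential comprehension lets you identify X with X -> Y; the bare proposition, as checked here, is harmless.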

Removed

Removed

DL sentences require types, e.g., Λ[Sentence]↦Sentence

Direct Logic sentences require types for their construction, e.g., Λ[Sentence]↦Sentence. Consequently, it is impossible to construct arbitrary sentences using fixed points.

Removed

Removed

Fixed points don't exist for typed sentences

Fixed points don't exist for typed sentences. For example, the Fix defined below cannot be shown to be of type Λ[Sentence]↦Sentence:

Diagonal ≡ λ[s:Sentence]→ ⌈λ[x:Sentence]→ ⌊s⌋[⌊x⌋[x]]⌉
Fix ≡ λ[s:Sentence]→ ⌊Diagonal[s]⌋[Diagonal[s]]

PS. Thanks for pointing out Diaconescu's theorem.

Removed

Removed

Fixed points for computation; but they can't construct sentences

Fixed points exist for computation, but they can't construct sentences. The grammar of Direct Logic limits the ways in which sentences can be constructed.
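The computational side of this can be illustrated with an untyped fixed-point combinator; the sketch below is Python, not Direct Logic, and is only meant to show fixed points doing ordinary computation:

```python
# Z combinator: a fixed-point combinator that works under eager evaluation
# (the eta-expansion "lambda v: ..." delays the self-application).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without referring to itself by name; the fixed point
# supplies the recursion.
factorial = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

factorial(5)  # 120
```

The self-application x(x) that makes this work is exactly the kind of thing a typed grammar rejects, which is in line with the point above that DL's sentence grammar blocks such constructions.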

Removed

Removed

Firxed points for terms; not sentences

InfinitelyDeep is a procedure which is Λ[ ]↦Set. Fixed points can be freely used to define procedures on terms. But sentences are not terms.

The relevant clause from the grammar of Direct Logic is the following:
If f is Λ[Sentence]↦Sentence, s is Sentence, and Halt[⌈f.[s]⌉] then f.[s] is Sentence
Consequently, a procedure that is known to compute sentences from sentences can compute a sentence from an already known sentence.

Removed

Removed

Complexities in which lambda expressions are Term

An unavoidable complexity is the issue of convergence in the lambda calculus. For example, Factorial.[-1] might not converge. On the other hand, for every x, it is the case that x=x. Consequently, if Factorial.[-1] is Term then Factorial.[-1]=Factorial.[-1], etc.

So there is a problem stating when Factorial.[x] is Term.

Sentences are easier because the untyped lambda calculus is not allowed.

Removed

Removed

Intuitively navigating issues

A goal is to intuitively navigate the following issues:
* Self-referential sentences lead to inconsistencies in mathematics and consequently must be barred. Self-referential sentences cannot be constructed if (1) fixed points do not exist in the grammar of sentences and (2) there is no way to enumerate sentences. Fortunately, self-referential sentences do not have any practical uses.
* A computational expression may not converge and can be used as a term in logic only if it converges. Consequently computational expressions should be grammatically different from terms in logic.

unconstructable sentences?

Is it the case that DL includes unconstructable sentences? For example, one could establish the existence of unconstructable Constants. Constants are Terms. Do there exist in DL sentences that contain Terms which are unconstructable Constants?

(The paper doesn't much explain Constants other than to say that there "can be" uncountably many of them.)

Constants not countable; consequently sentences not countable

Constants are not countable because, for example, for each real number x, ⌈x⌉ (the reification of x) is a constant. Consequently, sentences are not countable.

We can have the following axiom

∀[e:Expression]→ (Halt[e] ⇒ ∃[x]→ x=⌊e⌋)

Removed

Removed

Inconsistency Robust Sentences are not reducible to truth values

Inconsistency Robust Sentences are not reducible to truth values. Suppose ⊢TP and ⊢T¬P. Do you want to conclude that True=False in theory T?

Added in editing:

Since Tarskian model theory is based on assigning truth values to sentences, it is not directly applicable to inconsistency robust theories because there are no Tarskian models. For discussion of probabilities, fuzzy values, etc., see the following: Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic

On the other hand, Direct Logic does have standard Boolean equivalences, e.g. p⇔¬¬p

Removed

Removed

Removed

Removed

Reification/abstraction make truth values problematical in DL

Reification and abstraction make reducing propositions to truth values even more problematical in Direct Logic.

Removed

Removed

Reification/abstraction convert between Sentences/Propositions

Reduction to truth values is impossible because reification and abstraction convert between Sentences and Propositions as well as because some Expressions converge to Sentences and Propositions.

Removed

Removed

On my list of things to

On my list of things to avoid like the plague if you're trying to get people to listen to your new idea (which I've been trying to turn into a blog post for ages). Don't invent lots of new terminology. Hamilton made that mistake with quaternions, producing a slew of terms almost all of which vanished (when's the last time you heard the word "versor"?), although "scalar" and "vector" are still around, and "tensor" with a completely different meaning.

Removed

Removed

Direct Logic doesn't have the property of "compactness"

Direct Logic doesn't have the property of "compactness" of Algebraic Semantics. See page 8 of Mathematics Self-proves Its Own Consistency.

Removed

Removed

Finite statements about the transfinite

From what he's just been saying, he is making finite statements about the transfinite. He's saying that if such-and-such doesn't halt then so-and-so combination of symbols isn't grammatically admissible. Which would mean that whether or not an expression is grammatically correct is formally undecidable.

It's possible that his system could be radically rearranged so as to make it useful. One would have to refactor it so that the role of undecidability in the system appears in a part of the system where its consequences are manifest rather than hidden. It's even possible that one would then end up with what I've been working toward — a system in which the foundation-shaking paradoxes are transformed into mere unalarming nontermination. But the big challenge in that is in just how to arrange its rigorous treatment, and he's not there yet.

Removed

Removed

In some previous posts I

In some previous posts I asked him about mutual recursion between syntax and semantics in his system. And he then modified the definition of term, so as to not use halt.

That's called inconsistency robustness.

Yes, this is my fear, that

Yes, this is my fear, that not asking about the Unicorn in the room may fail to solve the problem of its presence. That was the point (some time back now) of my anecdote about Feynman: by not leaving him alone in the office, they prevented him from observing their weak security, but didn't make it less weak. I'm also extremely uncomfortable with all this emphasis on inconsistency-robust logic, because Hewitt started out by saying the expectation of consistency justifies proof by contradiction; if you're using inconsistency-robust logic, then that seems to me to justify rejecting proof by contradiction, because you wouldn't care about robustness against something you don't think happens.

I do think it may be possible to arrange that if Unicorns cause a problem, the problem takes the form of non-termination rather than paradox. But of course I also don't want type theory, especially not at a fundamental level, as I consider it a complication that obscures what's really going on.

"To be is to be the value of a variable." — W. V. Quine

(Though Quine in his essay was talking about a different kind of mythical horse-like creature.)

A Wittgensteinian reply to DL

In traditional mathematics, at least naively:

An "utterance" (a word not normally used in the literature) is a physical thing. It is specific marks on a chalk board or glyphs on a printed page or sounds made or ...

Here is one utterance that could be regarded as some bits on a disk at LtU central or as the state of some pixels on your screen: "2 + 2 = 4".

Here is another utterance of a similar sort: "2 + 2 = 4".

A "sentence" is an abstract equivalence class, so to speak, of utterances. The two utterances above could be regarded as "the same sentence", for obvious reasons. In various (related) mathematical conversations, all utterances which are the same sentence play a consistent role in the language games.

A mathematical utterance might not be a sentence per se. Another possibility is that it is a "proof" -- another kind of equivalence class among utterances, similar to "sentence".

Classical formal systems in mathematics give us effective (computable) definitions for "sentence" and "proof" and an effective way of telling whether a given proof establishes the (formal, within the system) "truth" of a given sentence.

Goedel proved some interesting facts about classical formal systems in general.

When you talk about DL, you are talking about something very different, with no obvious relation to classical formal systems.

It might sound amusing to say that some conclusion in DL contradicts Goedel's theory of proofs and sentences but when you say that, you are making a pun, not a point.

Removed

Removed

Direct Logic provides a foundation for math practice in CS

Direct Logic provides a foundation for mathematical practice in Computer Science for both of the following:
* Mathematical theories that are thought to be consistent (e.g. Peano numbers, homotopy theory, etc.)
* Practical theories that are pervasively inconsistent (e.g. climate change, the human liver, etc.)

All of this is very much in the spirit of Wittgenstein. (See paper at the beginning of this forum topic.)

Unfortunately, the self-referential sentences (used by Gödel, et. al.) result in mathematical inconsistencies. Consequently, the foundation provided by Direct Logic for Computer Science has to be different from that of Gödel, et. al. At the same time, Direct Logic has to be intuitively usable for both mathematical theories and practical theories.

Thus, Direct Logic is a substantial undertaking that builds on an enormous amount of previous work. It will be exciting to see what happens ;-)

Removed

Removed

Beware of undertaking a land war in Asia!

Beware of undertaking a land war in Asia!

"I predict you will sink step by step into a bottomless quagmire,
 however much you spend in men and money."
  Charles De Gaulle

Indeed

Indeed, it's one of the classic blunders.

Updated article published in book "Inconsistency Robustness"

There is an updated article in the book "Inconsistency Robustness" at
Inconsistency Robustness in Foundations.

I'll take that bet, so long

I'll take that bet, so long as the payout criterion contains a self-reference preceded by a negation operator.

I think I found another problem

In classical mathematics, only known theorems can be used in a proof. You can't deduce the existence of a theorem and then insert it macro-style into the proof without statement. In principle, quoting theorems is just a shortcut to including their text, but the proof above can't macro-expand without a macro-expander that includes the ability to find the statement Φ within a hyper-infinite (at least aleph-one) set of possibilities. That's not within the spirit of classical mathematics.

I therefore contend it's not a valid proof within classical mathematics for that reason.

you didn't re: I think I found another problem

"but the proof above can't macro-expand without a macro-expander that includes the ability to find the statement Φ"

The proof begins by assuming the hypothesis "Inconsistent", in order to try to obtain a contradiction from that hypothesis. (The real goal is to prove that the hypothesis is not true.)

The proofs of DL are effectively enumerable. One could write a program that will run forever, listing them all, listing them in order from the shortest ones to longer ones, with a secondary sort in alphabetical order. There will be a finite time between each proof printed out. Every proof of finite length will eventually be printed out.
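That shortest-first, then-alphabetical enumeration can be sketched directly (the two-letter alphabet is a hypothetical stand-in; any finite proof alphabet works the same way):

```python
from itertools import count, product

ALPHABET = "ab"  # hypothetical stand-in for a finite proof alphabet

def enumerate_strings():
    """Yield every finite string: shortest first, alphabetical within a length."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

gen = enumerate_strings()
first_six = [next(gen) for _ in range(6)]
# first_six == ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Filtering this stream for well-formed proofs gives the enumerator described above: every candidate appears after a finite wait, so if an inconsistency proof exists at all, it is found in finite time.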

If Inconsistent were true, then a suitable Φ would be discovered, automatically, by that program, in a finite amount of time. So the proof proceeds by assuming we've done that, and now we have Φ in hand.

Now that we have a hypothetical Φ in hand we immediately obtain a contradiction and so, by proof-by-contradiction, our hypothesis (Inconsistent) is false. The system thus self-proves "not Inconsistent". Constructivists can stop there. Classicists can proceed on to "Consistent".

Any theorem can be used in a proof without restriction

Actually, any theorem can be used in a proof without restriction. And, of course, every theorem has a proof even if you don't know what it is.

Wittgenstein and DL

The syntactic restriction in DL reminds me indirectly of the famous Wittgenstein quote:

Wovon man nicht sprechen kann, darüber muss man schweigen.

See my comment about

See my comment about Feynman's anecdote.

Translation of Wittgenstein quote?

What is the translation of the Wittgenstein quote?

re: Translation of Wittgenstein quote?

"Whereof one can not speak, thereof one must be silent."

my preferred translation

My preferred translation is "Whereof I cannot think of the answer right now, thereof you must be silent."

Sorry: didn't mean to double

Sorry: didn't mean to double post.

lol.

lol.

Unstated assumption

The problem with the proof is quite simple, but easy to miss because it's not physically present on the page:

Proof by contradiction only works if you assume that you are working in a system without contradictions (ie, consistent).

State this assumption explicitly, and your theorem becomes:
assume mathematics is consistent and mathematics is not consistent
derive a contradiction
so it is false that (mathematics is consistent and mathematics is not consistent)
therefore,
(mathematics is not consistent) or (mathematics is consistent)

Real breakthrough, there.

re Unstated assumption

Actually, not only is the circularity you are complaining about "present on the page", it's explicitly stated in the first 28 words of the abstract:

That mathematics is thought to be consistent justifies the use of Proof by Contradiction. In addition, Proof by Contradiction can be used to infer the consistency of mathematics

Hint: the proof must be interesting for some other reason you haven't thought of, in spite of the obviously unconvincing nature of that circularity.

Hewitt explicitly denied

Hewitt explicitly denied it's circular, when I raised this point somewhere above.

re: Hewitt explicitly denied

It's not a formally circular argument. Although the proof is formally valid it is also "unconvincing". That's because the (informal) justification for axiomatically allowing proof-by-contradiction in the formal system is a belief (in the sense of very strong suspicion) of consistency.

A hazard here is that by

A hazard here is that by prohibiting certain propositions as circular (supposing DL succeeds in doing this), one might have only prevented DL from acknowledging problems, rather than actually preventing problems. (Hence my anecdote about Feynman.) Informal reasoning about the self-proof is plainly invited, through the informal justification at the start, and at that level of reasoning it is explicitly circular — and consequently, the fact that the self-proof isn't prohibited as 'formally circular' is actually a reason to suspect the prohibition can be bypassed, in the same sense that the prohibition purports to bypass Godel.

Math self-proof of consistency not intuitively convincing because

The math self-proof of consistency is not intuitively convincing because it simply demonstrates that consistency is built deeply into the structure of mathematics through proof by contradiction.

Consequently, any proof of the consistency of mathematics that uses proof by contradiction is suspect.

Please don't hint.

The form of this hint provoked an emotional reaction of irritation in me. Probably because it doesn't outright state what is interesting, so rhoark can't know whether it is also interesting to him without investing a lot of energy.

I think that the interesting thing you refer to is that Direct Logic has a proof of its own consistency and also surprisingly appears to BE consistent. (from your post) http://lambda-the-ultimate.org/node/4784#comment-76354

(I really enjoy your posts because I can understand and make use of them)

Someone somewhere above

Someone somewhere above remarked more or less, as I recall, that you can take any logic and add an axiom that says "based on no premises, provably this logic is consistent", but doing so doesn't contribute anything to our beliefs about whether or not the logic is consistent. I would agree that the self-proof is vacuous in the limited sense that it doesn't make any positive contribution to our belief that the logic is consistent. Proofs that don't contribute to belief are, in a sense, the classical difficulty with antinomies: given an antinomy in classical logic, one can prove any proposition P, therefore knowing that there exists a proof of P ceases to enhance our belief in P. However, there is a stronger sense in which the self-proof is not vacuous: it does have some negative impact on my belief in the consistency of DL, because I'm suspicious of any logic that allows vacuous stuff like this.

Someone somewhere above

Someone somewhere above remarked more or less, as I recall, that you can take any logic and add an axiom that says "based on no premises, provably this logic is consistent", but doing so doesn't contribute anything to our beliefs about whether or not the logic is consistent

John, in the kinds of formal system that Goedel was thinking of, if you added an axiom of self-consistency you would, thereby, make the system inconsistent.

Goedel imagined only systems which contained sentences describing such things as "a function from integers to sentences".

With all that reflective power, Goedel showed how to combine a self-proof of consistency, or axiom of self-consistency, with the construction of an unprovable sentence, in order to derive a formal contradiction. All the "steps" of deriving the contradiction are in the formal system itself. Goedel's 2nd theorem is a recipe for doing that construction.

Goedel imagined only systems which contained sentences describing such things as "a function from integers to sentences" but Hewitt's grammar rules those out.

I think that a really clean construction of DL would make it more obvious that "Sentences" is not quite an ordinary set in DL.

Math allows many kinds of sentences, but not self-referential

Mathematics allows many kinds of sentences, but not self-referential ones. Direct Logic even allows some functions from integers to sentences, but it does not allow self-referential sentences.

re: Math allows many kinds of sentences, but not self-referential

Direct Logic even allows some functions from integers to sentences,

Example, please?

Earlier you said that sentences could only be constructed from primitive sentences and from earlier constructed sentences and now you suggest that sentences can be constructed from integers.

So?

Why do you believe that Godel's proof mechanism requires a self-referential sentence to exist?

Gödel imagined only systems

Gödel imagined only systems which contained sentences describing such things as "a function from integers to sentences".

On the face of it, this statement makes no sense. Surely Gödel imagined only systems which contained sentences describing such things as a "function from integers to integers"? To be more careful, the original paper dealt specifically with PM. The generalizations that came later still only assume that the systems include at least some fragment of arithmetic, e.g., PA. The 2nd incompleteness theorem does not suddenly require that the system includes reflection capabilities in its syntax.

I follow your arguments and agree that it is interesting that we can mimic the proof of the 2nd incompleteness theorem, follow Gödel's recipe as you say, using the built-in reflection features of DL. I think I can read your statement in this context in a way that makes sense, but it is still confusing.

I think we risk confusion by conflating the two versions of the theorem, and it seems to me that much of the talking past each other going on in this thread is due to precisely this conflation.

For example, I am unable to tell if you think the actual, precisely worded, 2nd theorem applies to DL or not: the wording of your posts makes it sound like you do not think so.

Should we not be a little more careful and precise in our language, given the risk of confusion that is so apparent in this thread?

re Gödel imagined only systems

For example, I am unable to tell if you think the actual, precisely worded, 2nd theorem applies to DL or not: the wording of your posts makes it sound like you do not think so.

Does this help?

I think the 2nd theorem implies, for DL, that DL can not prove a self-definable model of itself to be consistent (unless DL is inconsistent).

I believe that DL can self-prove its own consistency.

I believe that DL can prove that a model of DL contains a proof of self-consistency.

I believe that DL can not express a concept like "Prov(#P) => P" because it can not express the concept of a mapping from numbers to sentences within the theory itself.

We don't have "Prov(#P) => P" and we also don't have "~ Prov(#P) => |/- P".

DL can prove that its model can not prove P (by proving "~ Prov(#P)"), but it can't conclude from that that it, itself, can not prove P.

So DL can prove "~ Prov (#G)" where G is a Goedel sentence but it can't conclude "|/- G". So we can't combine the proof of "~ Prov (#G)" with the proof of self consistency to derive a contradiction that way.

We can't do it all in the model, either. DL trivially has "|- Consistent" and trivially has "Prov (#Consistent)" but it does not have a theorem of consistency for the model. It doesn't have "(forall x:numbers) ~(Prov (x) and Prov (#not(x)))"

Did you see Hewitt's post?

it can not express the concept of a mapping from numbers to sentences within the theory itself.

You kept repeating this after I pointed out that it wasn't true, but I figured you'd stop when Hewitt told you it wasn't true. See six inches up where he writes:

Direct Logic even allows some functions from integers to sentences, but it does not allow self-referential sentences.

All of the evidence says that types are there to block overtly self-referential sentences and any avoidance of Gödel is just due to lack of rules that would let you pin down precisely what reflected proofs do.

re Did you see Hewitt's post?

No, I didn't see that comment from Hewitt but thanks for pointing it out. I've asked him the direct question of what functions from integers to sentences he is thinking about.

Agreement?

It doesn't have "(forall x:numbers) ~(Prov (x) and Prov (#not(x)))"

Indeed, the "normal" 2nd theorem says precisely that if DL did have this, it would be inconsistent. So we are in agreement then?

I think I (tentatively) agree also with most everything else you said. That is, the things that need to be shown of Prov can be shown "all in the model"---as is the usual mode of operation for the 2nd theorem---and (perhaps) not outside it.

I say "tentatively" because Hewitt has now explicitly said DL can express functions from numbers to sentences (but only some of them), whereas you said it cannot. Does this change your mind on the outside-the-model proof not going through? I.e., was that a critical part of your argument? I'm fascinated by how this question will ultimately come out, but am not persuaded on one side or the other yet.

Edit: I see Matt made this point before I did.

re agreement?

I think so.

Hewitt has now explicitly said DL can express functions from numbers to sentences

Yeah, that's odd. Let's see what he says.

DL aims to have everything *but* self-referential sentences

Direct Logic aims to have everything *but* self-referential sentences.

So the grammar for a Sentence includes the following:
* If s is a Sentence and i is an identifier, then ≪lambda [i:σ]→ s≫ is a Lambda [σ]↦Sentence and i is a Term in s.
* If f is a Lambda [σ]↦Sentence and t is a σ, then f[t] is a Sentence
The above imposes typing constraints to prohibit taking a fixed point that could embed a spurious reference to a sentence in itself.

So it is OK for σ to be the type Integer in the above.
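Hewitt's typing constraint can be mimicked with a small runtime check. The following is a hypothetical Python sketch (the names `Lam` and `apply_lam` are mine, not DL's): instantiating a schema at an Integer is permitted, while handing the schema to itself is rejected.

```python
# Hypothetical sketch of the typing constraint: a Lambda [σ]↦Sentence may
# only be applied to a term of type σ, so σ = Integer is fine while
# self-application is rejected. Names here are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lam:
    param_type: type   # the σ in Lambda [σ]↦Sentence
    body: str          # stand-in for the sentence template s

def apply_lam(f, t):
    # f[t] is a Sentence only when t actually has type σ; this is the
    # check that blocks embedding a sentence-builder in its own output
    if not isinstance(t, f.param_type):
        raise TypeError("argument does not have type σ")
    return ("sentence", f.body, t)

int_schema = Lam(int, "P(i)")          # a Lambda [Integer]↦Sentence
sentence = apply_lam(int_schema, 5)    # allowed: 5 is an Integer
```

Self-application, `apply_lam(int_schema, int_schema)`, raises a `TypeError` here, which is the runtime analogue of the grammar refusing the fixed point.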

Hewitt, please explain

Alright, so let's go through this again. Here is a proof, in DL, that DL is inconsistent. Parts of the proof are only sketched. For example, unlike Goedel I don't go through and tediously work out an encoding for sentences as numbers -- I just assume it can be done. If this proof is not valid in DL because of the syntactic restrictions, please point out where the problem is, Hewitt.

Do you agree that in DL I can devise an encoding for sentences, as numbers:

Definition:

S_Encode : Sentence ⟶ ℕ

such that S_Encode[s] gives a Goedel number for sentence s

We can also encode proofs:

Definition:

P_Encode : Term ⟶ ℕ

such that P_Encode[p] gives a Goedel number for term p

So far I have not used any self-referential sentences, right?
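Indeed, nothing self-referential is needed for such encodings: S_Encode is ordinary recursion on syntax. A minimal Python sketch, using a toy sentence grammar of my own (not DL's), shows the shape:

```python
# Toy Gödel numbering by recursion on syntax; injective because the
# Cantor pairing function is injective. Grammar and names are invented
# for illustration, not taken from Direct Logic.

def pair(a, b):
    # Cantor pairing: a standard injective map ℕ × ℕ → ℕ
    return (a + b) * (a + b + 1) // 2 + b

# Sentences as nested tuples: ("atom", n), ("not", s), ("and", s, t)
def s_encode(s):
    tag = {"atom": 0, "not": 1, "and": 2}[s[0]]
    if s[0] == "atom":
        return pair(tag, s[1])
    if s[0] == "not":
        return pair(tag, s_encode(s[1]))
    return pair(tag, pair(s_encode(s[1]), s_encode(s[2])))
```

A P_Encode for terms would follow the same recursion-on-syntax pattern.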

I can define an effective proof checker:

Definition:

Proves : ℕ × ℕ ⟶ {0, 1}

such that Proves (P_Encode(P), S_Encode(S)) is 1 if (⊢ P S) and 0 otherwise.
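That Proves can be a total computable function is plausible because checking a purported proof is mechanical. A toy Python illustration follows (an invented Hilbert-style system, not DL's actual proof theory; checking here is done on decoded sentences, and composing with the encodings above would give the numeric version):

```python
# Toy decidable proof checker: a proof is a list of sentences, each line
# an axiom or obtained by modus ponens from two earlier lines. The axiom
# set and sentence representation are invented for this sketch.

AXIOMS = {
    ("atom", 0),
    ("implies", ("atom", 0), ("atom", 1)),
}

def checks(proof, goal):
    # returns 1 if every line is justified and the last line is goal
    for k, line in enumerate(proof):
        justified = line in AXIOMS or any(
            a == ("implies", b, line)
            for a in proof[:k] for b in proof[:k]
        )
        if not justified:
            return 0
    return 1 if proof and proof[-1] == goal else 0
```

The point is only that the check terminates on every input, so the corresponding numeric function is total.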

Can I not in fact prove, in DL:

Theorem(?):

(⊢ P S) ⇒ Proves (P_Encode(P), S_Encode(S)) = 1

So now let's construct a Goedel sentence.

I earlier thought that the syntax of sentences in DL prohibited the following, but now I think you say it is allowed. Can I define GoedelSchema as follows:

Definition(?):

GoedelSchema ≡ « λ i :  ℕ | ¬ ∃ [p ∈ Term] → Proves (P_Encode(p), i) »

Assuming I can define that, I assume DL also allows me to define this:

Definition:

GoedelSchemaNumber : ℕ → ℕ

GoedelSchemaNumber[x] ≡ S_Encode (GoedelSchema [x])

If I have taken a little bit of care with my encoding scheme, I can guarantee that it has fixed points and thus that it has a least fixed point.

I assume that nothing in DL prevents me from defining:

Definition:

G_number = LeastFixedPoint (GoedelSchemaNumber)

Now let us define:

Definition:

G : Sentence

G ≡ GoedelSchema [G_number]

We have a trivial theorem, now:

Theorem: 

G_number = S_Encode [G] = S_Encode [GoedelSchema [G_number]]

Let's prove Goedel's 1st theorem, now.

Goedel's First Theorem:

(¬ ∃ [p ∈ Term] → Proves(P_Encode [p], S_Encode [G]) = 1)
∧
(¬ ∃ [p ∈ Term] → Proves(P_Encode [p], S_Encode [¬ G]) = 1)

Proof:

Suppose p is a Term such that:

    Proves (P_Encode [p], S_Encode [G]) = 1

    (hypothesis for contradiction)

[1] ⊢ p G

    by definition of Proves

[2] ⊢ p ¬ ∃ [r ∈ Term] → Proves(P_Encode [r], S_Encode [G]) = 1

    a contradiction by definition of G

On the other hand, suppose q is a Term such that:

    Proves (P_Encode [q], S_Encode [¬ G]) = 1

    hypothesis for contradiction


[3] ⊢ q ¬ G

    by definition of Proves

[4] ⊢ q ¬ ¬ ∃ [r ∈ Term] → Proves(P_Encode [r], S_Encode [G]) = 1

    by definition of G

[5] ⊢ q ∃ [r ∈ Term] → Proves(P_Encode [r], S_Encode [G]) = 1

    eliminating the double negative

[6] ⊢ q G

    a contradiction by definition of Proves


Q.E.D.

The first theorem told us that we can't find a proof for G or for ¬ G without obtaining a contradiction. The theorem implicitly assumed that DL itself is consistent.

Have I, at any stage there, violated the syntactic restrictions of DL? Earlier I thought I had violated the syntactic restriction as soon as I define the banal looking, not obviously self-referential GoedelSchema (see above for the definition). Then I think Hewitt said, no, that's allowed.

So, now, we can apply Goedel's 2nd theorem, exploiting DL's theorem of consistency to derive a contradiction in DL:

Goedel's 2nd theorem (DL is inconsistent):

[1] G ⟺ ¬ G

    lemma of the first theorem

[2] ¬ ⊢ G

    by consistency

[3] ¬ ∃ [p ∈ Term] → Proves(P_Encode [p], S_Encode [G]) = 1

    by [2] and the definition of Proves

[4] G

    equivalence to [3]

[5] ¬ G

    a contradiction by [1]

Q.E.D.

So where's the part that violates the rules of DL, or is DL in fact inconsistent?

gurgitate

It seems to me that if you do the quantification on the Nat side of the pairing instead of quantifying over Sentences & Terms directly, you greatly reduce the area of things that can go wrong. And then you can restrict your attention to just a subset of DL that has the power of PA + the DL quotation/reflection mechanism. It seems to me that the first thing that could go wrong does go wrong, as I don't see how to define and prove Encode total.

BTW, would you mind elaborating how you set things up so that you can claim there to be a least fixpoint? I haven't seen (or don't remember) that trick. I would expect an explicit substitution scheme to setup the fixpoint.

re gurgitate

You can make a version of Encode that gives a least fixed point for GoedelSchemaNumber very directly and trivially. Assume you have Enc_prime, which is an encoding function, but you aren't sure whether it has a fixed point.

Then define:

Encode [x] ≡ 0 if x is the sentence GoedelSchema [0]

Encode [x] ≡ 1 + Enc_prime [x] otherwise

In that case, the sentence G is:

¬ ∃ [p ∈ Term] → Proves (P_Encode [p], 0)
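This trick is concrete enough to run. In the Python sketch below (`enc_prime`, `goedel_schema`, and the other names are illustrative stand-ins, not DL definitions), reserving code 0 for GoedelSchema[0] makes 0 a fixed point of GoedelSchemaNumber by construction.

```python
# Sketch of the shifted-encoding trick: code 0 is reserved for the
# sentence GoedelSchema[0]; every other sentence is shifted up by one,
# so injectivity is preserved and 0 is the least fixed point of
# GoedelSchemaNumber. All names here are illustrative stand-ins.

def enc_prime(sentence):
    # stand-in for any injective base encoding Enc_prime
    return abs(hash(sentence)) % (10 ** 9)

def goedel_schema(i):
    # the schema instantiated at i, as an opaque sentence object
    return ("goedel_schema", i)

def encode(sentence):
    if sentence == goedel_schema(0):
        return 0
    return 1 + enc_prime(sentence)

def goedel_schema_number(x):
    return encode(goedel_schema(x))
```

Then `goedel_schema_number(0)` is 0, matching G_number = 0 and making G the sentence ¬ ∃ [p ∈ Term] → Proves (P_Encode [p], 0).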