Mathematics self-proves its own Consistency (contra Gödel et al.)

The issue of consistency has come up in many LtU posts.

Some readers might be interested in the following abstract from Mathematics self-proves its own Consistency:

                   Mathematics self-proves its own Consistency
                          (contra Gödel et al.)

                             Carl Hewitt
                         http://carlhewitt.info

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.
In addition, Proof by Contradiction can be used to infer the consistency of mathematics 
by the following simple proof:
   The self-proof is a proof by contradiction.
      Suppose, to obtain a contradiction, that mathematics is inconsistent.
      Then there is some proposition Ψ such that ⊢Ψ and ⊢¬Ψ.
     Consequently, both Ψ and ¬Ψ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is consistent.
The above theorem means that the assumption of consistency
is deeply embedded in the structure of classical mathematics.

The above proof of consistency is carried out in Direct Logic [Hewitt 2010]
(a powerful inference system in which theories can reason about their own inferences).
Having a powerful system like Direct Logic is important in computer science
because computers must be able to carry out all inferences
(including inferences about their own inference processes)
without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent
because it contradicts the result of Gödel et al. that
if mathematics is consistent, then it cannot infer its own consistency. 
The resolution is not to allow self-referential propositions
that are used in the proof by Gödel et al.
that mathematics cannot self-prove its own consistency.
This restriction can be achieved by using type theory
in the rules for propositions
so that self-referential propositions cannot be constructed
because fixed points do not exist. 
Fortunately, self-referential propositions
do not seem to have any important practical applications.
(There is a very weak theory called Provability Logic
that has been used for self-referential propositions coded as integers,
but it is not strong enough for the purposes of computer science.) 
It is important to note that disallowing self-referential propositions
does not place restrictions on recursion in computation,
e.g., the Actor Model, untyped lambda calculus, etc.
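
To make that last point concrete, here is a minimal sketch in plain Python (one
illustration among many; this is not Direct Logic syntax): recursion recovered
without any self-referential proposition, via the strict-order fixed-point (Z)
combinator from the untyped lambda calculus.

# Z combinator: a computational fixed point, built from plain lambdas
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial defined without ever naming itself
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120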

The self-proof of consistency in this paper
does not intuitively increase our confidence in the consistency of mathematics.
In order to have an intuitively satisfactory proof of consistency,
it may be necessary to use Inconsistency Robust Logic
(which rules out the use of proof by contradiction, excluded middle, etc.).
Existing proofs of consistency of substantial fragments of mathematics
(e.g. [Gentzen 1936], etc.) are not Inconsistency Robust.

A mathematical theory is an extension of mathematics
whose proofs are computationally enumerable.
For example, group theory is obtained
by adding the axioms of groups to Direct Logic.
If a mathematical theory T is consistent,
then it is inferentially undecidable,
i.e. there is a proposition Ψ such that
¬⊢TΨ and ¬⊢T¬Ψ,
(which is sometimes called incompleteness)
because provability in T
is computationally undecidable [Church 1936, Turing 1936].

Information Invariance is a
fundamental technical goal of logic consisting of the following:
  1. Soundness of inference: information is not increased by inference
  2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance
even when information is inconsistent
using inconsistency robust inference.
But that mathematics is inferentially undecidable (incomplete)
with respect to the proposition Ψ above
does not mean incompleteness
with respect to the information that can be inferred
because it is provable in mathematics that ¬⊢TΨ and ¬⊢T¬Ψ.

The full paper is published at the following location:
Mathematics self-proves its own Consistency

I don't get it

How can Encode depend on GodelSchema when GodelSchema already depends on Encode? And why do we bother with defining the whole schema if we can just write down a Godel sentence?

Alternate approach

Here's a sketch of how I'd do it. First, I'd enumerate formulas instead of sentences:

Decode : ℕ ⟶ ℕ ⟶ Sentence 

-- Decode[n] is the nth formula (ℕ ⟶ Sentence)

Then, we enumerate proofs and (for expedience) let our Proves predicate take a formula and parameter instead of enumerating Sentences:

Proves: ℕ ⟶ ℕ ⟶ ℕ ⟶ {0,1}

-- We need to show that Proves[Prf][Frm][Parm]=1 iff Prf is
-- a valid proof encoding that proves Decode[Frm][Parm]

GN : ℕ
GN ≡ Quote { « λ i : ℕ | ¬ ∃ [prf ∈ ℕ] → Proves[prf][i][i] = 1 » }

G : Sentence
G ≡ Decode[GN][GN]
  = ¬ ∃ [prf ∈ ℕ] → Proves[prf][GN][GN] = 1

In the above, I have used Quote{} instead of Encode. They're similar, but Quote is in the meta-language. It means produce the Nat that describes this formula.

Then Godel's 1st goes through as you've shown:

Suppose ¬G.  Then,

    ∃ [prf ∈ ℕ] → Proves[prf][GN][GN] = 1

Then by our choice of Proves, we can prove:
 
    Decode[GN][GN]

But that's just G, so we're at a contradiction.  Thus, G.

We've thus proven G, which says that there is no Nat encoded proof of the quoted equivalent of G.

Working on the arithmetic side, as above, makes it a little harder to push Godel's 2nd through. What we have to do is show that we've formalized enough of the Proof system inside our Nat encoding of proofs that we can quote the above argument to get a proof of Quote{G}. A statement of consistency will then let us convert that into a Nat encoded proof of G and obtain a contradiction.

The only step I have particular concerns about is the one where we show the correctness of Proves inside the quoted proof of the above. That is, lifting the DL reflection mechanism into quoted logic. I think we can, though, and that would mean DL is inconsistent.
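
Here's the same diagonal trick in executable form, as a toy (plain Python, not
Direct Logic). Proves is stubbed out, since actually encoding DL proofs is
exactly the step in question, and the unbounded ∃ is truncated to a finite
search so the snippet runs:

formulas = []                      # our (partial) enumeration of formulas

def quote(src):
    """Return the index ("Goedel number") of a formula source string."""
    formulas.append(src)
    return len(formulas) - 1

def decode(n, p):
    """Decode[n][p]: evaluate the nth formula at parameter p."""
    return eval(formulas[n])(p)

def proves(prf, frm, parm):
    """Stub standing in for a real checker over encoded proofs."""
    return False

GN = quote("lambda i: not any(proves(prf, i, i) for prf in range(1000))")
G = decode(GN, GN)                 # "no (bounded) proof proves me of myself"
print(G)                           # True, given the stub Proves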

Are sentences countable?

Are sentences countable, given that there are uncountably many propositions (one for every real number)? So if every proposition can be reified ...

Doesn't matter

The above doesn't require that Decode be surjective, only that it cover the subset needed to push the argument above through. And that subset is enumerable (e.g., no mention of reified Propositions).

The only step I have

The only step I have particular concerns about is the one where we show the correctness of Proves inside the quoted proof of the above.

I share this concern, and I have the same problem with "by the definition of Proves" in Thomas Lord's sketch. Since you think it's possible, do you mind elaborating a little? Maybe just a general approach or two that might pay off?

Thanks to these sketches I at least find it plausible that direct logic can have a sentence that refers to itself by way of an unambiguous self-specification. But that self-specification seems to rely on reflection through Proves, a self-specification of (a subset of) direct logic. I don't see a reason to believe direct logic has enough power to recognize itself this way.

I'm very inexperienced with this, so don't let my doubt discourage you.

I think it's consistent

I was thinking about that step in greater detail to respond to you here and I kept hitting trouble. Stepping back, I already gave an argument for why Direct Logic's reflection mechanism might be provably consistent -- the interpretation of (A, B |- C, D) as (A and B --> C or D). We can interpret (|-p A) in a proof-irrelevant way as (|- A). While I don't know if that argument would work for all of Direct Logic (which I don't have a complete description of), it would seem to work for the fragment here. Thus, I don't think it's very likely that this reflection mechanism is introducing inconsistency in the fragment we've seen.

This also shows why I don't think a Direct Logic proof of (¬ ∃ A:Statement. (⊢ A) ∧ (⊢ ¬ A) ) can be reasonably called a proof of consistency. We can interpret all nested turnstiles as ordinary implication and make sense of all Direct Logic theorems that way, such that the interpretation of the "statement of consistency" is the same as (¬ ∃ A:Prop. A ∧ ¬A ).

(⊢ A)∧(⊢¬A) means inconsistency; bug in above post

(⊢ A)∧(⊢¬A) means inconsistency; there is a bug in the above post.

Thanks

Fixed. Can you think of a rule in Direct Logic that would be unsound with respect to this implication interpretation? I.e., we interpret terms as the values they denote and interpret both Sentences and Propositions as either T or F.

I haven't thought this through too carefully.

re Hewitt, please explain

At the risk of being a noodge I'm a little disappointed that this question hasn't yielded a clear, direct reply. We've been repeatedly assured that a syntactic restriction in DL prevents the 2nd theorem's construction of a proof of inconsistency from going through. Here has been given a pretty detailed sketch of what such a proof might look like. Where specifically does the syntactic restriction kick in and how?

Earlier I apparently misunderstood the syntactic restriction. I (apparently incorrectly) thought it prohibited defining GoedelSchema, a mapping from natural numbers to sentences that include those numbers. That restriction would prevent the 2nd theorem's construction of inconsistency from succeeding. And in my way of looking at things, the mistaken restriction would make good "philosophical" sense. By analogy, the characters in a book (the natural numbers) don't have the capacity to write any part of that book (the sentences).

Well, that mistaken interpretation is wrong so what's the right one?

You might try putting your proposal in Direct Logic

You might try putting your proposal in Direct Logic in as succinct a way as you can on how you propose to construct a self-referential sentence, including keeping track of what is being done in mathematics versus in a mathematical theory (the restrictions are different).

re: You might try putting your proposal in Direct Logic

You might try putting your proposal in Direct Logic in as succinct a way as you can on how you propose to construct a self-referential sentence.

Surely you have noticed that you have not provided anyone with a formal enough specification of DL for that to be possible.

Also, is there some reason you are unable to answer the damn question?

Direct Logic has tight restrictions

Direct Logic has tight restrictions to bar self-referential sentences. Any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

re: Direct Logic has tight restrictions

Direct Logic has tight restrictions to bar self-referential sentences. Any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

Please look over the sketch of a DL self-proof of inconsistency and point out where you think that proof requires a "self-referential sentence" of the sort prohibited.

At this point I am beginning to assume that you can not point out any problem in the proof because DL is, in fact, inconsistent.

Direct Logic grammar has to be obeyed exactly

The Direct Logic grammar has to be obeyed exactly:
* Everything used has to be justified by a clause in the grammar (quote which clause)
* You can't add any extra clauses to the grammar

functionals which are (numbers -> sentences)

You can add by axiom any that don't create the kind of self-reference needed to generate a contradiction. I suspect that's what he means.

The shortfall of rigor seems

The shortfall of rigor seems to me to be a really fundamental problem here, because the claimed strategy rests on really technical grounds, so that the devil (or, at least, Godel) is in the details.

A way of thinking about it (not essential to the preceding observation): Some sentences lead to problems. I suggest that not all self-referential sentences lead to problems. Distinguishing between those that do and those that don't would probably be comparable, at some deep level, to solving the halting problem. Therefore the only way to decidably eliminate all the problem sentences is to eliminate a larger class of sentences that includes all the problems and some other, non-problematic sentences. Poetically not unlike retreating from Chomsky type 0 grammars to Chomsky type 1 grammars; some power has been lost. I would like to know what power has been lost. (This differs from my own nascent approach, which is to try to transform logical paradoxes into non-halting, since programmers already live with the halting problem.)

A mathematical theory cannot enumerate its own theorems

A mathematical theory cannot enumerate its own theorems. The closure property of a mathematical theory is part of the overarching framework of mathematics. See page 12 of Mathematics self-proves its own Consistency.

check me on this...

Tom? I may be remembering this wrong, but as I remember it I have to question what you wrote above.

You say that Godel showed that a self-proof of consistency, or an axiom of consistency, leads to a contradiction (inconsistency). I had believed that the self-proof or axiom of consistency led to a demonstrable *incompleteness*, i.e., consistency and completeness are complementary qualities of which you can have at most one. Did you mistype, or did I remember it wrong?

re check me on this....

Ray, could you please look over my post "Hewitt, please explain..."?

It gives a pretty detailed sketch of Goedel's first proof as applied to DL.

Then it combines that result with DL's theorem of self-consistency in order to derive a contradiction -- that's Goedel's 2nd theorem.

The question is: What step in this proof (if any) is ruled out by DL's syntactic rules? I thought for a while that it would not be legal to define what the proof names GoedelSchema but now Hewitt seems to say otherwise. So I used Goedel's 2nd to provide a self-proof of inconsistency in DL.

Let's see what he says.

re: Please don't hint.

provoked an emotional reaction of irritation in me.

Sorry. I only hinted because the comment I was replying to was based on a pointedly unkind reading of Hewitt's text. I think that if a person suspects Hewitt has written something really, really off the rails, the first thought that person ought to have is that they themselves are not reading Hewitt correctly. Harsh criticisms should be reserved for more generous readings, usually.

I'll unpack the hint:

Hewitt is describing a formal system for mathematics. A useful intuition to have about "formal systems" is that given a formal system, you could write a computer program based on its rules. The computer program could run forever, printing all of the valid proofs in the system, starting with the shortest ones and continuing to longer and longer proofs. The system is formal enough to allow that level of automation.
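
That intuition, as (hypothetical) code: a minimal Python sketch that enumerates
candidate derivations from shortest to longest and keeps the ones a checker
accepts. The alphabet and is_valid_proof are stand-ins, not Hewitt's actual
rules:

from itertools import count, product

SYMBOLS = "()⊢¬∧∨Ψ01"              # stand-in alphabet for the formal language

def is_valid_proof(candidate: str) -> bool:
    return False                   # stub: the system's checking rules go here

def print_all_proofs():
    for length in count(1):        # shortest candidates first, then longer
        for tup in product(SYMBOLS, repeat=length):
            candidate = "".join(tup)
            if is_valid_proof(candidate):
                print(candidate)   # runs forever, printing each valid proof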

Hewitt's system is powerful enough to talk about the natural numbers. For example, if you wanted to, you could take all of the proofs in the famous number theory book "An Introduction to the Theory of Numbers" by Hardy and Wright, and you could express those proofs in Hewitt's system with no difficulty. They would look and act just like you are used to. Hewitt's system is intuitively familiar in that way.

Hewitt's formal system for math ("Direct Logic" with suitable axioms) has some limited ability to talk in general about the propositions that it can prove and about the "sentences" in the formal language of the system. The system can talk a little bit about itself, in that limited sense.

Using that ability to talk about its own propositions, it so happens that Hewitt's system contains a formally valid proof of its own consistency. The proof doesn't really give us any deep reason to believe the system is consistent or isn't consistent... the key thing is just that it's formally valid. Cute but...

Goedel's "2nd incompleteness theorem" says very roughly that if a formal system can handle number theory, and that system can prove itself to be consistent.... then that system is actually inconsistent.

So superficially, you might think:

"Well, Hewitt's system can do number theory."

"And Hewitt's system proves itself consistent."

"So according to Goedel, Hewitt's system must be inconsistent."

And this is why what Hewitt is saying is interesting. Because as it turns out....

Goedel's proof of the 2nd theorem is a recipe. The recipe tells you how to start with a formal system that can prove its own consistency, and to conjure up, in that system, a proof of a self-contradictory statement.

This is the new and interesting thing: In Hewitt's system, Goedel's recipe doesn't work.

Other people besides Goedel have given some famous recipes for deriving a contradiction from a system that can prove its own consistency. Those recipes don't work either -- not for Hewitt's formal system. (I won't try to rehash in detail, right here, why Goedel's recipe won't work but the gist is that certain kinds of formal sentence, that Goedel's recipe requires, aren't permitted by the formal grammar for Hewitt's system.)

Most of the paper is taken up with looking at a kind of "history of ideas" around Goedel and how people interpreted his results. Hewitt is kind of pointing out why nobody thought of this before although Wittgenstein was kind of on the right track. The historical discussion also talks about the kind of psychological concepts people have used to think about "this stuff" and so maybe some suggestion about how to think about Hewitt's system.

Thomas, you have it almost exactly right!

Thomas, you have it almost exactly right!

Just one little niggle:

Theorem ⊢Mathematics is Open
Proof. Suppose, to obtain a contradiction, that it is possible to prove closure, i.e., there is a provably computable total procedure Proof that enumerates the theorems of Mathematics

As a consequence of the above, there is a provably total procedure ProvableComputableTotal that enumerates the provably total computables that can be used in the implementation of the following procedure:
Diagonal[i] ≡ ⌊ProvableComputableTotal[i]⌋[i]+1

However,
⊢ ComputableTotal[Diagonal] because Diagonal is implemented using provably computable totals
⊢ ¬ComputableTotal[Diagonal] because Diagonal is a provably computable total that differs from every other provably computable total.

Torkel Franzén [2004] argued that mathematics is inexhaustible because of inferential undecidability (“incompleteness”). The above theorem that mathematics is open provides another independent argument for the inexhaustibility of mathematics.

your niggle

Let me try to translate from Hewitt-speak to Tom-speak a little bit.

Let's think about all of the computable sequences of 1s and 0s, including the sequences that don't terminate. Every program that we can prove either terminates, or keeps on spitting out more 1s and 0s, corresponds to some "provably computable" sequence of 1s and 0s.

A program that prints a few 1s and 0s and then hangs forever does not correspond to a "provably computable" sequence of 1s and 0s.

A program that prints 1s and 0s and happens to never hang -- but none of us is able to prove it never hangs -- well, the output of that program (g*d knows) is computable, but unless we can find another way to print the same sequence it isn't provably computable.

A sequence is "provably computable" if we have a non-hanging computer program that prints the sequence, and we can prove that the program doesn't hang.

Some smart ass comes along and says "I have a formal system for proving stuff about these computable sequences. My formal system will generate proofs, from shortest to longest, identifying one program after another that prints a provably computable sequence. Every provably computable sequence is printed by a program of finite length with a proof of finite length to go with it. Eventually, just by exhaustively checking all the strings of length N, then of length N+1, and so forth -- my formal system will generate all the programs that provably generate sequences of 1 and 0 without hanging. So here is a theory of all the computable sequences".

A smarter ass comes along and makes a diagonalization argument. He writes a new program that takes as its input the list of proofs from the first program. The proofs from the first program could be numbered. Here is the first proof that algorithm A produces a computable sequence. Here is the second proof that algorithm B produces a computable sequence. And so forth.

And the smarter ass says here is a new algorithm: it takes your list of proofs as input. For proof #N in your sequence, this new algorithm prints either a 1 or a 0 --- whichever is different from what your algorithm #N would print.

And the smarter ass's meta-program pretty trivially seems to print a sequence of 1s and 0s without terminating. It prints a "provably computable sequence". But on the other hand its output is different from all the sequences printed by the first program (isn't it?) -- so that first program must not be able to print all of the provably computable sequences after all!

And, of course, we could go ahead and add in the second program as a patch to the first program. So now we have all the sequences of the first program PLUS this new sequence from the second program. But even if we do that, won't someone come along and do the same rotten trick again, making *once again* a provably computable sequence that differs from all the sequences in our existing list?

No matter what happens we can always construct a new program that computes a provably computable sequence that none of our earlier programs computed. In that sense, maths is inexhaustible.

It's not too mystical, though.

Let's start with a formal system X that claims to list all the algorithms for provably computable sequences. We know that's a bogus claim, so we make this diagonalized algorithm, Diag[X], that differs from all of the computable sequences proved by X in at least one digit.

System X' is the same as system X except that it includes Diag[X] as the 0th element of the list of provably computable algorithms emitted by X'.

System X'' is the same as system X' except that it includes Diag[X'] as the 3rd element of its list.

System X''' is the same as system X'' except that it includes Diag[X''] as its 6th element.

In this way we can see that the set of "provably computable" sequences is "open". We have inexhaustible and in fact uncountable supplies of new axioms we might pick from to add.
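
The rotten trick itself fits in a few lines. A toy Python sketch (the finite
list of lambdas is a hypothetical stand-in for X's infinite enumeration, which
is the only part that can't literally be written down):

def diagonal(enumerated_programs):
    """A bit sequence differing from program #n at position n, for every n."""
    for n, program in enumerate(enumerated_programs):
        yield 1 - program(n)       # flip whatever the nth program prints at n

# Toy stand-in for X's list of provably computable sequences
X = [lambda n: 0, lambda n: n % 2, lambda n: 1]
print(list(diagonal(X)))           # [1, 0, 0]: differs from each row on the diagonal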

Most of the paper is taken

Most of the paper is taken up with looking at a kind of "history of ideas" around Goedel and how people interpreted his results.

Speaking of my hard-won insights into how to not sound cranky: It's a capital mistake to use historical discussion in explaining one's ideas. The temptation can be pretty overwhelming, I know: Finding yourself in a conceptually unconventional place, you puzzle over what is interfering with others understanding your work, you find it in the historical evolution of ideas, and you think, ah yes of course, if only people understood this they would realize that I'm not a crank but just coming from a different perspective that they could have seen too if they hadn't been trapped by this historical sequence. This impulse must be ruthlessly suppressed. The only circumstances in which it would be worth people's time to study this sort of history would be after they're already thoroughly convinced you're right six ways from Sunday. Which isn't going to happen at all within the scope of your first paper, or possibly even for several papers thereafter. And maundering on how people supposedly made mistakes in the past is a very common pattern for people who actually are cranks, so by adopting it you camouflage yourself as a crank. When presenting a radical idea, do so in an exquisitely plain, objective, clinical way.

“Those who do not learn from history are doomed to repeat it”

“Those who do not learn from history are doomed to repeat it”

And one of the lessons to be

And one of the lessons to be learned from history is, that there's such a thing as the wrong moment for a lesson in history.

No implicit assumptions in Math self-proof of consistency

The self-proof of the consistency of mathematics at the beginning of this forum topic does not have implicit assumptions. Classical mathematics allows proof by contradiction absolutely.

Of course, if a contradiction were to be found in mathematics then everything would be provable in mathematics. Over the centuries, humans have debugged mathematical arguments to the point that mathematics is thought to be consistent. Good luck finding an inconsistency in the mathematics (⊢) of Direct Logic!

Perhaps it's those who do

Perhaps it's those who do not learn from the future who are doomed to repeat it....

Are you describing proof by

Are you describing proof by contradiction? :)

Sounds more like reductio ad

Sounds more like reductio ad absurdum to me.

Self-referential sentences also tripped up Alonzo Church

[Church 1931] attempted to construct logical foundations on the basis of the lambda calculus. However, using self-referential propositions, Church's students Kleene and Rosser were able to show Church's system to be inconsistent. Consequently, Church gave up using the lambda calculus for logical foundations and only used the lambda calculus for computation.

You're leaving out an

You're leaving out an important step in the historical process, in which Church and Rosser showed that a subset of the logic is consistent.

Lack of rigor seems a show-stopper for you, but I'm nonetheless curious — what self-referential sentence do you see as involved in the Richard Paradox?

No substantial part of logic of Church [1932] proved consistent

There may be some confusion: No substantial part of the logical part of Church [1932] was proved consistent. Later on, the Church-Rosser theorem [1936] showed a kind of non-logical "consistency" for the computational part of the lambda calculus in the sense that it doesn't matter in which order reductions are performed. Of course, the Church-Rosser kind of computational "consistency" should not be confused with logical consistency.

You might try formulating the Richard Paradox in Direct Logic and see how far you get ;-) A goal of Direct Logic is to bar all of the monster sentences for the known paradoxes.

The computational part is

The computational part is part of the logic, and it's substantial. If you don't understand how that proof fit into consistency and into Hilbert's program, you don't understand the history nearly as well as you think you do.

You're the one making an extraordinary claim here. Even your allies say your paper isn't as rigorous as it should be, and your strategy is technical so that the Devil is in the details and a lack of rigor is fatal to your case: if you don't have a rigorous treatment, you have nothing but claims. And when you ask someone else to "try", you're basically asking them to fill in for themselves whatever you've left out. Asking them to do what you should have done to justify an extraordinary claim. And sure enough, when I ask a specific question, your response is to tell me to go do your work for you. I'm giving you an opportunity to explain your work. The Richard Paradox was used in showing inconsistency of Church's logic, and you say that you've made such things impossible by prohibiting self-referential sentences. Very well; explain what it is about the Richard Paradox that you're making impossible. If you can't explain something simple like this about your strategy, you weren't ready to go public with it yet.

More confusion about inference versus computation

According to Cantini:

In the twenties and in the early thirties, the orthodox view of logic among mathematical logicians was more or less type- or set theoretic. However, there was an effort to develop new grand logics as substitutes for the logic of Principia Mathematica. These frameworks arose both as attempts to recover the simplicity of the type-free approach, as derived from the so-called naive comprehension principle, as well as in order to satisfy meta-mathematical needs, such as the clarification of fundamental concepts underlying the notions of “formal system,” “formalism,” “rule,” etc.
...
However, the theories of Curry and Church were almost immediately shown inconsistent in 1934, by Kleene and Rosser, who (essentially) proved a version of the Richard paradox (both systems can provably enumerate their own provably total definable number theoretic functions). The result was triggered by Church himself in 1934, when he used the Richard paradox to prove a kind of incompleteness theorem (with respect to statements asserting the totality of number theoretic functions).
...
The reason for the inconsistencies was eventually clarified by Curry's 1941 essay. There he distinguishes two basic completeness notions: a system S is deductively complete, if, whenever it derives a proposition B from the hypothesis A, then it also derives the implication A⇨B (deduction theorem or introduction rule for implication); S is combinatorially complete if, whenever M is a term of the system possibly containing an indeterminate x, there exists a term (Church's λ[x]→ M) naming the function of x defined by M. Curry then remarks that the paradox of Kleene-Rosser arises because Church's and Curry's systems satisfy both kinds of completeness, thus showing that the two properties are incompatible.
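
For readers wondering why the two completeness properties are incompatible, the
standard Curry-style sketch (textbook material, stated here in the notation of
the quote above) goes: by combinatorial completeness, for any proposition B
there is a term C ≡ λ[x]→ (x(x) ⇨ B), so P ≡ C(C) satisfies P = (P ⇨ B).
Assuming P, we get P ⇨ B and hence B; by deductive completeness, discharge the
assumption to obtain P ⇨ B outright, which is P itself; from P and P ⇨ B,
conclude B. Since B was arbitrary, everything is provable.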

Non sequitur

That remark is a non sequitur, with no bearing on anything I said.

FOL can't prove there are no nonstandard Peano models

First-Order Logic cannot prove that there are no nonstandard Peano models. In particular, first-order logic cannot prove that there are no infinite numbers in a model of the first-order Peano axioms.

In this way, Direct Logic is stronger than First-Order Logic. See page 8 of Mathematics Self-proves Its Own Consistency.

Many a problem in CS can be solved using a level of abstraction

If you had read page 8 in the reference linked in my post, you would know another way in which a problem in CS can be solved using another level of abstraction. First-order logic cannot prove that there are no infinite numbers in the standard model of the full second-order Peano axioms. (All of the models of the full second-order Peano axioms are isomorphic by a unique isomorphism.)

It looks like we agree about a limitation of First-Order Logic

So it looks like we agree that: First-Order Logic cannot prove that there are no infinite numbers in the standard model of the full second-order Peano axioms. (All of the models of the full second-order Peano axioms are isomorphic by a unique isomorphism.)

Other issues are more complicated. You might want to actually read the paper Mathematics Self-proves Its Own Consistency.

Elements of a model are not necessarily natural numbers

A model of a first-order theory of Natural Numbers may contain elements that are not natural numbers. See page 8 of Mathematics Self-proves Its Own Consistency

Categoricity of Natural Numbers and Real Numbers

[Dedekind 1888] and [Peano 1889] proved that models of the second-order axioms of Natural Numbers and Real Numbers are unique up to isomorphism with a unique isomorphism.

Direct Logic Proves Categoricity of Natural Numbers and Reals

Direct Logic proves categoricity of natural numbers and real numbers. For example, see end note lxxv of Formalizing common sense for scalable inconsistency-robust information integration

Nonstandard models are a security hole in the shared understanding of natural numbers and real numbers between humans and computer systems.

Categoricity of Natural Numbers is fundamental to CS

As shown for example in Mathematics self-proves its own consistency, Categoricity of Natural Numbers is not first-order provable. Consequently, Direct Logic uses the original Dedekind/Peano proof which is not first-order.

For security reasons, it is important that humans and computer systems agree *exactly* on the Natural Numbers because they are fundamental to understanding the operation of computers. It would not be good if it were to be possible to smuggle in an infinite Natural Number.

Not sure if I can add anything to this but...

1) Godel's paper does not claim that "mathematics is incomplete". In fact, Godel proved that first-order logic is complete.

2) It is false that Wittgenstein or anybody else proved that self-referential statements necessarily lead to contradictions. Self-referential statements are no different than any other kind of statement; that's one of the key insights that comes when you understand exactly what Godel-numbering is doing... it's just giving you a way to index a particular statement out of all possible statements. And if the index happens to match the current statement, then you have a self-referential statement. This is so common in computing (recursion) that it's not even exotic anymore.

3) From (2), we can see that sufficient conditions for Godel's proof in a given formal system are that (a) there are infinitely many statements within the formal system and (b) it is possible - within the given formal system - to encode any given statement in the language of the formal system itself. Thus, systems which can be complete and consistent are those which either have only a finite number of possible statements, or those which have an infinite number of possible statements but those statements cannot be encoded within the formal system itself.

4) Unfortunately for someone wishing to escape the implications of incompleteness on "real" math, the rules of basic arithmetic on the integers are sufficiently powerful to satisfy the conditions given above - namely, there are infinitely many statements, and a simple encoding system can be used to map the natural numbers to statements about the natural numbers (a minimal example of such an encoding follows below). Godel's original numbering is just one particular example of such a mapping.
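
A minimal sketch of such an encoding, in Python (any injective map will do;
Godel's prime-power scheme is just one choice, and this base-257 byte encoding
is purely illustrative): the UTF-8 bytes of a statement are read as digits of a
numeral.

def encode(statement: str) -> int:
    """Map a statement to a unique natural number."""
    n = 0
    for b in statement.encode("utf-8"):
        n = n * 257 + (b + 1)      # digits 1..256: no leading-zero collisions
    return n

def decode(n: int) -> str:
    """Recover the statement from its number."""
    bs = bytearray()
    while n:
        n, d = divmod(n, 257)
        bs.append(d - 1)
    return bytes(reversed(bs)).decode("utf-8")

g = encode("0 = S(0) -> 0 = S(0)")
assert decode(g) == "0 = S(0) -> 0 = S(0)"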

In summary, the incompleteness theorems do not apply to "mathematics" - they apply to any formal system that satisfies the properties which permit the construction of a Godel sentence in that formal system (by whatever means). Among the sufficient conditions for constructing a Godel sentence in a formal system are that the formal system has infinitely many possible statements and that it is possible to encode those statements within the formal system itself - conditions satisfied by the natural numbers and the basic arithmetic operations.

Clayton -

It is unreasonable to suppose that sentences can be enumerated

Why is it reasonable to suppose that there is an enumeration of all sentences? A grammar of sentences does not provide such an enumeration. Also, sentences are uncountable, because the real numbers are uncountable and there is a sentence for every real number. See Mathematics self-proves its own consistency

Well, I wasn't talking about

Well, I wasn't talking about "enumerating all sentences"; but there's no reason to suppose that enumerating sentences (say, in lexicographic order) is any more problematic than enumerating natural numbers or the primes.
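
To make that concrete, here is a plain-Python bijection from naturals to
strings over a finite alphabet, in shortlex order (the three-letter alphabet is
a stand-in for a real grammar's symbols):

ALPHABET = "abc"
K = len(ALPHABET)

def nth_string(n: int) -> str:
    """The nth string in shortlex order: '', 'a', 'b', 'c', 'aa', ..."""
    s = []
    while n:
        n -= 1                     # bijective base-K numeration
        n, d = divmod(n, K)
        s.append(ALPHABET[d])
    return "".join(reversed(s))

print([nth_string(i) for i in range(8)])
# ['', 'a', 'b', 'c', 'aa', 'ab', 'ac', 'ba']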

"there is a sentence for every real number"

This is flatly false. Please write out for me the sentence that names the Chaitin constant for a specific reference machine U.

A real by any other name computes the same

Of course, there is a sentence that defines Chaitin's constant Ω; just as there is a sentence that defines π, which is the ratio of a circle's circumference to its diameter.

You can't prove that a real number doesn't exist simply because all real numbers cannot be enumerated. There is no denumerable lexicographic order for the real numbers although they can be well-ordered by the axiom of choice.

"there is a sentence that

"there is a sentence that defines Chaitin's constant Ω"

A sentence is ordinarily understood to be finite in length. By construction, there is no finite-length description of Omega.

"You can't prove that a real number doesn't exist"

I wasn't trying to. I was pointing out that Omega exists in the real numbers and there is no sentence which describes it, contrary to your claim that "there is a sentence for every real number".

counting confusions

A sentence is ordinarily understood to be finite in length. By construction, there is no finite-length description of Omega.

Let's unravel this a bit.

Hewitt asserts that in DL there are uncountably many sentences, pointing out that there are uncountably many reals and at least one sentence about each such real.

You, claytonkb, correctly pointed out that in the ordinary historic sense of the term of art, "sentence" refers to a finite string drawn from a countable or, without loss of generality, finite alphabet.

Thus, you correctly implied, claytonkb, that "uncountably many sentences" is kind of an oxymoron.

It's only an oxymoron speaking with reference to dominant tradition, though. Hewitt is in some sense arguing for liberation from that particular tradition.

Hewitt replied with:

Of course, there is a sentence that defines Chaitin's constant Ω; just as there is a sentence that defines π, which is the ratio of a circle's circumference to its diameter.

He is correct but he hasn't actually responded to your point. There is some talking past one another here.

You replied:

A sentence is ordinarily understood to be finite in length. By construction, there is no finite-length description of Omega.

You missed, there.

Of course there is a finite length description of Omega. There's a formalizable one on the Wikipedia page, for example.
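
(For concreteness, the standard finite definition: for a prefix-free universal
machine U, Ω_U = Σ 2^(-|p|), summed over all programs p on which U halts. That
definition is a perfectly finite sentence; it's the individual bits of Ω that
are not computable.)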

Hewitt's mistake in meeting your point was that his examples of Omega and Pi have finite definitions, and the existence of reals that have finite definitions doesn't change the underlying fact that the ordinary sense of the term of art "sentence" refers to finite sentences over finite or countable alphabets.

Hewitt's assertion is that there *ought* to be uncountably many sentences because there are uncountably many reals. He relies on us agreeing that there ought to be mathematical sentences that mention, as constants, reals that can not be finitely defined.

So let's examine it from that understanding. Ought we agree with Hewitt about the idea we should have an uncountable number of sentences?

Goedel's central construction of an indirectly self-referential sentence can be reinterpreted in this context as a proof about a subset of DL that deals only in finite sentences.

Goedel shows us how to construct, in DL, a finite sentence that is true and that is indirectly about the finite sentences and finite proofs of DL. This sentence, G, will lack any finite proof in DL (if DL is consistent). And additionally, because of the indirect self reference, we will happen to know that G is true.

Hewitt is interested in Goedel's second theorem which says that if a theory is strong enough to construct G *and* strong enough to prove its own consistency, then it trivially proves itself inconsistent. After all, the theory can prove that it can not prove G and, thereby, it has proved G. That's how Goedel's 2nd works in systems with only finite sentences.

DL contains a (trivial) self-consistency proof. And it can encode a kinda-sorta G that refers only to the finite sentences and finite proofs of DL.

So what does Goedel's 2nd tell us about DL?

Reinterpreted for the brave new world of DL, Goedel's 2nd tells us that DL contains no *finite* proof of G, but there is still the open possibility that DL contains a proof of G that can not be finitely specified.

Even assuming DL could be formally perfected per its intentions, I'm not sure what is usefully added by contemplating these uncountably many infinite sentences and proofs. Perhaps there is a practical application in computer-assisted math I'm not seeing clearly? Perhaps not.

"Of course there is a finite

"Of course there is a finite length description of Omega. There's a formalizable one on the Wikipedia page, for example."

Well, perhaps we're talking past each other re. the word "description." By *description* of Omega, I mean one of the following:

1. A writing-out of Omega in its entirety, e.g., 3/2 = 1.5
1a. This could even be a "human-readable" writing out, such as 1/7 = 0.142857142857...
2. An algorithm which could write out Omega in computable time
3. An algorithm which could check a witness of Omega in computable time

By construction, there is no finite-length description of Omega, where "description" is defined in one of the senses 1-3 above. I checked Wikipedia's article and I see no description of Omega that purports to satisfy any of 1-3, either.

describing Omega

By *description* of Omega, I mean one of the following:

By "description", I took you to mean an unambiguous definition without regard to the decidability of the values of the individual bits.

Well, that's one of the

Well, that's one of the nearly metaphysical aspects of Omega: what exactly is "ambiguous" versus "unambiguous"? On an informal level, I think that Omega actually connects time and knowledge through computability - what we mean by "unambiguous" cannot be separated from the time required to make it so. The nth bit of Omega is eventually computed in unbounded time, just not in computable time. How odd that definitions and time are connected...

Wittgenstein: self-referential sentences lead to inconsistency

Wittgenstein showed that self-referential sentences lead to inconsistency.
See Mathematics self-proves its own consistency

Incompleteness proved without self-referential propositions

Incompleteness of mathematics can be proved without self-referential propositions. Actually, incompleteness is an immediate consequence of the computational undecidability of provability. See Mathematics self-proves its own consistency
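
The immediate consequence can itself be sketched as (hypothetical) code: if a
consistent theory T were complete, provability in T would be decidable,
contradicting [Church 1936, Turing 1936]. Here theorems_of_T and negation are
assumed stand-ins for T's theorem enumerator and syntactic negation:

def decides_provability(psi, theorems_of_T, negation):
    """Halts on every psi if T is complete: one of psi, ¬psi must appear."""
    for theorem in theorems_of_T():    # computably enumerate T's theorems
        if theorem == psi:
            return True                # T proves psi
        if theorem == negation(psi):
            return False               # T proves ¬psi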

Real computer scientists don't use Gödel numbers ;-)

Real computer scientists don't use Gödel numbers ;-)

They use abstraction and reification instead. See Mathematics self-proves its own consistency

Who says LtU is slowing down?

This thread has 390 (now 391) posts. When it rounds past 400, it will be on to its third page of comments. Have we ever had a comment thread that made it to page 3? Usually, the buggy Drupal paging code seems to curb whatever enthusiasm has made it to the end of page 1.

Drupal is drooping at LtU

Drupal is drooping at LtU :-(

At first, I was totally baffled by the bugs wondering where to find comments for the broken links at the right column.

Now I jump straight to the last page and manually scan for updates.

An incantation you might like

When you see a post in the "recent posts" thread that you want to jump straight to, you won't necessarily know what page it's on. But what you can do is right-click on it and copy the link address. Then paste it into your URL bar and insert "?comments_per_page=1000" after the thread number. That will take you right to the comment. You could left click instead of copy-pasting the link, but you'll lose the asterisks on new posts.

For example, your comment would be:
http://lambda-the-ultimate.org/node/4784?comments_per_page=1000#comment-77054
