Mathematics self-proves its own Consistency (contra Gödel et al.)

The issue of consistency has come up in many LtU posts.

Some readers might be interested in the following abstract from Mathematics self-proves its own Consistency:

                   Mathematics self-proves its own Consistency
                          (contra Gödel et al.)

                             Carl Hewitt
                         http://carlhewitt.info

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.
In addition, Proof by Contradiction can be used to infer the consistency of mathematics 
by the following simple proof:
   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
     Then there is some proposition Ψ such that ⊢Ψ and ⊢¬Ψ.
     Consequently, both Ψ and ¬Ψ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is consistent.
The above theorem means that the assumption of consistency
is deeply embedded in the structure of classical mathematics.
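
As a sketch of how this argument can be machine-checked, here is a hypothetical encoding in Coq (my own illustration, not the paper's Direct Logic): provability is modeled as an abstract predicate Pr, and the Principle of Integrity (⊢Φ implies Φ) is taken as an axiom.

(* Hypothetical encoding: Pr P reads "P is a theorem". *)
Parameter Pr : Prop -> Prop.

(* Principle of Integrity: a theorem can always be used in a proof. *)
Axiom integrity : forall P : Prop, Pr P -> P.

(* The self-proof: supposing some proposition and its negation are both
   theorems, using them produces an immediate contradiction. *)
Theorem consistency : ~ (exists P : Prop, Pr P /\ Pr (~ P)).
Proof.
  intros [P [HP HnP]].
  exact (integrity (~ P) HnP (integrity P HP)).
Qed.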

The above proof of consistency is carried out in Direct Logic [Hewitt 2010]
(a powerful inference system in which theories can reason about their own inferences).
Having a powerful system like Direct Logic is important in computer science
because computers must be able to carry out all inferences
(including inferences about their own inference processes)
without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent
because of contradiction with the result of Gödel et al. that
if mathematics is consistent, then it cannot infer its own consistency.
The resolution is not to allow self-referential propositions
that are used in the proof by Gödel et al.
that mathematics cannot self-prove its own consistency.
This restriction can be achieved by using type theory
in the rules for propositions
so that self-referential propositions cannot be constructed
because fixed points do not exist. 
Fortunately, self-referential propositions
do not seem to have any important practical applications.
(There is a very weak theory called Provability Logic
that has been used for self-referential propositions coded as integers,
but it is not strong enough for the purposes of computer science.) 
It is important to note that disallowing self-referential propositions
does not place restrictions on recursion in computation,
e.g., the Actor Model, untyped lambda calculus, etc.

The self-proof of consistency in this paper
does not intuitively increase our confidence in the consistency of mathematics.
In order to have an intuitively satisfactory proof of consistency,
it may be necessary to use Inconsistency Robust Logic
(which rules out the use of proof by contradiction, excluded middle, etc.).
Existing proofs of consistency of substantial fragments of mathematics
(e.g. [Gentzen 1936], etc.) are not Inconsistency Robust.

A mathematical theory is an extension of mathematics
whose proofs are computationally enumerable.
For example, group theory is obtained
by adding the axioms of groups to Direct Logic.
If a mathematical theory T is consistent,
then it is inferentially undecidable,
i.e., there is a proposition Ψ such that
¬⊢T Ψ and ¬⊢T ¬Ψ
(which is sometimes called incompleteness),
because provability in T
is computationally undecidable [Church 1936, Turing 1936]:
if T decided every proposition, then enumerating the proofs of T
would yield a total computational procedure for provability in T.

Information Invariance is a
fundamental technical goal of logic consisting of the following:
  1. Soundness of inference: information is not increased by inference
  2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance
even when information is inconsistent
using inconsistency robust inference.
But that mathematics is inferentially undecidable (incomplete)
with respect to ¬ above
does not mean incompleteness
with respect to the information that can be inferred
because it is provable in mathematics that ¬⊢T Ψ and ¬⊢T ¬Ψ.

The full paper is published at the following location:
Mathematics self-proves its own Consistency


Wait.... It's not April first yet.

You're trolling, right?

Meta-trolling?

Ray,

Have you read the paper?


Principle of Integrity: ⊢Φ implies Φ

The Principle of Integrity in Direct Logic is that ⊢Φ implies Φ. Consequently, a theorem can always be used in a proof.


Re: Meta-Trolling?

OK, I need some context here. On a superficial level, I see a lot of name-dropping, a bunch of mathematical symbolism, and some mumbo-jumbo.

If this is an important result, why couch it in impenetrable mathematics? CACM has for a long time provided context for theoretical results in a section called Technical Highlights that precedes the actual research paper. That way mere mortals can understand what's being said, why it's important, and what its consequences are. Without that context here, there is no deeper level.

Again on a superficial level, by a non-mathematician, if Direct Logic disproves Gödel, then Direct Logic is inconsistent and hence, in an important sense, wrong. Any inconsistent formal logic can be twisted to prove any result that's desired, in this case that mathematics self-proves its own consistency.

And before you say, this result demands highly specialized mathematics, let me point you to Leslie Valiant's book Probably Approximately Correct. It's intensely and deeply mathematical (computational) and yet it's written in English and can be understood by non-mathematicians (with a lot of effort, I might add, but that's OK.)

I'm sorry for the rant, but if you want more than n people, where n is a small number, to understand what you're saying, you need to speak their language.

P.S. "Suppose to obtain an inconsistency that mathematics is consistent" is not an English sentence. There seems to be a verb. [Also an non-sentence.]

Turnstile (⊢) shouldn't scare you

Turnstile (⊢) shouldn't scare you. It was invented long ago by Frege and is standard mathematical notation. It is not "mumbo-jumbo".

However, the self-referential propositions of Gödel et al. are indeed "mumbo-jumbo". A long time ago, Wittgenstein showed that the kind of self-referential proposition that Gödel used in his proof leads straight to contradictions in mathematics.

PS. It looks like you haven't bothered to read the paper.

PPS. "Suppose to obtain an inconsistency, that mathematics is inconsistent" is indeed an English sentence with the verbs "suppose" and "is",

Yes, no and no

Yes, you are right. I have not read your paper, but I have bothered to try. The message I was trying to get across is that I found your paper impenetrable. (And your slides even more so.) It's like the tree-in-the-forest conundrum: If no one reads your paper, does it matter? Possibly there are others who can make their way through it, but I am not one of them. Perhaps it will become one of the most cited papers in theoretical computer science; perhaps not.

No, turnstile doesn't scare me any more than a summation sign scares me. I do have a degree in computer science, albeit from the Middle Ages of computer science, so mathematical notation per se doesn't scare me. It's more a notion kind of thing than a notational kind of thing (to paraphrase Gauss).

I still don't think Suppose to obtain an inconsistency, that mathematics is inconsistent. is an English sentence. It's missing something. The only interpretation that makes sense to me would have a comma after "Suppose". (Have you read Eats, Shoots & Leaves? It's hilarious. The problem there is an extra comma that completely changes the title's meaning.) Did you mean In order to obtain an inconsistency, [I will] assume that mathematics is inconsistent?

In order to obtain a contradiction, assume ....

I agree that it would be better phrased as "In order to obtain a contradiction, assume that mathematics is inconsistent."

You can't find a more elementary proof of a fundamental result.

You can't find a more elementary proof of a fundamental result than the one in the paper:

Proof by Contradiction can be used to infer
the consistency of mathematics by the following simple proof:
   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
     Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
     Consequently, both Φ and ¬Φ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is consistent.

Self-promotion

I've been reading the controversy surrounding your Wikipedia edits prior to your ban. It makes for genuinely fascinating reading - especially your edits to the section on denotational semantics, and the attempts to link the Actor model to general relativity and quantum mechanics. Is LtU your new vanity publishing forum of choice?

Corruption of Wikipedia and some of its Advocates

Roly,

I see that you are a strong advocate of Wikipedia including its policies of censorship, outlawry, and vilification of academics by anonymous Administrators. Also, you are even willing to engage in the kind of libel that is common on Wikipedia.

You insinuate that there are no connections between the Actor model and the physical concept of indeterminacy or between the Actor Model and relativity of simultaneity. Unfortunately, you are wrong on both counts. (See the published literature on the Actor Model.)

And exactly what is your objection to the denotational semantics of the Actor Model?

Are you advocating that LtU should adopt Wikipedia policies of censorship and outlawry by anonymous Administrators?

PS. You might be interested in the following abstract of an article on Wikipedia:

                Corruption of Wikipedia
                      Carl Hewitt
                 http://carlhewitt.info


Wikipedia's business model is generating Web traffic
(primarily from search engines) for articles
of conventional wisdom and morality
(that are heavily censored by a commune of mostly anonymous Administrators)
to motivate (financial) contributions. 

Wikipedia does not allow proper vigorous academic discussion and debate
because they are incompatible with its business model as follows:
In normal academic practice,
the views of experts are solicited, discussed, and respected.
On Wikipedia, academic experts who have tried to participate have been
denigrated as “self-promoters”, censored, and then banned
on the grounds that their views
are not in accord with Wikipedia-imposed Administrator point of view.
As part of its business model,
Wikipedia has engaged in libel and vilification
in an attempt to intimidate academics
into conforming to the censorship of its Administrators.

The article has the following contents:


Introduction	
Censorship by Wikipedia	
Wikipedia relies on Outlawry	
Vandalism and Harassment by Wikipedia	
Wikipedia is a “Communal Blog”, not an encyclopedia	
Wikipedia’s Business Model	
Wikipedia Libels and Vilifies People	
Wikipedia instigated the defamatory attack by The Observer on Carl Hewitt
Scientific Paradigm versus Wikipedia Paradigm
Deception by Wikipedia
Current trends do not look promising for Wikipedia
For the greater good of Wikipedia, Jimmy “Jimbo” Wales should resign from its Board of Trustees.
Acknowledgments
References
Appendix 1: Wikipedia censorship of “Logic Programming”
Appendix 2: Wikipedia censorship of “Incompleteness Theorems”

The article can be found at the following location:
Corruption of Wikipedia

LtU has no problem with first person sources

This is as good a place for these kinds of threads as any. At worst, the article is ignored because no one gets it, and at best it inspires a good debate.

It is not like LtU is a very noisy forum, and this isn't spam. Let's be open minded at the very least (FYI, I don't get it, but then theory isn't my thing).

The subtle art of the Request for Arbitration

The crafting and prosecution of Wikipedia's RfArs is a political art that often has little to do with fairness. While I think Carl's activity was not in tune with Wikipedia's principle of neutrality, the RfAr struck me as vindictive: the members of the physics group were (understandably) angry with him, their frustration came through very effectively in their statements of evidence, and he didn't have the kind of colourful defenders he would have needed to impress the ArbCom (Wikipedia's supreme court).

If you read closely, you will see that while nobody provided Carl with muscular support, few computer scientists were happy with the process (later, I received a message from one of the most active members of the physics group who also thought the outcome was bad). If I had had any idea of where the process was headed, I would have been less equivocal in my participation.

Don't judge Carl on the basis of a superficial reading of this document.

Wikipedia doesn't seem to have any way to correct its errors

Thanks for your friendly comment. Unfortunately, Wikipedia has fundamental, unresolved problems with its processes.

For example, Wikipedia doesn't seem to have any effective way to correct errors and make apologies. The whole affair (recounted starting on page 15 of Corruption of Wikipedia) was a morass of hubris, libel, and corruption.

I made some beginner's mistakes as a Wikipedia editor. However, the flawed processes of Wikipedia resulted in a bad outcome that further damaged Wikipedia's reputation in academia. That Wikipedia has been unable to make a correction has compounded the damage.

I've seen some of the

I've seen some of the dysfunctional social dynamics of Wikipedia, from the perspective of someone who contributes to one of its sisters (the sister projects, like Wiktionary and Wikinews and so on, are independent of Wikipedia, but are under the aegis of the same charity that runs Wikipedia, the Wikimedia Foundation).

Disclaimers: I've not read up on this case (I've got enough horror stories of my own about Wikipedians behaving badly, without borrowing someone else's), and I don't even know if Hewitt was in the right or in the wrong (I've got some real problems with some of Hewitt's mathematical/logical/computational claims here on LtU, after all, and have no idea whether those things that were bothering me are even relevant to the case).

But I do think Wikipedia has taken itself to a very bad place. There may not be any one reason for this, and there probably isn't any one solution for it, but I've come to believe a major root cause is Wikipedia's principle of "Assume Good Faith" (AGF). It sounds wonderful, doesn't it? I was initially attracted to Wikipedia partly because AGF sounded so idyllic. Unfortunately, from having that principle in place for years, what's happened is that a whole class of nasty characters has developed, who are expert at being vicious in a "civil" way. They'll use and twist policies and guidelines of the project to torment newcomers, and when the newcomers react, the newcomers get slapped on the wrist for failing to assume good faith. So that in the long run, Assume Good Faith has become a protection for people who don't act in good faith, and helps them to chase away people who are of good faith. I'm under no illusions that it's easy to know how to orchestrate a project like Wikipedia, but I can see that the way they're doing it has flaws.

Wikipedia's reputation...

...is something I'd rather not spend too much time discussing, but if you are not aware of it, Dean Buckner has been engaged in a writing project with Eric Barbour that will be of interest: Wikipedia through the Looking Glass: The Wikipedian POV.

(edited: link fixed, thanks Sean)

link error

The URL has some extra text prepended.

Hats

Yeah, clearly a tangential topic not to get wrapped up in. I'll make a small supplemental comment/caveat, though: there's a whole population of external critics of the wikimedia movement who are, themselves, a big part of the problem. Just saying, someone who says bad things about wikimedia is not necessarily wearing a white hat. (I'm automatically skeptical of anything Sanger or Wales says on the topic.)


Captions for Tenniel illustrations are hilarious

The book's captions for the Tenniel illustrations are hilarious!

IMHO

The problem with the above proof is easy to see. There are NO assumptions. It is about everything. What we need is a "mathematical lingo". Substitute your favorite natural language for lingo. I find this site to be very suggestive.

A proof by contradiction uses an assumption

The structure of a proof by contradiction is to assume the negation of what is to be proved and derive a contradiction using the assumption.
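
As a concrete illustration, here is that structure in Coq (a minimal sketch, assuming classical logic; Coq's core logic is intuitionistic, so the excluded-middle axiom must be imported, and the theorem name is my own):

(* Proof by contradiction: to prove P, assume ~P and derive False. *)
Require Import Classical.

Theorem by_contradiction : forall P : Prop, (~ P -> False) -> P.
Proof.
  intros P H.
  destruct (classic P) as [HP | HnP].  (* case split on P \/ ~P *)
  - exact HP.                          (* P holds outright *)
  - exfalso. exact (H HnP).            (* ~P yields the contradiction *)
Qed.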

I was afraid of this. But

I was afraid of this. But what is implicitly assumed is everything, by the absence of specific assumptions, and the negation of everything by the only stated assumption.

The self-proof uses the assumption that Math is Inconsistent

In order to derive a contradiction, the self-proof uses the assumption that Mathematics is Inconsistent. Since using the assumption leads to contradiction, it follows that Mathematics is Consistent.

Why should I assume that

Why should I assume that mathematics is inconsistent? In model theory the real closed fields and the algebraically closed fields are consistent, as well as anything that can be validly stated within those theories. For reasons like these I am inclined to accept your theorem, but not the proof. Model theory has methods to combine or extend theories and there are some very powerful results, but I don't think we can say that mathematics proves itself, yet.

In order to obtain a contradiction, assume ....

The reason for assuming that mathematics is inconsistent is to derive a contradiction. In proof by contradiction, anything at all can be assumed. But it doesn't get you anywhere unless you can derive a contradiction.

Gentzen developed a proof of the consistency of first-order Peano number theory. But many people were not convinced by the proof.

Similarly, the self-proof of consistency at the beginning of this forum topic is not very convincing. The self-proof simply shows that consistency is deeply built into the very architecture of classical mathematics.

Simpler and clearer approach

The self-proof simply shows that consistency is deeply built into the very architecture of classical mathematics.

All that's needed to show that is to analyze the structure of proof by contradiction, in which any contradiction is assumed to be an invalid conclusion.

The "self-proof" seems to serve no useful purpose. It only obfuscates, raising meta-issues of dubious relevance that detract from the above point.

Self-proof of consistency points out commonly held falsehood

The self-proof of the consistency of mathematics points out a commonly held falsehood that mathematics cannot prove its own consistency. (Last relative clause added after comment immediately below.)

Also, it gets the philosophers' underwear in a knot :-)

The self-proof of the

The self-proof of the consistency of mathematics points out a commonly held falsehood.

I find myself slightly uncertain whether you're viewing that proof as valid.

If you aren't viewing it as valid, that would make sense of some things, and my next question would be what commonly held falsehood you're referring to.

If you are viewing it as valid, I would observe that the proof has two things in common with Anselm's proof of the existence of God. First, both are circular proofs, whose premises include what they claim to prove, so they don't contribute toward establishing their claimed conclusion. Second, in both cases, the proponent's assessment of the merits of the proof appears to be influenced by their desire to believe the conclusion.

Proof is not circular; human desires and dislikes irrelevant

The proof that Mathematics self-proves its own Consistency is valid by the laws of classical Mathematics and is not circular. Euclid (and any succeeding mathematician) could have thought of this proof. Why didn't they?

Classical mathematics is a rigorous subject with exact rules. The desires and dislikes of those who create proofs are irrelevant to the validity of proofs.

If X says that X is

If X says that X is consistent/true/halting/secure/uncorrupted/sane/healthy then it means one of two things:
* X is consistent/true/halting/secure/uncorrupted/sane/healthy and has produced a correct proof.
* X is inconsistent/false/divergent/cracked/corrupt/insane/ill and, because of this, is capable of spitting out nonsense, including the incorrect statement that it is consistent/true/halting/secure/uncorrupted/sane/healthy.

That's Goedel's second incompleteness theorem, and it applies to everything, for example: I am Odin; all Vikings must worship me, since Odin (I) say so.

That dichotomy is much

That dichotomy is much easier than Godel's second incompleteness theorem, which roughly says that, for sufficiently powerful logics, the first asterisk isn't a possibility.

Steer clear of monstrous self-referential propositions

Ever since the Greek philosopher Eubulides of Miletus (4th century BC) developed the Liar Paradox ("This sentence is false."), we have known about inconsistencies that result from self-referential sentences.

Wittgenstein pointed out (see elsewhere in this forum topic) that the self-referential sentence that Gödel proposed ("This sentence is unprovable.") also leads to contradictions.

My advice is: Steer clear of the monsters!

Why didn't they?

OK, so why didn't Euclid and succeeding generations of mathematicians tell us about the self-proof of consistency??

Edit: For some reason the system didn't put this under Hewitt. Oh, it's under Hewitt, but the wrong one.

How the self-proof was overlooked and then discovered

Before the paradoxes were discovered, not much attention was paid to proving consistency. Hilbert et al. undertook to find a convincing proof of consistency. Gentzen found a consistency proof for the first-order Peano theory, but many did not find it convincing because it used ordinal induction, which is not elementary. Then, following Carnap and Gödel, philosophers unquestioningly accepted the necessity of self-referential propositions in mathematics. And none of them seemed to understand Wittgenstein's critique. (Gödel insinuated that Wittgenstein was "crazy.") Instead, philosophers turned their attention to exploring the question of which is the weakest theory in which Gödel's proof can be carried out. They were prisoners of the existing paradigm.

Computer science brought different concerns and a new perspective with the following characteristics:
* powerful so that arguments (proofs) are short and understandable
* standard so they can join forces and develop common techniques and technology
* inconsistency robust because computers deal in pervasively inconsistent information

The results of [Gödel 1931], [Curry 1941], and [Löb 1955] played an important role in the development of Direct Logic:
* Direct Logic easily formalized Wittgenstein's proof that Gödel's self-referential sentence leads to contradiction. So the consistency of mathematics had to be rescued against Gödel's self-referential sentence. The self-referential sentences used in the results of [Curry 1941] and [Löb 1955] led to inconsistency in mathematics. So the consistency of mathematics had to be rescued against these self-referential sentences as well.
* Direct Logic easily proves the consistency of mathematics. So the consistency of mathematics had to be rescued against Gödel's "2nd incompleteness theorem."
The common solution adopted in Direct Logic is to bar self-referential sentences. Inferential undecidability ("incompleteness") of mathematical theories still has to be provable in Direct Logic, which can be accomplished using computational undecidability.

In summary, computer science advanced to a point where it caused a paradigm shift.

For me, your answer suggests

For me, your answer suggests another answer. Before the logic-as-foundation movement, Kant's analytic and synthetic were separate categories, with mathematics as analytic. Logic as foundation implicitly merges analytic and synthetic into something we might call linguistic. It is in this linguistic construct that we have Godel's inconsistency. Quine made the argument that analytic and synthetic are not separate in the 1950's.

Pragmatics and Cybernetics are destroyed by this assumption. C. I. Lewis realized this in 1912, and introduced strict implication and modal logic to counter Russell's material implication. But this was not enough to save Pragmatics or Cybernetics from decline starting in the 1950s.

A good book on Lewis and how he solved the challenges of analytic philosophy and Quine is C. I. Lewis in Focus by Sandra B. Rosenthal.

<=>

You literally just restated "consistency of M <=> no contradictions in M" in prose. This is not a theorem, it's a definition.

Ten Signs a Claimed Mathematical Breakthrough is Wrong

Ten Signs a Claimed Mathematical Breakthrough is Wrong lists entertaining telltales that some breakthrough in math might not be as imminent as a paper claims. To my untrained eye, this paper matches at least #1.

Ten dumb ways to decide if a mathematical result is wrong

The reference that you cited lists ten condescendingly dumb ways to decide if a mathematical result is wrong. The first of the ten is probably the dumbest of them all. The current de facto standard for published papers is PDF. And for better or worse, the easiest and most powerful WYSIWYG way to create mathematical articles in PDF is currently Microsoft Word.

Knuth would turn over in his grave

Oh, wait, Knuth isn't dead yet!

Over 40 years ago Knuth invented a language to write beautiful mathematics. Granted (La)TeX is not WYSIWYG, but the results are immeasurably more beautiful than what Microsoft Word can produce. I thought mathematics was about beauty...

But these are just opinions about taste and there is no accounting for taste.

LaTeX should have been WYSIWYG

In order to survive long term, LaTeX should have been WYSIWYG.

PS. LaTeX attractively renders text and mathematics to PostScript. However, naively converting the PostScript output to Adobe's Portable Document Format (PDF) using Acrobat Distiller or ps2pdf yields a document that displays poorly electronically.

Use XeLaTeX or pdfLaTeX

Use XeLaTeX or pdfLaTeX, which natively produce PDF output.

From my experience the pdfs

From my experience the PDFs produced by MS Word are much more broken than the TeX ones. Of course, uneducated users in both cases do not know how to properly generate bookmarks and hyperlinks.

Your approach, of just

Your approach, of just axiomatizing known inconsistencies and then limiting your reasoning modes to contain fallout of the resulting inconsistency seems like guano insanity to me... After you've given up consistency, how do you figure out which "theorems" to believe? I'm not very knowledgeable of Direct Logic, but have you established that the inconsistency in your logic cannot lead to non-termination of provably terminating functions? e.g. you reason that it should be safe to delegate to an evaluator, because of self-consistency, and then the evaluator comes to the same conclusion and recurses to divergence.

But this post has inspired me to write up some total self-compiler notes which I'll post to another thread.

Mathematics follows rules wherever they might go

Mathematics follows rules wherever they might go. When the rules have led to undesirable consequences, they have been adjusted.

The paper discussed at the beginning of this forum topic explains how the rules need to be adjusted.

Here's a simpler proof of

Here's a simpler proof of consistency: Suppose false. Contradiction. Wonder how Godel missed that...

How could Gödel et al. miss the Self-Proof of Consistency?

How could Gödel et al. miss the Self-Proof of Consistency? For that matter, how could their predecessors going back to Euclid not notice? The proof is extraordinarily simple and doesn't require any complicated concepts.

I hope you're trolling


Inconsistency can be very elusive.

Inconsistency can be very elusive. Some very smart mathematicians have developed axiomatizations for theories that have later turned out to be inconsistent. When strongly held philosophical beliefs are involved, it can be especially difficult. Gödel was certain and he brooked no dissent.

And another thing

I clicked twice.

The problem is that your

The problem is that your proof is necessarily tautological, even if mathematics turns out to be actually inconsistent. Which is why intuitionistic logics ought to be more widely used, since they don't allow such trickery, where you can just assume consistency and the law of the excluded middle to establish results.

Intuitionistic Logic also has Proof by Contradiction

Intuitionistic Logic also has Proof by Contradiction. Consequently, by the following proof, mathematics is not inconsistent.

   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
     Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
     Consequently, both Φ and ¬Φ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is not inconsistent.

In Intuitionistic Logic, being not inconsistent is different from being consistent. But it is still quite remarkable.

A formalization in

A formalization in Coq:

Theorem consistency: (exists P:Prop, P /\ not P) -> False.
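(* Note: no classical axioms are needed here; the implication is
   provable intuitionistically, and 'firstorder' finds the proof. *)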
Proof. firstorder. Qed.

Remarkable? Hardly.

Reading your paper, one gets the impression that you have no understanding of what Gödel actually did. Footnote 9 is telling: you seem to attribute (bizarrely) the idea of Gödel numbering to recent efforts to formalize the incompleteness theorems. Also, after reading that "direct logic" bars self-referential propositions, I expected to find some indication of how "direct logic" escapes Gödel's trick. There was none. The only actual result in your paper, beyond the above trivial theorem, appears on page 11 with no proof, and no indication as to why it is significant.

I actually believe you do not think you are a troll. Nonetheless I sure feel like I am feeding one.

Incorrect formalization of "Consistency"

A more correct formalization of consistency is as follows:
¬∃[P:Proposition]→ ⊢(P∧¬P)
Unfortunately, Coq is not very powerful and so may not be able to prove the above even if it could formalize it.

As explained in footnote 9 of the paper, the formalization of Gödel's proof is a relatively recent endeavor.

In formalizing Gödel's proof, [Shankar 1994, O'Connor 2005],
sentences are coded using integers to introduce self-referential sentences
in order to get around the fact that
the logical systems being used do not allow
self-referential sentences in order to avoid inconsistency!
    Self-referential propositions give rise to numerous
inconsistencies including the paradoxes of Curry and Löb.

The paradoxes of Curry and Löb are then explained in the remainder of footnote 9.

Computer science has developed much better ways than Gödel numbers
to explain the relationship between sentences and propositions.
See the section of the paper on "Reification and Abstraction" on page 4.

Unfortunately, you missed reading the grammar of sentences (on page 9)
which uses the typed lambda calculus to bar the construction of self-referential sentences because fixed points do not exist.
See footnote 28 for the exact place where the restriction is enforced.
This is how Direct Logic bars the construction of Gödel's self-referential propositions because they are "monsters" as explained in the paper.

The proof of the Theorem on page 11 is very elementary and well known.
It is due to Alonzo Church and Alan Turing.
The significance of the theorem is that it proves that
there are inferentially undecidable propositions
without using monstrous self-referential propositions.

If you want to get more deeply into technical subject matter, details really do matter. Thanks for taking a look.

I wasn't careful in reading

I wasn't careful in reading the turnstile as "Φ is provable." My mistake.

I am well aware that the formalizations of the incompleteness theorems are relatively recent. But notice how your footnote gives the impression that Gödel numbering was introduced as part of these endeavors in order to overcome deficiencies of the systems in which the formalizations were being done, when of course Gödel numbering is one of the key ideas in the original proof (hence the name).

Russell thought he had barred paradoxes from PM with the syntactic restrictions of his theory of types. The whole point of Gödel numbering was that the ability to form self-referential propositions that lead to paradoxes follows from the ability of a system to reason about basic arithmetic, regardless of any syntactic restrictions.

Nowhere in your paper (on page 9 or otherwise) is it clear why propositions in Direct Logic cannot be encoded as numbers or why the rules of Direct Logic cannot be encoded as statements about arithmetic. So unless Direct Logic is too weak to reason about basic arithmetic, I don't see how Gödel's "monster" is barred.

You may have barred self-references from the "better" ways of relating propositions and sentences "developed by computer science", but have you barred them from Gödel numbers?

The whole point of Gödel

The whole point of Gödel numbering was that the ability to form self-referential propositions that lead to paradoxes [...]

But they don't lead to paradoxes in Arithmetic, just incompleteness.

Agreed, and good

Agreed, and good clarification. I should have been more precise in my language: the ability to form a Gödel sentence G, which resembles the liar paradox, follows from the ability to reason about basic arithmetic. G is true but, assuming consistency, not provable, and so not itself a paradox. A further thought/question: in a system that admits inconsistency (is this not the point of Direct Logic?), is it still clear that G is not a paradox?

Math Theories are thought to be consistent vs practical theories

Mathematical theories are thought to be consistent, which justifies the use of proof by contradiction. However, practical theories (e.g. theories about climate, livers, etc.) are invariably not consistent and consequently Direct Logic does not allow proof by contradiction for practical theories.

Self-referential propostions lead to inconsistency in math

Wittgenstein was one of the first to point out that self-referential propositions lead to inconsistency in mathematics when he wrote:

“True in Russell’s system” means, as we have said, proved in
Russell's system; and “false in Russell's system” means that the
opposite [negation] has been proved in Russell's system.
Let us suppose I prove the unprovability (in Russell’s system) of P
[⊢⊬P where P⇔⊬P]; then by this proof I have proved P [⊢P].
Now if this proof were one in Russell’s system [⊢⊢P]—I should in
this case have proved at once that it belonged [⊢P] and did not
belong [⊢¬P because ¬P⇔⊢P] to Russell’s system.
But there is a contradiction here! 

However, formal grammars had not yet been invented, so the formal tools to rule out monstrous self-referential propositions were lacking.

See the paper referenced at the top of this forum topic for a formal grammar that uses types to rule out self-referential propositions.

A typed grammar can rule out self-referential sentences.

A typed grammar can rule out self-referential sentences and provide the definition of exactly what is and is not a sentence. Since Gödel did his work before formal grammars were invented, it never occurred to him that it is necessary to formally define exactly what is a sentence.

Similarly, Euler did not formally define polyhedron when he developed his formula for the relationship among the number of faces, edges, and vertices. However, it became necessary to define polyhedron when monsters were discovered.

Since Gödel did his work

Since Gödel did his work before formal grammars were invented, it never occurred to him that it is necessary to formally define exactly what is a sentence.

This seems rather like saying that Galileo didn't know about acceleration because he lived before the differential calculus was invented.

Galileo would have benefited from better tools

There is no doubt that Galileo would have benefited from better tools, e.g., the differential and integral calculus, tensors, etc. And they would have made a large difference in improving his work.

Similarly, Gödel et al. would have benefited from knowledge of formal grammars because it would enable posing the following question: "What is a sentence?"

In Direct Logic, there is no way to encode sentences as numbers

In Direct Logic, sentences are not character strings. Consequently, there is no way to encode sentences as numbers. You might think of sentences as abstract syntax trees in which the terminals are also abstract.
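
To make the "abstract syntax trees with abstract terminals" idea concrete, here is a hypothetical sketch in Coq (the type and constructor names are my own; this is an illustration, not the paper's actual grammar):

(* Sentences as abstract syntax trees over an abstract type of atoms.
   Because Atom is a parameter with no assumed structure, no injection
   of Sentence into the natural numbers is provided. *)
Parameter Atom : Type.

Inductive Sentence : Type :=
  | SAtom : Atom -> Sentence                  (* abstract terminal *)
  | SNot  : Sentence -> Sentence
  | SAnd  : Sentence -> Sentence -> Sentence
  | SImp  : Sentence -> Sentence -> Sentence.

Of course, any particular implementation of such trees in a computer can still be enumerated at the meta-level; the claim at issue is only that no encoding is available inside the system.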

PS. Thanks! I agree that the footnote might give the wrong impression and will revise it. The reason that the formalizations of Gödel's results were so cumbersome was that they mimicked Gödel in using numbers to code sentences instead of using the tools of computer science, namely reification and abstraction.

Presumably you mean there is

Presumably you mean there is no way to encode sentences as numbers within Direct Logic, internal to the system. (I would guess this claim applies just as well to PM, the system Gödel considered, though I know little about it.) Gödel numbering happens at the meta-level, external to the system. I assume meta-level encoding is still possible for Direct Logic, unless you're going to tell me that the abstract syntax trees of Direct Logic are not representable in the computer. If you don't care about encoding and self-references occurring at the meta-level, then what is the connection to the incompleteness theorems?

Enumerating Sentences does not provide self-referential sentences

Being able to enumerate sentences does not, by itself, provide self-referential sentences if none of the enumerated sentences are self-referential.

You would need a witness

You would need a witness for Φ in intuitionistic logic.

Which not coincidentally also hints at a problem with the proof in a classical logic context.

If you want to use the inconsistency of a system as a premise, you have to accept the consequences of that. The consequence in this case is that the proof is only relevant in the context of an inconsistent mathematics, which I'll call W (being an upside-down M.)

In that context, the proof, interpreted in W, shows the consistency of W via the inconsistency of W. This sort of thing is to be expected from an inconsistent system, no surprise there.

But this doesn't say anything about the consistency of any mathematics that is not already known to be inconsistent. The proof in W doesn't prove the consistency of some other mathematics M. The normal trick of proof by contradiction doesn't work when the contradiction changes a relevant aspect of the nature of the system in which the proof is interpreted.

Restating, you are essentially claiming that the proof selects between M and W, but it does not achieve that since as a proof in W, it says nothing about M. All it does is restate a trivially true property of W. To actually select between M and W would require proof of the existence or nonexistence of Φ. The former could be provided by a witness, as the intuitionistic interpretation suggests.

Math Theories extend Math so that theorems are enumerable

Please see the discussion of Mathematical Theories in the paper at the beginning of this forum topic.

Metamathematics equivocation

There seems to be an equivocation in the handling of "mathematics" as a concept, which when cleared up I believe illuminates the underlying argument. The first proof uses two statements:

  1. Suppose to obtain an inconsistency that mathematics is inconsistent.
  2. Therefore mathematics is consistent.

In the first, he talks about the consistency of a mathematical theory; that is, whether a system of deduction proves (P and not P). In the second, he's speaking on the traditionally metamathematical level in which we accept the provability of certain statements in a theory to be a meaningful property of that system. The consistent object theory cannot speak to this, but the meta system can consistently treat these provability constraints as meaningful.

Thus, I think this is really about the meta-meaningfulness of incompleteness; when a system doesn't prove itself inconsistent, the standard mathematical approach is to treat it as consistent. Hewitt has embedded this in a meta-theoretic proof while the tradition is to state such suppositions in sheer linguistic meta-description.

There is some meat here in formalizing what mathematicians do informally, especially in the context of computational reasoning systems that have to have both meta- and object-level systems formalized. I remember Natarajan Shankar making a compelling argument that reasoning systems should be able to handle these sorts of meta-proofs natively, e.g. when you can't find a proof of "A", you should be able to look for a proof of "provable A", from which you can infer "A" without exhibiting a proof directly. As to the rest of the conclusions Hewitt drawn from this, I don't have any firm grasp to share.

Mathematics is broader than any particular Mathematical Theory

Mathematics is broader than any particular Mathematical Theory. Types and Sets (including Peano Numbers) are the common property of all of Mathematics. Group Theory is a particular mathematical theory that augments Mathematics with the axioms for groups along with the axiom that the proofs (and consequently the theorems) of Group Theory are computationally enumerable.

Unfortunately, philosophers never formalized meta-mathematical reasoning in full. Consequently, when it became necessary to do a full formalization so that computers could do all of the reasoning, it turned out that philosophers had overlooked some important things. The paper at the beginning of this forum topic does that kind of formalization.

For example, with respect to your last point, it certainly is the case that ⊢⊢A implies ⊢A. In fact, you can go further because ⊢A implies A.


Provability Logic

See the section on Provability Logic in Mathematics self-proves its own Consistency

Anselm

From memory: God is that more perfect than which nothing can be. Something that does not exist is less perfect than something that does exist. Therefore, God exists.

Something that does not

Something that does not exist is less perfect than something that does exist.

Plato would disagree.

Has humanity ever found a perfect circle? Or does such a thing exist only in our imperfect imaginations?

Plato would disagree?

Plato would disagree.

Would he? I thought the point of Platonism was that the perfect circle is more real than the imperfect shadows of it that we see.

I suspect this line of inquiry would necessarily lead to an investigation of the words Plato used, and the nuances of meaning of those words in Ancient Greek. Which I'm quite sure would be beyond me.

Your quote spoke of

Your quote spoke of existence, not of reality. Platonic forms, pretty much by definition, do not exist in any material or physical sense. No matter how many particular circles you point at, you'll never find the platonic form of circles or circleness.

Which is the very nuance

Which is the very nuance of words I had in mind. I'm glad we can agree on something. :-)

Oh, no, undecidability again

Oh, gawd, not again.

It's worth pointing out, though, when undecidability is being discussed, that undecidability requires infinite storage. A deterministic machine with finite memory must either halt or repeat a previous state. Termination in a finite machine can be hard or time-consuming (worst case requires going through every possible memory state) but not undecidable.
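
For instance, here is a hedged sketch in Coq of the bounded-search idea (the names and setup are my own): run a deterministic machine for a fixed amount of fuel; if the fuel exceeds the number of reachable states and the machine has not halted, it must have repeated a state and so never will.

(* Bounded halting check for an abstract deterministic machine.
   step returns None when the machine halts. With fuel larger than
   the number of distinct states, 'false' means 'never halts'. *)
Section FiniteHalting.
  Variable State : Type.
  Variable step : State -> option State.

  Fixpoint halts_within (fuel : nat) (s : State) : bool :=
    match fuel with
    | O => false
    | S fuel' =>
        match step s with
        | None => true                  (* machine halted *)
        | Some s' => halts_within fuel' s'
        end
    end.
End FiniteHalting.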

There's a decidable subset of infinite mathematics, too, and it's useful for program verification. See the classic Nelson-Oppen paper "Simplification by Cooperating Decision Procedures". A system with addition, subtraction, multiplication by constants, arrays, structures, and Boolean operations is decidable. Many program proof systems use that approach. But you can't multiply two variables together and retain decidability, unless you have an upper bound.

It's possible to build up finite mathematics with something like Boyer-Moore constructive theory and an upper bound on number size in such a way that it's decidable. A long time ago, I worked on that, building up array theory from first principles, without axiomatic set theory. It can be done, and I got it through the Boyer-Moore prover, but the machine proofs were really clunky. (Years later, I realized there was a way to clean that up, but I had moved on to doing other things.)

Computational Undecidability versus Inferential Undecidability

It is important not to confuse two kinds of Undecidability:

* Computational Undecidability means that there is no total computational procedure that correctly returns True or False for the domain in question.

* A proposition P is Inferentially Undecidable for a Theory T if and only if ⊬T P and ⊬T ¬P.

Hewitt2011 ?

What is [Hewitt2011] referring to? The bibliography contains multiple entries by Hewitt in 2011.

@Hewitt, why do you use such an ambiguous citation style?

Repaired reference to [Hewitt 2010]

Thanks! I have made the correction and posted the new version to the standard location at:

Mathematics self-proves its own Consistency

clarifications requested

I've tried to read and evaluate the paper. I think I have some idea of its main points but I'm not certain I've read it correctly. I could use some help with a second reading (from anyone, not just Hewitt).

The main theorem and its trivial proof, in the main text and in the abstract, refers to "mathematics".

From the abstract (emphasis added): "That mathematics is thought to be consistent justifies the use of Proof by Contradiction."

From the proof: Consistency of mathematics can be defined as follows: [...]. I'm not confident I can transcribe the notation correctly in an LtU comment so I'll just describe: Consistency of mathematics is here defined to mean that for all mathematical sentences, s, there are not proofs for both the abstract proposition described by s and the negation of that proposition.

My problem here is that I am not familiar with any conventional, technically precise enough definition of the word mathematics to make sense of those statements.

For example, taking another look at the sentence:

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.

What is the definition of "mathematics" being used here? I've heard plenty of people talk about the consistency or inconsistency of specific axiomatic systems but that's clearly not what is meant here.

Later, in the proof of the main theorem, the consistency of "mathematics" is defined in terms of a set: the set of all "sentences". I understand this to imply that we have an enumerable set of sentences, and that these are in some way "all of the sentences of mathematics".

Again, I'm not familiar with any sense of the word "mathematics" that could make suitable technical sense of this stuff.

I think a clue to the issue here might be found in the way that Hewitt summarizes "Gödel et al.":

because of contradiction with the result of Gödel et al. that "if mathematics is consistent, then it cannot infer its own consistency."

There are quotation marks around this summary of Gödel but I don't think anyone is being quoted. I don't think that that is a good summary of Gödel, either. Gödel's results were about classes of theories where by "theory" I mean particular axiomatic systems.

Whereas Gödel developed results applicable to particular axiomatic systems, at least superficially Hewitt seems to want to talk about the consistency of "mathematics in general" in some unclear way.

What is mathematics?

Thanks! This is an excellent question!

Classical mathematics is a formal system that incorporates Types and Sets (including Peano Numbers). Mathematics is thought to be consistent. But mathematics is not closed in the sense that it is provable that the theorems of mathematics cannot be enumerated by a provably total computational procedure. Theorems of mathematics are denoted by ⊢, which was invented by Frege.

Classical mathematics is the common foundation of each mathematical theory (e.g., group theory) that adds domain axioms to mathematics along with the axiom that the theorems of the theory are computationally enumerable. The theorems of a mathematical theory T are denoted by ⊢T. It is also the case that every mathematical theory (in the sense above) self-proves its own consistency (even if the theory is inconsistent!).

See Mathematics self-proves its own Consistency

further clarification req. -- re what blocks Goedel's 2nd

Thanks, Hewitt. I guess it was a good question in the selfish sense that your answer made a lot of the material easier for me to read.

To test my (partial) understanding (so far) I'd like to try to give an informal, vernacular, high level description of the work. This might help other people "get it".

At the bottom, after my "vernacular summary", I have a further clarifying question about why I can't trivially apply Goedel's second theorem here to derive a contradiction within DL.

-----------------------------------
begin summary
-----------------------------------

Direct Logic (DL) is a formal and effective, extensible, axiomatic system of mathematics.

It is formal and effective in the sense that its proofs can be suitably encoded and machine-verified, automatically enumerated, etc. For example, given a choice of any additional axioms (or none), a program could spit out all Direct Logic proofs with a primary sort of size (shortest to longer), and a secondary sort of alphabetical order.

DL is extensible in the sense that we are meant to add domain axioms as suits our purpose. For example, we could add definitions and axioms about groups, or about Conway games and numbers, or whatever else we like. In a given context we could add axioms that make the system meaningless and inconsistent but we will try to avoid that.

In its unextended form, DL is taken to include Peano numbers, some "safe" set theory, a notion of types, and the particular sets and types that are used in defining DL itself.

Among these is the set "Sentences". DL sentences are defined by a simple, constructive, induction on typed terms. A proof in DL concludes with a sentence whose "abstraction", a proposition, has thus been proved.

Conversely, if we want to prove some proposition in DL, we must first find a way to express it as a DL sentence, subject to the rules of sentence construction. (Per Hewitt, the rules limiting sentence construction play a central role in how we evade Goedel's 2nd incompleteness theorem.)

The "built in" features of DL comprise a recognizable mathematical toolbox of components universally believed to be mutually consistent. When people do really fancy math, like developing a theory of differentiation or a theory of graphs, they usually feel free to make simple and safe uses of basic arithmetic, sets, and so forth. (If these tools are one day discovered to be inconsistent then it is not only DL but pretty nearly all of math that will fall apart until and unless the inconsistency can be patched up.)

DL deploys those familiar, presumed-safe tools in a conservative way. There is no facial reason why we should suspect DL has introduced an inconsistency.

These basic math tools are sufficiently powerful that we also facially believe DL can encode and prove the arithmetic theorems of number theory.

DL has a limited ability to reflect on itself. DL theorems can talk, to some extent, about what DL can and can not prove. This leads to an interesting result:

DL contains a (trivial) proof of its own consistency. That's the short proof at the top of the "abstract" posted at the top of this LtU thread. It is stated a little more formally in the body of the paper.

Hewitt says that this proof is a valid proof within the system but allows that it is not a "convincing" proof. Yes, the system validly proves (by contradiction) that the system itself is consistent ... but an inconsistent system could easily prove itself consistent as well.

A "convincing" proof would give us some more visceral intuition about why DL's rules for constructing proofs never lead to proving both P and not P.

Nevermind though: Whether the self-consistency proof is "convincing" or not isn't important for the purposes of this paper. What is important is that the proof is "valid".

DL apparently -- and again speaking informally -- contains all the "usual" theorems of arithmetic and it contains a valid proof of self consistency.

A naive but popular misunderstanding of Goedel's Second Incompleteness Theorem would suggest that no theory like DL should exist! (Unless, in fact, DL is somehow inconsistent in a way we haven't discovered.)

What's that? A math system powerful enough for arithmetic AND containing a valid self-consistency proof? Hewitt must have made a mistake somewhere! Woo hoo! (Not really, though.)

A more careful approach, though, is to assume for argument sake that Hewitt hasn't made any big mistakes and to ask:

(1) Why doesn't Goedel's 2nd proof "go through" for this system?

(2) Is this system indeed powerful enough that we may as well regard it as a successful formalization of "mathematics"?

Right away we can say that (2) is an open question but that "yes" is facially a possible answer. Nothing obviously points to "no".

And (1) is a mechanical question we can work through. Why doesn't Goedel's second theorem apply, even though it superficially seems like it ought to apply?

Very abstractly speaking, per Hewitt, Goedel's second theorem does not apply to any system that does not include directly "self referential" sentences.

By its rules of typed construction, DL contains no such sentences (per Hewitt, I haven't absorbed the form and function of the rules yet). That is why Goedel's proof doesn't apply here.

An alternative title for the paper might be: "Goedel's Second Incompleteness Theorem doesn't quite mean what is popularly assumed".

An alternative subtitle: "Here's a formal, effective, and partially reflective system of mathematics (DL) which you can safely use for just about everything. Use caution when adding new axioms. Steer particularly clear of introducing self-referential sentences. If condition persists, consult your physician."

-----------------------------------------------------------

end of vernacular summary

-----------------------------------------------------------

Clarifying question:

Per Goedel, I can devise an arithmetic encoding of DL sentences and proofs, and an elementary definition of proof validity.

At the meta level I can make a sentence in DL that says:

There does not exist a number X such that IsAValidEncodedProofFor (X, ___)

where I fill in the blank ("___") with a fixed point of my encoding scheme specialized for that monstrous sentence schema.

If DL is actually consistent then of course it will contain no proof for this Goedel sentence I've constructed.

DL contains its elementary proof of self consistency.

Therefore, although I can construct a sentence, Self, which is a fixed point of the above schema -- I gather that I must not be able to show within DL that the number filling in the blank in Self encodes Self itself.

Can you help me better understand how DL is blocked from showing that the number filling in the blank in Self encodes Self?

Here is how I read Hewitt

Here is how I read Hewitt (if I mischaracterize, I invite Hewitt to correct me).

Hewitt is talking specifically about doing mathematics in Direct Logic, with particular foundations in mind, detailed somewhat in Appendix 2 in this paper of his.

So, in that first quotation, he's talking about adding "Proof by Contradiction," a specific inference principle he has in mind, to Direct Logic for the purposes of doing mathematics (and only for doing mathematics), and the motivation for doing so.

I find a lot of what Hewitt says makes more sense once you realize that the framework of Direct Logic is assumed. For example, if anyone else said "classical mathematics is a formal system" I would guess they were talking about ZFC, but obviously Hewitt is not.

Also, when he writes

Also, when he writes "computer scientists", read that as "Hewitt.". If you read it at all.

Computer Science versus Philosophy

Computer Science has scientific and engineering goals and constraints that differ from the philosophical concerns of taking positions and making arguments for and against them. Consequently, the achievements of computer science can be rather different from those of philosophy.

Direct Logic helps formalize mathematics and math theories

Direct Logic helps formalize mathematics and mathematical theories.

Direct Logic axiomatizes common foundations of mathematics using Types, Sets (including Peano Numbers), and Provability (Natural Deduction à la Jaśkowski). Proof by Contradiction has been a standard part of mathematics ever since Euclid.

Direct Logic also axiomatizes mathematical theories that are thought to be consistent (e.g., groups, ZFC, Hilbert spaces, etc.) and that consequently use proof by contradiction. However, practical theories (climate change, human livers, the US economy) are invariably inconsistent and consequently don't use general proof by contradiction.


Cybernetics

I can't resist taking a shot at the self-proof of mathematics from a Cybernetics point of view. The idea is that mathematics parses itself by design. See the papers "Primer for 'The Good Regulator'" and "The Good Regulator". This amounts to thingness, reality, and/or existence. It is as much cognitive as mathematical.

My $0.02, FWIW

Let's start with the following idea: the whole point of mathematics is to be consistent. Godel wasn't trying to say "Ha! You're all fools! It's all inconsistent!" He was saying that if you want to hold on to consistency, you have to give up on completeness. That you want consistency is a given. That's why it's called Godel's Incompleteness Theorem and not Godel's Inconsistency Theorem. So it may well be that we cannot prove (say) P != NP, even though it may be true that P != NP - that's what incompleteness is saying. (Although all Godel actually proved is the formal existence of nonsense, and it doesn't even necessarily apply to "interesting" questions, but I digress.)

What are some ramifications of mathematics not being consistent? Here, Hewitt makes an important and deep point: IF mathematics is inconsistent THEN proof by contradiction is invalid. And that really screws everything up, because then we have to throw out a ton of classical proofs and probably rethink the whole damned thing. We don't want to do that so we try to construct mathematics to be consistent. Have we succeeded? Who knows?

Where Hewitt crosses the line from interesting observation to (sorry) crackpot theory is in turning the correct observation around. ASSUMING proof by contradiction is valid THEREFORE mathematics is consistent. Sure, that's also true, and I accept he has proven this is true ... but he hasn't proven that proof by contradiction is valid in the first place! Everyone else can assume this for the reasons stated above (sure, my theorem may in future be invalidated by a proof that all of mathematics is inconsistent, but I can live with that). But someone who is trying to PROVE that mathematics is consistent cannot ASSUME that proof by contradiction works.

So yes, the argument is circular. IF mathematics is consistent THEN proof by contradiction is valid THEREFORE mathematics is consistent.

At least, that's the way I see it, FWIW.

Having read the paper, I

Having read the paper, I have a direct question:

If Direct Logic is inconsistency-robust, and Inconsistency Robust Logic rules out proof by contradiction, why does Direct Logic allow proof by contradiction (as used in the "self-proof of the consistency of mathematics contra Godel et al")? How is its use justified within an inconsistency-robust framework, when proof by contradiction is invalid within an inconsistent mathematics almost by definition?

Mathematical Theories versus Theories of Practice

Direct Logic deals in the following two kinds of theories:
* Mathematics (denoted by ⊢) and mathematical theories are thought to be consistent, and consequently proof by contradiction is allowed. All theories (including theories of practice) are allowed to use theorems of mathematics.
* A theory of practice T (denoted by ⊢T) (climate change, human livers, etc.) is pervasively inconsistent and consequently general proof by contradiction is not allowed.

OK, so I'm confused about

OK, so I'm confused about the paragraph that says "the above reasoning is done using Direct Logic etc.", right after the proof of consistency (which is in the domain of classical mathematics, right?)

You might find some of the

You might find some of the history/background in my recent blog post on this of interest, here.

There's also an interesting character Gerhard Gentzen who didn't make it explicitly into the post, but also fits my profile (in the post) of someone already engaged by the questions addressed by Godel when Godel's result came out. His attitude toward Godel's result, as I understand it, was remarkably different from that of later generations. He understood Godel's results. He didn't dispute them. He just didn't see them as a reason to abandon Hilbert's program (modify, but not abandon); rather, it seemed obvious to him that one would naturally want to know, for any given subset of arithmetic, what additional axiom outside that subset would suffice to prove the consistency of the subset. He made some progress, and apparently would have made more if he hadn't been starved to death in a Communist POW camp after WWII.

Thank you.

Thank you. That was an excellent read. This is a nice trip down memory lane for me, being a trained mathematician but an industrial computer programmer. Of course, Russell already showed that classical mathematics (as axiomatically formalized in 19c) is inconsistent, which proof revolves around sets that self-refer. I guess you could say the "self-proof" here shows that classical mathematics *thinks* it's consistent :)

Mathematics self-proof of consistency is not circular

The self-proof of consistency at the beginning of this post is not circular. Classical mathematics allows proof by contradiction without restriction. Therefore classical mathematics self-proves its own consistency (even if it is inconsistent!).

Mathematics is thought to be consistent.

You mean, of course, "(even

You mean, of course, "(even though it is inconsistent!)" [emphasis added].

Whether or not the "proof" is circular depends on whether you're presenting it as an example of a meaningless formal manipulation within a known-inconsistent system, or presenting it as actually establishing that mathematics is consistent. If the former, then it's not circular but I also see nothing deep about it. If the latter, then of course it's circular.

Though the first sentence of the post is just plain false. "That mathematics is thought to be consistent justifies the use of Proof by Contradiction." Even if mathematics were thought to be consistent (which is either trivial or false depending on what you choose to mean by the word "mathematics"), no, that wouldn't justify the use of Proof by Contradiction.

If your point is that "the assumption of consistency is deeply embedded in the structure of classical mathematics", then you are being circular since you started out, before the "proof", by asserting that.

"Mathematics is thought to be consistent" is a huge achievment

That "Mathematics is thought to be consistent" is a huge achievement and requires maintenance. Direct Logic is an extremely powerful system that may need further debugging. In the meantime, Computer Science soldiers on.

Your "self-proof" uses proof

Your "self-proof" uses proof by contradiction to show that something is true. Proof by contradiction cannot, by itself, be used to prove that something is true, only that something is false; so your using it to show something is true is a red flag that the "self-proof" is using more than just proof by contradiction. (You "prove" that it isn't true that mathematics is inconsistent, but you then need the Law of the Excluded Middle to establish that, since mathematics is not inconsistent, it is therefore consistent.)

The statement that "mathematics is thought to be consistent" would be false if you include in your definition of "mathematics" things that are known to be inconsistent. So evidently you are defining "mathematics" to be something that is thought to be consistent. Which is all very well, except that when you include the Law of the Excluded Middle you're talking about something that isn't thought to be consistent.

If you start by defining "mathematics" to be something thought to be consistent, use that to justify using proof by contradiction, use proof by contradiction to prove that "mathematics", as you've defined it, is consistent, and then say that you're not being circular, then you are being inconsistent.

and in point of fact,

Regardless of whether the OP is being inconsistent in his *statements* about the proof, the proof itself is invalid. The starting conditions contain a disproof by counterexample of the law of the excluded middle. In assuming P & -P we already have a disproof of the law of the excluded middle. Therefore its use in a step of the proof of anything having P & -P among its starting conditions is invalid. So inferring consistency from the absence of inconsistency depends on a 'law' that cannot hold in any system that admits the starting conditions.

Further, this is merely an example of the contagion of inconsistency. In reasoning from P & -P you can create a proof of the negation of any proposition that you can create a proof for.
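
For the record, the explosion derivation behind that contagion is only a few lines; schematically, for arbitrary Q:

    1. P          (given)
    2. ¬P         (given)
    3. P ∨ Q      (from 1, disjunction introduction)
    4. Q          (from 2 and 3, disjunctive syllogism)

One contradiction plus two utterly ordinary rules proves every proposition.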

I seriously considered advising the OP to edit/retract this odiferous thing before it became established in an archive where anyone who might otherwise become a potential employer will find it, but realized that such advice would be unlikely to be heeded.

In assuming P & -P we

In assuming P & -P we already have a disproof of the law of the excluded middle.

You know, I had to stare at this a while before I realized it's not quite right either. Ghastly easy to trip up on, this stuff. The law of the excluded middle doesn't say P and -P can't both be true, it just says at least one of them is true. I'd never thought of it this way, but this is why those two axioms keep coming up together: of the two statements, P and -P, one axiom says at least one of them is true, while the other axiom says at most one of them is true. (I gather the three Laws of Thought, attributed to Aristotle and used by Scholasticism, were that A and not-A are different from each other, that they aren't both true, and that one of them is true.)
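
Written out side by side for a single proposition P:

    Law of Non-Contradiction:  ¬(P ∧ ¬P)    (at most one of P, ¬P is true)
    Law of Excluded Middle:    P ∨ ¬P       (at least one of P, ¬P is true)

Assuming P ∧ ¬P contradicts the first of these, not the second.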

the contagion of inconsistency

I like that phrasing. This is why the situation in the early twentieth century rose to the level of a paradigm crisis, rather than an axiomatic flaw to be fixed: there wasn't just an antinomy, there was a systemic problem that took a myriad different forms and kept coming up like a bad penny.

re: Your "self-proof" uses proof

John, I think you have completely missed Hewitt's point and the role of the proof in the paper. As in, you drew your bow and fired your arrow and squarely hit just where you meant to aim, embedding it in the door through which you recently walked backwards, while the actual target is unseen, somewhere behind you.

Hewitt's paper is about a formal system with these properties:

1) There appears to be a familiar and conservatively applied foundation, sufficient for doing pretty much all of the formal mathematics you typically encounter. There's certainly plenty there to do number theory. And the building blocks are familiar enough, and used conservatively enough, that we don't really suspect any inconsistencies.

2) Within this formal system there exists a seemingly formally valid self-proof of consistency.

The conjunction of (1) and (2) superficially appears to provide a counter-example to Goedel's second incompleteness theorem. There are somewhat deep (or at least interesting) reasons why it does not actually violate that theorem.

By the way, oh look, here is this perhaps useful formal foundation for pretty much all the math you are likely to use.

It is arguably needlessly provocative and a little bombastic to call this formal system "mathematics" but those weren't your criticisms.

In any event, the Direct Logic self-consistency proof is not intended to convince you that Direct Logic is consistent. The unproven belief that Direct Logic is consistent derives from how Direct Logic is constructed (in conservative ways, from conservatively chosen parts).

If Direct Logic includes

If Direct Logic includes enough to be useful, and can prove its own consistency, that's pretty compelling evidence that it isn't consistent. And Hewitt's failure to give a simple and compelling reason why not is further suggestive, because if a way were found around Godel's no-go theorem, one would expect it to be simple and compelling.

"Hewitt's failure"

If Direct Logic includes enough to be useful, and can prove its own consistency, that's pretty compelling evidence that it isn't consistent. And Hewitt's failure to give a simple and compelling reason why not is further suggestive, because if a way were found around Godel's no-go theorem, one would expect it to be simple and compelling.

Yes, but that is something we can interrogate directly. I started to do so with two earlier comments -- first the "clarification requested" and now the inquiry at the end of the follow-up requesting further clarification.

Notice that my request for further clarification squarely and directly starts to try to apply the 2nd theorem to DL.

I should add, John, that even if DL has bugs I'm very confident that a project like what DL attempts can succeed. What he's trying to do has practical utility but it isn't mind-bendingly unexpected. He's found a nice way to formalize the way that mathematicians post-Goedel have learned to stay out of trouble. He has a very lucid historic argument for why the gals and guys contemporary with Goedel didn't think of it. And he's historically right that Wittgenstein had kind of the right sense of how to look at it -- just not the chops to come up with a worked solution. And I think he's historically right that the development of thinking about computing means he inherited the tools to connect some dots.

I think there may or may not be bugs in DL and that's worth exploring.

I think by my standards he presents DL poorly and a better presentation is worth exploring.

I suspect that because we can "meta-reason" about a debugged DL in ways that DL can't do internally, we might benefit by adding an account of semantic towers here (and it's DL all the way down that tower).

I don't think there is any (and I don't think he claims any) literal counter-example to Goedel. Rather, he claims a finer understanding of Goedel and an unexpected foundational system to take advantage of that finer understanding.

He's got my betting dollar at this point and this is coming from a guy who, like you, "has some real problems" with the way he talks about the computation power of actors.

He's got my betting dollar

He's got my betting dollar at this point

Yes. And to be very clear, I wasn't being sarcastic when I wished you luck. I on the other hand have no interest in betting against him (I dislike betting against others generally, because I generally don't wish anyone ill who doesn't wish me ill), but after some inquiries here I find nothing so far has convinced me it'd be worth my time to struggle to decode something in his work that, if it were there, would be something beautifully simple that he's evidently unable to express himself without coming across sounding like a crackpot. I'm quite aware that people working outside the mainstream can end up sounding like crackpots for reasons other than actually being crackpots (in fact, one of my many partially-written blog posts is about why this happens and how to avoid it). But in making an educated guess as to whether he is or not, the earlier business with computation theory doesn't fill me with confidence.

The self-proof that math is consistent is simple and compelling

The self-proof (at the beginning of this forum topic) that math is consistent is simple and compelling.

The self-proof is simple and

The self-proof is simple and (pardon my bluntness) obviously BS. That doesn't necessarily mean you don't have some valuable insight buried somewhere in your work, but it does mean you're both failing to present your ideas in a plausible way and failing to recognize that you're failing to present them in a plausible way.

the "huge achievement"

John I think I might play the role of Rosetta stone here.

Hewitt said "that mathematics is thought to be consistent is a huge achievement".

You replied by criticizing the self-proof of consistency in Direct Logic, the proof that uses proof-by-contradiction.

When Hewitt says "that mathematics is thought to be consistent is a huge achievement" he is not talking about the proof or about a personal achievement. He is talking about a human civilization achievement. We are collectively pretty certain we are safe from contradictions when we use a conservative set theory, basic number theory, certain rules of inference, and so forth. This set of assumptions is implicit in just about everything people "doing math" usually do. The set of assumptions has been cross-checked for consistency lots of ways with lots of partial proofs. There is no absolute proof but.... "Mathematics is thought to be consistent" and that fact is a "huge achievement [of humankind]".

The role of the proof here is not to persuade you of that consistency but to exhibit a formal property of a formal system that also happens to formalize arithmetic well enough that... wait a minute, shouldn't this formal system be ruled out by Goedel's 2nd? Only there is this technical question of the definition of a "sentence" and the way Hewitt uses it here.

I'm not certain yet that DL doesn't still have bugs but it would be nice to see the discussion be more about looking for those bugs....

I wish you luck in exploring

I wish you luck in exploring this angle. However, speaking as someone who had blogged quite recently about prospects for bypassing Godel's no-go theorem, I see no evidence to suggest Hewitt has any real insight here. I'd not be surprised were this to turn out to be of a piece, at some abstract level, with Hewitt's earlier failure to grok computability.

"any real insight"

I see no evidence to suggest Hewitt has any real insight here.

What do you mean by "real insight"? Do you mean that DL is an intuitively acceptable system that contains a valid self-proof of consistency along with arithmetic, contra [popular understanding of] Goedel? Or do you mean that DL contains a mistake? Or some third thing?

John when I was an undergrad, long ago, for a while I was in the fancy advanced studies track for undergrads at CMU. I don't know if they still have it but back then they called it "Math Studies".

It was a really good program in which they took the brightest math students to volunteer for this and made them start over from near scratch.

The curriculum lasted 3 years or so -- I forget -- and started with first order logic and ended with things like a survey of the math of general relativity.

A curious thing about that curriculum was the way it used a consistent foundation for the entire course ... but explicitly and deliberately evaded foundational questions.

We were allowed to define sets by predicates. There was no specific prohibition against recursive definitions. But for the purposes of that course if someone brought up "the set of all sets which are not members of themselves" the official answer from the curriculum would be along the lines of "uh.. let's just not do that. you can talk about that in some other course".

The point of this course was mostly to prepare people who would go on to become professional mathematicians (hence part of the reason I dropped out). The point of the way they treated foundations was that there was a "safe territory" that everyone could learn to recognize but nobody had quite formalized. It was kind of a know-it-when-i-see-it territory, backed up if you really wanted to dig into it, by various foundational studies.

Hewitt's DL looks like a plausible attempt to formalize that know-it-when-i-see-it. That's its significance. It's kind of fun trivia that one of the consequences is the valid but unconvincing self-consistency proof.

Hewitt is about as nuts as someone, in this day and age, trying to make a case for something close to fexprs.

"Real insight" as opposed to

"Real insight" as opposed to thinking you've seen something when there isn't really something there.

And yes, I've thought deeply, for many years now, about how to pursue a scientific idea outside the mainstream. Between my dissertation and my master's thesis I've worked in-and-around two different failed paradigms in computer science, and made some contribution to partially lifting the curses on both. The executive summary of my hard-won advice on alternative science is, don't sound like a crackpot and don't be one. Both of which are difficult.

I might add, that Hewitt is

I might add, that Hewitt is likely working with a disadvantage: confidence. He's had success in the past that might make him feel he'd be taken seriously because of it. But when presenting a radical idea, one has to work really hard at how to present it, regardless of whether one has an established reputation.

Gödel was certain

According to Paul Cohen:

He [Gödel] struck me as having
an almost unshakable belief in this “realist” position,
which I found difficult to share. 
His ideas were grounded in a deep philosophical belief
as to what the human mind could achieve. 
I greatly admired this faith in the power and beauty of Western Culture, 
as he put it, and would have liked to understand
more deeply what were the sources of his strongly held beliefs. 

On the other hand, according to Lakatos:

“Certainty” is far from being a sign of success; 
it is only a symptom of lack of imagination and conceptual poverty.
It produces smug satisfaction and prevents the growth of knowledge.

At the risk of stating the obvious,

At the risk of stating the obvious, my remark about confidence is not about the researcher believing they're right, but about the researcher believing that others will believe them. Godel explained himself well (well enough, as demonstrated). I fear you may not realize you've failed to explain yourself well. I knew I was right about the theory of vau-calculus years ago, but I also knew I hadn't yet found a way of explaining it that would be understood by people already familiar with Wand's result, so I kept working on the problem. If I hadn't viewed the failure of communication as a problem that needed to be solved by improving my presentation, I wouldn't have continued trying to improve my presentation.

Understandability

Sorry that you are having trouble with the material. Is there anything that is particularly troublesome? Maybe the presentation could be improved.

understood vs understandable

I said I needed to explain my result in such a way that it would be understood by people already familiar with Wand's result. That's different from explaining my result in such a way that it would be understandable by people already familiar with Wand's result. The question I needed to ask was not why they couldn't understand, indeed it wasn't certain that they couldn't; the question was why they didn't understand.

When people already familiar with Wand's result saw my result, their first reaction was, not infrequently, to figure I was probably too ignorant of the subject to understand that Wand had already disproved what I ignorantly imagined I had proved. It doesn't matter whether it was "reasonable" of them to react that way; the reaction occurred and therefore had to be dealt with. And it could not be dealt with by YELLING MY CLAIMS MORE LOUDLY. Why did they react that way? The vast, vast majority of claims that appear to contradict a previous correct result are meritless, and trying to take them all seriously would only result in having no time or energy to do anything useful. If one can get past that first reaction, which I sometimes did, we then had spectacularly dysfunctional communications because the entire conceptual framework they were using, and the one I was using, differed on multiple subtle and pervasive points, so that we would talk past each other. Even if we tried to discuss something we thought might be "the" thing tripping us up, we'd be tripped up in trying to discuss that thing by the other things that were also tripping us up.

I do not know whether there is some great, valid insight underlying what you're doing. There are, from my perspective, lots of telltales in what you're saying that cast doubt on your understanding of the subject. That could be because you're failing to explain what you're doing in a way that makes you sound credible. But from my perspective (and here I have to be considered typical of a significant segment of your potential audience), pouring time into trying to understand what you're doing isn't a sound investment for me until and unless I've got better evidence that you know what you're talking about. From your perspective, you have the nasty, but not at all unusual, problem of either  (1) finding a radically different presentation that causes you to sound immediately credible, or —even more difficult and unpleasant—  (2) figuring out that you were mistaken.

A key factor here in how your work comes across, and therefore key for anything you do from here (either 1 or 2), is that Godel's result is the sort of thing that, if there is a way around it, that way is probably both incredibly simple and so stunningly obviously right that nobody's going to doubt it once they see it. That that is demonstrably not the case with your result is in itself a problem.

Standing back and looking at the overall pattern of discussion here, it also seems you have a tendency to waffle back and forth between saying Godel doesn't apply because you've used types to prohibit the paradoxes, and saying (so it seems) that you don't need to worry about consistency because your system is robust against that sort of thing. The waffling back and forth feels complicated and suspicious, so if you want to change your image you need to avoid creating that feeling. It also doesn't help that each of those two claims is individually dubious: On the one hand, types are generally complicated rather than simple, and Russell and Whitehead had applied typing to the problem before Godel's result. On the other hand, tolerating inconsistencies doesn't feel like a suitable foundational approach, but rather like a way to live with the flaws of imperfect specs — note that you start out by saying that use of proof by contradiction is justified by belief that the foundations are consistent, which common sense says does not square with deliberately choosing foundations so they'll tolerate inconsistencies. Why would you care whether the foundations tolerate something that you supposedly believe aren't there?

Your problem, supposing you don't discover you've made a mistake somewhere, is to figure out how to give your entire presentation a radical make-over. And there's a good chance only you can effect such a make-over, working from the inside.

understandable

is that Godel's result is the sort of thing that, if there is a way around it, that way is probably both incredibly simple and so stunningly obviously right that nobody's going to doubt it once they see it.

That's the case here. You are simply forbidden from defining DL sentences using, for example, a function whose domain is the natural numbers. You can't use Goedel's 2nd theorem construction as a result.

Maybe there is some weird way to rescue the gist of Goedel's 2nd, for DL, using some yet undreamed of alternative construction. It's impossible to prove that negative. Good luck if you want to try.

I agree that the presentation of DL in this paper has some severe stylistic deficits, though, from the perspective of many potential readers.

This doesn't meet either of

This doesn't meet either of my criteria, in terms at least of presentation. It doesn't look incredibly simple, since it rests on types, and types are nowadays usually ludicrously complicated beasts (even though Russell and Whitehead's type system actually was simple). And it isn't at all obvious that it's right. It reminds me, forcefully, of a (half-remembered) story of Feynman's, that went something like this: At Los Alamos, where the military brass kept secret documents in safes in their offices, Feynman would crack their safes during brief periods when he was alone in their offices, to see if he could. He diligently reported to the higher-ups that there was this security problem that their safes could easily be cracked. They reacted to this information by issuing a memo that Feynman was not to be left alone in their offices.

re This doesn't meet either of

John, Goedel gave you a recipe for deriving a contradiction in a formal system if that formal system contains the basics about natural numbers and that system contains a valid self-proof of consistency.

I encourage you to go ahead and try to carry out Goedel's construction in DL. At some point you will want to define a function from numbers to sentences. That step is syntactically forbidden. You won't succeed, following Goedel's recipe, at deriving the promised contradiction.
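
To make the forbidden step concrete, this is the kind of map the recipe needs, sketched in ordinary Python with hypothetical names (nothing here is from the paper):

    # The inverse of a Goedel numbering: recover a sentence (modeled
    # here as a mere string) from a natural number.  Per Hewitt, DL's
    # typed grammar gives you no way to define such a numbers-to-
    # sentences function inside DL itself.

    def encode(sentence):
        return int.from_bytes(sentence.encode("utf-8"), "big")

    def decode(n):
        length = (n.bit_length() + 7) // 8
        return n.to_bytes(length, "big").decode("utf-8")

    assert decode(encode("0 = 0")) == "0 = 0"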

You're kind of missing the

You're kind of missing the point I was trying to make to Hewitt. Even you have commented that the stuff's not rigorous. Others have made comments like 'he seems to be misunderstanding what Godel says'. If he's on to something, he needs to radically change his approach to telling others about it, because as it is he's presenting something that doesn't even look worth trying to disprove. The more valid his ideas actually are, the more important it is for him to completely change his presentation to the world.

re: You're kind of missing the

I agree that it would benefit from a more rigorous presentation and a presentation that makes fewer assumptions about the reader's knowledge going in. Not so sure about your condescension, though.

Fewer assumptions about

Fewer assumptions about readers' knowledge going in? (And you call me condescending?) Simplicity is desirable, but as for assumptions about readers, I'm mainly concerned that he seems to ask readers to believe him despite the trappings of crankiness. I went to great lengths, literally and figuratively, to give him the best advice I could on how to sound as uncranky as possible (based on my own decade-or-so of experience in trying to present an extra-paradigmatic idea while sounding as uncranky as possible). It's frustrating you seem to be encouraging him to ignore my advice.

DL may choose to forbid

DL may choose to forbid whatever it wants to, but that won't prevent the definition of functions from DL syntax to natural numbers and vice versa outside of DL. And from the standpoint of the incompleteness proof, this inside/outside distinction doesn't matter.

I've read many, many crank efforts (I am not saying Hewitt is a crank) at subjugating incompleteness by "forbidding" things; it doesn't help.

Also note: incompleteness is a statement about certain specific individual first order mathematical theories; it is not a statement about "mathematics" in toto.

re DL may choose to forbid

And from the standpoint of the incompleteness proof, this inside/outside distinction doesn't matter.

It matters for the 2nd theorem because inside the system you need to combine the self-consistency proof with proof of G's paradoxical implications if G obtains.

Suppose I am wrong. Can you please walk us through the 2nd theorem's construction of a contradiction in DL? I think you won't get far before you run afoul of DL's syntactic limitations but you might prove me wrong.

Also recall my other

I also asked, what power is being lost? Interfering with our ability to prove there's a problem (supposing that is successfully done) only helps our confidence that there isn't a problem if we can see, clearly, that the power that caused the paradoxes has been reduced. And showing that clearly is going to be difficult since we have clear evidence that hanky-panky like a self-proof of consistency is possible.

re: also recall

You could build a model of DL within DL. DL is strong enough to prove that the model system is incomplete. DL is strong enough to prove that the model system contains a self-proof of consistency. DL is not able to prove that the model system is consistent. DL is strong enough to prove -- using roughly the ideas of Goedel's 2nd but differently applied -- that DL can not prove that the model is consistent unless the model is inconsistent. DL can not prove that in talking about the model it is talking about itself. In particular it can prove that G, in the model, can't be proved in the model but it can't from that import the unprovability of G within itself.
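
Schematically, writing M for the internal model, Con(M) for its consistency sentence, and G for M's Goedel sentence (my shorthand, not notation from the paper):

    DL ⊢ "M is incomplete"
    DL ⊢ "M proves Con(M)"
    DL ⊬ Con(M)
    DL ⊢ "G is not provable in M"    but    DL ⊬ "G is not provable in DL"

The step that is blocked is the identification of M with DL itself.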

Suggestions for improving the style of the paper are welcome

Suggestions for improving the style of the paper at the beginning of this forum topic are most welcome.

a simple challenge for DL

John, I sketched out how I think Goedel's 2nd incompleteness theorem would be applied to DL to derive a contradiction. Hewitt's claim seems to be that some step implied by this sketch is not valid in DL and I'm hoping he'll elaborate a bit to give us all a clearer idea.

(If anyone else sees how DL prevents this, let me know! It's a bit frustrating because I don't think there are enough and precise enough definitions in the paper to be sure exactly how DL is constructed -- so I have to make what I hope are some reasonable guesses.)

http://lambda-the-ultimate.org/node/4784#comment-76480

So what you're saying is

So what you're saying is that within the system of classical mathematics, proof by contradiction is always allowed, therefore this counts as a proof within classical mathematics.

I sort of see your point. Sort of.

But if proof by contradiction is the irresistible force, surely circular arguments are the immovable object. So we're in the sphere of comical paradoxes in classical thought, really. Of course the argument is circular, and a classical mathematician can easily point that out. It's not *explicitly* circular, I'll grant you that.

Regardless, "classical mathematics" in this sense is not even formally defined. It includes any sub-language you care to invent. Arithmetic, calculus, geometry, topology, etc. So the proof is not even self-referential - it's referential of some much larger entity that is in principle infinite in scope - like an ant contemplating the universe. Godel's results are about formally-defined systems of arithmetic with limited scope, not informal universally-scoped self-referential systems that you can break if you wave your hands hard enough. Given that, how is this result really contra to Godel?

To contradict Godel you have to show a system X which is sufficiently complex to come under Godel's remit but which is provably consistent (so proof by contradiction can be used) and then you can prove consistency using proof by contradiction, but only because you are standing on the shoulders of whoever proved X's consistency in the first place. Even then, X does not "self-prove" its consistency, unless the actual proof of its consistency is within X and not within meta-X. So you still need to offload the actual work to the guy who can actually prove the consistency of some system X within X. That would be contra Godel, and would be a significant achievement. But saying "X assumes Y, and using Y I can prove f(X), therefore X self-proves f(X)" seems like nonsense to me, sorry. You need "X *proves* Y, and using Y I can prove f(X), therefore X self-proves f(X)".

Gödel 's results apply to Math and Mathematical Theories

Mathematics encompasses Sets (including Peano Numbers). So mathematics is certainly sufficiently powerful for Gödel's results to apply. In addition, every mathematical theory builds on mathematics. The same simple proof enables every mathematical theory to prove its own consistency.

Second-order inference

I've spent a bit of time with it, and I find this system interesting, but I class the paper as "fringe logic" because:

  1. It deploys terminology in a way that is avoided by professional logicians because it is confusing. Here, the Identity and Reflection axioms and the backchaining rule suggest (it is bad that I am not sure!) that we are working with a second-order theory. Logicians avoid talking about inference systems for such theories, or use of the singly dashed turnstile relation, because these systems do not behave like first-order theories. As Stephen Simpson put it, these theories do not have proof theories; your actor model for them is something else, a computationally grounded model theory, and your "natural deduction" seems to be the same. To try to be constructive about this: can we give a more typical inference system for the theory, and if not, can we cast light on why not?
  2. The paper is hard for me to understand for reasons not directly stemming from the difficulty of the material. I say a bit more below about the kind of consistency proofs that Gentzen and Willard give, but in general, I see much more effort put into showing what the theory can do than into the equally important issue of what it cannot do. If I were to cast the paper into a map of the connections between the ideas presented, it would mostly be marked "here lie dragons". This is not a good basis for a foundational critique.
  3. The rhetoric is grand and not clearly based on the formal results, I think partly because of terminological haze. I might be in some sympathy with the idea that the quest for certainty is Quixotic, but there is something of slapstick in repurposing the fundamental concepts of the revolution in mathematical logic of the 1920s and 1930s to make the goals look absurd. If I wished to be cheap, I might say that proof theoretically, the advantages of this theory over Gödel's system PRA are the advantages that Russell ascribed to theft.
  4. It would not be surprising for a second-order theory (if it is that) to prove its own consistency, if consistency means absence of a "contradiction", because second-order theories effectively elide the distinction between truth in the model and their (preferably doubly dashed) turnstile relation.
  5. I use scare quotes about "contradiction" in this system because since there seems to be no notion of proof in the standard sense of the term, the notion of contradiction is also non-standard; contradictions appear here to be redefined to mean the presence of a pair of sentences that are both satisfiable and where one is the negation of the other. I think this may be the point that Countably Infinite was getting at with the point that soundness is a highly nontrivial hidden assumption in this kind of theory.
  6. The kind of consistency proofs that Gentzen gives for FOA and Willard gives for his self-verifying theories are described by Hewitt in terms I regard as condescending. But both of these proofs do what I do not see in this paper, namely provide strong arguments for the adequacy of the systems they describe. If Direct Set Theory proves a mathematical theorem, what confidence do we have that the proof we have would be acceptable to a mathematician? Proofs in ZFC generally are acceptable to mathematicians.

Some minor quibbles:

  • However, PL [Gödel-Löb provability logic] is a very weak theory of deduction - PL is a very tricky propositional modal logic. Its theory of frames is non-first-orderisable. The Löb axiom is the magic sauce in some very interesting, strong type theories.
  • (Gödel insinuated that Wittgenstein was “crazy.”) - I respect Wittgenstein's work very highly, but Gödel was not being unfair here. Wittgenstein was using the notion of contradiction at cross purposes to the way Turing did (Turing famously lost his temper with Wittgenstein about the interpretation of mathematical logic).
  • I don't like the existence of the trademark symbol in Direct Logic™ (per the note on the first page). At the very least, it is unfair to the literature stemming from the same-named decidable predicate calculus of Ketonen and Weyhrauch (1984).

I do not mean to be cheap. At the very least, an "internal logic of the actor model" is a very interesting thing, and many of the features of the system remind me of other interesting things that I have seen. But I would like to see less contempt for first-order solidity and more results about where the limits of the theory lie.

Computer Science versus Philosophy (again)

Charles Stewart has raised some interesting points.

A fundamental point is the difference in how Computer Science and Philosophy address issues.

In order to facilitate discussion, consideration of his points is separated into multiple posts below (ignoring as much as possible Stewart's non-substantive innuendo).

"Faced with the choice between changing one’s mind and
proving that there is no need to do so,
almost everyone gets busy on the proof."
    John Kenneth Galbraith [1971 pg. 50]

Philosophy set the initial paradigm

Philosophy set the initial paradigm and some philosophers are uncomfortable that Computer Science has entered a field that used to belong to Philosophy alone. A new paradigm is often considered "fringe" and new terminology is not initially understood.

"fringe"

Three points:

  1. Perhaps I should not use the charged term "fringe". I do not mean to say there is no contribution there, or that I don't appreciate efforts to do something new. It is just that both paraconsistency and non-first-orderisability are tricky things, and if you inhabit that space, you lose things that boring, certainty-grounded, monotone first-order theories have. This is not to say that this ground is not interesting, but if you are not clear about the costs of giving up monotonicity or first-orderisability, you will lose audience and risk not being considered relevant to the ongoing enterprise of mainstream logic. In particular, the kind of reflection-like axioms you use and interleaving of forward/backward-chaining are known and have a history in logic: the reasons these haven't been widely adopted in the theorem proving community are reasons to be doubtful of your work until you deal with them straight on.
  2. I am not an academic anymore; to the extent I consider myself still a scholar, it is as a logician. Computer science, particularly, is where I think the most exciting applications of logic lie and so it is the richest source of problems for logic; likewise philosophy is something that logicians sometimes have to do, but logic is something of an autonomous discipline that overlaps with but is not just a part of either. I like how Volker Halbach has talked of the formal sciences as an ecology that is inhabited by logic, computer science, and (some) philosophy. You should be right at home in the formal sciences.
  3. I actually like your consistency proof, as an argument. It is nice to set up systems where this kind of thing can be done. I do not think it is very surprising that we can find systems that are either non-first-orderisable or paraconsistent that can "prove" their own consistency, and the interest of the proof depends what good properties your system has. I think what you have shown doesn't tell us that Gödel has been refuted if we avoid playing games.

Paraconsistency is a special case of Inconsistency Robustness

Direct Logic is designed to support Inconsistency Robustness (see the IR'14 symposium at Stanford next summer).

Paraconsistency is a special case of Inconsistency Robustness.
* All that paraconsistency requires is that it not be possible to derive everything from a contradiction. So a logical rule to the effect that you can infer "God exists." from a contradiction is paraconsistent, because it does not infer everything.

* As a paradigm, paraconsistency resulted in a zoo of incompatible and non-intuitive systems that were not useful to Computer Science. Computer Science needs a powerful, intuitive standard system for inconsistency-robust inference. Direct Logic attempts to fill the bill.

The problem with paraconsistency...

... is that the class of paraconsistent theories includes the dialetheic theories, which I think we agree are unacceptable for mathematics. But since dialetheism is something that can be hidden, it makes providing formal criteria of acceptability harder for paraconsistent logics than for monotone logics.

I do not reject paraconsistency: Belnap and Dunn have shown that it is interesting. But its strengths I consider to be weaknesses for the kind of enterprise that I take you to be engaged in, namely providing foundations.

Inconsistency Robustness is fundamental to Computer Science

Inconsistency Robustness is fundamental to Computer Science. See Inconsistency Robustness 2014.

What theories are really like

I believe that handling inconsistencies is essential to working with large information systems, collective inference, and the like, which I would class as information science. I think failure to tolerate inconsistencies has been healthy for mathematics.

I am not barring monsters here. I just want to know what theories are really like before I claim to understand them. For non-normal modal logics, I like to see if they are equivalent to a normal modal logic, in the sense that proofs in one can be translated into proofs in the other, and likewise for unprovability. For paraconsistent theories, I like to see if they are equivalent to some monotone logic. If I can't see this, then I doubt the coherence of the system.

Two kinds of theories: mathematical versus practical

It is important to distinguish two kinds of theories and handle each appropriately:
* a Mathematical Theory should be a consistent extension of Mathematics which is the common foundation of Types and Sets (including Peano Numbers)
* a Practical Theory (e.g. climate change, the human liver) is invariably inconsistent and requires inconsistency robust inference (e.g. Direct Logic). A Practical Theory can make use of any theorem of Mathematics.
Both mathematical and practical theories should be monotonic for theoretical and engineering reasons.

Self-referential sentences are monsters that must be barred because they lead to contradictions. First-order theories are entirely inadequate for Computer Science.

Computer Science is informed by and informs Mathematics

Computer Science is informed by Mathematics and increasingly informs Mathematics. At its birth, Computer Science imported from Mathematics. But the new science soon needed to extend, and then create, new Mathematics.

A proper mathematical theory requires monotonicity. According to Anderson and Belnap in Entailment:

Any criterion according to which entailment is not transitive,
is ipso facto wrong.

Thanks for the pointer to the work of Volker Halbach.

Is the first-order restriction "playing games"?

One way of "playing games" is introducing limiting syntactic restrictions on general principles. For example, first-orderedness imposes syntactic restrictions as follows:
* on the general induction principle for natural numbers
* on the sets that can be defined by a predicate using comprehension
Philosophical reasons have been proposed for the above restrictions. But the above restrictions may not make sense for Computer Science.

No

FOL is far from a syntactic restriction on logic. The restrictions make sense for semantical reasons more intelligible to the typical computer scientist than to the similarly typical philosopher.

In terms of "logic harmony" (a philosophical concept due to Gentzen, Belnap, Prawitz, and Dummett, one that Howard and Martin-Löf have shown makes clear sense within theoretical computer science) and as I explain in my doctoral dissertation (Stewart 2000; mostly chapter two, for the intuitionistic case, together with the discussion of recursion in chapter one, and with an explanation of how classical logic complicates matters in chapters three and four and a demonstration there that these complications can be resolved) , the standard inference rules involving natural numbers (in type theory, the rules introducing the zero and successor term constructors and the induction rule) can be seen as a complete specification of the meaning of natural numbers and provides an effective criterion for acceptability of alternate induction principles. Incompleteness arises for a reason that can be understood in type theory syntactically: that the induction rule for the natural numbers cannot be constructed to satisfy what I call the decompositional part of the inversion principle (Prawitz calls this the reverse of the inversion principle), with the consequence that for some new concepts (such as universal quantification) the interaction of the natural numbers concept can interact with the new concept to increase the inferential strength beyond the two concepts separately (i.e., the theory with both concepts is a nonconservative extension of the theory with either one of the concepts removed; this generally means that it is not possible to prove cut elimination for the theories with both concepts, even though cut-elimination might hold for each of the one-concept theories). Comprehension principles appear to suffer a similar problem with respect to harmony, although there is no consensus on the right way to formulate sets in type theory.

If you look for people who think that SOL is better than FOL, I guess you will find more philosophers than computer scientists, e.g., Shapiro (2000).

(Shapiro 2000). Foundations without Foundationalism: A Case for Second-order Logic. Oxford University Press.
(Stewart 2000) On the Formulae–as–Types Correspondence for Classical Logic. DPhil thesis, Programming Research Group, University of Oxford. http://www.textproof.com/paper/dphil.html

Ohh. Your dissertation may

Ohh. Your dissertation may be exactly what I need to move me forward (well, your dissertation plus I need to launch another sustained assault on Kleene's Introduction to Metamathematics).

Good luck

Some people have complained that it is hard going. Do get in touch if you get in trouble.

First-orderedness is a syntactic restriction on general induction

First-orderedness is a syntactic restriction on general induction as introduced by Peano and Dedekind, who proved that the general induction axiomatization (see below) characterizes the natural numbers up to isomorphism.

The correct categorical formulation of the general induction principle is as follows:

The natural numbers ℕ are axiomatised as follows, where S is the successor function:
  0∈ℕ
  ∀[n∈ℕ]→ S(n)∈ℕ
  ∀[n∈ℕ]→ S(n)≠0
  ∀[n,m∈ℕ]→ S(n)=S(m) ⇨ n=m
  ∀[Y]→ Inductive[Y]⇨ ℕ⊆Y
     where Inductive[Y] is defined as follows:
       Y⊆ℕ
       0∈Y
       ∀[n∈Y]→ S(n)∈Y

The first-orderedness restriction has unfortunate consequences for what can be proved in practice because of the compactness property imposed on all first-order theories. A consequence of compactness is that no first-order theory can characterize the natural numbers up to isomorphism. Another unfortunate consequence of compactness is that no first-order theory can prove that a computer server always responds to the requests that it receives.
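
For readers who want the reason spelled out, the standard compactness argument behind the first claim runs as follows (a sketch):

    Add a fresh constant c to the language, together with the axioms
        c ≠ 0,  c ≠ S(0),  c ≠ S(S(0)),  ...
    Every finite subset of the enlarged theory has a model (interpret c
    as a large enough standard number), so by compactness the whole
    enlarged theory has a model.  In that model, c differs from every
    numeral, so the model is not isomorphic to the standard ℕ.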

This observation is not a contradiction

General induction is formulated using language from three type-theoretic concepts: the natural numbers (Nat) (I claim this is adequately specified using normal induction as it is formulated by Martin-Löf), first-order universal quantification (1U), and second-order universal quantification (2U).

Nat alone is equivalent to PRA
Nat+1U is not a conservative extension of Nat, and alone is equivalent to PA
Nat+1U+2U is not a conservative extension of Nat+1U, and allows us to prove general induction.

Hence, on the account of logical harmony as I have expressed it in type theory, we have two sorts of interference involving two kinds of universal quantifier.

Incidentally, the simple observation that general induction is a 2nd-order sentence has no obvious relevance to my observation that FOL is characterised by its semantics. It is common for 1st-year introductions to logic to begin by working directly with the model theory of FOL and treat its syntax casually until axiomatics and proof theory are tackled later in the course. If the essence of FOL was its syntactic restriction, this approach would fail to teach FOL at all. Having taught such a course, I attest that it does indeed teach FOL well.

First-order theories are inadequate for Computer Science

First-order theories are entirely inadequate for Computer Science because, for example:
* A first-order theory cannot axiomatize the integers up to isomorphism
* A first-order theory cannot prove that a server always responds to the requests received by the server
* The Tarskian model theory semantics of a first-order theory cannot handle inconsistency

Computer Science needs semantics for inference that are based on argumentation rather than Tarskian model theory. See Formalizing common sense for scalable inconsistency-robust information integration

Is allowing self-referential propositions "playing games"?

Is allowing self-referential propositions into mathematics "playing games"? Self-referential propositions do not seem to be at all useful in ordinary mathematics. Also, self-referential propositions lead to inconsistency in mathematics. So why allow them? Is this where Gödel went off the rails?

I don't think so

I think incompleteness is mathematical for two reasons. First, culturally, I've found "pure" mathematicians (unlike computer scientists) to often be condescending about logic in general, but respectful of the core incompleteness results. Second, Paris & Harrington (1977) and Friedman (2010) have shown that incompleteness is isomorphic to quite natural mathematical phenomena.

There are many accounts of self-reference in consistent first-order theories; many use familiar devices: e.g., quotation, allowing names for sentences as individuals; many use more elaborate devices: e.g., ordinal reflection principles over hierarchies of Tarski-languages, categorical universes, first-order modal theories with non-first-orderisable (i.e., Pi-1-1) frame conditions. You will not be surprised if I say that I do not think that Gödel's hierarchy of provability strength goes off any rails.

Is forbidding self-referential sentences a kind of monster barring? Lakatos seems to consider monster barring a kind of game.

(Paris & Harrington 1977). A mathematical incompleteness in
Peano Arithmetic. In J. Barwise, editor, Handbook of Mathematical
Logic, pages 1133-1142. Amsterdam: North-Holland.

(Friedman 2010) Boolean Relation Theory and Incompleteness. Lecture Notes in Logic, Association for Symbolic Logic.

Monster barring is serious business in Computer Science

Note that the Peano-Dedekind standard formulation of induction proves the Paris-Harrington Theorem that you cite. So the theorem is not an example of "incompleteness" in mathematics. The fact that the cut-down first-order version of induction cannot prove the Paris-Harrington Theorem is another argument against first-orderedness.

So far as I know, there are no good practical uses in Computer Science for mathematical self-referential sentences. Since self-referential sentences lead to inconsistency, they represent a serious security hole and must be barred.

Au contraire, Lakatos meant that monster barring is serious business when he wrote:

“But why accept the counterexample? ... 
Why should the theorem give way...? 
It is the ‘criticism’ that should retreat.... 
It is a monster, a pathological case, not a counterexample.”

The self-proof of the consistency of mathematics stands and the self-referential sentences must be barred.

Paris-Harrington and Lakatos

By the PH theorem, I mean theorem 1.2 of Paris & Harrington (1977), which is not a theorem of PA, and which weak systems (EFA, I think) can show is equivalent to the consistency sentence for PA. Is this PH theorem provable in your logic?

I think you have misunderstood the significance of the role those assertions by Delta play in Lakatos's dialogue, though maybe you can convince me that Delta speaks for Lakatos there.

(Paris & Harrington 1977) A mathematical incompleteness in Peano arithmetic. In Handbook of Mathematical Logic, Amsterdam: North-Holland. http://www.compstat2004.cuni.cz/~krajicek/ph.pdf

(Edit: bad HTML fixed, thanks CI)

Italic Tag?

I guess your post introduces an unbalanced <i> tag, which runs down from your post over all the following posts, making them italic.

Paris-Harrington is a theorem of Mathematics in Direct Logic

Paris-Harrington is a theorem of Mathematics in Direct Logic because Direct Logic incorporates the full Peano-Dedekind theory of the natural numbers.

Lakatos was serious about monster barring

Lakatos was serious about monster barring.

Computer Science is deadly serious about barring monsters like self-referential sentences because they lead to contradictions in mathematics that represent security holes.

Being first-order is an odd restriction for Computer Science

Being first-order is an odd restriction for many purposes in Computer Science.

First-order theories are incapable of characterizing (up to isomorphism) even fundamental mathematical structures such as the integers and the real numbers. Peano and Dedekind succeeded in providing axiomatizations (proof theories) that characterize the integers and reals exactly up to isomorphism. Subsequently, some philosophers have come up with quibbles that are not particularly bothersome in Computer Science.

Proof Theory versus Tarskian Model Theory

Proof theory (argumentation) is fundamental to Computer Science because it has proven useful for reasoning about the inconsistent information that is pervasive in computer systems. Consequently, Direct Logic is "all in" for argumentation (proof theory). Of course, as a result, model theorists are not happy with Direct Logic.

So far, Tarskian model theory has failed to gain similar traction with inconsistent information although some progress has been made in extending it. For example, see Berto's paper in IR'11.

Italic Tag 2

@Charles: Looks like I can even close the italic tag by posting
something with </i>. The posts should no longer be italic
from this post on. I guess the LtU system does not sanitize
posts.

Post sanitised

I've closed my mismatched tag (I had an i closed by a /a). Excess /i closing tags don't seem to be a problem.

LtU runs on an old version of Drupal, which has issues like this.

Can credible consistency proofs make use of Proof by Contradiction?

In view of the proof that mathematics self-proves its own consistency, can credible consistency proofs make use of Proof by Contradiction?

Gentzen proved the consistency of first-order Peano arithmetic using proof by contradiction. Is this an issue for the credibility of his proof?

Provability Logic is very weak in comparison to Direct Logic

Provability Logic is very weak in dealing with issues of provability by comparison with Direct Logic. An important reason for the weakness of Provability Logic is that it allows self-referential propositions.
Consequently, Provability Logic lacks the Principle of Integrity:

          (⊢Φ) implies Φ 

Gödel insinuated that Wittgenstein was crazy.

Gödel insinuated that Wittgenstein was crazy. This was unfair because it was ad hominem and didn't address Wittgenstein's substantive point that self-referential propositions lead to inconsistency in mathematics, argued as follows:

True in Russell's system means, as we have said, proved in
Russell's system; and false in Russell's system means that the
opposite [negation] has been proved in Russell's system.
Let us suppose I prove the unprovability (in Russell's system) of P
[⊢⊬P where P⇔⊬ P]; then by this proof I have proved P [⊢P].
Now if this proof were one in Russell's system [⊢⊢P]—I should in
this case have proved at once that it belonged [⊢P] and did not
belong [⊢¬P because ¬P⇔⊢P] to Russell's system.
But there is a contradiction here! 

PS. I asked Weyhrauch about our use of "Direct Logic". He thought it was OK and appreciated the reference.

use of "Direct Logic"

It's not the use of the term that I worry about, it is the trademark symbol. There seems to be an assertion of ownership over the term that was not made by Ketonen or Weyhrauch, or the various other people who have done logic using this name.

A valuable name should not be wasted

Ketonen & Weyhrauch kind of mentioned Direct Logic in passing. And their mention was for a very specialized system. At this point, the usage here is more deeply embedded in the literature than their original mention.

Embedded in the literature

Following its recognition as a simplified fragment of linear logic, direct logic has been the focus of a significant literature. Cf. DDG Google Scholar: "direct logic" linear !gscholar.

Asserting a trademark on a theory you didn't invent does not change the validity of your research, but putting trademark signs on academic terms is a danger sign for fringe science.

Direct Predicate Calculus versus Direct Logic

Weyhrauch liked to use the term "Direct Predicate Calculus" (6,430 results on Google) which appears as the official name in many of the papers with "direct logic" as a description.

Unfortunately, Google Scholar tends to be out of date in comparison to regular Google searches. However, Direct Logic shows up in Common sense for concurrency and strong paraconsistency using unstratified inference and reflection in the first search page of a Google Scholar search for "Direct Logic".

I take it that we disagree on the importance of Linear Logic for Computer Science ;-)

Precedence

The point I made is that "direct logic" has seen substantial usage by a variety of scholars since Ketonen and Weyhrauch's paper was published. I am not going to evaluate your theory on the basis of a symbol; rather, I just wanted to draw your attention to the fact that it did not create a good impression on me.

I would find it strange to think direct logic is important and linear logic unimportant. I can see how to defend the other three combinations of important/unimportant.

The fate of your research is in the hands of your successors

The fate of your research is in the hands of your successors. So far Linear Logic has not gained much traction in Computer Science. We will see what happens with Direct Logic ;-)

Removed

Removed

Thank you

This is helpful. So it does have a first-order, paraconsistent proof theory?

There are non-normal epistemic logics that seem close to this theory. See Égré (2004) The knower paradox in the light of provability interpretations of modal logic; I have a PDF preprint of this somewhere, I think. You can get a provability logic that is equivalent to (intertranslatable with) Gödel-Löb logic that has the T axiom, but sacrifices the K axiom for something weaker (this is an old result, I think due to Solovay, which Égré takes seriously as an epistemic theory).

Use of the turnstile symbol as a binary modality is not new, though I agree that this is bad for clarity, since we will also have a different turnstile relation between propositions/sequents/whatever in the metatheory.

Tricky response to Avicenna: you may take away those clubs and brands, because I don't deny non-contradiction for sentences in Delta-1-2!

Removed

Removed

Having ⊢Φ and ⊢¬Φ means having a contradiction in math

Having ⊢Φ and ⊢¬Φ means having a contradiction.

Yes but...

The law of non-contradiction, on which disproofs by contradiction rest, ∄Φ:(¬Φ ∧ Φ), is not present in an inconsistent system; therefore you cannot use it to disprove anything in such a system.

Hence while it is true that from the proposition that a system is inconsistent you can deduce that in that system

∃ Φ:(¬Φ ∧ Φ)

this does not generalize to

∀ Φ:(¬Φ ∧ Φ)

nor form a valid basis for a disproof of anything in that system.

Removed

Removed

Direct Logic is paraconsistent and minimal

Direct Logic is
* paraconsistent because a contradiction in a practical theory does not infer everything
* minimal in the sense that adding proof by contradiction to a practical theory makes it a classical theory

Removed

Removed

Fortunately, Direct Logic has the usual Boolean equivalences

Fortunately, Direct Logic has the usual Boolean equivalences for inconsistency robust inference. The equivalences make Direct Logic intuitive and powerful by comparison with alternative systems.

Direct Logic has given up on the propositions True and False

Direct Logic has given up on the propositions True and False since they don't actually say anything.

Removed

Removed

(~A & A) & B = (~A & A) is not an equivalence in Direct Logic

Of course, (~A & A) & B = (~A & A) is not an equivalence in Direct Logic.

Removed

Removed

The rule A |- (A v B) is not inconsistency robust

The rule A |- (A v B) is not inconsistency robust as pointed out in Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic reasoning and the Actor Model.

From the above paper (and others), the correct inconsistency robust rule for Or-Introduction is the following: A,B |- (A v B)

Only inconsistency robust rules are allowed for practical theories (e.g. climate change, human liver, etc.) because they are invariably inconsistent.

Removed

Removed

For inconsistency robustness, P⇒Q is not equivalent to Q∨¬P

In Direct Logic for practical theories, P⇒Q is not equivalent to Q∨¬P. Instead, P⇒Q is equivalent to (P⊢Q)∧(¬Q⊢¬P).

Of course for mathematical theories, the classical equivalence holds.

Removed

Removed

The following is not inconsistency robust: A ⊢(A v (B & A))

The following rule is not inconsistency robust: A ⊢(A v (B & A))

Not having a Φ such that ⊢Φ and ⊢¬Φ means consistency

Not having a Φ such that ⊢Φ and ⊢¬Φ means consistency of math

Approach of Direct Logic is the opposite of Willard

The approach for barring self-referential sentences in Direct Logic is the opposite of [Willard 2007], which developed sufficiently weak systems that self-referential sentences do not exist even using an untyped grammar.

Turing versus Wittgenstein

Turing differed fundamentally from Wittgenstein on the question of inconsistency when he attended Wittgenstein’s seminar on the Foundations of Mathematics [Diamond 1976]:

Wittgenstein:... Think of the case of the Liar.
It is very queer in a way that this should have puzzled anyone — 
much more extraordinary than you might think... 
Because the thing works like this:
if a man says 'I am lying' we say that it follows that he is not lying,
from which it follows that he is lying and so on. 
Well, so what? You can go on like that until you are black in the face. 
Why not? It doesn't matter. ...
it is just a useless language-game, and why should anyone be excited? 

Turing: What puzzles one is that one usually uses a contradiction as a criterion
for having done something wrong. 
But in this case one cannot find anything done wrong.

Wittgenstein: Yes — and more: nothing has been done wrong, ...
where will the harm come?

Turing: The real harm will not come in unless there is an application, 
in which a bridge may fall down or something of that sort…. 
You cannot be confident about applying your calculus
until you know that there are no hidden contradictions in it ….  
Although you do not know 
that the bridge will fall if there are no contradictions, 
yet it is almost certain 
that if there are contradictions it will go wrong somewhere. 

Wittgenstein followed this up with [Wittgenstein 1956, pp. 104e–106e]:

Can we say: ‘Contradiction is harmless if it can be sealed off’?
But what prevents us from sealing it off?

Don't feed the troll

Let's look at another attempt at adding self-consistency proof to a logic:

                |- 
  -------------------------------------
  |- Hey guys, this logic is consistent

The intended interpretation here is that if you can prove "Hey guys, this logic is consistent", then the logic is consistent (and also, hey, guys). You can add this inference rule to ZFC and the logic is still consistent! And it proves its own consistency!

I'm hearing objections that the "denominator" up there ^^^ doesn't sound mathy enough. So let me improve that:

                     |- 
  -------------------------------------------
  |- ¬exists[P:Proposition] (|- (P ^ ¬P))

So if we add that rule, have we actually proven our own consistency? Of course not. A proof of consistency isn't just a term that's supposed to be interpreted as "this logic is consistent." Look at what Godel did. He carefully modeled what it means to be a valid term and a valid proof in arithmetic and the statement of consistency amounted to a real arithmetic statement that "there does not exist a proof of false". He did not do this, as Hewitt claims, because he didn't have access to cutting edge computer science. He did this because he understood that what you intend the logic to mean is irrelevant.

I'm not sure if Hewitt is a troll looking to increase his page rank or doesn't understand what Godel did, but this is kind of ridiculous. And look, I don't want to bet against Hewitt, either. At least, not metaphorically. If someone wants to bet real money, let's talk.

re: hungry hungry trolls

Pick your favorite classical, formalizable number theory. Pick one that you and many mathematicians believe is consistent, albeit necessarily incomplete. If you really try to formalize it you'll discover it doesn't contain a proof for a Theorem of Consistency. You can go ahead and add an Axiom of Consistency and... Goedel's 2nd proof means your classical system with the new axiom is inconsistent. It's conceptually easy to make a specific prop P such that P and not P are both proved in your new system with the Axiom of Consistency.

Hewitt is displaying a different way to formalize something like number theory. Hewitt's way involves a syntactic restriction of a sort that people in Goedel's day hadn't quite conceived. Hewitt's claim is that this syntactic restriction prevents Goedel's 2nd theorem proof from applying yet leaves "classical math" otherwise pretty much intact. Not only could you add an Axiom of Consistency to this system without creating an inconsistency via Goedel's method, but in addition you don't need that axiom since Consistency is already a theorem.

Hewitt understands Goedel well and subtly. What's unclear to me, still, is whether or not his syntactic trick actually works as advertised.

Between the two of you, I think that Hewitt is not trolling and in the exchange just between you two there are either 0 or 1 trolls.

So Direct Logic makes it

So Direct Logic makes it impossible to make a statement P that says "I am unprovable", ducking around Godel that way?

This is of interest, but where is that claim proved or demonstrated? Certainly not in the paper we are talking about here, unless I'm missing something.

The paper here is attempting to self-prove consistency of classical mathematics using reductio ad absurdum, and the arguments stem from that, not from whatever syntactical properties Direct Logic has.

Are we at the end of the day just saying "what if we don't allow self-referential statements"? This is a syntactic restriction of which Godel's peers couldn't conceive? Really?

Russell and Whitehead

Passage from my dissertation:

[Russell and Whitehead] attributed the antinomies to impredicative definitions, i.e., definitions that are not strictly prior to what they define (such as the set of all sets, impredicative since the set contains itself; or the set of all sets that do not contain themselves, viciously impredicative since its self-containment leads immediately to Russell's Paradox). To eliminate impredicativity, they stratified properties (i.e., propositional functions) in the Principia by nonnegative integer type, restricting each function to take arguments of strictly lower type. [footnote: Ironically, and perhaps insightfully, the properties predicative and impredicative cannot be expressed in Russell and Whitehead's logic, because these properties pertain to properties of every type n.] However, having carefully arranged this stratification, they also introduced an axiom to collapse the hierarchy (for every proposition involving properties of type >=1 there is an equivalent proposition with properties all of type 0), for which the justification, by their own admission, was only that it was what was needed to found analysis. Even the authors considered this axiom a stopgap.

Direct Logic makes self-referential sentences impossible

Direct Logic makes self-referential sentences impossible to construct. See the syntax of Direct Logic on page 9 of Mathematics self-proves its own Consistency paying particular attention to footnote 28.

Intended interpretations

in the exchange just between you two there are either 0 or 1 trolls.

I can appreciate that the style of my last post wasn't helpful. I'll proceed more respectfully in the future.

Hewitt's way involves a syntactic restriction of a sort that people in Goedel's day hadn't quite conceived.

But let's reserve judgment on what the old timers missed for a post mortem (after we understand what Hewitt has done).

It seems to me that the possibilities are:

1. Direct Logic (DL) doesn't have a full encoding of arithmetic. Is PA there as a subset somewhere?

2. DL doesn't "understand" the reflected terms that Hewitt has added in a way that would allow Godel's (why does LtU sometimes add the umlaut and sometimes not?) proof to go through. Does DL know that these reflected terms are just data? Can it pattern match on terms and construct new ones?

3. DL is inconsistent.

You seem to be saying that you think bullet 1 is the case? I'm not sure how type theory could restrict us to the "useful fragment" of PA. The types are Nat, so which operation are you going to outlaw?

I suspect we're in bullet 2 and that Hewitt has added reflection but has not formalized how it works well enough for the logic to "get a handle on it" and construct self-referential statements. The problem is that without sufficient axiomatization to pin down what you can do with these "proof objects", you're left appealing to the intended model when you claim that you've proven consistency.

Consider: instead of interpreting Hewitt's "A, B |- C, D" as a judgment literal, let's interpret it as "A and B implies C or D" (a proposition with value True or False). Are there any rules of DL that this interpretation wouldn't satisfy? If you can't open up a Judgment or Proof and see that it's actually a piece of data, then it's possible that an interpretation like this satisfies the axioms. And then the "Proof of Consistency" is really just a statement that there is no proposition that is both true and false.

I don't know enough about DL to say whether the above works or not, but hopefully it illustrates the problem. Attempts to get rid of Godel by restricting what you're allowed to say about reified propositions can have the effect that you're no longer talking about what you intend to be talking about.

An aspect of Godel

An aspect of Godel I cheerfully admit to understanding very little, but that I note those at the time were especially impressed by, is Godelization, which basically allows him to establish his result for a logical system without having to worry about whether the logical system explicitly admits a statement "this statement is unprovable": if it's powerful enough, he says, it'll admit a statement equivalent to this statement. Via Godel numbers. Making his result appallingly difficult to avoid. Death, taxes, and Godel.

Gödel implicitly assumed a strange grammar for mathematics

Gödel implicitly assumed a strange grammar for mathematics.

It's not so difficult to rule out self-referential sentences. See the syntax of Direct Logic on page 9 of Mathematics self-proves its own Consistency paying particular attention to footnote 28.

re: "Intended interpretations"

You seem to be saying that you think bullet 1 is the case?

No, (2). I'll elaborate my tentative understanding in a reply to Shutt.

(Are you still thinking

(Are you still thinking about posting to John?)

I went through the paper looking for how he avoids Gödel. Hewitt states in the forums here that he avoids Gödel with his type scheme:

If f is a [σ]↦Sentence and t is a σ then f[t] is a Sentence [28]

But this scheme is just about avoiding overly self-referential Sentences. I don't know what it would have to do with blocking Gödel, which doesn't rely on overt self-reference.

It looks to me that the only thing stopping inconsistency is possibly a lack of axioms around proofs. He has a notation for |-^p Q, but the rules for this aren't in the paper.

To me it still seems that DL either doesn't have a proper self-consistency proof or it is inconsistent.

No fixed points in the typed lambda calculus

There are no fixed points in the typed lambda calculus and consequently they cannot be used to construct self-referential sentences a la Gödel.
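
To make the obstruction concrete, here is a sketch in Haskell rather than Hewitt's notation (the Rec type below is our own illustration of what restoring fixed points would take, not anything in Direct Logic):

  -- Self-application, the engine of the Y combinator, is ill-typed in a
  -- simply typed calculus: x would need types (a -> b) and a at once.
  -- Uncommenting the next line gives "cannot construct the infinite type":
  -- selfApply = \x -> x x

  -- Fixed points reappear only once a recursive type is smuggled in, which
  -- is exactly the "missing type" that the typing of sentences rules out:
  newtype Rec a = In { out :: Rec a -> a }

  y :: (a -> a) -> a
  y f = g (In g) where g x = f (out x x)

  main :: IO ()
  main = print (y (\fac n -> if n == 0 then 1 else n * fac (n - 1)) (5 :: Integer))

Note that y itself shows recursion in computation surviving; the claim is only that the grammar of sentences never gets a type like Rec.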

applying Goedel's 2nd to derive a contradiction in DL

Below, I have tried to sketch how to apply Goedel's 2nd incompleteness theorem to derive a contradiction in DL.

I don't understand how the definitions given evade Goedel's 2nd theorem.

Which step in the following is wrong, and why:

I think it is possible in DL to describe a Goedel number of sentences. So I'd like to stipulate that we have:

Encode : Sentence ⟶ ℕ

such that Encode[s] gives a Goedel number for sentence s

Decode :  ℕ ⟶ Sentence

such that

(∀ s : Sentence) s = Decode[Encode[s]]

I believe there is no problem defining a model of provability in DL such that:

Provable : ℕ ⟶ {0, 1}

such that Provable[Encode[s]] is 1 iff s is provable.

Provable is, of course, a partial function.

I'm less sure but see no restriction preventing us from proving, in DL:

Theorem(?):

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉) ⇒ ⌈ s ⌉

If I understand the definitions correctly, I can define a schema for Goedel sentences this way:

GS ≡ « λ [x : ℕ ] ¬ (Provable[x] = 1) »

meaning that if X is a natural number then

GS[X] ≡ « ¬ (Provable[X] = 1) »

so I can trivially define:

GSN : ℕ ⟶ ℕ

GSN[x] ≡ Encode [GS [x]]

I can compute a least fixed point for GSN. I don't think that DL's restrictions on syntax prevent this step but I could be misunderstanding something:

Gn ≡ LeastFixedPoint [GSN]

Now I have a full fledged Goedel sentence, G:

G ≡ Decode[Gn]

At a meta level we know with certainty that either DL can not prove G or else DL is inconsistent. DL doesn't claim to evade Goedel's first theorem. The issue for Goedel's 2nd is can we now use DL's theorem of self consistency, with G, to derive a contradiction within DL.

I'm not sure but I don't see why we can't also prove in DL:

Theorem(?):

⊢ ⌈ G ⌉ ⇒ ⌈ ¬ G ⌉

Using the theorem of consistency we then trivially have a theorem:

⊬ ⌈ ¬ G ⌉

and therefore:

⊬ (Provable [Encode [G]] = 1)

which is a proof of G and thus a contradiction.

Which step here doesn't actually work in DL and why? Can someone please walk me through it?

Are you sure DL allows

Are you sure DL allows pattern matching on Sentences as would be needed to write 'Encode' in step 1?

Are you sure DL allows

Are you sure DL allows pattern matching on Sentences as would be needed to write 'Encode' in step 1?

We do not need "pattern matching" to define Encode. Defining equality for the enumerable set of sentences is enough to define Encode.

To encode a sentence, S, start enumerating sentences from shortest to longer. Count them as you go. When you get to S' which is equal to S, the current count is the encoding.
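
For concreteness, a toy sketch of that enumeration in Haskell; the Sentence type and its size measure are invented for illustration, and all the construction needs is decidable equality plus finitely many sentences per size:

  data Sentence = Atom Int | Not Sentence | And Sentence Sentence
    deriving (Eq, Show)

  -- All sentences of a given size; Atom n is assigned size n+1 so that
  -- every size class is finite and the enumeration reaches every sentence.
  ofSize :: Int -> [Sentence]
  ofSize k | k < 1 = []
  ofSize k =
    [Atom (k - 1)]
      ++ [Not s | s <- ofSize (k - 1)]
      ++ [And a b | i <- [1 .. k - 2], a <- ofSize i, b <- ofSize (k - 1 - i)]

  sentences :: [Sentence]
  sentences = concatMap ofSize [1 ..]

  encode :: Sentence -> Int
  encode s = head [i | (i, s') <- zip [0 ..] sentences, s' == s]

  decode :: Int -> Sentence
  decode i = sentences !! i
  -- decode (encode s) == s for every s, by construction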

You're right

By "pattern matching" I just meant some way to inspect them, but you're right that equality is good enough and he does seem to give that.

You hit trouble here, though

I believe that, while your construction of Encode by enumerating will work, he doesn't have an axiom that will let you prove that it works. So you can't prove encode is total as needed for this step:

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉) ⇒ ⌈ s ⌉

That troubling step

That troubling step is sure an interesting one and I think that if DL works as advertised then there has to be something that prevents that step from being valid.

What I don't see, though, is anything that prevents that step from being valid. I've stared at the definitions footnoted by footnote 28 and I guess I just don't understand what they mean well enough to see this (or else Hewitt has made an error).

Let's hope we hear from him or someone else who better understands DL.

Footnote 28

If I understand correctly, the scheme around that footnote just imposes a type discipline on parameterized sentences. So this syntax:

If f is a [σ]↦Sentence and t is a σ then f[t] is a Sentence

Just means this:

If  f: a -> Sentence  and t: a, then  f t: Sentence

And is just meant to rule out "monsters" like this:

f = \x. ~(x x)
Monster = f f = ~(f f) = ~Monster

Which is fine and good but does nothing to block Gödel's theorems.

Trivial?

Pardon possible ignorance on my part here, but why is this not a trivial theorem in light of DL's "Principle of Integrity" : (⊢ P) ⇒ P ?

I read that last implication

I read that last implication informally as "which implies" rather than as a trivial implication proposition.

Still don't get it

Given that "Integrity" appears as a valid deduction step on page 22 of the paper in the OP, surely you don't mean by "that last implication" the one in the principle of integrity?

And if you're referring to the last implication in the theorem to be proved in DL ... how else would one formalize, in DL, the informal "which implies"?

Sorry if I'm missing something obvious.

I read it like this

Tom wrote:

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉) ⇒ ⌈ s ⌉

I read the last arrow as "which implies" or, in other words, as this:

(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ (⊢ ⌈ s ⌉), 
and thus (by Integrity),
(∀ s : Sentence) (Provable[Encode[s]] = 1) ⇒ ⌈ s ⌉

Thanks!

That's crystal clear now. Thanks!

Typed lambda calculus does not have fixed points.

The typed lambda calculus does not have fixed points. Self-referential sentences cannot be constructed because:
* The primitive sentences of mathematics are not self-referential
* Sentence constructors (logical connectives) do not construct self-referential sentences out of sentences that are not themselves self-referential.
* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

Which of Tom's steps does

Which of Tom's steps does this comment address? Responding to serious inquiry with talking points may work well for presidential candidates, but doesn't seem to me to be good form for a scientist.

ad hominems

Responding to serious inquiry with talking points may work well for presidential candidates, but doesn't seem to me to be good form for a scientist.

Why don't you go ahead and ask him if he's stopped beating his dog, while you're at it?

Because I haven't seen him

Because I haven't seen him beat his dog.

re: Typed lambda calculus does not have fixed points.

* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

Does that mean that in DL this is nonsense:

Decode :  ℕ ⟶ Sentence

such that [....]

because natural numbers are not sentences and therefore no such functional is permitted because of typing constraints.

And this is nonsense:

GS ≡ « λ [x : ℕ ] ¬ (Provable[x] = 1) »

meaning that if X is a natural number then

GS[X] ≡ « ¬ (Provable[X] = 1) »

because, again, GS here has a forbidden type?

Those restrictions certainly throw a monkey wrench in Goedel's way of building up G within DL itself.

slowly over marblehead

And these draconian restrictions on sentence formation are justified precisely because it is when we conflate the meta level of syntax with the subject of the theory itself that we get into these strange word games like Goedel sentences. And a kind of meta-claim here is that these draconian restrictions on syntax don't actually get in the way of proving anything useful.

Is there a clean, concise, and rigorous document that simply lays out DL as a formal system, in carefully explained baby steps, comparable to the first chapter of books about other formal systems?

Something like the stuff on pages 9 and 10 of the paper which is the topic, but more fleshed out and standing on its own?

Good luck

Tom,

I realize that you think that DL is based on deep understanding of Godel, but I suspect that it is based on deep misunderstanding.

Carefully read page 6 of his document. Hewitt seems to believe that Godel used integers to encode statements in such a way that they were overtly self-referential. In other words, he seems to think Godel used an integer encoding to write down a proposition like this:

P = (P is not provable).

In other words, the right hand side explicitly references P rather than "godel number of P". And the attempt to bar this scheme thus focuses on preventing fixpoints in sentence construction. Again I cite page 6 as evidence of this. At the top of the page he states that this mechanism will get him past the 2nd incompleteness theorem. Then at the bottom you can see specifically the examples that the typing scheme rules out (Curry's paradox and Lob's paradox). I see no acknowledgment that Hewitt understands that this still leaves him vulnerable to Godel's technique.

Also, I think you're reading the type scheme wrong. I think it's as I suggested above. You can construct a function N -> Sentences just fine. The rules aren't very clear, though, from this document, so I could be wrong. For example, there doesn't seem to be a rule that says that an identifier is a term, but there is one that says a name is a term. Is that the same thing?

Whether there is anything deep going on here is a separate question to the one of whether DL actually turns out to be consistent. I don't think your proof goes through as is, but there may be ways to fix it (e.g. do the quantification on the arithmetic side where you know that every sentence has the correct form). But I'm sufficiently convinced that there's nothing deep going on here that I don't particularly care whether DL turns out to be consistent.

As you may have inferred from the tone of some of my comments, Hewitt's approach of making over the top claims, belittling the work of Godel and others, and then responding to inquiry with canned slightly relevant talking points rubs me the wrong way and I have difficulty not responding. Maybe it's not my place to complain about it, but oh well... too late :). I'm going to pipe these Hewitt threads to /dev/null for a while. Enjoy the quiet.

P.S. I would add "peer reviewed" to the front of your wish list for DL documentation.

the type restriction

You can construct a function N -> Sentences just fine.

Not internally to DL itself, no you can't. The type restriction is the only thing standing in your way and is in that sense draconian. It's justified because it prevents Goedel's 2nd theorem construction (and similar constructions) from going through yet does not in any way seem to impede doing everything useful you'd actually want to do.

The construction in Goedel's 2nd collapses meta-reasoning about a formal system with reflective reasoning within the system itself. Hewitt's syntactic restriction on sentences puts a very sharp limit on how far that kind of self-absorbed nonsense can go.

The paper is a really frustrating presentation but the idea it conveys is as plain as the face on which your nose resides.

Wrapping up

Could you explain how you're reading the rules? Here are the ones I find relevant:

1) Boolean, Term, Sentence, List, and Set are a Type.
2) If σ1, σ2 are a Type, then Lam[σ1]->σ2 and σ1 X σ2 are a Type
3) If s is a Sentence, σ is a Type, and i is an identifier, then
    ≪lam[i : σ]→ s≫ is a Lam[σ]↦Sentence and i is a term.
4) If f is a Lam[σ]->Sentence and t is a σ then f[t] is a Sentence

So 1) says that these primitive types are types and 2) says that function and product types are types.

I read 3) as attempting to say that if s is a sentence with free variable i that ranges over a type σ, then we can form a lambda term, and it has the function type Lam[σ]↦Sentence. The phrasing isn't good, but I think that's what it means.

And 4) just says that if you have such a typed function, you can apply it and get a sentence.

Are you thinking that the fact that Nat isn't in the list of primitives is somehow key? Even though we can form functions from Set and List? In isolation, I could entertain some other such interpretation, but look at the quotes like this everywhere:

[28]Consequently, self-referential sentences cannot be constructed because fixed points do not exist for sentences because of typing constraints.

What do you think that means? It seems clear to me that his goal is to allow functions with types like N -> Sentence without allowing general fixpoints in the creation of Sentences.

re: wrapping up

I don't think that Hewitt's write-up for the formal system is rigorous.

His recent comment to me clarifies what is meant (emphasis added):

* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

This makes it essentially impossible for DL to reason internally about the queer features of Goedel sentence G.

That comment makes perfect

That comment makes perfect sense to me as referring to blocking self-reference. See page 6, Curry's paradox. The type system blocks Curry from being a statement because you can't prove that Fix(Diagonalize) = Diagonalize[Fix(Diagonalize)] is a proper invocation until you've first proven Fix(Diagonalize) to be a Sentence.

The quote doesn't mean functions from e.g. Boolean aren't allowed. Clearly the type system is set up to explicitly allow functions from non-Sentence domains. He just means that if the function takes a Sentence argument, you can't pass in a garbage expression and then prove it to be a Sentence a posteriori. This rules out overt self-reference.

Matt, I think I have got it!

Matt, I think I have got it!

Cheers,
Carl

Sorry for the duplicate post

Sorry for the duplicate post. It looks like LtU inserted a page break in the middle of this post confusing both me and my browsers :-(

Reading this thread again,

Reading this thread again, because of a new thread on Hewitt.

LtU didn't figure that out with certainty AFAICS, but the restrictions might prevent Gödel's proof from going through.

However, this point is critical:

And a kind of meta-claim here is that these draconian restrictions on syntax don't actually get in the way of proving anything useful.

The beauty of Gödel's work is that he prevents this objection, because all his machinery is standard arithmetic. Intuitively, proof-checking is so simple you can encode it as a "very simple" program (technically, a primitive recursive one), which always terminates. Since the input, the program and the output are all syntactic things — that is, you could encode them in a computer, and then read off the memory the bit patterns as huge numbers — standard arithmetic can be used to talk about this program. In fact, Gödel even used more standard arithmetic (prime numbers) for his encoding, instead of computer memory, which makes this feel even more like "standard mathematics".

This works so well that even common type theories (such as Coq) allow encoding Gödel's proof, even though they use types of just the sort that is supposed to block it.
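
For a flavor of the prime-number trick (the helpers below are our own illustration; Gödel's actual coding of formulas and proofs is more elaborate), a sequence of symbol codes can be packed into one number and recovered by factoring:

  -- Gödel-style sequence coding: [c1,c2,...] becomes 2^(c1+1) * 3^(c2+1) * ...
  primes :: [Integer]
  primes = sieve [2 ..] where sieve (p : ns) = p : sieve [n | n <- ns, n `mod` p /= 0]

  encodeSeq :: [Integer] -> Integer
  encodeSeq cs = product (zipWith (^) primes (map (+ 1) cs))

  -- Recover the sequence by reading off the exponent of each prime in turn.
  decodeSeq :: Integer -> [Integer]
  decodeSeq n = go primes n
    where
      go _ 1 = []
      go (p : ps) m = (count p m - 1) : go ps (m `div` (p ^ count p m))
      count p m = if m `mod` p == 0 then 1 + count p (m `div` p) else 0

  -- decodeSeq (encodeSeq [3,0,5]) == [3,0,5]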

Gödel's "self-referential" propositions are harmful for CS

Gödel's "self-referential" propositions are harmful for CS.

Gödel exploited a hole in his notation for mathematics to use fixed points to create the "self-referential" sentence "I am not provable."

However, using types, such fixed points do not exist. Consequently, Gödel's "self-referential" proposition does not exist.

There are no practical uses for "self-referential" propositions in Computer Science. Also, allowing such "self-referential" propositions leads to mathematical inconsistencies.

See Gödel's "self-referential" propositions are harmful for Computer Science

There are no practical uses

There are no practical uses for "self-referential" propositions in Computer Science. Also, allowing such "self-referential" propositions leads to mathematical inconsistencies.

No, it doesn't. Self-referential propositions give rise to incompleteness due to undecidability.

Practicality of self-referential propositions?

Mathematics is consistent if only because mathematicians will keep it that way. So Hewitt's favorite foundation, Classical Direct Logic, reflects social reality by having a theorem stating its own consistency.

By Gödel's second incompleteness theorem (though I'm not sure I really know the details), it would be tricky to add self-referential propositions to a system that already has a self-proof of consistency, at least without introducing actual inconsistency. And what would we gain? Hewitt claims there are no practical uses for self-referential propositions.

Self-referential propositions

As far as I can tell, Hewitt's approach to avoiding Godel's incompleteness theorem through types is based on a misunderstanding of how it works. He seems to think that Godel employed some kind of a loophole to construct a sentence that directly expresses "this statement is not provable." What Godel actually did almost certainly goes through just fine in Direct Logic, except we could never obtain a formal definition of Direct Logic to provide the details.

Preventing self-referential propositions is usually required to prevent inconsistency ("this statement is false"), not incompleteness. Incompleteness you can't avoid, by Godel.

Gödel's "I am not provable." used a loophole he created

Gödel's "I am not provable." used a loophole he created in his notation for mathematics that mistakenly allowed fixed points.
Such fixed points do not exist using types.

Not so

Godel's result would not be celebrated if it relied on overtly self-referential propositions that could be blocked with a type system. He observed that statements about arithmetic (none of them self referential) can themselves be encoded in arithmetic, as can proofs of statements. He then constructed a proposition of the form "Proposition #N is not provable" where N happens to be the very numeral of that proposition. You can't block this proposition with a type because it's well typed. The fixed point construction used to find the appropriate N just involves the standard rules of arithmetic and logic. You can block this construction but only by blocking the ability to do basic reasoning about numbers.
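
The programming analogue of that fixed-point construction is the quine: a program whose output is its own source, built with no "self" primitive at all, just by applying a piece of text to a quoted copy of itself. One classic one-line Haskell instance (it prints s once plain and once quoted):

  main = putStr s >> print s where s = "main = putStr s >> print s where s = "

Blocking "overt" self-reference does nothing against this: every ingredient is an ordinary, well-typed operation on strings.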

inside and outside

With pen and paper, in principle, someone can construct Goedel sentences (aka compute certain fixed points) even in whatever Hewitt's system is. I am pretty sure that Hewitt is saying that, in his system, you cannot deduce from such an actual self-referential sentence that it is self-referential. I.e.: here is a Goedel sentence, X, but in the system you can't actually prove that X is a Goedel sentence. Because "types".

Whether he is correct about that or not I'm not sure because I've never seen any attempt to present the system in a lucid, rigorous way.

Why it might be important or interesting is a mystery to me.

Maybe

I know you've attributed some such position to Hewitt in several places in this thread, but I haven't ever seen him stake out such a position in a clear way. Everything I've read from him is consistent with him just misunderstanding the theorem. Feel free to point to a counterexample. I agree with your last point that even if you're right about what he has in mind, the importance needs to be explained.

re maybe

My guess as to Hewitt's intent is condensed around this old comment, I think:

comment from a couple years ago

The relevant quote from Hewitt:

* Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

The problem types solve

Let

X : Sentence -> Sentence 
X = λx. (x x) is not provable

Then,

X X = (X X) is not provable.

would give a Godel sentence. This construction is not permissible, though. Why? "Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence."

This is what I think Hewitt means, but as I know you know, this misses the role of Godel numbering to make do without this kind of overt self-reference.

Afaics the result doesn't

Afaics the result doesn't need the system itself to be able to determine which sentence is a Gödel sentence; at heart it's a diagonalization, the beauty of which is that you can be sure the diagonal has an element in every row even if you don't know which row you're interested in. (I've been exploring this stuff lately, to get my thinking straight.)

Rosser's version, too

I agree that a simple way to approach Godel's theorem is by first noting that Arithmetic is strong enough to encode the question of whether a Turing machine halts. Then things become simple. Here's a nice blog post that shows an extension of this idea to Rosser's incompleteness theorem.
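
The shape of that reduction, as a hedged Haskell sketch (checkProof, Halts, and neg are hypothetical stand-ins, not anything from the linked post): if a sound theory proved or refuted every halting statement, interleaved proof search would decide halting, so some halting statement must be neither provable nor refutable.

  data Sentence = Halts Int Int | Neg Sentence  -- toy arithmetized statements

  neg :: Sentence -> Sentence
  neg = Neg

  -- Search for a proof of s in parallel with a proof of (neg s). Soundness
  -- makes any answer correct; the search terminates on every s only if the
  -- theory decides every such sentence -- which undecidability of halting forbids.
  decideHalts :: (Int -> Sentence -> Bool) -> Sentence -> Bool
  decideHalts checkProof s = go 0
    where
      go n
        | checkProof n s       = True
        | checkProof n (neg s) = False
        | otherwise            = go (n + 1)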

Cool

Thanks. :-)

Enumerating strings does not create Godel's "self-ref" sentence

Simply enumerating strings does not enable the creation of Godel's "self-referential" sentence.

Pages 38-39 of Classical Direct Logic give the following explanation:

In Direct Logic, fixed points on propositions do not exist and consequently Gödel's proofs are not valid. Even in the closed theory ℕ, a “self-referential” sentence cannot be constructed by enumerating strings for sentences. Let SentenceFromStringWithIndex be a procedure that enumerates sentences that can be produced by parsing mathematical strings of the closed theory ℕ and IndexOfSentenceFromString be a procedure that returns the index of a sentence that can be obtained by parsing a mathematical string.

I_am_not_provable:Proposition ≡ ⌊SentenceFromStringWithIndex.[Fix[Diagonal]]⌋
where Diagonal.[i:ℕ]:ℕ ≡ IndexOfSentenceFromString.[⌈⊬ℕ ⌊SentenceFromStringWithIndex.[i]⌋⌉]
The fixed point operator Fix cannot be defined using the typed lambda calculus because of the following:
ℕtoℕ ≡ [ℕ]↦ℕ
Helper.[f:ℕtoℕ]:ℕtoℕ ≡ [x:([⍰]↦ℕtoℕ)]→ f.[x.[x]]
Fix.[f:ℕtoℕ]:ℕtoℕ ≡ (Helper.[f]).[Helper.[f]]
The missing type ⍰ does not exist.

Consequently, the above attempt does not construct a string S such that ⌊⌈S⌉⌋ ⇔ ¬⊢⌊⌈S⌉⌋

Furthermore, it is possible to prove in Classical Direct Logic that such a string does not exist.

That diagonal is wrong

I believe you're correct that this approach won't work, because Diagonal here is likely to be monotone increasing. It thus won't have a fixpoint (Tom, doesn't this comment apply to your previous proof sketch as well? Why does LeastFixedPoint [GSN] exist?)

For this reason, the approach Godel originally used was not to enumerate sentences, but to enumerate formulas. I think in your approach we can consider a formula to be a proposition with a free variable, which we can take to be a Natural number. In other words:

Formula ≡ [ℕ]↦Prop
IndexToFormula : [ℕ]↦Formula
FormulaToIndex : [Formula]↦ℕ

Now define

Diag[i:ℕ]:Prop ≡ ⊬ IndexToFormula[i][i]

Bad ≡ FormulaToIndex[Diag]

Diag[Bad] = ⊬ IndexToFormula[Bad][Bad]
= ⊬ Diag[Bad]

As you have done, we can restrict to the formulas that can be defined by parsing a mathematical string, since you consider your set of sentences to be uncountable. Then the definition of Diag should be included in that set, since I've just given its definition.
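
Stripped of the turnstile, the engine of this construction is just Cantor's diagonal, e.g. in Haskell:

  -- However predicates on naturals are listed, the diagonal predicate
  -- disagrees with the i-th listed predicate at input i, so it appears
  -- nowhere in the list. Diag above is this, with ⊬ in place of 'not'.
  diag :: [Int -> Bool] -> (Int -> Bool)
  diag fs i = not ((fs !! i) i)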

Since Proposition uncountable, no one-to-one [Proposition]↦ℕ

Since Proposition is uncountable, there is no one-to-one [Proposition]↦ℕ

Consequently, Proposition cannot be indexed (enumerated) by ℕ.

Feels like meta-theory confusion to me

I didn't follow this discussion but it looks like meta-theory confusion to me.

The point of Godel is that if you block his argument inside a theory, that doesn't imply you've proven your system consistent, since Godel then used a meta-theoretical argument. The argument still holds.

First-order dogma: non-existent theory/meta-theory distinction

Gödel's first-order dogma insists on a non-existent theory/meta-theory distinction whereas theories about theories are just more ordinary theories in the specialized field of meta-mathematics.

I might not like that an infinite number of primes exists

Well. I am following this discussion half-heartedly since there's nothing more interesting on LtU so I am going to state something which may be very uninformed.

Godel's proof is about a weak object logic which can do basic arithmetic, but the proof, I gather, is stated in ordinary mathematics, pretty much employing the conceivably strongest meta logic there is.

As an analogy to your argument: I might not like that an infinite number of primes exists. I therefore restrict an object logic that it can only reason about the first 10,000 numbers or so. I cannot prove anymore, from within the object logic, that there are an infinite number of primes. I then state, within the object logic, that a finite number of primes exist; something which cannot be refuted from within the object logic.

What did I do? Did I give a complete, sound, object logic, or, through meta-logical reasoning, can I derive that the object logic is now unsound?

Looks to me you did the second, the object logic now is unsound. Well, that's what I am betting upon. Your object logic doesn't have a model anymore.

Foundational bootstrapping maintenance

Godel's proof is about a weak object logic which can do basic arithmetic, but the proof, I gather, is stated in ordinary mathematics, pretty much employing the conceivably strongest meta logic there is.

Well, for something to be a foundation of mathematics, it would have to be suitable as a meta-logic for everything, including itself. Gödel's incompleteness theorems talk about FOA, and their proofs use no meta-reasoning that can't be expressed in FOA, which is why they're such interesting results for foundationalism.

But I do think your broader caution against confusing the meta-logic with the object logic is good. Consider it this way:

If Classical Direct Logic can talk about itself, but it can't come to all the same conclusions for a slight variation of itself, then it's hardly going to be a good framework as we study and innovate on the way we do mathematics. How would we construct slight variations of Classical Direct Logic within itself?

This reminds me of stories about how tough it is to modify a bootstrapped compiler.

If I try to reason about numbers, I throw away numbers

I think Godel is that harsh. If I try to think completely about numbers, then I throw away (the model of) numbers. Because there's always the meta-logical/mathematical Godel argument about arithmetic which only relies on some form of quotation/enumeration (and nobody doubts quotation as a perfectly valid manner of reasoning mathematically); and that argument renders the model invalid (trivializes it into nothing, if you want to be more correct) for any form of completeness for arithmetic.

As all the Wikipedia pages state, it's either soundness or completeness but not both.

I have no idea what Hewitt is doing, I didn't read the paper, but he either cannot reason about arithmetic, or he has simply proven that his logic doesn't have a model. That's my bet.

There is no way out.

Up to isomorphism, there is just one model of ℕ in Direct Logic

Up to isomorphism, there is just one model of the Peano/Dedekind axioms for ℕ in Direct Logic.

Unfortunately, in the first-order logic used by Gödel there are nonstandard models with infinite integers. Consequently, first-order logic is unsuitable for the foundations of Computer Science.

By the argument of Church/Turing, the Direct Logic axiomatization of ℕ is inferentially incomplete.

How do I calculate using this model?

As the 2nd order model of natural numbers is abstract, how do I write the number two? How do I add two numbers together?

Formulas are enumerable

If you can't enumerate formulas then your axioms haven't modeled them completely. I don't see anything novel or interesting about this limitation of your approach.

Strings are countable but propositions are uncountable

Strings are countable but propositions are uncountable because there is a distinct predicate for every real number that holds for just that number.

Ok

Ok. That's fine. But then why doesn't the above argument go through with strings? I'm mobile right now so I'm not going to attempt to mark it up, but with some casts to and from string it should still work.

There are no casts in math from propositions to strings

There are no casts in mathematics from propositions to strings.

Not required

If you study the above argument, you'll see that it doesn't require the ability to find the string corresponding to an arbitrary proposition. It requires only being able to encode the formula for Diag. You can do that manually if you want - outside of direct logic. So the map from naturals to formulas doesn't need to be onto (or 1-1 for that matter)... It just needs to include Diag in its image.

Types in Direct Logic mean that fixed points do *not* exist.

Types in Direct Logic mean that fixed points do not exist. For example,

Let SentenceFromStringWithIndex be a procedure that
enumerates sentences that can be produced by parsing mathematical
strings of the closed theory ℕ and
IndexOfSentenceFromString be a procedure that returns
the index of a sentence that can be obtained by parsing a
mathematical string.

I_am_not_provable:Proposition ≡ ⌊SentenceFromStringWithIndex.[Fix[Diagonal]]⌋
    where  Diagonal.[i:ℕ]:ℕ ≡ IndexOfSentenceFromString.[⌈⊬ℕ ⌊SentenceFromStringWithIndex.[i]⌋⌉]
The fixed point operator Fix cannot be defined using the typed lambda
calculus because of the following: 
   ℕtoℕ ≡ [ℕ]↦ℕ
   Helper.[f:ℕtoℕ]:ℕtoℕ ≡ [x:([⍰]↦ℕtoℕ)]→ f.[x.[x]]
   Fix.[f:ℕtoℕ]:ℕtoℕ ≡ (Helper.[f]).[Helper.[f]]
The missing type ⍰ does not exist.

Consequently, the above attempt does not construct a string S such that ⌊⌈S⌉⌋ ⇔ ¬⊢⌊⌈S⌉⌋

Ttyl

In some ways you seem like a nice guy, but your discussion style in this forum is reminiscent of a support center in Bangalore.

Technical discussions on LtU can be difficult & frustrating :-(

Technical discussions on LtU can be difficult and frustrating :-(

This particular topic requires extreme precision.

Comments and suggestions on how to improve the articles are greatly appreciated!

It would be great to have better tools.

True

This topic does require precision. Quotation (with eval) is known to be dangerous. Change the negated turnstile in my definition of Diag into negation and you might have consistency issues. If you ever publish details on Direct Logic beyond a grammar, we might be able to tell. Good luck.

Liar Paradox derives an inconsistency using fixed points

I think that you are alluding to how the Liar Paradox [Eubulides of Miletus] derives an inconsistency using fixed points:

Liar:Proposition ≡ Fix[f] // note that this fixed point does not exist in Direct Logic   
    where  f[Φ:Proposition]:Proposition ≡ ¬Φ
1) Liar ⇔ ¬Liar    // fixed point
2) ¬Liar           // proof by contradiction from 1)
3) Liar            // chaining 1) and 2)
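
Read through Curry-Howard, the derivation above transcribes into any language whose types do admit such fixed points. A sketch in Haskell, whose recursive newtypes supply exactly the fixed point that Direct Logic refuses (the "contradiction" typechecks and simply diverges when forced):

  import Data.Void (Void)

  -- The Liar's fixed point Liar ⇔ ¬Liar, rendered as a recursive type.
  newtype Liar = MkLiar (Liar -> Void)

  notLiar :: Liar -> Void      -- step 2): ¬Liar
  notLiar l@(MkLiar f) = f l

  liar :: Liar                 -- step 3): Liar
  liar = MkLiar notLiar

  contradiction :: Void        -- well-typed; evaluating it loops forever
  contradiction = notLiar liar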

That's one way to do it

With quotations, the fixed points can sneak up on you, though.

Types do an excellent job of making fixed points not exist

Types do an excellent job of making fixed points not exist.

There is no way for them to creep in :-)

Double post :-(


Formulas are not enumerable

If you can't enumerate formulas then your axioms haven't modeled them completely.

That's right! Hewitt's "mathematics is open" argument claims to show that if the theorems of Classical Direct Logic could be enumerated, there would be a contradiction. This suggests (though I'm not sure it actually proves) that the Proposition and [ℕ]↦Proposition types can't be enumerated either.

Still, I think your construction might work. Although Hewitt claims propositions are uncountable, most of the formal structure for proofs like "mathematics is consistent" or "mathematics is open" seems to stay entirely within a fragment that can be formalized the usual way. If a full Gödel-based inconsistency proof fits within that fragment, then you only need to index that fragment's formulas.

IMO, this is unfortunate. I hope no fragment is powerful enough for that.

I don't see anything novel or interesting about this limitation of your approach.

I have a system in progress that has the same limitation, which is partly why I find this so interesting. Namely, it's a module system, and while I could postulate that the number of modules installed is countable, I don't think that's a question module authors would care to ask. They might care whether the modules are consistent; at least I do.

Slightly different

If you can't enumerate formulas then your axioms haven't modeled them completely.

That's right! Hewitt's "mathematics is open" argument claims to show that if the theorems of Classical Direct Logic could be enumerated, there would be a contradiction.

Hmm. By 'theorem', he means proven theorem, whereas I just mean a proposition with a free variable -- a function definition, basically. Function definitions, being finite data, should be trivially enumerable. If you haven't modeled the formulas well enough to be able to treat them as ordinary data, then it is possible to "assert consistency" without inconsistency, but you haven't really avoided Godel because you haven't actually asserted consistency in a meaningful way. I don't think that approach is interesting.

How is this related to your system? Which limitation do you mean?

Function definitions, being

Function definitions, being finite data, should be trivially enumerable.

I think Hewitt would say something about how there's a sentence for every real number. In the articles (e.g. Inconsistency Robustness in Foundations), I think this is supposed to be justified by the more compelling idea that the notation itself is built of live Actors. (But I'm not sure this idea is actually needed in any of the proofs under discussion.)

In my module system sketch, a function definition may reference an import, and imports refer to authors, who are not necessarily enumerable. (I'm not sure this is crucial to any of the proofs either. The proofs would probably only need to cite a finite number of authors, which perhaps could be specifically included in the enumeration.)

If you haven't modeled the formulas well enough to be able to treat them as ordinary data, then it is possible to "assert consistency" without inconsistency, but you haven't really avoided Godel because you haven't actually asserted consistency in a meaningful way.

Aren't there systems in between, where a theorem of consistency at least has some use, but it doesn't lead to inconsistency? Hmm, after studying this all day, I get the impression most definitions of proof verification require at least as much induction as system IΔ0 to account for the inductive structure of proofs, and that system is already too powerful to prove its own Herbrand Consistency (in the sense of "Herbrand Consistency" defined in that paper, since the notion of consistency itself somehow changes at that level). Any weaker system would have a hard time being its own meta-logic, or even a meta-logic of anything, unless perhaps the structure of a proof was somehow not inductive. (Willard's self-verifying theories are apparently weaker, but paywalls keep me from knowing what meta-logic Willard uses.)

Anyway, all I'm hoping for with this is to find a place we can investigate top-level claims about mathematics, without being so fickle as to avoid claiming we're being consistent. Secondarily, I find proof by contradiction to be self-evident only because it's based on an implicit claim of consistency, so it would be fun to turn it around. If I simply don't get to enjoy either of those things, that's not too bad.

To state the obvious, adding

To state the obvious, adding in an uncountable infinity of additional sentences is irrelevant to whether the countable subsystem is consistent, unless the logic is nonmonotonic. I don't atm recall a claim of nonmonotonicity (although tbh the general trend has been to make more exotic claims rather than fewer).

The countable subsystem

Yes, but I was speaking up because I didn't think it was absolutely clear we could call anything "the countable subsystem" or that we could prove inconsistency from it.

At this point I'm mostly convinced we can (although the details are unclear), so I'm just explaining myself.

I don't atm recall a claim of nonmonotonicity (although tbh the general trend has been to make more exotic claims rather than fewer).

I think Inconsistency Robust Direct Logic is a higher-order extension of Ketonen and Weyhrauch direct logic, which is like relevance-logic-meets-classical-logic. That's non-monotonic by design, but any given lemma makes sure to state explicitly (with a turnstile) the non-monotonic assumptions it's using, and I've never seen Hewitt put uncountably many assumptions in a turnstile.

Classical Direct Logic is the same extension applied to classical logic rather than direct logic, so I do think that Classical Direct Logic is monotonic.

The self-proof of consistency is in Classical Direct Logic, and inconsistency would be a big dealbreaker there, so it seems like that's the system under suspicion.

I don't think it's inconsistent

I thought about it more and decided DL probably isn't inconsistent. I don't see any reason that you couldn't give erasure semantics to Direct Logic as follows:

- Replace ⊢_A B with A -> B
- Erase quoting. ⌈x⌉ becomes x, ⌊x⌋ becomes x
- Replace Sentence[t] and Expression[t] with t

I don't know the details of Direct Logic enough to say for sure. Maybe this wouldn't work with "inconsistency robustness". But the fragment I understand would support these semantics and therefore isn't inconsistent due to the erasable features above.
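
To make that concrete, here is a minimal sketch of the erasure over a hypothetical fragment of the syntax (the datatype is mine, and I have left out the type-level step that turns Sentence[t] and Expression[t] into t):

-- Hypothetical fragment, just enough to state the erasure.
data Prop = Turnstile Prop Prop   -- stands for ⊢_A B
          | Quote Prop            -- stands for ⌈x⌉
          | Unquote Prop          -- stands for ⌊x⌋
          | Imp Prop Prop
          | Atom String

-- Turnstiles become implications; quoting simply vanishes.
erase :: Prop -> Prop
erase (Turnstile a b) = Imp (erase a) (erase b)
erase (Quote x)       = erase x
erase (Unquote x)     = erase x
erase (Imp a b)       = Imp (erase a) (erase b)
erase p@(Atom _)      = p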

Erasing to Girard's paradox?

We're on different parts of this. Quoting is the part I understand the least of all, but I think I can tell you "inconsistency robustness" is unheard of in Classical Direct Logic.

Considering the Proposition and Type types, I wonder what consistent system you think it would erase to. I'd be worried about Girard's paradox.

I've only skimmed Charles Stewart's dissertation, but it's all about classical theories with propositions-as-types. Maybe erasing to one of those theories is an option.

Girard's paradox concerns the nonexistent universal type

Girard's paradox concerns the nonexistent universal type.

Charles Stewart's dissertation results are limited to first-order logic :-(

Oops

Girard's paradox concerns the nonexistent universal type.

Oops, good point. I forgot this system's polymorphism uses dedicated syntax for type parameters at the definition level, rather than using a Type type.

Since Proposition uncountable, no [ℕ]↦Proposition is onto

Since Proposition is uncountable, no [ℕ]↦Proposition is onto Proposition and instead picks out an arbitrary proper subset of Proposition.

Consequently, Proposition cannot be indexed (enumerated) by ℕ.

Still pondering this diagonalization

Formula ≡ [ℕ]↦Prop
IndexToFormula : [ℕ]↦Formula
FormulaToIndex : [Formula]↦ℕ

Now define

Diag[i:ℕ]:Prop ≡ ⊬ IndexToFormula[i][i]

Bad ≡ FormulaToIndex[Diag]

Diag[Bad] = ⊬ IndexToFormula[Bad][Bad]
= ⊬ Diag[Bad]
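
Transcribing the construction into a Haskell-flavored sketch makes the assumptions explicit; the two coding functions are left as stubs because they are exactly what is in question:

data Prop = NotProvable Prop        -- stands for ⊬ …
type Formula = Integer -> Prop

indexToFormula :: Integer -> Formula
indexToFormula = undefined          -- assumed, not constructed

formulaToIndex :: Formula -> Integer
formulaToIndex = undefined          -- assumed, not constructed; there is
                                    -- no way to inspect a function value

diag :: Formula
diag i = NotProvable (indexToFormula i i)

bad :: Integer
bad = formulaToIndex diag

-- If indexToFormula inverts formulaToIndex, then
--   diag bad = NotProvable (indexToFormula bad bad)
--            = NotProvable (diag bad)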

Although there were some problems with this (in particular the assumption that FormulaToIndex can actually be implemented), I still wonder whether it's possible to patch this up using a different Formula type.

It's not that I have any particular idea for how to patch it up. I just don't follow Hewitt's argument for impossibility: "The missing type ⍰ does not exist."

At this point my working assumption is that it is indeed impossible to patch it up while keeping the use of ⊬, but it is possible to replace ⊬ with an elaborate formula that explicitly states every inference rule of Classical Direct Logic. This Gödel sentence would then fail to interact with the self-proof of consistency, so it wouldn't be a problem.

(This came up again when I mentioned my lingering doubt in this thread.)

Although strings are enumerable; sentences are not

Although strings are enumerable in Classical Direct Logic, sentences are not enumerable. Consequently, propositions are also not enumerable.

See pages 9-12 of Classical Direct Logic.

Philosophers made ideological commitment to "First-order Thesis"

Contrary to the practice of general mathematics, many philosophers made an ideological commitment to the "First-order Thesis" because it gave them employment by making it possible to prove certain results about first-order theories that are false for practical mathematics.

See pages 33-38 of Classical Direct Logic.

Philosophers celebrated Gödel's results: their jobs depended

Philosophers celebrated Gödel's results because their jobs depended on adherence to the first-order dogma. Since philosophers couldn't prove anything significant about practical mathematical theories, they cut them down to unrealistic first-order theories where results could be proved (e.g. compactness) that did not hold for practical mathematical theories.

In the famous words of Upton Sinclair:

“It is difficult to get a man to understand something, 
when his salary depends on his not understanding it.” 

Not helpful

Assuming that all of the mathematicians of the last century were part of a big conspiracy doesn't seem like a very useful thing to do.

First-order dogma is too limiting leading to many problems.

Mathematics is doing fine. However, the first-order dogma is too limiting, leading to many problems. Consequently, mathematical foundations of Computer Science must reject the first-order thesis.

It's important to understand underlying motivations of philosophers to understand how they got trapped in the first-order dogma.

Skeleton of deriving inconsistency by Gödel

Preventing self-referential propositions is usually required to prevent inconsistency ("this statement is false"), not incompleteness. Incompleteness you can't avoid, by Godel.

I was talking about inconsistency, though. :)

What Godel actually did almost certainly goes through just fine in Direct Logic, except we could never obtain a formal definition [of] Direct Logic to provide the details.

Hm, right. Let's see if I understand what kind of formal definition you're looking for and how it would be susceptible to Gödel's theorems.

Hewitt's "mathematics is open" argument claims the theorems of Classical Direct Logic can't be computationally enumerated, since otherwise (if the proof is correct) there's a contradiction in the form of the Kleene-Rosser paradox. So we at least shouldn't expect the Proposition type to be classified by any closed, computationally enumerable formal system.

If we seek that kind of formal system for Classical Direct Logic anyway, we'll have to say the Proposition type has instances we can't construct. Whatever system we settle on, maybe it's better to call it a closed fragment of Classical Direct Logic. I think Hewitt is already showing us the rules for this fragment, but not necessarily in a well organized and fully fleshed out way. (Hewitt does mention something about uncountable syntaxes that identify real numbers or actors, but I don't see those being necessary for the argument at hand, so maybe we can take the fragment that excludes those.)

Gödel's first incompleteness theorem might go through for the closed fragment. If so, I expect it to show that the closed fragment is incomplete for its own theorems. The Gödel sentence P would be equivalent to saying that P cannot be proved in the closed fragment.

As trumpeted in this thread, Classical Direct Logic self-proves the consistency of all the provable propositions in its own Proposition type. If we can show that every fragment-provable fragment-proposition is a provable member of the Proposition type, then we can show that the closed fragment is consistent itself.

If Gödel's second incompleteness theorem goes through, we can take the proof of consistency for the closed fragment and use it to derive a contradiction. That would topple not only the closed fragment but the open interpretation as well.

I don't know enough to fill in the details, and some of my leaps of faith might be leaps to the wrong conclusions. Is this similar to the approach you're thinking of? You're pretty confident that some approach would work, successfully proving inconsistency?

Closed theory ℕ incomplete by argument of Church/Turing

The closed theory ℕ is inferentially incomplete by the argument of Church/Turing.

Since there are no "self-referential" sentences in the closed theory ℕ, Gödel's results are invalid.

See pages 14-15 of Classical Direct Logic.

Gödel's "self-ref" prop leads to inconsistency [Wittgenstein]

Gödel's "self-referential" proposition leads to inconsistency by an argument of Wittgenstein.

See pages 6-7 of Classical Direct Logic.

Mistake?

The PDF you're linking to is titled "Inconsistency Robustness in Foundations" and has only 56 pages.

Page numbers in "Inconsistency Robustness" offset from HAL

Ross,

Thanks for noticing this.

The page numbers in "Inconsistency Robustness" are offset from the page numbers in the PDF in HAL.

I have corrected the page numbers in the postings above to correspond to the page numbers of the current PDF in HAL.

Hewitt, you contradict yourself, please explain

Because of the typing restriction, a lambda-calculus functional for constructing sentences must take as its argument something that has already been shown to be a sentence.

You elsewhere wrote that, on the contrary, a function taking numbers to sentences was permitted.

Direct Logic even allows some functions from integers to sentences,

A number is not a sentence and so you have contradicted yourself.

Will you please clear up your self-contradiction (or admit and explain your mistake)?

Direct Logic allows *some* functions from integers to sentenc

The following is a type: Lambda[Integer]↦Sentence. However, you have to obey the rules to do anything with it.

re: Direct Logic allows *some* functions from integers to senten

The following is a type: Lambda[Integer]↦Sentence. However, you have to obey the rules to do anything with it.

In which step of the proof sketch do you assert that a rule is being violated or do you agree that DL contains a self-proof of inconsistency?

It is not obvious that your post *exactly* corresponds to DL

It is not obvious to me that everything in your long post *exactly* corresponds to Direct Logic. I would be willing to take a look at a completely formal proof (including working through the fixed point operator) that you have constructed a self-referential statement in Direct Logic (without all the other stuff in your post).

re: It is not obvious that your post *exactly* corresponds to DL

It is not obvious to me that everything in your long post *exactly* corresponds to Direct Logic

"Direct Logic" being an incompletely specified system you haven't bothered to formalize for us.

Joke's on me. I took you seriously. Ha ha.

As I said, any examples of ambiguities in the grammar ....

As I said, any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

So the requirement for you to make a contribution stands: A completely formal proof (including working through the fixed point operator) that you have constructed a self-referential statement in Direct Logic (without all the other stuff in your post).

re As I said, any examples of ambiguities in the grammar ....

As I said, any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

The grammar of terms, sentences, and propositions does not fully specify a formal system of mathematics.

Grammar + inability to enumerate bar self-referential sentences

Grammar + inability to enumerate from within Direct Logic bar self-referential sentences.

Any ambiguities in the rules of Direct Logic are also welcome.

re: inability to enumerate from within Direct Logic

Grammar + inability to enumerate from within Direct Logic bar self-referential sentences.

Inability to enumerate what?

I think that you have contradicted yourself again.

Elsewhere you have written that given a DL theory it is possible to enumerate all of the proofs in that theory. Consequently it must be possible to enumerate all terms and all sentences.

Theorems of a math theory are enumerated in math

Theorems of a math theory are enumerated in math (closure). But the math theory does not have access to this enumeration and the enumeration function is not part of the language of the theory.

It's entirely possible for

It's entirely possible for someone to be good at some things and bad at others. Hewitt is, it appears, not good at thinking carefully about computability/Godel (which I find a more interesting connection than Curry-Howard; but I digress). We've been giving him considerable benefit of doubt (even I have, though I placed a lower likelihood on this than you did), encouraging him to clarify what he's doing. He, though, has three or four points he falls back on alternately — self-referential sentences can't be expressed in DL, sentences can't be enumerated in DL, try to do that in DL and see what happens, and occasionally, DL is inconsistency robust. On our side, things have pretty much stabilized for some time on please clarify, to help us understand better what you're doing, so that his set of fallback points is starting to sound quite repetitive. I fear he isn't at this time able to clarify further.

[I see he was writing a reply at the same time I was writing this. <gets out popcorn>]

re It's entirely possible for

John, I think I mostly agree with you.

His latest claim at least suggests something specific and meaningful might be lurking. He wrote:

Theorems of a math theory are enumerated in math (closure). But the math theory does not have access to this enumeration and the enumeration function is not part of the language of the theory.

If you look back at my sketch I use two functions that I didn't specify in detail: P_Encode for enumerating Terms, and S_Encode for enumerating sentences.

There's a technicality in the grammar that proofs are Terms while theorems are Propositions proved by proofs that establish some Sentence. So I need two encode functions: one for Terms, the other for Sentences.

So the generous interpretation here is that terms and sentences are "conceptually" enumerable and I could enumerate them in a model of DL, but that somehow DL prevents me from defining P_Encode and S_Encode in DL itself (about the sentences and terms of the self-same theory).

And supposedly this thing -- whatever it is -- that prohibits defining P_Encode and S_Encode is a syntactic restriction revealed in the grammar that the paper contains.

Supposedly, P_Encode and S_Encode, mapping terms and sentences to natural numbers, violates some syntactic type restriction. Yes?

Maybe I don't understand Hewitt's notation but I don't see any such type restriction in the grammar. Do you?

I'm a bit frustrated because I think it might well be possible to construct a robust system that would prohibit a type mapping natural numbers to sentences. That would block Goedel's construction even if the same system contained a self-consistency proof. In effect it would be a way to enforce a limitation that we have a semantic tower of models instead of more direct reflection. But I can't find this concept in the more formal parts of what Hewitt has actually presented.

How would you write Encode?

How would you write Encode? You previously proposed enumerating possible sentences until you found a match, but how do you know you'll ever find a match? To prove that, you'd need to know that every sentence is of one of the forms that you know about, with an induction principle to tell you that they're finite trees built up from those forms. I don't see anything like that in the rules, and since Hewitt has said that there are uncountably many Sentences, it seems unlikely that he'd have such a principle.
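
Concretely, the search in question would look something like this sketch (hypothetical types; the stream is whatever enumeration of sentences one can actually build):

-- Decode-by-search: walk an enumeration until the encoder reproduces n.
-- This is total only if every sentence occurs in the stream, i.e. only
-- given an induction principle saying sentences are finite trees of
-- known forms.
decode :: Eq n => (s -> n) -> [s] -> n -> s
decode encode stream n = head [s | s <- stream, encode s == n]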

If there were something like that, I could internalize the Integrity principle and make my alternate version of the proof go through as well (I think).

Note that axioms guaranteeing particular forms for sentences would also throw a monkey wrench in the interpretation I proposed to demonstrate consistency. That is, if there were axioms to pattern match on Terms / Sentences, then obviously there would be no way to get that information back out of a boolean value.

So I think the situation is:

1. Direct Logic is consistent because it doesn't allow inspection of Terms/Sentences and thus a statement of |- A can be interpreted trivially as just asserting A, A |- B as asserting A --> B, etc.

2. If you added pattern matching on Terms/Sentences, it would become inconsistent.

Hey, Matt M:

Do you have a proposal for a workable formalization of DL? Not merely an underspecified syntax but also core axioms?

Hewitt doesn't seem to but you have some ideas about what one would look like.

If you've got one, I could try to figure out if P_Encode and S_Encode work in *that*.

I don't mean to just burden shift back to you here. I just really no longer care to keep guessing about what DL officially is since no coherent story about it seems to be forthcoming from C.H.

Sorry that you are having trouble understanding Direct Logic

Sorry that you are having trouble understanding Direct Logic.

Suggestions for improvement are greatly appreciated.

re: Suggestions for improvement are greatly appreciated.

My suggestion is that you try to articulate a minimalist but robust formal definition of DL. A definition that has clearly explained notation, explicitly listed rules of inference, ... so forth.

The problem is not that "we" here on LtU are stupid. The problem right now is that *you* are too vague.

Direct Logic has grammar + rules

Direct Logic has grammar (page 9) and rules (pages 21-23) in Mathematics self-proves its own Consistency.
What more do you want?

It may be that vagueness is not the problem. Perhaps the problem is that a more tutorial presentation is needed as well :-)

Are you being serious?

I read page 9 as being informally stated well-formedness rules, not just a grammar. Aren't those rules part of Direct Logic? But you're missing all of the other rules, like ... beta-reduction, when things are equal, how one would ever establish that a term is a proof (are you using Curry-Howard?), how are sets formed, ... The list really isn't short.

I've been extrapolating based on what I think are reasonable assumptions, but Tom is right that everything is much too vague to do anything but speculate. (Tom: I don't want to undertake specifying Direct Logic, but if you have any particular rules that you'd like my guess on, let me know).

Direct Logic needs a tutorial book

Direct Logic needs a tutorial book à la Language, Proof, and Logic: 2nd Edition.

Mathematics self-proves its own Consistency is a scientific article addressed to professionals for the purpose of advancing the state of the art. As such, it doesn't reiterate standard material, e.g., lambda calculus, etc. However, it does state the axiom for sets because Direct Logic does not use the first-order formulation, for reasons stated in the article, and instead uses the following, in which there are no restrictions on the predicate P:

       e∈{x∈D | P[x]} ⇔ (P[e] ∧ e∈D)
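
As a quick check (my reasoning, not the article's) of why the e∈D guard makes this safe even with P unrestricted: let R ≡ {x∈D | x∉x}. The axiom gives R∈R ⇔ (R∉R ∧ R∈D). If R∈R, then R∉R, a contradiction, so R∉R; and then R∈D would give back R∈R, so all that follows is R∉D. Russell's construction yields a theorem, not an inconsistency.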

It also needs a formal definition

Certainly specific articles that don't go over standard parts of the system are fine but usually there's a comprehensive set of rules somewhere. Am I wrong? Most of the time I see a new formal system introduced they list all of the rules in the paper introducing the system, even if the text only glosses over what some of them are supposed to mean. That's quite different than a tutorial, which would also probably be helpful. And if this is all standard, can't you at least give references?

Article provides formal definitions for purposes of article

The article at the beginning of this forum topic provides formal definitions sufficient for the purposes of the article. However, it does not rehash old, well-understood material (e.g. the lambda calculus) that is explained in the references of the article.

Of course, if anyone comes up with a genuine ambiguity then it will have to be addressed.

"a scientific article addressed to professionals"

I think you might be deceiving yourself.

Article of this forum topic is addressed to professionals

The article at the beginning of this forum topic is addressed to professionals. Now we are going to find out how many of them read the article ;-)

Hewitt, I'll volunteer a bit.

Hewitt,

First off, I'm not sure if it is polite in your book to call you "Hewitt". You sign your LtU stuff that way so that's what I use. I hope it is polite. If "Carl" or "Dr. H." or something else is better I'll adopt that. I'm comfortable with "Tom" or "Thomas" for myself.

Second, I think you can see from the comments on this topic that you have an interested but highly skeptical audience that is impatient for a clear and rigorous definition of DL, against which your extraordinary claims can be examined in detail.

Now maybe you don't care about this LtU audience but it seems like you do.

Would you be interested in trying to write down an explanation of DL for this audience, the skeptical one right here on LtU? Is there some way that you can see that me or any of us might be able to help?

I don't mean to sound silly about this but if you think you are writing for "professionals" who get all this better than the rest of us who are here.... can some of those pros step forward here and defend you a bit and perhaps help explain what you are trying to say a bit better?

I don't think that DL needs a tutorial exactly. It needs something more like Euclid's "Elements" or Conway's "On Numbers and Games".

Making a contribution

Thanks for your friendly offer!

Students usually begin by calling me "Professor Hewitt" but I encourage "Carl." However, as soon as there is another Carl, they revert to "Professor Hewitt" ;-)

Unfortunately, I don't have time to write something special for LtU :-( However, LtU is a useful forum and I am happy to participate. Unfortunately, none of my colleagues have shown up here :-( In order to encourage them to participate, it is best to keep insults to a minimum ;-)

I try to make articles as readable as possible and they all are under continual revision (made possible by the Internet). So if you can find a bug or have a particular concrete suggestion or question, I will be happy to consider it. Academic articles tend to be written with incredible conciseness, which is why I suggested a tutorial book.

There are also videos; some of which have been published, e.g., Stanford CS Colloquium June 2012. More videos will be published this year.

It is a challenge to keep everything up to date because there are continual small improvements in the state of the art by students and colleagues.

Attitudes like being skeptical are not the point. I encourage students and colleagues to think critically and precisely because what counts are the arguments. At the same time it helps to be curious and have an open (but non-credulous) mind.

What font encoding

What font encoding is LtU using, as this just looks like gibberish in Chromium?

LtU is garbling Unicode :-(

For some reason, LtU is garbling Unicode :-(

At first the Unicode looks OK, but later it gets garbled.

It needs to be fixed.

I think it needs to be fixed as it makes unicode rich text almost impossible to decipher. It's also mangling Godel's name with the umlaut. Such things are what make me want to stick to ASCII characters for this stuff, and make me reluctant to use systems that don't.

Sentences of a math theory cannot be enumerated in Direct Logic

Sentences of a mathematical theory cannot be enumerated in Direct Logic even though the propositions that are theorems can be enumerated.

I'd suspect you'd agree that

I'd suspect you'd agree that the 2nd incompleteness theorem goes through without difficulty if the consistency sentence were taken to be (forall n:Nat, Provable[n] /\ Provable[coded_not(n)] -> 1 = 0), or something else along these lines. Then the question of forming Encode and Decode never enters into the proof. (There may be a gap in my understanding of the theorem. If I'm wrong on this I'd like to be corrected.) The only difficulty is in demonstrating, at the meta-level, that the Hilbert-Bernays conditions hold for Provable. This means showing that DL is strong enough to prove certain statements about arithmetic; syntax is still only relevant at the meta-level. There is no apparent reason to think that DL is not strong enough.

There is a question as to whether this consistency sentence is equivalent to Hewitt's. I was under the impression they are not equivalent, but your discussion with Matt M on whether Encode is expressible has raised doubts in my mind. In any event, if they are equivalent then it follows that Hewitt's system is inconsistent, and hence not suitable for doing mathematics as envisioned.

If, on the other hand, they are not equivalent, then we are left to wonder what significance the self-proof version of consistency has. I don't see it as so different from the Coq formalization I posted "proving" that there can be no contradiction, which merely reflected a built-in property of the CoC. It did not say anything like "no contradiction can result by applying the rules of CoC". Neither does Hewitt's theorem. I don't see that the presence of the turnstile makes much difference here.

Summarizing, the self-proof is unconvincing, but apparently formally valid. Nonetheless, either it is not a self-proof of consistency in the precise sense of the formal statement of the 2nd theorem (i.e., the theorem also finds it unconvincing), or it is and Hewitt's system is inconsistent.

Fixed points are not required for Godel's theorem to apply.

Gödel's disproof of completeness does not require fixed points. It requires only transforming axioms.

If you have statements whose truth values are known, and some rules that you can use to derive from them additional statements whose truth values are also known, gödelization can be applied. Whatever transforming axioms you have for statements can be encoded as functions whose range and domain are integers.

These functions transform an integer corresponding (according to some schema) to a statement, into integers corresponding (according to the same schema) to a related statement. It is not required that a statement ever be "converted" into one of these numbers; it is enough that we can say with certainty that such an integer exists.

This set of functions corresponding to the axioms of a mathematical system can be used, if the set of axioms is complete and consistent, to derive integers corresponding to every theorem that can be shown to be true according to the axioms of that system. The sequence of application of these functions correspond exactly to the steps of a proof of the theorem.
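
A toy version of this arithmetization (my encoding, purely illustrative): formulas are coded as integers with a pairing function, and an inference rule such as modus ponens becomes a partial function on codes. No statement is ever "converted"; the functions only push integers around.

-- Cantor pairing: a bijection from pairs of naturals to naturals.
pair :: Integer -> Integer -> Integer
pair x y = (x + y) * (x + y + 1) `div` 2 + y

unpair :: Integer -> (Integer, Integer)
unpair z = head [(x, y) | s <- [0 ..], y <- [0 .. s]
                        , let x = s - y, pair x y == z]

-- Codes: tag 0 for atoms, tag 1 for implications.
atomCode :: Integer -> Integer
atomCode n = pair 0 n

impCode :: Integer -> Integer -> Integer
impCode a b = pair 1 (pair a b)

-- Modus ponens on codes: from codes of A and A->B, the code of B.
mp :: Integer -> Integer -> Maybe Integer
mp a ab = case unpair ab of
  (1, p) -> let (x, b) = unpair p
            in if x == a then Just b else Nothing
  _      -> Nothing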

The inconsistency of a mathematical system would be demonstrated if there exists a potential theorem S such that integers corresponding to both S and ¬S can be derived.

The incompleteness of a mathematical system is demonstrated if there exists a potential theorem S such that integers corresponding to neither S nor ¬S can be derived. This is what Gödel did with mathematics as it was then formulated.

Now, Gödel's observation was this. Consider a theorem which states that the set of theorems having indeterminate truth value is nonempty. It must definitely either be true or false. Nevertheless, neither the integer corresponding to that theorem nor the integer corresponding to its negation can be derived. Therefore that theorem is itself a disproof by counterexample of the completeness of ordinary mathematics.

It is important to note that while the theorem itself is a member of the set of theorems it is talking about, the theorem does not in fact refer to itself. It merely asserts the existence of a set of theorems having a particular property. The fact that it is also a member of that set is happenstance.

This proof rests on the fact that the theorem *cannot* refer to itself. If the proof of a theorem were permitted to use the theorem itself, then it would be an axiom, not a theorem. Mathematics, if it allowed theorems to refer to themselves in order to prove themselves, would be inconsistent rather than merely incomplete.

Because banning self-referential statements does not prevent a set of mathematical functions from existing in your system, it does not prevent mathematical functions that model your transformation rules from existing in your system. Because banning self-referential statements does not prevent any particular integer from existing in your system, it does not prevent integers corresponding to the axioms and theorems of your system according to some schema from existing.

Therefore I do not see any way you could be avoiding Gödel's incompleteness theorem and still have an interesting mathematics that permits operations on unlimited-precision integers.

Ray, you missed re fixed points not required

Gödel's disproof of completeness does not require fixed points.

Clean miss.

The topic is Goedel's proof (the 2nd theorem) that in some sorts of formal system, a self proof of consistency implies inconsistency.

DL (with any set of consistent additional axioms you like) is incomplete and nobody disputes that. Goedel's first proof applies just fine. DL seems to be just as useful as any system to which Goedel's 2nd theorem applies yet, nevertheless, DL contains a formal proof of self consistency. DL's extreme restrictions on sentence formation prevent Goedel's 2nd theorem construction of a contradiction from being carried out in DL. That's nifty.

How about putting it this way

How about putting it this way. Godel says that, for a sufficiently powerful system, if it can prove its own consistency then it is in fact inconsistent. We're given that DL can prove its own consistency. So, is it inconsistent, or is it not sufficiently powerful? And if it isn't sufficiently powerful, what meaningful capability has been lost?

A theorem of consistency leads to a contradiction, using Goedel's construction, by constructing a sentence G such that the system shows that the Goedel number of G is not "provable" and the system shows that G asserts that its own number is not provable. If the system can do all those things it has proved the self-contradictory statement G. Hewitt's type restriction on sentence formation means that an internal-to-DL construction of G can't even get off the ground.

We can use Goedel's first theorem methods from outside and construct G using our meta-world knowledge. In meta-world we can prove that DL doesn't (or had better not) have a proof for G or its negation. DL itself can't even begin to express this, though. DL contains G. You probably won't find any proof of G in DL. But unless we leave DL for the meta level you won't prove, within DL, that DL can't prove G because the syntactic restriction prevents you from even beginning to express the facts of G's tricky nature.

Direct Logic is sufficiently powerful

Direct Logic is sufficiently powerful to do all of classical mathematics, but it has no self-referential sentences.

Ah. Thanks.

Ah. Thanks. It's seriously helpful to know how you see it. Not to quibble too much, but since classical mathematics does self-referential sentences, if DL doesn't (supposing, that is, the constraint works as intended) then seems it can't actually do everything classical mathematics can. But, taking the point poetically (which I'm quite willing to do, for getting the gist of things), if it's really as powerful as classical mathematics, it's an uphill battle to plausibility that with that power it wouldn't also have the "paradoxes" in some form, even though they might have to pop up in a different form due to the difference in what you're allowed to say.

Classical mathematics can't allow self-referential sentences

You are very welcome!

Classical mathematics can't allow self-referential sentences because Wittgenstein showed that self-referential sentences result in contradiction.

There is a tension here:
* Direct Logic attempts to have mathematics and mathematical theories that are very strong
* Direct Logic attempts to have mathematics and mathematical theories be consistent

The art and science lies in negotiating the above tension.

I see where we may be

I see where we may be talking past each other a bit, here, regarding what classical mathematics "can do". I would say classical mathematics most certainly can do self-referential sentences, and doing so often results in antinomies. It seems when you speak of what classical mathematics "can do" you mean what it can do without causing antinomies. But the difference between what it "can do" with versus without self-referential sentences is, in this case, the whole point. If inconsistencies arise from the deep property of how powerful the system is, rather than from the superficial property of the form of the sentences, then forbidding certain sentences without reducing the power can be fully expected to merely change the form in which the problem manifests itself, rather than eliminating the problem. From the many different forms of paradoxes I've seen over the years, my deep expectation (and you do invoke intuitive expectations right at the start of your discussion, when claiming justification for proof by contradiction) — my deep expectation is that the problems arise from the deep property of power rather than the superficial form of sentences.

Removed

Removed

Hmm

Could you double check your proof? You've got identity functions in there that look funny. Looks like this would work, though.

λf. λg. f (g (λx. f x x)) (g (λx. f x x))

Also, could you explain briefly why this is paradoxical?

Removed

Removed

How is it a paradox?

The form of the Curry paradox I'm familiar with starts by defining X = X -> Y. The set theory version you linked to is along the same lines using (z in z) to set up the loop.

The simply typed proof above seems to be missing that crucial aspect. How could you use it to obtain a contradiction?

Removed

Removed

Wording

I guess I was just confused by the wording. I would have said that those naive comprehension axioms to set up the self-reference are a crucial ingredient in order to have Curry's paradox. The proposition (X -> (X -> Y)) -> ((X -> Y) -> X) -> Y is valid in the kinds of logics I'm familiar with, so it seems funny to me to call it a paradox. I think it's Hewitt's intent to avoid Curry's paradox by avoiding self-reference (see page 6 of his document).
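
For what it's worth, the term quoted earlier does typecheck at exactly that proposition; a quick check in Haskell (the name curryTerm is mine):

-- λf. λg. f (g (λx. f x x)) (g (λx. f x x)), at the valid type above.
curryTerm :: (x -> x -> y) -> ((x -> y) -> x) -> y
curryTerm f g = f (g (\x -> f x x)) (g (\x -> f x x))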

Removed

Removed

DL sentences require types, e.g., Λ[Sentence]↦Sentence

Direct Logic sentences require types for their construction, e.g., Λ[Sentence]↦Sentence. Consequently, it is impossible to construct arbitrary sentences using fixed points.

Removed

Removed

Fixed points don't exist for typed sentences

Fixed points don't exist for typed sentences. For example, the Fix defined below cannot be shown to be of type Λ[Sentence]↦Sentence:

Diagonal ≡ λ[s:Sentence]→ ⌈λ[x:Sentence]→ ⌊s⌋[⌊x⌋[x]]⌉
Fix ≡ λ[s:Sentence]→ ⌊Diagonal[s]⌋[Diagonal[s]]

PS. Thanks for pointing out Diaconescu's theorem.
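
For comparison, a Haskell gloss (mine, not Hewitt's notation) of why such self-application only typechecks when the type language itself supplies a fixed point; without the recursive newtype Wrap below, omega cannot be typed, which is the effect the typing restriction is after:

-- A recursive type is the only way to give self-application a type.
newtype Wrap a = Wrap { unWrap :: Wrap a -> a }

omega :: Wrap a -> a
omega w = unWrap w w

-- The fixed-point combinator built from self-application.
fix :: (a -> a) -> a
fix f = omega (Wrap (\w -> f (unWrap w w)))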

Removed

Removed

Fixed points for computation; but they can't construct sentences

There are fixed points for computation, but they can't construct sentences. The grammar of Direct Logic limits the ways in which sentences can be constructed.

Removed

Removed

Fixed points for terms, not sentences

InfinitelyDeep is a procedure which is Λ[ ]↦Set. Fixed points can be freely used to define procedures on terms. But sentences are not terms.

The relevant clause from the grammar of Direct Logic is the following:
If f is Λ[Sentence]↦Sentence, s is Sentence, and Halt[⌈f.[s]⌉], then f.[s] is Sentence
Consequently, a procedure that is known to compute sentences from sentences can compute a sentence from an already known sentence.

Removed

Removed

Complexities concerning which lambda expressions are Term

An unavoidable complexity is the issue of convergence in the lambda calculus. For example, Factorial.[-1] might not converge. On the other hand, for every x, it is the case that x=x. Consequently, if Factorial.[-1] is Term then Factorial.[-1]=Factorial.[-1], etc.

So there is a problem stating when Factorial.[x] is Term.

Sentences are easier because the untyped lambda calculus is not allowed.

Removed

Removed

Intuitively navigating issues

A goal is to intuitively navigate the following issues:
* Self-referential sentences lead to inconsistencies in mathematics and consequently must be barred. Self-referential sentences cannot be constructed if (1) fixed points do not exist in the grammar of sentences and (2) there is no way to enumerate sentences. Fortunately, self-referential sentences do not have any practical uses.
* A computational expression may not converge and can be used as a term in logic only if it converges. Consequently computational expressions should be grammatically different from terms in logic.

unconstructable sentences?

Is it the case that DL includes unconstructable sentences? For example, one could establish the existence of unconstructable Constants. Constants are Terms. Do there exist in DL sentences that contain Terms which are unconstructable Constants?

(The paper doesn't much explain Constants other than to say that there "can be" uncountably many of them.)

Constants not countable; consequently sentences not countable

Constants are not countable because, for example, for each real number x, ⌈x⌉ (the reification of x) is a constant. Consequently, sentences are not countable.

We can have the following axiom

∀[e:Expression]→ (Halt[e] ⇒ ∃[x]→ x=⌊e⌋)

Removed

Removed

Inconsistency Robust Sentences are not reducible to truth values

Inconsistency Robust Sentences are not reducible to truth values. Suppose ⊢TP and ⊢T¬P. Do you want to conclude that True=False in theory T?

Added in editing:

Since Tarskian model theory is based on assigning truth values to sentences, it is not directly applicable to inconsistency robust theories because there are no Tarskian models. For discussion of probabilities, fuzzy values, etc., see the following: Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic

On the other hand, Direct Logic does have standard Boolean equivalences, e.g. p⇔¬¬p

Removed

Removed

Removed

Removed

Reification/abstraction make truth values problematical in DL

Reification and abstraction make reducing propositions to truth values even more problematical in Direct Logic.

Removed

Removed

Reification/abstraction convert between Sentences/Propositions

Reduction to truth values is impossible because reification and abstraction convert between Sentences and Propositions, and also because some Expressions converge to Sentences and Propositions.

Removed

Removed

On my list of things to

On my list of things to avoid like the plague if you're trying to get people to listen to your new idea (which I've been trying to turn into a blog post for ages). Don't invent lots of new terminology. Hamilton made that mistake with quaternions, producing a slew of terms almost all of which vanished (when's the last time you heard the word "versor"?), although "scalar" and "vector" are still around, and "tensor" with a completely different meaning.

Removed

Removed

Direct Logic doesn't have the property of "compactness"

Direct Logic doesn't have the property of "compactness" of Algebraic Semantics. See page 8 of Mathematics Self-proves Its Own Consistency.

Removed

Removed

Finite statements about the transfinite

From what he's just been saying, he is making finite statements about the transfinite. He's saying that if such-and-such doesn't halt then so-and-so combination of symbols isn't grammatically admissible. Which would mean that whether or not an expression is grammatically correct is formally undecidable.

It's possible that his system could be radically rearranged so as to make it useful. One would have to refactor it so that the role of undecidability in the system appears in a part of the system where its consequences are manifest rather than hidden. It's even possible that one would then end up with what I've been working toward — a system in which the foundation-shaking paradoxes are transformed into mere unalarming nontermination. But the big challenge in that is in just how to arrange its rigorous treatment, and he's not there yet.

Removed

Removed

In some previous posts I

In some previous posts I asked him about mutual recursion between syntax and semantics in his system. And he then modified the definition of term, so as to not use halt.

That's called inconsistency robustness.

Yes, this is my fear, that

Yes, this is my fear, that not asking about the Unicorn in the room may fail to solve the problem of its presence. That was the point (some time back now) of my anecdote about Feynman: by not leaving him alone in the office, they prevented him from observing their weak security, but didn't make it less weak. I'm also extremely uncomfortable with all this emphasis on inconsistency-robust logic, because Hewitt started out by saying the expectation of consistency justifies proof by contradiction; if you're using inconsistency-robust logic, then that seems to me to justify rejecting proof by contradiction, because you wouldn't care about robustness against something you don't think happens.

I do think it may be possible to arrange that if Unicorns cause a problem, the problem takes the form of non-termination rather than paradox. But of course I also don't want type theory, especially not at a fundamental level, as I consider it a complication that obscures what's really going on.

"To be is to be the value of a variable." — W. V. Quine

(Though Quine in his essay was talking about a different kind of mythical horse-like creature.)

A Wittgensteinian reply to DL

In traditional mathematics, at least naively:

An "utterance" (a word not normally used in the literature) is a physical thing. It is specific marks on a chalk board or glyphs on a printed page or sounds made or ...

Here is one utterance that could be regarded as some bits on a disk at LtU central or as the state of some pixels on your screen: "2 + 2 = 4".

Here is another utterance of a similar sort: "2 + 2 = 4".

A "sentence" is an abstract equivalence class, so to speak, of utterances. The two utterances above could be regarded as "the same sentence", for obvious reasons. In various (related) mathematical conversations, all utterances which are the same sentence play a consistent role in the language games.

A mathematical utterance might not be a sentence per se. Another possibility is that it is a "proof" -- another kind of equivalence class among utterances, similar to "sentence".

Classical formal systems in mathematics give us effective (computable) definitions for "sentence" and "proof" and an effective way of telling whether a given proof establishes the (formal, within the system) "truth" of a given sentence.

Goedel proved some interesting facts about classical formal systems in general.

When you talk about DL, you are talking about something very different, with no obvious relation to classical formal systems.

It might sound amusing to say that some conclusion in DL contradicts Goedel's theory of proofs and sentences but when you say that, you are making a pun, not a point.

Removed

Removed

Direct Logic provides a foundation for math practice in CS

Direct Logic provides a foundation for mathematical practice in Computer Science for both of the following:
* Mathematical theories that are thought to be consistent (e.g. Peano numbers, homotopy theory, etc.)
* Practical theories that are pervasively inconsistent (e.g. climate change, the human liver, etc.)

All of this is very much in the spirit of Wittgenstein. (See paper at the beginning of this forum topic.)

Unfortunately, the self-referential sentences (used by Gödel, et. al.) result in mathematical inconsistencies. Consequently, the foundation provided by Direct Logic for Computer Science has to be different from that of Gödel, et. al. At the same time, Direct Logic has to be intuitively usable for both mathematical theories and practical theories.

Thus, Direct Logic is a substantial undertaking that builds on an enormous amount of previous work. It will be exciting to see what happens ;-)

Removed

Removed

Beware of undertaking a land war in Asia!

Beware of undertaking a land war in Asia!

"I predict you will sink step by step into a bottomless quagmire,
 however much you spend in men and money."
  Charles De Gaulle

Indeed

Indeed, it's one of the classic blunders.

Updated article published in book "Inconsistency Robustness"

There is an updated article in the book "Inconsistency Robustness" at
Inconsistency Robustness in Foundations.

I'll take that bet, so long

I'll take that bet, so long as the payout criterion contains a self-reference preceded by a negation operator.

I think I found another problem

In classical mathematics, only known theorems can be used in a proof. You can't deduce the existence of a theorem and then insert it macro-style into the proof without statement. In principle, quoting theorems is just a shortcut to including their text, but the proof above can't macro-expand without a macro-expander that includes the ability to find the statement Φ within a hyper-infinite (at least aleph-one) set of possibilities. That's not within the spirit of classical mathematics.

I therefore contend it's not a valid proof within classical mathematics for that reason.

you didn't re: I think I found another problem

"but the proof above can't macro-expand without a macro-expander that includes the ability to find the statement Φ"

The proof begins by assuming the hypothesis "Inconsistent", in order to try to obtain a contradiction from that hypothesis. (The real goal is to prove that the hypothesis is not true.)

The proofs of DL are effectively enumerable. One could write a program that will run forever, listing them all, listing them in order from the shortest ones to longer ones, with a secondary sort in alphabetical order. There will be a finite time between each proof printed out. Every proof of finite length will eventually be printed out.
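
A minimal sketch of that program, where checks stands in for an effective proof checker (an assumption here, not something I am supplying):

-- Shortlex order: shorter strings first, alphabetical within a length,
-- so any given finite string appears after finitely many steps.
shortlex :: [Char] -> [String]
shortlex alphabet = concatMap ofLength [0 ..]
  where
    ofLength :: Int -> [String]
    ofLength 0 = [""]
    ofLength n = [c : s | c <- alphabet, s <- ofLength (n - 1)]

-- Run forever, printing every string the checker accepts as a proof.
listProofs :: [Char] -> (String -> Bool) -> IO ()
listProofs alphabet checks = mapM_ putStrLn (filter checks (shortlex alphabet))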

If Inconsistent were true, then a suitable Φ would be discovered, automatically, by that program, in a finite amount of time. So the proof proceeds by assuming we've done that, and now we have Φ in hand.

Now that we have a hypothetical Φ in hand we immediately obtain a contradiction and so, by proof-by-contradiction, our hypothesis (Inconsistent) is false. The system thus self-proves "not Inconsistent". Constructivists can stop there. Classicists can proceed on to "Consistent".

Any theorem can be used in a proof without restriction

Actually, any theorem can be used in a proof without restriction. And, of course, every theorem has a proof even if you don't know what it is.

Wittgenstein and DL

The syntactic restriction in DL reminds me indirectly of the famous Wittgenstein quote:

Wovon man nicht sprechen kann, darüber muss man schweigen.

See my comment about

See my comment about Feynman's anecdote.

Translation of Wittgenstein quote?

What is the translation of the Wittgenstein quote?

re: Translation of Wittgenstein quote?

"Whereof one can not speak, thereof one must be silent."

my preferred translation

My preferred translation is "Whereof I cannot think of the answer right now, thereof you must be silent."

Sorry: didn't mean to double

Sorry: didn't mean to double post.

lol.

lol.

Unstated assumption

The problem with the proof is quite simple, but easy to miss because it's not physically present on the page:

Proof by contradiction only works if you assume that you are working in a system without contradictions (ie, consistent).

State this assumption explicitly, and your theorem becomes:
assume mathematics is consistent and mathematics is not consistent
derive a contradiction
so it is false that (mathematics is consistent and mathematics is not consistent)
therefore,
(mathematics is not consistent) or (mathematics is consistent)

Real breakthrough, there.

re Unstated assumption

Actually, not only is the circularity you are complaining about "present on the page", it's explicitly stated in the first 28 words of the abstract:

That mathematics is thought to be consistent justifies the use of Proof by Contradiction. In addition, Proof by Contradiction can be used to infer the consistency of mathematics

Hint: the proof must be interesting for some other reason you haven't thought of, in spite of the obviously unconvincing nature of that circularity.

Hewitt explicitly denied

Hewitt explicitly denied it's circular, when I raised this point somewhere above.

re: Hewitt explicitly denied

It's not a formally circular argument. Although the proof is formally valid it is also "unconvincing". That's because the (informal) justification for axiomatically allowing proof-by-contradiction in the formal system is a belief (in the sense of very strong suspicion) of consistency.

A hazard here is that by

A hazard here is that by prohibiting certain propositions as circular (supposing DL succeeds in doing this), one might have only prevented DL from acknowledging problems, rather than actually preventing problems. (Hence my anecdote about Feynman.) Informal reasoning about the self-proof is plainly invited, through the informal justification at the start, and at that level of reasoning it is explicitly circular — and consequently, the fact that the self-proof isn't prohibited as 'formally circular' is actually a reason to suspect the prohibition can be bypassed, in the same sense that the prohibition purports to bypass Godel.

Math self-proof of consistency not intuitively convincing because

The math self-proof of consistency is not intuitively convincing because it simply demonstrates that consistency is built deeply into the structure of mathematics through proof by contradiction.

Consequently, any proof of the consistency of mathematics that uses proof by contradiction is suspect.

Please don't hint.

The form of this hint provoked an emotional reaction of irritation in me. Probably because it doesn't outright state what is interesting, so rhoark can't know whether it is also interesting to him without investing a lot of energy.

I think that the interesting thing you refer to is that Direct Logic has a proof of its own consistency and also surprisingly appears to BE consistent. (from your post) http://lambda-the-ultimate.org/node/4784#comment-76354

(I really enjoy your posts because I can understand and make use of them)

Someone somewhere above

Someone somewhere above remarked more or less, as I recall, that you can take any logic and add an axiom that says "based on no premises, provably this logic is consistent", but doing so doesn't contribute anything to our beliefs about whether or not the logic is consistent. I would agree that the self-proof is vacuous in the limited sense that it doesn't make any positive contribution to our belief that the logic is consistent. Proofs that don't contribute to belief are, in a sense, the classical difficulty with antinomies: given an antinomy in classical logic, one can prove any proposition P, therefore knowing that there exists a proof of P ceases to enhance our belief in P. However, there is a stronger sense in which the self-proof is not vacuous: it does have some negative impact on my belief in the consistency of DL, because I'm suspicious of any logic that allows vacuous stuff like this.

Someone somewhere above

Someone somewhere above remarked more or less, as I recall, that you can take any logic and add an axiom that says "based on no premises, provably this logic is consistent", but doing so doesn't contribute anything to our beliefs about whether or not the logic is consistent

John, in the kinds of formal system that Goedel was thinking of, if you added an axiom of self-consistency you would, thereby, make the system inconsistent.

Goedel imagined only systems which contained sentences describing such things as "a function from integers to sentences".

With all that reflective power, Goedel showed how to combine a self-proof of consistency, or axiom of self-consistency, with the construction of an unprovable sentence, in order to derive a formal contradiction. All the "steps" of deriving the contradiction are in the formal system itself. Goedel's 2nd theorem is a recipe for doing that construction.

Goedel imagined only systems which contained sentences describing such things as "a function from integers to sentences" but Hewitt's grammar rules those out.

I think that a really clean construction of DL would make it more obvious that "Sentences" is not quite an ordinary set in DL.

Math allows many kinds of sentences, but not self-referential

Mathematics allows many kinds of sentence, but not self-referential ones. Direct Logic even allows some functions from integers to sentences, but it does not allow self-referential sentences.

re: Math allows many kinds of sentences, but not self-referent

Direct Logic even allows some functions from integers to sentences,

Example, please?

Earlier you said that sentences could only be constructed from primitive sentences and from earlier constructed sentences, and now you suggest that sentences can be constructed from integers.

So?

Why do you believe that Godel's proof mechanism requires a self-referential sentence to exist?

Gödel imagined only systems

Gödel imagined only systems which contained sentences describing such things as "a function from integers to sentences".

On the face of it, this statement makes no sense. Surely Gödel imagined only systems which contained sentences describing such things as a "function from integers to integers"? To be more careful, the original paper dealt specifically with PM. The generalizations that came later still only assume that the systems include at least some fragment of arithmetic, e.g., PA. The 2nd incompleteness theorem does not suddenly require that the system includes reflection capabilities in its syntax.

I follow your arguments and agree that it is interesting that we can mimic the proof of the 2nd incompleteness theorem, follow Gödel's recipe as you say, using the built-in reflection features of DL. I think I can read your statement in this context in a way that makes sense, but it is still confusing.

I think we risk confusion by conflating the two versions of the theorem, and it seems to me that much of the talking past each other going on in this thread is due to precisely this conflation.

For example, I am unable to tell if you think the actual, precisely worded, 2nd theorem applies to DL or not: the wording of your posts makes it sound like you do not think so.

Should we not be a little more careful and precise in our language, given the risk of confusion that is so apparent in this thread?

re Gödel imagined only systems

For example, I am unable to tell if you think the actual, precisely worded, 2nd theorem applies to DL or not: the wording of your posts makes it sound like you do not think so.

Does this help?

I think the 2nd theorem implies, for DL, that DL can not prove a self-definable model of itself to be consistent (unless DL is inconsistent).

I believe that DL can self prove its own consistency.

I believe that DL can prove that a model of DL contains a proof of self-consistency.

I believe that DL can not express a concept like "Prov(#P) => P" because it can not express the concept of a mapping from numbers to sentences within the theory itself.

We don't have "Prov(#P) => P" and we also don't have "~ Prov(#P) => |/- P".

DL can prove that its model can not prove P (by proving "~ Prov(#P)") but it can't conclude from that that it, itself, can not prove P.

So DL can prove "~ Prov (#G)" where G is a Goedel sentence but it can't conclude "|/- G". So we can't combine the proof of "~ Prov (#G)" with the proof of self consistency to derive a contradiction that way.

We can't do it all in the model, either. DL trivially has "|- Consistent" and trivially has "Prov (#Consistent)" but it does not have a theorem of consistency for the model. It doesn't have "(forall x:numbers) ~(Prov (x) and Prov (#not(x)))"

Did you see Hewitt's post?

it can not express the concept of a mapping from numbers to sentences within the theory itself.

You kept repeating this after I pointed out that it wasn't true, but I figured you'd stop when Hewitt told you it wasn't true. See six inches up where he writes:

Direct Logic even allows some functions from integers to sentences, but it does not allow self-referential sentences.

All of the evidence says that types are there to block overtly self-referential sentences and any avoidance of Godel is just due to lack of rules that would let you pin down precisely what reflected proofs do.

re Did you see Hewitt's post?

No, I didn't see that comment from Hewitt but thanks for pointing it out. I've asked him the direct question of what functions from integers to sentences he is thinking about.

Agreement?

It doesn't have "(forall x:numbers) ~(Prov (x) and Prov (#not(x)))"

Indeed, the "normal" 2nd theorem says precisely that if DL did have this, it would be inconsistent. So we are in agreement then?

I think I (tentatively) agree also with most everything else you said. That is, the things that need to be shown of Prov can be shown "all in the model"---as is the usual mode of operation for the 2nd theorem---and (perhaps) not outside it.

I say "tentatively" because Hewitt has now explicitly said DL can express functions from numbers to sentences (but only some of them), whereas you said it cannot. Does this change your mind on the outside-the-model proof not going through? I.e., was that a critical part of your argument? I'm fascinated by how this question will ultimately come out, but am not persuaded on one side or the other yet.

Edit: I see Matt made this point before I did.

re agreement?

I think so.

Hewitt has now explicitly said DL can express functions from numbers to sentences

Yeah, that's odd. Let's see what he says.

DL aims to have everything *but* self-referential sentences

Direct Logic aims to have everything *but* self-referential sentences.

So the grammar for a Sentence includes the following:
* If s is a SentenceT and i is an identifier, then ≪lambda [i:σ]→ s≫ is a Lambda [σ]↦Sentence and i is a Term in s.
* If f is a Lambda [σ]↦Sentence and t is a σ, then f[t] is a Sentence
The above imposes typing constraints to prohibit taking a fixed point that could embed a spurious reference to a sentence in itself.

So it is OK for σ to be the type Integer in the above.
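
To make the intent of those typing constraints concrete, here is a minimal sketch in Python (my own illustration with invented names, not Hewitt's actual rules): sentences are built only from primitives and earlier sentences, functions from integers to sentences are allowed, and the only argument type a sentence-producing lambda accepts is a base type like Integer, so a sentence can never be fed into itself.

class Sentence:
    pass

class Primitive(Sentence):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class Not(Sentence):
    def __init__(self, s):
        assert isinstance(s, Sentence)  # built from an earlier sentence
        self.s = s
    def __repr__(self):
        return f"(not {self.s})"

class IntLambda:
    """A function from integers to sentences: Lambda [Integer] |-> Sentence."""
    def __init__(self, body):
        self.body = body                # body : int -> Sentence
    def apply(self, t):
        if isinstance(t, Sentence) or not isinstance(t, int):
            # the typing constraint: arguments must be Integers
            raise TypeError("argument must be an Integer, not a Sentence")
        return self.body(t)

# Allowed: a function from integers to sentences.
even = IntLambda(lambda i: Primitive(f"Even({i})"))
print(even.apply(4))                    # Even(4)

# Blocked: there is no lambda whose argument type is Sentence, so any
# attempt to embed a sentence in itself is a type error, not a fixed point.
try:
    even.apply(even.apply(4))
except TypeError as e:
    print("rejected:", e)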

Hewitt, please explain

Alright, so let's go through this again. Here is a proof, in DL, that DL is inconsistent. Parts of the proof are only sketched. For example, unlike Goedel I don't go through and tediously work out an encoding for sentences as numbers -- I just assume it can be done. If this proof is not valid in DL because of the syntactic restrictions, please point out where the problem is, Hewitt.

Do you agree that in DL I can devise an encoding for sentences, as numbers:

Definition:

S_Encode : Sentence ⟶ ℕ

such that S_Encode[s] gives a Goedel number for sentence s

We can also encode proofs:

Definition:

P_Encode : Term ⟶ ℕ

such that P_Encode[p] gives a Goedel number for term p

So far I have not used any self-referential sentences, right?

I can define an effective proof checker:

Definition:

Proves : ℕ × ℕ ⟶ {0, 1}

such that Proves (P_Encode(P), S_Encode(S)) is 1 if (⊢ P S) and 0 otherwise.

Can I not in fact prove, in DL:

Theorem(?):

(⊢ P S) ⇒ Proves (P_Encode(P), S_Encode(S)) = 1

So now let's construct a Goedel sentence.

I earlier thought that the syntax of sentences in DL prohibited the following, but now I think you say it is allowed. Can I define GoedelSchema as follows:

Definition(?):

GoedelSchema ≡ « λ i : ℕ | ¬ ∃ [p ∈ Term] → Proves (P_Encode(p), i) = 1 »

Assuming I can define that, I assume DL also allows me to define this:

Definition:

GoedelSchemaNumber : ℕ → ℕ

GoedelSchemaNumber[x] ≡ S_Encode (GoedelSchema [x])

If I have taken a little bit of care with my encoding scheme, I can guarantee that it has fixed points and thus that it has a least fixed point.

I assume that nothing in DL prevents me from defining:

Definition:

G_number = LeastFixedPoint (GoedelSchemaNumber)

Now let us define:

Definition:

G : Sentence

G ≡ GoedelSchema [G_number]

We have a trivial theorem, now:

Theorem: 

G_number = S_Encode [G] = S_Encode [GoedelSchema [G_number]]

Let's prove Goedel's 1st theorem, now.

Goedel's First Theorem:

(¬ ∃ [p ∈ Term] → Proves(P_Encode [p], S_Encode [G]) = 1)
∧
(¬ ∃ [p ∈ Term] → Proves(P_Encode [p], S_Encode [¬ G]) = 1)

Proof:

Suppose p is a Term such that:

    Proves (P_Encode [p], S_Encode [G]) = 1

    (hypothesis for contradiction)

[1] ⊢ p G

    by definition of Proves

[2] ⊢ p ¬ ∃ [p ∈ Term] → Proves(P_Encode [p], S_Encode [G]) = 1

    a contradiction by definition of G

On the other hand, suppose q is a Term such that:

    Proves (P_Encode [q], S_Encode [¬ G]) = 1

    hypothesis for contradiction


[3] ⊢ q ¬ G

    by definition of Proves

[4] ⊢ q ¬ ¬ ∃ [r ∈ Term] → Proves(P_Encode [r], S_Encode [G]) = 1

    by definition of G

[5] ⊢ q ∃ [r ∈ Term] → Proves(P_Encode [r], S_Encode [G]) = 1

    eliminating the double negative

[6] ⊢ q G

    a contradiction by definition of Proves


Q.E.D.

The first theorem told us that we can't find a proof for G or for ¬ G without obtaining a contradiction. The theorem implicitly assumed that DL itself is consistent.

Have I, at any stage there, violated the syntactic restrictions of DL? Earlier I thought I had violated the syntactic restriction as soon as I defined the banal-looking, not obviously self-referential GoedelSchema (see above for the definition). Then I think Hewitt said, no, that's allowed.

So, now, we can apply Goedel's 2nd theorem, exploiting DL's theorem of consistency to derive a contradiction in DL:

Goedel's 2nd theorem (DL is inconsistent):

[1] G ⟺ ¬ G

    lemma of the first theorem

[2] ¬ ⊦ G

    by consistency

[3] ¬ ∃ [p ∈ Term] → Proves(P_Encode [p], S_Encode [G]) = 1

    by [2] and the definition of Proves

[4] G

    equivalence to [3]

[5] ¬ G

    a contradiction by [1]

Q.E.D.

So where's the part that violates the rules of DL or is DL in fact inconsistent?

gurgitate

It seems to me that if you do the quantification on the Nat side of the pairing instead of quantifying over Sentences & Terms directly, you greatly reduce the area of things that can go wrong. And then you can restrict your attention to just a subset of DL that has the power of PA + the DL quotation/reflection mechanism. It seems to me that the first thing that could go wrong does go wrong, as I don't see how to define and prove Encode total.

BTW, would you mind elaborating how you set things up so that you can claim there to be a least fixpoint? I haven't seen (or don't remember) that trick. I would expect an explicit substitution scheme to set up the fixpoint.

re gurgitate

You can make a version of Encode that gives a least fixed point for GoedelSchemaNumber very directly and trivially. Assume you have Enc_prime, which is an encoding function, but you aren't sure whether it has a fixed point.

Then define:

Encode [x] ≡ 0 if x is the sentence GoedelSchema [0]

Encode [x] ≡ 1 + Enc_prime [x] otherwise

In that case, the sentence G is:

¬ ∃ [ p : Term ] → Proves (P_Encode (p), 0) = 1
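
For what it's worth, here is that trick as a few lines of Python (a toy with invented names, since we don't have DL's concrete syntax). It only shows that reserving the number 0 for the sentence GoedelSchema[0] makes 0 a fixed point of GoedelSchemaNumber while keeping the encoding injective:

def goedel_schema(i):
    # the sentence "no term encodes a proof of the sentence numbered i"
    return f"not exists p in Term. Proves(P_Encode(p), {i}) = 1"

def enc_prime(sentence):
    # any injective numbering of sentences; bytes-as-a-big-integer will do
    return int.from_bytes(sentence.encode("utf-8"), "big")

def encode(sentence):
    if sentence == goedel_schema(0):
        return 0
    return 1 + enc_prime(sentence)

def goedel_schema_number(x):
    return encode(goedel_schema(x))

print(goedel_schema_number(0))  # 0, so 0 is a (least) fixed point
print(goedel_schema(0))         # and this is the sentence G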

I don't get it

How can Encode depend on GoedelSchema when GoedelSchema already depends on Encode? And why do we bother with defining the whole schema if we can just write down a Godel sentence?

Alternate approach

Here's a sketch of how I'd do it. First, I'd enumerate formulas instead of sentences:

Decode : ℕ ⟶ ℕ ⟶ Sentence 

-- Decode[n] is the nth formula (ℕ ⟶ Sentence)

Then, we enumerate proofs and (for expedience) let our Proves predicate take a formula and parameter instead of enumerating Sentences:

Proves: ℕ ⟶ ℕ ⟶ ℕ ⟶ {0,1}

-- We need to show that Proves[Prf][Frm][Parm]=1 iff Prf is a valid proof encoding that proves Decode[Frm][Parm]

GN : ℕ
GN ≡ Quote { « λ i :  ℕ | ¬ ∃ [prf ∈ ℕ] → Proves[prf][i][i] = 1» }

G : Sentence
G ≡ Decode[GN][GN]
  = ¬ ∃ [prf ∈ ℕ] → Proves[prf][GN][GN] = 1

In the above, I have used Quote{} instead of Encode. They're similar, but Quote is in the meta-language. It means produce the Nat that describes this formula.

Then Godel's 1st goes through as you've shown:

Suppose ¬G. Then,

    ∃ [prf ∈ ℕ] → Proves[prf][GN][GN] = 1

Then by our choice of Proves, we can prove:
 
    Decode[GN][GN]

But that's just G, so we're at a contradiction.  Thus, G.

We've thus proven G, which says that there is no Nat encoded proof of the quoted equivalent of G.

Working on the arithmetic side, as above, makes it a little harder to push Godel's 2nd through. What we have to do is show that we've formalized enough of the Proof system inside our Nat encoding of proofs that we can quote the above argument to get a proof of Quote{G}. A statement of consistency will then let us convert that into a Nat encoded proof of G and obtain a contradiction.

The only step I have particular concerns about is the one where we show the correctness of Proves inside the quoted proof of the above. That is, lifting the DL reflection mechanism into quoted logic. I think we can, though, and that would mean DL is inconsistent.
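
If it helps, here is a toy model in Python of the Decode/Quote setup above (my own sketch, not DL itself): formulas are functions from a number to a sentence string, Decode is their enumeration, and Quote{} is nothing more than a formula's index in that enumeration.

FORMULAS = []                 # our enumeration: Decode[n] = FORMULAS[n]

def register(formula):
    FORMULAS.append(formula)
    return len(FORMULAS) - 1  # Quote{formula}: its index

def decode(n, parm):
    return FORMULAS[n](parm)

# Register the schema; its index GN is its quote.
GN = register(lambda i: f"not exists prf in N. Proves[prf][{i}][{i}] = 1")

G = decode(GN, GN)
print("GN =", GN)
print("G  =", G)
# G mentions its own quote-number GN twice, yet no self-referential
# *sentence* was constructed anywhere: only a number fed to a formula.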

Are sentences countable?

Are sentences countable, given that there are uncountably many propositions (one for every real number)? So if every proposition can be reified ...

Doesn't matter

The above doesn't require that Decode be surjective, only that it cover the subset needed to push the argument above through. And that subset is enumerable (e.g., no mention of reified Propositions).

The only step I have

The only step I have particular concerns about is the one where we show the correctness of Proves inside the quoted proof of the above.

I share this concern, and I have the same problem with "by the definition of Proves" in Thomas Lord's sketch. Since you think it's possible, do you mind elaborating a little? Maybe just a general approach or two that might pay off?

Thanks to these sketches I at least find it plausible that direct logic can have a sentence that refers to itself by way of an unambiguous self-specification. But that self-specification seems to rely on reflection through Proves, a self-specification of (a subset of) direct logic. I don't see a reason to believe direct logic has enough power to recognize itself this way.

I'm very inexperienced with this, so don't let my doubt discourage you.

I think it's consistent

I was thinking about that step in greater detail to respond to you here and I kept hitting trouble. Stepping back, I already gave an argument for why Direct Logic's reflection mechanism might be provably consistent -- the interpretation of (A, B |- C, D) as (A and B --> C or D). We can interpret (|-p A) in a proof-irrelevant way as (|- A). While I don't know if that argument would work for all of Direct Logic (which I don't have a complete description of), it would seem to work for the fragment here. Thus, I don't think it's very likely that this reflection mechanism is introducing inconsistency in the fragment we've seen.

This also shows why I don't think a Direct Logic proof of (¬ ∃ A:Statement. (⊢ A) ∧ (⊢ ¬ A) ) can be reasonably called a proof of consistency. We can interpret all nested turnstiles as ordinary implication and make sense of all Direct Logic theorems that way, such that the interpretation of the "statement of consistency" is the same as (¬ ∃ A:Prop. A ∧ ¬A ).

(⊢ A)∧(⊢¬A) means inconsistency; bug in above post

(⊢ A)∧(⊢¬A) means inconsistency; there is a bug in the above post.

Thanks

Fixed. Can you think of a rule in Direct Logic that would be unsound with respect to this implication interpretation? I.e., we interpret terms as the values they denote and interpret both Sentences and Propositions as either T or F.

I haven't thought this through too carefully.

re Hewitt, please explain

At the risk of being a noodge, I'm a little disappointed that this question hasn't yielded a clear, direct reply. We've been repeatedly assured that a syntactic restriction in DL prevents the 2nd theorem's construction of a proof of inconsistency from going through. A pretty detailed sketch of what such a proof might look like has been given here. Where specifically does the syntactic restriction kick in, and how?

Earlier I apparently misunderstood the syntactic restriction. I (apparently incorrectly) thought it to prohibit defining GoedelSchema, a mapping from natural numbers to sentences that include those numbers. That restriction would prevent the 2nd theorem's construction of inconsistency from succeeding. And in my way of looking at things, the mistaken restriction would make good "philosophical" sense. By analogy, the characters in a book (the natural numbers) don't have the capacity to write any part of that book (the sentences).

Well, that mistaken interpretation is wrong so what's the right one?

You might try putting your proposal in Direct Logic

You might try putting your proposal in Direct Logic, in as succinct a way as you can, on how you propose to construct a self-referential sentence, including keeping track of what is being done in mathematics versus a mathematical theory (the restrictions are different).

re: You might try putting your proposal in Direct Logic

You might try putting your proposal in Direct Logic, in as succinct a way as you can, on how you propose to construct a self-referential sentence.

Surely you have noticed that you have not provided anyone with a formal enough specification of DL for that to be possible.

Also, is there some reason you are unable to answer the damn question?

Direct Logic has tight restrictions

Direct Logic has tight restrictions to bar self-referential sentences. Any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

re: Direct Logic has tight restrictions

Direct Logic has tight restrictions to bar self-referential sentences. Any examples of ambiguities in the grammar on page 10 of Direct Logic are most welcome.

Please look over the sketch of a DL self-proof of inconsistency and point out where you think that proof requires a "self-referential sentence" of the sort prohibited.

At this point I am beginning to assume that you can not point out any problem in the proof because DL is, in fact, inconsistent.

Direct Logic grammar has to be obeyed exactly

The Direct Logic grammar has to be obeyed exactly:
* Everything used has to be justified by a clause in the grammar (quote which clause)
* Can't add any extra clauses to the grammar

functionals which are (numbers -> sentences)

You can add by axiom any that don't create the kind of self-reference needed to generate a contradiction. I suspect that's what he means.

The shortfall of rigor seems

The shortfall of rigor seems to me to be a really fundamental problem here, because the claimed strategy rests on really technical grounds, so that the devil (or, at least, Godel) is in the details.

A way of thinking about it (not essential to the preceding observation): Some sentences lead to problems. I suggest that not all self-referential sentences lead to problems. Distinguishing between those that do and those that don't would probably be comparable, at some deep level, to solving the halting problem. Therefore the only way to decidably eliminate all the problem sentences is to eliminate a larger class of sentences that includes all the problems and some other, non-problematic sentences. Poetically not unlike retreating from Chomsky type 0 grammars to Chomsky type 1 grammars; some power has been lost. I would like to know what power has been lost. (This differs from my own nascent approach, which is to try to transform logical paradoxes into non-halting, since programmers already live with the halting problem.)

A mathematical theory cannot enumerate its own theorems

A mathematical theory cannot enumerate its own theorems. The closure property of a mathematical theory is part of the overarching framework of mathematics. See page 12 of Mathematics self-proves its own Consistency.

check me on this...

Tom? I may be remembering this wrong, but as I remember it I have to question what you wrote above.

You say that Godel showed that a self-proof of consistency, or an axiom of consistency, leads to a contradiction (inconsistency). I had believed that the self-proof or axiom of consistency led to a demonstrable *incompleteness*, i.e., consistency and completeness are complementary qualities of which you can have at most one. Did you mistype, or did I remember it wrong?

re check me on this....

Ray, could you please look over my post "Hewitt, please explain..."?

It gives a pretty detailed sketch of Goedel's first proof as applied to DL.

Then it combines that result with DL's theorem of self-consistency in order to derive a contradiction -- that's Goedel's 2nd theorem.

The question is: What step in this proof (if any) is ruled out by DL's syntactic rules? I thought for a while that it would not be legal to define what the proof names GoedelSchema but now Hewitt seems to say otherwise. So I used Goedel's 2nd to provide a self-proof of inconsistency in DL.

Let's see what he says.

re: Please don't hint.

provoked an emotional reaction of irritation in me.

Sorry. I only hinted because the comment I was replying to was based on a pointedly unkind reading of Hewitt's text. I think that if a person suspects Hewitt has written something really, really off the rails, the first thought that person ought to have is that they themselves are not reading Hewitt correctly. Harsh criticisms should be reserved for more generous readings, usually.

I'll unpack the hint:

Hewitt is describing a formal system for mathematics. A useful intuition to have about "formal systems" is that given a formal system, you could write a computer program based on its rules. The computer program could run forever, printing all of the valid proofs in the system, starting with the shortest ones and continuing to longer and longer proofs. The system is formal enough to allow that level of automation.
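
A minimal sketch of that intuition in Python (the proof checker here is a placeholder rule, not any real system's; the point is only the shortest-first grind):

from itertools import count, product

ALPHABET = "01"  # stand-in for the system's symbols

def is_valid_proof(s):
    # placeholder so the demo prints something: an even number of 1s;
    # for a real system you would plug in its actual checking rules
    return s != "" and s.count("1") % 2 == 0

def all_proofs():
    for n in count(1):                   # lengths 1, 2, 3, ...
        for tup in product(ALPHABET, repeat=n):
            s = "".join(tup)
            if is_valid_proof(s):
                yield s

gen = all_proofs()
for _ in range(5):
    print(next(gen))                     # 0, 00, 11, 000, 011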

Hewitt's system is powerful enough to talk about the natural numbers. For example, if you wanted to, you could take all of the proofs in the famous number theory book "An Introduction to the Theory of Numbers" by Hardy and Wright, and you could express those proofs in Hewitt's system with no difficulty. They would look and act just like you are used to. Hewitt's system is intuitively familiar in that way.

Hewitt's formal system for math ("Direct Logic" with suitable axioms) has some limited ability to talk in general about the propositions that it can prove and about the "sentences" in the formal language of the system. The system can talk a little bit about itself, in that limited sense.

Using that ability to talk about its own propositions, it so happens that Hewitt's system contains a formally valid proof of its own consistency. The proof doesn't really give us any deep reason to believe the system is consistent or isn't consistent... the key thing is just that it's formally valid. Cute, but...

Goedel's "2nd incompleteness theorem" says very roughly that if a formal system can handle number theory, and that system can prove itself to be consistent.... then that system is actually inconsistent.

So superficially, you might think:

"Well, Hewitt's system can do number theory."

"And Hewitt's system proves itself consistent."

"So according to Goedel, Hewitt's system must be inconsistent."

And this is why what Hewitt is saying is interesting. Because, as it turns out....

Goedel's proof of the 2nd theorem is a recipe. The recipe tells you how to start with a formal system that can prove its own consistency, and to conjure up, in that system, a proof of a self-contradictory statement.

This is the new and interesting thing: In Hewitt's system, Goedel's recipe doesn't work.

Other people besides Goedel have given some famous recipes for deriving a contradiction from a system that can prove its own consistency. Those recipes don't work either -- not for Hewitt's formal system. (I won't try to rehash in detail, right here, why Goedel's recipe won't work but the gist is that certain kinds of formal sentence, that Goedel's recipe requires, aren't permitted by the formal grammar for Hewitt's system.)

Most of the paper is taken up with looking at a kind of "history of ideas" around Goedel and how people interpreted his results. Hewitt is kind of pointing out why nobody thought of this before although Wittgenstein was kind of on the right track. The historical discussion also talks about the kind of psychological concepts people have used to think about "this stuff" and so maybe some suggestion about how to think about Hewitt's system.

Thomas, you have it almost exactly right!

Thomas, you have it almost exactly right!

Just one little niggle:

Theorem ⊢Mathematics is Open
Proof. Suppose, to obtain a contradiction, that it is possible to prove closure, i.e., that there is a provably computable total procedure Proof that enumerates the theorems of Mathematics.

As a consequence of the above, there is a provably total procedure ProvableComputableTotal that enumerates the provably total computables that can be used in the implementation of the following procedure:
Diagonal[i] ≡ ⌊ProvableComputableTotal[i]⌋[i]+1

However,
⊢ ComputableTotal[Diagonal] because Diagonal is implemented using provably computable totals
⊢ ¬ComputableTotal[Diagonal] because Diagonal is a provably computable total that differs from every other provably computable total.

Torkel Franzén [2004] argued that mathematics is inexhaustible because of inferential undecidability (“incompleteness”). The above theorem that mathematics is open provides another independent argument for the inexhaustibility of mathematics.

your niggle

Let me try to translate from Hewitt-speak to Tom-speak a little bit.

Let's think about all of the computable sequences of 1s and 0s, including the sequences that don't terminate. Every program that we can prove either terminates or keeps on spitting out more 1s and 0s corresponds to some "provably computable" sequence of 1s and 0s.

A program that prints a few 1s and 0s and then hangs forever does not correspond to a "provably computable" sequence of 1s and 0s.

A program that prints 1s and 0s and happens to never hang -- but none of us is able to prove it never hangs -- well, the output of that program (g*d knows) is computable, but unless we can find another way to print the same sequence it isn't provably computable.

A sequence is "provably computable" if we have a non-hanging computer program that prints the sequence, and we can prove that the program doesn't hang.

Some smart ass comes along and says "I have a formal system for proving stuff about these computable sequences. My formal system will generate proofs, from shortest to longest, identifying one program after another that prints a provably computable sequence. Every provably computable sequence is printed by a program of finite length with a proof of finite length to go with it. Eventually, just by exhaustively checking all the strings of length N, then of length N+1, and so forth -- my formal system will generate all the programs that provably generate sequences of 1 and 0 without hanging. So here is a theory of all the computable sequences".

A smarter ass comes along and makes a diagonalization argument. He writes a new program that takes as its input, the list of proofs from the first program. The proofs from the first program could be numbered. Here is the first proof that algorithm A produces a computable sequence. Here is the second proof that algorithm B produces a computable sequence. And so forth.

And the smarter ass says here is a new algorithm: it takes your list of proofs as input. For proof #N in your sequence, this new algorithm prints either a 1 or a 0 --- whichever is different from what your algorithm #N would print.

And the smarter ass's meta-program pretty trivially seems to print a sequence of 1s and 0s without terminating. It prints a "provably computable sequence". But on the other hand its output is different from all the sequences printed by the first program (isn't it?) -- so that first program must not be able to print all of the provably computable sequences after all!

And, of course, we could go ahead and add in the second program as a patch to the first program. So now we have all the sequences of the first program PLUS this new sequence from the second program. But even if we do that, won't someone come along and do the same rotten trick again, making *once again* a provably computable sequence that differs from all the sequences in our existing list?

No matter what happens we can always construct a new program that computes a provably computable sequence that none of our earlier programs computed. In that sense, maths is inexhaustible.

It's not too mystical, though.

Let's start with a formal system X that claims to list all the algorithms for provably computable sequences. We know that's a bogus claim, so we make this diagonalized algorithm, Diag[X], that differs from each of the computable sequences proved by X in at least one digit.

System X' is the same as system X except that it includes Diag[X] as the 0th element of the list of provably computable algorithms emitted by X'.

System X'' is the same as system X' except that it includes Diag[X'] as the 3rd element of its list.

System X''' is the same as system X'' except that it includes Diag[X''] as its 6th element, and so on.

In this way we can see that the class of "provably computable" sequences is "open". We have inexhaustible, and in fact uncountable, supplies of new axioms we might pick from to add.
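
The diagonal move itself fits in a few lines of Python (a toy: the enumeration is a fixed list of total generators rather than the output of a formal system X):

enumeration = [
    lambda n: 0,      # sequence #0: 0, 0, 0, ...
    lambda n: 1,      # sequence #1: 1, 1, 1, ...
    lambda n: n % 2,  # sequence #2: 0, 1, 0, 1, ...
]

def diagonal(n):
    # differ from sequence #n at position n
    return 1 - enumeration[n](n)

for n in range(len(enumeration)):
    print(f"seq {n} at {n}: {enumeration[n](n)}  diagonal at {n}: {diagonal(n)}")

# Appending 'diagonal' to the list just restarts the game: the extended
# list has its own diagonal, which is the X, X', X'', ... progression.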

Most of the paper is taken

Most of the paper is taken up with looking at a kind of "history of ideas" around Goedel and how people interpreted his results.

Speaking of my hard-won insights into how to not sound cranky: It's a capital mistake to use historical discussion in explaining one's ideas. The temptation can be pretty overwhelming, I know: Finding yourself in a conceptually unconventional place, you puzzle over what is interfering with others understanding your work, you find it in the historical evolution of ideas, and you think, ah yes of course, if only people understood this they would realize that I'm not a crank but just coming from a different perspective that they could have seen too if they hadn't been trapped by this historical sequence. This impulse must be ruthlessly suppressed. The only circumstances in which it would be worth people's time to study this sort of history would be after they're already thoroughly convinced you're right six ways from Sunday. Which isn't going to happen at all within the scope of your first paper, or possibly even for several papers thereafter. And maundering on about how people supposedly made mistakes in the past is a very common pattern for people who actually are cranks, so by adopting it you camouflage yourself as a crank. When presenting a radical idea, do so in an exquisitely plain, objective, clinical way.

“Those who do not learn from history are doomed to repeat it”

“Those who do not learn from history are doomed to repeat it”

And one of the lessons to be

And one of the lessons to be learned from history is, that there's such a thing as the wrong moment for a lesson in history.

No implicit assumptions in Math self-proof of consistency

The self-proof of the consistency of mathematics at the beginning of this forum topic does not have implicit assumptions. Classical mathematics allows proof by contradiction absolutely.

Of course, if a contradiction were to be found in mathematics then everything would be provable in mathematics. Over the centuries, humans have debugged mathematical arguments to the point that mathematics is thought to be consistent. Good luck finding an inconsistency in the mathematics (⊢) of Direct Logic!

Perhaps it's those who do

Perhaps it's those who do not learn from the future who are doomed to repeat it....

Are you describing proof by

Are you describing proof by contradiction? :)

Sounds more like reductio ad

Sounds more like reductio ad absurdum to me.

Self-referential sentences also tripped up Alonzo Church

[Church 1931] attempted to construct logical foundations on the basis of the lambda calculus. However, using self-referential propositions, Church's students Kleene and Rosser were able to show Church's system to be inconsistent. Consequently, Church gave up using the lambda calculus for logical foundations and only used the lambda calculus for computation.

You're leaving out an

You're leaving out an important step in the historical process, in which Church and Rosser showed that a subset of the logic is consistent.

Lack of rigor seems a show-stopper for you, but I'm nonetheless curious — what self-referential sentence do you see as involved in the Richard Paradox?

No substantial part of logic of Church [1932] proved consistent

There may be some confusion: No substantial part of the logical part of Church [1932] was proved consistent. Later on, the Church-Rosser theorem [1936] showed a kind of non-logical "consistency" for the computational part of the lambda calculus in the sense that it doesn't matter in which order reductions are performed. Of course, the Church-Rosser kind of computational "consistency" should not be confused with logical consistency.

You might try formulating the Richard Paradox in Direct Logic and see how far you get ;-) A goal of Direct Logic is to bar all of the monster sentences for the known paradoxes.

The computational part is

The computational part is part of the logic, and it's substantial. If you don't understand how that proof fit into consistency and into Hilbert's program, you don't understand the history nearly as well as you think you do.

You're the one making an extraordinary claim here. Even your allies say your paper isn't as rigorous as it should be, and your strategy is technical so that the Devil is in the details and a lack of rigor is fatal to your case: if you don't have a rigorous treatment, you have nothing but claims. And when you ask someone else to "try", you're basically asking them to fill in for themselves whatever you've left out. Asking them to do what you should have done to justify an extraordinary claim. And sure enough, when I ask a specific question, your response is to tell me to go do your work for you. I'm giving you an opportunity to explain your work. The Richard Paradox was used in showing inconsistency of Church's logic, and you say that you've made such things impossible by prohibiting self-referential sentences. Very well; explain what it is about the Richard Paradox that you're making impossible. If you can't explain something simple like this about your strategy, you weren't ready to go public with it yet.

More confusion about inference versus computation

According to Cantini:

In the twenties and in the early thirties, the orthodox view of logic among mathematical logicians was more or less type- or set theoretic. However, there was an effort to develop new grand logics as substitutes for the logic of Principia Mathematica. These frameworks arose both as attempts to recover the simplicity of the type-free approach, as derived from the so-called naive comprehension principle, as well as in order to satisfy meta-mathematical needs, such as the clarification of fundamental concepts underlying the notions of “formal system,” “formalism,” “rule,” etc.
...
However, the theories of Curry and Church were almost immediately shown inconsistent in 1934, by Kleene and Rosser, who (essentially) proved a version of the Richard paradox (both systems can provably enumerate their own provably total definable number theoretic functions). The result was triggered by Church himself in 1934, when he used the Richard paradox to prove a kind of incompleteness theorem (with respect to statements asserting the totality of number theoretic functions).
...
The reason for the inconsistencies was eventually clarified by Curry's 1941 essay. There he distinguishes two basic completeness notions: a system S is deductively complete, if, whenever it derives a proposition B from the hypothesis A, then it also derives the implication A⇨B (deduction theorem or introduction rule for implication); S is combinatorially complete if, whenever M is a term of the system possibly containing an indeterminate x, there exists a term (Church's λ[x]→ M) naming the function of x defined by M. Curry then remarks that the paradox of Kleene-Rosser arises because Church's and Curry's systems satisfy both kinds of completeness, thus showing that the two properties are incompatible.

Non sequitur

That remark is a non sequitur, with no bearing on anything I said.

Removed

Removed

Removed

Removed

FOL can't prove there are no nonstandard Peano models

First-Order Logic cannot prove that there are no nonstandard Peano models. In particular, first-order logic cannot prove that there are no infinite numbers in a model of the first-order Peano axioms.

In this way, Direct Logic is stronger than First-Order Logic. See page 8 of Mathematics Self-proves Its Own Consistency.

Removed

Removed

Many a problem in CS can be solved using a level of abstraction

If you had read page 8 in the reference linked in my post, you would know another way in which a problem in CS can be solved using another level of abstraction. First-order logic cannot prove that there are no infinite numbers in the standard model of the full second-order Peano axioms. (All of the models of the full second-order Peano axioms are isomorphic by a unique isomorphism.)

Removed

Removed

It looks like we agree about a limitation of First-Order Logic

So it looks like we agree that: First-Order Logic cannot prove that there are no infinite numbers in the standard model of the full second-order Peano axioms. (All of the models of the full second-order Peano axioms are isomorphic by a unique isomorphism.)

Other issues are more complicated. You might want to actually read the paper Mathematics Self-proves Its Own Consistency.

Removed

Removed

Removed

Removed

Elements of a model are not necessarily natural numbers

A model of a first-order theory of Natural Numbers may contain elements that are not integers. See page 8 of Mathematics Self-proves Its Own Consistency

Removed

Removed

Categoricity of Natural Numbers and Real Numbers

[Dedekind 1888] and [Peano 1889] proved that models of the second-order axioms of Natural Numbers and Real Numbers are unique up to isomorphism with a unique isomorphism.

Removed

Removed

Direct Logic Proves Categoricity of Natural Numbers and Reals

Direct Logic proves categoricity of natural numbers and real numbers. For example, see end note lxxv of Formalizing common sense for scalable inconsistency-robust information integration

Nonstandard models are a security hole in the shared understanding of natural numbers and real numbers between humans and computer systems.

Removed

Removed

Categoricity of Natural Numbers is fundamental to CS

As shown for example in Mathematics self-proves its own consistency, Categoricity of Natural Numbers is not first-order provable. Consequently, Direct Logic uses the original Dedekind/Peano proof which is not first-order.

For security reasons, it is important that humans and computer systems agree *exactly* on the Natural Numbers because they are fundamental to understanding the operation of computers. It would not be good if it were to be possible to smuggle in an infinite Natural Number.

Removed

Removed

Removed

Removed

Removed

Removed

Not sure if I can add anything to this but...

1) Godel's paper does not claim that "mathematics is incomplete". In fact, Godel proved that first-order logic is complete.

2) It is false that Wittgenstein or anybody else proved that self-referential statements necessarily lead to contradictions. Self-referential statements are no different than any other kind of statement; that's one of the key insights that comes when you understand exactly what Godel-numbering is doing... it's just giving you a way to index a particular statement out of all possible statements. And if the index happens to match the current statement, then you have a self-referential statement. This is so common in computing (recursion) that it's not even exotic anymore.

3) From (2), we can see that sufficient conditions for Godel's proof in a given formal system are that (a) there are infinitely many statements within the formal system and (b) it is possible - within the given formal system - to encode any given statement in the language of the formal system itself. Thus, systems which can be complete and consistent are those which either have only a finite number of possible statements, or those which have an infinite number of possible statements but those statements cannot be encoded within the formal system itself.

4) Unfortunately for someone wishing to escape the implications of incompleteness on "real" math, the rules of basic arithmetic on the integers are sufficiently powerful to satisfy the conditions given above - namely, there are infinitely many of them, and a simple encoding system can be used to map the natural numbers to statements about the natural numbers. Godel's original numbering is just one particular example of such a mapping.

In summary, the incompleteness theorems do not apply to "mathematics" - they apply to any formal system that satisfies the properties which permit the construction of a Godel sentence in that formal system (by whatever means). Among the sufficient conditions for constructing a Godel sentence in a formal system are that the formal system has infinitely many possible statements and that it is possible to encode those statements within the formal system itself - conditions satisfied by the natural numbers and the basic arithmetic operations.

Clayton -
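
To make point (3b) concrete, here is a toy invertible encoding of statements as natural numbers in Python (bytes read as a base-256 numeral, rather than Godel's prime-power scheme). "Self-reference" is then nothing more exotic than a statement whose text happens to mention its own index:

def encode(statement: str) -> int:
    return int.from_bytes(statement.encode("utf-8"), "big")

def decode(n: int) -> str:
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, "big").decode("utf-8")

s = "2 + 2 = 4"
n = encode(s)
print(n)          # a large integer standing for the statement
print(decode(n))  # "2 + 2 = 4"
assert decode(encode(s)) == s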

It is unreasonable to suppose that sentences can be enumerated

Why is it reasonable to suppose that there is an enumeration of all sentences? A grammar of sentences does not provide such an enumeration. Also, sentences are uncountable because the real numbers are uncountable and there is a sentence for every real number. See Mathematics self-proves its own consistency.

Well, I wasn't talking about

Well, I wasn't talking about "enumerating all sentences"; but there's no reason to suppose that enumerating sentences (say, in lexicographic order) is any more problematic than enumerating natural numbers or the primes.

"there is a sentence for every real number"

This is flatly false. Please write out for me the sentence that names the Chaitin constant for a specific reference machine U.

A real by any other name computes the same

Of course, there is a sentence that defines Chaitin's constant Ω; just as there is a sentence that defines π, which is the ratio of a circle's circumference to its diameter.

You can't prove that a real number doesn't exist simply because all real numbers cannot be enumerated. There is no denumerable lexicographic order for the real numbers although they can be well-ordered by the axiom of choice.

"there is a sentence that

"there is a sentence that defines Chaitin's constant Ω"

A sentence is ordinarily understood to be finite in length. By construction, there is no finite-length description of Omega.

"You can't prove that a real number doesn't exist"

I wasn't trying to. I was pointing out that Omega exists in the real numbers and there is no sentence which describes it, contrary to your claim that "there is a sentence for every real number".

counting confusions

A sentence is ordinarily understood to be finite in length. By construction, there is no finite-length description of Omega.

Let's unravel this a bit.

Hewitt asserts that in DL there are uncountably many sentences, pointing out that there are uncountably many reals and at least one sentence about each such real.

You, claytonkb, correctly pointed out that in the ordinary historic sense of the term of art, "sentence" refers to a finite string drawn from a countable or, without loss of generality, finite alphabet.

Thus, you correctly implied, claytonkb, that "uncountably many sentences" is kind of an oxymoron.

It's only an oxymoron speaking with reference to dominant tradition, though. Hewitt is in some sense arguing for liberation from that particular tradition.

Hewitt replied with:

Of course, there is a sentence that defines Chaitin's constant Ω; just as there is a sentence that defines π, which is the ratio of a circle's circumference to its diameter.

He is correct but he hasn't actually responded to your point. There is some talking past one another here.

You replied:

A sentence is ordinarily understood to be finite in length. By construction, there is no finite-length description of Omega.

You missed, there.

Of course there is a finite length description of Omega. There's a formalizable one on the Wikipedia page, for example.

Hewitt's mistake in meeting your point was that his examples of Omega and Pi have finite definitions; the existence of reals that have finite definitions doesn't change the underlying fact that the ordinary sense of the term of art "sentence" refers to finite sentences over finite or countable alphabets.

Hewitt's assertion is that there *ought* to be uncountably many sentences because there are uncountably many reals. He relies on us agreeing that there ought to be mathematical sentences that mention, as constants, reals that can not be finitely defined.

So let's examine it from that understanding. Ought we agree with Hewitt about the idea we should have an uncountable number of sentences?

Goedel's central construction of an indirectly self-referential sentence can be reinterpreted in this context as a proof about a subset of DL that deals only in finite sentences.

Goedel shows us how to construct, in DL, a finite sentence that is true and that is indirectly about the finite sentences and finite proofs of DL. This sentence, G, will lack any finite proof in DL (if DL is consistent). And additionally, because of the indirect self-reference, we will happen to know that G is true.

Hewitt is interested in Goedel's second theorem which says that if a theory is strong enough to construct G *and* strong enough to prove its own consistency, then it trivially proves itself inconsistent. After all, the theory can prove that it can not prove G and, thereby, it has proved G. That's how Goedel's 2nd works in systems with only finite sentences.

DL contains a (trivial) self-consistency proof. And it can encode a kinda-sorta G that refers only to the finite sentences and finite proofs of DL.

So what does Goedel's 2nd tell us about DL?

Reinterpreted for the brave new world of DL, Goedel's 2nd tells us that DL contains no *finite* proof of G, but there is still the open possibility that DL contains a proof of G that can not be finitely specified.

Even assuming DL could be formally perfected per its intentions, I'm not sure what is usefully added by contemplating these uncountably many infinite sentences and proofs. Perhaps there is a practical application in computer-assisted math I'm not seeing clearly? Perhaps not.

"Of course there is a finite

"Of course there is a finite length description of Omega. There's a formalizable one on the Wikipedia page, for example."

Well, perhaps we're talking past each other re. the word "description." By *description* of Omega, I mean one of the following:

1. A writing-out of Omega in its entirety, e.g., 3/2 = 1.5
1a. This could even be a "human-readable" writing out, such as 1/7 = 0.142857142857...
2. An algorithm which could write out Omega in computable time
3. An algorithm which could check a witness of Omega in computable time

By construction, there is no finite-length description of Omega, where "description" is defined in one of the senses 1-3 above. I checked Wikipedia's article and I see no description of Omega that purports to satisfy any of 1-3, either.
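
For what it's worth, here is the one sense in which Omega is approachable, sketched in Python: it can be approximated from below by dovetailing over programs, but nothing computable tells you when a given bit has settled. The "machine" and the weighting here are toys standing in for the reference machine U:

from fractions import Fraction

def toy_halts(program: str, steps: int) -> bool:
    # stand-in halting behaviour: a program halts iff it contains a "1",
    # after as many steps as its length (any decidable toy rule works)
    return "1" in program and steps >= len(program)

def omega_lower_bound(max_len: int, steps: int) -> Fraction:
    total = Fraction(0)
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            program = format(i, f"0{n}b")
            if toy_halts(program, steps):    # halted within the budget
                total += Fraction(1, 4 ** n) # toy weighting, sums below 1
    return total

# The approximations only ever grow, but for the real Omega there is no
# computable function of k that bounds how long to wait for bit k.
for budget in (1, 2, 4, 8):
    print(budget, float(omega_lower_bound(4, budget)))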

describing Omega

By *description* of Omega, I mean one of the following:

By "description", I took you to mean an unambiguous definition without regard to the decidability of the values of the individual bits.

Well, that's one of the

Well, that's one of the nearly metaphysical aspects of Omega: what exactly is "ambiguous" versus "unambiguous"? On an informal level, I think that Omega actually connects time and knowledge through computability - what we mean by "unambiguous" cannot be separated from the time required to make it so. The nth bit of Omega is eventually computed in unbounded time, just not in computable time. How odd that definitions and time are connected...

Wittgenstein: self-referential sentences lead to inconsistency

Wittgenstein showed that self-referential sentences lead to inconsistency.
See Mathematics self-proves its own consistency

Incompleteness proved without self-referential propositions

Incompleteness of mathematics can be proved without self-referential propositions. Actually, incompleteness is an immediate consequence of the computational undecidability of provability. See Mathematics self-proves its own consistency

Real computer scientists don't use Gödel numbers ;-)

Real computer scientists don't use Gödel numbers ;-)

They use abstraction and reification instead. See Mathematics self-proves its own consistency.

Who says LtU is slowing down?

This thread has 390 (now 391) posts. When it rounds past 400, it will be on to its third page of comments. Have we ever had a comment thread that made it to page 3? Usually, the buggy Drupal paging code seems to curb whatever enthusiasm has made it to the end of page 1.

Drupal is drooping at LtU

Drupal is drooping at LtU :-(

At first, I was totally baffled by the bugs, wondering where to find the comments for the broken links in the right column.

Now I jump straight to the last page and manually scan for updates.

An incantation you might like

When you see a post in the "recent posts" thread that you want to jump straight to, you won't necessarily know what page it's on. But what you can do is right-click on it and copy the link address. Then paste it into your URL bar and insert "?comments_per_page=1000" after the thread number. That will take you right to the comment. You could left click instead of copy-pasting the link, but you'll lose the asterisks on new posts.

For example, your comment would be:
http://lambda-the-ultimate.org/node/4784?comments_per_page=1000#comment-77054

Removed

Removed