Mathematics self-proves its own Consistency (contra Gödel et. al.)

The issue of consistency has come up in many LtU posts.

Some readers might be interested in the following abstract from Mathematics self-proves its own Consistency:

                   Mathematics self-proves its own Consistency
                          (contra Gödel et. al.)

                             Carl Hewitt
                         http://carlhewitt.info

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.
In addition, Proof by Contradiction can be used to infer the consistency of mathematics 
by the following simple proof:
   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
     Then there is some proposition Ψ such that ⊢Ψ and ⊢¬Ψ.
     Consequently, both Ψ and ¬Ψ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is consistent.
The above theorem means that the assumption of consistency
is deeply embedded in the structure of classical mathematics.

The above proof of consistency is carried out in Direct Logic [Hewitt 2010]
(a powerful inference system in which theories can reason about their own inferences).
Having a powerful system like Direct Logic is important in computer science
because computers must be able to carry out all inferences
(including inferences about their own inference processes)
without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent
because of contradiction with the result of Gödel et al. that
if mathematics is consistent, then it cannot infer its own consistency.
The resolution is not to allow the self-referential propositions
that are used in the proof by Gödel et al.
that mathematics cannot self-prove its own consistency.
This restriction can be achieved by using type theory
in the rules for propositions
so that self-referential propositions cannot be constructed
because fixed points do not exist. 
Fortunately, self-referential propositions
do not seem to have any important practical applications.
(There is a very weak theory called Provability Logic
that has been used for self-referential propositions coded as integers,
but it is not strong enough for the purposes of computer science.) 
It is important to note that disallowing self-referential propositions
does not place restrictions on recursion in computation,
e.g., the Actor Model, untyped lambda calculus, etc.

The self-proof of consistency in this paper
does not intuitively increase our confidence in the consistency of mathematics.
In order to have an intuitively satisfactory proof of consistency,
it may be necessary to use Inconsistency Robust Logic
(which rules out the use of proof by contradiction, excluded middle, etc.).
Existing proofs of consistency of substantial fragments of mathematics
(e.g. [Gentzen 1936], etc.) are not Inconsistency Robust.

A mathematical theory is an extension of mathematics
whose proofs are computationally enumerable.
For example, group theory is obtained
by adding the axioms of groups to Direct Logic.
If a mathematical theory T is consistent,
then it is inferentially undecidable,
i.e. there is a proposition Ψ such that
¬⊢TΨ and ¬⊢T¬Ψ,
(which is sometimes called incompleteness)
because provability in T
is computationally undecidable [Church 1936, Turing 1936].

Information Invariance is a
fundamental technical goal of logic consisting of the following:
  1. Soundness of inference: information is not increased by inference
  2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance
even when information is inconsistent
using inconsistency robust inference.
But that mathematics is inferentially undecidable (incomplete)
with respect to Ψ above
does not mean incompleteness
with respect to the information that can be inferred
because it is provable in mathematics that ¬⊢TΨ and ¬⊢T¬Ψ.

The full paper is published at the following location:
Mathematics self-proves its own Consistency


Wait.... It's not April first yet.

You're trolling, right?

Meta-trolling?

Ray,

Have you read the paper?

Removed

Removed

Principle of Integrity: ⊢Φ implies Φ

The Principle of Integrity in Direct Logic is that ⊢Φ implies Φ. Consequently, a theorem can always be used in a proof.

Removed

Removed

Removed

Removed

Re: Meta-Trolling?

OK, I need some context here. On a superficial level, I see a lot of name-dropping, a bunch of mathematical symbolism, and some mumbo-jumbo.

If this is an important result, why couch it in impenetrable mathematics? CACM has for a long time provided context for theoretical results in a section called Technical Highlights that precedes the actual research paper. That way mere mortals can understand what's being said, why it's important, and what its consequences are. Without that context here, there is no deeper level.

Again on a superficial level, by a non-mathematician, if Direct Logic disproves Gödel, then Direct Logic is inconsistent and hence, in an important sense, wrong. Any inconsistent formal logic can be twisted to prove any result that's desired, in this case that mathematics self-proves its own consistency.

And before you say, this result demands highly specialized mathematics, let me point you to Leslie Valiant's book Probably Approximately Correct. It's intensely and deeply mathematical (computational) and yet it's written in English and can be understood by non-mathematicians (with a lot of effort, I might add, but that's OK.)

I'm sorry for the rant, but if you want more than n people, where n is a small number, to understand what you're saying, you need to speak their language.

P.S. "Suppose to obtain an inconsistency that mathematics is consistent" is not an English sentence. There seems to be a verb. [Also an non-sentence.]

Turnstile (⊢) shouldn't scare you

Turnstile (⊢) shouldn't scare you. It was invented long ago by Frege and is standard mathematical notation. It is not "mumbo-jumbo".

However, the self-referential propositions of Gödel et al. are indeed "mumbo-jumbo". A long time ago Wittgenstein showed that the kind of self-referential proposition that Gödel used in his proof leads straight to contradictions in mathematics.

PS. It looks like you haven't bothered to read the paper.

PPS. "Suppose to obtain an inconsistency, that mathematics is inconsistent" is indeed an English sentence with the verbs "suppose" and "is",

Yes, no and no

Yes, you are right. I have not read your paper, but I have bothered to try. The message I was trying to get across is that I found your paper impenetrable. (And your slides even more so.) It's like the tree-in-the-forest conundrum: If no one reads your paper, does it matter? Possibly there are others who can make their way through it, but I am not one of them. Perhaps it will become one of the most cited papers in theoretical computer science; perhaps not.

No, turnstile doesn't scare me any more than a summation sign scares me. I do have a degree in computer science, albeit from the Middle Ages of computer science, so mathematical notation per se doesn't scare me. It's more a notion kind of thing than a notational kind of thing (to paraphrase Gauss).

I still don't think Suppose to obtain an inconsistency, that mathematics is inconsistent. is an English sentence. It's missing something. The only interpretation that makes sense to me would have a comma after "Suppose". (Have you read Eats, Shoots & Leaves? It's hilarious. The problem there is an extra comma that completely changes the title's meaning.) Did you mean In order to obtain an inconsistency, [I will] assume that mathematics is inconsistent?

In order to obtain a contradiction, assume ....

I agree that it would be better phrased as "In order to obtain a contradiction, assume that mathematics is inconsistent."

You can't find a more elementary proof of a fundamental result.

You can't find a more elementary proof of a fundamental result than the one in the paper:

Proof by Contradiction can be used to infer
the consistency of mathematics by the following simple proof:
   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
     Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
     Consequently, both Φ and ¬Φ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is consistent.

Self-promotion

I've been reading the controversy surrounding your Wikipedia edits prior to your ban. It makes for genuinely fascinating reading - especially your edits to the section on denotational semantics, and the attempts to link the Actor model to general relativity and quantum mechanics. Is LtU your new vanity publishing forum of choice?

Corruption of Wikipedia and some of its Advocates

Roly,

I see that you are a strong advocate of Wikipedia including its policies of censorship, outlawry, and vilification of academics by anonymous Administrators. Also, you are even willing to engage in the kind of libel that is common on Wikipedia.

You insinuate that there are no connections between the Actor model and the physical concept of indeterminacy or between the Actor Model and relativity of simultaneity. Unfortunately, you are wrong on both counts. (See the published literature on the Actor Model.)

And exactly what is your objection to the denotational semantics of the Actor Model?

Are you advocating that LtU should adopt Wikipedia policies of censorship and outlawry by anonymous Administrators?

PS. You might be interested in the following abstract of an article on Wikipedia:

                Corruption of Wikipedia
                      Carl Hewitt
                 http://carlhewitt.info


Wikipedia's business model is generating Web traffic
(primarily from search engines) for articles
of conventional wisdom and morality
(that are heavily censored by a commune of mostly anonymous Administrators)
to motivate (financial) contributions. 

Wikipedia does not allow proper vigorous academic discussion and debate
because they are incompatible with its business model as follows:
In normal academic practice,
the views of experts are solicited, discussed, and respected.
On Wikipedia, academic experts who have tried to participate have been
denigrated as “self-promoters”, censored, and then banned
on the grounds that their views
are not in accord with Wikipedia-imposed Administrator point of view.
As part of its business model,
Wikipedia has engaged in libel and vilification
in an attempt to intimidate academics
into conforming to the censorship of its Administrators.

The article has the following contents:


Introduction	
Censorship by Wikipedia	
Wikipedia relies on Outlawry	
Vandalism and Harassment by Wikipedia	
Wikipedia is a “Communal Blog”, not an encyclopedia	
Wikipedia’s Business Model	
Wikipedia Libels and Vilifies People	
Wikipedia instigated the defamatory attack by The Observer on Carl Hewitt
Scientific Paradigm versus Wikipedia Paradigm
Deception by Wikipedia
Current trends do not look promising for Wikipedia
For the greater good of Wikipedia, Jimmy “Jimbo” Wales should resign from its Board of Trustees.
Acknowledgments
References
Appendix 1: Wikipedia censorship of “Logic Programming”
Appendix 2: Wikipedia censorship of “Incompleteness Theorems”

The article can be found at the following location:
Corruption of Wikipedia

LtU has no problem with first person sources

This is as good a place for these kinds of threads as any. At worst, the article is ignored because no one gets it, and at best it inspires a good debate.

It is not like LtU is a very noisy forum, and this isn't spam. Let's be open minded at the very least (FYI, I don't get it, but then theory isn't my thing).

The subtle art of the Request for Arbitration

The crafting and prosecution of Wikipedia's RfArs is a political art that often has little to do with fairness. While I think Carl's activity was not in tune with Wikipedia's principle of neutrality, the RfAr struck me as vindictive, because the members of the physics group were (understandably) angry with him, their frustration came through very effectively in their statements of evidence, and he didn't have the kind of colourful defenders he would have needed to impress the ArbCom (Wikipedia's supreme court).

If you read closely, you will see that while nobody provided Carl with muscular support, few computer scientists were happy with the process (later, I received a message from one of the most active members of the physics group who also thought the outcome was bad). If I had an idea of where the process was headed, I would have been less equivocal in my participation.

Don't judge Carl on the basis of a superficial reading of this document.

Wikipedia doesn't seem to have any way to correct its errors

Thanks for your friendly comment. Unfortunately, Wikipedia has fundamental, unresolved problems with its processes.

For example, Wikipedia doesn't seem to have any effective way to correct errors and make apologies. The whole affair (recounted starting on page 15 of Corruption of Wikipedia) was a morass of hubris, libel, and corruption.

I made some beginner's mistakes as a Wikipedia editor. However, the flawed processes of Wikipedia resulted in a bad outcome that further damaged Wikipedia's reputation in academia. That Wikipedia has been unable to make a correction has compounded the damage.

I've seen some of the

I've seen some of the dysfunctional social dynamics of Wikipedia, from the perspective of someone who contributes to one of its sisters (the sister projects, like Wiktionary and Wikinews and so on, are independent of Wikipedia, but are under the aegis of the same charity that runs Wikipedia, the Wikimedia Foundation).

Disclaimers: I've not read up on this case (I've got enough horror stories of my own about Wikipedians behaving badly, without borrowing someone else's), and I don't even know if Hewitt was in the right or in the wrong (I've got some real problems with some of Hewitt's mathematical/logical/computational claims here on LtU, after all, and have no idea whether those things that were bothering me are even relevant to the case).

But I do think Wikipedia has taken itself to a very bad place. There may not be any one reason for this, and there probably isn't any one solution for it, but I've come to believe a major root cause is Wikipedia's principle of "Assume Good Faith" (AGF). It sounds wonderful, doesn't it? I was initially attracted to Wikipedia partly because AGF sounded so idyllic. Unfortunately, from having that principle in place for years, what's happened is that a whole class of nasty characters has developed, who are expert at being vicious in a "civil" way. They'll use and twist policies and guidelines of the project to torment newcomers, and when the newcomers react, the newcomers get slapped on the wrist for failing to assume good faith. So that in the long run, Assume Good Faith has become a protection for people who don't act in good faith, and helps them to chase away people who are of good faith. I'm under no illusions that it's easy to know how to orchestrate a project like Wikipedia, but I can see that the way they're doing it has flaws.

Wikipedia's reputation...

...is something I'd rather not spend too much time discussing, but if you are not aware of it, Dean Buckner has been engaged in a writing project with Eric Barbour that will be of interest: Wikipedia through the Looking Glass: The Wikipedian POV.

(edited: link fixed, thanks Sean)

link error

The url has some extra text prepended.

Hats

Yeah, clearly a tangential topic not to get wrapped up in. I'll make a small supplemental comment/caveat, though: there's a whole population of external critics of the wikimedia movement who are, themselves, a big part of the problem. Just saying, someone who says bad things about wikimedia is not necessarily wearing a white hat. (I'm automatically skeptical of anything Sanger or Wales says on the topic.)

[dup]

[dup]

Captions for Tenniel illustrations are hilarious

The book's captions for the Tenniel illustrations are hilarious!

IMHO

The problem with the above proof is easy to see. There are NO assumptions. It is about everything. What we need is a "mathematical lingo". Substitute your favorite natural language for lingo. I find this site to be very suggestive.

A proof by contradiction uses an assumption

The structure of a proof by contradiction is to assume the negation of what is to be proved and derive a contradiction using the assumption.

I was afraid of this. But

I was afraid of this. But what is implicitly assumed is everything, by the absence of specific assumptions, and the negation of everything by the only stated assumption.

The self-proof uses the assumption that Math is Inconsistent

In order to derive a contradiction, the self-proof uses the assumption that Mathematics is Inconsistent. Since using the assumption leads to contradiction, it follows that Mathematics is Consistent.

Why should I assume that

Why should I assume that mathematics is inconsistent? In model theory the real closed fields and the algebraically closed fields are consistent, as well as anything that can be validly stated within those theories. For reasons like these I am inclined to accept your theorem, but not the proof. Model theory has methods to combine or extend theories and there are some very powerful results, but I don't think we can say that mathematics proves itself, yet.

In order to obtain a contradiction, assume ....

The reason for assuming that mathematics is inconsistent is to derive a contradiction. In proof by contradiction, anything at all can be assumed. But it doesn't get you anywhere unless you can derive a contradiction.

Gentzen developed a proof of the consistency of first-order Peano number theory. But many people were not convinced by the proof.

Similarly, the self-proof of consistency at the beginning of this forum topic is not very convincing. The self-proof simply shows that consistency is deeply built into the very architecture of classical mathematics.

Simpler and clearer approach

The self-proof simply shows that consistency is deeply built into the very architecture of classical mathematics.

All that's needed to show that is to analyze the structure of proof by contradiction, in which any contradiction is assumed to be an invalid conclusion.

The "self-proof" seems to serve no useful purpose. It only obfuscates, raising meta-issues of dubious relevance that detract from the above point.

Self-proof of consistency points out commonly held falsehood

The self-proof of the consistency of mathematics points out a commonly held falsehood that mathematics cannot prove its own consistency. (Last relative clause added after comment immediately below.)

Also, it gets the philosophers' underwear in a knot :-)

The self-proof of the

The self-proof of the consistency of mathematics points out a commonly held falsehood.

I find myself slightly uncertain whether you're viewing that proof as valid.

If you aren't viewing it as valid, that would make sense of some things, and my next question would be what commonly held falsehood you're referring to.

If you are viewing it as valid, I would observe that the proof has two things in common with Anselm's proof of the existence of God. First, both are circular proofs, whose premises include what they claim to prove, so they don't contribute toward establishing their claimed conclusion. Second, in both cases, the proponent's assessment of the merits of the proof appears to be influenced by their desire to believe the conclusion.

Proof is not circular; human desires and dislikes irrelevant

The proof that Mathematics self-proves its own Consistency is valid by the laws of classical Mathematics and is not circular. Euclid (and any succeeding mathematician) could have thought of this proof. Why didn't they?

Classical mathematics is a rigorous subject with exact rules. The desires and dislikes of those who create proofs are irrelevant to the validity of proofs.

If X says that X is

If X says that X is consistent/true/halting/secure/uncorrupted/sane/healthy then it means one of two things:
* X is consistent/true/halting/secure/uncorrupted/sane/healthy and has produced a correct proof.
* X is inconsistent/false/divergent/cracked/corrupt/insane/ill and, because of this, is capable of spitting out nonsense, including the incorrect statement that it is consistent/true/halting/secure/uncorrupted/sane/healthy.

That's Goedel's second incompleteness theorem, and it applies to everything, for example: I am Odin; all Vikings must worship me, since Odin (I) say so.

That dichotomy is much

That dichotomy is much easier than Godel's second incompleteness theorem, which roughly says that, for sufficiently powerful logics, the first asterisk isn't a possibility.

Steer clear of monstrous self-referential propositions

Ever since the Greek philosopher Eubulides of Miletus (4th century BC) developed the Liar Paradox ("This sentence is false."), we have known about inconsistencies that result from self-referential sentences.

Wittgenstein pointed out (see elsewhere in this forum topic) that the self-referential sentence that Gödel proposed ("This sentence is unprovable.") also leads to contradictions.

My advice is: Steer clear of the monsters!

Why didn't they?

OK, so why didn't Euclid and succeeding generations of mathematicians tell us about the self-proof of consistency??

Edit: For some reason the system didn't put this under Hewitt. Oh, it's under Hewitt, but the wrong one.

How the self-proof was overlooked and then discovered

Before the paradoxes were discovered, not much attention was paid to proving consistency. Hilbert et al. undertook to find a convincing proof of consistency. Gentzen found a consistency proof for the first-order Peano theory but many did not find it convincing because it used ordinal induction, which is not elementary. Then following Carnap and Gödel, philosophers unquestioningly accepted the necessity of self-referential propositions in mathematics. And none of them seemed to understand Wittgenstein's critique. (Gödel insinuated that Wittgenstein was "crazy.") Instead, philosophers turned their attention to exploring the question of which is the weakest theory in which Gödel's proof can be carried out. They were prisoners of the existing paradigm.

Computer science brought different concerns and a new perspective with the following characteristics:
* powerful so that arguments (proofs) are short and understandable
* standard so they can join forces and develop common techniques and technology
* inconsistency robust because computers deal in pervasively inconsistent information

The results of [Gödel 1931], [Curry 1941], and [Löb 1955] played an important role in the development of Direct Logic:
* Direct Logic easily formalized Wittgenstein's proof that Gödel's self-referential sentence leads to contradiction. So the consistency of mathematics had to be rescued against Gödel's self-referential sentence. The self-referential sentences used in results of [Curry 1941] and [Löb 1955] led to inconsistency in mathematics. So the consistency of mathematics had to be rescued against these self-referential sentences as well.
* Direct Logic easily proves the consistency of mathematics. So the consistency of mathematics had to be rescued against Gödel's "2nd incompleteness theorem."
The common solution adopted in Direct Logic is to bar self-referential sentences. Inferential undecidability ("incompleteness") of mathematical theories still has to be provable in Direct Logic, which can be accomplished using computational undecidability.

In summary, computer science advanced to a point where it caused a paradigm shift.

For me, your answer suggests

For me, your answer suggests another answer. Before the logic as foundation movement, Kant's analytic and synthetic were separate categories, with mathematics as analytic. Logic as foundation implicitly merges analytic and synthetic into something we might call linguistic. It is in this linguistic construct that we have Godel's inconsistency. Quine made the argument that analytic and synthetic are not separate in the 1950s.

Pragmatics and Cybernetics are destroyed by this assumption. C. I. Lewis realized this in 1912, and introduced strict implication and modal logic to counter Russell's material implication. But this was not enough to save Pragmatics or Cybernetics from decline starting in the 1950s.

A good book on Lewis and how he solved the challenges of analytic philosophy and Quine is C. I. Lewis in Focus by Sandra B. Rosenthal.

<=>

You literally just restated "consistency of M <=> no contradictions in M" in prose. This is not a theorem, it's a definition.

Ten Signs a Claimed Mathematical Breakthrough is Wrong

Ten Signs a Claimed Mathematical Breakthrough is Wrong lists entertaining telltales that some breakthrough in math might not be as imminent as a paper claims. To my untrained eye this paper matches at least #1.

Ten dumb ways to decide if a mathematical result is wrong

The reference that you cited lists ten condescendingly dumb ways to decide if a mathematical result is wrong. The first of the ten is probably the dumbest of them all. The current de facto standard for published papers is PDF. And for better or worse, the easiest and most powerful WYSIWYG way to create mathematical articles in PDF is currently Microsoft Word.

Knuth would turn over in his grave

Oh, wait, Knuth isn't dead yet!

Over 40 years ago Knuth invented a language to write beautiful mathematics. Granted (La)TeX is not WYSIWYG, but the results are immeasurably more beautiful than what Microsoft Word can produce. I thought mathematics was about beauty...

But these are just opinions about taste and there is no accounting for taste.

LaTeX should have been WYSIWYG

In order to survive long term, LaTeX should have been WYSIWYG.

PS. LaTeX attractively renders text and mathematics to PostScript. However, naively converting the PostScript output to Adobe's Portable Document Format (PDF) using Acrobat Distiller or ps2pdf yields a document that displays poorly electronically.

Use XeLaTeX or pdfLaTeX

Use XeLaTeX or pdfLaTeX, which natively produce PDF output.

From my experience the PDFs

From my experience the PDFs produced by MS Word are much more broken than the TeX ones. Of course, uneducated users in both cases do not know how to properly generate bookmarks and hyperlinks.

Your approach, of just

Your approach, of just axiomatizing known inconsistencies and then limiting your reasoning modes to contain the fallout of the resulting inconsistency, seems like guano insanity to me... After you've given up consistency, how do you figure out which "theorems" to believe? I'm not very knowledgeable about Direct Logic, but have you established that the inconsistency in your logic cannot lead to non-termination of provably terminating functions? e.g. you reason that it should be safe to delegate to an evaluator, because of self-consistency, and then the evaluator comes to the same conclusion and recurses to divergence.

But this post has inspired me to write up some total self-compiler notes which I'll post to another thread.

Mathematics follows rules wherever they might go

Mathematics follows rules wherever they might go. When the rules have led to undesirable consequences, they have been adjusted.

The paper discussed at the beginning of this forum topic explains how the rules need to be adjusted.

Here's a simpler proof of

Here's a simpler proof of consistency: Suppose false. Contradiction. Wonder how Godel missed that...

How could Gödel et al. miss the Self-Proof of Consistency?

How could Gödel et al. miss the Self-Proof of Consistency? For that matter, how could their predecessors going back to Euclid not notice? The proof is extraordinarily simple and doesn't require any complicated concepts.

I hope you're trolling


Inconsistency can be very elusive.

Inconsistency can be very elusive. Some very smart mathematicians have developed axiomatizations for theories that have later turned out to be inconsistent. When strongly held philosophical beliefs are involved, it can be especially difficult. Gödel was certain and he brooked no dissent.

And another thing

I clicked twice.

The problem is that your

The problem is that your proof is necessarily tautological, even if mathematics turns out to be actually inconsistent. Which is why intuitionistic logics ought to be more widely used, since they don't allow such trickery, where you can just assume consistency and the law of the excluded middle to establish results.

Intuitionistic Logic also has Proof by Contradiction

Intuitionistic Logic also has Proof by Contradiction. Consequently, by the following proof, mathematics is not inconsistent.

   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
     Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
     Consequently, both Φ and ¬Φ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is not inconsistent.

In Intuitionistic Logic, being not inconsistent is different from being consistent. But it is still quite remarkable.

A formalization in

A formalization in Coq:

Theorem consistency: (exists P:Prop, P /\ not P) -> False.
Proof. firstorder. Qed.

Remarkable? Hardly.

Reading your paper, one gets the impression that you have no understanding of what Gödel actually did. Footnote 9 is telling: you seem to attribute (bizarrely) the idea of Gödel numbering to recent efforts to formalize the incompleteness theorems. Also, after reading that "direct logic" bars self-referential propositions, I expected to find some indication of how "direct logic" escapes Gödel's trick. There was none. The only actual result in your paper, beyond the above trivial theorem, appears on page 11 with no proof, and no indication as to why it is significant.

I actually believe you do not think you are a troll. Nonetheless I sure feel like I am feeding one.

Incorrect formalization of "Consistency"

A more correct formalization of consistency is as follows:
¬∃[P:Proposition]→ ⊢(P∧¬P)
Unfortunately, Coq is not very powerful and so may not be able to prove the above even if it could formalize it.

As explained in footnote 9 of the paper, the formalization of Gödel's proof is a relatively recent endeavor.

In formalizing Gödel's proof, [Shankar 1994, O'Connor 2005],
sentences are coded using integers to introduce self-referential sentences
in order to get around the fact that
the logical systems being used do not allow
self-referential sentences in order to avoid inconsistency!
    Self-referential propositions give rise to numerous
inconsistencies including the paradoxes of Curry and Löb.

The paradoxes of Curry and Löb are then explained in the remainder of footnote 9.

Computer science has developed much better ways than Gödel numbers
to explain the relationship between sentences and propositions.
See the section of the paper on "Reification and Abstraction" on page 4.

Unfortunately, you missed reading the grammar of sentences (on page 9)
which uses the typed lambda calculus to bar the construction of self-referential sentences because fixed points do not exist.
See footnote 28 for the exact place where the restriction is enforced.
This is how Direct Logic bars the construction of Gödel's self-referential propositions because they are "monsters" as explained in the paper.

The proof of the Theorem on page 11 is very elementary and well known.
It is due to Alonzo Church and Alan Turing.
The significance of the theorem is the proof that
there are inferentially undecidable propositions
without using monstrous self-referential propositions.

If you want to get more deeply into technical subject matter, details really do matter. Thanks for taking a look.

I wasn't careful in reading

I wasn't careful in reading the turnstile as "Φ is provable." My mistake.

I am well aware that the formalizations of the incompleteness theorems are relatively recent. But notice how your footnote gives the impression that Gödel numbering was introduced as part of these endeavors in order to overcome deficiencies of the systems in which the formalizations were being done, when of course Gödel numbering is one of the key ideas in the original proof (hence the name).

Russell thought he had barred paradoxes from PM with the syntactic restrictions of his theory of types. The whole point of Gödel numbering was that the ability to form self-referential propositions that lead to paradoxes follows from the ability of a system to reason about basic arithmetic, regardless of any syntactic restrictions.

Nowhere in your paper (on page 9 or otherwise) is it clear why propositions in Direct Logic cannot be encoded as numbers or why the rules of Direct Logic cannot be encoded as statements about arithmetic. So unless Direct Logic is too weak to reason about basic arithmetic, I don't see how Gödel's "monster" is barred.

You may have barred self-references from the "better" ways of relating propositions and sentences "developed by computer science", but have you barred them from Gödel numbers?

The whole point of Gödel

The whole point of Gödel numbering was that the ability to form self-referential propositions that lead to paradoxes [...]

But they don't lead to paradoxes in Arithmetic, just incompleteness.

Agreed, and good

Agreed, and good clarification. I should have been more precise in my language: the ability to form a Gödel sentence G, which resembles the liar paradox, follows from the ability to reason about basic arithmetic. G is true but, assuming consistency, not provable, and so not itself a paradox. A further thought/question: in a system that admits inconsistency (is this not the point of Direct Logic?), is it still clear that G is not a paradox?

Math Theories are thought to be consistent vs practical theories

Mathematical theories are thought to be consistent, which justifies the use of proof by contradiction. However, practical theories (e.g. theories about climate, livers, etc.) are invariably not consistent and consequently Direct Logic does not allow proof by contradiction for practical theories.

Self-referential propositions lead to inconsistency in math

Wittgenstein was one of the first to point out that self-referential propositions lead to inconsistency in mathematics when he wrote:

“True in Russell’s system” means, as we have said, proved in
Russell's system; and “false in Russell's system” means that the
opposite [negation] has been proved in Russell's system.
Let us suppose I prove the improvability (in Russell’s system) of P
[⊢⊬P where P⇔⊬P]; then by this proof I have proved P [⊢P].
Now if this proof were one in Russell’s system [⊢⊢P]—I should in
this case have proved at once that it belonged [⊢P] and did not
belong [⊢¬P because ¬P⇔⊢P] to Russell’s system.
But there is a contradiction here! 

However, formal grammars had not yet been invented so formal tools were lacking to rule out monstrous self-referential propositions.

See the paper referenced at the top of this forum topic for a formal grammar that uses types to rule out self-referential propositions.

A typed grammar can rule out self-referential sentences.

A typed grammar can rule out self-referential sentences and provide the definition of exactly what is and is not a sentence. Since Gödel did his work before formal grammars were invented, it never occurred to him that it is necessary to formally define exactly what is a sentence.

Similarly, Euler did not formally define polyhedron when he developed his formula for the relationship among the number of faces, edges, and vertices. However, it became necessary to define polyhedron when monsters were discovered.

Since Gödel did his work

Since Gödel did his work before formal grammars were invented, it never occurred to him that it is necessary to formally define exactly what is a sentence.

This seems rather like saying that Galileo didn't know about acceleration because he lived before the differential calculus was invented.

Galileo would have benefited from better tools

There is no doubt that Galileo would have benefited from better tools, e.g., the differential and integral calculus, tensors, etc. And they would have made a large difference in improving his work.

Similarly, Gödel et al. would have benefited from knowledge of formal grammars because it would have enabled posing the following question: "What is a sentence?"

In Direct Logic, there is no way to encode sentences as numbers

In Direct Logic, sentences are not character strings. Consequently, there is no way to encode sentences as numbers. You might think of sentences as abstract syntax trees in which the terminals are also abstract.

PS. Thanks! I agree that the footnote might give the wrong impression and will revise it. The reason that the formalizations of Gödel's results were so cumbersome was that they mimicked Gödel in using numbers to code sentences instead of using the tools of computer science, namely reification and abstraction.

Presumably you mean there is

Presumably you mean there is no way to encode sentences as numbers within Direct Logic, internal to the system. (I would guess this claim applies just as well to PM, the system Gödel considered, though I know little about it.) Gödel numbering happens at the meta-level, external to the system. I assume meta-level encoding is still possible for Direct Logic, unless you're going to tell me that the abstract syntax trees of Direct Logic are not representable in the computer. If you don't care about encoding and self-references occurring at the meta-level, then what is the connection to the incompleteness theorems?

Enumerating Sentences does not provide self-referential sentences

Being able to enumerate sentences does not, by itself, provide self-referential sentences if none of the enumerated sentences are self-referential.

You would need a witness

You would need a witness for Φ in intuitionistic logic.

Which not coincidentally also hints at a problem with the proof in a classical logic context.

If you want to use the inconsistency of a system as a premise, you have to accept the consequences of that. The consequence in this case is that the proof is only relevant in the context of an inconsistent mathematics, which I'll call W (being an upside-down M.)

In that context, the proof, interpreted in W, shows the consistency of W via the inconsistency of W. This sort of thing is to be expected from an inconsistent system, no surprise there.

But this doesn't say anything about the consistency of any mathematics that is not already known to be inconsistent. The proof in W doesn't prove the consistency of some other mathematics M. The normal trick of proof by contradiction doesn't work when the contradiction changes a relevant aspect of the nature of the system in which the proof is interpreted.

Restating, you are essentially claiming that the proof selects between M and W, but it does not achieve that since as a proof in W, it says nothing about M. All it does is restate a trivially true property of W. To actually select between M and W would require proof of the existence or nonexistence of Φ. The former could be provided by a witness, as the intuitionistic interpretation suggests.

Math Theories extend Math so that theorems are enumerable

Please see the discussion of Mathematical Theories in the paper at the beginning of this forum topic.

Metamathematics equivocation

There seems to be an equivocation in the handling of "mathematics" as a concept, which when cleared up I believe illuminates the underlying argument. The first proof uses two statements:

  1. Suppose to obtain an inconsistency that mathematics is inconsistent.
  2. Therefore mathematics is consistent.

In the first, he talks about the consistency of a mathematical theory; that is, whether a system of deduction proves (P and not P). In the second, he's speaking on the traditionally metamathematical level in which we accept the provability of certain statements in a theory to be a meaningful property of that system. The consistent object theory cannot speak to this, but the meta system can consistently treat these provability constraints as meaningful.

Thus, I think this is really about the meta-meaningfulness of incompleteness; when a system doesn't prove itself inconsistent, the standard mathematical approach is to treat it as consistent. Hewitt has embedded this in a meta-theoretic proof while the tradition is to state such suppositions in sheer linguistic meta-description.

There is some meat here in formalizing what mathematicians do informally, especially in the context of computational reasoning systems that have to have both meta- and object-level systems formalized. I remember Natarajan Shankar making a compelling argument that reasoning systems should be able to handle these sorts of meta-proofs natively, e.g. when you can't find a proof of "A", you should be able to look for a proof of "provable A", from which you can infer "A" without exhibiting a proof directly. As to the rest of the conclusions Hewitt has drawn from this, I don't have any firm grasp to share.

Mathematics is broader than any particular Mathematical Theory

Mathematics is broader than any particular Mathematical Theory. Types and Sets (including Peano Numbers) are the common property of all of Mathematics. Group Theory is a particular mathematical theory that augments Mathematics with the axioms for groups along with the axiom that the proofs (and consequently the theorems) of Group Theory are computationally enumerable.

Unfortunately, philosophers never formalized meta-mathematical reasoning in fullness. Consequently, when it became necessary to do a full formalization so that computers could do all of the reasoning, it turned out that philosophers had overlooked some important things. The paper at the beginning of this forum topic does that kind of formalization.

For example, with respect to your last point, it certainly is the case that ⊢⊢A implies ⊢A. In fact, you can go further because ⊢A implies A.

Removed

Removed

Provability Logic

See the section on Provability Logic in Mathematics self-proves its own Consistency

Anselm

From memory: God is that more perfect than which nothing can be. Something that does not exist is less perfect than something that does exist. Therefore, God exists.

Something that does not

Something that does not exist is less perfect than something that does exist.

Plato would disagree.

Has humanity ever found a perfect circle? Or does such a thing exist only in our imperfect imaginations?

Plato would disagree?

Plato would disagree.

Would he? I thought the point of Platonism was that the perfect circle is more real than the imperfect shadows of it that we see.

I suspect this line of inquiry would necessarily lead to an investigation of the words Plato used, and the nuances of meaning of those words in Ancient Greek. Which I'm quite sure would be beyond me.

Your quote spoke of

Your quote spoke of existence, not of reality. Platonic forms, pretty much by definition, do not exist in any material or physical sense. No matter how many particular circles you point at, you'll never find the platonic form of circles or circleness.

Which is the very nuance

Which is the very nuance of words I had in mind. I'm glad we can agree on something. :-)

Oh, no, undecidability again

Oh, gawd, not again.

It's worth pointing out, though, when undecidability is being discussed, that undecidability requires infinite storage. A deterministic machine with finite memory must either halt or repeat a previous state. Termination in a finite machine can be hard or time consuming (worst case requires going though every possible memory state) but not undecidable.

There's a decidable subset of infinite mathematics, too, and it's useful for program verification. See the classic Nelson-Oppen paper "Simplification by Cooperating Decision Procedures". A system with addition, subtraction, multiplication by constants, arrays, structures, and Boolean operations is decidable. Many program proof systems use that approach. But you can't multiply two variables together and retain decidability, unless you have an upper bound.

It's possible to build up finite mathematics with something like Boyer-Moore constructive theory and an upper bound on number size in such a way that it's decidable. A long time ago, I worked on that, building up array theory from first principles, without axiomatic set theory. It can be done, and I got it through the Boyer-Moore prover, but the machine proofs were really clunky. (Years later, I realized there was a way to clean that up, but I had moved on to doing other things.)

Computational Undecidability versus Inferential Undecidability

It is important not to confuse two kinds of Undecidability:

* Computational Undecidability means that there is no total computational procedure that correctly returns True or False for the domain in question.

* A proposition P is Inferentially Undecidable for a Theory T if and only if ⊬T P and ⊬T ¬P.

Hewitt2011 ?

What is [Hewitt2011] referring to? The bibliography contains multiple entries by Hewitt in 2011.

@Hewitt, why do you use such an ambiguous citation style?

Repaired reference to [Hewitt 2010]

Thanks! I have made the correction and posted the new version to the standard location at:

Mathematics self-proves its own Consistency

clarifications requested

I've tried to read and evaluate the paper. I think I have some idea of its main points but I'm not certain I've read it correctly. I could use some help with a second reading (from anyone, not just Hewitt).

The main theorem and its trivial proof, in the main text and in the abstract, refers to "mathematics".

From the abstract (emphasis added): "That mathematics is thought to be consistent justifies the use of Proof by Contradiction."

From the proof: Consistency of mathematics can be defined as follows: [...]. I'm not confident I can transcribe the notation correctly in an LtU comment so I'll just describe: Consistency of mathematics is here defined to mean that for all mathematical sentences, s, there are not proofs for both the abstract proposition described by s and the negation of that proposition.

My problem here is that I am not familiar with any conventional, technically precise enough definition of the word mathematics to make sense of those statements.

For example, taking another look at the sentence:

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.

What is the definition of "mathematics" being used here? I've heard plenty of people talk about the consistency or inconsistency of specific axiomatic systems but that's clearly not what is meant here.

Later, in the proof the main theorem, the consistency of "mathematics" is defined in terms of a set: the set of all "sentences". I understand this to imply that we have an enumerable set of sentences, and that these are in some way "all of the sentences of mathematics".

Again, I'm not familiar with any sense of the word "mathematics" that could make suitable technical sense of this stuff.

I think a clue to the issue here might be found in the way that Hewitt summarizes "Gödel et. al.":

because of contradiction with the result of Gödel et al. that "if mathematics is consistent, then it cannot infer its own consistency."

There are quotation marks around this summary of Gödel but I don't think anyone is being quoted. I don't think that that is a good summary of Gödel, either. Gödel's results were about classes of theories where by "theory" I mean particular axiomatic systems.

Whereas Gödel developed results applicable to particular axiomatic systems, at least superficially Hewitt seems to want to talk about the consistency of "mathematics in general" in some unclear way.

What is mathematics?

Thanks! This is an excellent question!

Classical mathematics is a formal system that incorporates Types and Sets (including Peano Numbers). Mathematics is thought to be consistent. But mathematics is not closed in the sense that it is provable that the theorems of mathematics cannot be enumerated by a provably total computational procedure. Theorems of mathematics are denoted by ⊢, which was invented by Frege.

Classical mathematics is the common foundation of each mathematical theory (e.g., group theory) that adds domain axioms to mathematics along with the axiom that the theorems of the theory are computationally enumerable. The theorems of a mathematical theory T are denoted by ⊢T. It is also the case that every mathematical theory (in the sense above) self-proves its own consistency (even if the theory is inconsistent!).

See Mathematics self-proves its own Consistency

further clarification req. -- re what blocks Goedel's 2nd

Thanks, Hewitt. I guess it was a good question in the selfish sense that your answer made a lot of the material easier for me to read.

To test my (partial) understanding (so far) I'd like to try to give an informal, vernacular, high level description of the work. This might help other people "get it".

At the bottom, after my "vernacular summary", I have a further clarifying question about why I can't trivially apply Goedel's second theorem here to derive a contradiction within DL.

-----------------------------------
begin summary
-----------------------------------

Direct Logic (DL) is a formal and effective, extensible, axiomatic system of mathematics.

It is formal and effective in the sense that its proofs can be suitably encoded and machine-verified, automatically enumerated, etc. For example, given a choice of any additional axioms (or none), a program could spit out all Direct Logic proofs with a primary sort of size (shortest to longest), and a secondary sort of alphabetical order.

DL is extensible in the sense that we are meant to add domain axioms as suits our purpose. For example, we could add definitions and axioms about groups, or about Conway games and numbers, or whatever else we like. In a given context we could add axioms that make the system meaningless and inconsistent but we will try to avoid that.

In its unextended form, DL is taken to include Peano numbers, some "safe" set theory, a notion of types, and the particular sets and types that are used in defining DL itself.

Among these is the set "Sentences". DL sentences are defined by a simple, constructive induction on typed terms. A proof in DL concludes with a sentence whose "abstraction", a proposition, has thus been proved.

Conversely, if we want to prove some proposition in DL, we must first find a way to express it as a DL sentence, subject to the rules of sentence construction. (Per Hewitt, the rules limiting sentence construction play a central role in how we evade Goedel's 2nd incompleteness theorem.)

The "built in" features of DL comprise a recognizable mathematical toolbox of components universally believed to be mutually consistent. When people do really fancy math, like developing a theory of differentiation or a theory of graphs, they usually feel free to make simple and safe uses of basic arithmetic, sets, and so forth. (If these tools are one day discovered to be inconsistent then it is not only DL but pretty nearly all of math that will fall apart until and unless the inconsistency can be patched up.)

DL deploys those familiar, presumed-safe tools in a conservative way. There is no facial reason why we should suspect DL has introduced an inconsistency.

These basic math tools are sufficiently powerful that we also facially believe DL can encode and prove the arithmetic theorems of number theory.

DL has a limited ability to reflect on itself. DL theorems can talk, to some extent, about what DL can and can not prove. This leads to an interesting result:

DL contains a (trivial) proof of its own consistency. That's the short proof at the top of the "abstract" posted at the top of this LtU thread. It is stated a little more formally in the body of the paper.

Hewitt says that this proof is a valid proof within the system but allows that it is not a "convincing" proof. Yes, the system validly proves (by contradiction) that the system itself is consistent ... but an inconsistent system could easily prove itself consistent as well.

A "convincing" proof would give us some more visceral intuition about why DL's rules for constructing proofs never lead to proving both P and not P.

Nevermind though: Whether the self-consistency proof is "convincing" or not isn't important for the purposes of this paper. What is important is that the proof is "valid".

DL apparently -- and again speaking informally -- contains all the "usual" theorems of arithmetic and it contains a valid proof of self consistency.

A naive but popular misunderstanding of Goedel's Second Incompleteness Theorem would suggest that no theory like DL should exist! (Unless, in fact, DL is somehow inconsistent in a way we haven't discovered.)

What's that? A math system powerful enough for arithmetic AND containing a valid self-consistency proof? Hewitt must have made a mistake somewhere! Woo hoo! (Not really, though.)

A more careful approach, though, is to assume for argument sake that Hewitt hasn't made any big mistakes and to ask:

(1) Why doesn't Goedel's 2nd proof "go through" for this system?

(2) Is this system indeed powerful enough that we may as well regard it as a successful formalization of "mathematics"?

Right away we can say that (2) is an open question but that "yes" is facially a possible answer. Nothing obviously points to "no".

And (1) is a mechanical question we can work through. Why doesn't Goedel's second theorem apply, even though it superficially seems like it ought to apply?

Very abstractly speaking, per Hewitt, Goedel's second theorem does not apply to any system that does not include directly "self referential" sentences.

By its rules of typed construction, DL contains no such sentences (per Hewitt, I haven't absorbed the form and function of the rules yet). That is why Goedel's proof doesn't apply here.

An alternative title for the paper might be: "Goedel's Second Incompleteness Theorem doesn't quite mean what is popularly assumed".

An alternative subtitle: "Here's a formal, effective, and partially reflective system of mathematics (DL) which you can safely use for just about everything. Use caution when adding new axioms. Steer particularly clear of introducing self-referential sentences. If condition persists, consult your physician."

-----------------------------------------------------------

end of vernacular summary

-----------------------------------------------------------

Clarifying question:

Per Goedel, I can devise an arithmetic encoding of DL sentences and proofs, and an elementary definition of proof validity.

At the meta level I can make a sentence in DL that says:

There does not exist a number X such that IsAValidEncodedProofFor (X, ___)

where I fill in the blank ("___") with a fixed point of my encoding scheme specialized for that monstrous sentence schema.

If DL is actually consistent then of course it will contain no proof for this Goedel sentence I've constructed.

DL contains its elementary proof of self consistency.

Therefore, although I can construct a sentence, Self, which is a fixed point of the above schema -- I gather that I must not be able to show within DL that the number filling in the blank in Self encodes Self itself.

Can you help me better understand how DL is blocked from showing that the number filling in the blank in Self encodes Self?

Here is how I read Hewitt

Here is how I read Hewitt (if I mischaracterize, I invite Hewitt to correct me).

Hewitt is talking specifically about doing mathematics in Direct Logic, with particular foundations in mind, detailed somewhat in Appendix 2 in this paper of his.

So, in that first quotation, he's talking about adding "Proof by Contradiction," a specific inference principle he has in mind, to Direct Logic for the purposes of doing mathematics (and only for doing mathematics), and the motivation for doing so.

I find a lot of what Hewitt says makes more sense once you realize that the framework of Direct Logic is assumed. For example, if anyone else said "classical mathematics is a formal system" I would guess they were talking about ZFC, but obviously Hewitt is not.

Also, when he writes

Also, when he writes "computer scientists", read that as "Hewitt.". If you read it at all.

Computer Science versus Philosophy

Computer Science has scientific and engineering goals and constraints that differ from philosophical concerns of taking positions and making arguments for and against. Consequently, the achievements of computer science can be rather different from those of philosophy.

Direct Logic helps formalize mathematics and math theories

Direct Logic helps formalize mathematics and mathematical theories.

Direct Logic axiomatizes common foundations of mathematics using Types, Sets (including Peano Numbers), and Provability (Natural Deduction à la Jaśkowski). Proof by Contradiction has been a standard part of mathematics ever since Euclid.

Direct Logic also axiomatizes mathematical theories that are thought to be consistent (e.g., groups, ZFC, Hilbert spaces, etc.) and consequently use proof by contradiction. However, practical theories (climate change, human livers, the economy of the US) are invariably inconsistent and consequently don't use general proof by contradiction.

[misthread]

[misthread]

Cybernetics

I can't resist taking a shot at the self-proof of mathematics from a Cybernetics point of view. The idea is that mathematics parses itself by design. See the papers Primer for "The Good Regulator" and "The Good Regulator". This amounts to thingness, reality, and/or existence. It is as much cognitive as mathematical.

My $0.02, FWIW

Let's start with the following idea: the whole point of mathematics is to be consistent. Godel wasn't trying to say "Ha! You're all fools! It's all inconsistent!" He was saying that if you want to hold on to consistency, you have to give up on completeness. That you want consistency is a given. That's why it's called Godel's Incompleteness Theorem and not Godel's Inconsistency Theorem. So it may well be that we cannot prove (say) P != NP, even though it may be true that P != NP - that's what incompleteness is saying. (Although all Godel actually proved is the formal existence of nonsense, and it doesn't even necessarily apply to "interesting" questions, but I digress.)

What are some ramifications of mathematics not being consistent? Here, Hewitt makes an important and deep point: IF mathematics is inconsistent THEN proof by contradiction is invalid. And that really screws everything up, because then we have to throw out a ton of classical proofs and probably rethink the whole damned thing. We don't want to do that so we try to construct mathematics to be consistent. Have we succeeded? Who knows?

Where Hewitt crosses the line from interesting observation to (sorry) crackpot theory is in turning the correct observation around. ASSUMING proof by contradiction is valid THEREFORE mathematics is consistent. Sure, that's also true, and I accept he has proven this is true ... but he hasn't proven that proof by contradiction is valid in the first place! Everyone else can assume this for the reasons stated above (sure, my theorem may in future be invalidated by a proof that all of mathematics is inconsistent, but I can live with that). But someone who is trying to PROVE that mathematics is consistent cannot ASSUME that proof by contradiction works.

So yes, the argument is circular. IF mathematics is consistent THEN proof by contradiction is valid THEREFORE mathematics is consistent.
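Schematically (my paraphrase, not anyone's formal system):

   (1) Consistent(Math) ⇨ ProofByContradiction is valid
   (2) ProofByContradiction ⊢ Consistent(Math)

Chaining (1) into (2) gives Consistent(Math) ⇨ Consistent(Math), which is true but establishes nothing.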

At least, that's the way I see it, FWIW.

Having read the paper, I

Having read the paper, I have a direct question:

If Direct Logic is inconsistency-robust, and Inconsistency Robust Logic rules out proof by contradiction, why does Direct Logic allow proof by contradiction (as used in the "self-proof of the consistency of mathematics contra Godel et al")? How is its use justified within an inconsistency-robust framework, when proof by contradiction is invalid within an inconsistent mathematics almost by definition?

Mathematical Theories versus Theories of Practice

Direct Logic deals in the following two kinds of theories:
* Mathematics (denoted by ⊢) and Mathematical theories are thought to be consistent, and consequently proof by contradiction is allowed. All theories (including theories of practice) are allowed to use theorems of mathematics.
* A theory of practice T (denoted by ⊢T) (climate change, human livers, etc.) is pervasively inconsistent and consequently general proof by contradiction is not allowed.

OK, so I'm confused about

OK, so I'm confused about the paragraph that says "the above reasoning is done using Direct Logic etc.", right after the proof of consistency (which is in the domain of classical mathematics, right?)

You might find some of the

You might find some of the history/background in my recent blog post on this of interest, here.

There's also an interesting character Gerhard Gentzen who didn't make it explicitly into the post, but also fits my profile (in the post) of someone already engaged by the questions addressed by Godel when Godel's result came out. His attitude toward Godel's result, as I understand it, was remarkably different from that of later generations. He understood Godel's results. He didn't dispute them. He just didn't see them as a reason to abandon Hilbert's program (modify, but not abandon); rather, it seemed obvious to him that one would naturally want to know, for any given subset of arithmetic, what additional axiom outside that subset would suffice to prove the consistency of the subset. He made some progress, and apparently would have made more if he hadn't been starved to death in a Communist POW camp after WWII.

Thank you.

Thank you. That was an excellent read. This is a nice trip down memory lane for me, being a trained mathematician but an industrial computer programmer. Of course, Russell already showed that classical mathematics (as axiomatically formalized in the 19th century) is inconsistent, a proof that revolves around sets that self-refer. I guess you could say the "self-proof" here shows that classical mathematics *thinks* it's consistent :)

Mathematics self-proof of consistency is not circular

The self-proof of consistency at the beginning of this post is not circular. Classical mathematics allows proof by contradiction without restriction. Therefore classical mathematics self-proves its own consistency (even if it is inconsistent!).

Mathematics is thought to be consistent.

You mean, of course, "(even

You mean, of course, "(even though it is inconsistent!)" [emphasis added].

Whether or not the "proof" is circular depends on whether you're presenting it as an example of a meaningless formal manipulation within a known-inconsistent system, or presenting it as actually establishing that mathematics is consistent. If the former, then it's not circular but I also see nothing deep about it. If the latter, then of course it's circular.

Though the first sentence of the post is just plain false. "That mathematics is thought to be consistent justifies the use of Proof by Contradiction." Even if mathematics were thought to be consistent (which is either trivial or false depending on what you choose to mean by the word "mathematics"), no, that wouldn't justify the use of Proof by Contradiction.

If your point is that "the assumption of consistency is deeply embedded in the structure of classical mathematics", then you are being circular since you started out, before the "proof", by asserting that.

"Mathematics is thought to be consistent" is a huge achievment

That "Mathematics is thought to be consistent" is a huge achievement and requires maintenance. Direct Logic is an extremely powerful system that may need further debugging. In the meantime, Computer Science soldiers on.

Your "self-proof" uses proof

Your "self-proof" uses proof by contradiction to show that something is true. Proof by contradiction cannot, by itself, be used to prove that something is true, only that something is false; so your using it to show something is true is a red flag that the "self-proof" is using more than just proof by contradiction. (You "prove" that it isn't true that mathematics is inconsistent, but you then need the Law of the Excluded Middle to establish that, since mathematics is not inconsistent, it is therefore consistent.)

The statement that "mathematics is thought to be consistent" would be false if you include in your definition of "mathematics" things that are known to be inconsistent. So evidently you are defining "mathematics" to be something that is thought to be consistent. Which is all very well, except that when you include the Law of the Excluded Middle you're talking about something that isn't thought to be consistent.

If you start by defining "mathematics" to be something thought to be consistent, use that to justify using proof by contradiction, use proof by contradiction to prove that "mathematics", as you've defined it, is consistent, and then say that you're not being circular, then you are being inconsistent.

and in point of fact,

Regardless of whether the OP is being inconsistent in his *statements* about the proof, the proof itself is invalid. The starting conditions contain a disproof by counterexample of the law of the excluded middle. In assuming P & -P we already have a disproof of the law of the excluded middle. Therefore its use in a step of the proof of anything having P & -P among its starting conditions is invalid. So inferring consistency from the absence of inconsistency depends on a 'law' that cannot hold in any system that admits the starting conditions.

Further, this is merely an example of the contagion of inconsistency. In reasoning from P & -P you can create a proof of the negation of any proposition that you can create a proof for.
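For anyone who hasn't seen it, this is the classical "explosion" derivation (ex contradictione quodlibet), where Q is any proposition at all:

   1. P          [assumption]
   2. ¬P         [assumption]
   3. P ∨ Q      [from 1, disjunction introduction]
   4. Q          [from 2 and 3, disjunctive syllogism]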

I seriously considered advising the OP to edit/retract this odiferous thing before it became established in an archive where anyone who might otherwise become a potential employer will find it, but realized that such advice would be unlikely to be heeded.

In assuming P & -P we

In assuming P & -P we already have a disproof of the law of the excluded middle.

You know, I had to stare at this a while before I realized it's not quite right either. Ghastly easy to trip up on, this stuff. The law of the excluded middle doesn't say P and -P can't both be true, it just says at least one of them is true. I'd never thought of it this way, but this is why those two axioms keep coming up together: of the two statements, P and -P, one axiom says at least one of them is true, while the other axiom says at most one of them is true. (I gather the three Laws of Thought, attributed to Aristotle and used by Scholasticism, were that A and not-A are different from each other, that they aren't both true, and that one of them is true.)
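In the usual notation, the two axioms are:

   Excluded middle:     P ∨ ¬P       (at least one of them is true)
   Non-contradiction:   ¬(P ∧ ¬P)    (at most one of them is true)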

the contagion of inconsistency

I like that phrasing. This is why the situation in the early twentieth century rose to the level of a paradigm crisis, rather than an axiomatic flaw to be fixed: there wasn't just an antinomy, there was a systemic problem that took a myriad different forms and kept coming up like a bad penny.

re: Your "self-proof" uses proof

John I think you have completely missed Hewitt's point and the role of the proof in the paper. As in, you slung your bow and fired your arrow and hit just where you meant to aim, embedding it squarely in the door through which you recently walked backwards, while the actual target is unseen, somewhere behind you.

Hewitt's paper is about a formal system with these properties:

1) There looks to be here a familiar and conservatively applied foundation, sufficient for doing pretty much all of the formal mathematics you typically encounter. There's certainly plenty there to do number theory. And the building blocks are familiar enough and are conservatively used in ways that we don't really suspect any inconsistencies.

2) Within this formal system there exists a seemingly formally valid self-proof of consistency.

The conjunction of (1) and (2) superficially appears to provide a counter-example to Goedel's second incompleteness theorem. There are somewhat deep (or at least interesting) reasons why it does not actually violate that theorem.

By the way, oh look, here is this perhaps useful formal foundation for pretty much all the math you are likely to use.

It is arguably needlessly provocative and a little bombastic to call this formal system "mathematics" but those weren't your criticisms.

In any event, the Direct Logic self-consistency proof is not intended to convince you that Direct Logic is consistent. The unproven belief that Direct Logic is consistent derives from how Direct Logic is constructed (in conservative ways, from conservatively chosen parts).

If Direct Logic includes

If Direct Logic includes enough to be useful, and can prove its own consistency, that's pretty compelling evidence that it isn't consistent. And Hewitt's failure to give a simple and compelling reason why not is further suggestive, because if a way were found around Godel's no-go theorem, one would expect it to be simple and compelling.

"Hewitt's failure"

If Direct Logic includes enough to be useful, and can prove its own consistency, that's pretty compelling evidence that it isn't consistent. And Hewitt's failure to give a simple and compelling reason why not is further suggestive, because if a way were found around Godel's no-go theorem, one would expect it to be simple and compelling.

Yes, but that is something we can interrogate directly. I started to do so with two earlier comments -- first the "clarification requested" and now the inquiry at the end of the follow-up requesting further clarification.

Notice that my request for further clarification squarely and directly starts to try to apply the 2nd theorem to DL.

I should add, John, that even if DL has bugs I'm very confident that a project like what DL attempts can succeed. What he's trying to do has practical utility but it isn't mind-bendingly unexpected. He's found a nice way to formalize the way that mathematicians post-Goedel have learned to stay out of trouble. He has a very lucid historic argument for why the gals and guys contemporary with Goedel didn't think of it. And he's historically right that Wittgenstein had kind of the right sense of how to look at it -- just not the chops to come up with a worked solution. And I think he's historically right that the development of thinking about computing means he inherited the tools to connect some dots.

I think there may or may not be bugs in DL and that's worth exploring.

I think by my standards he presents DL poorly and a better presentation is worth exploring.

I suspect that because we can "meta-reason" about a debugged DL in ways that DL can't do internally, we might benefit by adding an account of semantic towers here (and it's DL all the way down that tower).

I don't think there is any (and I don't think he claims any) literal counter-example to Goedel. Rather, he claims a finer understanding of Goedel and an unexpected foundational system to take advantage of that finer understanding.

He's got my betting dollar at this point and this is coming from a guy who, like you, "has some real problems" with the way he talks about the computation power of actors.

He's got my betting dollar

He's got my betting dollar at this point

Yes. And to be very clear, I wasn't being sarcastic when I wished you luck. I on the other hand have no interest in betting against him (I dislike betting against others generally, because I generally don't wish anyone ill who doesn't wish me ill), but after some inquiries here I find nothing so far has convinced me it'd be worth my time to struggle to decode something in his work that, if it were there, would be something beautifully simple that he's evidently unable to express himself without coming across sounding like a crackpot. I'm quite aware that people working outside the mainstream can end up sounding like crackpots for reasons other than actually being crackpots (in fact, one of my many partially-written blog posts is about why this happens and how to avoid it). But in making an educated guess as to whether he is or not, the earlier business with computation theory doesn't fill me with confidence.

The self-proof that math is consistent is simple and compelling

The self-proof (at the beginning of this forum topic) that math is consistent is simple and compelling

The self-proof is simple and

The self-proof is simple and (pardon my bluntness) obviously BS. That doesn't necessarily mean you don't have some valuable insight buried somewhere in your work, but it does mean you're both failing to present your ideas in a plausible way and failing to recognize that you're failing to present them in a plausible way.

the "huge achievement"

John I think I might play the role of Rosetta stone here.

Hewitt said "that mathematics is thought to be consistent is a huge achievement".

You replied by criticizing the self-proof of consistency in Direct Logic, the proof that uses proof-by-contradiction.

When Hewitt says "that mathematics is thought to be consistent is a huge achievement" he is not talking about the proof or about a personal achievement. He is talking about a human civilization achievement. We are collectively pretty certain we are safe from contradictions when we use a conservative set theory, basic number theory, certain rules of inference, and so forth. This set of assumptions is implicit in just about everything people "doing math" usually do. The set of assumptions has been cross-checked for consistency lots of ways with lots of partial proofs. There is no absolute proof but.... "Mathematics is thought to be consistent" and that fact is a "huge achievement [of humankind]".

The role of the proof here is not to persuade you of that consistency but to exhibit a formal property of a formal system that also happens to formalize arithmetic well enough that... wait a minute, shouldn't this formal system be ruled out by Goedel's 2nd? Only there is this technical question of the definition of a "sentence" and the way Hewitt uses it here.

I'm not certain yet that DL doesn't still have bugs but it would be nice to see the discussion be more about looking for those bugs....

I wish you luck in exploring

I wish you luck in exploring this angle. However, speaking as someone who had blogged quite recently about prospects for bypassing Godel's no-go theorem, I see no evidence to suggest Hewitt has any real insight here. I'd not be surprised were this to turn out to be of a piece, at some abstract level, with Hewitt's earlier failure to grok computability.

"any real insight"

I see no evidence to suggest Hewitt has any real insight here.

What do you mean by "real insight"? Do you mean that DL is an intuitively acceptable system that contains a valid self-proof of consistency along with arithmetic, contra [popular understanding of] Goedel? Or do you mean that DL contains a mistake? Or some third thing?

John when I was an undergrad, long ago, for a while I was in the fancy advanced studies track for undergrads at CMU. I don't know if they still have it but back then they called it "Math Studies".

It was a really good program in which they took the brightest math students to volunteer for this and made them start over from near scratch.

The curriculum lasted 3 years or so -- I forget -- and started with first order logic and ended with things like a survey of the math of general relativity.

A curious thing about that curriculum was the way it used a consistent foundation for the entire course ... but explicitly and deliberately evaded foundational questions.

We were allowed to define sets by predicates. There was no specific prohibition against recursive definitions. But for the purposes of that course if someone brought up "the set of all sets which are not members of themselves" the official answer from the curriculum would be along the lines of "uh.. let's just not do that. you can talk about that in some other course".

The point of this course was mostly to prepare people who would go on to become professional mathematicians (hence part of the reason I dropped out). The point of the way they treated foundations was that there was a "safe territory" that everyone could learn to recognize but nobody had quite formalized. It was kind of a know-it-when-i-see-it territory, backed up if you really wanted to dig into it, by various foundational studies.

Hewitt's DL looks like a plausible attempt to formalize that know-it-when-i-see-it. That's its significance. It's kind of fun trivia that one of the consequences is the valid but unconvincing self-consistency proof.

Hewitt is about as nuts as someone, in this day and age, trying to make a case for something close to fexprs.

"Real insight" as opposed to

"Real insight" as opposed to thinking you've seen something when there isn't really something there.

And yes, I've thought deeply, for many years now, about how to pursue a scientific idea outside the mainstream. Between my dissertation and my master's thesis I've worked in-and-around two different failed paradigms in computer science, and made some contribution to partially lifting the curses on both. The executive summary of my hard-won advice on alternative science is, don't sound like a crackpot and don't be one. Both of which are difficult.

I might add, that Hewitt is

I might add, that Hewitt is likely working with a disadvantage: confidence. He's had success in the past that might make him feel he'd be taken seriously because of it. But when presenting a radical idea, one has to work really hard at how to present it, regardless of whether one has an established reputation.

Gödel was certain

According to Paul Cohen:

He [Gödel] struck me as having
an almost unshakable belief in this “realist” position,
which I found difficult to share. 
His ideas were grounded in a deep philosophical belief
as to what the human mind could achieve. 
I greatly admired this faith in the power and beauty of Western Culture, 
as he put it, and would have liked to understand
more deeply what were the sources of his strongly held beliefs. 

On the other hand, according to Lakatos:

“Certainty” is far from being a sign of success; 
it is only a symptom of lack of imagination and conceptual poverty.
It produces smug satisfaction and prevents the growth of knowledge.

At the risk of stating the obvious,

At the risk of stating the obvious, my remark about confidence is not about the researcher believing they're right, but about the researcher believing that others will believe them. Godel explained himself well (well enough, as demonstrated). I fear you may not realize you've failed to explain yourself well. I knew I was right about the theory of vau-calculus years ago, but I also knew I hadn't yet found a way of explaining it that would be understood by people already familiar with Wand's result, so I kept working on the problem. If I hadn't viewed the failure of communication as a problem that needed to be solved by improving my presentation, I wouldn't have continued trying to improve my presentation.

Understandability

Sorry that you are having trouble with the material. Is there anything that is particularly troublesome? Maybe the presentation could be improved.

understood vs understandable

I said I needed to explain my result in such a way that it would be understood by people already familiar with Wand's result. That's different from explaining my result in such a way that it would be understandable by people already familiar with Wand's result. The question I needed to ask was not why they couldn't understand, indeed it wasn't certain that they couldn't; the question was why they didn't understand.

When people already familiar with Wand's result saw my result, their first reaction was, not infrequently, to figure I was probably too ignorant of the subject to understand that Wand had already disproved what I ignorantly imagined I had proved. It doesn't matter whether it was "reasonable" of them to react that way; the reaction occurred and therefore had to be dealt with. And it could not be dealt with by YELLING MY CLAIMS MORE LOUDLY. Why did they react that way? The vast, vast majority of claims that appear to contradict a previous correct result are meritless, and trying to take them all seriously would only result in having no time or energy to do anything useful. If one can get past that first reaction, which I sometimes did, we then had spectacularly dysfunctional communications because the entire conceptual framework they were using, and the one I was using, differed on multiple subtle and pervasive points, so that we would talk past each other. Even if we tried to discuss something we thought might be "the" thing tripping us up, we'd be tripped up in trying to discuss that thing by the other things that were also tripping us up.

I do not know whether there is some great, valid insight underlying what you're doing. There are, from my perspective, lots of telltales in what you're saying that cast doubt on your understanding of the subject. That could be because you're failing to explain what you're doing in a way that makes you sound credible. But from my perspective (and here I have to be considered typical of a significant segment of your potential audience), pouring time into trying to understand what you're doing isn't a sound investment for me until and unless I've got better evidence that you know what you're talking about. From your perspective, you have the nasty, but not at all unusual, problem of either  (1) finding a radically different presentation that causes you to sound immediately credible, or —even more difficult and unpleasant—  (2) figuring out that you were mistaken.

A key factor here in how your work comes across, and therefore key for anything you do from here (either 1 or 2), is that Godel's result is the sort of thing that, if there is a way around it, that way is probably both incredibly simple and so stunningly obviously right that nobody's going to doubt it once they see it. That that is demonstrably not the case with your result is in itself a problem.

Standing back and looking at the overall pattern of discussion here, it also seems you have a tendency to waffle back and forth between saying Godel doesn't apply because you've used types to prohibit the paradoxes, and saying (so it seems) that you don't need to worry about consistency because your system is robust against that sort of thing. The waffling back and forth feels complicated and suspicious, so if you want to change your image you need to avoid creating that feeling. It also doesn't help that each of those two claims is individually dubious: On the one hand, types are generally complicated rather than simple, and Russell and Whitehead had applied typing to the problem before Godel's result. On the other hand, tolerating inconsistencies doesn't feel like a suitable foundational approach, but rather like a way to live with the flaws of imperfect specs — note that you start out by saying that use of proof by contradiction is justified by belief that the foundations are consistent, which common sense says does not square with deliberately choosing foundations so they'll tolerate inconsistencies. Why would you care whether the foundations tolerate something that you supposedly believe isn't there?

Your problem, supposing you don't discover you've made a mistake somewhere, is to figure out how to give your entire presentation a radical make-over. And there's a good chance only you can effect such a make-over, working from the inside.

understandable

is that Godel's result is the sort of thing that, if there is a way around it, that way is probably both incredibly simple and so stunningly obviously right that nobody's going to doubt it once they see it.

That's the case here. You are simply forbidden from defining DL sentences using, for example, a function whose domain is the natural numbers. You can't use Goedel's 2nd theorem construction as a result.

Maybe there is some weird way to rescue the gist of Goedel's 2nd, for DL, using some yet undreamed of alternative construction. It's impossible to prove that negative. Good luck if you want to try.

I agree that the presentation of DL in this paper has some severe stylistic deficits, though, from the perspective of many potential readers.

This doesn't meet either of

This doesn't meet either of my criteria, in terms at least of presentation. It doesn't look incredibly simple, since it rests on types, and types are nowadays usually ludicrously complicated beasts (even though Russell and Whitehead's type system actually was simple). And it isn't at all obvious that it's right. It reminds me, forcefully, of a (half-remembered) story of Feynman's, that went something like this: At Los Alamos, where the military brass kept secret documents in safes in their offices, Feynman would crack their safes during brief periods when he was alone in their offices, to see if he could. He diligently reported to the higher-ups that there was this security problem that their safes could easily be cracked. They reacted to this information by issuing a memo that Feynman was not to be left alone in their offices.

re This doesn't meet either of

John, Goedel gave you a recipe for deriving a contradiction in a formal system if that formal system contains the basics about natural numbers and that system contains a valid self-proof of consistency.

I encourage you to go ahead and try to carry out Goedel's construction in DL. At some point you will want to define a function from numbers to sentences. That step is syntactically forbidden. You won't succeed, following Goedel's recipe, at deriving the promised contradiction.
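To make the forbidden step concrete, here is a toy illustration in Python (my own throwaway sketch, not DL and not Hewitt's encoding): a trivial Gödel numbering between sentence text and natural numbers. The decoding direction is exactly the kind of function from numbers to sentences that, as I read the paper, DL's typed syntax refuses to admit as an internal object.

   # Toy Goedel numbering: sentence text <-> natural numbers.
   # This lives *outside* the object theory; the question at issue is
   # whether anything like decode() can be defined *inside* DL.

   def encode(sentence: str) -> int:
       """Map a sentence (as text) to a unique natural number."""
       return int.from_bytes(sentence.encode("utf-8"), "big")

   def decode(n: int) -> str:
       """The inverse: a function from numbers back to sentences."""
       return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

   g = encode("0 = S(0)")           # a (false) arithmetic sentence
   assert decode(g) == "0 = S(0)"   # round-trips fine -- outside DL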

You're kind of missing the

You're kind of missing the point I was trying to make to Hewitt. Even you have commented the stuff's not rigorous. Others have made comments like 'he seems to be misunderstanding what Godel says'. If he's on to something, he needs to radically change his approach to telling others about it, because as it is he's presenting something that doesn't even look worth trying to disprove. The more valid his ideas actually are, the more important it is for him to completely change his presentation to the world.

re: You're kind of missing the

I agree that it would benefit from a more rigorous presentation and a presentation that makes fewer assumptions about the reader's knowledge going in. Not so sure about your condescension, though.

Fewer assumptions about

Fewer assumptions about readers' knowledge going in? (And you call me condescending?) Simplicity is desirable, but as for assumptions about readers, I'm mainly concerned that he seems to ask readers to believe him despite the trappings of crankiness. I went to great lengths, literally and figuratively, to give him the best advice I could on how to sound as uncranky as possible (based on my own decade-or-so of experience in trying to present an extra-paradigmatic idea while sounding as uncranky as possible). It's frustrating you seem to be encouraging him to ignore my advice.

DL may choose to forbid

DL may choose to forbid whatever it wants to, but that won't prevent the definition of functions from DL syntax to natural numbers and vice versa outside of DL. And from the standpoint of the incompleteness proof, this inside/outside distinction doesn't matter.

I've read many, many crank efforts (I am not saying Hewitt is a crank) at subjugating incompleteness by "forbidding" things; it doesn't help.

Also note: incompleteness is a statement about certain specific individual first order mathematical theories; it is not a statement about "mathematics" in toto.

re DL may choose to forbid

And from the standpoint of the incompleteness proof, this inside/outside distinction doesn't matter.

It matters for the 2nd theorem because inside the system you need to combine the self-consistency proof with proof of G's paradoxical implications if G obtains.

Suppose I am wrong. Can you please walk us through the 2nd theorem's construction of a contradiction in DL? I think you won't get far before you run afoul of DL's syntactic limitations but you might prove me wrong.

Also recall my other

I also asked, what power is being lost? Interfering with our ability to prove there's a problem (supposing that is successfully done) only helps our confidence that there isn't a problem if we can see, clearly, that the power that caused the paradoxes has been reduced. And showing that clearly is going to be difficult since we have clear evidence that hanky-panky like a self-proof of consistency is possible.

re: also recall

You could build a model of DL within DL. DL is strong enough to prove that the model system is incomplete. DL is strong enough to prove that the model system contains a self-proof of consistency. DL is not able to prove that the model system is consistent. DL is strong enough to prove -- using roughly the ideas of Goedel's 2nd but differently applied -- that DL can not prove that the model is consistent unless the model is inconsistent. DL can not prove that in talking about the model it is talking about itself. In particular it can prove that G, in the model, can't be proved in the model but it can't from that import the unprovability of G within itself.

Suggestions for improving the style of the paper are welcome

Suggestions for improving the style of the paper at the beginning of this forum topic are most welcome.

a simple challenge for DL

John, I sketched out how I think Goedel's 2nd incompleteness theorem would be applied to DL to derive a contradiction. Hewitt's claim seems to be that some step implied by this sketch is not valid in DL and I'm hoping he'll elaborate a bit to give us all a clearer idea.

(If anyone else sees how DL prevents this, let me know! It's a bit frustrating because I don't think there are enough and precise enough definitions in the paper to be sure exactly how DL is constructed -- so I have to make what I hope are some reasonable guesses.)

http://lambda-the-ultimate.org/node/4784#comment-76480

So what you're saying is

So what you're saying is that within the system of classical mathematics, proof by contradiction is always allowed, therefore this counts as a proof within classical mathematics.

I sort of see your point. Sort of.

But if proof by contradiction is the irresistible force, surely circular arguments are the immovable object. So we're in the sphere of comical paradoxes in classical thought, really. Of course the argument is circular, and a classical mathematician can easily point that out. It's not *explicitly* circular, I'll grant you that.

Regardless, "classical mathematics" in this sense is not even formally defined. It includes any sub-language you care to invent. Arithmetic, calculus, geometry, topology, etc. So the proof is not even self-referential - it's referential of some much larger entity that is in principle infinite in scope - like an ant contemplating the universe. Godel's results are about formally-defined systems of arithmetic with limited scope, not informal universally-scoped self-referential systems that you can break if you wave your hands hard enough. Given that, how is this result really contra to Godel?

To contradict Godel you have to show a system X which is sufficiently complex to come under Godel's remit but which is provably consistent (so proof by contradiction can be used) and then you can prove consistency using proof by contradiction, but only because you are standing on the shoulders of whomever proved X's consistency in the first place. Even then, X does not "self-prove" its consistency, unless the actual proof of its consistency is within X and not within meta-X. So you still need to offload the actual work to the guy who can actually prove the consistency of some system X within X. That would be contra Godel, and would be a significant achievement. But saying "X assumes Y, and using Y I can prove f(X), therefore X self-proves f(X)" seems like nonsense to me, sorry. You need "X *proves* Y, and using Y I can prove f(X), therefore X self-proves f(X)".

Gödel 's results apply to Math and Mathematical Theories

Mathematics encompasses Sets (including Peano Numbers). So mathematics is certainly sufficiently powerful for Gödel's results to apply. In addition, every mathematical theory builds on mathematics. The same simple proof enables every mathematical theory to prove its own consistency.

Second-order inference

I've spent a bit of time with it, and I find this system interesting, but I class the paper as "fringe logic" because:

  1. It deploys terminology in a way that is avoided by professional logicians because it is confusing. Here, the Identity and Reflection axioms and the backchaining rule suggest (it is bad that I am not sure!) that we are working with a second-order theory. Logicians avoid talking about inference systems for such theories, or use of the singly dashed turnstile relation, because these systems do not behave like first-order theories. As Stephen Simpson put it, these theories do not have proof theories; your actor model for them is something else, a computationally grounded model theory, and your "natural deduction" seems to be the same. To try to be constructive about this: can we give a more typical inference system for the theory, and if not, can we cast light on why not?
  2. The paper is hard for me to understand for reasons not directly stemming from the difficulty of the material. I say a bit more below about the kind of consistency proofs that Gentzen and Willard give, but in general, I see much more effort put into showing what the theory can do than into the equally important issue of what it cannot do. If I were to cast the paper into a map of the connections between the ideas presented, it would mostly be marked "here lie dragons". This is not a good basis for a foundational critique.
  3. The rhetoric is grand and not clearly based on the formal results, I think partly because of terminological haze. I might be in some sympathy with the idea that the quest for certainty is Quixotic, but there is something of slapstick in repurposing the fundamental concepts of the revolution in mathematical logic of the 1920s and 1930s to make the goals look absurd. If I wished to be cheap, I might say that proof theoretically, the advantages of this theory over Gödel's system PRA are the advantages that Russell ascribed to theft.
  4. It would not be surprising for a second-order theory (if it is that) to prove its own consistency, if consistency means absence of a "contradiction", because second-order theories effectively elide the distinction between truth in the model and their (preferably doubly dashed) turnstile relation.
  5. I use scare quotes about "contradiction" in this system because since there seems to be no notion of proof in the standard sense of the term, the notion of contradiction is also non-standard; consistency appears here to be redefined to mean the absence of a pair of sentences that are both satisfiable and where one is the negation of the other. I think this may be the point that Countably Infinite was getting at with the point that soundness is a highly nontrivial hidden assumption in this kind of theory.
  6. The kind of consistency proofs that Gentzen gives for FOA and Willard gives for his self-verifying theories are described by Hewitt in terms I regard as condescending. But both of these proofs do what I do not see in this paper, namely provide strong arguments for the adequacy of the systems they describe. If Direct Set Theory proves a mathematical theorem, what confidence do we have that the proof we have would be acceptable to a mathematician? Proofs in ZFC generally are acceptable to mathematicians.

Some minor quibbles:

  • However, PL [Gödel-Löb provability logic] is a very weak theory of deduction - PL is a very tricky propositional modal logic. Its theory of frames is non-first-orderisable. The Löb axiom is the magic sauce in some very interesting, strong type theories.
  • (Gödel insinuated that Wittgenstein was “crazy.”) - I respect Wittgenstein's work very highly, but Gödel was not being unfair here. Wittgenstein was using the notion of contradiction at cross purposes to the way Turing did (Turing famously lost his temper with Wittgenstein about the interpretation of mathematical logic).
  • I don't like the existence of the trademark symbol in Direct Logic™ (see the trademark note on the first page). At the very least, it is unfair to the literature stemming from the same-named decidable predicate calculus of Ketonen and Weyhrauch (1984).

I do not mean to be cheap. At the very least, an "internal logic of the actor model" is a very interesting thing, and many of the features of the system remind me of other interesting things that I have seen. But I would like to see less contempt for first-order solidity and more results about where the limits of the theory lie.

Computer Science versus Philosophy (again)

Charles Stewart has raised some interesting points.

A fundamental point is the difference in how Computer Science and Philosophy address issues.

In order to facilitate discussion, consideration of his points is separated into multiple posts below (ignoring as much as possible Stewart's non-substantive innuendo).

"Faced with the choice between changing one’s mind and
proving that there is no need to do so,
almost everyone gets busy on the proof."
    John Kenneth Galbraith [1971 pg. 50]

Philosophy set the initial paradigm

Philosophy set the initial paradigm and some philosophers are uncomfortable that Computer Science has entered a field that used to belong to Philosophy alone. A new paradigm is often considered "fringe" and new terminology is not initially understood.

"fringe"

Three points:

  1. Perhaps I should not use the charged term "fringe". I do not mean to say there is no contribution there, or that I don't appreciate efforts to do something new. It is just that both paraconsistency and non-first-orderisability are tricky things, and if you inhabit that space, you lose things that boring, certainty-grounded, monotone first-order theories have. This is not to say that this ground is not interesting, but if you are not clear about the costs of giving up monotonicity or first-orderisability, you will lose audience and risk not being considered relevant to the ongoing enterprise of mainstream logic. In particular, the kind of reflection-like axioms you use and interleaving of forward/backward-chaining are known and have a history in logic: the reasons these haven't been widely adopted in the theorem proving community are reasons to be doubtful of your work until you deal with them straight on.
  2. I am not an academic anymore; to the extent I consider myself still a scholar, it is as a logician. Computer science, particularly, is where I think the most exciting applications of logic lie and so it is the richest source of problems for logic; likewise philosophy is something that logicians sometimes have to do, but logic is something of an autonomous discipline that overlaps with but is not just a part of either. I like how Volker Halbach has talked of the formal sciences as an ecology that is inhabited by logic, computer science, and (some) philosophy. You should be right at home in the formal sciences.
  3. I actually like your consistency proof, as an argument. It is nice to set up systems where this kind of thing can be done. I do not think it is very surprising that we can find systems that are either non-first-orderisable or paraconsistent that can "prove" their own consistency, and the interest of the proof depends what good properties your system has. I think what you have shown doesn't tell us that Gödel has been refuted if we avoid playing games.

Paraconsistency is a special case of Inconsistency Robustness

Direct Logic is designed to support Inconsistency Robustness (see the IR'14 symposium at Stanford next summer).

Paraconsistency is a special case of Inconsistency Robustness.
* All that paraconsistency requires is that it not be possible to derive everything from a contradiction. So a logical rule to the effect that you can infer "God exists." from a contradiction is paraconsistent because it does not infer everything.

* As a paradigm, paraconsistency resulted in a zoo of incompatible and non-intuitive systems that were not useful to Computer Science. Computer Science needs a powerful, intuitive standard system for inconsistency robust inference. Direct Logic attempts to fill the bill.

The problem with paraconsistency...

... is that the class of paraconsistent theories includes the dialethic theories, which I think we agree are unacceptable for mathematics. But since dialethicity is something that can be hidden, it makes providing formal criteria of acceptability harder for paraconsistent logics than for monotone logics.

I do not reject paraconsistency: Belnap and Dunn have shown that it is interesting. But its strengths I consider to be weaknesses for the kind of enterprise that I take you to be engaged in, namely providing foundations.

Inconsistency Robustness is fundamental to Computer Science

Inconsistency Robustness is fundamental to Computer Science. See Inconsistency Robustness 2014

What theories are really like

I believe that handling inconsistencies is essential to working with large information systems, collective inference, and the like, which I would class as information science. I think failure to tolerate inconsistencies has been healthy for mathematics.

I am not barring monsters here. I just want to know what theories are really like before I claim to understand them. For non-normal modal logics, I like to see if they are equivalent to a normal modal logic, in the sense that proofs in one can be translated into proofs in the other, and likewise for unprovability. For paraconsistent theories, I like to see if they are equivalent to some monotone logic. If I can't see this, then I doubt the coherence of the system.

Two kinds of theories: mathematical versus practical

It is important to distinguish two kinds of theories and handle each appropriately:
* a Mathematical Theory should be a consistent extension of Mathematics which is the common foundation of Types and Sets (including Peano Numbers)
* a Practical Theory (e.g. climate change, the human liver) is invariably inconsistent and requires inconsistency robust inference (e.g. Direct Logic). A Practical Theory can make use of any theorem of Mathematics.
Both mathematical and practical theories should be monotonic for theoretical and engineering reasons.

Self-referential sentences are monsters that must be barred because they lead to contradictions. First-order theories are entirely inadequate for Computer Science.

Computer Science is informed by and informs Mathematics

Computer Science is informed by Mathematics and increasingly informs Mathematics. When it was just being born, Computer Science imported from Mathematics. But the new science soon needed to extend and then create new Mathematics.

A proper mathematical theory requires monotonicity. According to Anderson and Belnap in Entailment:

Any criterion according to which entailment is not transitive,
is ipso facto wrong.

Thanks for the pointer to the work of Volker Halbach.

Is the first-order restriction "playing games"?

One way of "playing games" is introducing limiting syntactic restrictions on general principles. For example, first-orderedness imposes syntactic restrictions as follows:
* on the general induction principle for natural numbers
* on the sets that can be defined by a predicate using comprehension
Philosophical reasons have been proposed for the above restrictions. But the above restrictions may not make sense for Computer Science.

No

FOL is far from a syntactic restriction on logic. The restrictions make sense for semantical reasons more intelligible to the typical computer scientist than to the similarly typical philosopher.

In terms of "logic harmony" (a philosophical concept due to Gentzen, Belnap, Prawitz, and Dummett, one that Howard and Martin-Löf have shown makes clear sense within theoretical computer science) and as I explain in my doctoral dissertation (Stewart 2000; mostly chapter two, for the intuitionistic case, together with the discussion of recursion in chapter one, and with an explanation of how classical logic complicates matters in chapters three and four and a demonstration there that these complications can be resolved) , the standard inference rules involving natural numbers (in type theory, the rules introducing the zero and successor term constructors and the induction rule) can be seen as a complete specification of the meaning of natural numbers and provides an effective criterion for acceptability of alternate induction principles. Incompleteness arises for a reason that can be understood in type theory syntactically: that the induction rule for the natural numbers cannot be constructed to satisfy what I call the decompositional part of the inversion principle (Prawitz calls this the reverse of the inversion principle), with the consequence that for some new concepts (such as universal quantification) the interaction of the natural numbers concept can interact with the new concept to increase the inferential strength beyond the two concepts separately (i.e., the theory with both concepts is a nonconservative extension of the theory with either one of the concepts removed; this generally means that it is not possible to prove cut elimination for the theories with both concepts, even though cut-elimination might hold for each of the one-concept theories). Comprehension principles appear to suffer a similar problem with respect to harmony, although there is no consensus on the right way to formulate sets in type theory.

If you look for people who think that SOL is better than FOL, I guess you will find more philosophers than computer scientists, e.g., Shapiro (2000).

(Shapiro 2000) Foundations without Foundationalism: A Case for Second-order Logic. Oxford University Press.
(Stewart 2000) On the Formulae–as–Types Correspondence for Classical Logic. DPhil thesis, Programming Research Group, University of Oxford. http://www.textproof.com/paper/dphil.html

Ohh. Your dissertation may

Ohh. Your dissertation may be exactly what I need to move me forward (well, your dissertation plus I need to launch another sustained assault on Kleene's Introduction to Metamathematics).

Good luck

Some people have complained that it is hard going. Do get in touch if you get in trouble.

First-orderedness is a syntactic restriction on general induction

First-orderedness is a syntactic restriction on the general induction principle introduced by Peano and Dedekind, who proved that the general induction axiomatization (see below) characterizes the natural numbers up to isomorphism.

The correct categorical formulation of the general induction principle is as follows:

The natural numbers ℕ are axiomatised as follows, where S is the successor function:
  0∈ℕ
  ∀[n∈ℕ]→ S(n)∈ℕ
  ∀[n∈ℕ]→ S(n)≠0
  ∀[n,m∈ℕ]→ S(n)=S(m) ⇨ n=m
  ∀[Y]→ Inductive[Y]⇨ ℕ⊆Y
     where Inductive[Y] is defined as follows:
       Y⊆ℕ
       0∈Y
       ∀[n∈Y]→ S(n)∈Y

The first-orderedness restriction has unfortunate consequences for what can be proved in practice because of the compactness property imposed on all first-order theories. A consequence of compactness is that no first-order theory can characterize the natural numbers up to isomorphism. Another unfortunate consequence of compactness is that no first-order theory can prove that a computer server always responds to the requests that it receives.
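The standard argument behind the first of these consequences, in sketch (textbook model theory, nothing specific to Direct Logic):

   Add a new constant c to a first-order theory T of the numbers,
   together with the infinitely many axioms
       c≠0, c≠S(0), c≠S(S(0)), …
   Every finite subset of these axioms has a model (interpret c as a
   sufficiently large standard number), so by compactness the whole set
   has a model. In that model c denotes a nonstandard number, so T
   cannot characterize ℕ up to isomorphism.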

This observation is not a contradiction

General induction is formulated using language from three type-theoretic concepts: the natural numbers (Nat) (I claim this is adequately specified using normal induction as it is formulated by Martin-Löf), first-order universal quantification (1U), and second-order universal quantification (2U).

Nat alone is equivalent to PRA
Nat+1U is not a conservative extension of Nat, and alone is equivalent to PA
Nat+1U+2U is not a conservative extension of Nat+1U, and allows us to prove general induction.

Hence, on the account of logical harmony as I have expressed it in type theory, we have two sorts of interference involving two kind of universal quantifier.

Incidentally, the simple observation that general induction is a 2nd-order sentence has no obvious relevance to my observation that FOL is characterised by its semantics. It is common for 1st-year introductions to logic to begin by working directly with the model theory of FOL and treat its syntax casually until axiomatics and proof theory are tackled later in the course. If the essence of FOL was its syntactic restriction, this approach would fail to teach FOL at all. Having taught such a course, I attest that it does indeed teach FOL well.

First-order theories are inadequate for Computer Science

First-order theories are entirely inadequate for Computer Science because, for example:
* A first-order theory cannot axiomatize the integers up to isomorphism
* A first-order theory cannot prove that a server always responds to the requests received by the server
* The Tarskian model theory semantics of a first-order theory cannot handle inconsistency

Computer Science needs semantics for inference that are based on argumentation rather than Tarskian model theory. See Formalizing common sense for scalable inconsistency-robust information integration

Is allowing self-referential propositions "playing games"?

Is allowing self-referential propositions into mathematics "playing games"? Self-referential propositions do not seem to be at all useful in ordinary mathematics. Also, self-referential propositions lead to inconsistency in mathematics. So why allow them? Is this where Gödel went off the rails?

I don't think so

I think incompleteness is mathematical for two reasons. First, culturally, I've found "pure" mathematicians (unlike computer scientists) to often be condescending about logic in general, but respectful of the core incompleteness results. Second, Paris & Harrington (1977) and Friedman (2010) have shown that incompleteness is isomorphic to quite natural mathematical phenomena.

There are many accounts of self-reference in consistent first-order theories; many use familiar devices: e.g., quotation, allowing names for sentences as individuals; many use more elaborate devices: e.g., ordinal reflection principles over hierarchies of Tarski-languages, categorical universes, first-order modal theories with non-first-orderisable (ie., Pi-1-1) frame conditions. You will not be surprised if I say that I do not think that Gödel's hierarchy of provability strength goes off any rails.

Is forbidding self-referential sentences a kind of monster barring? Lakatos seems to consider monster barring a kind of game.

(Paris & Harrington 1977). A mathematical incompleteness in
Peano Arithmetic. In J. Barwise, editor, Handbook of Mathematical
Logic, pages 1133-1142. Amsterdam: North-Holland.

(Friedman 2010) Boolean Relation Theory and Incompleteness. Lecture Notes in Logic, Association for Symbolic Logic.

Monster barring is serious business in Computer Science

Note that the Peano-Dedekind standard formulation of induction proves the Paris-Harrington Theorem that you cite. So the theorem is not an example of "incompleteness" in mathematics. The fact that the cut-down first-order version of induction cannot prove the Paris-Harrington Theorem is another argument against first-orderedness.

So far as I know, there are no good practical uses in Computer Science for mathematical self-referential sentences. Since self-referential sentences lead to inconsistency, they represent a serious security hole and must be barred.

Au contraire, Lakatos meant that monster barring is serious business when he wrote:

“But why accept the counterexample? ... 
Why should the theorem give way...? 
It is the ‘criticism’ that should retreat.... 
It is a monster, a pathological case, not a counterexample.”

The self-proof of the consistency of mathematics stands and the self-referential sentences must be barred.

Paris-Harrington and Lakatos

By the PH theorem, I mean theorem 1.2 of Paris & Harrington (1977), which is not a theorem of PA, and which weak systems (EFA, I think) can show is equivalent to the consistency sentence for PA. Is this PH theorem provable in your logic?
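For reference, theorem 1.2 is the strengthened finite Ramsey theorem, which says roughly:

   For all n, k, m there is an N such that every k-colouring of the
   n-element subsets of {1, …, N} has a monochromatic subset H with
   at least m elements that is also "relatively large", i.e. |H| ≥ min(H).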

I think you have misunderstood the significance of the role those Lakatos assertions by Delta play in the dialog, though maybe you can convince me that Delta speaks for Lakatos there.

(Paris & Harrington 1977) A mathematical incompleteness in Peano arithmetic. In Handbook of Mathematical Logic, Amsterdam: North-Holland. http://www.compstat2004.cuni.cz/~krajicek/ph.pdf

Paris-Harrington is a theorem of Mathematics in Direct Logic

Paris-Harrington is a theorem of Mathematics in Direct Logic because Direct Logic incorporates the full Peano-Dedekind theory of the natural numbers.

Lakatos was serious about monster barring

Lakatos was serious about monster barring.

Computer Science is deadly serious about barring monsters like self-referential sentences because they lead to contradictions in mathematics that represent security holes.

Being first-order is an odd restriction for Computer Science

Being first-order is an odd restriction for many purposes in Computer Science.

First-order theories are incapable of characterizing (up to isomorphism) even fundamental mathematical structures such as the integers and the real numbers. Peano and Dedekind succeeded in providing axiomatizations (proof theories) that characterize the integers and the reals exactly, up to isomorphism. Subsequently, some philosophers have come up with quibbles that are not particularly bothersome in Computer Science.

Proof Theory versus Tarskian Model Theory

Proof theory (argumentation) is fundamental to Computer Science because it has proven useful for reasoning about the inconsistent information that is pervasive in computer systems. Consequently, Direct Logic is "all in" for argumentation (proof theory). Of course, as a result, model theorists are not happy with Direct Logic.

So far, Tarskian model theory has failed to gain similar traction with inconsistent information, although some progress has been made in extending it. For example, see Berto's paper in IR'11.

Can credible consistency proofs make use of Proof by Contradiction

In view of the proof that mathematics self-proves its own consistency, can credible consistency proofs make use of Proof by Contradiction?

Gentzen proved the consistency of first-order Peano arithmetic, using proof by contradiction. Is this an issue for the credibility of his proof?

Provability Logic is very weak in comparison to Direct Logic

Provability Logic is very weak in dealing with issues of provability by comparison with Direct Logic. An important reason for the weakness of Provability Logic is that it allows self-referential propositions.
Consequently, Provability Logic lacks the Principle of Integrity:

          (⊢Φ) implies Φ 
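
A standard way to see why (a sketch via Löb's theorem, a known result about provability logic, not something argued in this thread): if a logic with an internalized provability modality proves an instance of Integrity for a proposition Φ, then it already proves Φ outright:

          if ⊢((⊢Φ) implies Φ) then ⊢Φ

So adopting Integrity as a general principle would make every proposition a theorem; a logic that admits self-referential propositions can consistently assert Integrity only for propositions it has already proved.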

Gödel insinuated that Wittgenstein was crazy.

Gödel insinuated that Wittgenstein was crazy. This was unfair because it was ad hominem and didn't address Wittgenstein's substantive point that self-referential propositions lead to inconsistency in mathematics, argued as follows:

True in Russell's system means, as we have said, proved in
Russell's system; and false in Russell's system means that the
opposite [negation] has been proved in Russell's system.
Let us suppose I prove the unprovability (in Russell's system) of P
[⊢⊬P where P⇔⊬P]; then by this proof I have proved P [⊢P].
Now if this proof were one in Russell's system [⊢⊢P]—I should in
this case have proved at once that it belonged [⊢P] and did not
belong [⊢¬P because ¬P⇔⊢P] to Russell's system.
But there is a contradiction here! 
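
Spelling out the bracketed annotations as a derivation (my reconstruction, not part of Wittgenstein's text), assume a sentence P with P⇔⊬P, i.e., P asserts its own unprovability:

          1) ⊢⊬P      the proof of unprovability, carried out in the system
          2) ⊢P        from 1, because P⇔⊬P
          3) ⊢⊢P      from 2, internalizing the proof within the system
          4) ⊢¬P       from 3, because ¬P⇔⊢P

Steps 2 and 4 give ⊢P and ⊢¬P, the contradiction Wittgenstein points to.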

PS. I asked Weyhrauch about our use of "Direct Logic". He thought it was OK and appreciated the reference.

use of "Direct Logic"

It's not the use of the term that worries me; it is the trademark symbol. There seems to be an assertion of ownership over the term that was not made by Ketonen or Weyhrauch, or by the various other people who have done logic under this name.

A valuable name should not be wasted

Ketonen & Weyhrauch mentioned Direct Logic only in passing, and their mention was for a very specialized system. At this point, the usage here is more deeply embedded in the literature than their original mention.

Embedded in the literature

Following its recognition as a simplified fragment of linear logic, direct logic has been the focus of a significant literature. Cf. the DuckDuckGo query "direct logic" linear !gscholar (which redirects to Google Scholar).

Asserting a trademark on a theory you didn't invent does not change the validity of your research, but putting trademark signs on academic terms is a danger sign for fringe science.

Direct Predicate Calculus versus Direct Logic

Weyhrauch liked to use the term "Direct Predicate Calculus" (6,430 results on Google), which appears as the official name in many of the papers that use "direct logic" as a description.

Unfortunately, Google Scholar tends to be out of date in comparison with regular Google searches. However, Direct Logic shows up in Common sense for concurrency and strong paraconsistency using unstratified inference and reflection on the first page of a Google Scholar search for "Direct Logic".

I take it that we disagree on the importance of Linear Logic for Computer Science ;-)

Precedence

The point I made is that "direct logic" has seen substantial usage by a variety of scholars since Ketonen and Weyhrauch's paper was published. I am not going to evaluate your theory on the basis of a symbol; rather, I just wanted to draw your attention to the fact that it did not create a good impression on me.

I would find it strange to think direct logic is important and linear logic unimportant. I can see how to defend the other three combinations of important/unimportant.

The fate of your research is in the hands of your successors

The fate of your research is in the hands of your successors. So far Linear Logic has not gained much traction in Computer Science. We will see what happens with Direct Logic ;-)

Thank you

This is helpful. So it does have a first-order, paraconsistent proof theory?

There are non-normal epistemic logics that seem close to this theory. See Égré (2004) The knower paradox in the light of provability interpretations of modal logic; I have a PDF preprint of this somewhere, I think. You can get a provability logic, equivalent to (intertranslatable with) Gödel-Löb logic, that has the T axiom but sacrifices the K axiom for something weaker (this is an old result, I think due to Solovay, which Égré takes seriously as an epistemic theory).

Use of the turnstile symbol as a binary modality is not new, though I agree that this is bad for clarity, since we will also have a different turnstile relation between propositions/sequents/whatever in the metatheory.

Tricky response to Avicenna: you may take away those clubs and brands, because I don't deny non-contradiction for sentences in Delta-1-2!

Having ⊢Φ and ⊢¬Φ means having a contradiction in math

Having ⊢Φ and ⊢¬Φ means having a contradiction.

Yes but...

The law of non-contradiction, on which disproofs by counterexample rest, ∄Φ:(¬Φ ∧ Φ), is not present in an inconsistent system; therefore you cannot use it to disprove anything in such a system.

Hence while it is true that from the proposition that a system is inconsistent you can deduce that in that system

∃ Φ:(¬Φ ∧ Φ)

this does not generalize to

∀ Φ:(¬Φ ∧ Φ)

nor does it form a valid basis for a disproof of anything in that system.

Direct Logic is paraconsistent and minimal

Direct Logic is
* paraconsistent, because a contradiction in a practical theory does not entail everything
* minimal, in the sense that adding proof by contradiction to a practical theory makes it a classical theory

Fortunately, Direct Logic has the usual Boolean equivalences

Fortunately, Direct Logic has the usual Boolean equivalences for inconsistency robust inference. The equivalences make Direct Logic intuitive and powerful by comparison with alternative systems.

Direct Logic has given up on the propositions True and False

Direct Logic has given up on the propositions True and False since they don't actually say anything.

(~A & A) & B = (~A & A) is not an equivalence in Direct Logic

Of course, (~A & A) & B = (~A & A) is not an equivalence in Direct Logic.

The rule A |- (A v B) is not inconsistency robust

The rule A |- (A v B) is not inconsistency robust as pointed out in Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic reasoning and the Actor Model.

From the above paper (and others), the correct inconsistency robust rule for Or-Introduction is the following: A,B |- (A v B)

Only inconsistency robust rules are allowed for practical theories (e.g. climate change, the human liver, etc.) because such theories are invariably inconsistent.
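
To see what goes wrong with the unrestricted rule under a contradiction, here is the standard explosion argument (sketched by me; it assumes Disjunctive Syllogism is retained):

          1) ⊢A           one half of a contradiction
          2) ⊢¬A          the other half
          3) ⊢(A v B)     from 1 by the rule A |- (A v B)
          4) ⊢B           from 2 and 3 by Disjunctive Syllogism

Since B is arbitrary, a single contradiction would prove everything. The two-premise rule A,B |- (A v B) blocks step 3, because B must already have been proved.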

For inconsistency robustness, P⇒Q is not equivalent to Q∨¬P

In Direct Logic, for practical theories, P⇒Q is not equivalent to Q∨¬P. Instead, P⇒Q is equivalent to (P⊢Q)∧(¬Q⊢¬P).

Of course, for mathematical theories, the classical equivalence holds.
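
One way to read the replacement (my gloss): taking P⇒Q to be (P⊢Q)∧(¬Q⊢¬P) builds in exactly the two inference directions wanted of an implication,

          from P⇒Q and ⊢P    infer ⊢Q      (via P⊢Q)
          from P⇒Q and ⊢¬Q   infer ⊢¬P     (via ¬Q⊢¬P)

while refusing the classical identification with the disjunction Q∨¬P, which would reintroduce the Or-Introduction problem discussed above.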

The following is not inconsistency robust: A ⊢ (A v (B & A))

The following rule is not inconsistency robust: A ⊢ (A v (B & A))

Not having a Φ such that ⊢Φ and ⊢¬Φ means consistency

Not having a Φ such that ⊢Φ and ⊢¬Φ means the consistency of mathematics.

Approach of Direct Logic is the opposite of Willard

The approach of barring self-referential sentences in Direct Logic is the opposite of [Willard 2007], which developed systems weak enough that self-referential sentences cannot be constructed even with an untyped grammar.

Turing versus Wittgenstein

Turing differed fundamentally from Wittgenstein on the question of inconsistency when he attended Wittgenstein's seminar on the Foundations of Mathematics [Diamond 1976]:

Wittgenstein:... Think of the case of the Liar.
It is very queer in a way that this should have puzzled anyone — 
much more extraordinary than you might think... 
Because the thing works like this:
if a man says 'I am lying' we say that it follows that he is not lying,
from which it follows that he is lying and so on. 
Well, so what? You can go on like that until you are black in the face. 
Why not? It doesn't matter. ...
it is just a useless language-game, and why should anyone be excited? 

Turing: What puzzles one is that one usually uses a contradiction as a criterion
for having done something wrong. 
But in this case one cannot find anything done wrong.

Wittgenstein: Yes — and more: nothing has been done wrong, ...
where will the harm come?

Turing: The real harm will not come in unless there is an application, 
in which a bridge may fall down or something of that sort…. 
You cannot be confident about applying your calculus
until you know that there are no hidden contradictions in it ….  
Although you do not know 
that the bridge will fall if there are no contradictions, 
yet it is almost certain 
that if there are contradictions it will go wrong somewhere. 

Wittgenstein followed this up with [Wittgenstein 1956, pp. 104e–106e]:

Can we say: ‘Contradiction is harmless if it can be sealed off’?
But what prevents us from sealing it off?

Don't feed the troll

Let's look at another attempt at adding self-consistency proof to a logic:

                |- 
  -------------------------------------
  |- Hey guys, this logic is consistent

The intended interpretation here is that if you can prove "Hey guys, this logic is consistent", then the logic is consistent (and also, hey, guys). You can add this inference rule to ZFC and the logic is still consistent! And it proves its own consistency!

I'm hearing objections that the "denominator" up there ^^^ doesn't sound mathy enough. So let me improve that:

                     |- 
  -------------------------------------------
  |- ¬exists[P:Proposition] (|- (P ^ ¬P))

So if we add that rule, have we actually proven our own consistency? Of course not. A proof of consistency isn't just a term that's supposed to be interpreted as "this logic is consistent." Look at what Godel did. He carefully modeled what it means to be a valid term and a valid proof in arithmetic, and the statement of consistency amounted to a real arithmetic statement that "there does not exist a proof of false". He did not do this, as Hewitt claims, because he didn't have access to cutting-edge computer science. He did this because he understood that what you intend the logic to mean is irrelevant.

I'm not sure if Hewitt is a troll looking to increase his page rank or doesn't understand what Godel did, but this is kind of ridiculous. And look, I don't want to bet against Hewitt, either. At least, not metaphorically. If someone wants to bet real money, let's talk.

re: hungry hungry trolls

Pick your favorite classical, formalizable number theory. Pick one that you and many mathematicians believe is consistent, albeit necessarily incomplete. If you really try to formalize it, you'll discover it doesn't contain a proof of a Theorem of Consistency. You can go ahead and add an Axiom of Consistency (asserting the consistency of the resulting system) and... Goedel's 2nd proof means your classical system with the new axiom is inconsistent. It's conceptually easy to make a specific proposition P such that P and not P are both proved in your new system with the Axiom of Consistency.
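
In outline (my reconstruction; the self-reference is what does the damage): let T' be T plus an axiom Con(T') asserting the consistency of T' itself.

          1) ⊢T' Con(T')                               it is an axiom of T'
          2) if T' is consistent, then ⊬T' Con(T')     Goedel's 2nd theorem
          3) therefore T' is inconsistent              from 1 and 2

(Adding Con(T) for the original theory T instead avoids this trap, but the enlarged system still cannot prove its own consistency sentence, so nothing is gained.)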

Hewitt is displaying a different way to formalize something like number theory. Hewitt's way involves a syntactic restriction of a sort that people in Goedel's day hadn't quite conceived. Hewitt's claim is that this syntactic restriction prevents Goedel's 2nd theorem proof from applying yet leaves "classical math" otherwise pretty much intact. Not only could you add an Axiom of Consistency to this system without creating an inconsistency via Goedel's method, but in addition you don't need that axiom since Consistency is already a theorem.

Hewitt understands Goedel well and subtly. What's unclear to me, still, is whether or not his syntactic trick actually works as advertised.

Between the two of you, I think that Hewitt is not trolling and in the exchange just between you two there are either 0 or 1 trolls.

So Direct Logic makes it

So Direct Logic makes it impossible to make a statement P that says "I am unprovable", ducking around Godel that way?

This is of interest, but where is that claim proved or demonstrated? Certainly not in the paper we are talking about here, unless I'm missing something.

The paper here is attempting to self-prove consistency of classical mathematics using reductio ad absurdum, and the arguments stem from that, not from whatever syntactical properties Direct Logic has.

Are we at the end of the day just saying "what if we don't allow self-referential statements"? This is a syntactic restriction of which Godel's peers couldn't conceive? Really?

Russell and Whitehead

Passage from my dissertation:

[Russell and Whitehead] attributed the antinomies to impredicative definitions, i.e., definitions that are not strictly prior to what they define (such as the set of all sets, impredicative since the set contains itself; or the set of all sets that do not contain themselves, viciously impredicative since its self-containment leads immediately to Russell's Paradox). To eliminate impredicativity, they stratified properties (i.e., propositional functions) in the Principia by nonnegative integer type, restricting each function to take arguments of strictly lower type. [footnote: Ironically, and perhaps insightfully, the properties predicative and impredicative cannot be expressed in Russell and Whitehead's logic, because these properties pertain to properties of every type n.] However, having carefully arranged this stratification, they also introduced an axiom to collapse the hierarchy (for every proposition involving properties of type >=1 there is an equivalent proposition with properties all of type 0), for which the justification, by their own admission, was only that it was what was needed to found analysis. Even the authors considered this axiom a stopgap.
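
For reference, the two moves described above can be stated compactly (my modern paraphrase, not Russell and Whitehead's notation): stratification requires a propositional function of type n+1 to take arguments of type at most n; the Axiom of Reducibility then asserts that every propositional function is coextensive with a predicative one of lowest type:

          for every φ there is a predicative ψ such that ∀x (φ(x) ⇔ ψ(x))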

Direct Logic makes self-referential sentences impossible

Direct Logic makes self-referential sentences impossible to construct. See the syntax of Direct Logic on page 9 of Mathematics self-proves its own Consistency, paying particular attention to footnote 28.

Intended interpretations

in the exchange just between you two there are either 0 or 1 trolls.

I can appreciate that the style of my last post wasn't helpful. I'll proceed more respectfully in the future.

Hewitt's way involves a syntactic restriction of a sort that people in Goedel's day hadn't quite conceived.

But let's reserve judgment on what the old timers missed for a post mortem (after we understand what Hewitt has done).

It seems to me that the possibilities are:

1. Direct Logic (DL) doesn't have a full encoding of arithmetic. Is PA there as a subset somewhere?

2. DL doesn't "understand" the reflected terms that Hewitt has added in a way that would allow Godel's (why does LtU sometimes add the umlaut and sometimes not?) proof to go through. Does DL know that these reflected terms are just data? Can it pattern match on terms and construct new ones?

3. DL is inconsistent.

You seem to be saying that you think bullet 1 is the case? I'm not sure how type theory could restrict us to the "useful fragment" of PA. The types are Nat, so which operation are you going to outlaw?

I suspect we're in bullet 2 and that Hewitt has added reflection but has not formalized how it works well enough for the logic to "get a handle on it" and construct self-referential statements. The problem is that without sufficient axiomatization to pin down what you can do with these "proof objects", you're left appealing to the intended model when you claim that you've proven consistency.

Consider: instead of interpreting Hewitt's "A, B |- C, D" as a judgment literal, let's interpret it as "A and B implies C or D" (a proposition with value True or False). Are there any rules of DL that this interpretation wouldn't satisfy? If you can't open up a Judgment or Proof and see that it's actually a piece of data, then it's possible that an interpretation like this satisfies the axioms. And then the "Proof of Consistency" is really just a statement that there is no proposition that is both true and false.
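
Making the worry concrete (a sketch of the collapsing interpretation; not a claim about DL's actual rules): define an interpretation ⟦·⟧ that erases the judgment structure:

          ⟦A, B |- C, D⟧  =  (A ∧ B) ⇒ (C ∨ D)
          ⟦|- Φ⟧           =  Φ

If every rule of DL remains valid under ⟦·⟧, then the "consistency" statement ¬∃Φ (⊢Φ ∧ ⊢¬Φ) merely denotes ¬∃Φ (Φ ∧ ¬Φ), a tautology about truth values that says nothing about proofs.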

I don't know enough about DL to say whether the above works or not, but hopefully it illustrates the problem. Attempts to get rid of Godel by restricting what you're allowed to say about reified propositions can have the effect that you're no longer talking about what you intend to be talking about.

An aspect of Godel

An aspect of Godel I cheerfully admit to understanding very little, but that I note those at the time were especially impressed by, is Godelization, which basically allows him to establish his result for a logical system without having to worry about whether the logical system explicitly admits a statement "this statement is unprovable": if it's powerful enough, he says, it'll admit a statement equivalent to this statement. Via Godel numbers. Making his result appallingly difficult to avoid. Death, taxes, and Godel.
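
The mechanism has a standard modern formulation, the Diagonal Lemma (stated here for reference; it is a textbook result, not something from this thread): for any formula φ(x) with one free variable, in a theory T interpreting enough arithmetic, there is a sentence ψ such that

          ⊢T ψ ⇔ φ(⌜ψ⌝)

where ⌜ψ⌝ is the numeral for the Godel number of ψ. Taking φ(x) to say "x is not the code of a provable sentence" yields Godel's sentence; ψ itself is an ordinary arithmetic sentence whose grammar involves no self-application.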

Gödel implicitly assumed a strange grammar for mathematics

Gödel implicitly assumed a strange grammar for mathematics.

It's not so difficult to rule out self-referential sentences. See the syntax of Direct Logic on page 9 of Mathematics self-proves its own Consistency, paying particular attention to footnote 28.

re: "Intended interpretations"

You seem to be saying that you think bullet 1 is the case?

No, (2). I'll elaborate my tentative understanding in a reply to Shutt.

(Are you still thinking

(Are you still thinking about posting to John?)

I went through the paper looking for how he avoids Gödel. Hewitt states in the forums here that he avoids Gödel with his type scheme:

If f is a [σ]↦Sentence and t is a σ then f[t] is a Sentence [28]

But this scheme is just about avoiding overtly self-referential Sentences. I don't know what it would have to do with blocking Gödel, whose proof doesn't rely on overt self-reference.

It looks to me as though the only thing stopping inconsistency is possibly a lack of axioms around proofs. He has a notation for |-^p Q, but the rules for this aren't in the paper.

To me it still seems that DL either doesn't have a proper self-consistency proof or it is inconsistent.