Continuation of Discussion: "Mathematics self-proves its own Consistency (contra Gödel et al.)"

Drupal limitations truncated the previous ongoing discussion, so I have re-posted here. BTW, I would like to thank LtU for the previous discussion, which was very helpful in improving the current version.

Some readers might be interested in the following abstract from Mathematics self-proves its own Consistency:

                   Mathematics self-proves its own Consistency
                          (contra Gödel et al.)

                             Carl Hewitt

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.
In addition, Proof by Contradiction can be used to infer the consistency of mathematics 
by the following simple proof:
   The self-proof is a proof by contradiction.
     Suppose to obtain a contradiction, that mathematics is inconsistent.
     Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
     Consequently, both Φ and ¬Φ are theorems
     that can be used in the proof to produce an immediate contradiction.
     Therefore mathematics is consistent.
The above theorem means that the assumption of consistency
is deeply embedded in the structure of classical mathematics.
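The steps above can be rendered schematically (an informal sketch, using ⊥ for absurdity):

```latex
% Self-proof of consistency, as a proof by contradiction:
%   1. Assume (to obtain a contradiction) that mathematics is inconsistent.
%   2. Then for some proposition \Phi, both \vdash \Phi and \vdash \neg\Phi.
%   3. From \Phi and \neg\Phi, infer the absurdity \bot.
%   4. Discharge the assumption: mathematics is consistent.
\neg\mathrm{Consistent}
  \;\Rightarrow\; \exists \Phi\, (\vdash \Phi \,\land\, \vdash \neg\Phi)
  \;\Rightarrow\; \bot
\qquad\text{hence}\qquad \mathrm{Consistent}
```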

The above proof of consistency is carried out in Direct Logic [Hewitt 2010]
(a powerful inference system in which theories can reason about their own inferences).
Having a powerful system like Direct Logic is important in computer science
because computers must be able to carry out all inferences
(including inferences about their own inference processes)
without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent
because of a contradiction with the result of Gödel et al. that
“if mathematics is consistent, then it cannot infer its own consistency.” 
The resolution is not to allow the self-referential propositions
that are used in the proof by Gödel et al.
that mathematics cannot self-prove its own consistency.
This restriction can be achieved by using type theory
in the rules for propositions
so that self-referential propositions cannot be constructed
because fixed points do not exist. 
Fortunately, self-referential propositions
do not seem to have any important practical applications.
(There is a very weak theory called Provability Logic
that has been used for self-referential propositions coded as integers,
but it is not strong enough for the purposes of computer science.) 
It is important to note that disallowing self-referential propositions
does not place restrictions on recursion in computation,
e.g., the Actor Model, untyped lambda calculus, etc.
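As an illustration of that last point, general recursion via fixed points remains freely available in untyped computation. Here is a minimal sketch in Python (standing in for the untyped lambda calculus; the combinator and `fact` are illustrative names, not from the paper):

```python
# Z combinator: a fixed-point combinator that works under strict evaluation.
# It builds self-application in computation without any self-referential
# proposition being constructed.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Tie the recursive knot for factorial without `fact` ever naming itself.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```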

The self-proof of consistency in this paper
does not intuitively increase our confidence in the consistency of mathematics.
In order to have an intuitively satisfactory proof of consistency,
it may be necessary to use Inconsistency Robust Logic
(which rules out the use of proof by contradiction, excluded middle, etc.).
Existing proofs of consistency of substantial fragments of mathematics
(e.g. [Gentzen 1936], etc.) are not Inconsistency Robust.

A mathematical theory is an extension of mathematics
whose proofs are computationally enumerable.
For example, group theory is obtained
by adding the axioms of groups to Direct Logic.
If a mathematical theory T is consistent,
then it is inferentially undecidable,
i.e. there is a proposition Ψ such that
⊬TΨ and ⊬T¬Ψ,
(which is sometimes called “incompleteness”)
because provability in T
is computationally undecidable [Church 1936, Turing 1936].
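The contrast between enumerating proofs and deciding provability can be sketched in Python. The rewrite system below is a made-up toy (its axiom and rules are assumptions for illustration, not Direct Logic): its theorems can be listed breadth-first, but deciding that a given string is *not* a theorem may require an unbounded search.

```python
from collections import deque

def theorems(axiom, rules, limit):
    """Enumerate derivable strings breadth-first (computational enumerability)."""
    seen, queue, found = {axiom}, deque([axiom]), []
    while queue and len(found) < limit:
        s = queue.popleft()
        found.append(s)
        for rule in rules:
            for t in rule(s):
                if t not in seen and len(t) <= 12:  # keep the toy search finite
                    seen.add(t)
                    queue.append(t)
    return found

# A toy system: axiom "I"; append "U" after a trailing "I", or double the string.
rules = [
    lambda s: [s + "U"] if s.endswith("I") else [],
    lambda s: [s + s],
]
print(theorems("I", rules, 5))  # ['I', 'IU', 'II', 'IUIU', 'IIU']
```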

Information Invariance is a
fundamental technical goal of logic consisting of the following:
  1. Soundness of inference: information is not increased by inference
  2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance
even when information is inconsistent
using inconsistency robust inference.
But that mathematics is inferentially undecidable (“incomplete”)
with respect to Ψ above
does not mean “incompleteness”
with respect to the information that can be inferred
because it is provable in mathematics that ⊬TΨ and ⊬T¬Ψ.

The full paper is published at the following location:
Mathematics self-proves its own Consistency


OK, I am willing to play

OK, I am willing to play this game as long as I don't have to prove anything. There are experts for that. One of my favorites is the model theorist Wilfrid Hodges. Here is an essay that does two things: it introduces a notion of before and after to mathematics, as well as the notion of interpretation. Along the way it introduces most of model theory in a short read.

A book by Hodges, Building Models by Games, develops the idea of constructive proof using notions of consistency and games. Unfortunately, there is not much on line except Forcing in Model Theory. I take this to be an example of mathematics proving itself.

Lastly, how does mathematics manage to prove itself if self reference is such a big problem? Could it be because model theory is loaded with cardinals, ordinals, countable, and uncountable numbers?

Edit: I almost forgot to mention. There is a nascent theory of a unity of logic and algebra, as well as the territory in between, in the chapter on existentially closed groups (p. 59).

First-order logic is inadequate for Computer Science

Thanks for the pointers to the work of Hodges, which is well done.

Unfortunately, the work on first-order logic is too limited for Computer Science. For example, that a computer server provides service (i.e., ∃[t∈ℕ]→ ResponseBefore[t]) cannot be proved in a first-order theory.

Proof: The amount of time before a server responds is unbounded (∄[t∈ℕ]→ ⊢ ResponseBefore[t]). In order to obtain a contradiction, suppose that it is possible to prove the theorem that the computer server provides service (∃[t∈ℕ]→ ResponseBefore[t]) in a first-order theory. Therefore the following infinite set of sentences is inconsistent: {¬⌈ResponseBefore[t]⌉ | t∈ℕ}. By the compactness theorem of first-order logic, it follows that there is a finite subset of the set of sentences that is inconsistent. But this is a contradiction, because all the finite subsets are consistent.
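The compactness step can be spelled out a little further (a sketch in the notation above):

```latex
% Suppose a first-order theory T proved  \exists t\, \mathrm{ResponseBefore}[t].
% Then the theory
%   T \cup \{\, \neg\mathrm{ResponseBefore}[t] \mid t \in \mathbb{N} \,\}
% is inconsistent.  By compactness, some finite subtheory
%   T \cup \{\, \neg\mathrm{ResponseBefore}[t] \mid t \in F \,\},
%   \quad F \subseteq \mathbb{N} \text{ finite},
% is already inconsistent.  But that subtheory is satisfiable by a server
% that responds just after \max F, a contradiction.
```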

Computer Science needs the full induction axiom that is lacking from first-order logic. Unfortunately, techniques developed for cut-down first-order logic are not applicable to systems with full induction.

The notion of Forcing was developed to study problems like a limited version of the Continuum Hypothesis axiomatized in first-order logic. As far as I know, logicians have so far been unable to extend Forcing to systems with the full induction axiom. Consequently, the independence of the Continuum Hypothesis remains an open problem (contrary to popular opinion).

anticipatory dynamics

Of course no one can say anything about a block of time unless they can say something about what has happened before. It is dynamical as well as logical. This is the point of a unity of algebra and logic. A common way to deal with such things is anticipatory dynamics.

I am guessing at what you

I am guessing at what you mean by "full induction axiom", but from the context you seem to want an axiom to replace the dynamic part of the server. The precondition for the server is its last action. This type of thing has been justified by your colleague McCarthy and others. Of course I am talking about situation calculus with inertia. I am sure you know the story of the Yale shooting problem and "Commonsense Reasoning".

But of course there is another way to deal with this that is as old as commonsense itself. The word that I like to use is "state". States are inherent in algebraic structure, and this accounts for the ASM method. States also occur in model theory, but as types, saturation, and stability. Hodges, in the essay I linked above, slips up about halfway through and uses "that word" to explain stability!

Model theory may be first order but it is also algebraic and dynamic.

State models of computation are inadequate for concurrency

Unfortunately state models of computation are inadequate for concurrency. See What is computation? Actor Model versus Turing's Model, in which it is proved that state models don't work for concurrency. McCarthy and Hayes defined a situation as "the state of the entire universe at a point in time." Instead, the Actor Model has configurations in which there are dynamic messages in transit that arrive at some indeterminate point in the future.
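A toy Python sketch of the configuration idea (an assumed example, not Hewitt's formalism): two messages are in transit to a counter, and the outcome depends on an arrival order that no state of the recipient determines. The sketch merely enumerates the possible orders; it does not claim to capture indeterminacy itself.

```python
from itertools import permutations

def deliver(order):
    """Apply messages to a counter in the given arrival order."""
    state = 0
    for msg in order:
        state = state + 1 if msg == "inc" else state * 2
    return state

# Each arrival order of the two in-transit messages gives a possible outcome.
outcomes = {deliver(order) for order in permutations(["inc", "double"])}
print(sorted(outcomes))  # [1, 2]
```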

States can be indeterminate

With respect to "state models are inadequate for computer science" above, first this is not an attack on actors, and second there are ways within the concept of state to handle indeterminacy.

State is all about morphism and scale. We can make details go away by grouping them into equivalence classes called SP partitions, and if necessary invoking the axiom of choice. The technique is also called congruence in algebra. Some of this has been carried out in model theory. This is covered under the heading of indiscernibles: chapter 6 in Marker's book.

States can model nondeterminacy but not indeterminacy

States can model nondeterminacy using choice but not indeterminacy as in the Actor Model. See What is computation? Actor Model versus Turing's Model.

Logical Idealism?

If the metaphysical and philosophical commitments implicit in your definition of computation were applied rigorously to the application of actors, what would it look like overall, from the small scale to the large? I doubt that even you could say. Would there be models or states or types? I guess not. How would correctness be demonstrated? Do we rigorously apply your new logic?

Edit: A definition of computation is a fundamental philosophical statement. It would be impolite to name someone's philosophy, but it is about time we had a name for this.

Inconsistency Robustness is the emerging paradigm

Inconsistency Robustness is the emerging paradigm. See Inconsistency Robustness 2014.

Full Induction is not expressible in First-Order Logic

Full induction (which cannot be expressed in first-order logic) is the following:

∀[P:Booleanℕ]→ Inductive[P] ⇨ ∀[n:ℕ]→ P[n]
where Inductive[P] ⇔ P[0] ∧ ∀[n:ℕ]→ (P[n] ⇨ P[n+1])

Since the type NaturalNumber doesn't have a special symbol in mathematics, ℕ is used instead in the above.
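For contrast, first-order arithmetic can only approximate induction by a schema, one axiom instance per formula of the language:

```latex
% First-order induction schema: one instance for each formula \varphi(n).
\bigl(\varphi(0) \,\land\, \forall n\, (\varphi(n) \Rightarrow \varphi(n{+}1))\bigr)
  \;\Rightarrow\; \forall n\, \varphi(n)
% Full induction instead quantifies over all predicates on \mathbb{N},
% including the uncountably many not definable by any first-order formula.
```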



Theory of Cargo Cults

Anthropologist Anthony F. C. Wallace conceptualized 'The Tuka movement' as a revitalization movement.

Peter Worsley's analysis of Cargo cults placed the emphasis on the economic and political causes of these popular movements. He viewed them as "proto-national" movements by indigenous peoples seeking to resist colonial interventions. He observed a general trend away from millenarianism towards secular political organization through political parties and cooperatives.

Theodore Schwartz was the first to emphasize that both Melanesians and Europeans place great value on the demonstration of wealth. "The two cultures met on the common ground of materialistic competitive striving for prestige through entrepreneurial achievement of wealth." Melanesians felt 'relative deprivation' in their standard of living, and thus came to focus on 'cargo' as an essential expression of their personhood and agency.

Peter Lawrence was able to add greater historical depth to the study of cargo cults, and observed the striking continuity in the indigenous value systems from pre-cult times to the time of his study. Kenelm Burridge, in contrast, placed more emphasis on cultural change, and on the use of memories of myths to comprehend new realities, including the 'secret' of European material possessions. His emphasis on cultural change follows from Worsley's argument on the effects of capitalism; Burridge points out these movements were more common in coastal areas which faced greater intrusions from European colonizers.

From Wikipedia Theoretical explanation

Jargon File

cargo cult programming: n.

A style of (incompetent) programming dominated by ritual inclusion of code or program structures that serve no real purpose. A cargo cult programmer will usually explain the extra code as a way of working around some bug encountered in the past, but usually neither the bug nor the reason the code apparently avoided the bug was ever fully understood (compare shotgun debugging, voodoo programming).

The term ‘cargo cult’ is a reference to aboriginal religions that grew up in the South Pacific after World War II. The practices of these cults center on building elaborate mockups of airplanes and military style landing strips in the hope of bringing the return of the god-like airplanes that brought such marvelous cargo during the war. Hackish usage probably derives from Richard Feynman's characterization of certain practices as “cargo cult science” in his book Surely You're Joking, Mr. Feynman! (W. W. Norton & Co, New York 1985, ISBN 0-393-01921-7).

From "cargo cult programming" in The Jargon File.

Tarskian Model Theory is inadequate for Computer Science

Models are very important in science and mathematics.

Tarskian Model Theory was developed to provide a set-theory semantics for first-order logic. Unfortunately, logicians have not developed a way to extend Tarskian Model Theory to deal with ⊢ or to deal with inconsistency robustness.




Apparently there is something going on here that I know nothing about. For instance I have no idea what FOM is.

The book is very rigorous and makes narrow claims like: Robinson forcing produces existentially closed models. The proofs generalize or extend Henkin's construction with a little game semantics.

Perhaps the real problem, or virtue, is that it is not axiomatic. What harm is there in pointing out that something exists already. We don't have to build everything from scratch.


Requiring Instances in proof steps can be self-defeating

Requiring that a concrete instance be provided in each proof step can be self-defeating. For example, in showing that there is no rational number whose square is 2, no concrete instance is provided. Instead, we suppose the existence of a hypothetical rational number whose square is 2 and derive a contradiction.
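For reference, the classical argument runs as follows (a standard derivation, not specific to any one logic):

```latex
% Suppose p/q is in lowest terms and (p/q)^2 = 2.  Then
p^2 = 2q^2
  \;\Rightarrow\; p \text{ is even, say } p = 2k
  \;\Rightarrow\; 4k^2 = 2q^2
  \;\Rightarrow\; q^2 = 2k^2
  \;\Rightarrow\; q \text{ is even},
% contradicting the assumption that p/q was in lowest terms.
% No concrete witness is exhibited at any step.
```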



Is Pedagogical Logic suitable for practical reasoning?

It looks like Pedagogical Logic may not be a suitable foundation for reasoning about practical systems.



Is Pedagogical Logic capable of inference in math?

It looks like Pedagogical Logic may not be a suitable foundation for inference in mathematics.


