## Continuation of Discussion: "Mathematics self-proves its own Consistency (contra Gödel et al.)"

Drupal limitations truncated the previous ongoing discussion, so I have re-posted here. BTW, I would like to thank LtU for the previous discussion, which was very helpful in improving the current version.

Some readers might be interested in the following abstract from Mathematics self-proves its own Consistency:

```                   Mathematics self-proves its own Consistency
(contra Gödel et al.)

Carl Hewitt
http://carlhewitt.info

That mathematics is thought to be consistent justifies the use of Proof by Contradiction.
In addition, Proof by Contradiction can be used to infer the consistency of mathematics
by the following simple proof:
The self-proof is a proof by contradiction.
Suppose to obtain a contradiction, that mathematics is inconsistent.
Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
Consequently, both Φ and ¬Φ are theorems
that can be used in the proof to produce an immediate contradiction.
Therefore mathematics is consistent.
The above theorem means that the assumption of consistency
is deeply embedded in the structure of classical mathematics.

The above proof of consistency is carried out in Direct Logic [Hewitt 2010]
(a powerful inference system in which theories can reason about their own inferences).
Having a powerful system like Direct Logic is important in computer science
because computers must be able to carry out all inferences
(including inferences about their own inference processes)
without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent
because of contradiction with the result of Gödel et al. that
“if mathematics is consistent, then it cannot infer its own consistency.”
The resolution is not to allow the self-referential propositions
that are used in the proof by Gödel et al.
that mathematics cannot self-prove its own consistency.
This restriction can be achieved by using type theory
in the rules for propositions
so that self-referential propositions cannot be constructed
because fixed points do not exist.
Fortunately, self-referential propositions
do not seem to have any important practical applications.
(There is a very weak theory called Provability Logic
that has been used for self-referential propositions coded as integers,
but it is not strong enough for the purposes of computer science.)
It is important to note that disallowing self-referential propositions
does not place restrictions on recursion in computation,
e.g., the Actor Model, untyped lambda calculus, etc.

The self-proof of consistency in this paper
does not intuitively increase our confidence in the consistency of mathematics.
In order to have an intuitively satisfactory proof of consistency,
it may be necessary to use Inconsistency Robust Logic
(which rules out the use of proof by contradiction, excluded middle, etc.).
Existing proofs of consistency of substantial fragments of mathematics
(e.g. [Gentzen 1936], etc.) are not Inconsistency Robust.

A mathematical theory is an extension of mathematics
whose proofs are computationally enumerable.
For example, group theory is obtained
by adding the axioms of groups to Direct Logic.
If a mathematical theory T is consistent,
then it is inferentially undecidable,
i.e. there is a proposition Ψ such that
⊬TΨ and  ⊬T¬Ψ,
(which is sometimes called “incompleteness”)
because provability in T
is computationally undecidable [Church 1936, Turing 1936].

Information Invariance is a
fundamental technical goal of logic consisting of the following:
1. Soundness of inference: information is not increased by inference
2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance
even when information is inconsistent
using inconsistency robust inference.
But that mathematics is inferentially undecidable (“incomplete”)
with respect to Ψ above
does not mean “incompleteness”
with respect to the information that can be inferred
because it is provable in mathematics that ⊬TΨ and ⊬T¬Ψ.
```
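
The inner step of the self-proof (that a proposition Φ together with ¬Φ yields an immediate contradiction) is uncontroversial and can be checked in any proof assistant; here is a sketch in Lean (my formalization, not Direct Logic syntax):

```lean
-- If a proposition and its negation both hold, we obtain False.
theorem immediate_contradiction (Φ : Prop) (h : Φ) (hn : ¬Φ) : False :=
  hn h
```

Note that this certifies only the object-level step; the contested move is discharging the assumption "mathematics is inconsistent" inside mathematics itself, which is exactly where the restriction on self-referential propositions comes in.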

The full paper is published at the following location:
Mathematics self-proves its own Consistency

### OK, I am willing to play

OK, I am willing to play this game as long as I don't have to prove anything. There are experts for that. One of my favorites is the model theorist, Wilfrid Hodges. Here is an essay that does two things. It introduces a notion of before and after to mathematics; really, as well as the notion of interpretation. Along the way it introduces most of model theory in a short read.

A book by Hodges, Building Models by Games, develops the idea of constructive proof using notions of consistency and games. Unfortunately, there is not much on line except Forcing in Model Theory. I take this to be an example of mathematics proving itself.

Lastly, how does mathematics manage to prove itself if self reference is such a big problem? Could it be because model theory is loaded with cardinals, ordinals, countable, and uncountable numbers?

Edit: I almost forgot to mention. There is a nascent theory of a unity of logic and algebra as well as the territory in between in the chapter on existentially closed groups, (p. 59).

### First-order logic is inadequate for Computer Science

Thanks for the pointers to the work of Hodges, which is well done.

Unfortunately, the work on first-order logic is too limited for Computer Science. For example, that a computer server provides service (i.e., ∃[t∈ℕ]→ ResponseBefore[t]) cannot be proved in a first-order theory.

Proof: The amount of time before a server responds is unbounded (∄[t∈ℕ]→ ⊢ ResponseBefore[t]). In order to obtain a contradiction, suppose that it is possible to prove the theorem that the computer server provides service (∃[t∈ℕ]→ ResponseBefore[t]) in a first-order theory. Therefore the following infinite set of sentences is inconsistent: {¬⌈ResponseBefore[t]⌉ | t∈ℕ}. By the compactness theorem of first-order logic, it follows that there is a finite subset of the set of sentences that is inconsistent. But this is a contradiction, because (the response time being unbounded) all the finite subsets are consistent.
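
The unbounded-but-finite behaviour the proof turns on can be illustrated with a toy simulation (a hedged sketch, not Hewitt's formalism; the coin-flip server and `response_time` are my invention). Every run responds after finitely many steps, yet no single bound t covers all runs, which is why each finite subset of {¬ResponseBefore[t] | t∈ℕ} is satisfiable:

```python
import random

def response_time(seed):
    """Steps before the server responds: finite on every run,
    but with no uniform bound across runs."""
    rng = random.Random(seed)
    t = 0
    while rng.random() < 0.5:  # keep delaying with probability 1/2
        t += 1
    return t
```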

Computer Science needs the full induction axiom that is lacking from first-order logic. Unfortunately, techniques developed for cut-down first-order logic are not applicable to systems with full induction.

The notion of Forcing was developed to study problems like a limited version of the Continuum Hypothesis axiomatized in first-order logic. As far as I know, logicians have so far been unable to extend Forcing to systems with the full induction axiom. Consequently, the independence of the Continuum Hypothesis remains an open problem (contrary to popular opinion).

### anticipatory dynamics

Of course no one can say anything about a block of time unless they can say something about what has happened before. It is dynamical as well as logical. This is the point of a unity of algebra and logic. A common way to deal with such things is anticipatory dynamics.

### I am guessing at what you

I am guessing at what you mean by "full induction axiom", but from the context you seem to want an axiom to replace the dynamic part of the server. The precondition for the server is its last action. This type of thing has been justified by your colleague McCarthy and others. Of course I am talking about situation calculus with inertia. I am sure you know the story of the Yale shooting problem and "Commonsense Reasoning".

But of course there is another way to deal with this that is as old as commonsense itself. The word that I like to use is "state". States are inherent in algebraic structure and this accounts for the ASM method. States also occur in model theory, but as types, saturation, and stability. Hodges, in the essay I linked above, slips up about halfway through and uses "that word" to explain stability!

Model theory may be first order but it is also algebraic and dynamic.

### State models of computation are inadequate for concurrency

Unfortunately state models of computation are inadequate for concurrency. See What is computation? Actor Model versus Turing's Model, in which it is proved that state models don't work for concurrency. McCarthy and Hayes defined a situation as "the state of the entire universe at a point in time." Instead, the Actor Model has configurations in which there are dynamic messages in transit that arrive at some indeterminate point in the future.

### States can be indeterminate

With respect to "state models are inadequate for computer science" above, first this is not an attack on actors, and second there are ways within the concept of state to handle indeterminacy.

State is all about morphism and scale. We can make details go away by grouping them into equivalence classes called SP partitions, and if necessary invoking the axiom of choice. The technique is also called congruence in algebra. Some of this has been carried out in model theory. This is covered under the heading of indiscernibles: chapter 6 in Marker's book.
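
As a concrete sketch of SP partitions (my notation and example, not from the thread): a partition of a machine's state set has the substitution property when the transition function maps every block into a single block, so blocks can serve as merged states. Assuming states are dict keys and `delta` is a deterministic transition function:

```python
def has_sp(delta, blocks):
    """True if the partition 'blocks' has the substitution property
    for the transition function 'delta' (blocks map into blocks)."""
    block_of = {s: i for i, b in enumerate(blocks) for s in b}
    return all(len({block_of[delta[s]] for s in b}) == 1 for b in blocks)
```

For example, with delta = {0: 2, 1: 2, 2: 0, 3: 1}, the partition [{0, 1}, {2, 3}] is SP, while [{0, 2}, {1, 3}] is not.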

### States can model nondeterminacy but not indeterminacy

States can model nondeterminacy using choice but not indeterminacy as in the Actor Model. See What is computation? Actor Model versus Turing's Model

### Logical Idealism?

If the metaphysical and philosophical commitments implicit in your definition of computation were applied rigorously to the application of actors, what would it look like overall from the small scale to the large? I doubt if even you could say. Would there be models or states or types? I guess not. How would correctness be demonstrated? Do we rigorously apply your new logic?

Edit: A definition of computation is a fundamental philosophical statement. It would be impolite to name someone's philosophy, but it is about time we had a name for this.

### Inconsistency Robustness is the emerging paradigm

Inconsistency Robustness is the emerging paradigm. See Inconsistency Robustness 2014.

### Full Induction is not expressible in First-Order Logic

Full induction (which cannot be expressed in first-order logic) is the following:

∀[P:Boolean]→ Inductive[P] ⇨ ∀[n:ℕ]→ P[n]
where Inductive[P] ⇔ P[0] ∧ ∀[n:ℕ]→ (P[n] ⇨ P[n+1])

Since the type NaturalNumber doesn't have a special symbol in mathematics, ℕ is used instead in the above.
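
As stated, the schema quantifies over all predicates P, i.e. it is second-order. In a system with higher-order quantification it is directly provable; a sketch in Lean (my rendering, not Direct Logic syntax):

```lean
-- Second-order induction over ℕ, with the predicate P explicit.
theorem full_induction (P : Nat → Prop)
    (h0 : P 0) (hstep : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ n ih => exact hstep n ih
```

A first-order theory can only approximate this with an axiom schema, one instance per definable formula, which is the gap the comment is pointing at.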

### Cargo Cult Science

Do you summon something by posting bits of
wisdom? Looks like a cargo cult to me:

Bye

BTW: This makes me think of taking a look
at "Remarks on Frazer's Golden Bough", by L.W.

### Theory of Cargo Cults

Anthropologist Anthony F. C. Wallace conceptualized 'The Tuka movement' as a revitalization movement.

Peter Worsley's analysis of Cargo cults placed the emphasis on the economic and political causes of these popular movements. He viewed them as "proto-national" movements by indigenous peoples seeking to resist colonial interventions. He observed a general trend away from millenarianism towards secular political organization through political parties and cooperatives.

Theodore Schwartz was the first to emphasize that both Melanesians and Europeans place great value on the demonstration of wealth. "The two cultures met on the common ground of materialistic competitive striving for prestige through entrepreneurial achievement of wealth." Melanesians felt 'relative deprivation' in their standard of living, and thus came to focus on 'cargo' as an essential expression of their personhood and agency.

Peter Lawrence was able to add greater historical depth to the study of cargo cults, and observed the striking continuity in the indigenous value systems from pre-cult times to the time of his study. Kenelm Burridge, in contrast, placed more emphasis on cultural change, and on the use of memories of myths to comprehend new realities, including the 'secret' of European material possessions. His emphasis on cultural change follows from Worsley's argument on the effects of capitalism; Burridge points out these movements were more common in coastal areas which faced greater intrusions from European colonizers.

### Jargon File

cargo cult programming: n.

A style of (incompetent) programming dominated by ritual inclusion of code or program structures that serve no real purpose. A cargo cult programmer will usually explain the extra code as a way of working around some bug encountered in the past, but usually neither the bug nor the reason the code apparently avoided the bug was ever fully understood (compare shotgun debugging, voodoo programming).

The term ‘cargo cult’ is a reference to aboriginal religions that grew up in the South Pacific after World War II. The practices of these cults center on building elaborate mockups of airplanes and military style landing strips in the hope of bringing the return of the god-like airplanes that brought such marvelous cargo during the war. Hackish usage probably derives from Richard Feynman's characterization of certain practices as “cargo cult science” in his book Surely You're Joking, Mr. Feynman! (W. W. Norton & Co, New York 1985, ISBN 0-393-01921-7).

### Tarskian Model Theory is inadequate for Computer Science

Models are very important in science and mathematics.

Tarskian Model Theory was developed to provide a set-theory semantics for first-order logic. Unfortunately, logicians have not developed a way to extend Tarskian Model Theory to deal with ⊢ or to deal with inconsistency robustness.

### Feeding the Trolls

Oh, somebody has seen the name Hodges on FOM. And please note
the book was already around in 1985. But I am not sure whether
"mathematics proves itself" hits the nail on the head.

One could even go on and say mathematical logic is what logicians
do. Period. But let's face it: in the present case the subject
matter is the axiomatic method.

Forcing was used by Cohen to show independence of some axioms
from other axioms. This is done by constructing models. So if
you find models M1 and M2, and if you have for your axiom A,
M1[A]=1 and M2[A]=0, then you know your axiom is independent
from the rest of the axioms.

The model construction can also be used for relative consistency
proofs. Namely if you find a model M, and if you have a theory T,
M[T]=1, then T is consistent relative to the consistency of
your method of finding M. But well its not an absolute
consistency.

So instead of installing a strawman and trying to stir a new
shitstorm based on the same old smelly direct logic paper,
reiterating over and over that you want to help computer science,
why not do something really innovative?

Here is an example:
Investigations on a Pedagogical Calculus of Constructions
http://arxiv.org/abs/1203.3568

So instead of magically expecting some calculi to be consistent
in general, why not force the end-user to give his intuition
of consistency in every step he makes?

What is the obvious provision against the simplified model
construction of this paper? Has anybody around here built a
verification system or somesuch around this idea?

Bye

### FOM?

Apparently there is something going on here that I know nothing about. For instance I have no idea what FOM is.

The book is very rigorous and makes narrow claims like: Robinson forcing produces existentially closed models. The proofs generalize or extend Henkin's construction with a little game semantics.

Perhaps the real problem, or virtue, is that it is not axiomatic. What harm is there in pointing out that something already exists? We don't have to build everything from scratch.

### And the Troll Award Goes To

I wrote "subject matter is the axiomatic method". That means
one constructs models to settle questions about
axioms. At least this is what one can read off from
Cohen's independence proofs.

It is of course a different task, although interesting,
to analyze the mathematics that is going on when doing
the construction and try to cast it into an axiomatic
system and/or logic.

But I am not sure what one expects here to get out of
that, and whether it is related to some of the claims
of Hewitt. For example the Henkin construction is helpful
in forming a completeness proof. But completeness doesn't
settle consistency.

Maybe some reverse mathematics would shed some light on
the axiomatics of the meta level that is used in Henkin
constructions, Forcing methods, etc. But I doubt that one
gets more than relative consistency.

Bye

The FOM acronym means Foundations of Mathematics.
But it might also refer to a mailing list.

Description of the list is here:
http://www.cs.nyu.edu/mailman/listinfo/fom

The FOM Archives are here:
http://www.cs.nyu.edu/pipermail/fom/

### FOM Pearls

FOM contains pearls such as:

"I don't think consciousness presents any logical problems
provided the thinker doesn't purport to be able to guarantee
answers to "will I ever" and "are my thoughts consistent".
Humans can observe some of our mental state, but computer
programs can be much more conscious than that. My 1995
Machine Intelligence 15 article "Making robots conscious of
their mental states" is available as
http://www-formal.stanford.edu/jmc/consciousness.html"

[FOM] The Lucas-Penrose Thesis
John McCarthy jmc at steam.Stanford.EDU
Mon Oct 2 16:26:30 EDT 2006
http://www.cs.nyu.edu/pipermail/fom/2006-October/010866.html

### Requiring Instances in proof steps can be self defeating

Requiring that a concrete instance be provided in each proof step can be self-defeating. For example, in showing that there is no rational number whose square is 2, no concrete instance is provided. Instead, we suppose the existence of a hypothetical rational number whose square is 2 and derive a contradiction.

### Proof by Contradiction not Pedagogical

Judging from the Mitchel & Colson paper, neither ex falso quodlibet nor double negation elimination is available in pedagogical logic. So I guess the strong form of reductio ad absurdum, i.e.

G, ~A |- f
---------
G |- A

is not available. Whether there is some other issue that prevents a proof, I don't know. But I guess a sufficient constructive proof will go through. You only need to show:

2*a^2 <> b^2 ∨ a > b

Try a form of induction on the binary representation of numbers. The value b can be either 2*k or 2*k+1. The former leads to using the inductive hypothesis 2*k^2 <> a^2; the latter is directly satisfied by a remainder consideration. But I don't have a proof certificate at hand.
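
The claim that 2*a^2 never equals b^2 for positive integers can at least be sanity-checked by finite enumeration (a quick check, not a proof; the function name and bound are mine):

```python
def no_rational_sqrt2(bound):
    """Check 2*a^2 != b^2 for all positive a, b below 'bound',
    i.e. no rational b/a with square 2 in that range."""
    return all(2 * a * a != b * b
               for a in range(1, bound) for b in range(1, bound))
```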

### Is Pedagogical Logic suitable for practical reasoning?

It looks like Pedagogical Logic may not be a suitable foundation for reasoning about practical systems.

I am not a native English speaker. There are so many
readings of the words "suitable", "foundation",
"reasoning", and "practical systems" that I can't answer
right now from my point of view.

Could you narrow down your question a little bit? The
word "reasoning" alone has so many connotations. Do you
mean deductive(*) or inductive reasoning? Are you satisfied
with bounded reasoning, or are you interested in the
mathematical concept of unbounded reasoning? Etc.

Maybe the meaning you are interested in is evident from
your context. But I don't know exactly the context. You
would also need to narrow down the other terms.

For example the term "suitable": do you allow or don't you
allow translations, like the double negation translation from
Gödel? Do you require a logicist logic, where you don't
need extras to be "suitable", or do you allow adding
a set theory/arithmetic etc. and still consider it a
"suitable" foundation?

As far as I know, paraconsistent logics don't restrict
one much if translations and extras are allowed. Concerning
translations you might look up Gödel's Dialectica interpretation,
which targets Gödel's system T. Pedagogical logic can also work
towards Gödel's system T, but I would need to reconsult the
paper to see exactly what the pitfalls are.

Maybe you might like the following read:
Gödel’s system T as a precursor of modern
type theory Gilles Dowek
http://who.rocq.inria.fr/Gilles.Dowek/Philo/godel.pdf

I personally have a lot of reservations about foundational
questions with an undertone of whether there is the one right
foundation. I feel reductionism is never unique, and
computer science itself champions this by exhibiting a
plethora of formalisms that can all capture the same concept.
Take for example the concept of "partial recursive
function"; we have:

- Turing machines
- While programs
- Partial recursive definitions
- Etc.
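
One way to see why these formalisms coincide on the partial recursive functions is the unbounded minimization (mu) operator, which a while-program captures directly (a textbook sketch, not tied to any poster's system):

```python
def mu(f):
    """Least n >= 0 with f(n) == 0; diverges if no such n exists,
    which is exactly where partiality enters."""
    n = 0
    while f(n) != 0:
        n += 1
    return n
```

For example, mu(lambda n: n * n - 9) returns 3.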

So I guess "reasoning about practical systems" will anyway
never have a unique answer. A practical alternative to system
T that I am currently interested in is constraint logic
programming. It shares some features of logic programming.
For example negation as failure can be seen in a kind of
paraconsistent logic context.

Some forms of negation as failure demand only the excluded
middle for equality (the =/2 predicate in Prolog). They don't
demand excluded middle for defined predicates (user predicates
in Prolog). Similarly in constraint programming I have the
feeling that excluded middle is only needed for the constraints.
So there is even a kind of paraconsistency in constraint
programming, when your constraint system allows predicate
definitions (**).
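
The negation-as-failure reading mentioned here can be sketched in a few lines (a toy fact base; the names are mine, and real Prolog negation is of course richer): a goal's negation succeeds merely because the goal is absent, not because its classical negation is provable.

```python
# Toy negation-as-failure over a finite fact base.
facts = {("edge", "a", "b"), ("edge", "b", "c")}

def holds(goal):
    return goal in facts

def naf(goal):
    # \+ goal in Prolog: succeeds iff 'goal' cannot be derived.
    return not holds(goal)
```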

But it seems that some Java-based constraint solving libraries
leave out the definitional aspect of constraints and predicates.
For example a library such as Choco doesn't speak about it:

On the other hand approaches such as the SWI-Prolog CLP(FD)
library or I guess also in Picat, it is easily possible to
mix and match predicate definitions and constraints.

So let's wait another 10 years for a little more pervasive
tool set, to which you can ascribe some theoretical underpinning
which might even be pedagogical.

Bye

(*) You can map abductive reasoning easily to deductive
reasoning, at least in classical logic. Just observe
G, A |- B iff G, ~B |- ~A.
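
The footnote's equivalence is classical contraposition; its propositional core can be checked exhaustively over all valuations (a trivial sketch; `implies` and the flag name are mine):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# (A -> B) and (~B -> ~A) agree on every valuation.
contraposition_ok = all(
    implies(a, b) == implies(not b, not a)
    for a, b in product([False, True], repeat=2)
)
```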

(**) This holds easily for non-recursive definitions,
since one has quickly for a non-recursive definition D,
if T is consistent then T, D is also consistent.

### Is Pedagogical Logic capable of inference in math?

It looks like Pedagogical Logic may not be a suitable foundation for inference in mathematics.