Church's fundamental paradox: "Is there such a thing as logic?"

[Church 1934] stated the fundamental paradox as follows:

"in the case of any system of symbolic logic, the set of all provable theorems is [computationally] enumerable... any system of symbolic logic not hopelessly inadequate ... would contain the formal theorem that this same system ... was either insufficient [theorems are not computationally enumerable] or over-sufficient [that theorems are computationally enumerable means that the system is inconsistent]...
This, of course, is a deplorable state of affairs...
Indeed, if there is no formalization of logic as a whole, then there is no exact description of what logic is, for it in the very nature of an exact description that it implies a formalization. And if there no exact description of logic, then there is no sound basis for supposing that there is such a thing as logic."


ontology

From the quote, particularly the part: "if there is no exact description of logic, then there is no sound basis for supposing that there is such a thing as logic", it sounds as if Church would not have been able to give a coherent account of what it would mean for there to be "such a thing as logic".

Here, too:

for it is in the very nature of an exact description that it implies a formalization

He used "exact description" as weasel words that apparently mean nothing more than "a formalization".

He may as well have said "it is the nature of a formalization that it implies a formalization exists".

Curry's paradox does not involve negation (or Excluded Middle)

Curry’s Paradox [Curry 1941] does not involve negation (or Excluded Middle): without using either, its argument shows that every proposition Ψ is provable.

Suppose that Ψ:Proposition.
CurryΨ:Proposition ≡ Fix[fΨ]   // note that this fixed point does not exist in the typed notation of mathematics
        where  fΨ[Φ:Proposition]:Proposition ≡ Φ⇒Ψ
1) CurryΨ ⇔ (CurryΨ⇒Ψ)   // fixed point
2) CurryΨ ⇒ CurryΨ       // idempotency
3) CurryΨ ⇒ (CurryΨ⇒Ψ)   // substituting 1) into 2)
4) CurryΨ ⇒ Ψ            // contraction
5) CurryΨ                // substituting 1) into 4)
6) Ψ                     // chaining 4) and 5)
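
For readers who like to see the argument operationally, here is a sketch of the same derivation under the Curry-Howard correspondence (my own rendering in Haskell, not notation from the comment above): the fixed point Fix[fΨ] that does not exist in the typed notation of mathematics becomes a recursive type with a negative occurrence, and steps 4) through 6) become ordinary well-typed terms, yielding an inhabitant of an arbitrary type a without any use of negation.

-- A sketch of Curry's paradox under Curry-Howard (illustrative names).
-- A value of type Curry a is exactly a proof of (Curry a ⇒ a), i.e. the
-- fixed point CurryΨ ⇔ (CurryΨ ⇒ Ψ) from step 1).
newtype Curry a = Curry { unCurry :: Curry a -> a }

-- Step 4): CurryΨ ⇒ Ψ, by contraction: apply the wrapped function to the
-- value that wraps it.
step4 :: Curry a -> a
step4 c = unCurry c c

-- Step 5): CurryΨ, by folding step 4) back through the fixed point.
step5 :: Curry a
step5 = Curry step4

-- Step 6): Ψ, i.e. an inhabitant of any type a. It typechecks; evaluating
-- it diverges, which is how the paradox surfaces computationally.
step6 :: a
step6 = step4 step5

The only suspect ingredient is the self-referential type itself; systems that restrict recursive types to positive occurrences reject the newtype above and so block the derivation.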

Formal Definition of Logic

The problem is: how do you define a formal system? If a formal system is defined in a human language (like English), it is by definition imprecise, since the meaning of an English word is simply its use within language games (Wittgenstein). At some point we are reduced to informally defining what implication, conjunction, and disjunction are.

Computation, on the other hand, is definable. You can build a machine, and it exists as a formal definition, independent of language or interpretation of words. In this regard a physical machine (say a PC, for example) is a better formal standard than the lambda calculus, because the calculus is only described in human language rather than by a physical implementation.

This leads to an interesting question: what is the simplest physical implementation capable of expressing the lambda calculus? Something like an SKI machine. A written description of an SKI machine would not be a formal definition, for the reasons above, but an actual piece of hardware might be, and something simple like an SKI machine would make a better formal standard than a PC, as it is much simpler and omits all the irrelevant details.
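
As a rough software illustration (not a hardware design; the names here are mine), the entire rewrite system such an SKI machine would have to implement fits in a few lines of Haskell:

-- Minimal SKI terms and leftmost-outermost reduction (illustrative sketch).
data Term = S | K | I | App Term Term deriving Show

-- Rewrite rules:  I x → x;  K x y → x;  S f g x → f x (g x)
step :: Term -> Maybe Term
step (App I x)                 = Just x
step (App (App K x) _)         = Just x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
step (App f x)                 = case step f of
                                   Just f' -> Just (App f' x)
                                   Nothing -> App f <$> step x
step _                         = Nothing

-- Iterate to normal form (may diverge, like the machine it models).
normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- Example: S K K behaves as the identity combinator, so S K K I reduces to I.
example :: Term
example = normalize (App (App (App S K) K) I)

The sketch only shows how little machinery is involved; whether a physical realization of these three rules counts as a formal definition is the question raised above.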

I think you can define logic formally. The formal definition is something like a Prolog interpreter (but with sound unification, complete search, and complete inference) and the physical hardware it runs on.
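
To make "sound unification" concrete: standard Prolog omits the occurs check by default, which is precisely what makes its unification unsound. A minimal sketch of the sound version (illustrative names only, nowhere near a full interpreter) might look like this:

import qualified Data.Map as Map
import Control.Monad (foldM)

data Tm = Var String | Fun String [Tm] deriving Show
type Subst = Map.Map String Tm

-- Follow variable bindings until a non-variable or an unbound variable.
walk :: Subst -> Tm -> Tm
walk s (Var v) = maybe (Var v) (walk s) (Map.lookup v s)
walk _ t       = t

-- Occurs check: does variable v occur in term t under substitution s?
occurs :: Subst -> String -> Tm -> Bool
occurs s v t = case walk s t of
  Var w      -> v == w
  Fun _ args -> any (occurs s v) args

-- Sound unification: fails (Nothing) instead of building cyclic terms.
unify :: Subst -> Tm -> Tm -> Maybe Subst
unify s a b = case (walk s a, walk s b) of
  (Var v, Var w) | v == w         -> Just s
  (Var v, t) | not (occurs s v t) -> Just (Map.insert v t s)
  (t, Var v) | not (occurs s v t) -> Just (Map.insert v t s)
  (Fun f as, Fun g bs)
    | f == g && length as == length bs ->
        foldM (\acc (x, y) -> unify acc x y) s (zip as bs)
  _                               -> Nothing

-- Example: unifying X with f(X) fails here, whereas default Prolog
-- "succeeds" by constructing an infinite term.
occursCheckDemo :: Maybe Subst
occursCheckDemo = unify Map.empty (Var "X") (Fun "f" [Var "X"])   -- Nothing

Complete search and complete inference are requirements on the interpreter's proof strategy (e.g. not committing to Prolog's incomplete depth-first search), not on unification, so they are not shown here.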

Computability Logic

Computability Logic may be of interest to you.

Mathematics self-proves that it is open

Church's paradox can be reformulated as a rigorous mathematical theorem that resolves the paradox.

Mathematics proves that it is open in the sense that it can prove that its theorems cannot be provably computationally enumerated:

Theorem. ⊢ Mathematics is Open
Proof. Suppose to obtain a contradiction that it is possible to prove closure, i.e., there is a provably computable total procedure Proof such that it is provable that
⊢p Ψ ⇔ ∃[i:ℕ]→ Proof[i]=p   // "⊢p Ψ" means that p is a proof of Ψ
As a consequence of the above, there is a provably computable total procedure ProvableComputableTotal that enumerates the provably computable total procedures, which can be used in the implementation of the following procedure:
Diagonal[i] ≡ (ProvableComputableTotal[i])[i]+1
However,
* Diagonal is a provably computable total procedure, because it is implemented using only provably computable total procedures, and therefore appears in the enumeration ProvableComputableTotal.
* Diagonal does not appear in the enumeration ProvableComputableTotal, because Diagonal differs from ProvableComputableTotal[i] at argument i for every i:ℕ, including the index at which Diagonal itself would have to appear.
The above contradiction completes the proof.
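
The diagonal step is easy to see in code. Here is a hedged illustration (the names and the placeholder enumeration are mine; in the proof above the enumeration is ProvableComputableTotal): whatever total enumeration is plugged in, diagonal is total yet disagrees with the i-th enumerated procedure at argument i, so it cannot itself appear in the enumeration.

-- Illustration of the diagonal construction (placeholder enumeration).
enumerate :: Integer -> (Integer -> Integer)
enumerate i = \n -> i * n   -- stand-in for any claimed-complete enumeration of total procedures

-- Diagonal[i] ≡ (ProvableComputableTotal[i])[i] + 1
diagonal :: Integer -> Integer
diagonal i = enumerate i i + 1

-- diagonal is total whenever every enumerate i is, yet it differs from
-- enumerate i at argument i for every i, so it equals none of them:
-- the contradiction used in the proof above.
disagreesAt :: Integer -> Bool
disagreesAt i = diagonal i /= enumerate i i   -- True for every i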

[Franzén 2004] argued that mathematics is inexhaustible because of inferential undecidability of mathematical theories. The above theorem that mathematics is open provides another independent argument for the inexhaustibility of mathematics.

"mathematics"

Hofstadter in GEB brought a whole generation into a popular, basically demystifying, fundamentally syntactic and computational understanding of what mathematicians are doing when they are formal, or at least formalizable. (Socially, "mathematics" is a bigger topic.)

I think this is a good thing but it also, in retrospect, makes the foundational crisis of the early 20th century look a bit bombastic and misguided.