Strong Mathematical Foundations for Computer Science

Mathematical foundations of Computer Science (CS) require great power as well as extreme rigor because of the requirement that computer systems be able to carry out all reasoning including reasoning about their own reasoning processes. The mathematical foundations of CS have important economic and security consequences.

Following a suggestion of [Scott 1967], mathematical foundations for Computer Science (CS) should be built on *both* types *and* sets:
* “there is only one satisfactory way of avoiding the paradoxes: namely, the use of some form of the theory of types... the best way to regard Zermelo's theory is as a simplification and extension of Russell's ...simple theory of types. Now Russell made his types explicit in his notation and Zermelo left them implicit. It is a mistake to leave something so important invisible...”
* “As long as an idealistic manner of speaking about abstract objects is popular in mathematics, people will speak about collections of objects, and then collections of collections of ... of collections. In other words set theory is inevitable.” [emphasis in original]

A natural way to proceed is to construct sets from characteristic functions by composing function spaces on ℕ (as axiomatised by Dedekind/Peano). Proceeding in this way, every set has finite rank. (The foundations of CS don’t seem to need the higher cardinals.
However, theories of higher cardinals can be axiomatised in the foundations.) Sets in the foundations are categorical in the type Sets◅ℕ▻, which is not a set (meaning that sets have been axiomatised up to isomorphism with a unique isomorphism). Categoricity is important for computer systems that reason model theoretically in order to avoid security holes. See “Formalizing common sense for scalable inconsistency-robust information integration” at Formalizing Common Sense.

Computer Science needs strong foundations that go far beyond the capabilities of first-order logic. For example, that a computer server provides service (i.e., ∃[i∈ℕ]→ ResponseBefore[i] in the Actor Model) cannot be proved in a first-order theory.
Proof: In order to obtain a contradiction, suppose that it is possible to prove that a computer server provides service (∃[i∈ℕ]→ ResponseBefore[i]) in a first-order theory T. Then the following infinite set of sentences is inconsistent: {⌈¬ResponseBefore[i]⌉ | i∈ℕ}. By the compactness theorem of first-order logic, it follows that there is a finite subset of the set of sentences that is inconsistent. But this is a contradiction, because every finite subset is consistent, since the amount of time before a server responds is unbounded. Consequently:
∄[i∈ℕ]→ ⊢T ResponseBefore[i]
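Spelled out with the finite bound made explicit (a sketch in conventional notation; k names the bound):

```latex
\begin{enumerate}
  \item Suppose $T \vdash \exists i \in \mathbb{N}.\ \mathit{ResponseBefore}[i]$.
  \item Then $\Sigma = \{\neg\,\mathit{ResponseBefore}[i] \mid i \in \mathbb{N}\}$ is inconsistent with $T$.
  \item By compactness, some finite $\Sigma_0 \subseteq \Sigma$ is already inconsistent with $T$;
        let $k$ bound the indices $i$ mentioned in $\Sigma_0$.
  \item But a server that first responds after time $k$ satisfies every sentence of $\Sigma_0$,
        so $\Sigma_0$ is consistent with $T$: contradiction.
\end{enumerate}
```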

For further discussion, see “What is Computation? Actor Model versus Turing’s Model”.

Let Peano be the Peano/Dedekind theory of the natural numbers with the following strong induction axiom:
∀[P:Booleanℕ]→ Inductive[P]⇨∀[i∈ℕ]→ P[i]
where Inductive[P]⇔P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

Of course, there are inferentially undecidable propositions Ψ such that
⊬Peano Ψ, ⊬Peano ¬Ψ, and ⊨ℕ Ψ. But this is not a fundamental defect of the theory Peano (notwithstanding that ℕ is the categorical model of Peano) because, in practice, proving ⊨ℕ Φ requires proving ⊢Peano Φ.

The exact nature of the inferentially undecidable propositions of Peano is currently a matter of great controversy. See “Mathematics Self Proves its own Consistency”.


Types are a better

Types are a better foundation for both computation and math. Sets are a poor foundation for math. Sets focus far too much on incidental structure rather than the theories or functions. Sets don't have nice properties for composing theories, transforming theories, or recognizing equivalent theories.

The recent effort on Homotopy Type Theory, which has been formally verified in Martin-Löf type theory via Coq, is very promising in the direction of unifying math and computation. Rather than isomorphism (which generally doesn't respect expressiveness or modularity properties - incremental local change!), the equivalence property for Homotopy Type Theory is a homotopy - a smooth transition between theories/functions. Read more at: http://homotopytypetheory.org/book/

Actors are among the worst possible foundations for math or CS. They don't have any interesting laws beyond the object capability model. They don't compose. They don't have referential transparency, and don't enable control of determinism. Why even bother?

Types alone are not enough; also need sets as well

Types alone do not make a satisfactory mathematical foundation for Computer Science for the following reasons:

* Obscure and unnatural notation, as in Homotopy Type Theory. See the comments in [Scott 1967] (above) that "set theory is inevitable".
* Lack of a categorical model

It seems I don't have your

It seems I don't have your faith in hand-wavy comments from fifty years ago. And homotopy type theory models categorical models, so I'm not understanding the posited lack of them.

Homotopy Type Theory is not categorical

Homotopy Type Theory is not categorical. Consequently, we don't know what the axioms actually describe.

I'm not seeing the

I'm not seeing the relevance or significance.

The Actor Model is essential to Math Foundations of CS

The Actor Model is essential to mathematical foundations of Computer Science because local and non-local concurrency are fundamental.

You assume that there are no

You assume that there are no other models of non-local concurrency, and that indeterminacy is a necessary property of concurrency.

Cellular automata are both a trivial counter-example and, I posit, much closer to physics.

Cellular automata are an unsatisfactory foundation for CS

Cellular automata are an unsatisfactory foundation for CS because they lack the following capabilities:
* Concurrency
* Modularity
* Scalability
* Organizational structure

Generalized models of

Generalized models of cellular automata have all those properties, and are used in PADS (parallel and distributed simulations).

Even simple, deterministic cellular automata have concurrency. They model many conditions and changes at the same time. Concurrency and indeterminacy are orthogonal properties.

Adding epicycles to cellular automata doesn't help

Adding epicycles to cellular automata doesn't help ;-)

What does generalizing a

What does generalizing a model have to do with 'adding epicycles'?

Actors compose using Organizational Principles

Actors compose using organizational principles. See Actor Model of Computation.

What are your composition

What are your composition operators? What is your set of components, closed under composition? What are your mathematically inductive (compositional) properties for the resulting composite? Are your abstract components (including composites) 'actors' or are they 'organizations'?

Do you use the phrase 'actors compose' with any rigor whatsoever? My impression is that you do not.

Organizational composition goes beyond functional composition

Functional composition operators are not sufficient for organizational composition because of the interactions between sub-organizations. Actors naturally implement Organizations.

Because Organizations are scalable, they can cope with complexity way beyond what is possible with functional composition. There are no functional composition operators for Organizations. Simply having an over-simple rigorous functional compositional theory is inadequate.

There are many composable

There are many composable models other than functions. For example, we can call 'diagrams' composable because we have several standard operators to take two diagrams and construct another diagram, such as by addition or subtraction. And diagrams have useful compositional properties (such as bounding boxes). And composition is certainly not limited to static structures like functions or diagrams. For example, we can have composition based on interactive documents, widgets, constraint models, grammars or regular expressions modeling state machines, publish-subscribe architectures, pipelines, dataflow, workflow/orchestration models, incremental process models, or network overlays.

AFAICT, you aren't using the word 'compose' meaningfully. By which I mean: you don't have any rigorous operational definition for it, functional or otherwise. You seem to be treating it like an empty buzzword. Further, you repeatedly hand-wave about other models being 'inadequate'. When did your work drift from computer science to marketing?

Most people understand how organizations compose

When I said that cellular automata are "inadequate", of course, I meant technically and practically, which is very far from commercial advertising :-)

Most people understand how organizations compose and there is a whole field of Organizational Science. A great deal is understood about how organizations merge, reorganize, and split apart.

However, combining two arbitrary diagrams into a single coherent diagram is not so well understood.

What does organizational

What does organizational science built on humans have to do with the behavior of actors? It seems to me that actors have very different constraints, motivations, and agency from humans. Also, if you believe our alleged understanding of how organizations merge, reorganize, and split apart is relevant in context, please point me to the generic code for applying these operations to your iOrgs.

Just because you don't know how to use cellular automata or functional composition (or other composable models) for large scale, complex, interactive systems doesn't mean they're technically or practically "inadequate".

Just to join the chorus

... what the hell are you talking about? "technically inadequate" or "practically inadequate" are not things which have rigorous definitions.

Maybe I'm just stupid, but if you have a point to make you might try making it understandable for us stupids rather than making grand pronouncements from on high.

Most people understand how organizations compose and there is a whole field of Organizational Science. A great deal is understood about how organizations merge, reorganize, and split apart.

Do you mean "Organizational studies"? In which case we're talking social science... which is not actually a science.

... and then: what the hell does this have to do with CS?

Nonscience

Not to defend grand pronouncements that run to proof by intimidation, but Computer Science isn't altogether a science either (although theoretical computer science is a rather pure branch of mathematics; go figure).

CS must learn from Sociology because Psychology doesn't scale

Computer Science must learn from Sociology and Organizational Science because Psychology doesn't scale.

Homotopy Type Theory doesn't have indeterminacy (concurrency)

Homotopy Type Theory doesn't have indeterminacy, which is required for true concurrency.

Actors have "laws of computation" and a denotational model

Actors have "laws of computation" and a denotational model. See Actor Model of Computation.

Yes, but...

As I said above, actors don't have any interesting laws beyond the object capability model. Not even in your linked article.

The object capability model has a useful set of laws for security or enforcing local invariants. But it isn't so useful for equational reasoning, nor for reasoning about large scale system properties: latency, consistency, runtime update or extension, livelock, deadlock or starvation, behavior during partial failure, resilience, resource control, process control, etc.

Formalization of Object Capability Model follows Actor Model

Formalization of the Object Capability Model is subsequent to the Actor Model, where the laws for computation were first published [Baker and Hewitt 1977].

Irrelevant and irrelevant.

Laws aren't judged by who had them first. And you can't reasonably generalize the limitations of process calculus to other models.

[Hewitt seems to have extracted his comment on process calculi and replayed it below. It's still irrelevant.]

Process calculi equational approach to concurrency is inadequate

The equational approach to formalizing concurrency developed in Process Calculi (although algebraically elegant) has severe limitations by comparison with the denotational semantics of the Actor Model (see Actor Model of Computation) using propositional reasoning (see Formalizing Common Sense).

This is good stuff, guys.

This is good stuff, guys.

Usually, sarcasm doesn't

Usually, sarcasm doesn't come across very well in text. But I feel an exception here. :)

So your response is...

to link to the same article again?

Stop trolling.

Links are to help newbies :-)

It is important to put in links to help newbies :-)
It is not always obvious which reference is most relevant.

Raising the level of discourse on LtU

Any ideas how to raise the level of discourse on LtU?

Cheers,
Carl

Write to future friends.

I almost never get a setup like that. Thanks! This is for others too, not just you.

Any ideas how to raise the level of discourse on LtU?

Depends on the result you want. We can aim for quality, courtesy, amity, humor, discovery, persuasion, etc. I assume you wonder how posting links can be taken as trollish behavior? Posting a link once is communication, but twice or more starts to seem like a broadcast in all-caps by someone not listening to replies posted by others. Repetitive behavior can seem like spam instead of conversation, if nothing else gives your message a human touch.

The listening part is pretty important if you want good reception. Some elements of writing can give an effect weakly like counterparts of in-person behavior, implying consideration or aggression, for example. When you meet someone, not looking them in the eye is disrespectful, but continuous staring is too aggressive. In writing, not acknowledging what someone else says seems oblivious, like a muttering street person, and picking every thing they say apart seems like an attack, like an intellectual bully. I hope a bit of this analogy seems clear: extremes are bad.

Presumably you want things read when you post links. This is more likely to occur if—when asked a question—you reply the best you can right there, in perhaps different words than you have used before, crafted to suit the person and situation now as if you care about others as equals and respect the question. If you want someone to adopt a work of yours, this is the point-of-sale where people skills count. Or, if you wanted friendly human interaction on the internet, this is your chance to foster that.

There are several non-obvious ways to mess up treating others with courtesy. A reader can hear anything you write in a tone of voice very different than one you had in mind. Often, if it's possible to read insulting tone, mockery, or sarcasm, this is what is perceived, as if folks want to be harmed when they have a choice. Using someone's name, for example, is easily seen as patronizing, like addressing a child. Generally, refer to someone else only enough to count as "looking them in the eye" but without creating opportunity to be misconstrued.

A stance of professional indifference is slightly too cold, unless you're in fact really upset and have resorted to being too polite as a signal of sorts. Little is lost by being nice by default.

To raise the level of discourse in content quality, increase quality of content in what you write. Exemplify what you want to see and others might go along. In contrast, being critical encourages a race to the bottom.

Civility first; ditch the sneering and insults.

Nice response!

Cheers,
Carl

Cred increased by getting history and references correct

As another commentator pointed out: your credibility would be increased by getting the history and references correct ;-)

Homotopy Type Theory is inadequate for natural numbers (ℕ)

Homotopy Type Theory is inadequate for the Peano/Dedekind theory of the natural numbers with the following strong induction axiom:
∀[P:Booleanℕ]→ Inductive[P]⇨∀[i∈ℕ]→ P[i]
where Inductive[P]⇔P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

You don't even need Homotopy

You don't even need Homotopy Type Theory to handle that. Coq does it just fine. You define the natural numbers:

Inductive nat : Set :=
  | O : nat
  | S : nat → nat

Then you automatically get the induction principle for natural numbers:

nat_ind : ∀ P : nat → Prop, P O → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n
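For illustration, this generated principle is what ordinary inductive proofs in Coq run on; a minimal sketch (the lemma name plus_zero is chosen here arbitrarily):

```coq
(* Prove n + 0 = n using the induction principle generated for nat.
   The [induction] tactic applies nat_ind behind the scenes. *)
Lemma plus_zero : forall n : nat, n + 0 = n.
Proof.
  induction n as [| n IHn].
  - reflexivity.                       (* base case: 0 + 0 computes to 0 *)
  - simpl. rewrite IHn. reflexivity.   (* step: S n + 0 = S (n + 0), then use IHn *)
Qed.
```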

Coq needs stronger power and improved notation

Coq needs stronger power and improved notation. Unfortunately, the following Coq axiom lacks strength and is somewhat obscure

nat_ind : ∀ P : nat → Prop, P O → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n

by comparison with the following:

∀[P:Booleanℕ]→ Inductive[P]⇨∀[i∈ℕ]→ P[i]
where ∀[P:Booleanℕ]→ Inductive[P]⇔ P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

The lack of power in Coq is the result of being unable to quantify over the uncountably many predicates in Booleanℕ.

It is only obscure because

It is only obscure because you are not familiar with the notation. Coq can quantify over uncountable sets (or in Coq: types) just fine as my translation of your induction principle into Coq shows...

Quantifying over Predicates stronger than quantifying over Sets

Quantifying over Predicates is stronger than quantifying over Sets, e.g., first-order ZFC quantifies over uncountable Sets but cannot quantify over uncountable Predicates.

You might try directly translating the following into Coq to see how close it can come:

∀[P:Booleanℕ]→ Inductive[P]⇨∀[i∈ℕ]→ P[i]
where ∀[P:Booleanℕ]→ Inductive[P]⇔ P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

I already did, see above.

I already did, see the great-grandparent above. The very definition of the natural numbers in Coq gives you that induction principle for free. I can explain the syntax if you want:

∀ P : nat → Prop, P O → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n

First note that Prop is like your Bool, nat is like your ℕ, and (S n) is like your (n+1). [It is possible to use the symbols ℕ and + in Coq, but my definition of the natural numbers happened to use nat and S. To avoid making it even more confusing, let's stick with that.] Therefore nat → Prop is like your Boolℕ, and we can now dissect that line. In Coq:

∀ P : nat → Prop, P O → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n

In your syntax:

∀[P:Booleanℕ]→ P[0]∧∀[n∈ℕ]→ P[n]⇨P[n+1]⇨∀[n∈ℕ]→ P[n]

Reading the Coq statement piece by piece:

* The prefix ∀ P : nat → Prop, says: for all functions P that take a natural number and return a Prop (=~ boolean),
* P O → says: if we have a proof of P 0,
* (∀ n : nat, P n → P (S n)) → says: and we have, for all natural numbers n, a proof that P n implies P (n+1),
* the final ∀ n : nat, P n says: then we get a proof that, for all natural numbers n, P n is true.

Now I have explained the statement ∀ P : nat → Prop, P O → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n. This is not an axiom in Coq; it can be proved from within Coq from more primitive constructs. The proof of this statement in Coq is:

Inductive nat : Set :=
  | O : nat
  | S : nat → nat

That code simultaneously defines the natural numbers and gives us that induction principle.

p.s. The reason Coq has Prop instead of booleans is that this provides a richer notion of truth. A boolean is always true or false, but a Prop can be proved, disproved, or neither yet proved nor disproved. This means that you can state slightly stronger versions of theorems using Prop than using bool that do not depend on something being definitely true or definitely false.

For instance ∀A:Prop,∀B:Prop, A∧B⇨B∧A is true whether or not A and B are definitely true or definitely false: if we have a proof of A and we have a proof of B, then we also have a proof of B and a proof of A. However, for ∀A:Prop, A∨¬A this is not the case, because to prove it we would either have to give a proof of A, or we would have to give a proof of ¬A, and we can't do that for a general Prop. To make it true you'd have to make a weaker statement by using booleans / decidable Props instead of Prop.

Classical logic doesn't have this distinction, and as a result you always have to use booleans, so all statements you make are slightly weaker by default even if they do not need to be (like in the case of ∀A:Prop,∀B:Prop, A∧B⇨B∧A). Therefore the induction principle in Coq that uses Prop has a slightly stronger meaning than one that uses booleans, in the sense that the one with Prop implies the one with booleans but the one with booleans does not imply the one with Prop.
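The constructive proof of the first example takes only a few lines; a minimal sketch:

```coq
(* A /\ B -> B /\ A holds constructively: destruct the proof of the
   conjunction into its two halves and rebuild it in the other order. *)
Lemma and_comm' : forall A B : Prop, A /\ B -> B /\ A.
Proof.
  intros A B [HA HB].
  split.
  - exact HB.
  - exact HA.
Qed.
```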

Coq may not be as strong as Classical Logic.

In Classical Logic, ∀[Φ:Proposition]→Φ∨¬Φ and furthermore ∀[Φ,Ψ:Proposition]→(Φ∧Ψ)⇔(Ψ∧Φ). So it looks like the propositions of classical logic are stronger than those of Coq.

With respect to induction, the important point is that Booleanℕ has uncountably many predicates with the understanding that ∀[P:Booleanℕ,x∈ℕ]→P[x]⇔P[x]=True. Also Booleanℕ allows partial functions so a predicate is not required to be defined for every argument. Coq seems to allow induction only for the countably many predicates that can be defined in Coq (analogous to first-order logic?).

PS. [Church 1936, Turing 1936] famously proved that for a classical theory T whose inference is computationally undecidable:
∃[Ψ:Proposition]→(⊬T Ψ)∧(⊬T ¬Ψ)
Can Coq prove the above Church/Turing theorem?

The second example A or B <=> B or A

The second example A or B <=> B or A works in Coq unchanged. The first example (law of excluded middle) is the one I discussed so yeah that doesn't hold in Coq for Prop, that's kind of the point of being constructive. As I explained, propositions of classical logic aren't stronger, they are weaker in the sense that Coq makes more distinctions. And as I said, you can still reason with the law of excluded middle by using any of the many well known ways to encode classical logic into constructive logic.

If the predicate isn't defined for some n then the induction principle is actually false so I'm not sure how you can claim it still works for partial functions.

Of course Coq only allows induction for the countably many predicates that can be defined in it. That applies to any formal system.

Yes, you can prove Church's theorem in Coq / constructive logic; in fact this has been done. Google will serve you well here. But note that what you stated is (reminiscent of) Gödel's first incompleteness theorem, not Church's theorem (which has also been proved in Coq).

Coq is weaker than Classical Mathematics

Coq is weaker than Classical Mathematics for the following reasons:
* Coq does not have excluded middle
* Mathematical induction in Coq is weaker than classical Peano/Dedekind strong mathematical induction.
* Coq does not carry out the Church/Turing proof of inferential incompleteness for mathematical theories because the proof is not constructive, i.e., the Church/Turing proof does not construct a proposition that is inferentially undecidable. (Please do not confuse this with "Church's Thesis". Reading the Church and Turing 1936 papers will serve you well. On Gödel's work, see “Mathematics Self Proves its own Consistency”.)

PS. Can you give a counterexample to classical Peano/Dedekind mathematical induction as I stated it?

* Coq does not have excluded

* Coq does not have excluded middle

This is irrelevant as I already explained: you can do classical reasoning in Coq just fine using one of the encodings of classical logic into constructive logic.

* Mathematical induction in Coq is weaker than classical Peano/Dedekind strong mathematical induction

Why is it weaker? Haven't I just shown that you can do exactly Peano induction in Coq?

* Coq does not carry out the Church/Turing proof of inferential incompleteness for mathematical theories because the proof is not constructive, i.e, the Church/Turing proof does not construct a proposition that is inferentially undecidable.

I think you (or perhaps I) may be misunderstanding what Church & Turing proved. They proved that there isn't an algorithm for deciding first order logic. This is a statement of the form: for all algorithms that are supposed to decide first order logic, there exists a first order logic sentence that the algorithm can't decide. Given an algorithm, Church & Turing constructed an explicit sentence that the algorithm can't decide (each with their own but ultimately equivalent notion of algorithm; Church with lambda terms and Turing with a Turing machine). There is nothing non-constructive about this.

PS. Can you give a counterexample to classical Peano/Dedekind mathematical induction as I stated it?

If P(0) is undefined then, depending on how your system handles undefinedness, it either leads to a false conclusion or to no conclusion. In either case, the fact that you allow functions P that are not everywhere defined buys you nothing at best.

Coq cannot carry out proof of inferential undecidability

For a classical theory T whose inference is computationally undecidable:
∃[Ψ:Proposition]→(⊬T Ψ)∧(⊬T ¬Ψ)

Proof. Suppose to obtain a contradiction that ¬∃[Ψ:Proposition]→(⊬T Ψ)∧(⊬T ¬Ψ).
It follows that it is possible to computationally decide provability of Φ in T by enumerating proofs until a proof of Φ or of ¬Φ occurs. But this contradicts the assumption that inference in T is computationally undecidable.

The above proof is non-constructive and consequently is not allowed in Coq.

Coq cannot axiomatize ℕ up to isomorphism

Because of the weakness of its induction mechanism, Coq cannot axiomatize the natural numbers (ℕ) up to isomorphism.
However, the following Peano/Dedekind axiom does the job:

∀[P:Booleanℕ]→ Inductive[P]⇨∀[i∈ℕ]→ P[i]
where ∀[P:Booleanℕ]→ Inductive[P]⇔ P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

Coq does not have excluded middle or double negation

Coq does not have excluded middle or double negation. Weird encodings like the following don't count:
forall P:Prop, {P} + {~ P}.

Can the predicate Inductive be defined in Coq?

Can the predicate Inductive be defined in Coq where:
∀[P:Booleanℕ]→Inductive[P]⇔P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

Of course.


What is the definition of Inductive in Coq?

What is the definition of Inductive in Coq?

Come on now, if you can't

Come on now, if you can't be bothered to Google a Coq 101 tutorial, should you really be making sweeping statements about it?

Proof that Inductive cannot be correctly defined in Coq?

Is there a simple proof that predicate Inductive cannot be correctly defined in Coq where:
∀[P:Booleanℕ]→Inductive[P]⇔P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

Inductive[P] can be defined

Inductive[P] can be defined in Coq... this is like asking whether 1+2 can be computed using Javascript. Of course it can! Now you're asking me to *disprove* that it can be defined in Coq?? Anyway, this discussion has gone on long past the point where it was useful. If you want to make claims about Coq, I'd encourage you to actually try to use it or read about it for 2 minutes. I know hardly anything about Coq or constructive mathematics, but all of your questions are answered in even the most introductory material on Coq.

Stonewalling giving definition because impossible in Coq

It took a moment to figure out that the reason you are stonewalling on giving the definition of Inductive is that it is impossible to do so correctly in Coq.

Although I believe the

Although I believe the Wikipedia community has been badly damaged by their Assume Good Faith principle, I do approve of Wikinews's Never assume. Accusing others of stonewalling looks a lot like assuming bad faith; not recommended as a way to win friends and influence people.

Stonewalling using ridicule should be discouraged on LtU

Stonewalling using ridicule should be discouraged as a tactic on LtU. Objectively, looking at the discussion, it looks like Jules was stonewalling using ridicule. This does not necessarily mean that he was acting in bad faith because in the end he professed that he didn't know much about Coq.

I'm not ridiculing you, you

I'm not ridiculing you, you are doing that on your own. Here is how you'd define inductive in Coq syntax:

Definition inductive (P : nat -> Prop) := P 0 /\ ∀ i:nat, P i -> P (i+1).

It's hardly fair to say that my refusal to serve as your personal Google assistant is stonewalling.

Unfortunately, the definition of Inductive in Coq is not correct

Unfortunately, the definition of Inductive that you have provided above is not correct because it defines Inductive for only countably many predicates in (nat -> Prop) as opposed to the correct definition below that defines Inductive for the uncountably many predicates in Booleanℕ:

∀[P:Booleanℕ]→Inductive[P]⇔P[0]∧∀[i∈ℕ]→ P[i]⇨P[i+1]

Since you seem to have trouble clicking on links, I will again mention that it is worth looking at “Mathematics Self Proves its own Consistency” to see why it is important to quantify over all predicates. ;-)

You can in practice only

You can in practice only ever instantiate P to one of the countably many that you can define in Coq, but you can use it to reason about all uncountably many nat -> Prop. Note that this is exactly the same in ZFC, your system, or any other formal system.

Coq induction predicates countable; however predicates are not

Coq induction predicates are countable because they must be definable in Coq. However, there are uncountably many predicates in Booleanℕ.

Only by using the correct definition of Inductive using Booleanℕ is it possible to axiomatize ℕ up to isomorphism.

Unfortunately, the Coq definition allows models of natural numbers with undesirable infinite integers.

Which predicates?

Within the logic of Coq, there are uncountably many Coq predicates. Within some other logic inside which you are reasoning about Coq, there may only be countably many Coq predicates.

Which logic is it that you're using to reason about Coq? If we used Coq to reason about this logic, would we be able to prove that it has uncountably many predicates? Can you show us the proof?

It would be easier to understand what you are saying if you wouldn't expect your readers to do so much of the work for you. Since there are more readers than writers of your comments, it's also sensible to spend more of your time writing to save their time reading.

Coq started as the calculus of inductive constructions

Coq started as the calculus of inductive constructions, but this proved unsustainable so it wandered. At this point, there are more variants of Coq than you can shake a stick at :-( So I confine my comments to classical Coq as the calculus of inductive constructions.

Fair enough

So which predicates do you mean when you say predicates are not countable? And can you sketch how you would prove it using some commonly accepted logic as a meta-logic?

Functions from ℕ into Boolean uncountable via standard proof

Booleanℕ is uncountable by the standard diagonal proof due to Cantor.

If you stick to the

If you stick to the predicative fragment/mode of Coq, then there are models for which his definition really quantifies over all predicates (i.e. nat is modeled by the real natural numbers and the type of functions a->b includes all functions from a to b).

Computable procedures versus all functions

It is important to distinguish between computable procedures (i.e., aType↦anotherType) and all functions (i.e., anotherTypeaType). Both are needed in the mathematical foundations of Computer Science. See Formalizing Common Sense.

Ridiculous behavior should

Ridiculous behavior should also be discouraged on LtU.

For example, we should not make sweeping assertions about subjects we haven't even studied to proficiency, then expect other people to take time out of their busy lives to educate us to the contrary in case we're wrong.

Coq 101

Your claims about Coq have all the force behind them of someone who knows next to nothing about Coq. Consequently, as a debate, this thread is not terribly enlightening. Still, it would be a shame if you left with such a grave misconception about the system. Here you go.

Definition inductive (P:nat → Prop) := P 0 ∧ (∀ i:nat, P i → P (i+1)).
Lemma nat_inductive (P:nat → Prop) : inductive P → (∀ i:nat, P i).

This is, plain as day, the second-order induction axiom for the natural numbers, equivalent to the earlier one spelled out to you in painstaking detail by Jules Jacob. Presumably, you insist that these are unsatisfactory because the set of definable predicates is countable. And yet, the type nat → Prop is provably uncountable.

Lemma uncountable : ¬ ∃ (f:nat → (nat → Prop)), ∀ (P: nat → Prop),
  ∃ (i:nat), (∀ n, P n ↔ (f i) n).
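
For what it's worth, a proof sketch of the induction lemma stated above (assuming the standard library lemma Nat.add_1_r : forall n, n + 1 = S n; the lemma name here is mine):

```coq
Require Import PeanoNat.

(* Sketch: the induction principle follows from Coq's built-in
   structural induction on nat; Nat.add_1_r bridges the (i+1)
   in the hypothesis with the successor S i in the goal. *)
Lemma nat_inductive_sketch (P : nat -> Prop) :
  (P 0 /\ (forall i : nat, P i -> P (i + 1))) ->
  forall i : nat, P i.
Proof.
  intros [H0 HS] i.
  induction i as [| n IH].
  - exact H0.                 (* base case: P 0 *)
  - rewrite <- Nat.add_1_r.   (* turn S n into n + 1 *)
    apply HS. exact IH.
Qed.
```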

You might want to look at the section on logic in the Coq FAQ.
[Edit: fixed typo in uncountable]

Coq induction predicates are countable because definable

Coq induction predicates are countable because they must be constructively definable in Coq. To prove categoricity of the axioms of the natural numbers, it is necessary to perform induction over the uncountably many predicates in Boolean^ℕ.

This can be a tricky business; so it is best to try understanding before insults. ;-)

As you can see, I

As you can see, I anticipated your response when I mentioned that nat → Prop is provably uncountable. You've ignored this. Why? Is my specification of uncountable unsatisfactory?

Is it worth pointing out that people have developed category theory in Coq, and proven that nat is an initial object in the appropriate category? That indeed, a version of the induction principle has been proven that applies to all such initial objects? My suspicion is that you would conveniently ignore this too. (However, I'd be happy to provide pointers on request).

To look at this another way consider the following statement. "Real numbers are countable in classical mathematics because they must be definable in ZFC." This is directly analogous to your first sentence. Mainstream mathematicians would agree that in principle most mathematics is formalizable in ZFC, yet surely they would disagree with this statement.

Is the set of "definable" reals uncountable in DL?

Apologies for the perceived insult. I did not intend it as such, as I thought you would not dispute that your knowledge of Coq is very limited. In return, please do not patronize me by assuming that I do not understand you.

First-order ZFC is an inadequate foundation for Computer Science

For reasons stated in the initial post of this forum topic, first-order ZFC is an inadequate foundation for Computer Science.

Now that Coq* (the many variants of Coq) have abandoned the initial Coq foundation in the calculus of inductive constructions, things have unfortunately become ambiguous. For example, in Coq* does (nat → Prop) mean the computable procedures nat↦Prop or all functions Prop^nat?

Being an initial object of a category may not be as strong as having a categorical axiomatization.

The set of computable reals in Direct Logic is countable.

Your question gets to the

Your question gets to the heart of the matter. Coq is agnostic about what the members of the type (A -> B) are. Which is to say, it assumes little. As a result, the theorems you can prove about functions in Coq are valid in any model satisfying the small set of assumptions. As Matt said, this includes models where (A -> B) is all functions from A to B. It also includes models where (A -> B) is more restricted. In Coq, you aren't able to "tell" which, unless you add more axioms that restrict the set of applicable models.

Coq needs additional notation: functions vs. procedures

Coq needs additional notation so it can distinguish nat↦Prop from Prop^nat. Both are needed in Computer Science.

Acclimation trap

Beware of "this way of doing things is easy once you get used to it". That's true of anything; therefore, if one accepts that not all things are equally easy to use, it must be that the most useful measure of easy-to-use is something other than whether it's possible to get used to it.

Strength of reasoning may be more important than aesthetics

I am more concerned about the strength of the reasoning power of Coq than its aesthetics.

Sure

Sure. I wanted to remark on that particular flaw of thinking while it was fresh, though.

Unfamiliarity can breed contempt ;-)

You make a good point: unfamiliarity can breed contempt ;-)

It's a problem that I encounter often in my own work :-(

In general I agree but in

In general I agree but in this case it's just trivial syntactic differences like this:

Hewitt: ∀[a∈B] → c
Coq: ∀a:B, c

If the claimed advantage of this new mathematical foundation is syntax, then OK.

Axioms versus Coq Definitions

In Direct Logic, a theory T could have the following axiom:

∃[Russell:BooleanPredicate]→ ∀[p:BooleanPredicate]→ Russell[p]⇔¬p[p]

As a consequence, the theory T is classically inconsistent because

Russell[Russell]⇔¬Russell[Russell]

The above inconsistency of T is OK because T can have arbitrary axioms. However, definitions in Coq are supposed to be safe, so some restriction is necessary to rule out the following:

Definition Russell (p : Prop -> Prop) := ¬ (p p)
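
For what it's worth, Coq's type checker already rejects the self-application on its own (a sketch; the Fail vernacular succeeds exactly when the enclosed command fails):

```coq
(* Rejected by the type checker: p has type Prop -> Prop, so the
   argument of p must have type Prop, and the self-application
   p p is ill-typed. *)
Fail Definition Russell (p : Prop -> Prop) : Prop := ~ (p p).
```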

"Object capability" is confusing; use "address" instead

Unfortunately, the term "object capability" is confusing terminology that was introduced after the Actor Model was developed. It is better to speak of the "address" of an Actor that can be used to send it messages.

Address was a bad choice

Address was a bad choice even when Actor Model was developed. It's too easy to confuse with memory addressing, IP addresses, etc. which do not offer any significant secure reasoning properties.

All that you can do with a "capability" is to send it a message

The only thing you can do with an "object capability" is to send it a message. Consequently, "address" is better terminology.

I'm not following your

I'm not following your reasoning.

And we can communicate an object capability via message.

IP and memory addresses are kinds of Actor addresses

The only thing that you can do to a memory address is to send it a Load or Store message. (Some memory systems allow additional messages, e.g., AtomicIncrement.) Of course, there are other things that you can do with a memory address, e.g., add 1 to it.

Likewise, the only thing that you can do to an IP address is to send it a message (i.e. a packet).

In a different Key

As I see it the real issue for math and computer science "foundation" is a choice between atomism and holism. Atomism is Russell's argument for logic as a foundation. The logic that Russell proposes is atomistic and discursive. The alternative is the algebra of logic approach also thought of as holistic. Alberto Coffa has quite a lot to say about this.

It is time that these issues were taken seriously and not selected solely according to one's personality type or politics.

It is worth noting that nearly the whole issue of (in)consistency is due to Russell's atomism and subsequent choice of material implication. Inconsistency is scarcely an issue in algebraic approaches.

Edit: For reference the Stanford Encyclopedia has articles on Russell's Logical Atomism, and The Algebra of Logic Tradition.

Not Russell

The approach you're calling "atomism" isn't due to Russell.

OK, Sure

I neglected to give the whole history of atomism.

Appearances

But such a fundamental misattribution tends to undermine the speaker's credibility. Hence, to be avoided.

Inconsistency is *always* an issue for practical systems

Inconsistency is always an issue for practical systems. The alternative is to pretend to sweep inconsistencies "under the rug" using denial, e.g., restricting language to make inconsistencies "that which cannot be said."

Understand it!

Inconsistency is always an issue for disembodied discursive linguistic systems: systems that consist only of symbols abstracted from any specific coding and decoding entity, in other words anything more abstract than two humans communicating face to face. "The medium is the message."

There are many ways to avoid consistency problems and practical people do it all the time. Generally this means "wholeness" in some sense using situation or environment, and usually this means algebra. Inconsistency arises out of disembodied language. The only fix is to consider complete communicating entities. The language or code is only the symbols passing over the channel. Ultimately it will mean nothing unless the encoder and decoder are properly considered.

Deriving contradictions is a fundamental tool of reasoning

Deriving contradictions is a fundamental tool of reasoning. Each contradiction is a victory of formalization enabling arguments in favor and opposed to be made more rigorously.

Logic?

I am not sure where this discussion goes from here. But I think Russell knew that his system was (in)complete. He shows this by inventing types and recommending their use. A type system is a complete lattice, a form of algebra!

Actually a complete lattice

Actually a complete lattice must be compactly generated to be algebraic. See A Course in Universal Algebra, Stanley Burris and H. P. Sankappanavar, page 17.

Removed

Removed

⊢ in addition to Boolean Equivalences

Actually, we need both inference and Boolean Equivalences. For example, Inconsistency Robust Direct Logic has ⊢ plus Boolean equivalences for propositions :-)

Removed

Removed

What else is there to say?

For a mathematician it is all the same; that is why I stated my problem philosophically as Atomism vs. Holism. This doesn't have to be a battleground, but it is. Perhaps if mathematicians were in charge things would be different. But then again maybe not; there is a history of division along the same lines in mathematics. Anyway, there are signs of progress.

Edit, By the way, where did you get the Zen stuff?

Removed

Removed

Two visions of completeness

The main point of my comments above on Atomism/Holism can be sharpened a little by thinking about how this distinction affects completeness in Russell's logic of material implication compared with completeness in equational logic. I am sure you already know about Gödel's (in)completeness, but have you ever looked at Garrett Birkhoff's completeness for equational logic? It is less than two pages long. You can find it on page 105 of A Course in Universal Algebra, Stanley Burris and H. P. Sankappanavar, already linked above. Birkhoff's proof is easily seen as holistic, while Russell/Gödel is atomistic.

Edit: Birkhoff's completeness doesn't immediately yield an incompleteness result.

Theorem 4.19?

Looking at page 105, I see Theorem 4.19 (Birkhoff: The Completeness Theorem for Equational Logic). Is that what you're referring to? I'm unsure, because although it's on page 105 and due to Birkhoff, I'd say completeness is an, er, completely different property from consistency.

Complete!

I wouldn't call it completely different because one implies the other, and anyway it doesn't change my point. But you are right, I meant to say complete above. Thanks for complaining about that. Both Gödel and Birkhoff proved completeness.

Absorption [P∧(P∨Q) = P] is not Inconsistency Robust

Absorption [P∧(P∨Q) = P] is not Inconsistency Robust (although it holds in Classical Direct Logic). Is it needed for Birkhoff completeness?

Birkhoff's rules are just

Birkhoff's rules are just reflexivity, symmetry, and transitivity, plus substitution and replacement. The first three form an equivalence relation. Unless absorption is needed or implied in equivalence, the answer would be no. Otherwise yes. My amateur mathematics is severely overloaded right now.

Removed

Removed

1st-Order Completeness is Weak v. Math Inferential Undecidablity

Completeness of First-Order Logic is Weak (syntactic); inferential undecidability of mathematical theories is Strong (semantic).

* Completeness of First-Order Logic: A proposition is provable in a first-order theory T if and only if the proposition is true (⊨M) in every model M of T (corrected below; thanks!), which means that a first-order theory is incapable of axiomatizing the natural numbers ℕ or of axiomatizing the real numbers ℝ up to isomorphism.
* Inferential Undecidability of a sufficiently strong Theory T: There is a proposition Ψ such that ⊬T Ψ and ⊬T ¬Ψ.
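
The two properties can be stated side by side in conventional notation (a sketch):

```latex
% Gödel completeness (first-order theory T, proposition \varphi):
T \vdash \varphi \;\Longleftrightarrow\;
  \forall M.\ \bigl( M \models T \Rightarrow M \models \varphi \bigr)

% Inferential undecidability (sufficiently strong theory T):
\exists \Psi.\ \bigl( T \nvdash \Psi \bigr) \wedge \bigl( T \nvdash \neg\Psi \bigr)
```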

Removed

Removed

But I would argue that

But I would argue that undecidability doesn't make something incomplete; isn't incompleteness a sort of internal malfunction caused by inconsistency?

Removed

Removed

But I asked about

But I asked about undecidable vs incomplete.

Removed

Removed

I appreciate what you are

I appreciate what you are saying, but I still think there is more to it. Obviously I need to do more work. Thanks.

Removed

Removed

Math is open; Math theories are closed

Countably Infinite makes a good point above.

Mathematics self proves that it is open, i.e., its theorems are not computationally enumerable [Church's Paradox]. However, by convention each mathematical theory is closed, i.e., its theorems are computationally enumerable. Of course, to avoid Church's Paradox a mathematical theory cannot enumerate its own theorems! That a mathematical theory is closed is required to prove inferential undecidability.

See “Mathematics Self Proves its own Consistency”.

Removed

Removed

Typo

because of explosion, we have for all formulas A that T |- A and T |/- A.

You meant T |- ~A, I assume.

5

Hacker culture being what it is, we've likely got some Discordians — "Zen for roundeyes".

Inconsistency Robust Boolean Equivalences

The following Inconsistency Robust Boolean Equivalences hold in Direct Logic:

Self Equivalence: Ψ = Ψ
Double Negation: ¬¬Ψ = Ψ
Idempotence of ∧: Ψ∧Ψ = Ψ
Commutativity of ∧: Ψ∧Φ = Φ∧Ψ
Associativity of ∧: Ψ∧(Φ∧Θ) = (Ψ∧Φ)∧Θ
Distributivity of ∧ over ∨: Ψ∧(Φ∨Θ) = (Ψ∧Φ)∨(Ψ∧Θ)
De Morgan for ∧: ¬(Ψ∧Φ) = ¬Ψ∨¬Φ
Idempotence of ∨: Ψ∨Ψ = Ψ
Commutativity of ∨: Ψ∨Φ = Φ∨Ψ
Associativity of ∨: Ψ∨(Φ∨Θ) = (Ψ∨Φ)∨Θ
Distributivity of ∨ over ∧: Ψ∨(Φ∧Θ) = (Ψ∨Φ)∧(Ψ∨Θ)
De Morgan for ∨: ¬(Ψ∨Φ) = ¬Ψ∧¬Φ
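
As a cross-check, the classical versions of these equivalences are provable in Coq; for example, De Morgan for ∧ (a sketch only, using the Classical axioms, since the left-to-right direction is not intuitionistically valid; the lemma name is mine):

```coq
Require Import Classical.

(* De Morgan for /\ : the right-to-left direction is constructive;
   the left-to-right direction uses classic : forall P, P \/ ~P. *)
Lemma demorgan_and : forall P Q : Prop, ~ (P /\ Q) <-> ~ P \/ ~ Q.
Proof.
  intros P Q; split.
  - intro H. destruct (classic P) as [HP | HnP].
    + right. intro HQ. apply H. split; assumption.
    + left. exact HnP.
  - intros [HnP | HnQ] [HP HQ].
    + apply HnP; exact HP.
    + apply HnQ; exact HQ.
Qed.
```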