Fixed points considered harmful

Fixed points have been an important topic ever since Church invented the lambda calculus in the early 1930s.

However, the existence of fixed points has unfortunate consequences:
* In a programming language, they preclude having strong types.
* In logic, they lead to inconsistency.

Fortunately, types intuitively preclude the existence of fixed points, e.g., in
ActorScript and Direct Logic.
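To see what types rule out, here is a sketch in Python (an untyped language, so the construction goes through): the Z combinator, a call-by-value variant of Church's Y, manufactures fixed points out of the self-application x(x). In a simply typed language that subterm is rejected, since no type T satisfies T = T → S.

```python
# The Z combinator: works in Python only because nothing constrains
# the type of the self-application x(x). A simple type discipline
# rejects x(x), and with it this whole construction.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive function obtained with no explicit recursion at all:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

The self-application above is exactly the kind of term that a type system like the ones claimed for ActorScript and Direct Logic is supposed to forbid.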


re Not a recursive definition

But look closer: It's not a recursive definition at all. It's just the definition "X := q ∈ K" and there happens to be a reminder "X == Rq(q)" meaninglessly tacked onto the side.

For the second incompleteness theorem to work, you have to produce (in the system itself) a proof that:

X = q ∈ K where (X = Rq(q))

You can't make that proof without a recursively defined formula. The particular recursive definition I gave for X is just one example of how you could do it (but that is not allowed in Hewitt's system).

Missing each other's points?

You can't make that proof without a recursively defined formula.

Why not? I think Gödel didn't use one.

The particular recursive definition I gave for X [...]

When did you give a recursive definition for X?

re missing points

I think Gödel didn't use one.

How else do you possibly make sense of section 4 of his paper?

It begins on page number 69, PDF page number 72 of this:

[update: sorry, forgot the link: http://jacqkrol.x10.mx/assets/articles/godel-1931.pdf ]

Pay particular attention to the paragraph that begins "We now establish the following: All the concepts defined (or assertions proved) in Sections 2 and 4 are also expressible (or provable) in P." Especially footnote 67.

How else do you possibly

How else do you possibly make sense of section 4 of his paper?

I can't say I do make complete sense of it, and I'm sorry for that. There's a good chance I'll say something that I'll regret later.

67: That the correctness of w Imp (17 Gen r) can be concluded from (23), is simply based on the fact that—as was remarked at the outset—the undecidable proposition 17 Gen r asserts its own unprovability.

All propositions assert their own truth, but that doesn't mean we need fixpoints to construct every proposition.

Proposition 17 Gen r does in some sense assert its own unprovability, in the sense that it's logically equivalent to a (separate) statement that asserts the unprovability of that sentence. That's even how it's designed, but that's not how it's defined.

why did I define X that way

When did you give a recursive definition for X?

Without that or another fixpoint equivalent, you can't draw out the implications Goedel sketches in part 4.

The inferential leap you need, within the system, is from a theorem about the model to a theorem about a modeled statement. That's the big sticking point that doesn't work in Hewitt's.

*when* did you define X that way

When did you give a recursive definition for X?

Without that or another fixpoint equivalent, you can't draw out the implications Goedel sketches in part 4.

What are you referring to by "that"? I said "when," not "why."

re *when* did you define X that way

Sorry for the confusion. In the comment with the subject line that begins "proof sketch...".

The following is a recursive definition of X.

X := q ∈ K where (X = Rq(q))

Here is a kind of schema for an assertion, with a free variable X (using earlier defined K and Rq):

q ∈ K where (X = Rq(q))

A definition of a sort, which Goedel was talking about, tries to bind the free variable to the whole assertion itself:

X := q ∈ K where (X = Rq(q))

It could as well be written something like:

letrec X := q ∈ K where (X = Rq(q))

To actually put a definition like this into a hypothetical formal system, some meaning has to be given to the assertion. The meaning Goedel relies on is a fixed point meaning. The meaning is something like the assertion

q ∈ K where ("this assertion itself" = Rq(q))

There might be more than one possible q. We might pick in particular a least fixed point using the well ordering of integers.
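As a toy illustration (in Python, with an invented function f standing in for the defining formula), picking "the least fixed point via the well-ordering of the integers" just means searching the naturals in increasing order:

```python
def least_fixpoint(f, bound=10000):
    """Return the least natural q < bound with f(q) == q, or None.

    The well-ordering of the naturals guarantees that searching in
    increasing order finds the least fixed point, if one exists."""
    for q in range(bound):
        if f(q) == q:
            return q
    return None

print(least_fixpoint(lambda n: n * n))       # 0, since 0*0 == 0
print(least_fixpoint(lambda n: n // 2 + 3))  # 5, since 5//2 + 3 == 5
```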

This is an assertion, something like a Sentence in Direct Logic, but to use the trick in Goedel's 2nd incompleteness theorem, the sentence must be overtly self-referential. It must be defined as a fixed point (which DL doesn't permit).

Not recursive...

The following is a recursive definition of X.

X := q ∈ K where (X = Rq(q))

I guess we didn't connect when I said

It's not a recursive definition at all. It's just the definition "X := q ∈ K" and there happens to be a reminder "X == Rq(q)" meaninglessly tacked onto the side.

When I look at the definition of q and the definition of K, I think "X := q ∈ K" is already equivalent to the other two non-recursive definitions.

The part that says "where (X = Rq(q))" is just some extraneous clarification. All it does is call to mind the alternative definition "X := Rq(q)" from two lines ago.

If you're right and X is defined as "q ∈ K where (X = Rq(q))" then what does "where" mean here? If it establishes a local binding for X, then it's extraneous, since the expression "q ∈ K" contains no occurrences of X. If it establishes a second binding for the same variable X, then we have some kind of multi-valued definition, not a recursive one.

q was already defined

In a more recent thread, Thomas Lord said

Looking back at the earlier thread, you looked at this notation and said it was not a recursive definition... you say that there was no fixpoint here:

X := q ∈ K where (X = Rq(q))

The fixpoint semantics of this definition are the only thing that give q and X a specific meaning. Do you not see that X is defined in terms of itself there? That it is not some arbitrary q from the set K but is instead a very specific example of q in K?

The variable q was already defined elsewhere in your post above:

What is S specifically? Which "q"? S is:

S(n) := Rq(n) := n ∈ K == n ∈ { k in Nat. | ~Provable(Rk(k)) }

Unfortunately, that line is potentially even more confusing than the one we're already talking about. I'll unpack it. It's two definitions and an observation:

q := the i such that (Ri(n) == (n ∈ K))

S(n) := Rq(n)

Observe that (n ∈ K) == (n ∈ { k in Nat. | ~Provable(Rk(k)) }).

Now when the post gets around to the particular definition of X we're arguing about (and not the other two definitions of X)...

X := q ∈ K where (X == Rq(q))

...this line is defining X as the proposition "q ∈ K." It's also observing that this is the same as saying X is Rq(q).

goedel's first theorem (pt 1 answer to Ross's correct criticism)

[Updated: Ross Angle pointed out a mistake in the definition of ParadoxFormulai which has been fixed (hopefully). The mistaken version can be seen in his comment below.]

I agree that my earlier description, which you quote, is sloppy.

I will try to make amends here.

We hypothesize a formal system for which we metamathematically have a family of formulas indexed by natural numbers:

  Iform[i ∈ ℕ]
an as yet undescribed set of formulas, indexed by naturals


What are these formulas? Each one has a free variable which, for our discussion, we may consider to be of type ℕ.

Notationally,

  Iformi(n)   where n ∈ ℕ
is the sentence formed from Iformi by replacing the free variable with n


Metamathematically speaking (for the moment), "Iformi(n)" is a name for some sentence in the subject system.

(Incidentally, the "subject system" that we are studying metamathematically is already not Direct Logic because DL "formulas" are not countable. They can't be indexed by ℕ. But that isn't really central to showing how Goedel uses a fixed point so I'll move on from that.)

Goedel also assumes that proofs in the subject system can be indexed by ℕ and that we can compute from an index what proposition each proof proves. (Again: not actually countable in DL but that isn't important for the moment.)

Notationally, let's say:

  Proof[k ∈ ℕ]
the family of proofs in the subject system


And define a relation between two natural numbers, k and n:

  k iProves_self_applied_iForm n <=>
the proof Proofk is a proof of Iformn(n)


Leading up to his "first theorem" Goedel spends quite a few pages getting to this point. I hope this time I've said it better than I said it last time, and that this abbreviated version is familiar territory.

Here is something really subtle about Goedel numbering, but very, very important.

Goedel's first theorem assumes proofs can be enumerated, and formulas with a free variable can be enumerated.

Most importantly... this is the key point of Goedel numbering ...

To establish a contradiction, Goedel's theorem 1 assumes that the relation k iProves_self_applied_iForm n is decidable by some procedure that takes the numbers k and n as input and outputs either True or False.

Even though a procedure that computes k iProves_self_applied_iForm n is working only with the integers k and n, we know (metamathematically!) that if it returns True, then the kth proof proves the nth sentence IFormn(n).

Since "iProves_self_applied_iForm" is assumed to be a computable relation between integers, Goedel assumes it can be defined not only metamathematically, but also in the target system. In the target system it is not defined in terms of formulas and proofs. It is just a defined relation between any two natural numbers.
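A toy Python sketch of that last point (the encoding here is invented purely for illustration; Goedel's actual encoding is far more elaborate):

```python
# Toy stand-in for "k iProves_self_applied_iForm n": a total,
# computable relation on pairs of naturals. The real relation decodes
# k as a proof and n as a formula index, but the key point stands:
# it is plain arithmetic on the two integers, nothing more.
def iproves_self_applied_iform(k: int, n: int) -> bool:
    # hypothetical encoding: proof k "proves" IForm_n(n) exactly when
    # k == 3 * 2**n (an invented scheme, not Goedel's)
    return k == 3 * 2 ** n

print(iproves_self_applied_iform(12, 2))  # True:  12 == 3 * 2**2
print(iproves_self_applied_iform(7, 2))   # False
```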

Here is Goedel's first use of a fixed point, in the first theorem.

He metamathematically defines ParadoxFormula to be a family of formulas in the target system, using this recursive definition:

ParadoxFormulai := IFormi(i)
where Iformi is a subject system
formula, with free variable x:
"~(x ∈ { j ∈ ℕ | ∃ k ∈ ℕ s.t. k iProves_self_applied_iForm j })"


We have here metamathematically defined a family ParadoxFormula, indexed by a (so far unclear) subset of ℕ.

If the family ParadoxFormulai is not empty, then let:

  q := the least index of ParadoxFormulai
Paradoxical := ParadoxFormulaq


In other words, we have (metamathematically) defined Paradoxical to be a least fixed point of the open formula ParadoxFormula.

Paradoxical is a sentence in the subject system, metamathematically defined as a fixed point.

Completing the proof of Goedel's first incompleteness theorem should be easy from here.

If Direct Logic is the target system, the definition of Paradoxical violates the rules of construction for sentences.

goedel's first theorem (pt 2 answer to Ross's correct criticism)

Goedel's handwave of a proof of his 2nd (so-called) theorem begins -- I claim this is a fair paraphrase --

Copy the aforementioned definition of the sentence Paradoxical but delete every mention of "metamathematically". Treat all of that as definitions in the subject system itself. Then.....

How are rules of construction violated?

ParadoxFormulai := IFormi(i)
where
i is the index of a subject system formula,
"~(i ∈ { k ∈ ℕ | k iProves_self_applied_iForm i }"

First, I think the formula you were going for was "~ exist k ∈ ℕ s.t. k iProves_self_applied_iForm i." (If not, that doesn't affect the rest of what I'm saying. In fact I'll continue using your version in this post just in case.)

Second, you're using three variables i here. I can't tell if IFormi(i) is supposed to be taking i from the subscript parameter or from the local binding in the "where" clause. How about cutting to the chase:

i := the index of a subject formula with free variable n,
"~(n ∈ { k ∈ ℕ | k iProves_self_applied_iForm n }"

Paradoxical := IFormi(i)
If Direct Logic is the target system, the definition of Paradoxical violates the rules of construction for sentences.

Why does it violate the rules of construction? The sentence Paradoxical is structurally of the form "~(i ∈ { k ∈ ℕ | k iProves_self_applied_iForm i }," with i and iProves_self_applied_iForm replaced with their elaborate number-theoretic definitions. Since Classical Direct Logic embeds full second-order Peano/Dedekind axioms for ℕ, it should be able to express everything in the sentence.

Goedel's fixpoint (re How are the construction rules violated)

Ross, thanks for pointing out a bug in the definition I gave for Paradoxicali. I fixed it above and will unpack it here. The (hopefully) fixed version is:

ParadoxFormulai := IFormi(i)
where Iformi is a subject system
formula, with free variable x:
"~(x ∈ { j ∈ ℕ | ∃ k ∈ ℕ s.t. k iProves_self_applied_iForm j }"


Recall that Iform[i ∈ ℕ] is assumed to be a family of all subject system formulas with one free variable.

As an intermediate step, we could define a sub-family:

PotentialParadoxicali = IFormi
where Iformi is a subject system
formula, with free variable x:
"~(x ∈ { j ∈ ℕ | ∃ k ∈ ℕ s.t. k iProves_self_applied_iForm j }"


The family Paradoxicali is an image of PotentialParadoxicali by the operation of self-application-of-index:

  Paradoxicali := PotentialParadoxicali(i)
for all i that are index of
PotentialParadoxical


If the family Paradoxicali is not empty then pick any member (such as the least) and you have a sentence which, if it can be proved, the proof's index serves as a counter-example and yields a contradiction.

We defined Paradox to be a member of the family and, for convenience, the least member.

Paradox := Paradoxicali where i is the least index of Paradoxical


It's really critical to Goedel that the families of formulas, sentences, and proofs we're talking about here be effectively enumerable. For example, the family IFormi can be interpreted as a function:

   Iform : ℕ ↦ formula of one free variable

Iform[i ∈ ℕ] :  ℕ ↦ sentences

Paradoxical : a particular domain restriction of Iform

Paradoxical[i ∈ ℕ] : a particular domain restriction of Iform[i ∈  ℕ]


It is also critical to Goedel's argument that each of those functions not only be computable, but that its inverse also be computable.

The definition of Paradox is (for Goedel's argument purposes):

  Paradox := leastfix λ s . Paradoxical[Paradoxical←(s)]


No matter how you formalize it or how many layers of indirection are in the definitions, Goedel's argument requires that the definition of the sentence Paradoxical explicitly express that one of the (integer) terms of the definition is the "encoding" of Paradoxical.

Sentences in DL are defined by categorical induction, a kind of construction method. J. H. Conway has a handy metaphor for this kind of construction in terms of a numbering of days.

On Day 0, some specific sentences exist.

On Day 1, new sentences can be created from those from Day 0, none of which are equal to any Day 0 sentence.

On Day 2, new sentences can be created from earlier made sentences, none equal to any earlier made sentence.

etc.

Sentences in Direct Logic cannot be computed by arbitrary computable functions. None of the functions that can construct a sentence in Direct Logic have any fixed points.
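A small Python sketch of that stratified construction (the two constructors here are invented stand-ins for DL's actual sentence constructors):

```python
def sentences_by_day(n):
    """Sentences constructible by Day n, built only from earlier days."""
    if n == 0:
        return {"P", "Q"}  # Day 0: some atomic sentences exist
    earlier = sentences_by_day(n - 1)
    new = {f"(not {s})" for s in earlier}
    new |= {f"({a} and {b})" for a in earlier for b in earlier}
    return earlier | new

day1 = sentences_by_day(1)
# Every newly built sentence is strictly longer than its parts, so it
# can never equal a sentence from an earlier day: the construction
# functions have no fixed points.
assert "(not P)" in day1 and "(P and Q)" in day1
```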

Lack of clarity in Sentence constructors (and nitpicks)

The (hopefully) fixed version is

Minor nitpicks: The parentheses are unbalanced. You're also using "x ∈ { j ∈ ℕ | ...(j)... }" where you might be able to say "...(x)..." directly. Later on, what do you mean by "categorical induction" as a construction method? When I look it up, it's a philosophy of science concept and I don't see the connection.

Major nitpick: I still think your ParadoxFormulai "family" doesn't use i for anything; it defines its own i inside rather than actually depending on the index. Or are you allowing that there may be more than one index i corresponding to the same formula?

If you are allowing that, how do you reconcile that with saying that Iform's "inverse must also be computable"? If you're not allowing that, then you've already constructed a single sentence, and I see no reason for you to define:

  Paradox := leastfix λ s . Paradoxical[Paradoxical←(s)]

---

No matter how you formalize it or how many layers of indirection are in the definitions, Goedel's argument requires that the definition of the sentence Paradoxical explicitly express that one of the (integer) terms of the definition is the "encoding" of Paradoxical.

At least we're on the same page there.

Sentences in Direct Logic cannot be computed by arbitrary computable functions.

I'm very unclear on how Sentences in Direct Logic work. The rules you pointed me to on page 27 are just introduction rules, with no elimination rules, so there's not much point in defining any functions that take Sentences as arguments.

Even the introduction rules seem a bit incomplete. They refer to free variable information, but this information doesn't seem to be tracked in the types, so I'm not sure how it's extracted. Meanwhile, it seems like there's no way to construct an (x:Variable◅aType▻) without first building an expression that binds it, and I'm not sure how (and whether) that circularity is well-founded.

There's at least some way to construct (a certain subset of) Sentences from strings. If that subset is strong enough to describe arbitrary computable functions, and if the string-to-sentence coercion is one of those computable functions, then we can probably write quine strings that describe themselves. Then we get to ask if these self-describing strings are strong enough to cause problems.
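For concreteness, such self-describing strings (quines) are easy to exhibit in any sufficiently expressive language; here is a classic two-line Python one:

```python
# A minimal quine: s is a template for the whole program, and (s % s)
# substitutes the repr of s into itself, reproducing the source
# exactly; indirect self-reference with no fixpoint primitive.
s = 's = %r\nprint(s %% s)'
print(s % s)
```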

Personally, I suspect it's possible to make the self-describing strings but not possible to combine them with the self-proof of consistency. Hewitt claims it's not even possible to use sentences to make fixpoints, but I don't see enough details about the string-to-sentence coercion to know that for sure.

String must type check in order to be parsed into a sentence

A string must type check in order to be parsed into a sentence of Mathematics.

String for Y combinator fixed point does not type check.

Also, in Direct Logic there is no such thing as a "formula."

re (and nitpicks)

This responds only to the "major nitpick":

I still think your ParadoxFormulai "family" doesn't use i for anything; it defines its own i inside rather than actually depending on the index. Or are you allowing that there may be more than one index i corresponding to the same formula?

Same up to alpha-renaming, yes. This would be true in the case of Goedel's specific construction.

Notationally:

  IForm[i ∈ ℕ] := "some formula with x free"


defines a family indexed by the naturals in terms of a typical member, Iformi.

Here is a formula with x free:

  ~(x ∈ { j ∈ ℕ | ∃ k ∈ ℕ s.t. k iProves_self_applied_iForm j })


There is a subfamily of Iformi which are equal to that formula, up to alpha-renaming.

That subfamily could be defined this way, again in terms of a typical member:

PotentialParadoxicali := IFormi
where Iformi is a subject system
formula up to alpha renaming,
with free variable x:
"~(x ∈ { j ∈ ℕ | ∃ k ∈ ℕ s.t. k iProves_self_applied_iForm j }"


Note that the index set of the family PotentialParadoxicali is a subset of ℕ.

The index set of the family Paradoxicali is the same as the index set of PotentialParadoxicali.

The family Paradoxicali defined in terms of a typical member is:

  Paradoxicali := PotentialParadoxicali(i)

== Iformi(i)
for all i in the index set of
PotentialParadoxical

== a sentence of the form
"~(i ∈ { j ∈ ℕ | ∃ k ∈ ℕ s.t. k iProves_self_applied_iForm j }"
where i is the index of
this member of the family

== Iformi(Paradoxical←Paradoxicali)


Thanks, but what's the last part for?

Same up to alpha-renaming, yes.

Ah, okay, that answers the questions I asked.

Why do you then build up to this, though?

Paradoxicali ==
Iformi(Paradoxical←Paradoxicali)

If you think you need fixed point definitions to make that observation (not definition), then the same would apply to any other invertible function. For instance:

Identityi := i

== Identity←Identityi

effectiveness (re Thanks, but what's the last part for?)

Why do you then build up to this, though?

  Paradoxicali := Iformi(Paradoxical←Paradoxicali)


Just to emphasize that Paradoxicali is a specific family of fixed points.

(You can abstract the mention of Paradoxicali from the right hand side and install a fixpoint operator instead.)

Waving Hands and Daydreaming

In Hewitt's system, this definition:

X := q ∈ K where (X == Rq(q))

is ruled out.

That's still too informal for me. (Though I am clueless what that proof is about.)

The thing which bugs me: for Presburger arithmetic I have a proof that it is consistent and complete. If I expand it with multiplication, I arrive at the logic of Gödel (multiplication is one of the constructs Gödel relies upon), and I have a proof that the system becomes consistent and incomplete.

But Hewitt hasn't presented a formal definition of his language. He claims it doesn't allow fixed points, but he hasn't proven it. And now this argument claims that a construct is ruled out since it relies on the definition of a fixed point, which cannot exist in his formal system, which itself doesn't exist.

Hewitt needs to state a formal system: a language description with type inference rules, a proof that fixed points cannot be expressed, and a proof that the construction above cannot be expressed within his system.

It's all daydreaming.

Church/Turing incompleteness theorem is not about arithmetic

The Church/Turing inferential incompleteness theorem is not about arithmetic.

Gödel numbers are a "red herring" when it comes to inferential incompleteness ;-)

There is a formal definition of Classical Direct Logic here: Inconsistency Robustness in Foundations

Comments and suggestions for improvement are greatly appreciated.

Direct Logic proof of consistency of Math is not dispositive

Thomas,

You are correct: the Direct Logic proof of the consistency of Mathematics (by itself) is not dispositive because Direct Logic could still be shown to be inconsistent.

Cheers,
Carl

Direct Logic not part of mathematics?

Does this mean the proof of consistency of mathematics is contingent on Direct Logic not being part of mathematics? Yet it seems distinctly mathematical?

re Direct Logic not part...

Does this mean the proof of consistency of mathematics is contingent on Direct Logic not being part of mathematics?

Classical Direct Logic is, for convenience, called "Mathematics".

Classical Direct Logic ("Mathematics") is proposed as a foundation of mathematics for Computer Science.

To answer your question: Classical Direct Logic is within mathematics. It is proposed as a foundational theory for mathematics.

Proof of consistency of Direct Logic?

How can that be true if Direct Logic can prove the consistency of mathematics, but it itself may still be proved inconsistent? Doesn't that require that direct logic is not part of mathematics, otherwise we already have proof of its consistency?

Consistency hearsay and equivocation

Classical Direct Logic tells us that it's consistent, but that's what an inconsistent system would say. :)

In that way, a self-proof of consistency has pretty limited value.

Still, it's almost comical how we justify the use of mathematics. If anyone doubts a system's consistency, we retreat into a stronger system to argue for its consistency. A self-proof of consistency would let us at least keep talking about the same system.

I don't know what cultural effect this would have. This stability might let us be even more stubborn, or it might let us stand still long enough to learn something.

To be more specific (and

To be more specific (and setting aside Gödel for a moment), a self-proof of consistency has in itself exactly no value, because you will have motive to believe it exactly when you already believe the self-proving system to be consistent.

A proof of consistency based on a subsystem of the system, which Hilbert had been hoping for, would have the benefit that you'd know that if the subsystem is consistent then the whole system is consistent. A proof of consistency based on a supersystem of the system may be informally interesting if it increases understanding of the relationships between different principles of deduction and different axioms. A self-proof of consistency by a system that's designed not to explode if it turns out not to be consistent? That has mildly negative value, because clearly the system itself is designed on the expectation that the self-"proof" might be wrong.

But of course when you stop setting aside Gödel, you lose the proof-by-subsystem and self-proof options, because when those happen (for a system that isn't underpowered) you know it's not consistent. So proof-by-supersystem is the only one of them that doesn't disprove consistency anyway.

Math self-proof of consistency: convincing proofs maybe possible

The Mathematical self-proof of consistency shows that convincing proofs may be possible.

Math self-proof of consistency: common understanding inaccurate

The Mathematical self-proof of consistency shows that the current common understanding (based on Gödel's writings) is inaccurate.

Although Math can prove its own consistency, it may not be

Although Mathematics can prove its own consistency, it may turn out not to be consistent.

what is "mathematics" re Proof of consistency of DL

How can that be true if Direct Logic can prove the consistency of mathematics, but it itself may still be proved inconsistent?

A consistent theory can contain a proof of self-consistency.

An inconsistent theory can also contain a proof of self-consistency.

DL is one of those two cases but it is conservative enough that betting it is consistent is probably a good bet.

Doesn't that require that direct logic is not part of mathematics,

Now I'm not sure at all what you mean by "part of mathematics".

Church/Turing proof of incompleteness has no fixed points

The Church/Turing proof of inferential incompleteness has no fixed points; but Gödel's "proofs" depend crucially on the existence of fixed points.

Gödel claimed to create a nonexistent mathematical proposition

Thomas,

You are correct:

"contra Gödel" is that Gödel was able to get away defining sentences
by fixed point that way only because of a confusion at the time. That,
indeed, the syntactic restriction Hewitt proposes should be universal
to all of math. Permitting self-referential sentences in any but a
trivial system is, after all, a way to instantly make the system
inconsistent!


Cheers,
Carl

Zeno's Paradox

How would I write Zeno's paradox in Direct Logic, and how would I calculate its fixed point? I would like to see the code.

Interpreting Gödel, and meanwhile, F arithmetic

Goedel's method requires the target system to permit a sentence S to be defined as a fixed point of a formula containing S itself.

Keep in mind Gödel's target system is just first-order Peano. Maybe all you mean is that fixpoints are admissible, but that's a pretty surprising theorem (Löb's theorem), not something the system goes out of its way to "permit."

If we had a readable enough Gödel sentence to look at, it wouldn't have a special syntax for saying "this sentence." Instead, it might look like a quine.
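To make that concrete, here is a quine-style Python sketch: a program that builds its own source text and then states a fact about it (here, its length), with no "this program" primitive, the way a Gödel sentence states its own unprovability by construction rather than by a self-reference keyword:

```python
# t is a template for the whole three-line program; (t % t) yields
# the program's exact source, and the program then asserts a
# property of that source.
t = 't = %r\nsrc = t %% t\nprint(len(src))'
src = t % t
print(len(src))
```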

Hewitt is (at least plausibly) claiming that Goedel did not kill Hilbert's program at all. In fact, Hewitt has a candidate system to do what Hilbert wanted.

Going by W. W. Tait's "Gödel on Intuition and on Hilbert's finitism," it looks like Gödel himself made sure to point that out in the incompleteness paper, and he kept on thinking about finitary foundations long after that.

---

Since you like the idea of a foundation that can self-prove its own consistency, I recently found another one you might like. (I hope Matt M is reading too.)

Andrew Boucher's "Arithmetic Without the Successor Axiom" describes a weak arithmetic called F. As far as I can tell, the reasoning in this paper isn't peer-reviewed, but it's at least chock full of (IMO) readable proofs, and it talks realistically about prior work and room for future improvement.

F arithmetic has full-powered second-order induction, but what makes it unable to invoke first-order Peano is that it can't prove the existence of any number's successor. It proves its own consistency in the sense of showing that no proof of "0 doesn't equal 0" exists. It does that by showing that its axioms and inference rules all preserve a predicate that amounts to "true in the trivial model where the only number that actually exists is zero," which that sentence doesn't satisfy.

If we trust a traditional foundation like ZFC, F is clearly consistent by that argument.

If we distrust ZFC and take F as a foundation, then our proofs may themselves be too large to exist, including the self-consistency proof. Fortunately, that document claims something reassuring: If any proof of "0 doesn't equal 0" exists, then that proof must be large enough that the consistency proof exists first! I don't really follow the proof of that claim, but it's fascinating to think that perhaps all our core requirements for a mathematical foundation can be satisfied in a system with not only a finite symbolic specification, but a satisfactory finite model, which then lets us talk about an alternative, infinite model from solid ground.

At this point, I think Andrew Boucher's F arithmetic (or one of its variations and consistent extensions described in the same document) could be a very promising addition to the superstitious total compilers Matt M was describing a couple of years ago. Both ideas seem to revolve around the exact same kind of agnosticism toward large numbers.

F arithmetic seems

F arithmetic seems interesting. Here's a long thread from the n-category cafe discussing various weak models of arithmetic, including F.

Goedel's *second* theorem.

Keep in mind Gödel's target system is just first-order Peano.

Not in the second theorem.

If we trust a traditional foundation like ZFC

I'm not sure if Hewitt's would need "C".

Interesting

I wonder if you could augment F with an axiom that would let you establish the existence of any particular finite Natural without letting you generalize, as I described in that thread. And if you did so, I wonder if it would still be complete.

(I'm hard at work right now, so trying to limit LtU time.)

If 5 exists, then 3 exists

If you assume the existence of any number, F supposedly lets you show the existence of all smaller numbers down to zero, but not the bigger numbers. I think that's what you want, right?

And if you did so, I wonder if it would still be complete.

It's not complete. It has multiple models.

No I want the ability to

No I want the ability to prove the existence of any finite number but not to prove the existence of every number - meaning with a quantifier.

F can't do that... maybe

Well, nope, this system doesn't let you just prove the existence of any given finite number. Adding that capability would probably destroy the self-consistency proof the way it's written, since the axioms would no longer be true for the zero-only model.

Hmm, there might be another way to state the self-consistency proof so that it shows that in a model of a given size (no smaller than the proof of this very self-consistency theorem), all the proofs that can be encoded in that model are consistent. I assume we'd add an axiom schema that would establish the existence of any given finite number, as long as we fully write it out. That way the axiom instance itself will be too large to exist unless the number it's talking about also exists.

Someone would have to actually prove it in that form, of course. Unfortunately, I'm not confident in the validity of Boucher's original proof, so I'm not confident this alternative theorem can be proven either. (Right now, my biggest doubts are seeded in the worry that we'd need to take every possible proof encoding scheme into account... and perhaps there are schemes where self-consistency proofs are artificially larger than proofs of "0 doesn't equal 0.")

Inference in Mathematics formalized using ⊢

Thomas,

You have it almost exactly right!

Mathematics (formalized using ⊢) is its own meta system and can be used to formalize a theory T (using ⊢T) even if T is inconsistent.

Of course, Mathematics must be both consistent and very powerful.

Cheers,
Carl

what is "Computer Science"?

Every formal theory about computer programs or machines is presumably, generally speaking, of a mathematical character and thus, if Hewitt is correct, is expressible in his formal system, "mathematics".

Additionally, since every computer program is, in an important and useful sense, itself a proof within the mathematical theory described by the programming language, each program is a proof within "mathematics" generally.

The (presumptively) directly apparent ability of Hewitt's foundation to express all these particular theories is (presumably) why he sometimes describes "mathematics" as the "foundation for computer science".

"mathematics" shows fixpoint sentences are a bad idea

Assuming my informal (but I think formalizable) understanding of Hewitt is right and is as he intends:

It should be straightforward to build a convincing model of "mathematics" in which it can be shown that (if the model has fidelity) there is no sentence that contains its own encoding in the model itself. That is one way to restate Hewitt's claim about the role of types in preventing fixpoints.

It is further straightforward to extend the model with an additional modeled axiom asserting that such a fixpoint exists (in particular, a Goedel sentence). Goedel's second theorem shows us how to quickly show that the new modeled theory is inconsistent.

In that way, Hewitt's "mathematics" can demonstrate, via a model, that it should not be generalized to allow sentences which contain self-encodings.

At the same time, Hewitt's "mathematics" can (one gathers) easily demonstrate by model that it does not already contain any such sentence-with-self-encoding.

Object instantiation is the fixed point.

It turns out from a pure functional programming perspective, instantiating an object is an application of the fixed point operator. See the OOHaskell paper: http://arxiv.org/pdf/cs/0509027.pdf
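For intuition, the OOHaskell observation can be sketched in Python: an object is the fixed point of a "pre-object" function that is still open in `self`. The names `fix`, `counter`, and `double_counter` below are mine, not from the paper; this is a minimal illustration of knot-tying, not the OOHaskell encoding itself.

```python
def fix(f):
    # Knot-tying: `self` starts empty and is then filled in with the
    # finished object, so methods written against `self` end up seeing
    # the final dispatch table -- i.e., open recursion is closed here.
    self = {}
    self.update(f(self))
    return self

def counter(self):
    # A "pre-object": its methods refer to the open recursion variable.
    return {
        "value": lambda: 0,
        "succ": lambda: self["value"]() + 1,  # dispatches through self
    }

def double_counter(self):
    # Inheritance as extension of the pre-object before taking the fixpoint.
    return {**counter(self), "value": lambda: 10}

c = fix(counter)
assert c["succ"]() == 1

d = fix(double_counter)
assert d["succ"]() == 11  # succ's open recursion sees the override
```

The point is that `succ` is written before the object exists; only applying the fixpoint operator (instantiation) determines what `self["value"]` means.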

"mathematics" also *demonstrates* self consistency

The formalization of mathematics-in-general that Hewitt plausibly says he's laid out contains a trivial and (in isolation) less than convincing self-consistency proof.

The system is (plausibly alleged) to be very easy to convincingly model in a way that makes it easy to prove, within the model, the self-consistency of formalized math-in-general.

The model is not a proof that the system expressing the model is self-consistent, but if the model is simple and mechanical enough the demonstration via model is anyway convincing.

Meanwhile, even though a new axiom can't be derived from the model's demonstration of the self-consistency of Hewitt's system, nevertheless that system already contains a self-consistency theorem more trivially arrived at.

The whole trick here is that to do any but the most abstract math in Hewitt's system, additional axioms have to be proposed in what amounts to a safely contained lexical scope.

Mathematics is founded on the categorical induction axiom

Mathematics is founded on the categorical induction axiom:

∀[P:Booleanℕ]→  Inductive[P]⇨ ∀[i:ℕ]→ P[i]
where  ∀[P:Booleanℕ]→ Inductive[P] ⇔ (P[0] ∧ ∀[i:ℕ]→ P[i]⇨P[i+1])


From the above axiom, sets over ℕ (said to be the "language of ordinary mathematics") can be constructed using types.

See page 28 of Inconsistency Robustness in Foundations.
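Read computationally, the induction axiom is the scheme behind a simple loop: given evidence for P[0] and a step that turns evidence for P[i] into evidence for P[i+1], we can produce evidence for P[n] for any particular n. A hypothetical Python sketch (the name `induction` and the toy instance are mine, not from the paper):

```python
def induction(base, step, n):
    # Iterate the induction step n times, starting from the base case.
    evidence = base
    for i in range(n):
        evidence = step(i, evidence)
    return evidence

# Toy instance: P(i) := "sum of 0..i equals i*(i+1)//2", where the
# "evidence" carried along is simply the running sum.
base = 0                          # sum of 0..0
step = lambda i, s: s + (i + 1)   # extend the sum from 0..i to 0..i+1

assert induction(base, step, 10) == 10 * 11 // 2
```

The axiom's universally quantified conclusion (∀[i:ℕ]→ P[i]) is the idealization of this loop being runnable for every n.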

Hewitt: incredibly lucid so far but...

What is going on at the top of page 15 of the Direct Logic paper?

Corollary: There is a proposition phi of Naturals....

Is the symbol for Naturals there a typo for tau?

What is the double-bar turnstile thing?

Hewitt: concs

Are we grown up enough to live with uncertainties or will we repeat the mistakes of the twentieth century and pledge blind allegiance to another certainty? Malone [2007]

The world always needs heretics to challenge the prevailing orthodoxies. We are lucky that we can be heretics today without any danger of being burned at the stake. But unfortunately I am an old heretic. Old heretics do not cut much ice. When you hear an old heretic talking, you always say, "Too bad he has lost his marbles."

I'm sorry I was one tending towards that conclusion. On the bright side, it's a really interesting experience to discover one was wrong about that -- to see from both sides what that's like.

Thanks!

Carl

Hewitt: appendix question

Third line of type definitions on page 26 (DL paper): "If f:Type^Nat ..."

I don't know what the notation is trying to say and the footnote isn't helping.

update: No it's not a typo. I got it, now.

semantics for DL & answer for Matt M.

DL appears to have a natural, weak, operational semantics. (Matt M., at the end I address your "truth oracle".)

Looking at DL this way helps to clear up the difference in DL between provability and truth.

It also illustrates an interesting oddity: the provability of a theorem in DL is not always well defined! In other words, DL semantics under-determines theorem provability in some cases.

In this operational semantics, the meaning of a proposed proof is a computation that can return True if the proof is verified, False if a mistake is found, and in some cases the program must diverge rather than returning anything.

To illustrate, consider the proposition, containing two literal real numbers:

φ ≡ 3.1415.... = 3.1415....


Is it provable?

⊢? φ


It is such a simple proposition that we expect it to be self-proving or self-refuting. For example, with integers we get:

Theorem: ⊦ 2 ≠ 3

Proof

1. 2 ≠ 3     by inspection
2. ⊢ 2 ≠ 3   by 1 and Adequacy


But what about φ?

Theorem: ⊦ φ
where a ≡ 3.1415...
b ≡ 3.1415...
φ ≡ a = b

Proof

1. a ≠ b   by inspection?!???!!?
Note: step 1 raises a tricky problem.

2. ⊢ φ   by 1 and Adequacy


The two real numbers in φ are actual real numbers. For example, they may both be infinite, lazy lists of digits (and one decimal point) produced by Actors.

We cannot glibly assume that any two arbitrary reals can be found equal "by inspection".

So what is the semantics of "=" in this context?

In this case "=" has a natural inductive definition -- a dual induction on the two infinite lists of digits.

Suppose that RComp is a procedure that performs that inductive equality test. A naive implementation of RComp would simply compare successive digits of the two infinite lists of digits. It would return an answer only if the two reals are found to be unequal; if they are equal, it would run forever.

In some environments, it would be possible to write versions of RComp that can sometimes return True when the two reals are equal. For example, an implementation might (behind the scenes) notice that both real numbers are a standard Actor that returns 0, exactly: 0.000....

It is fine for an implementation of RComp to return True in finite time so long as it does so faithfully. The semantics of "=" are given by an inductive definition. RComp is free to recognize the induction holds in whatever special cases it can.
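The naive RComp can be sketched with Python generators standing in for lazy digit streams. This is my illustration, not Hewitt's definition; the `bound` parameter is added purely so the demonstration terminates -- the "real" naive RComp is the `bound=None` behavior, which diverges on equal streams.

```python
import itertools

def rcomp(xs, ys, bound=None):
    # Compare two lazy digit streams pairwise. Returns False as soon as
    # a differing digit is found (the reals are provably unequal).
    # With bound=None and equal streams, this loop never exits.
    pairs = zip(xs, ys)
    if bound is not None:
        pairs = itertools.islice(pairs, bound)
    for a, b in pairs:
        if a != b:
            return False        # unequal, detected at a finite digit
    return None                 # bounded run only: equality not refuted

def repeating(digits):
    # An infinite digit stream that cycles through the given digits.
    while True:
        yield from digits

assert rcomp(repeating([3, 1, 4]), repeating([3, 1, 5]), bound=100) is False
assert rcomp(repeating([3]), repeating([3]), bound=100) is None
```

Note the asymmetry the comment describes: inequality is always detectable in finite time, while equality is, for the naive implementation, only ever "not yet refuted".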

Given this, we can write the proof:

∀ x,y ∈ ℝ (x = y) ⬄ RComp[x,y]


So:

Theorem: ⊦ φ
where a ≡ 3.1415...
b ≡ 3.1415...
φ ≡ a = b

Proof

1. RComp[a,b] = True   by inspection?!???!!?
Note: step 1 still raises a tricky problem.

2. ⊢ φ   by 1 and Adequacy


And this suggests some semantics:

The proof is valid if the reals a and b happen to be unequal. That is because we can certainly identify them as unequal "by inspection", looking at only a finite number of digits.

The proof is valid if the reals a and b turn out to be of a kind that some (correct) implementation of RComp returns True in finite time.

Otherwise, the proof is not valid, which is to say we do not obtain ⊢ φ.

We also do not obtain: ⊬ φ.

This is not, however, the same thing as a case of inferential incompleteness. In particular, suppose that no valid proof of φ can be found. It is not the case that we can assume ⊫ φ.

For example, if behind the scenes both a and b are copies of a single random stream of digits, they will in fact be equal. It is not safe to assume they are not equal, because from that we could infer the existence of digits where they differ. And if they are equal but no proof can ever show it, we cannot assume they are equal, because from the deduction rule Adequacy we could then infer that a valid proof could be found.

Here then is how the semantics I'm describing sort out truth and provability in DL:

That which we know to be true is provable. ("Adequacy").

That which is provable we know to be true. ("Soundness").

"provable" is established by demonstration and means that a proposed proof has been checked in finite space and time. A "valid proof" is one that can be checked in finite space and time.

It is possible to construct propositions whose truth ("⊨") is definite but unknowable. (As described above.)

Wrongly assuming the falsehood of an unknowable truth introduces error. (Such as the error of assuming a valid proof exists.)

Matt M. suggested that a model for DL could follow by assuming there are no unknowable truths. For example, we could imagine an oracle that can always, instantly tell us if two reals are equal.

Oracles of that sort lead to paradox. For example, given such oracles, DL would be able to define an enumeration of all programs that halt.

Why parameterize propositions with values?

1. a = b   by inspection?!???!!?
Note: step 1 raises a tricky problem.

This is the problem with parameterizing propositions with values. Propositions need to be finite so that you can inspect them. The standard way that everyone does this, with propositions and proofs as finite pieces of data that can be inspected in their entirety, seems like the right way to go. I don't see any advantage to what Hewitt is doing here. The only way that we'll be able to prove things about arbitrary values is if 1) the implementation somehow gets a handle on their generating expression or 2) we can make do with finite inspection (in which case a finite expression can capture the relevant information). So why not just reason about expressions like everyone else does?

And if they are equal but no proof can ever show it [...]

Where did the assumption that 'no proof can ever show it' come from? In my (hypothetical) model there is a proof for every true proposition. If they are equal, then there is a proof they are equal.

re Why parameterize propositions with values?

Why parameterize propositions with values?

In the real world, we can build propositions that way.

Having a powerful system like Direct Logic is important in computer science because computers must be able to formalize all logical inferences (including inferences about their own inference processes) without requiring recourse to human intervention.

To me, the opposite idea -- that every sentence must be (finitely) printable -- seems an odd restriction that limits the directness with which mathematics can model existing reality.

The only way that we'll be able to prove things about arbitrary values is if 1) the implementation somehow gets a handle on their generating expression or 2) we can make do with finite inspection (in which case a finite expression can capture the relevant information). So why not just reason about expressions like everyone else does?

Option (1) is a real thing that happens in the world.

What is so different between your ability to distinguish the written glyphs 2 and 3 and a computer program's ability to recognize the coincidental structural equality of two Actors about whose values from some message it wants to prove a theorem?

DL abstracts away a lot of stuff to find a common structure.

Where did the assumption that 'no proof can ever show it' come from? In my (hypothetical) model there is a proof for every true proposition. If they are equal, then there is a proof they are equal

If your Oracle can compare infinite lists from arbitrary computations instantly, you've given us enough to derive inconsistencies. We can start counting uncountable things. There would be no consistent truth for your Oracle to report.

Don't see it

To me, the opposite idea -- that every sentence must be (finitely) printable -- seems an odd restriction that limits the directness with which mathematics can model existing reality.

That seems like a superficial idea to me. If you dig into it, what can you even do with such sentences that contain values? If I give you a sentence S = "# = #" and the # are actors that you have to probe to find out what they are... what can you even do with such an object? The whole idea of equality is learning that two different expressions have the same value. Reasoning directly about the values doesn't even make sense to me. The expressions are at the core of what reasoning is.

What is so different between your ability to distinguish written glyphs 2 and 3 versus a computer programs ability to recognize the coincidental structural equality of two Actors about whose values from some message it wants to prove a theorem?

I think it's a fundamental aspect of symbols that they be readable in (tiny) finite time.

If your Oracle can compare infinite lists from arbitrary computations instantly, you've given us enough to derive inconsistencies.

I don't think so, because there's no way to incorporate the turnstile into a computation. The interpretation of turnstile as an oracle happens in the model. It's not available to the language as a computational element.

reasoning directly about values

Again, DL comes from a perspective that contemplates math as a social and as a computation process.

In 1927, if I wanted to hand you an alleged proof I had worked up, I would write it on a piece of paper. The real number constants in my proof would have to be expressible using a finite number of digits, or as the definition of some constructible real (e.g. π).

To assess my proof, to verify it, you would have to make sure the logical structure of it was right, and you would also have to verify certain things by inspection.

If I wrote "1 = 1" for example, you'd have to settle whether I got that right or not by inspection.

Today, at least in thought experiment, I am not limited to writing on paper. I could show you my proof in the form of an interactive hypertext.

A real number could be displayed as a button that reveals successive digits, each time you click it. (How it obtains the digit is not germane, other than it could be by non-deterministic choice.)

If two such real numbers are joined by an equals sign, the equals sign could itself be a button. If you click on it, either it will (correctly) tell you the two reals on either side are equal, considering infinitely many digits in their expansion; or it will tell you (correctly) they are not equal; or it might just hang.... The key thing is: you can assume it will never give a wrong answer.

Notice that we have abstracted away any concern of how equality of constants is judged when verifying a proof. This is nothing new. We abstracted away that same judgement when proofs were confined to paper. Someone might say: "Looking at squiggles shaped '1 != 2', how do you know the squiggle '1' is different from the squiggle '2'?" A mathematician would say: "What, are you crazy? Just look at it." Since we can automate "just look at it" in interesting ways, we can allow in constants that aren't classically constructible.

A foundation for mathematics, especially one for computer science, should not exclude that second kind of proof that is at the same time on one hand a finite hypertext, and on the other hand contains infinite lists of digits.

That's an example of why (in my opinion) it is a good idea that DL makes a very conscious distinction between abstract sentences, a kind of syntax tree, and strings, the kinds of things you can write on paper.

where's the proof?

Where's the proof in your example? This equals sign button you've described sounds like a semi-decision procedure. If the implementation is particularly transparent, it would be a sensible way to define equality of reals. Otherwise, if equality of reals has some other independent definition, we'd have to reason about the implementation to prove that it's a correct procedure. Granting one of these possibilities for now, I would then only agree that you've proven two real numbers equal if you've pressed the button, and it came back with the answer "yes, they're equal". If you tell me you've got a proof, it's running now, it may never stop ... well, that's not a proof.

Otherwise, we have a proof of Goldbach's conjecture, as we're in the same situation with it: we've got a semi-decision procedure which will, in this case, never return "true", but will either return "false" at the first counterexample, or simply hang. (According to Wikipedia, there's no counterexample below at least 4 * 10^18). It should be obvious that any actual proof of Goldbach's conjecture needs to consist of a finite piece of reasoning, conclusively showing that the semi-decision procedure will never stop with a "false" answer.
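The Goldbach semi-decision procedure described here is easy to write down explicitly. A sketch (my names, not from the thread); the `limit` parameter is added only so the illustration halts -- the "real" procedure is `limit=None`, which returns a counterexample if one exists and otherwise runs forever:

```python
def is_prime(n):
    # Trial division; adequate for a small illustration.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_counterexample(limit=None):
    # Search even numbers >= 4 for one that is NOT a sum of two primes.
    # With limit=None this is a semi-decision procedure: it returns a
    # counterexample if one exists, and otherwise never returns.
    n = 4
    while limit is None or n <= limit:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n            # counterexample: refutes the conjecture
        n += 2
    return None                 # no counterexample below the limit

assert goldbach_counterexample(limit=1000) is None
```

The procedure running without having returned is exactly the situation the comment objects to calling a "proof".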

I'll also point out that situations like your example are routinely modeled in computer proof assistants all the time; there's no sense in which they've been "ruled out". Unless you mean we've ruled out infinite proofs, which I maintain is a good thing.

re where's the proof

Where's the proof in your example? This equals sign button you've described sounds like a semi-decision procedure. If the implementation is particularly transparent, it would be a sensible way to define equality of reals.

It might be helpful to remember that mathematics is something that people (and machines) do. It's an activity. So, how does that help?

Suppose I give you a proof like this:

Theorem: 1 = 1
Proof:
1 = 1     by definition of = and inspection
QED


It looks right, right? The definition of "=" says that every number is equal to itself. And you can plainly see that in "1 = 1", the same number occurs on both sides of the equal sign.

Wait: Where's the actual proof that the number "1" is really on both sides of the equal sign? Answer: it's external to the formal math itself. It's in your head. It's a social fact. It's an empirical fact. Outside of the formal math, we can all stand around and look at the piece of paper, and agree: "Yup, that's a digit "1" on both sides of the equals sign, denoting the equality of the number 1 to itself."

When I show you the hypertext document with an equals sign button you can push to (maybe) get a judgement about two constants, it's similar. I'll be happy to take you backstage and show how the button is wired, how it works. I'll offer engineering arguments for why we should believe it only gives correct results, giving no result if it can't judge equality. We can all stand around and scrutinize the machine that runs the "=" button. But all that activity is outside of the formal math.

Traditional math is a way to reason about outside-of-math judgements like the equivalence of symbols on paper (which math abstracts as equality of numbers).

Why are we restricting ourselves to paper and refusing machines that can augment our ability to judge equivalences of representations?

Same question

You didn't answer my question, I think because I failed to ask it clearly. In the example with the equals button, what exactly is supposed to count as a proof of something? I'm happy to entertain your scenario, and I'm the last person who would argue for restricting ourselves to paper. I just don't see what the scenario has to do with proving anything.

re same question

In the example with the equals button, what exactly is supposed to count as a proof of something?

You already know the answer. A proof is correct if it is a series of propositions, each of which is either axiomatically true, or true by an allowed form of deduction from the earlier propositions.

The example of the "equals button" concerns the question of how we check to see if a step in a proof has been done correctly. In my trivial example proof, is the step that asserts "1 = 1" correct? That judgement needs to be made, outside of the formal system, to see if my proof is right.

There is a world-historic shift in the practice of mathematics in which machines are playing a new role in making the judgements we associated with verifying proofs. It has lots of interesting consequences, such as sharpening the distinctions between that which is provable, that which is true, and that which is (merely) consistent.

Thanks for your reply. My

Thanks for your reply. My understanding now is that you intend the equals button example to be admissible in a proof as a single step, analogous to the 1=1 example, albeit one whose verification may hang. If so, then I think I can claim to have a proof of Goldbach's conjecture in such a system, since that conjecture is equivalent to the statement that a certain real number is zero. (Take the nth binary digit to be zero iff n is the sum of two primes.) Apparently, I can just claim this equality as a single proof step, verifiable by inspection, perhaps with a button. Of course verification would hang. Maybe you can appreciate why I might find this terminology shift problematic? I was merely suggesting that we insist verification actually complete before granting the label "proof". My follow up argument is that as soon as we do so, we effectively make proof data finite, assuming verification is practical and not an oracle for example.

re Apparently, I can just claim this equality as a single proof

Mathematics is historically a social activity and increasingly an activity of societies of people and networks of machines.

"Nodes", like mathematicians or servers, can transmit impossible-to-substantiate or even wrong proof-claims to other nodes, and yet somehow mathematics lumbers on. A mistaken claim invalidates work that assumed it was true and leaves other work unscathed. So long as no inconsistency is discovered in a foundation, it too remains unscathed.

An "equals button" would certainly have to be empirically convincing to be credible. How convincing your claim of a proof is would hang on the details of your machine.

In the specific case of the Goldbach conjecture, it's implausible that we would need to produce any unconstructible real constants or other infinite-information constants to carry out a proof. Any eventual proof, if there ever is one, would probably be of the kind that can be fully expressed in finite strings that you might write down on paper.

reasoning directly about values

I can understand the idea as a call for generalization: classical propositions allow literals, so let's allow arbitrary literals in our propositions. But I see it as fundamental to literals, as parts of expressions, that they be finitely recognizable. Look at a simple example of what formal reasoning looks like:

|- a = b     |- b = c
---------------------
      |- a = c


Trying to make sense of a schema like that when you can't finitely recognize the entities under discussion doesn't make sense:

|- # = #     |- # = #
---------------------
      |- # = #


Note that if these are literals and these equations actually hold, then all of the hashes are the same literal. This doesn't seem to be the same thing that's happening with reasoning inside a formal system.

Also, responding to your point about knowing that '1' is different from '2' by inspection -- that's not actually how many logics work. For example, in Peano Arithmetic, '1' and '2' are aliases for 's z' and 's s z'. That they are not the same is a theorem.

I can see the utility in introducing literals naming computational entities from some environment. But, again, I think the right mechanism for this is one of finitely recognizable names. Otherwise it doesn't seem to fit in with what propositions are.
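The point about Peano Arithmetic can be made concrete: numerals are finite syntax, and '1 ≠ 2' is decided by finite structural comparison of the terms `s z` and `s s z`, not by "inspection" of opaque values. A small sketch (my encoding, for illustration only):

```python
# Peano numerals as finite terms: Z is zero, S(n) is the successor of n.
Z = ("z",)

def S(n):
    return ("s", n)

def equal(m, n):
    # Structural equality on numerals, by simultaneous recursion.
    # Every call strips one constructor, so it terminates on finite terms.
    if m[0] == "z" or n[0] == "z":
        return m[0] == n[0]
    return equal(m[1], n[1])

one, two = S(Z), S(S(Z))
assert not equal(one, two)   # '1 != 2' decided by finite term comparison
assert equal(two, S(S(Z)))
```

Because the terms are finite data, the comparison always terminates -- which is exactly the property the comment argues literals in propositions should have.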

reasoning directly about values

Trying to make sense of a schema like that when you can't finitely recognize the entities under discussion doesn't make sense

Yes, of course. That's right. That's why Hewitt describes Proof? as total computable. All judgements in a proof must be performed in finite time, etc. (Proof? can hang on non-proofs, by the way.)

The issue here is the information content of allowable constants.

If we allow only Sentences which can be parsed from finite Strings, then all the constants in our Sentences have only finite information.

Yet using media other than paper we can manifest constants containing unbounded information. Since these kinds of constants contain unbounded information, they can't be expressed as a finite String.

The first schema you gave is just fine:

|- a = b     |- b = c
---------------------
      |- a = c


We can, using computer programs for example, construct material representations of Sentences, to which your schema might be applicable, even though the Sentences in question have no finite representation as a String.

The equivalence of the glyph "2" and the glyph "2" is not found in formal mathematical theories of numbers, yet you need judgements of that equivalence to do any formal math about numbers.

Using machines we aren't limited to manifesting constants as finite sequences of glyphs. We can manifest a constant using circuits and software, for example. We can manifest constants with no finite representation as a string. In some cases, we can make true judgements about the equality of these kinds of constants.

Computable sentences

Actually, as long as the actors are generating expressions, I don't see why you couldn't include them, in principle.

For example, rather than thinking of it as embedding a literal 3.1415926... into a sentence text, we could think of it as embedding an expression generated by an algorithm that expands to (3 plus-tenth (1 plus-tenth (4 ...))). I can imagine a framework for reasoning that such an infinite expression is equal to some other definition of pi, such as a series sum.

I'm still having a hard time imagining why this would be useful. What benefit comes with this complexity? (What complexity? You have to explain limits in order to evaluate expressions). I can reason about pi just fine with the usual approach of finite expressions without all of the headaches.

To find this compelling, I'll need an example of something useful that this approach makes possible. So far, all of your examples appear to be of things that are easily (and IMO much more naturally) expressed within more standard frameworks.

actor programs are nondeterministic

Actually, as long as the actors are generating expressions, I don't see why you couldn't include them, in principle.

The definition of an Actor is a finite string.

A manifest Actor is a physical entity.

A single Actor definition can produce uncountably many distinct manifestations.

In Actor programming languages, that situation can be described with operations for non-deterministic computation and delayed (lazy) computation.

I think part of your mental block here is that you are confining your interest in "mathematics" to (more or less) theorems of the sort that could be usefully written down in a text book and spread around the world and read years later. Not all useful theorems have to be communicable that way. We can have useful theorems about things for which you "had to be there".

What benefit comes with this complexity? (What complexity? You have to explain limits in order to evaluate expressions). I can reason about pi just fine with the usual approach of finite expressions without all of the headaches.

All of your arguments about pi can presumably be transcribed into a DL-founded theory quite directly. You aren't losing anything.

To find this compelling, ...

"What can we do with this?" is indeed an interesting question. A lot, I think, but it's a big topic.

Good stopping point :)

I'm unconvinced, but we'll see.

Inferring that x=y in Mathematics requires an actual proof ;-)

Inferring that x=y in Mathematics requires an actual proof ;-)

A single Actor definition

A single Actor definition can produce uncountably many distinct manifestations.

As I understand it:

• We start with a finite actor system (finitely many actors, finite messages).
• In each step, the actor system processes one available message and adds a finite number of available messages.
• After any finite number of steps, the actor system has a countable number of possible configurations.
• If I ask for an infinite number of steps, the resulting actor system will never physically manifest.

It isn't clear to me how you'll ever reach the 'uncountably infinite distinct manifestations'.

metaphor, cognition, and infinity

While this likely won't help, there's a book about metaphors which specifically addresses the issue of deriving the idea of infinity from repeating a process indefinitely. Also, the authors were not computer scientists, which prevents a "breathing our own exhaust" quality that might otherwise obtain.

When it was new, I read Where Mathematics Comes From by Lakoff and Nunez, because the general theme matched a naive intuition I had formed about how folks perceive mathematical entities as "real" by virtue of fitting nicely with human cognition (it feels good).

The discussion of infinity is pretty interesting. If you ask a normal human being to try to distinguish between reasoning about what happens "if we keep doing this indefinitely" and reasoning about the reified infinite result of having done that (despite the fact we can't), they're going to have trouble. I'd expect conversation to bog down.

If mathematicians have a jargon that resolves that by magic incantation, how do we distinguish that from mere rules in a game? I only ask that to suggest sometimes folks deny a game or rules are involved in the infrastructure of terms used in discussion.

nondeterminism (re a single Actor definition)

Actors can be non-deterministic. It's a fairly early page in the paper. There's a sentence at the top of the page that starts something like "there are uncountably many actors". There's a one-line program there of some interest to this thread.

Non-determinism was already

Non-determinism was already accounted for above. Each step processes one message from a finite set of available messages. Which message gets processed is not deterministic. But this still only gives us a finite branching factor based on the set of available messages.

Hewitt seems to be arguing for infinite actor systems, e.g. with arbitrary real-numbered values, actors of infinite size, or infinities of actors. With that assumption you could probably reach some uncountable infinities, but you'd certainly be unable to manifest any infinite system (i.e. by your earlier description that "a manifest actor is a physical entity").

By a diagonal argument, there are uncountably many Actors

By a diagonal argument, there are uncountably many Actors.

See page 10 of Inconsistency Robustness in Foundations
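The standard diagonal construction (not Hewitt's formulation, just the familiar argument) can be sketched: given any countable enumeration of bit streams, build a stream that differs from the i-th stream at position i, so no enumeration can be exhaustive.

```python
def diagonal(enumeration):
    # `enumeration(i)` is the i-th bit stream, itself a function from
    # digit position to bit. The diagonal stream flips the i-th bit of
    # the i-th stream, so it differs from every enumerated stream.
    return lambda i: 1 - enumeration(i)(i)

# A toy enumeration: stream i is all-zeros when i is even, all-ones when odd.
enum = lambda i: (lambda j: i % 2)

d = diagonal(enum)
# d differs from every enumerated stream at the diagonal position:
assert all(d(i) != enum(i)(i) for i in range(100))
```

Since a bit stream is one way to manifest a real number, this is the shape of the claim that there are uncountably many such manifestations.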

interplay of denotation and real state (re non-determinism...)

Consider the Actor definition (from the paper):

  Real.[] ≡ [(0 either 1), ⍒ Postpone Real . []]


The value denoted by the definition of the nullary message is an infinitely long list of bits, each chosen randomly and independently of the others. The constant denoted contains an infinite amount of information.

That denotation accurately describes how programs can use such a value and how the value can be treated in mathematical propositions and proofs.

Of course, in an implementation bits are produced only on-demand so at no point is infinite storage needed.
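A Python generator captures the on-demand behavior described here (this is my sketch of the idea, not a translation of the Actor semantics): each bit is chosen nondeterministically, and the tail is only ever computed when asked for, so no infinite storage is needed.

```python
import random

def real():
    # Lazy stream of random bits: the analogue of (0 either 1) followed
    # by a postponed recursive tail. Bits are produced only on demand.
    while True:
        yield random.choice([0, 1])

r = real()
prefix = [next(r) for _ in range(8)]   # only 8 bits ever materialize
assert all(b in (0, 1) for b in prefix)
```

Each call to `real()` manifests a fresh stream, and two such streams almost surely differ at some finite position, which is the operational face of the "uncountably many manifestations" claim.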

Does a recipe for a cake

Does a recipe for a cake 'denote' a cake?

I'm not sure how this actor definition denotes anything other than an actor definition. If you instantiate it, you'll have an actor system. Everything I noted above still applies. The amount of information involved is finite.

Note that I specifically quoted your sentence where you said we'll have uncountably distinct manifestations, where you described 'manifest' to mean the physical sense. In that context, pointing out what a generating process for an infinite string of bits/cakes/etc. might 'represent' to a mathematician in some non-physical universe doesn't seem very relevant.

An Actor can evolve into any of uncountably many Actors

An Actor can evolve into any of uncountably many different Actors.

Edit: I.e., an Actor can evolve into a real number that is different from any element in a countable set of real numbers.

Sure.

Given infinite time.

My ex does this too.

My ex does this too.

You're not talking about CS

if you have reals (infinite memory) and the time to distinguish reals (infinite time).

Nonsense.

If an actor is defined as having a program of finite length and state of finite length it can only have a countable number of states.

If you claim otherwise then you don't understand what "countable" means.

Also the number of actors with finite addresses is countable, therefore any uncountable set of actors is a set that requires infinitely long addresses.

Also, being able to represent a number that isn't rational has nothing to do with being able to represent the whole set of reals. The set of numbers you can represent in a one-to-one correspondence with a finite program and finite state is countable.
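The countability claim can be made concrete: a program is a finite string, and the finite strings over any fixed alphabet can be enumerated in shortlex order, giving each one a unique natural-number index. A minimal sketch, with an assumed two-symbol alphabet:

```python
from itertools import count, product

def enumerate_strings(alphabet="01"):
    """Enumerate every finite string over `alphabet` exactly once,
    in length-then-lexicographic (shortlex) order.

    Since source code is a finite string, this exhibits a one-to-one
    correspondence between programs and natural numbers: the set of
    finite programs is countable.
    """
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

gen = enumerate_strings()
first = [next(gen) for _ in range(7)]
# shortlex order: "", "0", "1", "00", "01", "10", "11"
```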

On another topic: if you're treating every possible computer program as completed, even the infinite ones, then you're into a superset of anything that mathematics has ever taken to a limit. This isn't guaranteed to be analyzable, and is certainly full of paradoxes.

cakes relevant to mathematicians

pointing out what a generating process for an infinite string of bits/cakes/etc. might 'represent' to a mathematician in some non-physical universe doesn't seem very relevant.

Unless, say, a programmer wants to reason mathematically about the behavior of his programs, or a mathematician wants to have some physical intuition for his inductions. Other than that, it's just words on paper possibly.

Okay.

It might theoretically be relevant in a completely different context than the one specifically regarding "uncountable distinct number of (physical) manifestations" for actors.

Though, it seems easier to reason mathematically about the co-inductive generating process. No need to involve infinities. Even mathematicians have a difficult time reasoning about real numbers. A lot of reasoning about infinities simply doesn't apply to computation, e.g. Ramanujan sums (1 + 2 + 3 + ... = -1/12).

uncountable distinct

You are eliding the quote you take from me.

I said that a single actor definition can give rise to an uncountably distinct number of (physical) manifestations.

How many different possible physical manifestations can a single actor give rise to? Uncountably many.

This is not even controversial.

Though, it seems easier to reason mathematically about the co-inductive generating process. No need to involve infinities.

A countably infinite number of bits is entailed in the concept of an infinite, lazily evaluated list. Again, this is simply an ordinary literal truth, hardly anything controversial.

A lot of reasoning about infinities simply doesn't apply to computation, e.g. Ramanujan sums

That's very interesting. Which page in the paper are you referring to?

physical implies finite time

For any finite time (e.g. measured in a count of messages processed), the number of possible manifestations for an actor system is finite i.e. with a finite branching in each step. The physical universe will never experience infinite time. Hence a physical actor system does not give rise to uncountably many distinct physical manifestations.

A countably infinite number of bits is entailed in the concept of an infinite, lazily evaluated list.

If you ignore the 'lazily evaluated' aspect, then all you have left to reason about is 'infinite list'. But programs are finite. Lazily evaluated lists have finite encodings. If you don't ignore this aspect, you can leverage it in your mathematical reasoning.

Which page in the paper are you referring to?

That was a general observation. Reasoning that requires infinite steps or observations is similar to proof-by-contradiction, relying on a truth we cannot observe.

definition specifies vs. # of runs

Hence a physical actor system does not give rise to uncountably many distinct physical manifestations.

Right. Also, the one-line definition shown defines uncountably many physical actor manifestations. (I think you are just having trouble parsing my sentence.)

non-physical mathematical idealizations

You've been using the word "physical" in a very confusing way throughout this thread to include non-physical mathematical idealizations. I can only make sense of your claim here if I change "physical manifestation" to "non-physical idealized manifestation," for the reasons dmbarbour has spelled out, quite clearly I think. For another example, you used the absurd phrase elsewhere in this thread that "using media other than paper we can manifest constants containing unbounded information." I have no idea what you could mean there. I think it would help, for clarity purposes, if you distinguished between the actually physical, and mathematical idealizations of physical systems that are based in physical intuitions, but drop certain physical constraints (like finite time).

source text vs. manifestation

You've been using the word "physical" in a very confusing way throughout this thread to include non-physical mathematical idealizations.

Consider the cardinality of the set of possible outputs from a program.

The set of outputs from the one-line program that defines an infinite list of random bits is uncountable.

"using media other than paper we can manifest constants containing unbounded information." I have no idea what you could mean there.

On a sheet of paper you can only fit some finite number of digits. Using other hardware, you can create a device that produces an unbounded number of digits.

I think it would help, for clarity purposes, if you distinguished between the actually physical, and mathematical idealizations of physical systems

I have been.

The set of outputs from the

The set of outputs from the one-line program that defines an infinite list of random bits is uncountable.

(Sorry to break into another subthread.)

There is no uncountable set here, because there are no completed outputs of the program. It's a program that cannot terminate; there are only partial outputs. And the number of partial outputs from the program is countable.
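The countability of partial outputs can be made explicit (a hypothetical sketch): after n steps there are at most 2^n bit prefixes, and the set of all partial outputs, over every finite n, is a countable union of finite sets:

```python
def partial_outputs(n):
    """All possible partial outputs (bit prefixes) after n steps.

    There are exactly 2**n of them; since every partial output of the
    program appears at some finite step, the set of all partial
    outputs is a countable union of finite sets, hence countable.
    """
    if n == 0:
        return [""]
    return [p + b for p in partial_outputs(n - 1) for b in "01"]

assert len(partial_outputs(3)) == 8  # finite at every step
```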

For any countable set of reals, an Actor real nonelement of set

For any countable set of reals, there is an Actor for a real number that is not an element of the set, i.e., it differs from each element in the set.

Consequently, the set of Actor reals is uncountable.

Underlying disagreement

To add on to what John is saying here, from a different angle, I'd like to note that, as technical points, the fact that 2^ω is uncountable while the set of all finite prefixes of its elements is countable is uncontroversial. These are essentially the two "sides" of the argument here. The real disagreement is as to what exactly the "set of outputs" denotes. I think it's possible to use language so as to be very clear about which set is being considered, and by so doing, eliminate all disagreements.

re The real disagreement is as to what exactly the "set of outpu

The real disagreement is as to what exactly the "set of outputs" denotes. I think it's possible to use language so as to be very clear about which set [finite prefixes after N steps of computation vs. 2^ω] is being considered, and by so doing, eliminate all disagreements.

You are talking about two sets: 2^ω and a set of list prefixes of lazily computed lists.

Here is a different way to look at that: Your two sets are really just two aspects of a single "thing".

Here is how I see it....

Let's recall the actor definition:

  Real.[] ≡ [(0 either 1), ⍒ Postpone Real . []]


We can read that as constructing a set of lists by induction.

Roughly:

  0. is a real number
  if X is a previously constructed real number then so are X0 and X1


Separately we could define an equivalence relation:

  For real 0.X:

  0.X = 0.X0
  0.X < 0.X1


We have a simple, inductively defined set of prefixes, right? That's the definition. And second we have an equivalence relation among prefixes, right?

An "equivalence set of prefixes" is a set of prefixes, all equal to one another. 0.X and 0.X0 are both in the same equivalence set of prefixes. Each equivalence set of prefixes is countably infinite.

Now please consider the set of equivalence sets of prefixes. This set of equivalence sets describes the reals (from 0 to 1) and their customary ordering.
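Assuming the intended relation is that trailing zeros don't change the denoted value (0.X = 0.X0), the equivalence test on prefixes can be sketched directly:

```python
def equivalent(x, y):
    """Test whether two finite binary prefixes denote the same value
    under the rule 0.X = 0.X0.

    Appending a zero never changes the number, so two prefixes are in
    the same equivalence class exactly when they agree after stripping
    trailing zeros.
    """
    return x.rstrip("0") == y.rstrip("0")

assert equivalent("101", "101000")    # 0.101 = 0.101000
assert not equivalent("101", "1011")  # 0.101 < 0.1011
```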

At the same time -- and isn't this kind of, i dunno, beautiful or something --.....

The same actor definition that gives us a simple set construction of the reals as equivalence classes of prefixes...

The same definition describes a lazy, non-deterministic computation that can be physically realized to produce as output successively longer members of one of those equivalence classes.

Isn't that cool?

which real is it? (physically speaking)

A machine can decide equality between two physically manifest reals by comparing their bits, or by comparing their actor addresses, or by some combination.

You people are driving me crazy

A computer with infinite memory and infinite time is not a computer. What are you on about?

Josh your misunderstanding is in this vicinity

A computer with infinite memory and infinite time is not a computer.

I am pretty sure (there might be mistakes but I'm pretty sure) that if you read back over my comments I don't talk about infinite memory or time. It's subtle. I can see how you got that impression, for sure, but... no, not really. There are some hairs to split. It's not easy to grasp at first look, really.

In that case

a physically manifest real is a member of some countable infinity at best, not a real.

The set of square roots of integers is not uncountable, etc.

And to be honest you can't even represent countable infinities in a computer.

Good luck storing something on the order of Graham's number in this universe.

What's going on here anyway?

If you didn't give the impression of some deep understanding of Hewitt's gobbledygook I would wonder if you know what "real" means or what "uncountable" means.

Better

Your wording here was careful enough that I can agree with more or less everything in your comment. There are some technical points, for example it looks like you're defining a strict order, not exactly an equivalence relation, and it's not clear from what you've specified how one would show that 0.1000... and 0.01111... are in the same equivalence class, but I don't see any obstacles to filling in the details and making everything rigorous.

Isn't that cool?

I do indeed find it cool to think of real numbers as computations that generate successively more accurate approximations, which is an idea that has a long history in constructive mathematics. Some of these ideas have led to actual computer implementations, which are easy to learn about by googling "exact real arithmetic." At least one implementation is in a proof assistant and doubles as a rigorous logical development of the real numbers. Now it's down to subjective taste, but I prefer these other approaches, as they are rather careful about treating infinities and dealing with computability. (It also helps that they have clear and complete expositions.)

Thank you

There is no uncountable set here, because there are no completed outputs of the program.

Also you can't represent the continuum with finite programs. This isn't computer science.

Ok, maybe they want to ignore computability and put hypercomputing on some sort of firm ground (which it may not have yet), and use it as if it were normal mathematics... But doing that as an assumption is invisibly changing the subject. It's like a fallacy based on punning words.

I guess Hewitt and Thomas don't see actors as a model of computing; they don't see actors as programs. They see actors as something like set theory with messages, where infinite processes are treated as complete. That's kind of mind blowing.

It's not CS, but it's interesting.

But you know, Hewitt thinks that self reference is the root of paradox, but if infinities are the root of paradox then he's in for a nasty surprise, because he's pretended infinities into a mock CS as if that were natural. Who knows what the result will be?

unbounded or infinite time

I think I could have worded my comment in a better way so that it wouldn't come across as an attack. My hope was that, by using more precise language, we could move the discussion along. I don't think you disagree with any of dmbarbour's observations about actors limited to finite times (correct me if I'm wrong). Do you object to my characterization of programs running for infinite time or unbounded time as being (quite useful) mathematical idealizations? Another good example of a useful idealization is the infinite tape of a Turing machine. If I got stuck in an argument with someone insisting the infinite tape was a real physical possibility, I wouldn't consider that a productive discussion.

The constant denoted

The constant denoted contains an infinite amount of information.

Not in a useful sense. What's there is a perpetually incomplete potential to generate further random bits. That's no more "infinite" than any other nonterminating computation. What we mean when we say "nondeterministic" is that later bits in the sequence don't depend on any information latent in the system before they're generated; so at each point in the unbounded computation, there's only as much information there as has been generated up to that point.

re "The constant denoted"

Not in a useful sense.

Confronted with you who denies the existence of a useful sense and Hewitt who demonstrates one, which claim should I believe?

Many excellent and some quite old definitions of the real numbers have an inductive character. Hewitt's is one of them.

You seem to be stuck on the notion that Hewitt's inductive definition of reals is parsimonious: That it both stands alongside classical definitions not motivated by programmable computation per se, and that it also can be interpreted as a straightforward program in an actor language.

It is as if by elaborating a parallel between classical math and computing, you feel Hewitt has taken something away.

Yes, it is odd to talk about a constant with infinite information in the same breath as a physically realizable computation that contains infinite information only in an unbounded future -- yet logically the unification of these two concepts works just fine.

Infinite Constants

Infinite constants are imaginary. They do not exist. Yes you can talk about them, even prove things about them, but that is simply an existentially quantified statement.

The existentially quantified statement about the imaginary thing exists (because it is finite) and can be reasoned about even though the thing it refers to does not exist.

What I don't understand is how to reason about the generator function of a real, if it is an opaque actor. The only way we can prove useful identities on Reals is by algebraic manipulation of the generator function. The only way to get a real number is as the output of a computation anyway. You cannot measure them or input them.

re" "only way we can prove useful identities of reals"

The only way we can prove useful identities on Reals is by algebraic manipulation of the generator function.

Keep in mind that even mundane-seeming proofs of identity are useful in information systems. If I say:

x = 3.0
y = 3.0

and ask you to prove that x == y, then after some algebraic manipulation you will have 3.0 ==? 3.0 which you'll solve "by inspection" (such as by a built-in feature of your CPU).

Similarly, if your CPU knows behind the scenes that two bitstreams in question happen to be wired to be copies of a single stream, it can assess their identity "by inspection".
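A minimal sketch of that "by inspection" judgement, using Python object identity as a stand-in for comparing actor addresses (the names here are my own):

```python
import random

def bit_source():
    """An unbounded, nondeterministic bit stream (a stand-in for the
    kind of stream discussed above)."""
    while True:
        yield random.randint(0, 1)

stream = bit_source()
x = stream  # two references ...
y = stream  # ... wired to one and the same underlying stream

# Equality "by inspection": compare the references (the analogue of
# comparing actor addresses), never the unbounded bits themselves.
assert x is y

z = bit_source()   # a distinct source: address comparison cannot
assert x is not z  # equate it, even if its bits happened to agree
```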

What I don't understand is how to reason about the generator function of a real, if it is an opaque actor.

There is an example.

Algebraic manipulation of power series.

I am talking about things like the Euler identity, and algebraic manipulation of power series. In the case of 3 == 3 we can prove by structural induction, i.e. s(s(s Z)) == s(s(s Z)).

e comes from natural logarithms, Pi from circles, i as the square root of -1, and yet they all take part in the Euler identity, along with the trig functions sin and cos.

I don't see how you could prove the Euler identity by inspection.

re algebraic manipulation

In the case of 3 == 3 we can prove by structural induction, i.e. s(s(s Z)) == s(s(s Z)).

And yet that itself reduces to showing that s == s and that Z == Z by inspection.

At the bottom of "doing math" either by hand or by machine there are judgements made by immediate apprehension. All I have been saying in this side thread is that immediate apprehension of equality is not limited to comparisons of entities containing only finite amounts of information (as evidenced by the example of a machine that knows two unbounded bitstreams are equal because it has built-in knowledge that they are two copies from a single source).

Confronted with you who

Confronted with you who denies the existence of a useful sense and Hewitt who demonstrates one, which claim should I believe?

That's not the situation. I explain specifically why it doesn't exist; neither Hewitt nor anyone else can "demonstrate" that it exists, because it doesn't. The definition you provided explicitly specifies a never-ending computation, where the "thing" you claim to talk about is always, throughout the never-ending computation, only partly constructed. Since the computation never ends, the "thing" is always incomplete. You defined it that way; I'm just pointing out what you did.

re confronted with...

Why aren't you applying your criticisms to other inductive definitions of ℕ or ℝ?

The definition you provided explicitly specifies a never-ending computation, where the "thing" you claim to talk about is always, throughout the never-ending computation, only partly constructed.

Additionally, interestingly enough, under some circumstances a program can know that two such infinite lists of bits are in fact equal.

Why aren't you applying your

Why aren't you applying your criticisms to other inductive definitions of ℕ or ℝ?

We weren't discussing those. I was pointing out that you've defined a recursive nondeterministic procedure for generating an infinite structure, rather than defining a completed infinite structure. Infinite structures cannot be completed.

With any inductive definition, you can get into trouble by naively treating the entire defined structure as a completed whole. Here we have, though, a particularly clear-cut case where there is no definition of the sequence at all; instead there is a finite specification of how to proceed with an unbounded nondeterministic process to generate elements of the sequence. The definition does not tell you what the nth element of the sequence will be; for that, you would have to actually carry the computation out to the nth step.

Real.[ ] is a perfectly good indeterminate Actor

Real.[ ] is a perfectly good indeterminate Actor.

Not very useful though

It doesn't seem very useful though. A power series expansion for sin/cos and e seems much more useful, and capable of proving Euler's identity.

The problem with Real.[] is it may return Pi, but you will never be able to confirm that. All you know is it returns some real, and you may be able to test it is approximately Pi, but not prove it is exactly Pi.

Real.[ ] is useful to show there are uncountably many Actors

Real.[ ] is useful to show that there are uncountably many Actors.

If an actor is a computer program, then

this can only be nonsense. WHAT are you talking about?

Some Actors are computer programs, e.g., of type Expression◅aType▻

Some Actors are computer programs, e.g., those of type Expression◅aType▻

Haddocks' Eyes.

The name of the song is called ‘Haddocks' Eyes.’”

“Oh, that's the name of the song, is it?" Alice said, trying to feel interested.

“No, you don't understand,” the Knight said, looking a little vexed. “That's what the name is called. The name really is ‘The Aged Aged Man.’”

“Then I ought to have said ‘That's what the song is called’?” Alice corrected herself.

“No, you oughtn't: that's quite another thing! The song is called ‘Ways And Means’: but that's only what it's called, you know!”

“Well, what is the song, then?” said Alice, who was by this time completely bewildered.

“I was coming to that,” the Knight said. “The song really is ‘A-sitting On A Gate’: and the tune's my own invention.”

An expression of type real doesn't make the expression itself have the qualities of a real, such as being uncountable.

Just naming your type as a member of reals doesn't have magical qualities.

Also I'm getting confused: you're calling numbers actors now? Actors are supposed to be a model for computation, where an actor is a point that receives and sends messages, not a single name for everything you can think of.

Josh: less humpty dumpty, please?

Since you are into Lewis C.

Your language policing is less interesting than constructive dialog about what is actually being said. You could regard yourself as, in this context, trying to pick up a new language (both formal and with colloquialisms) by immersion.

A program that spits out random digits

is pretty much the only way you can (sort of get) the reals out of a computer program.

But that doesn't make actors uncountable, because one result from an actor is not an actor, it's a result.

You're confusing levels. The song is not the name of the song.

The expression is not itself a member of the type of the expression.

Josh: rigor (2)

You're confusing levels.

I have no idea what you mean by a "level".

But that doesn't make actors uncountable,

The set of actors denoted by an actor program is, for some programs, uncountable. The definition given for "Real.[]" is an example.

The expression is not itself a member of the type of the expression.

Uncountable Sentences are formally defined. Have you read the paper? Where did you get stuck?

God, I give up

I looked up the definition of "Real.[ ]"

It's exactly what I thought it was. It can spit out one indeterminate real and never finish...

You're calling the infinite result of "Real.[ ]" an actor? Uhm no? "Real.[ ]" is an actor, its result is a string of 0s and 1s.

If you call everything everything then yes, you have a wonderfully impressive mysticism. And you accusing me of lacking rigor?

Pretty much.

Once again

"Real.[ ] is useful to show that there are uncountably many Actors."

Uhm no. It shows that you can get an infinite string of random digits out of an actor.

The way to get from that to "uncountable actors" is missing.

How do you show that an unfinished result is "an actor"

Josh have you read the paper? Where did you get stuck?

re: "this can only be nonsense. WHAT are you talking about?"

re why aren't you applying your...

By coincidence I think my very recent reply to a different comment responds to your point here.

I'd previously read that

I'd previously read that other reply to a different comment. I don't think it has any bearing. You talk there about two things being aspects of the same "thing", but I don't see that there's a "thing" there for them to be aspects of, whatever one ought to mean by "aspect" in this context. Btw, something went awry with your apparent attempt to define the reals by induction and a related set of equivalence classes; and I suspect this isn't a technicality, but instead may get at what you're missing about the difference between enumerations and completed structures (but I'm not exactly sure what you're missing). You haven't defined the real numbers (in the interval), nor even defined the rational numbers; you've defined by induction the rationals that have a finite binary representation, i.e., those whose denominator is a power of two. Any rational number whose denominator isn't a power of two does not correspond to any equivalence class there. For example, 1/3.

re "I'd previously read that..."

Btw, something went awry with your apparent attempt to define the reals by induction [....] Any rational number whose denominator isn't a power of two does not correspond to any equivalence class there. For example, 1/3.

I cleaned it up in a new topic post.

"Dedekind, Cantor, Conway, & Hewitt (w/ some Chomsky)"

There, I construct the reals in terms of partitions of sets of finite bitstrings, drawing out a correspondence of rationals to regular languages, constructible reals to recursively enumerable languages, and reals in general to languages defined as prefixes of an unbounded history of coin flips. There is a clear relation to Dedekind and Conway, and the construction I give is basically a paraphrase of Hewitt's construction of Real.

Have fun. I think it's cool. Sincerely, for real and for true, mate.

Validity (i.e., correctness) of proofs is decidable

There is a computable total procedure Proof? such that
∀[p:Proof, s:Sentence]→ ├p ⌊s⌋ ⇔ Proof?.[p, s]=True

validity of proofs

There is a computable total procedure Proof? such that
∀[p:Proof, s:Sentence]→ ├p ⌊s⌋ ⇔ Proof?.[p, s]=True

I should pick a different word from validity then.

Your concept is a computable total procedure that checks whether the form of a proof is correct. That can be done in finite time.

I guess I should say verified, then.

Verifying a proof means inspecting it to see if assertions made by inspection are accurate (such as an assertion of equality between two reals).

What is a proof?

Somehow you have a total procedure to verify proofs but you can't enumerate all proofs, then, right? Because if you can enumerate proofs and you can check their validity, then you can enumerate theorems.

These proofs must be abstract entities, I guess, that somehow witness the truth of sentences? I don't know... none of this makes any sense to me. If your proofs aren't actual proofs, then it doesn't seem truthful to read "├ Ψ" as asserting the provability of Ψ.

as I see it: re What is a proof?

Taking constants as atomic, all sentences are finite. All proofs are finite. This is because they are inductively constructed in finite numbers of steps.

If we take judgements such as testing the equality of constants as a total procedure, Proof? is a total procedure.

Not all constants contain a finite amount of information. The one-line Actor construction on page 10 (infinite random stream of bits) illustrates that.

In an operational model, Proof? may not terminate, even though it appears total when taking constants as atomic and equality of constants as a total procedure.

So there is a hair that could be split here between a procedure defined within DL that is abstractly total, and an implementation of DL in which a faithful implementation of that total procedure must not (in the general case) terminate for all inputs (but must terminate for some).
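The hair being split can be illustrated with stream equality: comparing two bit streams element by element terminates, with a definite answer, exactly when the streams differ, so a faithful implementation of an abstractly total equality judgement fails to halt on precisely the "equal" inputs. A sketch under that reading; the cut-off parameter is artificial, added only so the demo halts:

```python
from itertools import count

def ones():
    """The constant stream 1, 1, 1, ..."""
    while True:
        yield 1

def alternating():
    """The stream 1, 0, 1, 0, ..."""
    while True:
        yield 1
        yield 0

def streams_differ(xs, ys, limit=None):
    """Bitwise comparison of two streams.

    Terminates at the first mismatch; on equal streams it would run
    forever. `limit` is an artificial bound, not part of the abstract
    judgement.
    """
    steps = count() if limit is None else range(limit)
    for _, x, y in zip(steps, xs, ys):
        if x != y:
            return True   # verified by inspection in finite time
    return False          # reachable only because of the bound

assert streams_differ(alternating(), ones())           # halts at step 1
assert not streams_differ(ones(), ones(), limit=1000)  # needs the bound
```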

Ok

So if we restrict to the subset of DirectLogic that coincides roughly with second order logic, we can enumerate the sentences but not the proofs, because some of those might contain actors and there might be uncountably many of them.

Maybe that is what he has in mind. He can always clarify. It sounds like the model I had in mind isn't going to work with the axiom Hewitt just produced. I'm not going to try to think of another model, though. I will say that none of the things Hewitt is doing seem to have the flavor of increasing consistency strength of the system. I suspect it will either have a relatively straightforward set theoretic model or will be inconsistent. I could be wrong.

not (re) OK

There are uncountably many Sentences in DL.

"Proof?" is a total computable proc; theorems of Math not enum

"Proof?" is a total computable procedure but the theorems of Mathematics are not computationally enumerable.

⊢<sub>ℕ</sub>Consistent[ℕ]: open problem in Math

Consistent[ℕ] is a fundamental open problem in (Meta-)Mathematics for the Peano/Dedekind axiomatically defined categorical closed theory ℕ where Consistent[ℕ] means that no contradiction can be derived from the axioms and rules of inference of ℕ.

Arbitrary Reals Do Not Exist

I think this is interesting, however I am not sure I see the point, or see it as something fundamental yet. Surely 3.141... = 3.141... if the generating expressions are equal. In this case we might have a symbol Pi that represents this number, and we can clearly say Pi = Pi with no problems. e^(i * Pi) = -1 is an important identity, and relies on Euler's formula, which reduces to a series limit (remember I mentioned Zeno's paradox). So we should be able to represent such reals as power series expansions, and evaluate the limits of such series. There are several series that converge to Pi, and any system for mathematics must recognise their equality.

Personally I think the answer is that the reals should not exist in the system. You cannot create a real number by enumerating digits, as you will stop at some finite-precision number before the universe ends. All other reals should be represented by the equation that generates them, and equality can be proved by a transformational proof on the equation only. So we can prove e^(i * x) = cos x + i * sin x by power series expansion, for example.

how to construct an arbitrary real

The real numbers in the range [0,1] are isomorphic to their infinite binary expansion. I.e., 0 is an infinite list of 0 bits. 1 is an infinite list of 1 bits. 1/2 is a list with a 1 bit first, then infinite 0 bits.

As a notation, you could write 1/2 (an infinite list of 0s preceded by a 1) on paper, like this: 1‾0

3/4 = 11‾0

1 = ‾1

1/3 = ‾01

What's going on here is that there is enough regularity in those binary expansions for you to reason about them inductively, very easily.
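That regularity is exactly that eventually repeating binary expansions denote rationals. Assuming the overbar notation above means "prefix followed by an infinitely repeating block", the exact value falls out of the geometric-series identity:

```python
from fractions import Fraction

def binary_value(prefix, repeat):
    """Exact value of the binary expansion 0.<prefix><repeat><repeat>...

    The repeating block, taken as an integer r of width k, contributes
    r / (2**k - 1) (geometric series), scaled into place past the
    prefix. Mirrors the overbar notation used above.
    """
    value = Fraction(0)
    for i, bit in enumerate(prefix, start=1):
        value += Fraction(int(bit), 2 ** i)
    if repeat:
        r = int(repeat, 2)                       # repeating block as an integer
        block = Fraction(r, 2 ** len(repeat) - 1)
        value += block / 2 ** len(prefix)
    return value

assert binary_value("1", "0") == Fraction(1, 2)   # 1‾0
assert binary_value("11", "0") == Fraction(3, 4)  # 3/4 = 11‾0
assert binary_value("", "1") == Fraction(1)       # 1 = ‾1
assert binary_value("", "01") == Fraction(1, 3)   # 1/3 = ‾01
```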

In this age of miracle and wonder you could also denote a unique real number with an active device. Picture a little box with a button. When you push the button it displays a 0 or a 1, the next digit in some real number.

Perhaps the digits a given device produces are deterministic or perhaps they are random. It makes no difference.

Suppose we have a device that produces random digits.

Does that device represent a specific real number, one that incidentally is (with probability 1) a non-constructible real?

On the one hand you could say that the real number we're thinking of "doesn't exist" since we will never be able to examine all of its digits.

On the other hand you could say that the real number indicated by that device certainly does exist, in the sense that any assertion made about its nth digit can be verified or refuted by inspection.
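That "verified or refuted by inspection" sense can be sketched with a memoizing device (random bits here are an assumed stand-in for any deterministic or nondeterministic source): any claim about the nth digit is decidable by pressing the button enough times, and re-inspection always confirms it:

```python
import random

class DigitDevice:
    """A button-press device producing the digits of some real on demand.

    Digits are generated lazily and cached, so repeated inspection of
    digit n always gives the same answer: any assertion about the nth
    digit can be verified or refuted in finite time.
    """
    def __init__(self):
        self.digits = []

    def digit(self, n):
        while len(self.digits) <= n:
            self.digits.append(random.randint(0, 1))  # press the button
        return self.digits[n]

box = DigitDevice()
claim = box.digit(10)          # inspect digit 10
assert box.digit(10) == claim  # re-inspection confirms the assertion
```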

Here is a practical consideration:

We can use the mathematics of real numbers to reason about the output of this device, even though we can prove fewer facts about the number than if it were a known, constructible real.

Why not then, as DL does, allow numbers such as this to appear as constant terms in abstract Sentences?

If your foundational theory makes it easy to allow such numbers, even if you could remove them from consideration in a sub-theory, isn't that a virtue of your foundational theory? The virtue being that it recognizes and makes useful the commonality between such output signals of random bits, and abstract real numbers?

Rational numbers are fine.

Rational numbers are not real numbers. So a/b, where a and b are finite integers, is rational. No problem with these numbers, and you can represent them as fractions.

Your representation is just another finite domain, and if you could define addition and multiplication you could have an algebraic group. The maths of this finite domain does not require reals, just like the maths of rational numbers does not require reals.

The device cannot output a real number, when the universe ends, it will have output a finite number of digits, and this number will be rational.

Only special reals like Pi and 'e' actually exist, and they are the output of symbolic equations. It is vital we can establish the identity of all the different symbolic generators of Pi. The only way I know to do this is by algebraic manipulation of the pure generator functions.

rational numbers are fine

The device cannot output a real number: when the universe ends, it will have output only a finite number of digits, and this number will be rational.

You can take this approach but if you do, you will need a thoroughly finitary foundational theory.

For example, applying your principle, we cannot use the classical natural numbers. Instead, we must assume that there is some finite largest integer.

If you are not prepared to make such a drastic commitment, then you may not accuse my digit-generating box of being a rational number, because that will lead to falsehoods. In particular, you will be able to falsely prove the existence of some specific rational, but I will be able in principle (with probability 1, at least) to show that you have produced a contradiction and your system is inconsistent.

Not really

I don't think a foundational logic needs an infinite set of objects representing the real numbers.

think that through (re not really)

I don't think a foundational logic needs an infinite set of objects representing the real numbers.

Interesting theory. Do you have a write up of it?

You are in (I think) good

You are in (I think) good company: MF118: Real fish, real numbers, real jobs. (The connection to real numbers comes around the five minute mark.)

That might not be the best first introduction to this like-minded mathematician, so here is a link to his (ongoing) series that deals with this and other number-math (not logic) issues: MathFoundations. There are over one hundred videos on very specific topics. If the first one that I linked is not interesting, perhaps one of the other ones will be.

you may not accuse my

you may not accuse my digit-generating box of being a rational number because that will lead to falsehoods

If your digit generating box has a finite size, i.e. a finite number of states, and only a single event input, then it will certainly generate a rational number.

It'll certainly never

It'll certainly never generate an irrational number, since the output would only be irrational if the box had finished generating an infinite number of digits, which won't happen by the definition of "infinite".

re: it'll certainly never

It'll certainly never generate an irrational number, since the output would only be irrational if the box had finished generating an infinite number of digits, which won't happen by the definition of "infinite".

Anyone is certainly welcome to propose a foundation of mathematics for computer science in which there are no uncountable classes of object. It could be interesting. Instead of saying "fixpoints don't exist in mathematics" you could be the one saying "turing machines don't exist in mathematics; neither does lambda calculus".

This has nothing whatever to

This has nothing whatever to do with uncountable classes of object. This is about using infinities as an excuse to not bother to define things, which makes it possible to make arbitrary and ultimately meaningless claims. Completed infinities are one of the ways of viewing the meltdown that befell mathematical foundations (one impression of the elephant), and it does seem Hewitt has failed to realize he's introduced an even more virulent foundational failure than the ones he claims to have eliminated (setting aside the question of whether he actually eliminated them, or of whether eliminating them does what he thinks it does; note, I've not forgotten I'm to take a look at the paper you recommended).

John Shutt, please clarify

This is about using infinities as an excuse to not bother to define things, which makes it possible to make arbitrary and ultimately meaningless claims.

Fine. That is an interesting assertion. I think it is not a fair description of the work under consideration but I could be wrong.

Let us explore the question.

What specifically is it that you think has not been defined? Or a specific example thereof? Perhaps I can be convinced you are right or perhaps I can fill in a gap for you.

Doubts concerning the

Doubts concerning the foundations of mathematics pretty much start at infinities and spiral out from there. Remember that quote attributed to Kronecker, "God made the integers, all else is the work of man"? Starting point. The basic objection to the Law of the Excluded Middle (another view of the elephant) is that it applies only to completed, which is to say finite, structures. Russell and Whitehead avoid impredicative definitions by requiring any particular definition to contain only a finite depth of nesting of other definitions, i.e., a finite type (another view of the elephant). And, just to make a clean sweep of the three schools of thought, Hilbert's meta-mathematics is finitistic (sometimes called finitary). Finiteness, completion, is the heart of the whole thing; it's the essence of the concept of proof; "infinite proof" isn't an oxymoron, it's a contradiction in terms.

You ask for specifics. Makes sense, from your perspective. From my perspective, there's an efficient way to pursue such specifics, and an inefficient way. Efficient is to read Hewitt's paper (the one you recommended) directly, if one is going to do this at all. I said I would, and my intention has not changed — out of respect to you, not to Hewitt who started out with some credibility on which to earn a fair hearing and has proceeded to squander it. If you are overlooking something, I would expect it to be, most likely, an intelligent mistake resulting from looking for more sense beneath Hewitt's work than is actually there.

hint (re infinite proofs)

Finiteness, completion, is the heart of the whole thing; it's the essence of the concept of proof; "infinite proof" isn't an oxymoron, it's a contradiction in terms.

There aren't any "infinite proofs" in DL but there are some constants that contain a countably infinite amount of information.

There aren't any "infinite

There aren't any "infinite proofs" in DL.

I didn't say there were, and reserve judgement on that specific technical point, though noting the technical point may turn out to be slippery.

insult is not argumentation

Hewitt has failed to realize he's introduced an even more virulent foundational failure than the ones he claims to have eliminated

Explain, please.

see above

See above. (Note: I have no interest in arguing, as well as no interest in insulting. If I think a line of reasoning is invalid, I may say so; not an insult, just a statement. I do try to be careful to qualify my statements, a care that appears to have gotten lost with trimmed context in this case.)

transcendental darts

A fun, short video on this subject: Transcendental Darts (by Vi Hart).

Suppose we have a device that produces random digits.

I'm not convinced this is a sound assumption. Can non-deterministic random devices physically exist? How would you know?

In any case, such a device would not exist in any finite expression of a deterministic logic/math/PL. Nor could it be a physical finite state machine.

As far as I'm concerned, most real numbers aren't. Yet, there are some useful numbers outside the rationals, such as pi (which can be expressed in a finite program) and sqrt(2) (which can be expressed with an appropriate notation).

Though, I'm interested in rational trigonometry [tutorials] because I like rational numbers much more than roots.

Why not then, as DL does, allow numbers such as this to appear as constant terms in abstract Sentences?

I can think of at least a couple reasons I wouldn't want infinite sentences in my logic:

• I cannot determine in finite time when two arbitrary, infinite sentences are structurally identical. Consequently, you cannot always determine whether sentences contradict each other (e.g. X = 01001010...., Y = 01001010...., X =/= Y; do these contradict? inspect possibly forever to find out!)
• I cannot generally compose such sentences - e.g. if I add or subtract two real numbers expressed as an infinite series of digits, there are problematic edge cases where I cannot compute a carry value without an infinite computation.

One might reasonably argue an infinite sequence of digits isn't a valid representation of a real number because real numbers are totally ordered and closed over addition, subtraction, multiplication... but the infinite series is not closed (due to divergence for carry values in edge cases) and not computably ordered (you can't generally decide which of two numbers is smaller, due to divergence in comparisons).
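The carry problem can be made concrete with a small sketch (my own toy digit-stream encoding, not anyone's proposal): deciding even the integer part of 0.4999... + 0.5000... requires inspecting digits forever, while an ordinary case resolves immediately.

```python
from itertools import islice

def const(d):
    """Infinite decimal digit stream: d repeated forever."""
    while True:
        yield d

def digits(prefix, tail):
    """Finite prefix of digits followed by an infinite tail stream."""
    yield from prefix
    yield from tail

def first_sum_digit(a, b, fuel):
    """Try to decide the integer part of a+b (both in [0, 1))
    by scanning digit pairs; give up after `fuel` digits."""
    for da, db in islice(zip(a, b), fuel):
        s = da + db
        if s < 9:
            return 0          # no carry can ever reach the front
        if s > 9:
            return 1          # a carry is already forced
        # s == 9: still undecided, keep scanning
    return None               # out of fuel: undecided

# a = 0.4999...  b = 0.5000...  -- the undecidable edge case
a = digits([4], const(9))
b = digits([5], const(0))
assert first_sum_digit(a, b, 1000) is None
```

Every scanned pair sums to exactly 9, so no finite amount of fuel ever settles whether the sum is 0.999... or 1.000...; that is the divergence referred to above.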

"Real" numbers

Whether the real numbers are or aren't real isn't important. The question is whether reasoning about them leads to contradiction. As long as it doesn't, they're simpler than the alternative.

we can prove that, using math (re transcendental darts)

Can non-deterministic random devices physically exist? How would you know?

It follows from three physics hypotheses. Two of these hypotheses are among the best confirmed empirical results in the history of laboratory physics. The third hypothesis is not empirically falsifiable but amounts to a belief that information travels no faster than the speed of light. (Conway and Kochen's FWT, of course..)

More simply, though, it is trivial to build random number generators based on quantum noise that pass all known tests of randomness.

In any case, such a device would not exist in any finite expression of a deterministic logic/math/PL.

Correct. That is among the reasons Direct Logic distinguishes between strings and Sentences.

As far as I'm concerned, most real numbers aren't.

I won't deprive you of that view. As I said, one can say such numbers don't exist because we can't manifest them in full. And one can say that such numbers do exist because we can produce examples with the property that we can examine any digit of them we choose to examine.

I cannot determine in finite time when two arbitrary, infinite sentences are structurally identical. Consequently, you cannot always determine whether sentences contradict each other

Correct. Not "always".

I cannot generally compose such sentences - e.g. if I add or subtract two real numbers expressed as an infinite series of digits, there are problematic edge cases where I cannot compute a carry value without an infinite computation.

Correct. There are uncountably many such cases.

We can prove that. Using math.

can't examine any digit

And one can say that such numbers do exist because we can produce examples with the property that we can examine any digit of them we choose to examine.

If I were to say, "I have this monad with excellent properties so long as you don't use 'bind'...", one might reasonably argue: "well, that's not really a property of your monad, then."

I feel this applies to your example. "I have these numbers with a nice property, that you can inspect any digit, so long as you don't use 'add' (or compare, or anything else, really)." If you do add two numbers, you might be unable to examine even the first digit.

It isn't merely an issue of manifesting numbers 'in full'. Outside of an algebraic context where we can compare or compose numbers, all you really have is 'infinite strings'. Infinite strings, even if you limit their alphabet to a set of popular digits, simply aren't the same concept as numbers.

More simply, though, it is trivial to build random number generators based on quantum noise that pass all known tests of randomness.

We can also trivially create secure deterministic finite-state pseudo-random number generators that will pass all known tests of randomness until well beyond the heat death of the universe. :)

I'm not convinced even the best of our empirical experiments have isolated enough variables to determine whether behavior is non-deterministic. We're still trying to understand gravity, and we don't really know whether there is a deterministic theory behind quantum noise and coupling that is beyond our direct observation.

A couple years back I read this gem which purports to greatly simplify a variety of quantum interactions (e.g. compared to Feynman diagrams) by placing them into a timeless multi-dimensional structure. Someone else (on reddit, I think) commented that they were working on a similar theory, but with four more dimensions, that covered even more interactions. I haven't kept up, but I'm curious what will become of these.

mathematics for computer science

I feel this applies to your example. "I have these numbers with a nice property, that you can inspect any digit, so long as you don't use 'add' (or compare, or anything else, really)." If you do add two numbers, you might be unable to examine even the first digit.

Correct.

Of course, the sum is still perfectly well defined by a kind of structural induction.

timeless multi-dimensional structure.

Of human interest is what we know when.

re: math for cs

Of course, the sum is still perfectly well defined by a kind of structural induction

Divergence seems a rather large imperfection to overlook. Structural induction on a coinductive or infinite structure isn't well defined in every logic.

Getting back to the topic of a math or logic for computer science, a reasonable question is whether our logic should accept as well defined a sum over infinite strings (or the infinite strings themselves).

Infinity

I can say 9.9... plus 0.0...1 is 10 but again I have created a new symbolic representation with its own algebraic properties.

What is interesting is mathematical identity has no concept of time. A = B implies A and B have always been and will always be the same. Two things are identical whether or not we have discovered the proof of such yet. The value of Pi existed and was Pi before we calculated it.

I think this is a fundamental difference to computability which does involve time. We cannot compute Pi (completely) but we do know mathematically that "sin Pi = 0". Reality is finite, and reality is computable. Reality is consistent but not complete (as is what is computable).

But we can compute proofs (as we agree they must be finite). That is the key. The timeless nature of equality lets us symbolically manipulate abstractions. For example, sqrt(2) does not exist: it is not real, it cannot be computed, and it does not exist in reality. But we can construct the abstraction of square numbers, and we can extrapolate from this what the square root is. We can construct an abstract representation of such numbers which is finite, and compute using that.

My thinking would be that infinity has to be a fundamental concept in the system, as do some kind of recursive definitions. We should be able to represent series like "sum(n is 0 .. infinity) [1/2^n]" and know that its value is 2, and operate algebraically on such series in finite time (such as adding two series). Mathematica seems to do this quite well whilst running on a computer with no irrational numbers.

This would suggest we need fixed points and infinity, but not uncountable objects; such a thing is not computable and does not exist. The fact that irrational numbers are not real, and that I cannot store one in my memory, strongly suggests they are not foundational.
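A convergent series of this kind can indeed be handled in finite time with exact rational arithmetic; a minimal sketch (assuming the geometric series sum(n = 0..infinity) 1/2^n, whose limit is 2) checks the closed form of the partial sums symbolically:

```python
from fractions import Fraction

def partial_sum(k):
    """Exact partial sum of sum(n = 0..k) 1/2^n -- pure rational
    arithmetic; the infinite series is represented by a finite
    rule, never by its digits."""
    return sum(Fraction(1, 2**n) for n in range(k + 1))

# The closed form 2 - 1/2^k holds for every partial sum, so the
# limit 2 can be read off without computing infinitely many terms.
for k in range(10):
    assert partial_sum(k) == 2 - Fraction(1, 2**k)
```

This is the sense in which a system can "know the value" of an infinite series while only ever manipulating finite representations.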

Sand Castles

Somehow this discussion reminds me of two crazy naked philosophers sitting on the beach in sand castles as a flood approaches, each bickering that his own sand castle will withstand the flood and that the other party got it totally wrong.

Never mind me. I am just enjoying the sunset.

Waiting For Godel

Isn't there a play about that: "Waiting for Godel"?

Unfortunately, Terry

Unfortunately, Terry Pratchett died. Otherwise he could have written a great book or play about people here.

Cheers,
Carl

Thomas: thanks for noticing the typo

Thomas,

Thanks for noticing that end note 10 repeats end note 9 in
Classical Direct Logic

Cheers,
Carl

-

Posted in wrong place.

defining reals

The custom when talking about real numbers is to mention an integer part and a fractional part. Here is an alternative:


Constructing the reals:

"0." is a real approximation

If "0.X" is a real approximation then so is "0.X1"

If "0.X" is a real approximation then "0.X0" is an incomplete real approximation.

If 0.X0 is an incomplete real approximation, then so is 0.X00.

If 0.X0 is an incomplete real approximation then 0.X01 is a real approximation.

Note that if 0.X is a real approximation then 0.X is also a dyadic rational.

The dyadic rationals are well ordered.

"0." is a real number (not only a real approximation).

"1." is a real number and is also a dyadic fraction.

Every dyadic rational is also a real number.

In addition, certain sets of dyadic fractions are also real numbers.

If "0.X1" is a real number then the set of all dyadic fractions less than "0.X1" is also a real number.

If Y is a real number, then the set of all dyadic fractions greater than Y is also a real number.



That is the logical structure of the output described by Hewitt's one-line Actor-definition, constructively expressed in terms of sets, with no reference to a stepwise program-evaluation concept of time.
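A small sketch of the approximation rules above (my own reading: an approximation is "complete" unless it ends in trailing zeros, which await a later 1), with each written approximation valued as an exact dyadic rational:

```python
from fractions import Fraction

def dyadic_value(bits):
    """Value of the dyadic rational written '0.' + bits in binary."""
    return sum(Fraction(b, 2**(i + 1)) for i, b in enumerate(bits))

def is_complete(bits):
    """Per the construction: '0.' and anything ending in 1 are
    real approximations; trailing zeros leave it incomplete."""
    return not bits or bits[-1] == 1

assert is_complete([])            # "0." is a real approximation
assert is_complete([1, 0, 1])     # "0.101"
assert not is_complete([1, 0])    # "0.10" awaits a later 1
assert dyadic_value([1, 0, 1]) == Fraction(5, 8)
```

The downward- and upward-closed sets of dyadic fractions in the last two rules are what stand in for the limit points, so no program-evaluation notion of time is needed.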

You have no intention of responding to any of my

questions about how sentences or actors can be uncountable.

I guess I'm not worthy, eh?

Appropos of nothing

Maybe if we're going to have theories about computer science phrases like "uncountable infinities" should be considered evidence that we've gone off track.

Maybe we should start with finite model theories, finite arithmetic and numerical approximations and only go beyond those if absolutely necessary.

Has anyone ever needed the axiom of choice for a computer program?

Logic, Neurons and Godel Numbers

Here's a presentation about the relation between logic and neural networks:
http://staff.computing.dundee.ac.uk/katya/UK.pdf
Whilst the topic of the paper itself is interesting, I think there is some important stuff here.

You cannot prohibit Godel numbering. The numbers are not assigned by the logic system, but by the observer (in this case the neural net, but in our case the human brain).

The argument that real numbers prohibit Godel numbering by being infinite is shallow. I can represent a real number by a single "Godel number" standing for each unique real in the formula (as I cannot know the actual numbers, the fact that they are unique is all that is important). Effectively we just need a unique variable for each real and use De Bruijn numbering. To know a real we must know the generator equation, in which case we can Godel number the generator equation and substitute this for the real number.

Any argument about systematically prohibiting this is void because the numbering happens in the (human) observer, not in the logic itself. The key point is to understand that the number is the observer's internal representation of the logic. Any arithmetic then performed on this is just the observer's intuitive arithmetic, and outside the system under consideration. I'm not really saying anything about the applicability of the incompleteness theorems, just the specific point of reals breaking Godel numbering. The observer's neural net is finite, therefore all maths must have a finite representation in the neural net, and that is what we assign the Godel number to.

Strings are countable but sentences and propositions are not

By a famous proof due to Cantor, the real numbers are not countable.

Consequently, strings are countable but sentences and propositions are not. Hence, Gödel numbers are only applicable to strings.

Doesn't likely block the construction of a Goedel sentence

Uncountability doesn't likely block the construction of a Goedel sentence. I can simply restrict myself to that part of the syntax which I can write down.

Maybe something else blocks the definition of a Goedel sentence but this isn't it.

It's gobbledygook

The only way that "sentences and propositions" can be uncountable is if you allow infinitely long sentences and propositions.

How is that useful or necessary?

The only way that "sentences

The only way that "sentences and propositions" can be uncountable is if you allow infinitely long sentences and propositions.

They can also be uncountable if you allow a single token to contain an infinite amount of information. Which is also neither useful nor necessary. But of course it's all irrelevant to Gödel's results, since those results only need the finite parts of the theory to exist, regardless of whether you choose to assume infinite stuff "exists" too.

uncountably many sentences

Looking at multiple LtU threads about this topic:

Some people think of math as a kind of language of finite strings drawn over finite alphabets. (Some would say "countable alphabets" but, without loss of generality, we can assume finite alphabets.)

A typical construction begins with a grammar for propositions which might be parsed as expressions and sub-expressions with primitive terms, drawn from the finite alphabets, as leaf nodes.

A generative grammar lays out the constructible set of well-formed propositions.

A finite set of propositions are distinguished as axioms. (Some would allow for an effectively enumerable set of axioms but without loss of generality we can assume a finite set.)

A finite set of computable structural relations between finite sets of propositions is defined which we call "inference rules".

All of the axioms are taken to be proved, by assumption.

Any proposition which can be reached from earlier proved propositions by a finite chain of inferences is also taken to be proved.

No other proposition is taken to be proved.
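The steps above amount to a computable closure: starting from the axioms, everything reachable by finite chains of inference is proved. A toy sketch (one made-up axiom and one made-up inference rule, purely illustrative) makes the enumerability concrete:

```python
# Toy theory: propositions are strings; one axiom; one computable
# structural inference rule ("from P infer P + 'a'").
AXIOMS = {"S"}

def infer(p):
    """A computable structural inference rule."""
    return p + "a"

def proved_after(n_steps):
    """Everything provable in at most n_steps chained inferences:
    close the axiom set under the inference rule, n_steps times."""
    proved = set(AXIOMS)
    for _ in range(n_steps):
        proved |= {infer(p) for p in proved}
    return proved

assert proved_after(3) == {"S", "Sa", "Saa", "Saaa"}
```

Because each stage is a finite, computable step, the union over all stages is computably enumerable; that enumerability is exactly what the later paragraphs put under pressure.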

----------------------------------------------------------

Starting from the above concept of math, something interesting arises.

In math, even as understood above, we can easily define uncountable sets (such as 2^ℕ). We can also establish that some elements of such sets are "unconstructable".

Can terms of an uncountable set be used as terms (not necessarily "primitive terms", but terms) of a proposition?

We can allow members of ℕ as terms; what about 2^ℕ?

On LtU, many have said (words to the effect) that some members of 2^ℕ can be used as terms but only, specifically, the constructable members. (I.e.: It is ok to represent some reals by their generating function.)

That's essentially the classic "constructivist" position.

The traditional objection to admitting unconstructable reals as terms in propositions can be loosely paraphrased as "Now you're just talking about how many angels can fit on the head of a pin!"

--------------------------------------------------

For the moment, let's suppose that we only allow computable reals to exist as terms in sentences. We allow them only in the form of a "generating" function.

Already, we have broken Goedel's proofs. They no longer work.

Not even Goedel's first theorem stands up to allowing computable reals as terms.

In this sense:

Goedel's proofs require the set of proofs to be computably enumerable.

The constructable reals are not computably enumerable (by diagonalization).

Q.E.D.

-----------------------------------------------------

As an aside, in physics, quantum randomness seems to be a pretty real, irreducible phenomenon.

If we hypothesize that it is, then the following emerges:

A random bitstream of unbounded length may be constructed.

The set of all such possible bitstreams, of course, includes streams which are not computable. This is because given such a stream, and given a deterministic computation which allegedly computes the random bitstream, eventually (with probability 1) the random bitstream will prove the computation incorrect.

------------------------------------------------------------

Uncontroversially: propositions can be drawn as finite expressions over finite sets.

Uncontroversially: propositions can allow computable reals as terms, and this is enough to already block Goedel's theorems, since we can no longer computably enumerate proofs.

Empirically, uncomputably random bitstreams of unbounded length appear to be constructable in the real world. Routable addresses of such physically realized bitstreams can certainly appear in data structures in computer programs and are in that sense on equal footing with any other possible term of a proposition's expression tree.

The main complaint about allowing these random bitstreams to be treated as reified, uncomputable reals is that they break Goedel's proofs but, in fact, we already broke those proofs when we let computable reals be used as terms.

Controversially

The bit about putting computable reals in terms looks wrong. If such a proposition (with a subterm encoded as a Turing machine representing a purported computable real) is proven, the proof will implicitly establish that the embedded Turing machine does indeed encode a computable real. The set of computable reals that appear in valid proofs will thus be an enumerable proper subset of all computable reals.

re controversially

Matt, that's a krazy-cool™ critique that I think is subtly wrong. To make a case (that I get your drift and that I think I can convince you it is wrong)... you argued:

If such a proposition (with a subterm encoded as a Turing machine representing a purported computable real) is proven, the proof will implicitly establish that the embedded Turing machine does indeed encode a computable real. The set of computable reals that appear in valid proofs will thus be an enumerable proper subset of all computable reals.

I think I can correctly paraphrase you this way (and correct me if I'm wrong):

Perhaps we can't uncontroversially admit "constructable reals" as terms in a theory because a theory's proofs are, by assumption, computationally enumerable. We assume a theory is a proposition grammar over some finite alphabets, plus a finite number of propositions selected as axioms, plus a finite number of structural inference rules. The proofs are all the finite chains of inferences. These are obviously enumerable. Thus we can admit some constructable reals as terms in this theory, but only a computably countable subset of the constructable reals.

If we are talking about a foundation for mathematics in general, your reasoning had better not apply. Because...(for example)...

Given a theory that constructs some subset of constructable reals, a mathematician can rigorously construct a constructable real not available in that theory (by simple diagonalization, if nothing else).

What that tells us is that "constructable real" exists as a useful and rigorous class of terms, in "mathematics in general".

Repeating in different words:

Mathematicians robustly discuss the (not computably enumerable) "constructable reals", outside the context of any specific finitary formal system. (For example, We're doing it now. We're talking formalizably about the class of constructible reals in general, and are free to pick arbitrary imagined terms from it. If you're reading this: "You're soaking in it," as the old commercial used to say. We can rigorously reason about arbitrary terms from the constructable reals.)

The complete class of constructable reals is manifest as the space of potential creative acts by living practitioners of math. This is a class whose members people recognize when they see them.

If mathematics-in-general is to be formalized, it must admit terms that mathematicians discuss "between theories" (so to speak). A foundation should describe the class of available creative choices without pretending to rob them of creativity by enumerating them.

Describing the landscape

I agree with your paraphrase, but disagree with your reasoning that follows.

Mathematicians robustly discuss the (not computably enumerable) "constructable reals", outside the context of any specific finitary formal system.

There are two points of view one can take. One is that it is widely held that almost all of mathematics can be done inside set theory and that ZFC is an adequate formalization of set theory for this purpose. It is certainly possible, in ZFC, to reason about the set of all computable reals. In fact all of the arguments we're making here can be formalized in ZFC. So from this point of view, everything is fine with just a simple formal system.

The other point of view is to take a step back and ask which particular Turing machines can be proven to represent computable reals inside ZFC. A mathematician, believing in the soundness of ZFC, can then construct a new computable real by diagonalization that cannot be constructed inside of ZFC. This line of reasoning led Roger Penrose to argue in The Emperor's New Mind that mathematicians cannot be modeled as Turing machines and he went so far as to speculate that quantum mechanics might be crucial to the way mathematicians reason.

I think the situation is much more mundane: a human mathematician is simply unable to formalize his own mathematical thought process.

For example, let's take ZFC as system 0. By diagonalization, we can add a new axiom that gives us another computable real and call this system 1. Similarly we can obtain system 2, system 3, etc. A mathematician will have a flash of insight, though, and will quickly generalize to system n, for any natural n. He can then diagonalize over this entire pattern of all finite numbers and obtain system ω. Diagonalization of this system leads to system ω+1 and we can see that this pattern applies at every ordinal.

But a mathematician is unable to construct an arbitrary ordinal. We'll count 0, 1, 2, ... ω, ω+1, ... 2ω, 2ω+1, ... ω·ω, ... polynomials of ω, ... The thing about counting ordinals is that we always feel like we're making progress, but the insights that are needed to get to a bigger class of ordinal get harder and harder.

So a mathematician will be able to produce a more powerful system for any system that we understand to be correct. But this will become more and more difficult as these systems rely on deeper and deeper insight for that correctness. There exists a sequence of systems approaching the limit of what a mathematician will understand, the union of which will be exactly what the mathematician could understand.

I suppose you could call this limit system "mathematics" and reason about it in the abstract, but I don't know what insights you might glean from doing so. It probably varies by mathematician and it seems mostly inaccessible to insight, by construction. This system will be enumerable, though, if you believe that people are ordinary machines and believe the Church-Turing thesis.

Uncountable many TMs

Take a countable number of TMs all generating different bitstreams. Construct a Cantor TM which, for each n, runs the nth TM up to its nth bit and outputs the flipped bit.

Now I have established there are uncountably many TMs.

Where did I go wrong?

countable + 1 = countable

Where did I go wrong?

You started by assuming a countable set of TMs, and you showed the existence of one TM that you hadn't counted. But a countable infinity plus one is still a countable infinity.
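The diagonal construction itself is easy to sketch; here is a toy version (the sample streams are my own, standing in for the first few TMs in the assumed enumeration), showing the diagonal differs from each listed stream at the diagonal position:

```python
def diagonal(streams):
    """Cantor's construction: the nth output bit is the flip of
    the nth bit of the nth stream, so the result differs from
    every stream in the list."""
    for n, s in enumerate(streams):
        yield 1 - s(n)

# A few sample "TM outputs", each a function from position to bit:
zeros = lambda n: 0
ones = lambda n: 1
alt = lambda n: n % 2

streams = [zeros, ones, alt]
d = list(diagonal(streams))
for i, s in enumerate(streams):
    assert d[i] != s(i)   # differs from stream i at position i
```

The construction yields one new stream per assumed enumeration, which is exactly why it refutes "this list is complete" without refuting "the set is countable".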

Comment deleted

.
My argument here wasn't quite right. I don't see how to use diagonalization to generate the reals, so it was confused

You went wrong a few places

1) Your Cantor TM contains all the other TMs, therefore its length is infinite. The set of infinite-length TMs is not countable.

2) You assumed that the set of outputs from your countable TMs is complete. But if there is a one-to-one correspondence between TMs and outputs, then a countable set of TMs can't generate an uncountable set of reals - so there had to be holes.

But maybe the diagonalization argument is more slippery than it seems at first glance.

What if you applied the diagonalization argument, not to the set of reals but to the set of integers .. ie you prove that the list of integers isn't complete. Why is that less compelling as a proof that integers aren't countable?

Dunno

I am trying to make Lord's argument precise with TMs such that, hopefully, we'll know better where the argument is flawed (if so, of course).

I have some ideas about it but I think I'll just post it to stackexchange to see the response from a mathematician.

The answer to my question

of why diagonalization doesn't apply to integers is that diagonalization constructs an infinite number that isn't in the list of integers, but the list of integers was pre-defined to not contain any infinite numbers.
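That answer can be made concrete. A sketch under my own encoding choice: list the naturals in order and write their binary digits least-significant bit first.

```python
# Bit n of the number m, least-significant bit first.
def bit(m, n):
    return (m >> n) & 1

# For n >= 1 we have n < 2**n, so bit(n, n) is always 0 and the flipped
# diagonal digit is always 1. The "diagonal number" would therefore be
# ...111 with infinitely many 1-bits, which is not a (finite) integer
# at all -- it falls outside the list by definition, not by counting.
diag = [1 - bit(n, n) for n in range(1, 200)]
```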

Why constructable reals are not computably enumerable

The programming language TRIV is exactly like the Scheme programming language in all respects except input and output.

TRIV has no provision for input. Programs only produce output.

TRIV output is limited to the digits "0" and "1", in any number and in any order.

We define the class REAL_TRIV to be those TRIV programs that print an unbounded number of digits and always print the next digit within a finite amount of time.

It is possible to computably enumerate all TRIV programs. One can simply generate all strings from shortest to longest and pick out the subset of strings that are valid TRIV programs.
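The shortest-to-longest enumeration just described is easy to sketch. Hedged: `is_valid_program` below is a stand-in for TRIV's syntax check, which I am assuming is decidable, as Scheme's is.

```python
from itertools import count, islice, product

# Generate every string over a finite alphabet, shortest first.
def all_strings(alphabet):
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

# Filter down to the valid programs. This is still a computable
# enumeration because validity is a decidable, purely syntactic check.
def enumerate_programs(alphabet, is_valid_program):
    return (s for s in all_strings(alphabet) if is_valid_program(s))

# Illustration over a two-symbol alphabet:
first_seven = list(islice(all_strings("01"), 7))
```

Note that the filter works only because syntax is decidable; filtering by a semantic property like "always prints another digit" is exactly what the next paragraph rules out.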

It is not possible to computably enumerate all of the REAL_TRIV programs because if you could do so, you could solve the halting problem for TRIV.

At the same time, REAL_TRIV is countable, since it is a subset of a countable set.

And REAL_TRIV is countable but not computably enumerable, because enumerating it would require solving the halting problem.

(This argument adapted from one by Andrej Bauer: http://mathoverflow.net/questions/30643/are-real-numbers-countable-in-constructive-mathematics)

It is not possible to

It is not possible to computably enumerate all of the REAL_TRIV programs because if you could do so, you could solve the halting problem for TRIV.

Actually, the way you've defined your programs, the halting problem for those programs is trivial: none of them ever halt. But even if you fix that definition, the unboundedness of the output is irrelevant to enumerating the machines, because the machines are not their output.

Funny

when I said the same thing to Thomas, he accused me of not knowing the language of mathematics and of not being rigorous...

Though Hewitt popped up at one point and agreed that his indeterminate actor (what is it called, Real[]...) is only one actor not an uncountable number of actors but he didn't acknowledge that this contradicted what he wrote before.

Sometimes I wonder if Thomas is a sockpuppet for Hewitt.

re Funny

Joke's on you, Josh. To recognize members of REAL_TRIV you must be able to tell if a program diverges or does not diverge before printing more digits. That is an instance of the halting problem.

Sometimes I wonder if Thomas is a sockpuppet for Hewitt.

No, but we're both older and are more familiar with a lot of stuff that you probably know but don't always recognize when it comes around.

Real.[ ] a particular Actor: there are uncountable possibilities

Real.[ ] is a particular Actor; but there are uncountably many possibilities as to which one.

Uhm "which one"

How does it make any sense to associate an actor with its output rather than its code?

- especially if you have random instructions.

The actor is its code, not what that code does in some random case.

That's a bit like saying that you have an infinite number of bombs because the shrapnel from one bomb can lie in many different ways after an explosion.

In an abstract sense Real.[] has only one possible output which is
{[a random digit], [a random digit],[a random digit], ... }

oh, c'mon now. this is silly: re Uhm "which one"

The actor is its code, not what that code does in some random case.

In unix, what is the difference between an executable and a process? Hmm? "Uhhmm?"

That's a bit like saying that you have an infinite number of bombs because the shrapnel from one bomb can lie in many different ways after an explosion.

Unsolicited advice: think harder.

:/ I have one piece of code 10 bytes long

and you say it's an uncountable infinity of actors...

You have managed to invent the world's most inefficient possible encoding, but you haven't added any information.

If I have one integer and encode it base pi so that it takes an infinite number of digits to encode that doesn't mean that I have an infinite amount of information, it just means that I suck at encoding.

Inefficient mappings do not make paradoxes

The number of programs/actors/thingamajigs defined by CODE that uses A FINITE NUMBER OF SYMBOLS and is of A FINITE LENGTH is at most a countable infinity.

No one can argue with that. Period.

If you want to recast that countable infinity of thingamajigs as an uncountable number of results or anything else, then you've added nothing; you haven't increased the number of thingamajigs, you've just mapped them inefficiently. The answer to the paradox is "stop mapping them badly".

If I map each integer to an uncountable infinity of things, that doesn't make integers uncountable, it just means that my mapping is so poor that it doesn't have the properties of the original sequence.

re inefficient mappings do not make paradoxes

Josh, you are arguing, in bold face and ALL CAPS, with uncontroversial, settled math that you can read about elsewhere.

If you were taking a polite, inquisitive approach that exhibited some effort, I'm sure people would at least be likely to want to help debug you, but...

One more time:

You can enumerate programs, no problem, as you note. You cannot computably enumerate programs that produce unbounded strings of 0s and 1s, because if you could do so, then you would have in your hands a solution to the halting problem.

In my opinion, my side is uncontroversial, settled math

Let me add that programs with state are also countable.

Putting in an instruction that generates random events and pretending that the eternal future is a single event is allowing you to fool yourself about what is countable.

Also I note how pointedly you ignore the argument that is in bold text. Nice sidestep.

The definition of "countable" is not hard to understand, you know.

For any given list of reals, Real.[ ] might not be an element

For any given list of reals, Real.[ ] might not be an element, i.e., the reals are uncountable.

re It is not possible to

Not quite right, John:

Actually, the way you've defined your programs, the halting problem for those programs is trivial: none of them ever halt. But even if you fix that definition, the unboundedness of the output is irrelevant to enumerating the machines, because the machines are not their output.

The halting problem appears in the REAL_TRIV context because we need to know if there is always a finite amount of time until the next digit is printed. The question is whether or not the computation between printing digits halts.
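The reduction being gestured at can be sketched. This is a hypothetical construction of my own, not a quote of anyone's argument: given a machine M, build a digit stream that emits its first digit only after M halts, so the stream satisfies REAL_TRIV's "always another digit in finite time" condition exactly when M halts.

```python
# machine_steps is any iterator over M's computation states; it is
# exhausted iff M halts. The stream below produces no digit until the
# simulation completes, then prints "0" forever -- so deciding "does
# this stream always print another digit in finite time?" decides
# whether M halts.
def stream_for(machine_steps):
    for _ in machine_steps:   # run M to completion (may never return)
        pass
    while True:               # M halted: now produce digits forever
        yield "0"

# A toy "machine" that halts after three steps:
s = stream_for(iter(range(3)))
first_digit = next(s)         # arrives in finite time, since M halts
```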

The halting problem appears

The halting problem appears in the REAL_TRIV context because we need to know if there is always a finite amount of time until the next digit is printed. The question is whether or not the computation between printing digits halts.

Interesting. A strange case of misdirection. It honestly wouldn't have occurred to me you'd deliberately build an explicit requirement for eager decidability into your definition. The great advantage of enumeration lies in laziness; and besides, the title of your post suggested you were interested in the constructible reals, not incidental properties of the programs that generate them. Requiring your program to always produce another digit in finite time serves no apparent purpose other than to make it trivially impossible to enumerate all such programs (so trivially I'd have expected a much shorter demonstration of non-enumerability if it were intentional).

Look, it's fine that you represent constructible reals by programs that output sequences of binary digits. But if you remove the (frankly obstructionist) requirement that each such program always output another digit, what have you lost? Nothing. You already weren't going to be able to decide whether or not two arbitrary programs represent the same real.

re the halting problem appears

Requiring your program to always produce another digit in finite time serves no apparent purpose other than to make it trivially impossible to enumerate all such programs.

What you have said here is that you want to argue about the constructable reals while displaying ignorance of what the term means. Really basic level ignorance. Like you have made no visible effort to learn the term at all. It is not a hard concept. There are tons of materials on-line from which you may get a clue. It is a very basic concept that doesn't require any advanced math. Young kids who make an effort easily get the concept. I'm not sure what excuse you might have other than comment forums bring out the worst in lots of people.

If you are sincere, you should stop commenting for a while and try to learn more because you are saying some pretty ignorant things.

If you are not, then now we have identified an actual troll.

I have no interest in

I have no interest in arguing, and never have; I detest it. I perceive that when I try to help you hone your understanding, you interpret it as arguing. Therefore I'll stop trying to help.

re . I perceive that when I try to help you hone your understand

. I perceive that when I try to help you hone your understanding,

The best way to hone my understanding is to make your point directly, not try to play psychological games.

All subsets of countable sets are countable so the constructable reals are countable.

A set is computably enumerable if a program can generate all of its elements such that every element will be produced as output in finite time.

The constructable reals are countable but not computably enumerable.

This is not something new that I just made up. It's not controversial.

A strange case of misdirection. It honestly wouldn't have occurred to me you'd deliberately build an explicit requirement for eager decidability into your definition.

"misdirection" is an accusation, to which you add the insinuations of "deliberately" and "your definition", which is why I replied harshly.

If you honestly cannot fathom why decidability is built into the definition of "constructable real", then you don't know what the words mean.

Remember that this whole topic arises under the question of Gödel's arguments, which require that we be able to effectively enumerate proofs.

Uncountable random bitstreams

Uncountable random bitstreams do not exist. You either require infinite space (for an infinite qubit machine) or infinite time (to repeatedly get output from a finite qubit machine).

Until you can experimentally construct one, what you have is theoretical physics (and incorrect, in my opinion). The fact is, you can define equations in mathematics that have nothing to do with reality, call it theoretical physics, and get good predictions most of the time. Relativity came from the assumption "what would spacetime look like if the speed of light were constant?" Initially this was just a mathematical exercise to understand how such a hypothetical system would behave. This hypothetical mathematical system can be imagined independently of the true nature of the universe. Only by experimentally measuring the speed of light are we able to verify that locally the universe does behave like this, but we are not sure whether it approximates reality at cosmic scales and timescales. So although you can imagine and mathematically model a universe in which infinite bitstreams can exist, that does not mean they exist in our universe. Einstein puts it better:

As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.

To dismiss this argument as sophistry is to demonstrate a profound misunderstanding of what physics is.

re Uncountable random bitstreams do not exist.

Uncountable random bitstreams do not exist.

Yes, but mathematicians can still rigorously reason about arbitrary members of the class of potential random bitstreams that physical construction can choose from. A foundation for math-in-general should permit that.

Second order statement

And those mathematicians would be reasoning in second order logic, and those statements can be assigned finite Gödel numbers.
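For concreteness, here is one toy Gödel numbering. This is my own illustrative scheme, not Gödel's prime-power encoding: treat a statement as a string over a fixed finite symbol alphabet and read it as a base-(k+1) numeral, which makes the assignment injective and exactly invertible.

```python
# Illustrative symbol set; any finite alphabet works.
ALPHABET = "()~&|>=x0S+*"

def godel_number(s):
    # Digits run 1..len(ALPHABET), so there is no leading-zero
    # ambiguity and distinct strings get distinct numbers.
    base = len(ALPHABET) + 1
    n = 0
    for ch in s:
        n = n * base + ALPHABET.index(ch) + 1
    return n

def decode(n):
    # Invert the encoding by peeling digits off, least significant first.
    base = len(ALPHABET) + 1
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(ALPHABET[d - 1])
    return "".join(reversed(out))
```

Any finite second-order statement over this alphabet thus gets a finite number, regardless of what uncountable class the statement talks about.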

Types categorically axiomatize uncountably many reals

Using types, the Dedekind axioms categorically axiomatize uncountably many reals.

It is irrelevant that there is a Gödel number for the Peano/Dedekind type-checked string for the axioms that categorically define the real numbers.