Fixed points considered harmful

Fixed points have been an important topic ever since Church invented the lambda calculus in the early 1930s.

However, the existence of fixed points has unfortunate consequences:
* In a programming language, they preclude having strong types.
* In logic, they lead to inconsistency.

Fortunately, types intuitively preclude the existence of fixed points, e.g., in ActorScript and Direct Logic.

Minimal logic

Yeah, I think what Hewitt's talking about is minimal logic, not intuitionistic logic.

Proposition not Boolean; Peano induction predicate is Boolean^ℕ

Proposition is not type Boolean (nor is it type Heyting); the Peano induction predicate is type Boolean^ℕ.

Inconsistency Robust Type Checking

Thinking about this another way, is type checking decidable? If we assert that proposition A has type B, this can be true or false; but if it is not decidable, then "proposition A has type (B or not B)" is not true. I.e., can type checking be inconsistent, and do we therefore need to use an inconsistency-robust logic to reason about it?

Type checking is rapidly decidable in ActorScript

Type checking is rapidly decidable in ActorScript because rapid decidability is very important for IDE responsiveness and error reporting.

types constrain syntax

(btw: I think I am using the DL vocabulary a little more properly if I say that propositions are boolean expressions, rather than "sentences", although a proposition is also a sentence.)

A proposition is an expression that, syntactically, is of Boolean type.

DL is able to prove itself inferentially incomplete, meaning that for some propositions P, neither P nor not P can be proved. Not every Boolean expression can be reduced to a Boolean value.

Not Boolean Then?

If the law of the excluded middle does not hold (i.e., neither P nor not P need hold), then they are not Boolean. Boolean algebra includes the law of the excluded middle and operates on bit (true/false) values. Saying something is Boolean implies it is a bit and obeys all the axioms of Boolean algebra. To me the type sounds more like Heytingean, i.e., a bit that follows the axioms of Heyting algebra, which is the algebra used in proof theory and does not include the law of the excluded middle.

If they follow the axioms of Direct Logic, then they are neither Boolean nor Heytingean. You would have to ask which algebra encodes the rules of Direct Logic. Boolean = classical logic, Heytingean = intuitionistic logic; what is Direct Logic?

propositions are expressions

I think I have not been precise enough in language.

Sentences are abstract grammar trees. These grammar trees are treated directly as mathematical objects.

Sentences with no free variables are propositions.

If a sentence can be reduced, it can be reduced to a boolean. This is guaranteed by the syntactic types of sentences and the rules of inference. This is the sense in which propositions are "expressions of boolean type."

Propositions are *not* expressions

Propositions are not expressions. In particular, a Proposition is not Expression◅Boolean▻.

A sentence with no free variables can be abstracted to be a proposition.

Propositions & Expressions.

Do propositions evaluate to expressions in DL?

Excluded Middle in Classical but not Inconsistency Robust DL

Excluded Middle holds in Classical Direct Logic but not Inconsistency Robust Direct Logic.
Laws for Boolean algebra hold.
See Formalizing common sense reasoning

However, Proposition is not the same type as Boolean, nor is it the same type as Heyting.

Reasoning About Code

How would I reason about code then? Let's say I have a sort function, and I want to write a proposition that is true if the code correctly sorts. This "truth" is obviously not Boolean because it might not be possible to prove the proposition, and Boolean algebra requires that (P or not P).
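For concreteness, here is a sketch of my own in JavaScript (not DL notation) of the property in question:

// Sortedness of one particular output is a decidable check.
function isSorted(a) {
    for (let i = 1; i < a.length; i++) {
        if (a[i - 1] > a[i]) return false;
    }
    return true;
}

isSorted([1, 2, 2, 5]); // true
// But "sort is correct" quantifies over all inputs; no amount of
// testing decides that proposition. It has to be proved, or may be
// unprovable, which is where (P or not P) becomes questionable.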

That's not the definition I

That's not the definition I gave, so debunking it doesn't invalidate mine.

re that's not the definition i gave

That's not the definition [of Z] I gave, so debunking it doesn't invalidate mine.

Your definition, stripped of types for the moment, is this:

  Z.[f] -> [x] -> f.[Z.[f]].[x]

which is actually Y, not Z. In particular, it diverges in a strict language. I substituted a correct definition of Z.

Working demo

In particular, it diverges in a strict language.

It does not:

// JavaScript code

function z(f) {
    return function (x) {
        return f(z(f))(x);
    };
}

z(function (factorial) {
    return function (n) {
        return n === 0 ?
            1 :
            n * factorial(n - 1);
    };
})(3)

// returns 6

d'oh! (re working demo)

You are right. I made a stupid mistake, misinterpreting your definition as a result. I will get back to you on the typing question.

re d'oh (corrected, I think.)

According to your definition for Z:

1. Z can only fix functions whose type follows the schema ([[a]↦b]↦([a]↦b)).

2. Z is only guaranteed to be accurate for a subset of its range. It can not be accurate if the fixed point sought is not expressible as a total, computable function.

3. Even if the fixed point sought is expressible as a total computable function, the desired total computable function in question may not be derivable from the argument provided to Z.

Z is not very restrictive

Regarding points 2 and 3: Procedures of type [a]↦b can be partial. See Hewitt's comment "Total math functions: different from executable procedures" elsewhere in this thread.

1. Z can only fix functions whose type follows the schema ([[a]↦b]↦([a]↦b)).

If you think [a]↦b is too much of a restriction on the type of a fixed point, I recommend setting a to be the unit type. Then it's comparable to what the Y combinator does in a lazy language; we're just tracking the laziness of b explicitly in the type.

(Well, "lazy" is an overloaded term. This simulated laziness is call-by-name rather than Haskell's call-by-need.)

2. Z is only guaranteed to be accurate for a subset of its range.

What do you mean by "accurate" here?

re What do you mean by "accurate" here?

What do you mean by "accurate" here?

I think that, interpreted with strict semantics, Z.[f] can diverge rather than finding a fixed point.

Sketch:

f.[g:int |-> int] |-> int |-> int ===
  [x:int]:int ->
    let y = g.[x - 1]
      ((y == 1) && (x < 2)) ? 1 : x * y

ff.[x:int]:int -> (x < 0) ? 1 : !x

"ff" is factorial for integers >= 0, and 1 for integers less than 0.

"ff" is a fixed point of "f".

"Z.[f].[x]" diverges.

That difficult-to-compute fixed point is nice

Nice, I was wondering how to construct an example like that one.

One function that has lots of fixed points but doesn't make them easy to calculate is the identity function. Your function's a better example because (I think) it uniquely determines a fixed point, but still doesn't give us the ability to compute it under an eager evaluation strategy. (In fact, I don't think it lets us compute Z.[f].[x] or Y.[f].[x] under a lazy strategy either.)

The reason I was hoping for an example like this was so that I could talk about how this imperfection probably doesn't matter to Keean Schupke and marco.

When they talk about how much they would prefer to use languages that let them define fixed point combinators, they seem to be okay with taking on the burden of responsibility to avoid nontermination in their programs. If they try to define factorial in the way you've shown and it diverges, they can consider it a buggy program and try again.

Simpler


f :: (int -> int) -> (int -> int)
f g x = g (x-1) * 0
ff x = 0

Thanks :D <eom>

Y combinator has a definition (this is *not* the definition)

The following definition of RossAngleY in ActorScript is not the definition of the Y combinator:

SingleArgument◅aType▻ ≡ [aType]↦aType

RossAngleY◅aType▻.[f:([SingleArgument◅aType▻]↦SingleArgument◅aType▻)]:SingleArgument◅aType▻ ≡
 [x:aType]->(f.[RossAngleY◅aType▻.[f]]).[x]

Claim: There is no implementation of the untyped fixed point operator in ActorScript.

[Above edited for clarity and to fix a bug noticed by Ross Angle.]

Is it valid?

Is it valid ActorScript though?

Edit: It looks like it is valid, and your argument is that this is not the 'Y' combinator?

ActorScript only exists on paper?

Can someone confirm this?

Practice lags behind theory

Currently, there is no publicly available implementation of ActorScript.

In theory, that is true.

In theory, that is true.

is that this is not the 'Y' combinator?

your argument is that this is not the 'Y' combinator?

Ross Angle defined Z this way:

Z◅a, b▻.[f:([[a]↦b]↦([a]↦b))]:([a]↦b) ≡
    [x:a]→ (f.[Z◅a, b▻.[f]]).[x]

According to that definition:

1. Z can only fix functions whose type follows the schema ([[a]↦b]↦([a]↦b)).

2. These typed versions of Z are only guaranteed to be accurate for a subset of their ranges. They can not be accurate if the fixed point sought is not expressible as a total computable function.

3. Even if the fixed point sought is expressible as a total computable function, the desired total computable function in question may not be derivable from the argument provided to Z.

re Y combinator has a definition

The following definition of RossAngleY in ActorScript is not the definition of the Y combinator:

Grr. I just spent an hour coming up with that translation only to find you'd just posted it.

A Meta-Comment on Style

I'm not sure this is appropriate for LtU, but it's occurring so blatantly in this thread that a lesson seems worth drawing out into the open from it.

Carl is engaging here in a demand for rigor on the one hand while simultaneously engaging in hand-waving, hide-the-pea, and proof by strong assertion. It's plain to anyone reading the responses that this behavior undermines his credibility and effectiveness in the discussion. It's very hard to discern whether his responses are hand-waving bullsh*t or merely reflect an individual so deep into his thoughts that it is difficult to explain his assumptions and model.

The appearance of hand waving is self-defeating, and stature does not engender tolerance for it. It's a habit that all of us who are similarly afflicted should seek to abandon.

There is a cumulative effect to hand waving that warrants attention as well. When someone gives the appearance of hand waving pervasively, a very unpleasant question comes to insinuate itself into the minds of the listeners: "What part of this person's work, if any, can actually be believed?" At a minimum, the habit of hand waving drives listeners and readers to a more intensely skeptical view of everything the speaker says and has said.

Carl has done some truly superb work, and he has the intellectual and moral courage to delve with confidence into things that may turn out to fail spectacularly. That makes him a very rare researcher indeed, and much to be admired. Speaking solely for myself, it is distressing to realize that I view all of his results with stronger provisional skepticism because of interaction patterns like the one here.

Food for thought for myself, for Carl, and perhaps for others.

Finding ambiguities and addressing open issues

Dear shap,

Thanks for your thoughtful comment.

I would greatly appreciate your pointing out any ambiguities that you find in my comments.

However, it is impossible to reiterate on LtU the entire contents of the articles on HAL from the book "Inconsistency Robustness".
Consequently, it is necessary to make references to the articles.

Also, the articles on HAL are at the leading edge of research (and consequently may have some bugs and ambiguities).
Of course, there are still some open issues currently being investigated.

Regards,
Carl

May I suggest

May I suggest you simply put some fibrations and adjoint functors in the mix? People will more easily recognize it as leading research in that manner. All the terms, typing, and logic are somewhat seventies to most people, I gather.

The problem isn't the

The problem isn't the vintage of the buzzwords. It's that when asked to connect the dots on his reasoning, he doesn't. More modern buzzwords aren't necessary with valid reasoning, and don't help without it. As time has gone on, circumstantial evidence has accumulated that his extraordinary claims are due to various gaps in reasoning that cannot be filled. Shap has captured it well, I think: the two likely alternative explanations for the bad impression he's created are that he gets it but can't express it, or that he doesn't get it. Whichever problem applies, unless he's able to recognize and fix the problem, he'd accomplish more by leaving off the mathematical foundations stuff and concentrating on other kinds of research.

If Category Theory then ActorScript

Yeah well. My personal observation is that category theory and ActorScript stand on the same footing: both purely syntactic formalisms with little to no formal semantics. It isn't unprecedented; in the functional field you have Z, Bird-Meertens, and the somewhat unknown Funmath: purely syntactic works with little to no theoretical or tooling backing, often heralded by strong claims from fanatic proponents. Looks like these formalisms are dying off though.

This is just how the world used to work. So, I'll cut the guy some slack.

Formal semantics of ActorScript is in meta-constructs

In the article, formal semantics of ActorScript is in the meta-constructs.

Hint: this is somewhat analogous to writing the Eval procedure for Lisp in Lisp.

Yah. I somewhat got lost

Yah. I somewhat got lost when reading the type checking rules.

Type checking rules of ActorScript are very strict

Type checking rules of ActorScript are very strict.

I would be happy to address any questions that you might have.

1 Corinthians 14

If anyone speaks in a tongue, it should be by two or at the most three, and each in turn, and one must interpret; but if there is no interpreter, he must keep silent in the church; and let him speak to himself and to God.

Category theory is not much help in software engineering

Unfortunately, category theory is not much help in software engineering :-(

Warning: high ferrous content in this message ;-)

I actually agree with that

I have complained on LtU somewhere before that I don't believe in the Haskell manner of exposing trivialities about functional models implemented, and trying to lift the exposed trivialities in order to reuse certain algorithms.

I've never been impressed

I've never been impressed that using high-powered category theory is useful in practice, no. I have observed that it can be quite illuminating — on a conceptual level, not a technical one — to ask, what are the morphisms, what are the categories, and even how do these things fit together into an adjunction. It can be quite fascinating to draw a "twisted hour-glass" and label the parts of it. (You can see what I mean by "twisted hour-glass" on page 8 of this pdf). But that's not the way category theory is being used in Haskell etc.; it's category theory as an aid to thinking, rather than as a technical device.

Monads are unsuited for concurrency

John,

Thanks for the link to your article Monads for Programming Languages.

Unfortunately, your article neglects to say that monads originate in the work of Scott, Strachey, and Milne:

* Dana Scott and Christopher Strachey. 
  "Toward a mathematical semantics for computer languages"
  Oxford Programming Research Group Technical Monograph. PRG-6. 1971
* Robert Milne and Christopher Strachey. 
  "A Theory of Programming Language Semantics"
  Chapman and Hall. 1976.

Your article also neglects to say that monads are unsuited for concurrency :-(

Regards,
Carl

PS. Based on reading your article, you should be careful about "The pot calling the kettle black" when it comes to the pedagogy of scientific articles ;-)

Liked it very much (without the irony, this time)

I like your article and the manner in which it reflects on Wadler's work. I also liked your presentation of category theory.

To put it bluntly: It's bloody nice to read a critique once in a while instead of the mindless non-critical thinking exposed in too many papers these days. Groupies and fanboys, whatever happened to critical thought?

I disagree somewhat with your remark that that's how category theory is used in Haskell. Just as people abuse C++'s template system to concoct the weirdest non-trivial solutions, I gather there are some functional programmers who think that they should expose the most trivial structures and make sure they use the monad, functor, or whatever type class to structure their program instead of simply stating a functional program directly.

I see little use for category theory in software engineering. An arrow has a beginning and an end, and you can compose them - that's the heart of category theory, and that's simply too trivial an observation to be of much help with the hard problems often encountered.

John: necessary to read & understand articles to critique them

Dear John,

It is necessary to read & understand scientific articles to properly critique them.

Admittedly, reading scientific articles is not easy :-(

Universities have courses and seminars about this stuff :-)

Regards,
Carl

Co-routines

Are co-routines possible in ActorScript? For example something like the infinite Fibonacci generator:

function* fib() { // generator: yield requires function* in JavaScript
    var x = 1;
    var y = 1;
    while (true) {
        yield x;
        var t = x + y;
        x = y;
        y = t;
    }
}

Although we can rewrite this without the infinite loop and yield:

var fib = (function() {
    var x = 1;
    var y = 1;
    return function() {
        var t = x;
        x = y;
        y = x + t;
        return t;
    }
})();
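For instance (my illustration), successive calls then step the captured state:

fib(); // 1
fib(); // 1
fib(); // 2
fib(); // 3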

I guess I am not clear how ActorScript handles mutation and closures?

What, formally, is "mathematics"?

Is there a single formal system (up to isomorphism) that deserves the proper name "mathematics"?

If I understand correctly (which is a dubious proposition), that is one of the starting questions from which Hewitt's recent publications can be understood.

A "formal system" in the sense I am using it here is a particularly strict set of rules with two key properties:

1. The rules prescribe a domain of discourse to which particular utterances can be definitely said to either belong or not belong.

2. The rules are decidable in the sense that a computer program can enumerate all possible utterances in order of increasing length and accurately report whether each utterance is or is not within the domain of discourse prescribed by the formal system. (A toy sketch of such an enumeration follows below.)
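As a toy illustration (my own JavaScript sketch; the alphabet and the membership predicate are invented, not Hewitt's), property 2 amounts to something like:

// Enumerate all utterances over an alphabet in order of increasing
// length, reporting membership in a decidable domain of discourse.
function* utterances(alphabet) {
    let layer = [""];
    while (true) {
        yield* layer;
        layer = layer.flatMap(u => alphabet.map(c => u + c));
    }
}

const inDomain = u => /^(ab)*$/.test(u);  // a made-up "formal system"

let count = 0;
for (const u of utterances(["a", "b"])) {
    console.log(JSON.stringify(u), inDomain(u));
    if (++count >= 10) break;  // "", "a", "b", "aa", "ab", ...
}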

What would it mean, then, for a formal system to be deserving of the name "mathematics"?

Speaking informally, a succession of specific historic developments have imposed some structure on rigorous mathematics. Mathematical discourse is structured around theories, each with its axioms, rules of inference, propositions, rules of negation, its proofs and theorems. Utterances and parts of utterances fall into these various categories. Each theory can be interpreted as a kind of set of construction rules, capable of producing utterances which are part of the theory, and those which are not.

Mathematics as a whole is evidently some generalized domain of discourse in which it is possible to lay out various theories and to compare the similarities and differences of their axiom sets, rules of inference, and so on. It is one in which theories can be abstracted and subsumed to form results which depend only on certain aspects -- such as theorems abstracted over rules of inference that can be interpreted in any specific theory that happens to share those rules of inference.

And mathematics in this informal sense is inconsistency robust. In particular, a new theory introduced into the discursive domain "mathematics" might be discovered to be inconsistent, meaning the theory's discourse contains at least one proposition and its negation. Yet when this happens, all unrelated theories remain intact.

Notice that the discourse of mathematics itself does not exclude inconsistent theories. In fact, they play a critical role. It is common for the consistency of theories to be unknown but suspected. If a theory is inconsistent it is still a mathematical theory (albeit one with a bug). The proofs internal to the theory that demonstrate its inconsistency remain valid proofs. The theory may be inconsistent but mathematics as a whole remains robust.

A formal system deserving the name "mathematics" must evidently be one in which all of this reasoning about theories, and their relations, can take place -- including the discourse concerning the contents of inconsistent theories.

Past foundational attempts, such as that of formalizing all of mathematics in ZF set theory, are not quite the same thing. Those past foundational attempts have in common that they tried to include in the span of the foundation all consistent theories, but specifically exclude all inconsistent theories. (The crisis of this approach was the discovery that consistency is, in the general case, not decidable.)

Hewitt's proposition is, in part, to construct a formal system which is capable of incorporating any theory, including all of its proofs, without regard to whether any particular theory is consistent or not.

Here is the catch though:

If you propose to me a formal system which you say should be called "mathematics", and if I can mathematically prove something about your formal system that can not be proved in your formal system, then clearly I have created mathematics that is not in your formalization. Thus your formalization should not be called "mathematics".

One of Hewitt's significant claims is that he has defined a formal system deserving of the name "mathematics" in that no formalizable theory can fail to be expressible in it, directly -- and at the same time the formal system itself, as a whole is inconsistency robust -- and at the same time every useful theorem about "mathematics" is directly expressible in the formal system called "mathematics".

[I find this increasingly plausible all of the sudden but I am not rigorously convinced yet, possibly because I don't understand it well enough].

Perhaps my account of the problem space, which I laid out above, might help others better understand the whole "fixpoints don't exist" stuff. On the one hand, syntactic restrictions (plausibly) prevent any formally constructed sentence in "math" from being defined by fixpoint. On the other hand, Hewitt challenges you to show one, even just one such fixpoint, defined outside of "mathematics", which would be useful to add to "mathematics" itself.

In other words, the point of the restriction is that if Hewitt's utility challenge is correct, you will never find anything useful to say in mathematics that can't be expressed formally within "mathematics".

Confusion may be avoided

From my dissertation:

Confusion may be avoided later by recognizing, at the outset, that there are no universally agreed-upon boundaries for logic and mathematics. Rather, each of the two subjects is pursued by a living community, and so ultimately each is defined by its community: logic is what logicians study, and mathematics is what mathematicians study. Historically, logic is the study of inference, and mathematics is the study of number; and this might still be claimed, since study of inference can be stretched to include nearly any reasoning, while number can be stretched to include nearly anything reasoned about; but in practice, rapid technological expansion of mathematics over the past four and a half centuries, and of logic over the past one and a half, has provided great scope for disagreement over the boundaries of both disciplines.

re confusion may be avoided

Those are assertions you make and Hewitt is making a contra assertion in the domain of "mathematics". The conflict here is (suddenly) pretty sensible to me and interesting.

To review:

ca. 1920 the foundational question arises: is there a finite set of axioms or axiom schema that spans all of math?

ca. Turing, Goedel, Church: every finite set of axioms or axiom schema is either incomplete or inconsistent

ca. Hewitt: Formal mathematics is an activity that can be formally described by a syntax for propositions, alongside some simple mechanical rules for transforming given sets of propositions into concluded sets of propositions based on very conservatively chosen rules of inference.

Within such a formalization of all of math, familiar mathematical concepts like "theory" appear as syntactic types.

This kind of formal system contains every theorem in every branch of mathematics. Every theorem appears as A-turnstile-B where A states the axioms of some theory and B, a theorem in that theory. (If A is empty, if a proposition B follows just from the formal rules of mathematics, it's just "turnstile B").

Mathematics, as Hewitt has formalized, is able to convincingly demonstrate by model that it contains no sentence which contains a self-encoding.

Mathematics, as Hewitt has formalized, is able to convincingly demonstrate by model that it is complete in the sense that all constructible propositions following from a given set of propositions can be computed.

And as formalized the system proves its own self-consistency.

----------------------

Shutt (the quote you posted) and Hewitt part ways over whether it is possible that Hewitt has, in every reasonable sense of the term, circumscribed all mathematical discourse possible. That is, anything that can't be expressed in his formalization, probably people will generally agree isn't mathematics. Whether they know of Hewitt's results or not -- it would turn out they would agree. I think underneath it all, he's basically got a simple and powerful system.

Does he have a pretty cleanly constructed system that, really, is pretty much all the "foundation of math" we'll ever get from anyone?

I also think he's thinking about what happens if you model the formalization of math as actors, in various ways, so that sets of propositions are just things wandering around through actor-space, getting soundly transformed and passed along at every node.

miscellany

Some miscellaneous comments:

The point of the passage I quoted was to warn against getting mired in (and oneself not to get mired in) philosophical discussions of what mathematics is, when all such philosophical discussions are illusory. So you'll understand why I'm not inclined to discuss philosophically what is mathematics.

Notwithstanding whatever casual shorthand anyone (including me) might use for it, the foundational issue that gained public visibility circa 1900 was not axioms encompassing all of mathematics, but axioms encompassing arithmetic.

Hewitt is quite explicitly exclusionary; he makes much of this. Moreover, in order to avoid Gödel's Theorems he doesn't just have to avoid making an explicitly self-referential statement, he has to avoid being able to reason in pretty elementary ways about arithmetic functions. There's no way that isn't exclusionary, even if he weren't bible-thumping about the need to exclude things.

This, in turn, leaves three possibilities: either he has a system that doesn't really avoid Gödel, or he has a system that avoids Gödel by not meeting Gödel's "sufficiently powerful" condition, or he doesn't have a well-enough defined system to be able to meaningfully judge whether or not it avoids Gödel.

re ca 1900 and exclusion

Notwithstanding whatever casual shorthand anyone (including me) might use for it, the foundational issue that gained public visibility circa 1900 was not axioms encompassing all of mathematics, but axioms encompassing arithmetic.

Thanks. I corrected it to say "ca. 1920".

Moreover, in order to avoid Gödel's Theorems he doesn't just have to avoid making an explicitly self-referential statement, he has to avoid being able to reason in pretty elementary ways about arithmetic functions.

You can define the logic you want for arithmetic functions in a theory T and make proofs of the form "T |- P".

This, in turn, leaves three possibilities: either he has a system that doesn't really avoid Gödel, or he has a system that avoids Gödel by not meeting Gödel's "sufficiently powerful" condition, or he doesn't have a well-enough defined system to be able to meaningfully judge whether or not it avoids Gödel.

It's the fourth one in your list. :-)

If you want extra axioms, A, for arithmetic, your proofs will have the form "A |- P". If your axioms are provably buggy, you can prove something like "|- (A |- B) and (A |- not(B))". You'll have something like "|- Inconsistent (A)". But the system (Hewitt's) in which you're doing all this... hasn't been rendered inconsistent at any step here.

This is just a formalization of the observation that people can tell if formalized math is rigorous using pretty weak, robust logic. Math in general is formalizable as a system for making those judgements. Math in general contains terms that are theories about richer proposed sets of axioms.

A big part of the point of

A big part of the point of Gödel's results, of why they're a big deal, is that you can't avoid them through indirection.

Gödel's results depend on a nonexistent fixed point

Gödel's results depend on a funny kind of indirection that relies on a nonexistent fixed point

avoiding Goedel (re a big part)

Goedel's second theorem produces a proof schema which can't be applied if sentences cannot be internally defined by a fixpoint operation.

Goedel's "incompleteness" says basically that a finite set of hypotheses doesn't yield all arithmetic truths as direct implications. In fact, the first theorem tells you how to take any finite set of arithmetic hypotheses (that is consistent) and generate a new, compatible hypothesis that can't be proved from the originals.

Hewitt's system contains all propositions Goedel can generate that way and makes them available as hypotheses (to sit on the left side of a turnstile).

Hmm

I think you're giving too much credit to what Hewitt's system accomplishes. As far as I can tell, you can replace the turnstile with an arrow (moving the subscript to the left or adding 'true' to the left if there is no subscript) without changing the semantics any. So how is this different from just using second order logic as a foundation for mathematics?

If you take second order logic without any other axioms, that's consistent and complete! You can already use second order logic as a foundation of mathematics in the way you're describing. Just put the theory you're interested in as a hypothesis (LHS of an implication). But that's not interesting.

Godel's first theorem establishes a limitation on what you can prove to be true about arithmetic. Whatever formal system Hewitt has in mind, it won't be able to prove all and only true propositions about the natural numbers, because there's no way around that aspect of Godel. Honestly, I'd find Hewitt's spiel less annoying if he would at least acknowledge his understanding of what Godel did actually prove and how it applies to his system, instead of implying that Godel just made an error somewhere.

re Hmmm

To turn second order logic into a theory of math-in-general wouldn't it have to be augmented with definitions for domains like sentences and proofs and theories and in that capacity also have enough to serve as its own metatheory?

Doing that naively would risk creating too strong a system, in which a Goedel theorem 2 style argument could be made?

sentences and proofs and theories

If you can model sentences and proofs and theories by erasing those things, then they aren't going to be the cause of inconsistency. And again, as far as I can tell, you can model sentences, proofs, and theories with ordinary propositions. That is, there seems to be a scheme for rewriting one of Hewitt's propositions that makes use of turnstiles and proofs and sentences and quoting/dequoting into one that doesn't, by just projecting down onto ordinary propositions and implication.

This allows for a soundness proof, through the observation that these features could be literally meaningless. Now if you added the ability to treat proofs, sentences, etc. as data that can be inspected, combined, quantified over, etc. then that would make it so that you can't just erase these elements. But then you'd be inconsistent.

re sentences and proofs and theoryies

If you can model sentences and proofs and theories by erasing those things, then they aren't going to be the cause of inconsistency

If you want to model them in 2nd order logic you'll have to give construction rules for members of the classes. If you want to model them with fidelity, your rules will include the type restrictions that characterize them in Hewitt's system.

In short, if it works you'll have just written a bunch of new definitions that are isomorphic to Hewitt's. You won't have "erased" anything.

The erasing happens when you

The erasing happens when you go to give meaning to those definitions. We can work in a system that already has a type system. In that case you just erase the part of the type that claims to be doing the encoding. Quoted T just becomes T. Sentence T just becomes T. You shouldn't just look at the symbols he uses to decide what they mean. You should look at what you're allowed to do with the symbols to decide what they mean. His turnstile doesn't capture provability if you can't inspect proofs as data.

re The erasing happens

The erasing happens when you go to give meaning to those definitions.

That and the sentences that follow it in your comment have no obvious meaning.

Earlier you seemed to propose that Hewitt's foundation could be modeled by some other proposed foundation which, well, one would hope. And you seemed to suggest that implied odd things about how much credit anyone deserved, something like that.

Beyond that I'm not so sure what your point is.

More carefully

Hewitt claims that mathematics proves its own consistency. His proof: suppose mathematics were inconsistent, so that ⊢Ψ and ⊢¬Ψ for some Ψ; since ⊢Ψ implies Ψ and ⊢¬Ψ implies ¬Ψ, we would have a contradiction (Ψ and ¬Ψ). But why should I interpret the form "⊢Ψ" as asserting the provability of Ψ? He could have just as easily used ☺ Ψ instead of ⊢Ψ. So why interpret the turnstile as asserting provability?

We can observe that DirectLogic is consistent (AFAICT) by interpreting "⊢Ψ" as meaning "it is true that Ψ" rather than "it is provable that Ψ". This interpretation of turnstile makes it a no-op (erasable). There is no difference in this particular model between Ψ and ⊢Ψ. With this interpretation, the proof of the consistency of mathematics can be read as simply asserting that if mathematics contains a contradiction, then it contains a contradiction.

But I suspect the situation is actually worse than that. Hewitt would like you to read "⊢Ψ" as asserting the provability of Ψ. By the above argument (informal by necessity since a formal definition of DirectLogic doesn't exist), we don't have to interpret that way. But we can ask, is it possible to interpret the turnstile the way Hewitt intends? Is there a model of DirectLogic in which the turnstile is interpreted as asserting the existence of a proof of Ψ? I think the answer is "no". If you did have such a model, then you'd be able to consistently expose the kind of reasoning that would get you in trouble with Gödel's second theorem.

Regarding my statement that you're giving Hewitt too much credit, that was a poor phrasing. In this specific instance, I think you're overestimating the importance of what DirectLogic accomplishes because of the above. I find Hewitt's writing more sensible than I used to as I've been able to understand his language a little more. I still think his characterization of the Gödel incompleteness theorems is off the mark, but I have no interest in denying him whatever credit he's due.

Provability (⊢) is different from truth (⊧)

Provability (⊢) is different from truth (⊧).

In particular, true in the model ℕ is different from provability in the Peano/Dedekind categorical axiomatization for ℕ.

See page 15 of Inconsistency Robustness in Foundations.

PS. When it comes to credit, colleagues and students did a lot of the work!

Semantics

Again, it looks like I can interpret "⊧Ψ" as "Ψ is true", so all of this is probably consistent, but you (AFAICT) haven't demonstrated that a model of DirectLogic exists in which the symbols all mean what you intend for them to mean. And I'm skeptical that such a model exists.

Direct Logic is much more powerful than Tarskian set semantics

Because Direct Logic is much more powerful than Tarskian set semantics, it has no Tarskian set semantics.

However, Direct Logic does formalize the Tarskian set semantics for the Peano/Dedekind categorical axiomatization of ℕ and prove the inferential incompleteness theorem for ℕ.

A consequence is that there are true but unprovable propositions in ℕ.

Proof?

Because Direct Logic is much more powerful than Tarskian set semantics, it has no Tarskian set semantics.

Do you have anything to offer in the way of proof of that claim (which I find dubious)?

Tarskian semantics: can't deal with inference, e.g., soundness

Tarskian set semantics has no way to deal with propositions involving inference (⊢), e.g., classical soundness:

ForAll[p:Proposition]-> (⊢p) => p

Heh

DirectLogic must be stronger than set theory because your intended interpretation of DirectLogic isn't sound in set theory. That's a good one :).

Classical inference (⊢p) is intended to mean that there exists a proof of p. A proof is a finite object and so of course set theory has no trouble encoding that relation. What is your interpretation of ⊢?

Doing a Tarskian set semantics for inference (⊢) is difficult

Doing a Tarskian set semantics for inference (⊢) is difficult.

You can be first ;-)

Raincheck

:)

Stronger than set theory re (heh)

A proof is a finite object and so of course set theory has no trouble encoding that relation.

Sentences are abstract objects in DL. Some but not all Sentences can be obtained by parsing finite strings.

Sentences have the general form of a syntax tree of the sort you would expect except that...

Actors (which is to say arbitrary, non-deterministic, unbounded-in-general computations) can be used as terminals in sentences. The example in the paper is Sentences over real numbers considered as Actors that lazily produce successive digits of precision.

If ⊢p doesn't assert the

If ⊢p doesn't assert the existence of a finite proof object proving p, then I don't know what it means and will wait for someone to tell me.

re If ⊢p doesn't assert the

If ⊢p doesn't assert the existence of a finite proof object proving p, then I don't know what it means and will wait for someone to tell me.

Presumably, Actors as terms in sentences tell us that proofs are validated by evaluation, but evaluation is not guaranteed to terminate.

For example, a nondeterministic Actor can produce a random real number in [0,1] as mentioned in the paper.

An attempted proof that two of those numbers agree in their first K digits, for finite K, would terminate. An attempted proof that they are equal might or might not terminate.

If I want to read the

If I want to read the statement of a theorem about "those two numbers," the random reals output by a non-deterministic Actor, do I need to wait for them to finish outputting an infinite number of digits, so that I can read them all? Or am I reading a statement which contains a description of that Actor computation, and then interpreting the statement to be about their potential output? I'm not sure why we'd want to humor infinite-length theorem statements, if it's the first case (call me a stickler, but if you say you have proved something, I'll probably demand to at least be able to read the claim in finite time; if I'm feeling particularly stubborn, I might also demand to see a proof that I can read in finite time too). If it's the second case, then we obviously have a countable set of sentences, at the meta-level.

Seriously, though, I get why internal to the system you'd want sentences to be uncountable, the way things are set up. I just don't get why we're not permitted to reason at the meta level (by, for example, making the observation that, externally, sentences are countable).

Regarding these proofs, if they are Actor computations, presumably they have finite descriptions. Why can't these descriptions themselves (instead of their evaluation) be the proof objects?

infinite sentences and math foundations

I'm not sure why we'd want to humor infinite-length theorem statements,

Indeed. Here are some suggestions of why we might choose to:

DL seeks to be a foundation for mathematics but in particular a foundation for mathematics for computer science.

DL uses a nicely general model of computation that includes unbounded, non-deterministic computation. A hardware device that uses some suitable quantum signal to generate (over time) a never-ending sequence of 0 and 1 bits is an example. Such a hardware device could be a particularly nice implementation of the real-number-generator example in the paper (defined by a program text that non-deterministically chooses 0 or 1 as the next digit, and then postpones similar generation of further digits until they are needed).
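A toy rendering of that device's observable behavior (my own sketch, with Math.random standing in for the quantum signal):

// An "Actor" lazily producing the binary digits of a nondeterministically
// chosen real in [0,1]; each digit exists only once it is demanded.
function* randomRealBits() {
    while (true) {
        yield Math.random() < 0.5 ? 0 : 1;
    }
}

const r = randomRealBits();
r.next().value; // first digit, produced on demand
r.next().value; // second digit; the sequence never ends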

Parsimony suggests we go ahead and do what is possible: Unify such computations with various sets and classes that, heretofore, mathematicians have often treated simply as given constants.

In orthodox math we have a weird situation where some domains, such as the reals, can not have every member denoted by some finite string, and so mathematical utterances written on paper as strings are confined to talk only about sub-domains of "constructable" reals or to talk about classes of abstract reals.

The real world, for all practical purposes, contains actual reals which are neither constructable in that old sense nor a class: they are specific reals. The device that measures electron spin and spews out random 0s and 1s endlessly is an example (or can be interpreted as such).

Reifying that output, our example infinite real from the hardware device, in a programming language, we get a value that is in every way isomorphic to some real in [0, 1]. We so happen to be able to effectively access its digits one after another, over time.

Presumptively the real number corresponding to our reified random bit generator is not a constructable real. Nevertheless, whatever our program does with this reified value amounts to proving theorems about it.

Well, shouldn't a foundation of mathematics, especially a foundation of mathematics for computer science, recognize that kind of proof?

Surely there are other approaches to "dealing with" the kinds of "infinite" values the hardware device gives us but that isn't the point. It isn't the point that DL is a unique way to reason about these kinds of things. It merely aims to be a comprehensive, parsimonious, useful way.

Private Communications

I suggest you and Hewitt converse privately about it. Maybe you can get a peer-reviewed article accepted.

re Private Communications

I'm discussing one of the papers. What are you doing?

I suggest you and Hewitt converse privately about it. Maybe you can get a peer-reviewed article accepted.

Have it your way

He has an informal system which he makes wild claims about, and a programming language which barely exists on paper.

If you feel like discussing it, be my guest. Guess I got tired of reading all the bullshit about things which don't exist.

Of course, you can discuss things which don't exist ad infinitum, be my guest.

I am out.

Unconvinced

I don't have any strong feelings against DL itself, and sure, it can probably function well as a foundation of mathematics. (Mathematics is robust enough that the particular choice of foundations doesn't matter so much: high level arguments will look the same, but there may be some low-level details that change).

I also wanted to draw attention to constructive accounts of the reals which admit both the orthodox classical reals as a model, and a countable model of computable reals. In such settings we can freely talk about "reals" without committing to either model: everything we deduce applies equally to the classical reals and to the constructable reals. (I know you specified orthodox, and these accounts aren't exactly mainstream.)

Regarding your hardware example, I don't really get it, as at any time, the program has only seen a finite prefix of the "infinite" bit string generated by the device, so I would think it can only have "proven" things about these finite bit strings. If we're talking real life, then obviously there is no sense in which the device can generate an infinite bit string. What do we gain by pretending it can? Especially over an equally unrealistic assumption, that it can generate a finite bit string of any length? Even if you want to model things by pretending they can run forever, I'm still not seeing why we would want to allow infinite proofs.

re finite prefixes of the infinite bit string...

I don't really get it, as at any time, the program has only seen a finite prefix of the "infinite" bit string generated by the device, so I would think it can only have "proven" things about these finite bit strings.

The rationale I am offering is one of parsimony. As an example, suppose that I have two actors supplying a real in [0,1] as a lazy stream of bits. I can inductively define their sums and products. I can establish various algebraic relations among them. My program can reason with these arithmetic relations even if there is no upper bound on how long it might take to produce the first bit of a sum.
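To illustrate that last point (a sketch of mine in JavaScript, not from the paper): deciding even whether x + y reaches 1 means scanning digits until a pending carry resolves, which may never happen.

// Reals in [0,1] as lazy bit streams.
function* half()    { yield 1; while (true) yield 0; } // 0.1000... = 1/2
function* halfToo() { yield 0; while (true) yield 1; } // 0.0111... = 1/2

// After n bits of each stream, x + y lies in [num/2^n, (num + 2)/2^n);
// loop until that interval clears 1.
function sumAtLeastOne(x, y) {
    let num = 0n, n = 0n;
    while (true) {
        num = 2n * num + BigInt(x.next().value) + BigInt(y.next().value);
        n += 1n;
        if (num >= (1n << n)) return true;        // lower bound >= 1
        if (num + 2n <= (1n << n)) return false;  // upper bound <= 1
    }
}

sumAtLeastOne(half(), half()); // true after one digit of each
// sumAtLeastOne(halfToo(), half()) is the same sum, 1/2 + 1/2, but the
// interval straddles 1 forever: the scan never terminates.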

I'm still not seeing why we would want to allow infinite proofs.

Since these have practical implementations, such proofs are in a useful sense about the real world.

Ok model, but the proofs?

I can follow modeling arithmetic on reals that way, I've got no problem with that. You can do it in other ways (I've actually worked on things pretty close to this), but as you say, that's not the point.

Where you lose me, is with the infinite proofs. I'm guessing, if I asked you to show me, in this hypothetical scenario, just how you established the various algebraic relations among the arithmetic operations, you'd be showing me some finite set of symbols on the screen. Perhaps code. Perhaps a logical proof. I think I'd be pretty confused if you told me to wait while you ran an infinite non deterministic Actor computation. In other words, even if you're proving things about infinite computations, I'd still expect finite proofs.

re Ok model, but the proofs?

DL is meant to be a "foundation for mathematics for computer science"

As a "foundation for mathematics" it is meant to be:

1. A foundation that can be built on to make other more specialized mathematical theories.

2. A meta-theory for mathematical theories in general.

Since DL is meant to be the "root" theory, the meta-theory of theories, it serves as its own meta-theory.

As its own meta-theory, it can reason in a first-class way about its own Strings, Sentences, Propositions, Proofs, and so forth.

It's a reflective theory, in that way.

The DL paper linked above specifically calls out the notion of a sentence that could be described as:

"3.1415.... = 3.1415...."

That's an informal string, and it is meant that you imagine the ellipses replaced with all the digits of pi.

You can't build that as a string and then parse it.

But you can abstractly build a syntax tree (a Sentence) with those terms. And you can concretely build a value in a computing system that usefully reifies that sentence for all practical purposes.

As a human activity you have proofs going on in both areas: proofs about abstract sentences that can include arbitrary reals as terms, and proofs about concrete sentences that can include things like reified reals whose digits are computed as needed for whatever kind of inference you're making (or whatever).

All those different activities have a common logical structure. DL tries, at least, to give you a parsimonious way to talk about that over-arching mathematical logic.

I think I'd be pretty confused if you told me to wait while you ran an infinite non deterministic Actor computation.

In a common case, your browser would just time out. :-)

backtracking somewhat

I'm accustomed to systems where proofs are programs, but we don't have to run them, but merely type-check them. Some of these programs are co-fixpoints, so they run "forever". So this is part of the disconnect for me. I would have thought, it ought to suffice to read the "source code" that's generating whatever is counting as proof objects. But I'll have to think about it more. I've also noticed something like staging seems to be at play here, and I think I'd better give that more careful thought, as well.

I have no problem envisioning that sentence "pi = pi" as an abstract sentence. In the version in my head, the nodes of type real at the bottom hold the concrete instance given by a procedure that computes pi to arbitrary precision. The whole thing has a finite description. Internally, yes, there are uncountably many sentences equating two reals. But I fail to see why we can't observe that externally, there are countably many such sentences. If we're talking practical, then I'd think this is all going to end up in a computer implementation, with DL "source code". It's hard to get away from the countability of code. Maybe one way to say this is that DL admits a countable model, even though it doesn't think it does (Skolem's paradox).

re backtracking

Re: "It's hard to get away from the countability of code."

Actors are not the code that (non-deterministically) defines their behavior. Informally, intuitively, think of an Actor as a running instance of the code, not the code itself. Since actors can be non-deterministic, the code alone doesn't identify a particular Actor.

unconvincing infinities: proofs, processes and "real" numbers

(Note that I re-registered on this site just to make this comment, and I am very much otherwise out of my depth here.)

You might be interested in the arguments and work developed by Professor Norman Wildberger as expressed via his YouTube channel.

One popular but controversial video/lecture that seems to me to be somewhat relevant here is MF80: Inconvenient truths about sqrt(2), which is part of his Math Foundations series/playlist, in which he builds up the number-mathematics part of the broader "mathematics" discussed here.

You may enjoy his methodical, step-by-step, no-hand-waving development that enables exact proofs and calculations by avoiding infinite "numbers" and infinite processes.

Actor descriptions vs. message values

Regarding these proofs, if they are Actor computations, presumably they have finite descriptions. Why can't these descriptions themselves (instead of their evaluation) be the proof objects?

Non-determinism means that an Actor program doesn't necessarily specify a specific value.

I'm having a hard time

I'm having a hard time connecting this to anything familiar. In evaluating these non-deterministic actors, is it possible we get failure, i.e., a non-proof? Is non-termination admissible as a proof? There are plenty of proofs in orthodox mathematics that describe procedures that make arbitrary choices, but of course the point is that the choices don't matter. I was guessing something like this could be going on when I suggested the description should suffice. If the evaluation might fail, with a non-proof, then it seems like we need to do the evaluation first, before we can claim to have a proof (perhaps in the form of a finite execution trace?)

undecidable theorems re hard time

If the evaluation might fail, with a non-proof, then it seems like we need to do the evaluation first, before we can claim to have a proof (perhaps in the form of a finite execution trace?)

Right. If your program is trying to check a proof and the proof has been constructed with some term that reifies some Actor, then there is a possibility that the failure of that Actor to return a value will in turn "hang" the proof check.

That doesn't mean that the structure of the proof is mathematically invalid or that your method for checking it is invalid. It only means the correctness of the proof might be undecidable.

ah, different conception of "proof"

I think this gets to the bottom of things, as I definitely take decidability of correctness as part of what being a "proof" means. I thought the whole point of a proof was to convince, and not being able to check a proof would seem to defeat that. But I'll just make sure I translate "proof" to something else (I'm going to call it "semi-proof" to myself, for now, like semidecidable), when I read about DL in the future. This may read as being sarcastic, but I don't mean to be. It's rather tricky to absorb all of the new definitions for terms that's required to make sense of Hewitt's system.

re proof?

[...] it has no Tarskian set semantics.

Do you have anything to offer in the way of proof of that claim (which I find dubious)?

Please forgive me if all I do here is expose some misunderstanding on my part but...

DL can apparently prove that some of its propositions are undecidable and infer that such propositions and their negations are both true, albeit unprovable.

Doesn't that imply that no model exists?

DL can apparently prove that

DL can apparently prove that some of its propositions are undecidable and infer that such propositions and their negations are both true

That would seem to immediately imply that DL is inconsistent, so I think you've remembered or stated that wrong. Can you find a link to that?

Doesn't that imply that no model exists?

It's my guess that a set theoretic model of DL does exist -- namely, the erasure-semantics model that just interprets ⊢p the same way it interprets p.

In Direct Logic, ⊢Ψ does *not* mean the same thing as Ψ

In Direct Logic, ⊢Ψ does not mean the same thing as Ψ

Which axiom of DirectLogic

Which axiom of DirectLogic lets us tell the difference?

There is a Ψ: ⊧ℕΨ but not ⊢ℕΨ

Truth is not the same as provability because, by the Church/Turing incompleteness theorem, there is a Ψ such that ⊧ℕΨ but not ⊢ℕΨ.

I found your definition of

I found your definition of 'closed theory'. How do you prove that ℕ is a closed theory? What does ⊢ mean in DirectLogic in the case that the theory is not closed?

Erasure at top level

I've spent too much time on this and until there is a formal account of DirectLogic, probably shouldn't spend any more time on it, but I wanted to refine my guess as to why DL is consistent.

At the top level, you're trying to encode mathematics as it's experienced by mathematicians by defining ⊢ to mean "proven by unspecified mathematical argument" and not assuming the proofs of mathematics to be computationally enumerable (and, in fact, you can prove that they aren't). But note that, whatever axioms you choose, the theorems of DirectLogic will be computationally enumerable, even if you don't internalize this fact.

I think you can create a set theoretic model of DirectLogic by taking the provable sentences of mathematics (top level ⊢, in DirectLogic) to be all sentences that are actually true in the model. But for a particular enumerable theory X, there can still be a distinction between ⊧X and ⊢X in the model.

I don't see how you're ever going to get a handle on differentiating theorems that are provable by some nebulous mathematical argument (with no reference to a set of axioms) and theorems that are true but not provable by any nebulous mathematical argument.

Homework Assignment

Looks like Hewitt simply wants you to think over his thought experiment until you bite and do his homework.

I would advise against thinking about it too much.

re erasure at the top level

The DL paper linked at the top answers a lot of your issues. For example, it more precisely explains what turnstile means and why. It explains pretty directly (constructively) why proofs aren't enumerable. It contains a formal syntax, core definitions, and axioms.

It is an overview and an exposition of motivations and responses to those motivations but it does have a lot of the harder stuff you complain is missing. I find it much better than some of the earlier condensations of more or less the same material. Not sure how much of that is because the text improved and how much because I got the gist after which the details start to come in focus.

Do you think I'm still missing something?

Between asking "what does turnstile mean?" and the above post titled "erase at the top level", I (re-)read the opening of the paper that introduces turnstile and figured out that there is supposed to be a distinction between DirectLogic and "mathematics". DirectLogic will have some computationally enumerable set of theorems that are provable in mathematics, but his ⊢ is intended to mean provable in some general system of mathematics (that presumably contains DirectLogic as a subset).

So the reason that Godel's second theorem won't apply internally to DirectLogic's ⊢ is that its proofs aren't enumerable.

I still don't know how we know that e.g. the theorems of ℕ are computationally enumerable, but I don't think it's that important. It's probably because we define ℕ in terms of the Peano axioms.

By the above reasoning that Mathematics contains DirectLogic, it seems true that for any statement Ψ that DirectLogic proves, we should have that ⊢ Ψ is true. This suggests to me that erasure (interpreting ⊢Ψ as Ψ in the model) should work as a semantics, which should probably lead to a set theoretic consistency proof. It also shows that we don't have to interpret ⊢ the way Hewitt intends.

a little, yes. re Do you think I'm still missing something?

Do you think I'm still missing something?

I do, based on what you just said. I'll try to be helpful. Conversely, please understand that I'm a naif in foundation debates. E.g., not all but quite a lot of the talk relating to model theory is very hard for me to figure out.

You say:

I (re-)read the opening of the paper that introduces turnstile and figured out that there is supposed to be a distinction between DirectLogic and "mathematics".

Rhetorically, the paper distinguishes between "mathematics" and "Mathematics".

The noun "mathematics" describes mainly the formalizable traces of the work of living labor... the activity of mathematicians present and historic. The historic social process of "mathematics" is retrospectively interpreted as one in which a proliferation of theories are each advanced. Some are internally inconsistent. Some are provably not. Some we can't tell, and so on. The society of mathematicians is robust in the sense that inconsistent theories which arise don't generate inconsistencies in any other theories (but their extensions). And there is a kind of very conservative, safe, overarching logic that unites the society of mathematicians in an evolving discussion about theories in general and the relations among particular theories.

A "foundation for mathematics" in the paper is conceived to be on the one hand a candidate logic to formalize that overarching conversation about theories in general, and on the other hand as a foundation that can be extended to create that proliferation of theories.

Classical Direct Logic is a proposed logic to serve as that foundation. It is arrogantly nicknamed "Mathematics".

DL is meant to be that foundational theory but it is in particular meant to be a "foundational theory of mathematics for computer science".

This gives rise to some of the unusual features of DL because the design of DL contemplates mathematical activities (e.g., generating and checking proofs) conducted autonomously by machine. As one example, close to the core of the thinking, DL maintains an explicit distinction between a string of symbols you might write on paper, and a sentence which is an abstract syntax tree. The distinction runs deep in the sense that DL allows sentences to include arbitrary elements of uncountable sets as terms. Thus, not every Sentence in DL can be unparsed to produce a String.

DirectLogic will have some computationally enumerable set of theorems that are provable in mathematics,

You can define enumerable subsets of DL theorems but DL contains an uncountable number of theorems, I believe, since given any proposed enumeration it would not be hard to add new theorems by diagonalization.

⊢ is intended to mean provable in some general system of mathematics

It is meant to mean: provable using the syntax, domain definitions, and inference rules in the appendix.

So the reason that Godel's second theorem won't apply internally to DirectLogic's ⊢ is its proofs aren't enumerable.

No, Goedel's 2nd doesn't kill DL, in spite of DL's proof of self-consistency, because an inferentially self-referencing sentence cannot be constructed. You can't construct a proposition PHI such that a proof of PHI formally implies, within DL itself, that PHI cannot be proved.

By the above reasoning that Mathematics contains DirectLogic,

"Is" not "contains". Capital-M "Mathematics" is a nickname for "Classical Direct Logic". (I don't think the terminology is used with Bourbaki-style tight-buttedness throughout the exposition but you can see the idea there at least.)

it seems true that for any statement Ψ that DirectLogic proves, we should have that ⊢ Ψ is true.

By the definition of "⊢".

This suggests to me that erasure (interpreting ⊢Ψ as Ψ in the model) should work as a semantics,

I'm on thinner ice in this next answer than most, I think, but I'll try:

DL contains as first class objects other theories using (I think) the same syntax and definitions, with extensions allowed, and with alterations to inference rules allowed. I'm fuzzy on this but in any event you can create "sub-theories" for which DL is a meta-language and the sub-theories are first class objects in DL.

First, though I know this does not answer your question, the naked turnstile is useful notation just for keeping clear what is proved in what when using DL in meta-theory mode.

Second, still not quite answering you but getting closer: DL is meant to be useful as its own meta-theory, with its own proofs as first-class objects. "⊢Ψ" gives an abstract notation that says PSI has a proof.

Third, now I think I can make better sense of the answer Hewitt gave you to this question: the truth of PSI in DL is different from the existence of a proof of PSI in DL.

An undecidable PSI may be regarded as true and so too its negation. DL can't treat PSI as true or false in that case but it can hypothesize (for the sake of some argument) the truth of PSI without having to assume that the truth of PSI <=> PSI has a proof.

Does that help?

Thanks

I appreciate the detailed attempt at clarifying what you think is going on. I found this quote in the opening paragraph:

Classical Direct Logic is a foundation of mathematics for Computer Science, which has a foundational theory (for convenience called “Mathematics”) that can be used in any other theory.

The way I read that, it seems that Classical Direct Logic is supposed to be a superset of Mathematics. I suppose Hewitt could provide clarification if he desired.

I have a hard time with this uncountable sentences bit for reasons similar to those named by James Lottes (at least I think for the same reasons - I skimmed that exchange).

I don't see how a proof could ever be something that wasn't checked in finite time. I can understand how sentences could be uncountable, but for those sentences to be proven true, it seems to me that only finitely much of the sentence could be involved in the proof. And thus you could always generate a finite sentence that represented what was proven. e.g. 3.1415926... > 3. I don't need to know what digits the ellipsis represent to prove this sentence.

Stated in a more glib way, I have a windows machine and a linux machine here, but neither is a counterexample to Church's thesis, as far as I know. So when Hewitt ships DirectLogic.exe to me and I start using it to prove theorems, the ones I'm able to prove will live in a computationally enumerable set.

But it would be problematic for the meaning of ⊢ in the logic to be the set of theorems provable by a computationally enumerable process, because the logic is capable of proving that there is no such enumeration. So it wouldn't be sound.
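
To make that concrete: any implemented checker gives you such an enumeration by dovetailing over all candidate proof strings. Here is a minimal sketch in Python; all names are illustrative, and check is a hypothetical stand-in for whatever a DirectLogic.exe would do:

    # check(proof_string) is assumed to return the theorem a valid proof
    # proves, or None for an invalid proof; given any such terminating
    # checker, the theorems it accepts are computationally enumerable.
    from itertools import count, product

    ALPHABET = "()|-~&01x,"        # whatever finite symbol set proofs use

    def all_strings():
        for n in count(1):
            for chars in product(ALPHABET, repeat=n):
                yield "".join(chars)

    def enumerate_theorems(check):
        for p in all_strings():
            t = check(p)
            if t is not None:
                yield t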

Anyway, I do appreciate the effort in communication, but I really need to stop thinking about this. I'll continue to read any posts you make on the subject, so if you figure something out and explain it, I'll read it. But I'm not going to spend any more time wading through the article.

Each proof has a finite number of steps

Thanks for looking at the article.

Certainly, each proof has a finite number of steps :-)

re I don't see how a proof could ever not be checked in finite time

I don't see how a proof could ever be something that wasn't checked in finite time. I can understand how sentences could be uncountable, but for those sentences to be proven true, it seems to me that only finitely much of the sentence could be involved in the proof. And thus you could always generate a finite sentence that represented what was proven. e.g. 3.1415926... > 3. I don't need to know what digits the ellipsis represent to prove this sentence.

DL says nothing at all about how long it takes to check a finite number of proof steps.

How long it takes could depend on what oracles your proof checker includes.

An example "oracle" would be an equality check for infinite streams of digits that can sometimes skip over infinitely many steps by using a (lisp-style) EQ? shortcut as a kind of implementation-specific internal hack.

How long a proof takes to check, in DL, has in the general case an undecidable lower bound.
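
A toy rendering of that kind of oracle (plain Python, not DL; stream_eq is an invented name): identity of the two stream objects lets the check answer in O(1), while digit-by-digit comparison can refute equality but never confirm it in finite time.

    from itertools import islice

    def zeros():
        while True:
            yield 0

    def stream_eq(a, b, fuel=1000):
        if a is b:                         # the Lisp-style EQ? shortcut
            return True
        for x, y in islice(zip(a, b), fuel):
            if x != y:                     # finitely much work can refute...
                return False
        raise RuntimeError("undecided")    # ...but never confirm equality

    s = zeros()
    print(stream_eq(s, s))                 # True immediately, by identity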

Oracles

My interpretation is that the turnstile is deliberately left open to that kind of Oracle or to axioms that weren't originally included in the schema. The way I was suggesting you could prove this sound is to assume the ultimate Oracle: the truth oracle that answers whether a proposition is true. This is another way of looking at erasure semantics -- view the naked turnstile as a truth Oracle.

unerasable turnstile spotting (re Oracles)

What do you make of this? On printed page 13 (pdf page 14) there's a proof by contradiction that "Mathematics is Open". The proof assumes a "provably computable total procedure Proof" such that it is provable that:

⊢<sub>p</sub> Ψ ⇔ ∃[i:ℕ]→ Proof[i]=p

I believe that says PSI is provable by proof p if and only if there is some i such that p, a proof, is the value computed by Proof[i].

The argument continues by performing a diagonalization: "As a consequence of the above, there is a provably total procedure ProvableComputableTotal that enumerates the provably total computable procedures that can be used in the implementation of the following procedure: ...". OK, so there is no such "Proof" procedure.

I'm not sure how we could erase the turnstile here.

The truth of PSI is clearly different in this theory from whether or not it is provable by a computable total procedure.

To be reflective, to serve as its own meta-theory, DL needs a built-in alternative to Provability Logic (see printed page 38, PDF 39).

So to sum up this and the bit about equality-test oracles:

1. Whether or not a proposition is provable is not, in the general case, fully determined by the semantics of DL itself. It depends on the implementation. (And note that the proposition we are trying to prove may only ever exist within one implementation.)

2. In order for DL to make reflective arguments, such as about its own inferential incompleteness, the Mathematics (empty to the left, no theory subscript) turnstile can't be erased. That's because a proposition can clearly be true but unprovable, as illustrated by the ambiguity of provability in (1).

It could still mean truth

We're talking about a different piece of syntax, now:

⊢<sub>p</sub> Ψ

What would a proof of Ψ look like for the truth oracle? It could just look like this:

Ψ. QED.

The catch is that this proof is only accepted by the oracle when Ψ is actually true. Thus the proposition you quoted is just asserting, under this interpretation, that there is no computable enumeration of the true propositions.

But you might be right that you can't fully erase ├ Ψ down to true/false. If his logic allows you to reason that ├ Ψ iff there exists a proof p such that ├<sub>p</sub> Ψ, then we can't represent Ψ as true/false -- we need to remember the actual proposition (otherwise 'True. QED' would be a proof of any true proposition). I still think you can give it a simple semantics this way, though, where ├ Ψ means that Ψ is a true proposition.

I basically agree with your 1). The meaning of ├ is underspecified by DirectLogic in the sense that the axioms and inference rules of DirectLogic don't uniquely determine what it might be in a model of DirectLogic (probably, from what we've seen). But I don't agree with your 2) because I think one particular model of DirectLogic could have the naked turnstile meaning "it is true that".

Impossible to computationally enumerate the true prop of ℕ

It is impossible to computationally enumerate the true propositions of the Peano/Dedekind categorical axiomatization of ℕ.

Church's paradox: theorems of Math aren't computationally enum

Church's paradox is that the theorems of Mathematics are not computationally enumerable. See page 40 of Inconsistency Robustness in Foundations.

[Franzén 2004] argued that mathematics is inexhaustible because of inferential undecidability of mathematical theories. The theorem that mathematics is open provides another independent argument for the inexhaustibility of mathematics.

Direct Logic: provably true but unprovable propositions in *ℕ*

Direct Logic can:
* define Tarskian set theory semantic satisfiability for the Peano/Dedekind categorical axiomatization of the theory ℕ
* use the above semantics to prove that there are true but unprovable propositions in the theory ℕ.

However, there is no Tarskian set theory semantics for Mathematics.

re DL proves: true but unprovable propositions in *ℕ*

true but unprovable propositions

Is it possible in DL to make a construction like this:

An actor of sort Rand01 is like the one in that paper that returns a non-deterministic, infinite list of binary digits.

An actor of sort Zeros returns an infinite list of 0s.

An Actor of sort ZerosOrRand01 returns an infinite list of binary digits. Based on a non-deterministic choice before the first digit is returned, it decides to act either as a Zeros Actor or a Rand01 Actor.

Can there be a class of propositions in which the equality of pairs of Actors of sort ZerosOrRand01 is asserted?

Is there then not a further final proposition that asserts all members of that class are decidable?

Is that final proposition not provably undecidable?
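
Not an answer, but here is a toy model of the three sorts (Python generators rather than Actors; random stands in for the nondeterministic choice):

    import random

    def rand01():              # Rand01: infinite nondeterministic binary digits
        while True:
            yield random.randint(0, 1)

    def zeros():               # Zeros: the constant stream 0, 0, 0, ...
        while True:
            yield 0

    def zeros_or_rand01():     # one nondeterministic choice, made up front
        yield from (zeros() if random.random() < 0.5 else rand01())

No finite prefix of digits can distinguish a ZerosOrRand01 that chose Zeros from a Rand01 that has so far happened to emit only 0s, which is what makes equality propositions about such Actors interesting here.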

Church/Turing: not all true propositions of ℕ can be proved

Church/Turing first proved that not all true propositions of ℕ can be proved. (Gödel's proof is invalid because he used a "self-referential" proposition based on a nonexistent fixed point.)

Please see pages 14-15 of Inconsistency Robustness in Foundations

Gödel's proofs are invalid even though N is very powerful

Gödel's proofs are invalid even though:
* The Peano/Dedekind categorical axioms for N are very powerful.
* It was Church/Turing who first gave a valid proof of the inferential incompleteness of N using computational undecidability.

The reason that Gödel's proofs are invalid is that they make use of a nonexistent fixed point to construct the "self-referential" sentence "I am not provable."

Mathematics is provably both open and inexhaustible

Mathematics is provably both open and inexhaustible.

Theorems of Mathematics can be generated but they cannot be exhaustively computationally enumerated.

See page 13 of Inconsistency Robustness in Foundations.

exhaustive enumeration

Theorems of Mathematics can be generated but they cannot be exhaustively computationally enumerated.

I was careful to confine my informal statement to just constructible theorems since you allow all reals to be used as terms.

Good point!

Thomas,

Good point!

Cheers,
Carl

Mathematics is yet another syntactic formalism

Mathematics is yet another very informal syntactic formalism. Therefore, everything you state about it is true; it's just letters on paper.

You can try to make things precise. Then you end up with either another syntactic formalism such as Z or Bird-Meertens, or you can try to fully formalize and mechanize it and you end up with a prover such as Lego/Coq/HOL/PVS.

I don't think mathematics exists, except as some vague agreed-upon notion of how to write symbols on paper.

Trivial disproof

Fixed points are mathematical objects. Fixed points do not exist in direct logic, therefore direct logic is not mathematics.

Although I do not doubt the underlying question. I think there is a formal language in which all of mathematics can be expressed, I just don't think it's direct logic.

I think what this language is like is an interesting question. At the moment I would say it's something like Horn Clauses: a meta-logic in which we can write the axioms of mathematics and test them for consistency.

Obviously you could not prove the consistency of the meta-logic in itself (such a thing would be meaningless in any case) so the logic needs to be simple enough to be intuitively correct.

fixpoints

Fixed points are mathematical objects. Fixed points do not exist in direct logic,

Hewitt's assertion that "fixed points don't exist" is a description of a characteristic of a particular syntax. In the syntax of his system, there are no sentences containing self-encodings. A sentence-like-string containing a self-encoding is not part of any WFF.

You can still do Goedel's fixpoint constructions just fine, but only as meta-reasoning about systems being modeled. Basically, it becomes possible to state Goedel's theorems with a lot of formal precision.

Incomplete

I am thinking about things like Zeno's paradox. The fixed point of the infinite series 1 + 1/2 + 1/4 ... = 2. This is clearly part of mathematics, so any system that cannot express this is not "mathematics", but just another formal language. Hewitt's system may be consistent, but it is incomplete (hence not contra Godel).

With specific relevance to computing, Object instantiation is the fixed point of the record (see my post below).

Can Hewitt prove his system is complete (to be contra Godel it must be consistent and complete)?

Further, if we observe that a required property of mathematics is that it is self-consistent, there is nothing that requires there be a single self-consistent system. There may be multiple possible "mathematics", each self-consistent but not mutually compatible, kind of like independent mathematical universes. One could say that consistency vs completeness is obvious: if we try to be complete we necessarily have to include multiple incompatible mathematics into the whole, which results in an inconsistent system.

It is easy to see this in action. Heyting algebra and Boolean algebra both operate in the domain of single 'bit' values, yet they produce inconsistent results. You cannot have a single binary logic incorporating both, you have to choose the domain. Now we can introduce types to keep these separate, but then the types themselves can have different systems (extensional vs intensional type theory for example). So we can introduce a type-of-types, and so on to infinity.

I guess the real question is can we consider this infinite stack of types to be mathematics? Maybe this is what Hewitt is trying to say, that mathematics is by nature open and cannot ever be "complete", but it can be consistent. That seems reasonable, almost trivial, but does not contradict Godel.

So if Hewitt is saying that we use types to fragment mathematics into self-consistent domains, such that consistency within a domain is a requirement, and that types keep domains separate, so that a Heytingean is a separate type from a Boolean, then the system is by definition consistent (but not complete). Q.E.D. Then I think I agree, but I don't see any necessity to ban fixed-points for this to be true.

fixpoints of procedures and "contra goedel"

The fixed point of the infinite series 1 + 1/2 + 1/4 ... = 2.

Hmm. 2 is the fixed point of a procedure \x.(1 + x/2). Hewitt says that can be expressed and proved.

Also that 2 is the limit of the inductively defined series of partial sums 1, 1 + 1/2, 1 + 1/2 + 1/4, and so on. Yes, that can be proved too.
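
A quick numeric check of both readings (plain Python; the iteration converges because |f'(x)| = 1/2 < 1):

    f = lambda x: 1 + x / 2

    x = 0.0
    for _ in range(60):        # iterate toward the fixed point of f
        x = f(x)
    print(x)                   # 2.0 to double precision

    print(sum(2.0 ** -k for k in range(60)))   # partial sums of 1 + 1/2 + 1/4 + ...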

Can Hewitt prove his system is complete (to be contra Godel it must be consistent and complete)?

Hewitt's formalization of math-in-general is "complete" this way:

Suppose you have a theory T such that T proves some proposition P:

T |- P.

In Hewitt's system you should be able to show:

|- (T |- P).

If T also happens to prove not(P), then in Hewitt's system you'd also get:

|- (T |- not(P))

Thus the inconsistency of T is demonstrated by those self-consistent theorems in Hewitt's system.

Hewitt's system contains the infamous trivial proof that it does not prove any proposition P and not(P). (No "|- P" and "|- not(P)".)

Goedel's second theorem shows a method for deriving a contradiction from a proof of self-consistency, but Goedel's 2nd theorem method doesn't work in Hewitt's system. Goedel's method requires the target system to permit a sentence S to be defined as a fixed point of a formula containing S itself. Type restrictions in Hewitt's syntax prevent any such definition.

I guess the real question is can we consider this infinite stack of types to be mathematics? Maybe this is what Hewitt is trying to say, that mathematics is by nature open and cannot ever be "complete", but it can be consistent. That seems reasonable, almost trivial, but does not contradict Godel.

Goedel is often assumed to have killed off Hilbert's program from the 1920s; he is commonly supposed to have shown Hilbert's goal a lost cause. In 1927 Hilbert wrote:

It is a great honour and at the same time a necessity for me to round out and develop my thoughts on the foundations of mathematics, which I expounded here one day five years ago and which since then have constantly kept me most actively occupied. With this new way of providing a foundation for mathematics, which we may appropriately call a proof theory, I pursue a significant goal, for I should like to eliminate once and for all the questions regarding the foundations of mathematics, in the form in which they are now posed, by turning every mathematical proposition into a formula that can be concretely exhibited and strictly derived, thus recasting mathematical definitions and inferences in such a way that they are unshakeable and yet provide an adequate picture of the whole science. I believe that I can attain this goal completely with my proof theory, even if a great deal of work must still be done before it is fully developed.

No more than any other science can mathematics be founded by logic alone; rather, as a condition for the use of logical inferences and the performance of logical operations, something must already be given to us in our faculty of representation, certain extra-logical concrete objects that are intuitively present as immediate experience prior to all thought. If logical inference is to be reliable, it must be possible to survey these objects completely in all their parts, and the fact that they occur, that they differ from one another, and that they follow each other, or are concatenated, is immediately given intuitively, together with the objects, as something that neither can be reduced to anything else nor requires reduction. This is the basic philosophical position that I regard as requisite for mathematics and, in general, for all scientific thinking, understanding, and communication. And in mathematics, in particular, what we consider is the concrete signs themselves, whose shape, according to the conception we have adopted, is immediately clear and recognisable. This is the very least that must be presupposed; no scientific thinker can dispense with it, and therefore everyone must maintain it, consciously or not.

Subsequent to Hilbert and Goedel, types became better understood.

Hewitt is claiming (at least plausibly) that Goedel did not kill Hilbert's program at all. In fact, Hewitt has a candidate system to do what Hilbert wanted.

T inconsistent means ⊢<sub>T</sub>Ψ and ⊢<sub>T</sub> ¬Ψ

That theory T is inconsistent means that there is some proposition Ψ such that

⊢<sub>T</sub> Ψ
⊢<sub>T</sub> ¬Ψ

Hilbert's program was not entirely well defined and somewhat more complicated than you indicated above.

Also, Hilbert would likely not have been happy with the Direct Logic proof of consistency although he was angry at Gödel for the incompleteness results.

Hewitt: OK

OK (re notation).

Re Hilbert's program being not entirely well defined: That's why they thought it was incredibly hard.

Struggling

Every time I start to think I see something useful here, something else comes along that just doesn't seem to make sense. One sentence seems full of insight, the next incoherent and random.

What does "|- T" mean? I can infer T from nothing? (I have read that part of the paper, and I know what it says it means, but I don't get how it can mean that).

It seems to me that there is a big self referential statement here, mathematics is consistent, and what is consistent is mathematics. The system which outlaws self referential statements is defined by one? Isn't that a paradox?

⊢<sub>T</sub> Ψ means that theory T infers proposition Ψ

⊢<sub>T</sub> Ψ means that theory T infers proposition Ψ.

re struggling

What does "|- T" mean?

As Hewitt points out, it means I created confusion by wrong notation (re |-T). Sorry.

Goedel's method requires a fixed point

Goedel's method requires the target system to permit a sentence S to be defined as a fixed point of a formula containing S itself.

I am not entirely sure that is the case. Possibly you get it much better than I do.

I've been looking at when you cross the border from complete and consistent to complete or consistent. So, it turns out Presburger arithmetic is consistent and complete. The difference seems to be that you shift from something which only has addition to something which has both addition and multiplication; which, by itself, I find a highly unsatisfactory observation (I would like to see it explained another time).

Don't you need fixed points to define addition? From a lambda calculus perspective (a long time ago for me), I wouldn't see the difference between addition and multiplication that trivially. I would assume that if you need a fixed point for multiplication then you would also need a fixed point for addition. (The latter implying incompleteness of Presburger arithmetic if the existence of fixed points in an LC possibly implies incompleteness a-la Godel.)

(It would be nice to see that in LC addition can be defined directly and for multiplication you need a fixed point, though, as another argument supporting Goedel informally.)

It's all nice, but getting rid of even addition?

EDIT: I guess I should have written something like "the fixed point isn't the problem, multiplication (or something like that) is." I.e., solving the wrong problem.
EDIT: Or: It's likely you need a fixed point to prove Godel's incompleteness results. But that's not the same as proving that you need a fixed point for any proof of incompleteness.
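
For what it's worth, neither operation needs a fixed point: Church-numeral addition and multiplication are both non-recursive combinators, as a quick check in (untyped) Python shows. So wherever the Presburger/Peano boundary comes from, it isn't that multiplication requires a Y combinator to define.

    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))   # no recursion
    mul  = lambda m: lambda n: lambda f: m(n(f))                   # no recursion

    to_int = lambda n: n(lambda k: k + 1)(0)
    two, three = succ(succ(zero)), succ(succ(succ(zero)))
    print(to_int(add(two)(three)), to_int(mul(two)(three)))        # 5 6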

I am not entirely sure that

I am not entirely sure that is the case.

It's not. One way of thinking of diagonalization is that it operationally guarantees a fixpoint will occur (not that I've often seen the term "fixpoint" used for it in that context). It doesn't matter whether or not the occurrence can be talked about within a particular formalism.

re: "not entirely sure" and 2nd theorem

It doesn't matter whether or not the occurrence can be talked about within a particular formalism.

Unless you are using Goedel's second theorem to try to produce a contradiction in an allegedly overpowerful, inconsistent system.

It's a theorem. It doesn't

It's a theorem. It doesn't matter what you're using it for.

re it's a theorem

John, I think you are confused about Goedel's 2nd result:

It's a theorem. It doesn't matter what you're using it for.

The theorem has the form of a construction. The construction tells you how to construct a contradiction in certain over-powerful systems.

The construction can not be carried out in Hewitt's system.

There is no way to apply Goedel's 2nd theorem to a system other than to use it to try to carry out the construction it proposes.

No, I'm not confused

No, I'm not confused. You can take a look at my proof of the general result, if you like (pretty sure there's a link to it around here somewhere).

Church/Turing incompleteness doesn't mean consistency unprovable

The Church/Turing inferential incompleteness theorem does not mean that consistency is unprovable.

re I'm not confused

I'll try and map for you:

Goedel, Church, Turing, and many since have proved that sufficiently powerful systems must be inferentially incomplete or else inconsistent.

Goedel proved that in some sufficiently powerful systems, a proposition asserting self-consistency of the system could be constructed in such a way that either that proposition could not be proved, or else a contradiction could be derived within the system.

Hewitt's system contains a proposition of self consistency and a proof for that proposition, but the proposition is not of the sort that Goedel needs to derive a contradiction.

Goedel's construction of the kind of proposition he needs is blocked in Hewitt's system, similarly to how you are blocked from creating a circular list structure in a pure dialect of lisp.
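
The lisp analogy can be made concrete (a loose illustration in Python, not DL): with pure constructors a cell can only refer to values that already exist, so cycles are impossible; creating one requires mutating a cell after it is built.

    def cons(head, tail):          # pure constructor: no cell can contain itself
        return (head, tail)

    xs = cons(1, cons(2, None))    # fine: the tail existed before the cell

    cell = [1, None]
    cell[1] = cell                 # a circular "list", possible only via mutation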

Types mean that fixed points don't exist for sentences or props

Types mean that fixed points don't exist for sentences; also they don't exist for propositions.

Consistency is possible for math theories but not completeness

Consistency is possible for practical classical mathematical theories but not inferential completeness.

re Goedel needs a fixed point

I am not entirely sure that is the case.

You are thinking of Goedel's incompleteness work as one theorem but we're really talking about two different ones here.

First:

Let us stand outside of a system X for a second and work with a model of X, Xm. In Xm, let's assume formulas and sequences of formulas are encoded as numbers. So is a proof procedure for X (a way to check the correctness of a sequence of steps). Further, assume that X itself has enough "power" not to express "Xm encodes X" but just enough to describe Xm. Standing outside of X, you and I know by demonstration that Xm formulas can be translated back into X formulas. In X itself, by assumption at the moment, there is no way to prove that an encoded formula corresponds to its unencoded version.

Goedel's first theorem essentially gives us a computational method for producing an encoded sentence, Sm, such that we can be confident X can prove neither S nor not(S), unless X can prove both S and not(S).

X is therefore inferentially incomplete or else it is inconsistent.

We could create a new system, X+S, in which S is adopted as a new axiom.

In Hewitt's system we could treat S as an axiom in a new theory and continue on.

So far no problems.

Goedel's second theorem ponders what would happen in a system powerful enough that we didn't have to step outside it.

In other words, what if system X could itself prove that S is provable if and only if the encoded form, Sm, is "provable" in the Xm model?

Further suppose that, like Hewitt's system, X contains a theorem that says X is consistent: "no S exists where S and not(S) are both provable".

Given that much, Goedel shows how to derive a contradiction in X. X would, in that event, be inconsistent. Roughly, "We just proved, using Sm, that S can not be proved. But S implies that S can not be proved. So S is true. A contradiction!"

You see: Goedel's second theorem is a kind of proof schema that can be adapted to attack any overly powerful (hence inconsistent) system.

For Goedel's attack-from-the-inside method to work, a system X has to permit a sentence S to be defined recursively. Roughly, "S is defined to be the formula Q with the encoding of S substituted for a certain one of Q's terms." S is recursively defined in terms of its own encoding. S is a fixpoint of a function mapping sentences to sentences.

Hewitt says the type restrictions of his system prevent such a definition of S. I think roughly the restriction amounts to: You can construct sentences only from primitive terms and from earlier constructed sentences.

You can't form a sentence S that says "This sentence, S, is provable if and only if Sm has a model proof in Xm."

With that restriction, the attack vector of Goedel's second theorem, his method for deriving a contradiction in an inconsistent system, doesn't work. You could take this as empirical support for the claim that Hewitt's system is consistent.

Hewitt's larger argument "contra Goedel" is that Goedel was able to get away with defining sentences by fixpoint that way only because of a confusion at the time. That, indeed, the syntactic restriction Hewitt proposes should be universal to all of math. Permitting self-referential sentences in any but a trivial system is, after all, a way to instantly make the system inconsistent!

The Second Theorem still

The Second Theorem still doesn't require explicit self-reference. The "fixpoint" still doesn't have to be known by the formal system being studied. The formal system does have to do a bit of elementary reasoning, but nothing troublesome, other than the basic premise that the system proves its own consistency.

Technically, the First Theorem doesn't require explicit self-ref

Technically, the First Theorem doesn't require explicit self-reference either. It's convenient to explain it as a sentence with a "this" in it to philosophers, but technically you have a simple relation R(x, "x"). (The mathematics doesn't have a concept of the noun "this".)

That's no weirder than any relation P(x, y): the quoting is weird, but the self-reference doesn't exist. I think there's some discussion about it on the Wikipedia page but I am not sure.

John, you're wrong about 2nd theorem

The Second Theorem still doesn't require explicit self-reference.

Yes it does. Goedel wrote:

Theorem XI: [which we call the second theorem]: Let x be an arbitrary recursive consistent class of FORMULAS. Then the SENTENCE which asserts that x is consistent is not x-PROVABLE; in particular, the consistency of P is unprovable in P, assuming that P is consistent [...]

[The proof is outlined, then....]

Now we establish the following: All the defined concepts (proved assertions) of Section 2 and Section 4 are expressible (provable) in P. For, we have used throughout only the ordinary methods of definition and proof of classical mathematics as they are formalized in system P. In particular, x (like every recursive class) is definable in P. Let w be the SENTENCE by which Wid(x) is expressed in P. [...]

The second theorem requires that the target system be able to define its own class of sentences (which Hewitt's does) and to construct a member of that class by fixpoint (which Hewitt's system does not permit).

Critical to the proof is that the target system is able to deduce from unprovability of an encoded statement in the model, the unprovability of the unencoded statement in the target system itself. If the target system can not do that, Goedel's attack doesn't work.

It's well-known (or at least

It's well-known (or at least I tend to think so, granting that in a wider sense none of this is well-known) that Gödel's original proofs were far more specific than the principles they embodied; as I recall, he deliberately limited the actual proofs to PM to avoid accusations of abstract flimflammery. Whereas in my treatment I removed as many assumptions as possible in order to debunk claims that his results were somehow consequences of idiosyncratic assumptions in his treatment.

[note: imperfectly phrased, but the bottom line is right.]

Gödel's "proofs" based on PM which had types (now corrected)

Gödel's "proofs" were based on Principia Mathematica, which had types that are now corrected thereby clearly invalidating the "proofs".

Responding to Wittgenstein's critique (that "self-referential" sentences lead to inconsistency in mathematics), Gödel retreated to first-order logic (which has the bug that it can't categorically axiomatize N).

Both right

The crux is the premise of the second theorem that the system under question proves a certain consistency sentence, which we informally read as "the system proves its own consistency". The particular form this sentence takes matters a great deal.

In one sense, the second theorem surely applies to Hewitt's system: if his system (with his theory of naturals included, of course) could prove the canonical Gödel consistency sentence, or a consistency sentence satisfying conditions that have since been considered in order to generalize Gödel's result, then surely the system is inconsistent. (I agree with you).

In another sense, the second theorem doesn't apply, as we have no evidence that this premise is satisfied. In particular, Hewitt's "self-consistency of mathematics" is so trivial as to give us no a priori expectation that it qualifies as a strong enough consistency sentence. In particular, in order to make use of it, we would seem to be bound by the syntactic and type restrictions that are in place with the quotation mechanism it uses (Hewitt's syntax reflection). There's plenty of opportunity there to block a construction. (However, the last thread on this proved that it is actually subtle whether the construction is blocked). This is how I interpret what Thomas Lord is saying. (And I think I agree with him.)

John, I'd like to see your version of 2nd result

Whereas in my treatment I removed as many assumptions as possible in order to debunk claims that his results were somehow consequences of idiosyncratic assumptions in his treatment.

If you are both correct and faithful to Goedel in consequence, then your cleaned up version of the 2nd result should provide us a recipe for constructing some specific propositions from which we can derive a contradiction.

Where can I see your recipe for performing a construction in a target system?

A pattern in Hewitt's

A pattern in Hewitt's arguments is treating Gödel's results as if they were anomalous consequences of some pathological assumption, so that some minor disruption of the particular assumption can cause the basic result to fail. That's unrealistic; as some of us have pointed out from time to time, the reason Gödel is famous is that the principles involved are stunningly robust. These particular treatments are just parts of the elephant. I did try for a somewhat larger part of the beast, but I suspect it's still only a small part compared to the whole. For what it's worth, the part of the elephant I've been looking at is described in my blog post (I was right, there is a link to it elsewhere in this discussion) "Computation and truth".

Church/Turing incompleteness robust; Gödel's "proofs" fail

The Church/Turing proof of the inferential incompleteness theorem is robust; but Gödel's arguments fail because fixed points don't exist.

Chaitin [2007] presented the following analysis:

Gödel’s proof of inferential undecidability [incompleteness] was too superficial. It didn't get at the real heart of what was going on. It was more tantalizing than anything else. It was not a good reason for something so devastating and fundamental. It was too clever by half. It was too superficial. [It was based on the clever construction] “I'm unprovable.” So what? This doesn't give any insight how serious the problem is.

The deeper reason for inferential incompleteness is given by the Church/Turing proof.
Gödel’s argument for the 1st incompleteness theorem was based on the pathological assumption that fixed points exist.

re pattern in Hewitt's

A pattern in Hewitt's arguments is treating Gödel's results as if they were anomalous consequences of some pathological assumption, so that some minor disruption of the particular assumption can cause the basic result to fail.

It's really not. He's saying something really simple in this area but it is evidently hard to communicate simply.

If I may, I refer you to the "proof sketch" comment I very recently made to marco on this topic page. It might shed some light.

Yah. That's what you claimed before.

Still unconvinced one needs fixed points for the construction of Godel's argument.

burden re Still unconvinced one needs fixed points for the const

Still unconvinced one needs fixed points for the construction of Godel's argument.

That's on you.

Too Cheeky Nando's with the Lads

Of course not. You claim one needs a fixed point to express certain constructs. But when I look at what I expected would be typically recursive constructs, addition and multiplication of Church numerals, I find they are easily expressed without recursion.

What is there to say that this isn't true in the general case?

In order for the proposition "By forbidding fixed point combinators Godel's argument doesn't hold" to be true, the burden is on you to prove that fixed point combinators are required for his proof.

Maybe it's true, maybe it isn't. But I am not the one who is stating that by ruling out fixed point combinators certain arguments don't hold anymore.

(Formal explanation of your argument: """So you know when you go down town with the lads and you all realize you’re hank marvin’ so you say “lads let’s go Maccers” but your mate Smithy a.k.a. The Bantersaurus Rex has some mula left on his nandos gift card and he’s like “mate let’s a have a cheeky nandos on me” and you go “Smithy my son you’re an absolute ledge” so you go have an extra cheeky nandos with a side order of Top Quality Banter.""")

(Hewitt is the Bantersaurus Rex/Smithy in this case.)

re too cheeky

Of course not. You claim one needs a fixed point to express certain constructs.

No, I don't claim that. Goedel does.

Goedel's second result shows how to construct, within a system, a sentence that is either unprovable, or that produces a contradiction. In this construction, the sentence implies the consistency of the system itself by asserting the non-existence of any proposition for which proofs of the proposition and its negation both exist.

A stated condition of the construction is that the system must permit you to construct new sentences by arbitrary recursion. Goedel offers no alternative construction without this. Nobody else ever has either. As I said: if you think there is one, that's on you. It's your burden.

Meanwhile, sentences in Hewitt's system can not be defined by arbitrary recursion and Goedel's construction can't be applied. Hewitt's system can separately prove its own self-consistency, but Goedel's construction can't be used to derive a contradiction.

Maybe some original construction, very different from Goedel's, could derive a contradiction in Hewitt's system. You never know. Consistency is ultimately an empirical question for all systems. But we do know that Hewitt's can prove its own consistency, Goedel's construction can't be performed in Hewitt's system, there is no contradiction there, and in retrospect you can see what Hewitt means with his critiques of Goedel and other earlier mathematicians working in this area.

Arbitrary Side Condition

A stated condition of the construction is that the system must permit you to construct new sentences by arbitrary recursion.

And where does Goedel use a Y combinator in his construction exactly? And how is the construction of new sentences fundamentally different from superficially arbitrary recursive construction of numbers by addition or multiplication?

At least with the shift from Presburger arithmetic to an incomplete arithmetic a-la Goedel by adding multiplication there's a minimal hint where something starts to break.

Instead, you seem to have invented an arbitrary side condition which, if you rule it out, would prevent Goedel's argument. That's just your imagination.

You invented the side condition and therefore you need to prove it. Even if it is likely true. Likely isn't good enough.

proof sketch re: arbitrary side condition

And where does Goedel use a Y combinator in his construction exactly? And how is the construction of new sentences fundamentally different from superficially arbitrary recursive construction of numbers by addition or multiplication?

In the first theorem, the apparent recursion can be eliminated. In the second it can not.

To make this clearer, I will paraphrase Goedel's sketch of the first proof, using a particularly good translation (linked below), and then elaborate with respect to the second incompleteness result.

Let Rn be an ordered list of sentences with one free variable. (Goedel calls these "class signs".)

Let Rn(x) be the sentence formed by substituting x for the free variable in the nth class sign.

K is a class of natural numbers:

K := { n in Nat | ~Provable(Rn(n)) }

In that way, K is a class of numbers such that the nth class sign, with the free variable replaced by n, is not provable.

Now we define S, a class sign. For some q, S is Rq

S == Rq

What is S specifically? Which "q"? S is:

S(n) := Rq(n) := n ∈ K == n ∈ { k in Nat | ~Provable(Rk(k)) }

Goedel's famous sentence is:

S(q)

Either S(q) can not be proved, or the system is inconsistent.

S(q) <=> q ∈ K <=> ~Provable(Rq(q)) <=> ~Provable(S(q))

Goedel's second result requires us to make a definition like this in the target system:

X := S(q)

equivalently:

X := Rq(q)

equivalently:

X := q ∈ K where (X == Rq(q))

Suppose that X is provable in the system where it is defined.

Then within that system we trivially get:

~(q ∈ K)

but the proof of X gives us:

(q ∈ K)

so we have a contradiction.

In Hewitt's system, this definition:

X := q ∈ K where (X == Rq(q))

is ruled out.

http://www.ursuletz.com/cs150-fall2005/lectures/goedel.pdf
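
Here is a toy, string-based rendering of the diagonal step (illustrative only; provable is a stand-in oracle, and Python strings replace Goedel's arithmetization). If the diagonal class sign K occurs at index q in the enumeration, then evaluating Rq at n=q produces a claim about exactly the sentence S(q):

    def provable(sentence):        # placeholder proof predicate
        return False

    R = ["n == 0", "n > 1"]        # class signs: formulas in the free variable n
    K = "not provable(R[n] + ' with n=' + str(n))"
    R.append(K)
    q = R.index(K)

    S_q = R[q] + " with n=" + str(q)        # the sentence S(q) = Rq(q)
    # Rq evaluated at n=q speaks about the very string S_q:
    print(eval(R[q], {"provable": provable, "R": R}, {"n": q}))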

In Direct Logic, sentences aren't countable; strings countable

Thomas,

In Direct Logic, sentences are not countable because they can contain real numbers. However, strings are countable.

Also, sentences are different from propositions (which do not contain free variables.)

Consequently, Gödel's argument for the 1st incompleteness theorem fails. However, the Church/Turing proof of the 1st incompleteness theorem is valid.

countability of sentences

In Direct Logic, sentences are not countable because they can contain real numbers.

I think you refer to Goedel's use of the well-ordered set Rn, a set of formulas containing a single free variable.

The set Rn does not need to include all formulas with one free variable for Goedel's proof to work. It can include only, say, a specific and easily laid out countable set that can be inductively defined. (Which he more or less does in the details for PM.)

Also, sentences are different from propositions (which do not contain free variables).

In all of these comments I am being loose with terminology rather than strictly adhering to yours. I might try to switch later but right now I'm just trying to help people better intuit what's going on here.

Consequently, Gödel's argument for the 1st incompleteness theorem fails.

I think you are mistaken there in the sense that we can formalize his first argument well enough to apply it to a partial syntactic model of your system.

Gödel's argument for the 1st incompleteness theorem fails.

In Direct Logic, fixed points on propositions do not exist and consequently Gödel's proofs are not valid. Even in the closed theory ℕ, a “self-referential” sentence cannot be constructed by enumerating strings for sentences. Let SentenceFromStringWithIndex be a procedure that enumerates sentences that can be produced by parsing mathematical strings of the closed theory ℕ and IndexOfSentenceFromString be a procedure that returns the index of a sentence that can be obtained by parsing a mathematical string.

I_am_not_provable:Proposition ≡ ⌊SentenceFromStringWithIndex.[Fix[Diagonal]]⌋ 
    where  Diagonal.[i:ℕ]:ℕ ≡ IndexOfSentenceFromString.[⌈⊬ ⌊SentenceFromStringWithIndex.[i]⌋⌉]

Because of types in Classical Direct Logic, the fixed point operator Fix cannot be defined, as the following shows:

 
   ℕtoℕ ≡ [ℕ]↦ℕ
   Helper.[f:ℕtoℕ]:ℕtoℕ ≡ [x:([⍰]↦ℕtoℕ)]→ f.[x.[x]]
   Fix.[f:ℕtoℕ]:ℕtoℕ ≡ (Helper.[f]).[Helper.[f]]

The missing type ⍰ does not exist.
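
For contrast, in an untyped setting the Helper/Fix self-application goes through, because x.[x] needs no type; the missing ⍰ would have to satisfy ⍰ = [⍰]↦ℕtoℕ, an equation with no solution in a simple type system. A sketch in Python (eta-expanded to the Z combinator, since Python is strict):

    fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))     # 120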

Hah, Hewitt re: Gödel's argument for the 1st incompleteness ...

In Direct Logic, fixed points on propositions do not exist and consequently Gödel's proofs are not valid.

Goedel's metamath demonstration in the 1st incompleteness theorem doesn't deeply rely on treating propositions of the target system as first class objects. It can operate entirely on a model of the target system.

The judgement that the model is faithful to the modeled system is not a formal judgement.

Gödel's argument for 1st incompleteness thm requires fixed point

Gödel's argument for 1st incompleteness theorem requires a fixed point.

re: Gödel's argument for 1st incompleteness thm requires fixed p

Gödel's argument for 1st incompleteness thm requires fixed point

Yes, but it can effectively compute the needed fixed point using the integer encoded forms of things. The structure of the argument in the 1st, as I read it, doesn't require any formal proof that the encoded form faithfully encodes the subject system. We're supposed to believe that the encoding is faithful by direct apprehension.

It doesn't matter much. The substitute version using Church/Turing in DL is considerably more elegant, not least since it lacks the need for an encoding.

Because of types, sentences do *not* have fixed points

Because of types, sentences do not have fixed points.

Also, creating some of the sentences from strings does not give them fixed points.

One of the defects, imho, of

One of the defects, imho, of the way Gödel's Theorems are usually treated, is an obsessive focusing on particulars of the formal system. When explaining why the halting problem is undecidable, one doesn't get into details of how particular machines work; so if Gödel's results are comparably fundamental one oughtn't have to mess with nitty gritty technical details either. That was part of what I wanted to explore in my blog post.

As I pointed out there, at the time it was all thought of as a matter of tinkering with rules of deduction and axioms, so it makes sense one would concentrate on those things, but in retrospect it doesn't seem to be about rules of deduction or axioms at all; in fact it seems to be mostly independent of those things. By the time I'd finished going through it all, I was fairly satisfied that it is, indeed, possible to get at the basic results without having to resort to things like the Law of the Excluded Middle, or reductio ad absurdum. I wasn't even bothering to look for fixpoints, which I wouldn't have expected to find, but since it's been brought up here, I observe that there's nothing like that going on either.

Except that in any treatment of the Second Theorem it's hard to see how you can avoid getting into a little of the behavior of the system; not very much, and probably there are lots of different specific choices of what exactly to require, but something. At the very least, you need to be clear on what you mean when you say the system "proves its own consistency".

Unlike Gödel's results, Church/Turing incompleteness is robust

Unlike Gödel's results (which depend on the existence of fixed points for sentences that don't exist because of types), the Church/Turing proof of inferential incompleteness is robust:
* Church first proved incompleteness using the lambda calculus
* Turing then proved incompleteness using Turing Machines

re: one of the defects

John, I have read the Goedel sections of your blog post.

The first error it contains is here:

Gödel's Theorem (aka Gödel's First Theorem) says that any sufficiently powerful formal system is either incomplete or inconsistent — in essence, either it can't prove everything that's true, or it can prove things that aren't true.

The first incompleteness result shows that any sufficiently powerful system either:

1. allows assertions to be made which it can neither prove or disprove

or

2. the system allows assertions to be made which it can both prove and disprove.

This is different. We are being more careful here to talk about assertions and formal proof within a system, rather than talking vaguely about truth.

Another very critical mistake in your blog post is here:

Gödel's results are commonly phrased in terms of what a formal system can prove about itself, and treated in terms of the rules of deduction in the formal system.

Goedel's results pertain to formal systems and their proofs, that much is true.

Goedel's second result is commonly stated in terms of what a formal system can prove about itself because that is what the second result is concerned with.

Goedel's first result is never stated in terms of what a formal system can prove about itself because that is not what it is concerned with.

The types in Hewitt's system are used to avoid falling into a trap that Goedel's second result shows exists.

You say:

I phrased myself in terms of what we can know about the system — regardless of how we come to know — rather than what the system can prove about itself.

By implication, you have no interest in or anything to say about the second incompleteness result.

Inventing nonsensical

Inventing nonsensical misreadings of what I wrote as an excuse for claiming it's wrong is the sort of pettiness I'd expect of Hewitt. I honestly thought better of you.

re: nonsensical

I honestly thought better of you.

Suddenly, likewise.

Do you understand why we might expect Hewitt's DL self-consistency proof to invoke suspicions DL is inconsistent, based on Goedel's 2nd theorem?

Do you understand how Goedel's 2nd theorem does not automatically apply?

Do you understand how Hewitt proposes to prevent it from applying?

I am not asking if you accept that Hewitt has got it right or not. I am asking if you understand what he claims to argue.

From your blog account of Goedel it is genuinely unclear.

I understand this stuff

I understand this stuff reasonably well, yes (though your third question is quite different since it introduces Hewitt into the equation).

I've finally, btw, got a somewhat-plausible hypothesis for what went wrong with your reading of my blog post. More than half my fault. I gave you a link into the middle of the thing. Should have pointed you to the start of the post; I forgot that the earlier parts of the post are probably needed context if the reader is to understand what I'm doing (it's possible some people still won't connect with what I'm doing, but lacking the context would be a serious disadvantage). Though I admit, that only explains your first two-out-of-three remarks; it was always the third one that I found really objectionable, since it quotes a statement about form and then turns around and interprets it as if it were an absurd statement about substance.

Do you understand why I no longer take Hewitt's unsubstantiated claims seriously? Once upon a time we were both (so it seemed) operating on the assumption that Hewitt had some possibly-interesting ideas that he wasn't communicating well, and it seems you've continued to operate on that assumption; but he slowly expended all his capital with me, and I finally concluded that, by Occam's Razor, by far the simplest and therefore most likely explanation for his behavior over time would be that he lacks basic understanding of the subject.

That said, I still take you seriously, and if you say there's something interesting in Hewitt's recent writings, I'm willing to take a look (in my copious free time). Which paper did you have in mind?

re: Which paper do you have in mind?

Which paper did you have in mind?

Inconsistency Robustness in Foundations: Mathematics self proves its own Consistency and Other Matters

I like this quote:

The development of Direct Logic has strengthened the position of working mathematicians as follows:
* Allowing freedom from the philosophical dogma of the First-Order Thesis
* Providing a usable type theory for all of Mathematics
* Allowing theories to freely reason about theories
* Providing Inconsistency Robust Direct Logic for safely reasoning about theories of practice that are (of necessity) pervasively inconsistent.

I'll take a look when I get

I'll take a look when I get a chance, and offer my thoughts on it here; then, at least, we can compare/contrast our reactions to it, which may be interesting to us (regardless of what Hewitt thinks of it). I suspect the last time I looked at that one wasn't so very long ago, but, we'll see. The passage you quote doesn't exactly inspire me with confidence, since high-level claims are something he's never been short of.

also John

Apparently you feel insulted and I don't wish that you do.

Skimming a few pages from the most recent version of one of Hewitt's endlessly revised links, some of the introductory language seemed clearer to me and a bunch of stuff just clicked. In concept, Hewitt's system is butt simple, but with plenty of tricky details to get right.

One way you could describe it: he has specified a kind of semantic tower of formal theories with a single, simplistic theory at the root.

The tower comfortably admits inconsistent theories without any risk of contagion of inconsistency to sibling or cousin theories or any ancestor theory.

None of the theories, including the root theory denoted by the unsubscripted turnstile, admit the kind of self-referential trap Goedel described in his 2nd incompleteness result (part 4 of "On formally undecidable...").

Via its sub-theories, which are all first-class objects about which it can reason, the root theory appears to admit all of classical mathematics. For example, you can keep generating new arithmetic axioms by diagonalization arguments for the rest of your life and DL can keep incorporating them as axioms in new sub-theories.
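
To make the shape of that concrete, here is a toy Python sketch (my own illustration, not Hewitt's formalism) of a tree of theories in which a local contradiction stays local:

    # Toy model (mine, not Hewitt's): a tree of theories where each theory
    # proves only from its own axioms and its ancestors', so an inconsistent
    # sub-theory cannot infect siblings, cousins, or ancestors.

    class Theory:
        def __init__(self, name, axioms=(), parent=None):
            self.name = name
            self.axioms = set(axioms)
            self.parent = parent

        def proves(self, p):
            # Toy "provability": p is an axiom here or in an ancestor.
            t = self
            while t is not None:
                if p in t.axioms:
                    return True
                t = t.parent
            return False

        def inconsistent(self):
            # Locally inconsistent iff it proves both A and ("not", A)
            # for some axiom A (a crude but sufficient toy check).
            return any(self.proves(("not", a)) for a in self.axioms)

    root = Theory("DL-root")
    t1 = Theory("T1", axioms={"P", ("not", "P")}, parent=root)  # inconsistent
    t2 = Theory("T2", axioms={"Q"}, parent=root)                # unaffected

    assert t1.inconsistent()
    assert not t2.inconsistent() and not root.inconsistent()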

At least from the earlier

At least from the earlier part of the post that I absent-mindedly led you to bypass, it should be evident that what I'm interested in are the broad principles involved. On the First Theorem, I maintain that the broad principle involved does in fact concern completeness in the sense of external truth. Note, though, that the only external truth we actually need for the proof is the truth acquired by the reasoning in the proof, which is to say, the meta-mathematics of the proof. The advantage of acknowledging that we care about the meta-mathematics only on the grounds that it's true is that it highlights our disinterest in the details of the system whose properties are being studied. That disinterest is part of why tactics of the ilk Hewitt has been advocating (at least, those he's been mentioning here; yes, I'm going to take a fresh look at the paper you recommended) don't advance the cause of avoiding the broad principle.

Precision and rigor are important in foundations of CS

Precision and rigor are important in the foundations of Computer Science, both for broad principles and in detailed technical arguments, in order that computer systems can carry out all reasoning processes.

That Gödel's incompleteness arguments are invalid is important because otherwise we would have inconsistencies in the mathematical foundations of Computer Science.

Science is always a work in progress

Science is always a work in progress.

The powerful (try to) insist that their statements are literal depictions of a single reality. ‘It really is that way’, they tell us. ‘There is no alternative.’ But those on the receiving end of such homilies learn to read them allegorically, these are techniques used by subordinates to read through the words of the powerful to the concealed realities that have produced them.

Law [2004]

Whatever

God man. They already asked me once to write a prover for an inconsistent system during my PhD studies. I flat-out refused.

I don't do mysticism. Clean up your mess or go packing.

Comments and suggestions appreciated for articles on HAL

Comments and suggestions gratefully appreciated for the articles on HAL.

Thanks!
Carl

It's just a joke

That Gödel's incompleteness arguments are invalid is important because otherwise we would have inconsistencies in the mathematical foundations of Computer Science.

So what. This is simply likely true. I am not bothered by it.

Gödel implies that if you try to reason over numbers, it turns out you can't. So, either our reasoning or our numbers are flawed.

Personally, I am betting that the numbers are flawed. Life might be just that, continuous. Panta rhei, everything flows, all else is a cosmic joke.

Still. Classical higher-order logic reasoning works 99.99 percent of the time, and then some. It's effective; we haven't found primes which are divisible by other numbers yet. There's only that one Gödel sentence.

I still think it's just a nice joke though.

Inconsistencies in mathematical foundations of CS are not a joke

The Peano/Dedekind categorical axiomatization of N is not flawed.

Furthermore, inconsistencies in mathematical foundations of Computer Science are not a joke because they can lead to security holes :-(

Sounds unlikely

"they can lead to security holes"

https://xkcd.com/704/

Unlikely you can deduce my mother's phone number from 1=0 ;-)

It's unlikely you can deduce my mother's phone number from 1=0 ;-)

Yeah well. I am not a Platonist

Godel's result is always lingering there over you in the meta-logic: the logic you use to reason over other systems.

Personally, I think Godel tells us that we tried to discretize a continuous system and that works great, except in the limit. Numbers don't exist in nature; you saw two blobs in a vague picture and thought you could assign two-ness to it. Well, you can, and it works 99.99% of the time, but Godel tells us that mode of thinking is fundamentally flawed.

So you end up reasoning in an inconsistent system where your results are invalid in the limit. Godel might have found an inconsistency in an inconsistent system and the whole Godel sentence might not even exist. Valid reasoning but inconsistent system.

That doesn't need to be bad; it might not have implications for security holes. There simply are modes by which you can brute-force problems, do some finitistic reasoning, or keep things so simple that they look true to you.

That's my 2cts on it.

Until they prove otherwise

I would say it does not have implications for security holes.

It sounds like bullshit.

What sounds like bullshit to you?

My reading of Godel is that Godel fundamentally tells us that the universe is a continuum, cannot be understood, and that our discretization of that continuum is an effective, but flawed nonetheless, endeavor.

Contrary to Hewitt, I am not here to claim that's anything correct, mind you. It's just a belief I am stuck with concerning the relation between human thought and reality.

So, if you find that bullshit, that's okay with me.

But Hewitt is right in that it has implications for security holes. If all logic is inconsistent, then it follows that you cannot prove security protocols correct in the limit.

Personally, I would say that as long as you steer away from constructing Godel sentences, you're in that 99.99-and-then-some percent effective region, and it simply doesn't matter much.

But it's just an odd belief.

When you're reasoning about computers, you're

reasoning about discontinuous finite things with defined and limited operations.

The fact (?) that the universe is continuous doesn't enter in at all.

As far as I can tell, discrete math is not threatened by this bullshit at all. Valid proofs about finite systems are not threatened at all.

In fact if you want to make proofs about computer systems, then I would say, start with a system that uses no uncountable infinities because you don't need them in the slightest.

Define discrete math

Depends on what you understand discrete math to be. Godel tells us that systems which can reason about arithmetic and are sufficiently strong are not both consistent and complete, and cannot prove their own consistency.
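
For reference, the standard textbook statement (ordinary notation, not DL's) runs roughly as follows:

    % Goedel's incompleteness theorems, standard form
    \textbf{First:} if $T$ is a consistent, recursively axiomatizable theory
    extending Peano arithmetic, there is a sentence $G_T$ with
    $T \nvdash G_T$ and $T \nvdash \neg G_T$
    (Rosser's trick removes the original $\omega$-consistency hypothesis).
    \textbf{Second:} under the same hypotheses, $T \nvdash \mathrm{Con}(T)$.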

So now we know there are (likely many) things you can state about numbers which cannot be proven.

They have proven Godel correct in Nuprl(?), Coq, and lately Isabelle/HOL.

These are mechanical systems in which mathematical theorems and also protocols are verified. It would be nice if it had turned out that these systems are trustworthy beyond doubt. The proof of Godel seems to tell us that they are not beyond doubt; so you need an answer for that somewhere.

Uncountable infinities are somewhat of a byproduct of the current discussion with Hewitt. I would say that he didn't make it better but made his position worse; not only is there still doubt about the discrete part of his logic, there's now also doubt about how he treats infinities and infinitesimals.

We're in a deep mess already without starting on the topic of infinities.

Does Godel apply to a totally finite system?

Say that "sentence" is something that can only talk about N bits and no more and that "sentence" can't be more than M symbols long.

Now we're talking about something like a computer system and programs.

Does Godel apply here? I doubt it.
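
A quick sketch of why I'd doubt it (my own illustration, under the assumptions above): if sentences can only quantify over N-bit states, truth is decidable by sheer enumeration, leaving no room for a Godel sentence.

    # Toy illustration: with only N bits in play, every quantifier ranges
    # over the finite set of 2**N states, so any bounded "sentence" can be
    # decided by brute-force enumeration.
    from itertools import product

    N = 3  # states are all 2**N bit-vectors

    def forall(pred):
        return all(pred(s) for s in product((0, 1), repeat=N))

    def exists(pred):
        return any(pred(s) for s in product((0, 1), repeat=N))

    print(forall(lambda s: sum(s) <= N))             # True, by enumeration
    print(exists(lambda s: all(b == 1 for b in s)))  # True: the all-ones state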

This is simply likely true.

This is simply likely true. I am not bothered by it.

Although I agree with your indifference to Gödel's results being true, I point out that it's not true those results would require inconsistencies. They require a formal system to be either inconsistent or incomplete. But yes, that's not a big deal; as I remarked in my blog post on this stuff, the impossibility of a formal system being both complete and consistent is just a characteristic of formal systems, rather like it's a property of Euclidean geometry that a triangle can't have more than one obtuse interior angle.

What's the conclusion or conclusions so far?

I can't follow this discussion, so I'm curious: what's your opinion of how it hashed out?

Did Hewitt impress anyone with his proposed foundation?

Is Godel important?

Is it true that fixed points can be banished from math without diminishing it, sort of like replacing infinitesimals with limits?

Is banishing fixed points throwing the baby out with the bathwater, a bit like banning division instead of banning division by zero?

I was a bit taken aback to realize that you all were saying that mathematical statements aren't countable, which means nothing more than that you're allowed to make statements about real numbers :/ That sounds utterly useless.

Is there any THERE there?

Is there a logic which is actually more usable and correct in practical computations or has CS gained nothing useful yet from all of this?

[edit] I was aiming this question at John Shutt. Also, I realized after I posted it that there is another way to get to uncountable sentences than embedding elements of the continuum in otherwise finite sentences: taking power sets of countable infinities of sentences. It occurs to me that no human is going to invent a proof whose Kolmogorov complexity requires an infinite basis. I.e., if you can't describe a proof in a finite amount of time, then no human will reason about it. Even if you can show that you are meta-reasoning about such objects, you should also note that it's useless - no such object will ever have meaning to a human being.

Beyond that, they're talking about Actors being uncountable. Uhm so they're not talking about computer science at all anymore.

Have they taken a model of computation and abstracted it into some other sort of mathematics, or is this whole conversation getting confused?

An actor that isn't a finite series of steps with finite data is at best some sort of abstraction that isn't in computer science. If you are including infinitely long computations like comparing reals or members of infinite sets, at best you're into hypercomputing, which I imagine is in general more disreputable than "fixed points" but which is ok as mathematics if not computation. But maybe those things could be a replacement for "fixed points" as a spot in mathematics to be suspicious of. We can take or leave the axiom of choice... If we're only allowed countable infinities or no infinities at all, then all sorts of paradoxes disappear.

Meh I shouldn't talk about what I don't understand.

But what is this? Uncountable sentences? Those aren't mathematics as humans do it. Uncountable actors? Those aren't, in general, programs that we can reason about, let alone run. What is going on here?

Conclusions so far, IMO

Gödel's work is important because he first published on inferential incompleteness, although his proofs are invalid. Church/Turing first published a valid proof of inferential incompleteness for the Peano/Dedekind categorical axiomatization of N.

Fixed points can be excluded from the logical foundations of Mathematics without loss of power, which is fortunate because the existence of fixed points leads to inconsistencies in Mathematics.

Strings are countable although for generality, sentences and propositions are not countable.

Types provide foundations for Mathematics that are much more powerful than first-order logic.

For details, see Inconsistency Robustness in Foundations

"fixed points leads to inconsistencies in Mathematics."

Is it possible that only a certain tiny subset of fixed points leads to inconsistencies, and that the best answer is to analyze what it is about those fixed points that makes them problematic, rather than throwing out fixed points?

That sounds to me more like how mathematics progresses.
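
For what it's worth, the classic problematic instance is easy to exhibit. A Python sketch (mine, not from any of the papers discussed): an unrestricted fixed-point operator manufactures a "proposition" equivalent to its own negation, the very thing a type discipline is meant to exclude.

    import sys
    sys.setrecursionlimit(100)  # fail fast instead of spinning

    # Z combinator: a call-by-value fixed-point operator. Untyped, it
    # applies to ANY function, including negation.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # The "liar": a thunk L with L() == not L(). As logic, that's an
    # inconsistency; operationally, it just diverges.
    liar = Z(lambda self: lambda _: not self(None))

    try:
        liar(None)
    except RecursionError:
        print("diverges: no Boolean value satisfies P == not P")

In a simply-typed setting the self-application x(x) is untypable, which is one way a type discipline blocks exactly this construction while leaving ordinary recursion alone.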

more "conclusions so far"

Did Hewitt impress anyone with his proposed foundation?

I am quite impressed. I came to this state of being impressed (and even able to explain some of it to others) from an earlier state of thinking it was nutty rambling.

I don't have anything to add here to what Hewitt said about Goedel and fixpoints.

The utility of uncountably many abstract mathematical sentences arises in the context of fully automated machine reasoning in which we can generalize the terms of sentences in certain ways.

But also from the other side: the generalization of pure math to include this concept also tames awkward foundational issues that college curriculums have heretofore sidestepped because they had no worked answers. I mean this quite literally. I had some of the best math instructors and programs anywhere as an undergrad, and even those guys were quite *explicit* that they were sweeping certain foundational questions under the rug because they didn't have a satisfying way to answer them. They "knew" by feel what kinds of reasoning were safe and where dragons could lie -- but they couldn't lay down a foundation to make that sense rigorous.

They could have done the foundations for that curriculum with Hewitt's DL and a little bit about Actors (in the mathematical abstract).

Is there any THERE there?

The comments on this LtU topic are only scratching a part of the surface. Wait till it comes around again how this relates to pervasively inconsistent networked information systems.

Deeper issues to be addressed in inconsistency robust logic.

Thanks Thomas!

There are indeed deeper issues to be addressed in inconsistency robust logic.

How are you getting to "uncountable"

Unless you're building sentences from elements of the continuum, I can't imagine how you claim that sentences are uncountable.

And I can't think of any utility to sentences made from actual reals except that they're valid... Can't they be replaced by sentences with symbols in the place of reals, and some sort of statement to the effect of "there are reals for which this is true"?

[Edit] I suppose you could take the power set of countably many sentences to make uncountable sentences. Once again I can't see any utility. And you could banish that by leaving out an unrestricted axiom of choice.

[2nd edit] Ok I can imagine a proof that used a power set of sentences. ... So I guess I have to grant the possibility of a mathematics that has uncountable sentences, but while such sentences could be reasoned about, not even a single sentence from the greater set could be examined!

[3rd edit] For CS, it would seem to me that programs aren't infinite, so perhaps sentences that are members of the continuum aren't so relevant.

"Wait till it comes around again how this relates to"

pervasively inconsistent networked information systems.

I don't believe this will relate to anything relevant to computers in my lifetime.

I know it's insulting

"The utility of uncountably many abstract mathematical sentences arises in the context of fully automated machine reasoning in which we can generalize the terms of sentences in certain ways."

But I am tempted to ask: you DO realize that merely generating infinitely many sentences can't get you to uncountable infinities? Unless you take a power set of an already infinite set (as you do to generate the reals), you don't have an uncountable set.

So where does automated reasoning need power sets of infinite sets of sentences?

Uncountability comes from the type Boolean^ℕ

Uncountability comes from the type Boolean^ℕ.
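
Read as Boolean^ℕ, i.e. predicates on ℕ, the diagonal argument makes this concrete; a small Python sketch of my own:

    # Cantor's diagonal argument for Boolean^N: given ANY countable family
    # of predicates f_0, f_1, ... : N -> bool, build one that differs from
    # each f_i at i. Hence the predicates cannot all be enumerated.

    def diagonal(fs):
        """fs maps an index i to the i-th predicate; the result is a
        predicate disagreeing with fs(i) at argument i, for every i."""
        return lambda n: not fs(n)(n)

    fs = lambda i: (lambda n: n % (i + 1) == 0)  # a toy countable family
    d = diagonal(fs)
    assert all(d(i) != fs(i)(i) for i in range(100))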

No there

Note: I told Thomas Lord I would take a look at the updated document of Hewitt's when I had a chance. I still mean to. I'm not looking forward to it; I suspect it's not really changed from the last time I looked at it. But I take Thomas Lord seriously enough to do that, even though Hewitt has slowly lost my respect over time.

That said, from the accumulation of what I've seen said here, and various occasions in the past when I've examined various of Hewitt's papers, there's no there there.

Infinities cannot be completed; if they could be they wouldn't be infinite. Perhaps we can reason indirectly about things that are not finite; for example, the decimal representation of an irrational number is itself infinite but we can describe it by giving an algorithm for enumerating its digits, and then use that to reason about it. How far can one go in reasoning about such things? Well, it's certainly possible to go too far. When you treat things that don't exist as if they do exist you can get into serious trouble. For example, a man in a village who shaves all men in the village who do not shave themselves, or the set of all sets that do not contain themselves. Every unicorn has one horn; but also, every unicorn has two horns (also five horns), and if you suppose that unicorns exist, you can therefore conclude that one is equal to two, and that's not going to end well. It was recognized way back that infinities are problematic; if they exist at all, one has to be careful how one reasons about them. Reasoning is itself finite, and so if we're to reason about infinities we have to do it starting from a finite foundation. Building infinities into a system of reasoning makes it unsuitable as a foundation, as it becomes something that needs a foundation under it in order to legitimize it.
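
For instance (a minimal sketch, my own; the point is only that a finite program pins down infinitely many digits):

    # The decimal expansion of sqrt(2) is infinite, but a finite program
    # determines every digit, computed exactly in integer arithmetic.
    from math import isqrt

    def sqrt2_digits(k):
        """First k decimal digits of sqrt(2), as a string."""
        return str(isqrt(2 * 10 ** (2 * (k - 1))))

    print(sqrt2_digits(10))  # 1414213562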

They seem deeply confused at what "uncountable" means

ℕ is countable. ℤ is countable.

If programs are finite sequences of instructions then they can't generate an uncountable set.

Just because you can generate some numbers that aren't rational with a finite program doesn't mean that you can generate ALL of them. The number of finite programs with a finite amount of data is countable; therefore the number of elements you can represent with a one-to-one mapping from such programs is countable.

If you had an uncountable number of actors then the address of any actor would take an infinite amount of memory.

The number of actors whose address is finite is a countable set! Therefore you can't have an uncountable set of actors with finite addresses.
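
The countability half of this is easy to make concrete (my own sketch): finite strings over a finite alphabet, and hence finite programs and finite addresses, line up against ℕ one by one.

    # Enumerate every finite string over a finite alphabet, shortest first.
    # Each string gets a natural-number index, so the set is countable;
    # the same goes for finite programs and finite actor addresses.
    from itertools import count, islice, product

    ALPHABET = "01"

    def all_finite_strings():
        for n in count(0):  # length 0, 1, 2, ...
            for chars in product(ALPHABET, repeat=n):
                yield "".join(chars)

    print(list(islice(all_finite_strings(), 7)))
    # ['', '0', '1', '00', '01', '10', '11']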

These people don't seem to understand as much about Cantor as the average internet crackpot.

Also, mathematics may treat SOME VERY LIMITED kinds of limits as completed, but if you treat general computing (the output of any system of actors) as completed when infinite, then what the hell does that imply? Something monstrous! Something interesting but, as far as I know, never analyzed.

Actor addresses are uncountable

Actor addresses are uncountable, e.g., for a countable list of addresses of Actor reals, there is an address for an Actor that is not in the list.

So even the address of a single actor takes infinite memory

:/
And you can't have an uncountable number of actors that are distinct NOT COUNTING THEIR ADDRESS unless they have infinite memory and/or instructions too.

And there is no point in infinite instructions, or in an infinite amount of memory, if programs take finite time.

This is a very odd system indeed.

ℂ, ℍ, and ℝ are uncountable. A computing machine is finite.

ℂ, ℍ, and ℝ are uncountable.

At a given point in time, a physical machine will have a finite number of Actors.

Peano/Dedekind axiomatization of N is consistent but incomplete

The Peano/Dedekind categorical axiomatization of N is consistent but inferentially incomplete because there are true but unprovable propositions for N.

Not a recursive definition

[...] this definition:

X := q ∈ K where (X == R_q(q))

Even if that were a recursive definition, it was given after two non-recursive definitions of X as a way to clarify their ramifications. You only have to pick whichever definition you're willing to believe, and the other two can be informal.

But look closer: It's not a recursive definition at all. It's just the definition "X := q ∈ K", and there happens to be a reminder "X == R_q(q)" meaninglessly tacked onto the side.