When are Actors appropriate?

So there have been conversations about what is appropriate talk on LtU. Personally I am not offended by brusque commentary. On the other hand, while I respect Hewitt, I do think that the 'actors everywhere' thing has not gotten any better, only worse. I posted about process calculi and the first reply railroaded it towards actors, and sure enough, enough people took the bait. :-(

I believe that conversations should be allowed to go off onto tangents, e.g. talking about quantum. We all have subjective opinions, like Justice Stewart, about when it has gone from respectable tangent over into frankly trolling behaviour, even if the troller doesn't do it to be a troll.

I'm asking everybody who is getting involved in each and every actor thread to please stop when they are in topics that did not in fact start out talking about actors. Personally I think it is OK if you go and start a new parallel topic where you draw the link and leave the conversation there. I do not need anybody to stop talking; I just would very much appreciate it if we could all try to keep the torrent of actor threads contained in their own places, so that other threads can live or die on their own.

(At the very least, realize that by forcing a thread to go onto 2 pages you have broken my ability to get to the new posts. Hardy har-har.)

Thanks for your support.

I see it the same way. At

I see it the same way. At some level it's understandable if people have particular standpoints they bring with them everywhere, because they're not going to learn a new standpoint overnight, but Hewitt's papers have been especially derailing.

the way you see it is wrong

In a very short time I've greatly expanded my knowledge of process calculi and their semantic theories thanks to actually taking Hewitt's comments seriously.

Re: the way you see it is wrong

In a very short time I've greatly expanded my knowledge of process calculi and their semantic theories thanks to actually taking Hewitt's comments seriously.

You might or might not remember all this, but when Hewitt started posting again recently, I read the papers and started defending them. The papers now have some pretty interesting arguments I hadn't seen a couple of years ago, and people were dismissing them in ways that I didn't find fair or convincing.

Now three things have happened:

1) You've been taking up that defense mantle with such vigor that I find barely anything to add on that side.

2) Thanks to that defense I was doing, I've expanded my knowledge of direct logic (the Ketonen and Weyhrauch kind), Gödel's biography, finitary foundationalism, ordinal analysis, and weak arithmetics. If I want an end-all, be-all foundation for mathematics, at this point I consider F arithmetic to be a very promising seed. I'm not sure I have a purpose in mind for Classical Direct Logic anymore, since it does not prove the consistency of its own axiomatization (only of its Proposition objects).

3) Some serious challenges to Hewitt's claims have come up that I haven't seen defenses against. I'm thinking in particular of our constructions of a diagonalization and a fixpoint combinator.

Now that I've had less reason to be an eager consumer of Hewitt's papers, I see you and Hewitt continue to post bursts of comments to half-related topics, which then spark the same subthreads we've seen everywhere else. It's repetitive (due to repeated terminology mistakes and our stubbornness from fatigue) and hard to follow (due to extremely long and interrelated threads).

I look forward to when these threads fall out of fashion again, and I sincerely hope we aren't driving too many people away in the process. Countably Infinite's departure last time still sticks with me.

Ross Angle, let's zoom in on the interesting bit.

3) Some serious challenges to Hewitt's claims have come up that I haven't seen defenses against. I'm thinking in particular of our constructions of a diagonalization and a fixpoint combinator.

Show me what you are talking about, please. I must have missed that. It sounds wrong from the vague description you give there, but what is it specifically?

The challenges I mentioned

Thanks for asking. To keep the conversation in context, I've posted replies to those threads:

A diagonalization (flawed, but which makes me suspect there's a validity or design flaw in the non-existence theorem Hewitt was putting forward).

A fixpoint combinator that still seems like it should work.

Maybe you'll convince me yet. :)

re challenges you mentioned

I replied over there. You have to look at your version of Y but reconcile it with the rules of construction for Sentences on page 27 of the Direct Logic paper. Unless I am missing something, you cannot use your definition to create a Goedel-style Sentence.

also ross:

Keep in mind that Direct Logic has no problem proving (without need for a meta-logic!) its own inferential incompleteness.

The problem with Goedel's second-theorem hand-wave is that it would seem to imply that DL's proof of self-consistency can be leveraged to produce a contradiction in DL.

Alternative sources

As said, I've tried reading some of Hewitt's papers (in 2011), and gave up. What I didn't try was finding other sources on the same topics. Would somebody (not Hewitt) recommend good alternative (not from Hewitt) sources on direct logic?

More recent information on ActorScript and Direct Logic

More recent information on ActorScript and Direct Logic can be found here.

Comments and suggestions for improvement are greatly appreciated.

So…

… that would be a "no", then?

Additional information in "Inconsistency Robustness"

There is additional information in "Inconsistency Robustness" by College Publications.

For example, I highly recommend the articles on inconsistency robust logic by John Woods and Eric Kao.

The book is in press and will be available soon, e.g., on Amazon, etc.

unavailable

The web site for that doesn't make it look like it is available at all. I emailed them to ask. Meanwhile I did turn up a document on IR by Woods.

Woods's article online is a draft preprint

Woods's article online is a draft preprint.

It differs significantly from his chapter in the book.

Bad

Not a good intro, bordering on the embarrassing. Yet, somehow, trying to make sense of it helped me understand some interesting points.

The consistency proofs on page 2 seem especially ludicrous — the "contradiction" step only has a contradiction if you assume consistency in the first place. (|- T) and (|- not T) don't contradict each other. That's a standard problem with Hewitt's proof, so this paper won't help. I haven't found it discussed in the LtU thread on his proof.

In fact, I've realized what might be the confusing thing: Direct Logic has too much reflective power! You can get the contradiction by adding the step that `(|- T) => T` (called either integrity [1] or reiteration [2]), but that's broken. You can only prove this rule sound if you're assuming consistency in the first place, so this proof boils down to a convoluted variant of "since Direct Logic assumes consistency, it proves consistency". Not so interesting. This also doesn't provide a counterexample to Gödel's theorems, not without an external soundness proof for the logic.
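
Spelled out, the step I'm objecting to looks like this (my reconstruction of the page-2 argument, in this post's notation):

    Assume not-Con, i.e. for some T: (|- T) and (|- not T).
    Apply reiteration, (|- P) => P, to both: T and (not T). Contradiction.
    Discharge the assumption: |- Con.

The contradiction only appears after the reiteration step, and reiteration is exactly the soundness principle you can't justify without already assuming consistency.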

Also, reflection principles typically threaten consistency via Gödel's theorems. But at least now we know those theorems don't apply, because Direct Logic lacks a syntax in the standard sense.

What's funny is that Hewitt seems to include this "integrity" rule in his appendix on "Classical Natural Deduction". OK, that's maybe not wrong, since he uses Jaśkowski's instead of Gentzen's; but please, title it "Jaśkowski's Natural Deduction", since Gentzen's is the usual standard: good writing shouldn't require the reader to jump around so much to make sense of it.

[1] https://docs.google.com/file/d/0B79uetkQ_hCKbkFpbFJQVFhvdU0/edit
[2] http://arxiv.org/pdf/0812.4852v103.pdf

Use HAL. arXiv articles are obsolete :-(

Use HAL. The arXiv articles are obsolete and there is no way to update the articles or even alert readers that they are out of date :-(

Consistency proof of Math: no formal assumption of consistency

By inspection, the consistency proof (using proof by contradiction) of Mathematics does not formally assume consistency in the proof.
Of course, Jaśkowski natural deduction is the one that mathematicians use in practice and it is commonly taught in Computer Science education.
Gentzen "natural deduction" is a philosophical curiosity that is not fundamental to Computer Science.

In Computer Science, soundness of Mathematics is a very important principle that must be available for use by computer systems for a multitude of purposes.
In practice, Computer systems need all the power of Direct Logic.

The nonexistent "self-referential" sentence ("I am not provable.", used by Gödel in his results) leads to contradictions in Mathematics.
Fortunately, Gödel's attempted construction of a "self-referential" sentence fails for Mathematics in Direct Logic because its notation is rigorously defined using types in the articles on HAL.

Another post about Godel, with apologies to LtU

You can only prove this rule sound if you're assuming consistency in the first place, so this proof boils down to a convoluted variant of "since Direct Logic assumes consistency, it proves consistency"

I don't think this is the full story. You're not supposed to be able to get around Godel by just adding an axiom that your logic is consistent. If you carry out such a plan, adding such an axiom to an ordinary encoding, you'll end up with an inconsistent system. (Note: Of course, you can start with system L and add an axiom Con(L) that asserts the consistency of L, ending up with a new system L' that is only inconsistent if L was. I'm talking about carefully constructing L' = L+Con(L').)
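
To spell out the contrast (standard facts, as I understand them; writing L1 and L2 for the two constructions):

    L1 = L + Con(L)   : consistent if L is; Godel 2 just moves up a level,
                        so L1 still cannot prove Con(L1).
    L2 = L + Con(L2)  : here the consistency axiom refers to the system
                        containing it (a fixed point on the axiom set); for
                        an ordinary r.e. encoding, Godel 2 makes L2
                        inconsistent.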

I think the reason that there's no contradiction in DirectLogic is that DL doesn't fully axiomatize what provability means inside DL. So there is no way to get a handle on it as asserting the existence of something recursively enumerable from inside the logic. It also means an onlooker is allowed to interpret turnstile as meaning something else. Here's a relevant passage from the Stanford Encyclopedia of Philosophy:

Giving a rigorous proof of the second theorem in a more general form that covers all such sentences, however, has turned out to be very complicated. The basic reason for this is that, unlike in the first theorem, not just any, merely extensionally adequate provability predicate works for the formalization of the consistency claim. The manner of presentation makes all the difference. For example, Rosser's provability predicate mentioned above would not do; one can prove the “consistency” of F in F, if consistency is expressed in terms of Rosser's provability predicate. One must thus add some further conditions for the provability predicate in order for the proof of the second incompleteness theorem to go through. Following Feferman (1960), it is customary to say that whereas the first theorem and its relatives are extensional results, the second theorem is intensional: it must be possible to think that Cons(F) in some sense expresses the consistency of F—that it really means that F is consistent.

It would be nice if there was a simple variant of Godel's theorem that formalized these ideas.

I agree that this doesn't seem particularly interesting. What's the point of postulating a nebulously defined "mathematics" and proving that its theorems are not recursively enumerable? That doesn't seem to model "the real situation" in any useful way. The broad goals of DirectLogic seem admirable, but this aspect seems rather useless.

Here's a test for the usefulness of a logic that can purportedly prove its own consistency: Can you use this feature to prove correct an implementation of the logic on general purpose hardware? That would be useful, as it would allow you to "cross compile" to another architecture in an environment where you're only allowed to deliver proven assembly code. I claim that you cannot do that with DirectLogic. The problem is that while DirectLogic has symbols in its grammar for provability, it doesn't internalize the meaning of those symbols as provability. It therefore won't be able to fully reason about how a data structure representing terms of the logic would map onto its internal concept of provability. And that's why this approach shouldn't count as bypassing Godel. It's just word games and doesn't actually bypass the practical consequences of Godel's second theorem.

[This post was aimed at Tom as much as you]

Direct Logic formalizes what is known about Mathematics (⊢)

Direct Logic formalizes what is known about Mathematics (⊢) on a rigorous basis so that it can be incorporated in the foundations of Computer Science.

The problems with Gödel's work stem from his assumption of the existence of a non-existent "self-referential" proposition. Consequently, his alleged proof of the inference incompleteness theorem (First Incompleteness Theorem) is invalid. Church/Turing provided a valid proof using computational undecidability.

Variants of the 2nd theorem

Matt, you might be interested in some recent work by Harvey Friedman on the 2nd theorem, e.g., here. He's interested in finding a good way to formalize the notion of "a sentence that adequately formalizes the consistency of a system," and proposes an interesting necessary condition on such sentences for a handful of systems. For PA, the condition on a candidate consistency sentence A is that PRA+A proves all the theorems PA proves, where PRA is PA with the induction schema restricted to quantifier-free formulas. All of the standard constructions satisfy this criterion, and there's a direct proof that any such A cannot be provable in PA. I don't have any idea if these notions can be extended to a broader context than first-order logic, but I thought they were interesting nonetheless. I especially liked the resemblance to Hilbert's program: the theorems of a stronger system are "safe" (provable in a weaker system) under the knowledge of consistency of that system.
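
In symbols, the criterion as I read it: a candidate consistency sentence A for PA is adequate only if

    for every sentence φ:  (PA ⊢ φ)  implies  (PRA + A ⊢ φ)

and any A meeting this condition is unprovable in PA.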

Thanks for the link!

Cool, thanks for the link! I need to be careful not to look too hard at it lest I end up down another rabbit hole, but it is an interesting question as to whether this approach might generalize. I don't really see how to do it, either.

Friedman's work is *only* for first-order logic :-(

Friedman's work is *only* for first-order logic, the limitations of which are crucial to his results :-(

The Peano/Dedekind categorical axiomatization is much stronger than first-order PA.

In my experience, Hewitt

In my experience, Hewitt comes across as not participating in discussions here at all, but as spamming about his ideas in purely one-way proselytizing fashion. Which would leave no room for him to learn anything, in exchange for making it difficult to use LtU for anything except discussing Hewitt. As I've remarked before (in some words or other), if he's going to be a useful contributor to LtU, he has to either change how he comes across (if it's a misimpression) or change how he is (if it's not).

Mostly agree

Mostly agree; I find myself avoiding LtU because of the incessant Actor / Inconsistency Robustness tangents.

don't universalize your experience

spamming about his ideas in a purely one-way proselytizing

He is terse and witty and interactive. I do not mean to deny that your impression is otherwise, only that you are (understandably) wrong.

Which would leave no room for him to learn anything, in exchange for making it difficult to use LtU for anything except discussing Hewitt.

In Berkeley, some real big jerks blame the struggling performance of retail businesses on freaks and travellers, who generally operate outside of consumerism whenever they can.

The jerks are observably idiots: retail struggles because it has so little to offer the actual community.

I think there is an analogy here. I don't see LtU overflowing with a lot of deep, interesting stuff, arguably with Hewitt as an exception if you are ever patient enough to figure out what he's talking about.

Most of what is going on in academic PLT appears to be treading water, bluffing while working obscure and uninteresting technicalities, etc. The field, not just the web site, is largely stagnant.

By contrast, the deeper I dig into Hewitt's work, the richer and more fertile it appears.

BTW, not really a direct reply, but this paper he recently mentioned is pretty tasty so far (I'm halfway through) and might(?) help you better appreciate the structure of the abstractions he's bringing to bear and what the motivation for those abstractions is.

http://www.pcs.usp.br/~coin-aamas06/10_commitment-43_16pages.pdf

I'd love to see a real serious critique of Hewitt's work. So far, you Hewitt critics are offering anti-intellectual garbage, instead.

dis/agree

Yes, I want to learn about wonderful new things. If you can get into Hewitt's stuff and figure it out for humanity, I would seriously appreciate that. No kidding. Some folks have done that before, e.g. how actors turned into E, and I for one am very grateful to all involved.

In the meanwhile, his style of posting might be seen as killing off a lot of other good things on LtU.

You tend to do a better job than Hewitt in terms of style, but the horse has left the barn in terms of some not-insignificant portion of the LtU population being able to put up with anything related to Hewitt posts, I suspect. Again, that is very unfortunate, I wish it weren't so.

The point is not that I think Hewitt should shut up ever. I don't. I just think the final result of his style is trollish, and unfortunately people seem to fall for it, and things snowball.

Every Kant needs a Fichte

After all, we owe Scheme to Hewitt's lack of the "gift for exposition" that Kant regretted he did not have.

Here is what happened.

raould, you posted a link to a description of a book you say you checked out of the library and could not much understand.

The book does not appear to be available on-line.

You quoted an excerpt that vaguely categorizes the book but gives no substantive hint at its original contributions.

Never mind; we can at least make some vague guesses that, OK, this is coming from a Scott-Milner-Parker tradition and is going to work out some technicalities of some damn eddy of that line of thought.

The work in question doesn't appear to have been widely cited or used. It's been a while. It's really not too clear why we should care about it but...

The way you described it made it sound like you keep hearing about this "process calculi" stuff and wanted to dig in and started with this book. Probably(?) you have had some exposure to process calculi before but you wanted to learn more.

Process calculi arise out of some contesting lines of theory starting in the 70s and continuing to today.

Awareness of the history of that thought is pretty damn useful to wrapping your head around books like the one you posted about.

One of the principal researchers from the foundational era (just a few decades back) of the theory in that book offered some comments relating it to the alternative approach to which Milner was responding.

That's pretty nice of that guy if you ask me but, anyway....

Then some jerky comments came along in response and Hewitt defended himself succinctly and with some wit.

Then you confused your non-understanding of the material at hand with some kind of flaw of Hewitt and publicly called for his exile from society.

I'm sorry, but, shame on you.

it is to sigh, as they say in French

I acknowledge my failure to present my other OT in any clear manner. My fault; I accept it.

But I think it has become pretty clear that even when nothing mentions actors, suddenly a post appears that pushes the conversation towards actors.

I suspect most folks on LtU have reached the point where they see that as trolling, even if not done with any malicious intent -- the results are the same, overall.

Certainly this is all subjective, but I am guessing that most folks have put the Hewitt actor threads into their mental kill file. Unfortunately, with the user interface we have here with this old Drupal site, we don't have a nice way to personally avoid stuff that has gone overboard. I don't think actors should never be mentioned; I just personally think that the topic has been rather ruined by the Hewitt habit on LtU, much as I respect him and his work. I mean the world of CS is better off with the meat of his contributions than with anything I have ever done or will do.

But LtU is being drowned in it, and I see that as wrong. That's why I respectfully proposed people voluntarily moving all Hewitt-actor stuff to their own threads. I hope that those who are involved in the Hewitt-actor threads can understand why I suggested this. It keeps the H-a conversations alive, without driving most other people away.

thanks for your support.

p.s. I don't see where I've called for anybody to be banished from society. I do see where I've asked for people to consider what is going on here on LtU and how to try to adjust it. Hyperbole isn't really going to help at the moment.

p.p.s. I do see your point of there being an actual historical connection between actors and process calculi. The sad thing is that Hewitt's posts on LtU have broken most anybody's interest. I for one have a really hard time not seeing it as yet another railroading [1] [2]. That's overall very sad, really, for everyone involved.

p.p.p.s.

It isn't just what you say. It is how you say it. God knows I'm enough of an a*hole to know that (as in I always say it wrong). If Hewitt would flip the style from "I will post a long thing only to actively re/derail the conversation into talking only about actors, and I will only respond with terse koans to everything else" into "I will be as terse as Marco when I want to point out connections with actors, and I will be helpfully, educationally, kindly, usefully verbose when people are trying to engage with me further" then there really would be no problem.

that's petulance

Society does not conform to your demands.

"even when nothing mentions actors"

But I think it has become pretty clear that even when nothing mentions actors, suddenly a post appears that pushes the conversation towards actors.

Are you also opposed to mentions of set theories in response to posts that don't explicitly mention them?

nope

One last try from my side :-)

I think I understand your perspective as I've had it many times before, and probably will find myself in situations where I will have it again.

Personally I think there are nuances of usability that are involved in deciding when and how to communicate. To try to stuff it down to a spectrum, I'd suggest that on one side is e.g. somebody who only writes "FUCK" every other word, and not because they have a condition which causes them to do it, but out of pure disestablishmentarianism. On the far other side is somebody who always says please and thank you and is a clear, concise communicator. In the middle are all sorts of pathologies. Each is more or less subjectively bearable based on who is reading it and where they are reading it. Overall the way Hewitt has posted stuff has, unfortunately I think for several people here, been not good. It hasn't furthered much constructively. A person might be right, but they might also be poorly communicating, and perhaps even outright accidentally destructive in the way they communicate.

I see LtU overall, for a long time, as having suffered from the way in which Hewitt has communicated. I wish it were likely that the style would respectfully adjust, not 'conform', but I don't expect it.

I don't expect that what I'm saying will gain traction with you, and that you will continue to perceive it as me only trying to shut people up or make them conform. I get it; I've felt that way sometimes too. That isn't my main intent.

I'm sorry for the gulf.

Excellent work here; crack the related censorship at Wikipedia?

Thomas,

Since you have done such an excellent job of defeating censorship here, would you like to try cracking the corresponding censorship at Wikipedia in the article Incompleteness Theorems ;-)

Cheers,
Carl

re wikipedia

I am not so much a masochist. :-P

For what it's worth …

(I originally started with a paragraph characterising people's intentions which was necessarily speculative and, I think, didn't necessarily add anything.)

For what it's worth, I agree with raould that, when I see one of Hewitt's posts, I stop reading. That is an entirely personal, non-expert reaction. It is not to say that they are not worthwhile contributions; it is entirely possible that the fault is (some or all) mine for not finding their worth.

However, from my (again) entirely non-expert point of view, it seems to me that Hewitt's posts aggressively advance his own point of view and push aside others'. Whether they are meant to do so or not, I read them that way, as un-civil discourse (regardless of their technical merit). I take Thomas Lord's point that it may seem that the theoretical folks are just as enthusiastic about pushing their own agenda; but, while I have seen many comments in which a theoretical point of view is offered, I have never seen one that seems to suggest that a practical point of view is not also worthwhile. That is, the theoretical comments that I have seen seem almost always to be intended additively, and I think that is an important difference.

I take Thomas Lord's point

I take Thomas Lord's point that it may seem that the theoretical folks are just as enthusiastic about pushing their own agenda

Pushing your own agenda is fine since we want to hear an author's view, but how you push it matters. In the 9 years I've been here, I don't recall any paper's author acting like Hewitt. I would expect his terse, possibly unhelpful replies from an author who is understandably too busy to provide a more thorough explanation, but Hewitt's high posting rate suggests that isn't the issue. Better to have 1 or 2 long posts elaborating on a few concrete issues than 50 terse one-line posts simply repeating what you've already said many times.

Which isn't a commentary on Hewitt's work, which I haven't read, simply his PR, which is unhelpful at best. I'm glad Thomas has taken up that mantle, as at least he interprets and elaborates in a helpful manner. Hopefully it can be settled one way or another once and for all.

It is what it is

Proof by reflexivity.

I think at least half of the problem is that Hewitt speaks a different language than most everyone else, so that one of his short comments can spark a 20 post subthread of comic misunderstanding.

Not to deny the phenomenon,

Not to deny the phenomenon, but I doubt the 'at least half'. Looks to me like just one of several obstacles obscuring others' view of errors in his reasoning; poor communication of misguided ideas can be much more of a problem than poor communication of correct ones.

It's an impish MIT style

The 70s, man, you had to be there or at least close to it.

judgements like "poor communication".

Judgements like "poor communication" are problematic. For example, one way to look at it is that the effectiveness of the communication is not necessarily the same at all time or space scales.

I subjectively have had the experience of being pretty sure he was advanced into dotage, followed by a retrospective recognition that I was completely wrong in that impression. The experience of that flip was an experience of a whole lot of previously said things that seemed crazy suddenly now being pretty darn concise and lucid.

That is the way the style I dubbed an MIT style works.

"Words are like fingers, pointing at the moon," the saying goes.

Effective Communication

Effective communication gets people to understand. If the audience does not understand, the fault is entirely with the speaker; to try and blame the audience for the speaker's failure to get them to understand is never the right thing to do.

religious issues

If the audience does not understand, the fault is entirely with the speaker,

I don't subscribe to that religion.

Untenable position

I do not understand how anyone interested in educating others could have that point of view. It is so fundamentally self-evident that it is the duty of the teacher to explain their ideas to others that I don't believe you really think that, and are being rhetorical.

Clearly the definition of effective communication is that the audience understands, and I note you do not dispute this, but you effectively say you do not subscribe to effective communication?

The people who explain their ideas best are the ones society remembers, and the ideas of which become wide-spread. If Hewitt's ideas do not gain popularity it is nobody's fault but his own, for not explaining them well enough.

To blame the audience for not understanding is like a failed politician blaming the voters for not voting for them.

re If the audience does not understand, the fault is entirely wi

If the audience does not understand, the fault is entirely with the speaker,

the duty of the teacher to explain their ideas to others

I do not believe that "fault" and "duty" are really useful concepts to bring in here. (How about "sin"? "virtue"? Where do we stop?)

To blame the audience for not understanding

Nobody is doing that, either.

"Academic politics" = good writing

Indeed, it can be reasonable to leave the responsibility of understanding with the producers or with the consumers — you can argue either way. However, once you focus on modern scientific communities, you have less freedom. Hewitt complained at times of complicated "academic politics" interfering with his publications. I'm sure clarity is an issue, and who should do what is a matter of "academic politics".

Generally, the job of a scientist is to discover and communicate. Today in particular, readers are overloaded by information, hence simplifying the readers' job through good writing is overall more important — readers will abandon you for stuff which is written better. The US dominance might also have helped — the modern style we learn seems unique to the Anglo-Saxon world. Finally, and luckily, the whole system evolved to favor good writing — unlike some other disciplines, in CS it's hard to publish (at venues worth reading) without decent writing, and writing training is part of PhD training.

Program committees and call for papers explicitly enforce a high level of clarity — which also makes sense for a paper which might be read a lot, if you weight equally the time of writers and readers.
Moreover, to publish you have to make sure your reviewers get your paper, instead of rejecting it because they misunderstood something you didn't explain well, or because they have important questions that you forgot to answer. It's the same with proofs.

Of course, in your training as a PhD student you also get taught that all of this is a good thing. Reality is more complex: Researchers of older generations have more varied opinions, some research schools still act differently, and not everything that is published is so clear — some things are only clear to experts. I also understand that things were probably different when Hewitt started.

Still, Hewitt's writing is an extreme case, because he combines unclear writing with surprising claims, and he seems to resent his work being found obscure, in a way I don't see from other obscure writers.

For instance, Gödel's theorems aren't trivial, but they are well-understood, and we know very well that you can only elude them with a logic where you can't do enough math (or where proof-checking is undecidable).
Similarly, first-order logic has been used for tons of applications in CS. Maybe this was a hack, OK, but claiming it's inadequate without comment won't get your paper accepted.

Overall, scientific communities define rules and conventions to do good science; if you reject those conventions, you're rejecting that community, and they'll typically reject you back.

re "Academic politics" = good writing

No. Hewitt's writing is pretty good. It's dense. Parts of it are generationally anachronistic (older people get his references better).

The big thing is, there's a lot crammed in there. He has a fairly encyclopedic knowledge of some key branches in the history of mathematical thought and he explains a bunch of stuff by locating it with reference to that history.

For instance, Gödel's theorems aren't trivial, but they are well-understood,

We have seen, right here on LtU, ample evidence to the contrary.

and we know very well that you can only elude them with a logic where you can't do enough math (or where proof-checking is undecidable).

It's easy to go wrong trying to informally paraphrase what Goedel allegedly proved.

First-order logic is an inadequate foundation for CS

First-order logic is an inadequate foundation for Computer Science.

Please see section on "Limitations of first-order logic" in Inconsistency Robustness in Foundations

I don't think there's

I don't think there's anything controversial about "poor communication" in this context. If almost everybody finds him hard to understand, he's not communicating well. The subjective experience you describe, even if you're correct in your perception, doesn't interfere with the judgement.

re If almost everybody

If almost everybody finds him hard to understand, he's not communicating well.

"Almost everybody" is a very large set, especially if we take a 4 dimensional view. I'm not buying your certainty.

In this context? It looks

In this context? It looks pretty straightforward. You believe you understand what he's saying and he's right. Do you think there's anyone else around here who believes those two things (of themselves)? They haven't been much in evidence; and you'd think they would be, because he's making claims that if true would be a big deal.

Shutt: an invitation. (re he's making claims that if true...)

he's making claims that if true would be a big deal.

I don't disagree with that but I also think that is a vague generalization.

Listen, you personally have really impressed me with some of the stuff you've written that has nothing to do with Hewitt. It's a thin wire and a long distance between us but at least in your presence on LtU I think I do recognize an intellectual.

I will be happy to contemplate and respond to any specific criticism of his work that you want to go over. For me, this is a good way to approach the texts and the ideas -- to see for myself how they hold up to critical "attack". Bring it on. You believe I'm wrong in my judgment about this work? Great! I'll be happy to join you in that belief if you can show me, specifically, what you mean.

LtU is for discussing papers, after all ;-)

You believe you understand what he's saying and he's right. Do you think there's anyone else around here who believes those two things (of themselves)?

It's ambiguous. It's certainly possible and plausible. I don't worry about that kind of question.

Not necessarily

he's making claims that if true would be a big deal.

I don't think that's necessarily true. I'm not sure any of the claims he's making about DirectLogic are wrong. His claims about Godel's results being invalid because of "fixed points" seem wrong. But it wouldn't surprise me that the stuff about math proving its consistency or uncountability of actors turns out to be right if properly interpreted. However, I suspect that they also won't be particularly interesting when properly interpreted, either. For example, I still suspect you can interpret his top level turnstile as more-or-less an assertion of truth, rather than provability. Of course, when I went to investigate, I got bogged down on the uncountability of sentences (which still appears to me to be a gimmick). I also think the documentation around this project is still too lacking to efficiently get to the bottom of issues like this.

nekkid turnstile

I still suspect you can interpret his top level turnstile as more-or-less an assertion of truth

I think what you are missing here is that Direct Logic is reflective in a higher order way. As Hewitt says: "DL is its own meta-logic."

That's a pretty compelling idea for a proposed "foundation". We don't need an infinite tower of meta-logics. DL is, among other things, about itself.

We can do things directly in DL, without stepping outside to some separate meta-logic, like talk about various classes of propositions, about proofs, and so on. Propositions and proofs are first class objects built-in to the axioms of DL.

Given that, well, you are well aware that there is a deep distinction between the provable and the true.

For example, it may be true that a given program will never halt even if no proof of that fact exists.

Just stylistically, that's what nekkid turnstile is for: to allow reflection specifically on the provable.

But more deeply: I think(??) I understand you to still wonder if there is a first-order theory model of DL, at least for all intents and purposes.

I am very naive about model theory. Can you tell me a bit about how one might prove the non-existence of a first-order model? Then I can try to figure out if DL can be proved not to have a first-order model.

His claims about Godel's results being invalid because of "fixed points" seem wrong.

The meat of his claims about Goedel's fixpoint concerns the second incompleteness theorem. I recommend you find a translation of the full paper and read that part. It's a complete hand-wave: utterly non-rigorous, sloppy as hell, not formalizable. Read it (it's only a few pages) and get back to me. What Hewitt is doing there is giving it the most generous reading anyone has come up with and, quite correctly agreeing with but being more precise than Wittgenstein, dispelling the hand-wave.

The problem with Goedel's second is that people have this slogan that informally goes "If a system is powerful and proves itself consistent, it is inconsistent!"

The slogan is false. It is an over-generous interpretation of Goedel's second.

Hewitt's contribution in this area is .... rigor.

For a more formal treatment of Godel's 2nd

See here.
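
(For reference, the formulation I have in mind is the derivability-conditions one: for an r.e. theory T with enough arithmetic and a provability predicate P satisfying

    D1:  if T ⊢ φ then T ⊢ P(⌊φ⌋)
    D2:  T ⊢ P(⌊φ ⇒ ψ⌋) ⇒ (P(⌊φ⌋) ⇒ P(⌊ψ⌋))
    D3:  T ⊢ P(⌊φ⌋) ⇒ P(⌊P(⌊φ⌋)⌋)

the theorem says: if T ⊢ ¬P(⌊⊥⌋), then T is inconsistent.)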

(Going a bit Hewitt myself on it: To me it proves that "discrete thought" is effective, an evolutionary trait, but flawed in the limit. Which tells us, since we assume some kind of consistency of reality, that reality is a continuum. For as far as these approximations apply to "continuous" thought.)

(Well. That's how I read it. Yeah, I am pretty weird. I haven't even started yet.)

it's worse than that (he's dead, Jim)

I agree it pretty much arises from the discreteness of formal systems. But you can't avoid it by announcing that your discrete elements have magical immunity to being counted. My view in my blog post on this stuff was, and so far still is, that to get around the discreteness aspect of Gödel you'd have to tamper with the concept of discrete propositions, or something equally shattering; rather than tampering merely with what the propositions are about.

[...] it's more than a little mind-bending to try to construct a notion of truth that's inherently not separable into discrete propositions; indeed, one might wonder if the consequences of the attempt could be rather Lovecraftian. So for the nonce I'll keep looking for profundity around Gödel's results without abandoning the concept of a proposition.

I think he made it worse, actually

My reasoning is that I can restrict myself to a subset of his syntax which is countable and try the Godel argument.

So, having uncountably many sentences doesn't help, it actually makes his position worse.
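
Slightly more explicitly, my argument: let S0 be the sentences generated from strings over a finite alphabet only, with no Actor constants. S0 is countable, and as long as the construction and inference rules never lead out of S0, a Godel-style argument only ever needs S0:

    S0 ⊆ S,  |S0| ≤ ℵ0,  rules(S0) ⊆ S0  ⇒  the argument runs inside S0.

Making the full language uncountable leaves that countable core untouched.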

re it's worse than that

John, if you believe that Goedel's 2nd theorem applies to Direct Logic then you can rigorously prove it by producing a contradiction. You won't be able to do so because of the rules for constructing Sentences (which I've linked several times now; page 27 of that paper).

If you merely believe that DL is inferentially incomplete, good news, you can prove that pretty simply in DL itself.

if you believe that Goedel's

if you believe that Goedel's 2nd theorem applies to Direct Logic then you can rigorously prove it by producing a contradiction.

It should be clear, on a moment's consideration, that that doesn't follow. It requires the additional stipulation that Direct Logic be rigorously defined, lack of which was in the past sharply criticized; whether it's now well-defined is one of the things I'd expect to look for. Of course, it also requires that DL be 'sufficiently powerful', which from some past discussions I suppose could also warrant keeping a weather eye out.

Confusion about language and model

Additionally, there seems to be some confusion about language and model. If the model is uncountable it doesn't imply the terms of the language are. Right?

(I really want to get out of this discussion by now. Someone else, better informed, can debate this.)

DL is well defined.

It requires the additional stipulation that Direct Logic be rigorously defined, lack of which was in the past sharply criticized; whether it's now well-defined is one of the things I'd expect to look for. Of course, it also requires that DL be 'sufficiently powerful', which from some past discussions I suppose could also warrant keeping a weather eye out.

DL is well defined in the appendices of the damn paper

Re formal treatment of Goedel's 2nd

If you want to formalize Goedel's second, you need a formal system in which Sentences (which may contain free variables) and Propositions (which may not) are first class objects.

Hewitt's system (see page 27) imposes strict rules on how Sentences may be constructed.

There does not appear to be any way to carry out Goedel's construction without violating those rules.

DL is inferentially incomplete and can prove itself so. It can do so more directly than Goedel's first theorem because it does not need to "encode" sentences. But you cannot build a proposition of the sort Goedel suggests in the 2nd theorem, a proposition that would lead to a contradiction.

If you say so

Good. That's fine with me. If you can block it in that manner, that could imply that the construction of a Godel proof is impossible within the logic. (It remains a bit of a question what you could do if you try to do a Godel argument outside of the logic. Or within the logic with an encoding of sentences.)

But after claims about Y combinators (for which I don't think I'll find supporting evidence in literature) and claims about uncountability (which I think doesn't exclude the existence of a countable core such that a Godel construction could apply; also seems to confuse language and model), I am personally not going to read through it anymore. (I have to admit I am also not that interested in DL.)

But be my guest and keep discussing it with Matt M.

If you want to formalize

If you want to formalize Goedel's second, you need a formal system in which Sentences (which may contain free variables) and Propositions (which may not) are first class objects.

Actually, no, you don't. It suffices that integers are first class objects.

re if you want to formalize

If you want to formalize Goedel's second, you need a formal system in which Sentences (which may contain free variables) and Propositions (which may not) are first class objects.

Actually, no, you don't. It suffices that integers are first class objects.

You simply don't understand Goedel's second theorem. If you want to read it together I'd be happy to go over it with you.

You're mistaken. Though I

You're mistaken. Though I admit to a trivial misstatement, in that one has to reason about integers and also functions on integers (or something equivalent).

no I'm not. re You're mistaken

Goedel's second requires you to construct a sentence which you can prove to be unprovable, but that formally, within the system at hand, implies the sentence and so constitutes a proof of the sentence.

Goedel's second requires a reflective logic.

Since Goedel did not think in terms of types, he was unable to contemplate how a logic could be safely reflective in the way that Hewitt, thinking in terms of types decades later, can. Goedel couldn't imagine how to avoid certain kinds of recursive definitions of a sentence. Hewitt points that out in retrospect.

As for Hewitt's general approach to categorical construction: Have you ever read any of Conway's "On numbers and games"? I strongly recommend, in this context, the first couple of chapters plus the bit between parts 0 and 1... where Conway addresses foundations.

Conway freely makes use of transfinite inductions (for fun!) and Hewitt freely excludes them (for computer science!) but those two guys are on the same page in terms of how to build foundations.

If the formal system can

If the formal system can reason about the behavior of arithmetic functions, and if some arithmetic function happens, coincidentally, to mirror the behavior of the system reasoning about arithmetic functions, the system can forbid anything else, like reflective sentences or fixpoints, all it wants; they're irrelevant. If it can't reason about arithmetic functions, it's pretty lame. If its behavior in doing so doesn't happen to coincide with the behavior of some arithmetic function, it's uncomputable, which is a different kind of lame.

re if the formal system can

Hi Shutt.

Goedel's first theorem meta-mathematically proves the incompleteness (or else inconsistency) of any axiom set strong enough to do arithmetic.

DL, importing proof methods from Turing or Church, proves its own incompleteness (or else inconsistency).

Goedel's second theorem, which is really kind of a hand-wave pseudo-theorem, imagines a logic in which the first theorem is not done meta-mathematically, but is instead done directly in the system we are talking about. To accomplish this, the second theorem requires that we construct a sentence which can formally prove, within the system, that one of its terms is its own integer "encoding".

Goedel's second theorem also assumes the system is self-provably self-consistent.

From the two assumptions (self-provably self-consistent and containing a self-referential sentence in that sense) Goedel describes how to derive a contradiction.

What Goedel could not see, because he lacked awareness of how types could be used here, is the possibility of a system that could formally (even if not dispositively) prove its own self-consistency, yet in which the self-referential sentence was excluded by a static syntax check.

Hewitt has pointed out that such a system is possible, and is actually (once you get a few simple things) natural, simple, powerful, ample to formalize classical maths, etc.

Part IV of Goedel's paper is a hand-wave. Hewitt sorted it out. He's not kidding (and is right) when he points at a few sentences from Wittgenstein's record that show Wittgenstein kind of grasped how to sort it out, even if he didn't manage to formalize it rigorously as Hewitt has done.

p.s.: this is all "preliminary" in the sense, yeah yeah, Hewitt has a clean maths foundation, but really the practical applicability of this to engineering distributed, concurrent, and decentralized computing infrastructure appears quite fertile.

Seems utterly wrong

What Goedel could not see, because he lacked awareness of how types could be used here, is the possibility of a system that could formally (even if not dispositively) prove its own self-consistency, yet in which the self-referential sentence was excluded by a static syntax check.

This is the kind of statement Hewitt makes with which I most strongly disagree. This characterization seems totally wrong to me. Godel's theorem doesn't require the kind of fixpoint Hewitt blocks with types! See my post here. The reason that proof can't be made to go through in DirectLogic has nothing to do with types. Look at the types involved - they're trivial.

The only obstruction to carrying out that proof in DirectLogic is that DL doesn't understand that propositions, formulas and theorems (of the naked turnstile) are enumerable. And of course Godel's theorem falls through if you take away those elements. The enumerability of theorems is the crux of the idea.

Also, one point on which you and John may be talking past each other: Godel's second theorem still applies to DirectLogic in this case, but using a homebrew provability construct defined within Nat instead of the naked turnstile. If it applied to the naked turnstile, I gather that would lead to inconsistency in DirectLogic.
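
The missing ingredient, spelled out in my notation: Godel's argument needs the theorem set to be recursively enumerable, i.e. some computable relation Prf with

    (⊢ φ)  ⇔  ∃n ∈ ℕ. Prf(n, ⌊φ⌋)

and DirectLogic supplies no axioms tying the naked turnstile to any such Prf.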

ℕ does not allow "self-referential" props because of types

The Peano/Dedekind categorical axiomatization of ℕ does not allow "self-referential" propositions because they cannot be constructed using nonexistent fixed points that are disallowed by types. Also note that because of types, ℕ is much stronger than what can be axiomatized in any first-order theory.

Church/Turing provided the first correct proof that the closed mathematical theory ℕ is inferentially incomplete using the convention that the theorems of the closed mathematical theory ℕ are computationally enumerable. Because of Church's Paradox, the theorems of Mathematics are not computationally enumerable.

Strings are computationally enumerable. However, for generality in Computer Science, sentences and propositions are not countable, in order to accommodate sentences and propositions that incorporate Actors.

It is very important for Computer Science that the following are rigorously defined and axiomatized using types:

Strings
Sentences
Propositions

ℕ does not need "self-referential" props

The Peano/Dedekind categorical axiomatization of ℕ does not allow "self-referential" propositions because they cannot be constructed using nonexistent fixed points that are disallowed by types

But Godel's theorem doesn't require "self-referential" propositions. See the link I provided above. Your response in that thread was that formulas aren't enumerable. I agree that's an effective response, but it also gets at the heart of my concern for your approach: you haven't effectively axiomatized the notion of provability in DL. Thus meta-level claims that DL proves its own consistency fail because the DL turnstile doesn't actually correspond to provability as it is usually understood.

Gödel's incompleteness "proofs" rely on a "self-ref" propositon

Gödel's incompleteness "proofs" manifestly rely crucially on the existence of a nonexistent "self-referential" propositon constructed using a nonexistent fixed point.

Consequently, Gödel's incompleteness "proofs" are incorrect.

I don't think so

Your presentation of Godel that I've seen you post several times relies on the existence of fixpoints. Godel's original presentation and the presentation I linked to in my post above rely on a diagonalization technique instead. The existence of an inconsistency proving fixpoint, in the presence of an assumed self-consistency proof, was a conclusion rather than an assumption.

Gödel's "proofs" relied on existence of "I am not provable."

Gödel's incompleteness "proofs" relied on existence of the self-referential proposition "I am not provable." constructed using a nonexistent fixed point (sometimes called the Diagonal Lemma).

re I don't think so

I have an idea for cracking this confusion:

Your presentation of Godel that I've seen you post several times relies on the existence of fixpoints. Godel's original presentation and the presentation I linked to in my post above rely on a diagonalization technique instead. The existence of an inconsistency proving fixpoint, in the presence of an assumed self-consistency proof, was a conclusion rather than an assumption.

There are two ways to understand Goedel's 1st theorem.

Way 1: Goedel gives you a computational procedure. If you perform that computation, you can produce an unprovable proposition or else a contradiction. The proposition asserts that a certain equation has no solutions. If you had a proof of the proposition, that no number is a solution to that equation, you would be able to use Goedel's computation to translate the proof into a number that... satisfies the equation. If your system is free from contradictions, no proof of that proposition is forthcoming.
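
(In the usual notation, my gloss of "way 1": the proposition has the form

    G:  the equation e(x1,...,xk) = 0 has no solution in ℕ

where e is built so that any proof of G computes into an actual solution of e. So a proof of G yields a contradiction, and a consistent system leaves G unprovable.)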

Way 2: You have, let's say, built such a proposition. You have an equation with no solution (hopefully) and a proposition that asserts the equation has no solution. You know that if a proof were found a contradiction would arise. So, you conclude that the proposition is true. I don't mean that you informally, looking at the model, believe it faithful to the target system and so say the proposition is informally true. I mean you have a formal result that links the model in the equation to the truth or falsity of the proposition containing the equation.

Nothing Hewitt or I are saying detracts from understanding the 1st incompleteness theorem in "way 1". Of course, interpreting it that way is unsatisfying compared to "way 2".

The problem with "way 2" is that even though you are reasoning metamathematically, you are still reasoning over domains like Sentences and Propositions in the target system. You are still constrained by their rules of construction. The construction rules for Sentences in DL don't give you any way to formalize "way 2".

In your earlier post (other topic) about the diagonalization you think should go through, essentially you are asking to relax the rules for constructing sentences.

Basically, you are asking for syntax "trees" that contain cycles because, given those, and feeding some particular ones to rules of inference, you can do all kinds of Goedelian things. The domain of trees used here, though, doesn't allow cycles.

Nearing exhaustion

Nothing Hewitt or I are saying detracts from understanding the 1st incompleteness theorem in "way 1".

I think you're speaking for yourself here. Hewitt has been pretty clear, I think, that he doesn't see either of Godel's incompleteness theorems as valid.

Basically, you are asking for syntax "trees" that contain cycles because, given those, and feeding some particular ones to rules of inference, you can do all kinds of Goedelian things.

No, not at all. I'm asking for syntax "trees" that can be inspected as data because that's what expressions are. I similarly want proofs that can be inspected as data, because that's what proofs are. Without adding axioms that allow you to understand turnstile in terms of the existence of enumerable proofs, how can you claim in the meta-level that it means provability?

Note that syntax trees will not contain cycles. Godel doesn't use some oddball variant of PA that allows cycles. Rather he uses ordinary PA that doesn't include cycles and shows that the assumption of provability predicate would lead to a problematic cycle. That problematic cycle doesn't exist though, because PA is consistent and cannot encode its own provability relation. That's why using types to prevent self-reference doesn't make any sense - PA doesn't allow self-reference!
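
(For contrast, the standard diagonal lemma as I'd state it: for every formula ψ(x) there is a sentence σ with

    PA ⊢ σ ⇔ ψ(⌊σ⌋)

where σ is built by ordinary substitution; its syntax tree is perfectly acyclic, and the "self-reference" lives in the arithmetic coding, not in the syntax.)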

I think we may have to agree to put this aside at some point. I don't feel that we're making progress towards a resolution. Do you see what I'm saying about the meaning of turnstile being so important? Do you agree with my point that if you don't include enough axioms to pin down its meaning then you aren't warranted to claim at a meta-level that it represents provability?

re nearing exhaustion

I think you're speaking for yourself here. Hewitt has been pretty clear, I think, that he doesn't see either of Godel's incompleteness theorems as valid.

"Way 1" of understanding Goedel's 1st doesn't require formally valid proofs about the subject system. That's what distinguishes it from "way 2".

I similarly want proofs that can be inspected as data, because that's what proofs are. Without adding axioms that allow you to understand turnstile in terms of the existence of enumerable proofs, how can you claim in the meta-level that it means provability?

The DL rules of inference.

Note that syntax trees will not contain cycles. Godel doesn't use some oddball variant of PA that allows cycles.

In the second theorem he hand-waves concerning an imaginary variant of PA that would allow such cycles. Throughout his life he apparently never made this result precise.

Do you agree with my point that if you don't include enough axioms to pin down its meaning then you aren't warranted to claim at a meta-level that it represents provability?

You can introduce a naked turnstile before proposition P by showing a proof of P. If you assume the provability of P you can assume P.

I'm not sure what else you would mean by provability.

Hiatus

"Way 1" of understanding Goedel's 1st doesn't require formally valid proofs about the subject system. That's what distinguishes it from "way 2".

Right, but Hewitt doesn't seem to acknowledge way 1.

The DL rules of inference.

Sure, OK. We're on the same page here. Most inference rules can be viewed as axioms or vice versa. But it's important that the axioms + inference rules pin down the meaning of provability.

In the second theorem he hand-waves concerning an imaginary variant of PA that would allow such cycles.

So you're interpreting all of Hewitt's claims about the invalidity of Godel's theorems as referring exclusively to some commentary Godel published about a kind of general theorem he never actually proved? I have a hard time reading him that way.

You can introduce a naked turnstile before proposition P by showing a proof of P. If you assume the provability of P you can assume P.

Well, to really pin it down, you need that provability is equivalent to the existence of a certain finite proof object.

I am getting exhausted by this thread, so I'm going to take a break for a while.

Every proof has a finite number of steps

Of course, every proof has a finite number of steps and furthermore there is a total computable procedure Proof?:

∀[p:Proof, s:Sentence]→  ├p ⌊s⌋ ⇔ Proof?.[p, s]=True
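
For intuition, here is what such a total checker can look like, sketched in Python rather than DL (the step format, the toy axiom set, and the names is_axiom/proof_checks are all invented for illustration): it makes one pass over the finite list of steps, so it always terminates.

    def is_axiom(f):
        # Stand-in for a decidable axiom check; real systems use schemas.
        return f in {('->', 'p', ('->', 'q', 'p'))}

    def proof_checks(proof, sentence):
        # Total by construction: one pass over the finite list of steps.
        # A step is ('axiom', f) or ('mp', i, j, f), citing earlier steps.
        derived = []
        for step in proof:
            if step[0] == 'axiom' and is_axiom(step[1]):
                derived.append(step[1])
            elif step[0] == 'mp':
                _, i, j, f = step
                if not (0 <= i < len(derived) and 0 <= j < len(derived)
                        and derived[i] == ('->', derived[j], f)):
                    return False
                derived.append(f)
            else:
                return False
        return bool(derived) and derived[-1] == sentence

    # Example: the one-step proof of an axiom checks out.
    assert proof_checks([('axiom', ('->', 'p', ('->', 'q', 'p')))],
                        ('->', 'p', ('->', 'q', 'p')))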

Gödel's "I am not provable" comes from a nonexistent fixed point

Gödel's alleged Mathematical sentence "I am not provable" comes from a fixed point (the one posited by the so-called Diagonal Lemma) that doesn't exist because of types. His results were for Principia Mathematica, which was intended as the foundation of all of Mathematics. Unfortunately, Principia Mathematica had some defects in its types that have been corrected in Direct Logic.

Church/Turing correctly proved inferential incompleteness (sometimes called the "First Incompleteness Theorem") without using a nonexistent "self-referential" proposition. The Church/Turing theorem and its proof are very robust.

Of course syntax trees (e.g. Actors of type Sentence) in Direct Logic can be inspected as data. However, "self-referential" sentences cannot be constructed because of types.

Bare turnstile (i.e. ⊢ without a sub-scripted theory) is very important in Direct Logic where it means what it is supposed to mean, namely, provability in Mathematics.

Church's Paradox means theorems of Math not enumerable

Church's Paradox means that the theorems of Mathematics are not computationally enumerable. According to Church:

in the case of any system of symbolic logic, the set of all provable
theorems is [computationally] enumerable...  any system of symbolic
logic not hopelessly inadequate ... would contain the formal theorem
that this same system ... was either insufficient [theorems are not
computationally enumerable] or over-sufficient [that theorems are
computationally enumerable means that the system is inconsistent]...
    This, of course, is a deplorable state of affairs...
    Indeed, if there is no formalization of logic as a whole, then
there is no exact description of what logic is, for it is in the very
nature of an exact description that it implies a formalization. And
if there is no exact description of logic, then there is no sound basis
for supposing that there is such a thing as logic.

Uncountable = uncomputable

You should agree that (most) other presentations of logic have countable sets of sentences/propositions — even when types are involved. Moreover, Gödel's theorems only hold if proof-checking is decidable. If proof-checking is decidable, those sets are countable (more precisely, a terminating program only looks at a finite amount of data — so it either only has finite input, or it looks only at a finite part of its input).

Strings are countable; sentences and propositions are not

Strings are countable, but sentences and propositions are not countable.

Proof checking is nevertheless computationally decidable.

5 words

When Hewitt makes these assertions he never bothers with even five more words to say why they would be the case.

A person who isn't hip to Thomas Lord's claim that this is the "70's way" would be forgiven for assuming that Hewitt doesn't know the definition of countably infinite.

Theorems of Math are not computationally enumerable

Mathematics proves that it is open in the sense that it can prove that its theorems cannot be provably computationally enumerated:
Theorem ⊢Mathematics is Open
Proof. Suppose to obtain a contradiction that it is possible to prove closure, i.e., there is a provably computable total procedure TheoremsEnumerator:[ℕ]↦Proposition such that it is provable that

∀[Ψ:Proposition]→  (⊢ Ψ)  ⇔  ∃[i:ℕ]→TheoremsEnumerator.[i]=Ψ

A subset of the theorems are those proving that certain procedures [ℕ]↦ℕ are total.
Consequently, there is ProvedTotalsEnumerator:[ℕ]↦([ℕ]↦ℕ) that enumerates the provably total computable procedures [ℕ]↦ℕ that can be used in the implementation of the following procedure:

Diagonal.[i:ℕ]:ℕ ≡ 1+(ProvedTotalsEnumerator.[i]).[i]

However,

• Diagonal is a provably computable total procedure because it is implemented using provably computable total procedures.
• Diagonal is not a provably computable total procedure because it differs from every other provably computable total procedure.

The above contradiction completes the proof.

[Franzén 2004] argued that mathematics is inexhaustible because of inferential undecidability of mathematical theories. The above theorem that mathematics is open provides another independent argument for the inexhaustibility of mathematics.
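
For readers following along, the shape of the diagonal step can be sketched in Python. Everything here is illustrative; proved_totals_enumerator stands in for the enumeration the proof supposes, for contradiction, to exist.

    # Hedged sketch; 'proved_totals_enumerator' stands in for the assumed
    # enumeration of the provably total procedures [N]->N.
    def proved_totals_enumerator(i):
        # pretend the i-th provably total procedure is n |-> n + i
        return lambda n: n + i

    def diagonal(i):
        # differs from the i-th enumerated procedure at input i
        return 1 + proved_totals_enumerator(i)(i)

    # diagonal is built only from total pieces, so it is total; but if it
    # occurred in the enumeration at some index k, then
    # diagonal(k) = 1 + diagonal(k). No integer satisfies that, so no such
    # enumeration can exist.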

uh

While I don't get your notation, I know that to use a diagonalization the elements have to be infinite.

If you posit that propositions are finite in length then you can't use that argument.

One might point out that while an infinite proposition might be true, one could never prove it unless it succumbs to induction - and that's only possible if there's a finite form of the same proposition.

Thomas claimed that you had a different argument

His claim was that you allowed your proofs to contain what one might call unconstructable reals. That you considered them the completion of infinite processes that involve randomness (and thus the program generating them can be finite).

re Thomas Claimed

No I didn't but I'll clarify.

DL is able to directly prove that proofs can't be enumerated. The argument is borrowed, I've read, from an observation Curry made in a communication to Goedel. (Hope I remember that correctly.)

If you could enumerate the proofs, that would mean you could write a procedure that, given a natural number (0, 1, 2, ...), returns the Nth proof. And furthermore, every possible proof would eventually show up in this list.

Curry pointed out, and Hewitt formalizes, a contradiction that arises if proofs could be enumerated that way.

------------------

Hewitt has also mentioned that terms can be abstracted as actors and the existence of actors that do things like compute infinite lists of random bits proves there are uncountably many possible actors, hence terms.

I've elaborated on why that is potentially a practical observation.

------------------

DL does not insist that terms are abstracted as actors. There are no axioms that describe how to do this. (Perhaps there is a ("sub-")theory defined somewhere about this, but not in the papers under discussion.)

DL is open and its axioms are general enough to permit terms to be abstracted as actors, which is another way to see that propositions (hence proofs) are not countable.

As I understand it, that paradox is invalid

A diagonal has to be a proposition of infinite length, but a countable infinity of propositions would only have propositions of finite length, so it's no contradiction that the diagonal is not a member.

If you start with the assumption that propositions can have infinite length, then you could use a more abstract diagonal to prove that they have the order of the continuum.

I've addressed this before.

Explaining diagonalization

I'm sitting in a parking lot at Safeway and I'm almost frustrated enough to post a top level "start a new topic" post to explain Cantor's diagonalization to Thomas Lord and Hewitt.

I know how all this works because decades ago (or was it only a decade and a half) there was a 13 year old troll on USENET's sci.math who misunderstood Cantor and was posting abuse every day saying that mathematics was wrong. Since sci.math was unmoderated, all people could do was try to explain to him how he was wrong.

So between that and a short book on the subject I came out with a solid idea of how countable infinities work and how they differ from C, the continuum.

I'm going to assume that you know the outline. Start with the set of natural numbers, then generate its power set: all of the subsets, including the infinite ones.

From this, each number on the line from 0 to 1 can be constructed in correspondence with the subsets: for each subset, the number is the sum of 2^(-1-n), where n ranges over the natural numbers in that subset.

To this you can apply the diagonalization argument to show that it can't be put in a one-to-one correspondence with the counting numbers.

But stop for a second and imagine what would happen if, instead of constructing reals with sums of 2^(-1-n), you lost the -1 and the minus sign and constructed sums of 2^n. That's what this 13 year old did. After all, that's pretty much what binary integers look like.

Where he got confused is that while this DOES generate all of the natural numbers, it also generates another set. It's not a continuum because of the natural numbers; it's a continuum because of all the infinite numbers it also generates.

Say that we're displaying a number backwards, from 2^0 to 2^1, 2^2 ...

0000... where "..." contains an infinite number of 1s is an infinite number. 1000... is also an infinite number if its continuation has an infinite number of 1s in it; it doesn't matter where they are. And so on.

If you want to generate only natural numbers then you have to restrict your subsets to only be FINITE subsets.

The difference between the power set and the set of all FINITE subsets is a set that, interpreted this way, contains only infinite numbers, and it is uncountable.

If instead of using natural numbers as our base set, we took finite mathematical propositions as our base set, then the set of all finite subsets of this generates a countable set. But the power set generates a continuum.

A USELESS continuum. Every finite proposition is in a countable set. The difference set between the continuum and the finite sets of propositions consists entirely of infinite sets of propositions. If you wanted to prove proposition AND proposition AND proposition etc... you never could.

Like the infinite natural numbers, you can create a set with the order of the continuum, but it turns out to be an extension that loses the useful properties of the smaller set.

The useful integers are finite and countable.
The meaningful propositions are finite and countable.

To really explain this I'd have to give examples of counting sets.

I leave it to the student to refresh themselves on this.
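
A concrete version of the counting trick, as a Python sketch (the encoding and the helper names are mine, chosen for illustration):

    # FINITE subsets of N biject with N via bit patterns, so they are
    # countable; the full power set has no such encoding.
    def encode_finite_subset(s):
        return sum(2 ** n for n in s)

    def decode_natural(k):
        return {n for n in range(k.bit_length()) if (k >> n) & 1}

    assert decode_natural(encode_finite_subset({0, 2, 5})) == {0, 2, 5}

    # The same bit-pattern idea applied to INFINITE subsets instead yields
    # the reals in [0, 1], via the sum of 2^(-1-n) over n in the subset;
    # those are what the diagonal argument shows to be uncountable.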

re explaining diagonalization

DL doesn't have sets of the kind you are used to.

For example, I can tell you that "the powerset of the set of all natural numbers" is not a thing that exists in DL.

In fact, "the set of all natural numbers" is not a thing that exists in DL.

Natural numbers are objects whose type is ℕ

Sets of natural numbers are objects whose type is Set◁ℕ▷

The axioms for sets in DL don't provide any way to define the "universal" set of some type.

Sigh, you learned nothing

I might as well be typing at a rock.

Do you concede that your idea of a "proposition" can be infinitely long?

If so, they are trivially uncountable. Also trivially useless as propositions.

If you don't admit those things, then you never understood the terms you use. I see little sign that you or Hewitt do.

I give up. How long until everyone else does?

re sigh

I might as well be typing at a rock.

Your comments are very confused.

I give up.

Good.

Of course he was talking about actors not propositions.

But are you claiming both? That actors and propositions can be made of an infinite number of symbols AND that the symbols can include unconstructable reals?

Diagonal and ProvableComputableTotal

Diagonal is a function, so it is definitely in ProvableComputableTotal. The assumption that it is different from any other function is wrong.

If ProvableComputableTotal enumerates all finite streams of symbols of the Unicode alphabet used in constructing programs, and rejects any syntactically invalid programs, then:

ProvableComputableTotal.[Diagonal]

is simply true, with no contradictions.

re Diagonal and ProvableComputableTotal

ProvableComputableTotal.[Diagonal]

is simply true,

The codomain of ProvableComputableTotal is [ℕ]↦ℕ, not Boolean.

It's a predicate too, somehow

I was confused by that myself, but Keean's one dot away from using the same notation as Hewitt's "¬ProvableComputableTotal[Diagonal] because..."

Unserializable proofs

Diagonal is a function, so it is definitely in ProvableComputableTotal. The assumption that it is different from any other function is wrong.

Suppose it's in there, like you say. Then Diagonal is provably total, and it's ProvableComputableTotal.[i] for some i. Now if we try to compute Diagonal.[i] according to the definition of Diagonal, the only possible integer result must satisfy the equation (n = n + 1). So there can't be any result but nontermination -- but Diagonal is provably total, so it can't be nontermination either!

Hence, we get a contradiction, and the proof by contradiction continues to go through as planned.

If ProvableComputableTotal enumerates all finite streams of symbols of the Unicode alphabet used in constructing programs, and rejects any syntactically invalid programs [...]

The proof assumes nothing about how proofs are encoded. If you add a new assumption that proofs are encoded as finite strings in a finite alphabet, you get a contradiction.

Among all the assumptions we could reject to avoid that contradiction, I think the two most contentious assumptions are these:

  • Proofs can be encoded as finite strings in a finite alphabet.
  • If we know there's a proof of something, we know it's true.

Formal mathematics tends to reject the second assumption. Classical Direct Logic rejects the "finite alphabet" part instead.

I prefer Classical Direct Logic's choice here. The first assumption has limited value since I'm willing to use network addresses and cryptography, and I'll need something like the second assumption if I want to "eval" newly received proof objects on the fly.

Enumerable != The Enumerating Function is Total.

I don't think I have understood this correctly:

Diagonal[i:ℕ]:ℕ ≡ 1+(ProvableComputableTotal.[i]).[i]

ProvableComputableTotal is type (N -> Proposition), so given (i:N), it returns the (i)th proposition. Assuming . takes precedence over +, we must assume that all propositions take a single argument of type N and return an N, which is no problem, and then we add one to it?

In the case that Diagonal is in ProvableComputableTotal, then for some (i) ProvableComputableTotal.[i] is Diagonal, so in that case the above reduces to:

Diagonal[i:ℕ]:ℕ ≡ 1+Diagonal.[i]

So the paradox would appear to be that if Diagonal is provable total, then it is not provable total? The problem is "provable total" is an impossible function due to the halting problem?

Is it possible that ProvableComputableTotal can enumerate all total functions, but is itself not a total function? Then the definition of Diagonal is not total by definition (as it calls a non-total function, and total functions must only call other total functions). It would seem enumerable and total are different things. Isn't the above just a proof that enumerable and total are different concepts?

Further, total functions are a subset of partial functions (they are partial functions that just happen to be totally defined). Partial functions are countable (as that is pretty much the definition of a Turing machine), therefore total functions are countable (a subset of a countable set is countable).

It is easy to see that ProvableComputableTotal must be a non-total function, but that does not stop the set of total functions being countable; therefore enumerable != the enumerating function is total.

Edit: In fact I think it is possible to enumerate all functions, and that a subset of those are total, so must be enumerable, but we cannot know which due to the halting problem.
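
A sketch of what enumerating all the partial functions can look like, with program text as the encoding; the alphabet and the syntax check below are placeholders, not a real parser:

    # Enumerate every finite string over a finite alphabet, keeping the
    # ones that pass a decidable syntax check. Every program text (and so
    # every partial function we can define) appears at some index.
    from itertools import count, product

    ALPHABET = "abcdefghij(){}:=+-* "   # illustrative finite alphabet

    def syntactically_valid(text):
        # stand-in for a real (decidable) parser
        return text.count("(") == text.count(")")

    def all_programs():
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                text = "".join(chars)
                if syntactically_valid(text):
                    yield text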

Misreadings

Edit: I was correcting some misconceptions, but it looks like you've got it now. Reformulating my response...

It would seem enumerable and

It would seem enumerable and total are different things. Isn't the above just a proof that enumerable and total are different concepts?

"Recursively enumerable set" is a term with a particular meaning. I think you'd fare better looking it up than listening to me try to describe it in full, but basically: If an algorithm can generate a sequence that eventually lists every element in the set, but doesn't list any element outside the set, then the set is recursively enumerable.

So the paradox would appear to be that if Diagonal is provable total, then it is not provable total? The problem is "provable total" is an impossible function due to the halting problem?

The point of the proof is to show that Classical Direct Logic's theorems can't be recursively enumerated. By way of contradiction, the proof supposes that they can be. ProvableComputableTotal uses that enumeration of theorems to do its own recursive enumeration of provably total procedures. (Actually, I'm not quite sure how we're supposed to leap to having procedures if all we had before were the theorems, but I think it might involve parsing the theorem's syntax and reinterpreting it.)

Is it possible that ProvableComputableTotal can enumerate all total functions, but is itself not a total function?

The proof Hewitt posted says it's total, but it says that without much explicit justification. I think the idea is that if you call it with index 4, it runs the recursive enumeration of theorems until it sees a fourth termination proof, which is always going to happen sooner or later.

Partial functions are countable (as that is pretty much the definition of a Turing machine)

I think they won't be countable in this system, if only because of embedded Actors.

It is easy to see that ProvableComputableTotal must be a non-total function, but that does not stop the set of total functions being countable; therefore enumerable != the enumerating function is total.

Hmm, I'm not sure I follow, but don't make the mistake of leaping from "countable" to "[recursively] enumerable."

The set of Turing machines is countable and recursively enumerable. The set of Turing machines that diverge is countable but not recursively enumerable.
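
To make that contrast concrete, here is a dovetailing sketch in Python; the machine encodings and the bounded-run helper run_for are assumptions for illustration, not any particular library:

    # Recursively enumerate the HALTING machines by dovetailing.
    # run_for(m, steps) is an assumed helper: run machine m for a bounded
    # number of steps and report whether it has halted yet.
    from itertools import count

    def halting_machines(machines, run_for):
        seen = set()
        for bound in count(1):
            for index in range(min(bound, len(machines))):
                if index not in seen and run_for(machines[index], bound):
                    seen.add(index)
                    yield machines[index]

    # Every halting machine is eventually emitted, once the bound covers
    # both its index and its running time. The diverging machines admit no
    # such enumeration: no finite run certifies "never halts".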

Terminology

So I am saying that ProvableComputableTotal is countable but not enumerable then, because we can enumerate all partial functions but we cannot define a terminating filter to find all total functions (as it is the halting problem).

If the partial functions of a Turing machine are countable, and as ActorScript is implementable on a Turing machine, then they are countable (but may not be enumerable). If we cannot implement DL on a Turing machine it's a bit useless for any computational use.

Does CDL not allow partial functions then?

In any case, is this relevant to the larger argument? Because we can enumerate all the partial functions, which includes all the total functions. In which case we can Godel number all the partial functions, which means all the total functions have Godel numbers; we are just not sure from the Godel number whether the function is total or partial.

Does that not mean the important property is countable, not enumerable?

gah I figured this out

In Hewitt's world every possible function has a distinct symbol, not just functions that you can specify in finite form, so just the fragment of a string f where f names a function has the order of the continuum.

The argument about propositions is a fallacy though, he's assuming that he starts with a list of proofs where every total function (over naturals) is in the list, and tries a diagonal on it. But who would think that you could have a proof for every unspecifiable function? That's an odd assumption.

He's not talking about propositions in the normal sense, he's not talking about symbols in the normal sense.

He's given a new definition of both that is so expansive that he's no longer reasoning about mathematics as normal people understand it, because in it, mathematical objects aren't specified by their properties and reasoned about; they're all mystically addressed directly.

For him the string "f" specifying one function can't be stored in finite form. Not even a single function can be named in finite space.

re terminology

By hypothesis, ProvableComputableTotal and Diagonal are procedures, not functions.

Does CDL not allow partial functions then?

A function f:ℕ ↦ ℕ is in some sense a partial function of the type ℕ ⨆ ℝ.

The DL paper gives typing rules for functions but does not axiomatize them otherwise.

Suppose we define a function f in DL, in terms of a non-total procedure.

One implication is that if we construct an object x in the domain of f, we can not automatically say we have constructed f.[x]. (This would require proof.)

What if we can in fact prove that the procedure defining f does not terminate for input x? Does that mean we have proved f is "not a function?"

The basic answer appears to be "no" for the reason that there are no axioms of totality for functions.

Conceptually, we could say that "f" is a function but our definition of it is incomplete.
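
A minimal sketch of that situation, in Python with an illustrative predicate: f is written as an unbounded search, which is a perfectly good definition to write down, yet nothing guarantees the search returns for every input.

    # f is 'defined' by an unbounded search; it is a legitimate partial
    # function, but constructing an x does not construct f(x): for any x
    # where the chosen predicate has no witness, the search never returns.
    from itertools import count

    def is_witness(x, n):
        # illustrative predicate, chosen arbitrarily
        return (x + n) % 7 == 0 and n % 2 == 1

    def f(x):
        for n in count(0):
            if is_witness(x, n):
                return n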

Diagonalise partial functions

So we can diagonalise partial functions, and partial functions are countable, irrespective of support for Real numbers, by numbering their bit-vector representations.

enumerating partial procedures

You can enumerate all deterministic procedures (including non-total procedures).

Given Diagonal ≡ DetProcedure.[i]

DetProcedure.[i].[i] must diverge.
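
A sketch of why, with det_procedure standing in for the assumed decoding of the i-th deterministic procedure:

    # Diagonalizing over ALL deterministic (partial) procedures.
    def det_procedure(i):
        return lambda n: n * i     # illustrative stand-in decoder

    def diagonal(i):
        return 1 + det_procedure(i)(i)

    # In the real enumeration diagonal itself appears at some index k, so
    # det_procedure(k)(k) unfolds to 1 + det_procedure(k)(k): the call can
    # never return, i.e. it diverges. For partial procedures, divergence
    # at one input is permitted, so there is no contradiction; the
    # diagonal only refutes an enumeration whose entries are all total.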

You can enumerate the definitions of non-deterministic procedures but some of these denote an uncountable number of actual procedures.

Procedures are their definitions.

This is where we disagree: there are not uncountably many procedures. A procedure is its definition. There are procedures that return possibly infinite streams, but returning a different stream is not necessarily a different procedure. A procedure that returns a random bitstream is a single procedure.

Well Keean

One of the "proofs" in this thread assumed that there is a function for every possible ℕ->ℕ mapping.

That's not a list of functions that can be defined in finite source, that's a whole order of the continuum. The real numbers can be mapped to a subset of these mappings.

All that crap about mapping reals to unfinished streams has nothing on this. Hewitt assumes that the symbol for a single function has to have as much information as a completed, unspecifiable real.

This is not mathematics as we're used to, because here every function has a concrete address and the set of functions is not a set of definable functions. This is way off the deep end.

What they're calling math is much much larger than math. They don't name functions or numbers and reason about them, they mystically address them directly.

mappings N -> N are countable?

I think functions from N -> N might be countable. If we start with all functions that only change the first element of N, there is a single function, which is the identity. If we consider functions that only change the first two, you only have swap and id.

So for functions that only change the first 'm' elements of N, leaving the rest unchanged, there are (m!) permutations. So if we only consider functions that change the first 'm' elements we have:

sum (n = 1 .. m) n!

possible functions. As m increases the sum is ever increasing but always finite, so doesn't this make the members of N -> N countable? For any finite 'm' we know how many unique functions mapping [0 .. m] -> [0 .. m] there are.

They can't be

Take the real number line from 0 to 1.

Excuse me for using a notation that's more familiar to me:

(define (real_to_natural_function r)
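    ;; nth-digit is an assumed helper: returns the i-th digit of r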
    (lambda (i) (nth-digit r i)))

There, now you can turn the reals into unique ℕ->ℕ mappings, and of course the reals aren't countable.

What's wrong with your countable set is

that it misses an entire class of functions.

You NEVER get to a single function that changes an infinite number of inputs.

With that caveat, yes, you do get a countable set. This is a subset in a similar way: the set of FINITE subsets of the naturals is a countable set, but the whole power set gives you the continuum and a way to construct the reals.

I talked about this in another post.

There is a continuum of functions that change an infinite number of inputs, and you don't generate A SINGLE ONE OF THEM, EVER.

Not one-to-one

Yes, you are right, it's not a one-to-one correspondence. Let's stick to programming languages that can describe programs that can actually exist.

re a procedure is its definition

A procedure that returns a random bitstream is a single procedure.

It is hard to reconcile that with the denoted range of the procedure:

GenF.[] → Memoize.[[x:ℕ] → (0 either 1)]

GenF always returns a procedure actor. With probability 1, no two of the procedure actors it returns are the same. The procedure actors that are semantically permissible to return from GenF are uncountable.
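
For concreteness, here is a rough Python analogue of GenF (a sketch, not Hewitt's construct; random.randint plays the role of (0 either 1)):

    import random

    def gen_f():
        memo = {}
        def f(x):
            if x not in memo:
                memo[x] = random.randint(0, 1)   # the "(0 either 1)" choice
            return memo[x]
        return f

    g, h = gen_f(), gen_f()
    # g and h each denote some fixed function from N to {0, 1}, revealed
    # point by point; with probability 1 they differ somewhere, and the
    # functions a call to gen_f might denote form an uncountable family,
    # even though gen_f's own source text is finite.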

This is the first time I've seen a memoized version used

Yes, that changes it.

The old version had no state.

Weird Semantics

Well, you can say I am not allowed to look at the source code for a function when I compare them for identity, yet I can see the source code above.

So you return a function that randomly chooses 1 or 0, but because of memoisation sticks to that random choice for any given number.

So you memoise a potentially infinite number of functions?

Actually it does not matter whether you compare the source code or the binary object code output by the compiler. Hence we do not have uncountable functions, as we can count the binary object code.

But his functions can't be defined in finite code.

I just finished explaining that.

You will never finish the comparison.

Use the source

I can compare the source code, which is finite.

An expandable arbitrary precision version is realizable

It occurs to me that you can still enumerate all functions out to an arbitrarily high N, and if you use this sort of stream thing, you can change your mind and add more entries later.

If you get rid of this uncountable stuff, is what's left useful?

er, except for his non-deterministic ones

where the code doesn't predict output anyway.

Non determinism

I think non-determinism is fine, I have no problem with (0 either 1), but infinite memoisation is just a fantasy; you could never implement this. This is not a programming language.

re infinite memoisation

infinite memoisation is just a fantasy,

We usually model our programming languages as Turing machines.

You are suggesting that all of computer science be refounded exclusively on something less powerful, such as linear bounded automata.

That might be interesting, but you will have to do a lot of work to make a case that it's a useful approach.

Intel have done the work.

Intel have already done the work for me. That web browser you are using to post, and this site itself, are the result of finite state machines. There are no infinities anywhere in there. The Turing machine is bearable because it only has an 'infinite' tape, with finite symbols. We can allow this because it models the fact that a computer can always have more (but a finite amount of) memory.

As for uncountable functions: they are just not possible to implement, so you will write some code, think you have done a wonderful job, and have no model to understand why it fails. It is a poor model for programming if it allows unrealisable programs to be expressed in its type system. In fact it is unsound, as the definition of soundness is that the type system prevents programs that will fail from compiling. Your memoisation function will fail, and so a type system that accepts it is unsound.

people do things

Intel ... There are no infinities anywhere in there.

Nothing in the specification or implementation of an intel processor imposes an upper bound on the amount of storage it can access.

Unbounded but finite.

The processor is limited to 64-bit addresses due to the width of the address register in the programmer's model, although real chips currently use only the bottom 48 bits. This limits the amount of physical memory directly connectable. You could use paging tricks to page in more, for example using (2^64 - 1) bytes to select an external address and writing a byte to one of 2^(2^64 - 1) addresses. However there is never infinite memory, just an unbounded finite amount.

In reality people are not going to build custom hardware like that, so for normal use the limit is the number of memory slots on the PC motherboard and the maximum memory addressable in each slot. The dual Xeon motherboards that can address 96GB probably have the largest addressable memory commonly available at the moment.

unbounded PCs

A PC can use an unbounded number of mass storage devices. Humans can manufacture storage devices much, much, much faster than a PC can fill them. A thorough model of what a PC can do is, very literally, a Turing machine.

You don't gain anything by saying "that's not infinite, that's unbounded!" In constructive math, infinities are unbounded constructions.

At this point, you are complaining about the word choices of the technical vocabulary of math, rather than thinking about what the math is actually saying.

Uncountable is worse than unbounded, by the way

if you're just unbounded then you don't have infinite elements, just unbounded finite ones. If you have uncountable elements then you have mostly infinite elements. :/

Countable infinities are made of finite elements. When you say "infinity is not a number" then you're in a countable realm. When you say "almost all of my numbers are infinite" then you're in the uncountable realm.

re countable, uncountable

If you have uncountable elements then you have mostly infinite elements.

If we define a suitable equivalence relation, each real number can be identified with an infinite list of bits.

The Real procedure in Hewitt's paper returns an object that occupies finite *space* but that satisfies a useful set of axioms for an infinite list of bits. The GenF function I gave has similar properties.

We can not exhaustively enumerate or even just count the lists Real.[] might return. They are uncountable.

finite elements

Mathematics is agnostic as to whether the universe contains a finite or an infinite amount of information.

When you say "infinity is not a number" then you're in a countable realm. When you say "almost all of my numbers are infinite" then you're in the uncountable realm.

Those are OK hand-waves but of course those concepts can be made rigorous and programs can compute with certain objects drawn from an uncountable type.

Hand waves

I deliberately gave rules of thumb; they help intuition.

Direct Logic or whatever you call it needs to split off a useabl

Direct Logic or whatever you call it needs to split off a useable subset - or, alternatively, it needs to be totally limited to a usable subset - it's not clear to me if the continuum part buys you anything. It needs normal mathematics, Mathematics for mortals and machines, it needs propositions that can be proved or disproved.

Mathematics is agnostic as to whether the universe contains a finite or an infinite amount of information.

It's possible to prove a proposition that has a finite amount of information.

It is impossible to prove a proposition that has an infinite amount of information - if you think you've done so, then you've actually found a finite form for that proposition, i.e., it never actually had an infinite amount of information.

I'm going to use the word "realizable" to indicate mathematical objects that can be specifically referred to with some sort of finite specification. I'm going to use the word "unrealizable" to refer to all other mathematical objects, those objects from continua, each of which requires an incompressible infinity of information to refer to.

By defining mathematics as having propositions and proofs and functions that can not be specified without an infinite amount of information you've created something new, something with the form of mathematics but which is not mathematics.

Now it's not unusual for mathematics to reason about unrealizable objects, but it does so by relying directly on realizable propositions and only realizable propositions.

By defining a "mathematics" that is full of unrealizable propositions, unrealizable functions, unrealizable "actors" you've expanded "mathematics" to something new that, for the most part, you can never use. If you want to use this mathematics, you'll have to reason about it indirectly, without actually using it, in just the same way that people reason about unrealizable real numbers without specifying any individual unrealizable member of that set.

Or maybe you don't need that unrealizable part at all. Maybe there isn't a use for unrealizable propositions. Can anyone find a motivating example?

i thought you quit, Josh.

Direct Logic or whatever you call it needs to split off a useable subset

Since you know next to nothing about DL, its motivations, the history, what the theory says, and why, and since by your own admission you cannot follow discussions of it, why are you opinionating on it?

it's not normal mathematics either because

he's combining the computer science idea that everything has an address and can be referred to directly with uncountable things.

And then he's doing metamathematics, talking about these propositions that, as you said, are useless because they can't be realized.

But metatheory about things you can't realize is fine. So maybe there's something useful in it.

But Hewitt doesn't distinguish between the various classes of realizable and unrealizable when he posts, he just says "Actors" this and "Actors" that, so it's confusing and makes you wonder if he's confused.

Existing on the roadmap

So I am saying that ProvableComputableTotal is countable but not enumerable then [...]

[...many paragraphs later...]

Does that not mean the important property is countable, not enumerable?

ProvableComputableTotal only exists as part of the contradiction we build during the proof-by-contradiction. If you agree with the conclusion that the theorems/proofs quantified over by CDL can't be recursively enumerated, then ProvableComputableTotal doesn't exist.

It sounds like you would still disagree with Hewitt's use of uncountable syntax. Frankly, I don't know why Hewitt's using uncountable syntax. (I know why I am: I attribute each piece of syntax to an authoritative person/organization who maintains its meaning, and I don't intend to restrict these authorities to being countable.)

[...] because we can enumerate all partial functions but we cannot define a terminating filter to find all total functions (as it is the halting problem).

Hmm, that reasoning doesn't work for me, since we can recursively enumerate the halting Turing machines. I find the ProvableComputableTotal argument more convincing.

If the partial functions of a Turing machine are countable, and as ActorScript is implementable on a Turing machine, then they are countable (but may not be enumerable).

The proof is showing it's not possible to recursively enumerate the theorems/proofs that CDL lets us quantify over. For any given closed formalization (e.g. a finite implementation on a Turing machine), the theorems/proofs we can construct are recursively enumerable... but we can still quantify over more theorems/proofs than that.

I think this is where you'd say those extra theorems/proofs don't exist. I can't say that's wrong, but perhaps you could get on the same page as Hewitt if you thought of these as features that "exist" on the project roadmap, rather than existing in any one particular version of the implementation.

Does CDL not allow partial functions then?

Hewitt gave Factorial as an example of a partial function.

The proof Hewitt posted says

The proof Hewitt posted says [ProvableComputableTotal] is total, but it says that without much explicit justification.

It hypothesizes that ProvableComputableTotal is computable and total in order to obtain a contradiction from that hypothesis.

ProvableComputableTotal

ProvableComputableTotal is not computable, as it is equivalent to the halting problem. If you do this instead with ProvableComputablePartial (a superset of PCT) you can diagonalise this without encountering the problem.

you can diagonalise this without encountering the problem.

Yes, if you are talking only about deterministic procedures.

If i is the number of the resulting Diagonalize, then Diagonalize.[i] must diverge.

Even non-deterministic procedures have a definition

If we have a procedure that takes a number of different actions based on a random primitive, that makes it non-deterministic, but it is still a single procedure, with a single definition.

Constructing ProvableComputableTotal

It hypothesizes that ProvableComputableTotal is computable and total in order to obtain a contradiction from that hypothesis.

What the proof rejects is the original hypothesis that the theorems/proofs of CDL can be computationally enumerated.

I don't see ProvableComputableTotal being quite the same as that hypothesis. ProvableComputableTotal is just one alleged consequence of it, and I only barely follow the reasoning in between. Here are two ways I'm thinking the gap might be filled in:

  • Perhaps I can parse a termination theorem to extract the definition of the terminating procedure, and then perhaps I can construct the procedure from that definition.
  • Perhaps I can somehow recursively enumerate all procedures along with their corresponding termination propositions (which may or may not be provable). Then perhaps I can test equality of these propositions to the recursively enumerated provable theorems.
    • (This one seems unlikely. I think procedures are supposed to be uncountable, right?)
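
To illustrate the first bullet above: purely hypothetical glue, since DL specifies nothing like this. The "Total(...)" theorem format and the use of eval are my inventions for the sketch.

    # Hypothetical: suppose termination theorems arrived as strings of the
    # form "Total(<definition>)". We could parse out the definition and
    # build a callable from it.
    def procedure_from_termination_theorem(theorem):
        assert theorem.startswith("Total(") and theorem.endswith(")")
        definition = theorem[len("Total("):-1]
        return eval(definition)   # trusting the prover that it is total

    inc = procedure_from_termination_theorem("Total(lambda n: n + 1)")
    assert inc(4) == 5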

the deliberately incomplete structure of CDL

AFAICT,

CDL gives a theory of types with primitive types Boolean, ℕ, Sentence, Proposition, Proof, and Theory. From previously constructed types, new types can be constructed for disjoint unions, functions, and so forth.

Axioms of construction are given for ℕ.

A strong axiom of induction is given for ℕ and as a result it has but one model.

Axioms of construction for typed sets are given but there is no set-specific strong axiom of induction. Sets are established by a characteristic function.

(Therefore, btw, I was wrong earlier when I said in a nearby thread that DL lacked a universal set for ℕ, but I think I was still correct that it lacks a universal set for Set◁ℕ▷. We don't have a provably universal powerset of the naturals because we have no categorical induction axiom for Set◁ℕ▷. We do have a universal set for ℕ itself because the categorical induction axiom for naturals lets us prove that all naturals are found in the set given by the characteristic function that always returns true. Hope I have that right now.)

The paper doesn't give (or I missed) axioms for Boolean but these are not too difficult to come up with. :-)

Rules of construction are given for typed expressions and sentences, which can be abstracted to form propositions. No axiom of induction is given for these.

A conservative but natural set of rules of inference is given.

Here is one big point:

Unlike what people are used to:

CDL does not give a strong axiom of induction for types. The set of types is open-ended. You can extend DL with new types.

CDL does not define any particular construction rules for functions, families, etc. You can invent new ways to introduce objects of this kind.

CDL is rigorously open-ended/incomplete.

One crude reason you can not enumerate the proofs of CDL is because CDL does not give you enough specification of proofs to be able to do that! (At the same time, you can enumerate all of the natural numbers in CDL, because the axioms for natural numbers give you enough to do that.)

The core of CDL is deliberately ambiguous in an unusual way. It is not simply that there are multiple possible models of CDL. The ambiguity is deeper than that. There is no model for CDL -- only for CDL plus additional axioms (which we could call the "sub-theories" of CDL).

The "propositions of CDL are uncountable" because it is possible to define theories within CDL in which, for example, we have a kind of constructive way to choose a truly random real number.

If you find uncountable propositions unpalatable, stick to some other sub-theory of CDL.

CDL is not the kind of "foundation" wherein we have specified, once and for all, all the axioms we need.

CDL is the kind of "foundation" wherein we have axioms atop which we expect to add additional axioms and definitions.

Yes, but the gap

Reading this, I see I'm already on pretty much the same page as you are, and I hope others can take heed...

But what does all of that have to do with the reasoning gap I'm concerned about?

re Yes, but the gap

what does all of that have to do with the reasoning gap I'm concerned about?

I take you to refer to this alleged gap:

What the proof rejects is the original hypothesis that the theorems/proofs of CDL can be computationally enumerated.

I don't see ProvableComputableTotal being quite the same as that hypothesis.

I assumed you were just momentarily confused since the hypothesis about ProvableComputableTotal is exactly a statement that the theorems/proofs of CDL can be computationally enumerated.

Why do you believe "ProvableComputableTotal [is not] quite the same as that hypothesis."?

ProvableComputableTotal

ProvableComputableTotal enumerates provably total procedures, not theorems/proofs. If all we assume is a way to recursively enumerate the theorems/proofs, then where do we get the procedures from?

re ProvableComputableTotal

If all we assume is a way to recursively enumerate the theorems/proofs, then where do we get the procedures from?

Any simple axiomatization of a Turing-complete, enumerable type of procedures of type [ℕ]↦ℕ.

We could create a sub-theory for a very restricted lisp-dialect, for example.

I think that just makes

I think that just makes things more complicated. If you're talking about defining a closed theory, then don't we have to prove that it lets us define all the provably total procedures of CDL (completeness)? Secondly, how do we turn theorems in that sub-theory into total procedures that we can apply to get contradictions in the main theory (soundness)?

I'd rather avoid coming up with a new programming language with full completeness and soundness proofs (if that's even possible) just to fill in a gap in one proof. Instead, how about using the built-in procedure-defining mechanism Hewitt's been showing us?

I half-expect there's some way to extract a total procedure when we have a termination theorem. If so, then how do we recursively enumerate those termination theorems, when all we've assumed is that we can recursively enumerate the theorems of CDL? After all, we can't trust a subset of an r.e. set to be r.e.

you only need some axioms (re I think that just...)

I'd rather avoid coming up with a new programming language with full completeness and soundness proofs

You don't need any of that to fully formalize the proof from Church. Just a few uncontroversial axioms.

Damn it, I figured it out ... a fallacy and ?metahypermathematics

Since I didn't know the symbols you were using, I assumed a meaning from what you were claiming the result was... I assumed you were taking a direct route to what you wanted to prove.

Ok I understand now, and I am sure this is a fallacy.

Your assumption that a list of all provably total functions from ℕ->ℕ covers all functions from ℕ->ℕ is incompatible with the assumption that proofs are of finite length!!!! You can't even specify those functions in a finite form let alone prove that they're total!

Doh. That says nothing about whether provable propositions of finite length are countable, because your set of propositions wasn't, as you intended, a subset of finite provable propositions; it's drawn from a larger set. You just assumed that every possible total function has a proof that it's total. Doh doh doh.

Of course a list of all mappings from ℕ->ℕ has the order of the continuum, that was obvious because a subset of those functions can be mapped to the real number line [take i as the nth digit of the real].

If you're claiming that every possible ℕ->ℕ function is an actor, then you're claiming that actors can't be specified in a finite form at all. They're not programs at all. Well, yes, if you start with the assumption that your set of functions forms a continuum, then you can't specify any arbitrary unique one any more than you can specify an arbitrary unique real.

But since you can't even specify members of a continuum in finite form, you can't make proofs about them in a finite form either!!!

Statements that you can't make aren't theorems! This isn't any different than if you replaced functions with reals.

God!

In your system just the fragment of a string "f" where f is a function that you want to talk about requires the order of the continuum, it requires more information than a member of a countable infinity.

But there has never been a mathematical proof addressing a specific object that can't be specified in some sort of finite form. And if all you specify is that there exists an f (which is what we do) then you haven't actually had to address it.

What you call "mathematics" is something that has never existed, can never exist.

Also, the fact that you assume that there can be a proof of totality for every function is an assumption of completeness that isn't warranted. And that's a separate issue!

Now if you want a metamathematics that isn't about mathematics, but about a hypermathematics that is a superset of actual mathematics, that's interesting, but it has all of these strange properties only because you assumed your way into a larger territory than what mathematicians have ever been able to work with, by definition!

What you call mathematics is uhm metahypermathematics?

Goedel's first theorem

Goedel's first theorem meta-mathematically proves the incompleteness (or else inconsistency) of any axiom set strong enough to do arithmetic.

That seems reasonably safe to say, although one has to watch out for what one means by the phrase "do arithmetic". So far I haven't gotten the impression we were having a problem over the first theorem, though, so it may be safe to let sleeping dogs lie, on that point.

Goedel's second theorem, which is really kind of a hand-wave pseudo-theorem, imagines a logic in which the first theorem is not done meta-mathematically, but is instead done directly in the system we are talking about.

The technical part of this, I have no problem with; it's one of the key points brought out by my blog post. The part about "hand-wave pseudo-theorem" is, on consideration, interesting. If someone wants to believe the second theorem is on shaky ground, they'd likely favor such a characterization. If someone sees the theorem as an extremely robust result, i.e., its validity not sensitive to minor variations in formulation, they would be likely to use different words that almost mean the same thing, but not quite.

To accomplish this, the second [theorem] requires that we construct a sentence which can formally prove, within the system, that one of its terms [is] its own integer "encoding".

Note particularly that I've corrected "its its" to "is its"; though I'm confident enough to make the correction, I'm also wary enough to flag it.

The system doesn't have to "know" which arithmetic function coincides with the system's own reasoning about arithmetic functions.

I conjectured, when you had that violent allergic reaction to my blog post on the subject, that the trouble might have been caused by my linking you into the middle of the treatment so that you missed the earlier part that explained the objectives of the treatment. I'm now dubious of that conjecture. It now seems more plausible that the trouble arose because I chose to explore those facets of the situation that would clarify things for me, and the set of facets needing clarification to achieve clarity for you, whatever overlap there is, involves some facets outside the set I needed. I focused on minimizing the assumptions needed, and minimizing the reasoning used; and I was particularly interested in avoiding assumptions about how the system being studied actually works. Even when treating the second theorem, which (as you point out) moves key reasoning into the system being meta-mathematically reasoned about, the assumptions I used are of the form 'if the system deduces this, then it also deduces that', which is indifferent to how the system deduces either. That treatment asks that if the system deduces that the arithmetic function confirms a proposition (reminder: although the function coincides with the system, the system doesn't have to "know" that) — if the system deduces that the function confirms a proposition, then the system confirms the proposition. Were that not so, the system would be deducing things about the function that aren't true.

The thing that's really striking about Gödel's approach, the reason he's so very famous, is its robustness — that he completely cut the ground out from under any attempt to make self-proof of consistency "safe" by any type-like means, by pointing out that the system under study doesn't have to reason self-referentially at all. It only has to reason about the behavior of arithmetic functions. Entirely non-self-referentially.

Gödel's incompleteness "proofs" are incorrect because of types

Gödel's incompleteness "proofs" are incorrect because they make use of a nonexistent "self-referential" proposition allegedly constructed using a nonexistent (because of types) fixed point. The confusion is that although strings can be encoded as Gödel numbers, sentences and propositions cannot be so encoded because of types (and also because there are too many of them).

The Church/Turing inferential incompleteness theorem for ℕ and similar closed mathematical theories is very robust. Note that the Church/Turing proof of inferential incompleteness does not make use of nonexistent "self-referential" propositions.

re Goedel's robustness

The thing that's really striking about Gödel's approach, the reason he's so very famous, is its robustness

If Goedel were still with us, I'm not sure he would agree with you (and this has nothing to do with Hewitt per se).

In the 30s he hand-waved the second theorem and promised a rigorous presentation later. He never delivered.

By 1964 he's nodding to Turing for having produced a more general and simpler result and is at least a bit clearer that systems can not provide self-consistency proofs *of a very particular kind*.

Hewitt's account of the

Hewitt's account of the history of these ideas would necessarily be colored by his fundamental misconceptions.

Shutt: put up?

would necessarily be colored by his fundamental misconceptions.

That tautology applies to anyone and everyone. It can be used as an underhanded way to discredit someone without actually saying anything substantive about them.

If you are going to suggest that Hewitt has "fundamental misconceptions" I hope you will lay them out specifically and clearly.

I wrote a long and detailed

I wrote a long and detailed post, which you completely ignored in favor of seizing on the high-level observation at the end of it.

re: long and detailed post, which you completely ignored

I hope you have seen I gave a detailed reply to that in reaction to your fair complaint.

thx

I have seen, and appreciate, and mean to reciprocate. (I'm painfully aware I need to budget my time on LtU better than I've done just lately.)

History requires viewpoints

History requires viewpoints. The view from "nowhere" does not exist.

However, histories can be improved!

In particular, suggestions for improvement are greatly appreciated for the histories in Inconsistency Robustness in Foundations.

John Shutt: what do you mean about my "fundamental misconceptions"?

John,

Exactly what do you think are my (in your words) "fundamental misconceptions"?

Thanks!
Carl

A fair question fairly

A fair question fairly asked, and for the first time in months I find myself inclined to reply to you. Also a question whose answer should not be dashed off quickly or casually.

detailed reply to John S. (re Goedel's first theorem)

The technical part of this, I have no problem with; it's one of the key points brought out by my blog post. The part about "hand-wave pseudo-theorem" is, on consideration, interesting. If someone wants to believe the second theorem is on shaky ground, they'd likely favor such a characterization. If someone sees the theorem as an extremely robust result, i.e., its validity not sensitive to minor variations in formulation, they would be likely to use different words that almost mean the same thing, but not quite.

In the seminal paper ("On formally undecidable propositions of Principia Mathematica and related systems") the second theorem is presented in section "{4}". A proof is given only in outline, not in detailed construction. At the end of the paper a rigorous version is promised for a later paper, but that paper was never to appear.

(I am here using the version from the Martin Davis book.)

The theorem is described in relation to a recursively enumerable set of formulas, χ. The theorem holds that "The SENTENCE which asserts that χ is consistent is not χ-provable."

The vagueness in this informal treatment of the alleged proof concerns the question of what it means for a proposition in χ to "assert that χ is consistent".

Let w be the alleged proposition in χ that asserts that χ is consistent.

Let u be a proposition which is unprovable in χ, where u is of the sort that if it had a proof, we could encode that proof as a number and the number would be an existence proof that u is false. (In Goedel's notation, u is called "¬ Bew (17 Gen r)".)

The outline of the proof then assumes a theorem can be proven. Namely that:

  w → u

From this, Goedel concludes that a proof of w gives rise to an obvious proof of u. Since u is known to be unprovable, if we prove u we can then go on to derive a contradiction. So, concludes Goedel in the 2nd theorem, w can not be proved.

The handwave comes on full force when Goedel tries to informally convince us that u can be constructed in χ in such a way that we can prove u ⇔ ¬ u.

Goedel has earlier in the paper, for the 1st theorem, shown how to construct that u as a fixpoint we could informally notate as:

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y) where the existence of an n that encodes a proof for the proposition encoded as y implies ¬ u

Goedel's 2nd theorem hand-wave runs into trouble if we can have a system in which:

1) That definition of u is illegal.

2) The system is able to prove its own self-consistency.

So to speak: a system can prove w so long as it can not express u.

Even when treating the second theorem, which (as you point out) moves key reasoning into the system being meta-mathematically reasoned about, the assumptions I used are of the form 'if the system deduces this, then it also deduces that', which is indifferent to how the system deduces either.

Hewitt's stance, and I see no problem with it, is that if you can construct a proposition u such that w → u (as above), then your system is broken.

As evidence that it is broken: you are forced to somehow not admit the trivial (even if not dispositive) self-consistency proof that DL has.

Non-recursive

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y) where the existence of an n that encodes a proof for the proposition encoded as y implies ¬ u

I'm not sure what you're suggesting is wrong with this. It's a non-recursive definition, after all:

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y)

Using this definition, we can immediately conclude the following:

the existence of an n that encodes a proof for the proposition encoded as y implies ¬ u

re non-recursive

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y)

Using this definition, [....]

The definition contains a free variable, y. Therefore:

Using this definition, we can immediately conclude the following:

  the existence of an n that encodes a proof for the proposition
  encoded as y implies ¬ u

is not false so much as meaningless.

y

Yes, your definition contains a free variable, y. I didn't put it there.

We're probably better off debating from my similar post here, where you did provide full definitions for most of the free variables.

re y

Looking back at the earlier thread, you looked at this notation and said it was not a recursive definition... you said that there was no fixpoint here:

X := q ∈ K where (X = Rq(q))

The fixpoint semantics of this definition are the only thing that gives q and X a specific meaning. Do you not see that X is defined in terms of itself there? That it is not some arbitrary q from the set K but is instead a very specific q in K?

Not replying here

Sorry, I'm not going to reply to this here. Dragging pieces of that thread's discussion here will have the effect (if not the intention) of censoring the on-topic conversation in this thread.

re not replying here

I hope you will consider that you could have simply responded substantively in the thread where you say that conversation belongs; instead you decided to act like a state censor and scold me for responding to you where you most recently commented on the topic.

(You'll get this in 20 years, probably.) What are you, 20-something?!

Thread maintenance

Sorry, as soon as I logged off I realized my comment looked like it was entirely directed at you. I was short on time, and I wanted to make up for my mistake of replying here by stating my reason for suggesting that I, you, and others consider replying on the other thread instead of this one. But I effectively used your post as an example! Oops.

Meanwhile, I didn't have time to post a reply to the technical content. I do still hope to do that at some point (in the other thread).

decided to act like a state censor

If I encourage some future comments to move from one thread to a more relevant one that's a click away... there's hardly any censorship damage in that. It might even improve the accessibility of both threads' information.

Turnstile

I think what you are missing here is that Direct Logic is reflective in a higher order way

I understand that it wants to be reflective. I see two alternatives for what the naked turnstile could mean:

1. It refers to provability in DirectLogic. The problem with that is that by Godel you then can't expose a full working understanding of this fact to the logic. And, also, Hewitt seems to prove within DL that the axioms of the naked turnstile aren't recursively enumerable. But the axioms of DL, whatever they turn out to be, will be recursively enumerable. So there's a soundness issue, too.

2. It refers to provability in a nebulously defined "mathematics" which includes all things people might eventually reason to be true. For example, maybe it includes the conclusions of ZFC, plus the Godel sentence of ZFC, plus the Godel sentence of that system, plus the Godel sentences of that sequence of systems, etc. etc. In that case, you might as well take the relation to be truth, instead of provability. In fact, I think I've heard the result stated that if you were to continue adding Godel statements for every ordinal, you would have completeness. Does anyone recognize that theorem?

Can you tell me a bit about how one might prove the non-existence of a first-order model?

Well, first order logic is complete, by an earlier theorem of Godel. So if you manage to prove that there doesn't exist a first order model, you will have proven the logic inconsistent :).

But Hewitt has made it clear that he intends a second order model and I'm not specifically interested in a first order model. I've seen constructions of second order theories that are consistent but don't have a model. They were all constructions that I don't think would be very illuminating for approaching DL, though.

"If a system is powerful and proves itself consistent, it is inconsistent!"

I don't think DirectLogic is a counterexample to even this informal statement, because it doesn't have enough axioms to pin down the turnstile enough to let us know what it's supposed to mean. Again, this is rather flip, but if you start with Arithmetic and add the string "This system is consistent" as an "axiom" (in the form of a English comment that doesn't interact with any of the other rules), that obviously doesn't break the consistency of the system. Surely you wouldn't let me get away with claiming that such a system proved its own consistency.

And this is why I'm concerned with models of his naked turnstile. Figuring out what the model could be lets you understand what his axioms tell you the symbol means, rather than relying on the documentation telling you what it's supposed to mean. And in this regard, Godel was much more careful than Hewitt seems to have been.

I understand that it wants

I understand that it wants to be reflective.

I recall at our annual town meeting a few years ago, the planning board was pushing a poorly thought out bylaw amendment; someone raised a concern about what it would do, and someone from the board responded (not for the first time that meeting) with something along the lines of 'the intent is ...'. My mumbled remark to my father, which he (unlike me) had the nerve to get up and say, was, 'I'm not worried about what you intend it to do; I'm worried about what it'll do that you don't intend it to do.' (As I recall, that particular motion was defeated.)

if you start with Arithmetic and add the string "This system is consistent" as an "axiom" (in the form of a English comment that doesn't interact with any of the other rules), that obviously doesn't break the consistency of the system. Surely you wouldn't let me get away with claiming that such a system proved its own consistency.

Actually, it doesn't matter how the system establishes its own consistency, as long as it does; an axiom is just as good as any other means. The problem is with what it does after establishing it.

I'm not sure if you caught

I'm not sure if you caught my meaning. I didn't put quotes around "This system is consistent" to signify translating that string into quantifiers. I mean literally the rule is:

-------------------------------------
    |- This system is consistent

This inference rule does not interact with the rest of the logic at all, so you can continue to prove all of the other things you could in PA [and nothing more].
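
To make the toy fully concrete, here is one way to render it, as a hypothetical Haskell sketch of my own (provableInPA is a stand-in name for a PA proof search, not any real library):

    -- A toy proof system: ordinary PA provability, plus one extra
    -- proposition that no other inference rule ever consumes.
    data Prop = PAProp String        -- an ordinary arithmetic proposition
              | SystemIsConsistent   -- the inert English "axiom"

    provable :: Prop -> Bool
    provable (PAProp s)         = provableInPA s   -- whatever PA proves
    provable SystemIsConsistent = True             -- the free-standing rule

    -- Stand-in for a real PA proof search. Its details don't matter:
    -- no rule takes SystemIsConsistent as a premise, so the extra
    -- "axiom" changes nothing about which PAProps are provable.
    provableInPA :: String -> Bool
    provableInPA = undefined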

Now, can I claim that this system proves its own consistency? Of course not. But what Hewitt has done looks to me like a more sophisticated form of this construction.

(There were two ways I could read your post. One of them called for this reply and the other didn't need a reply. Sorry if you meant the other one. :) )

I'm actually still not sure

I'm actually still not sure if I've correctly read your reading of what I wrote, but... if you're saying that a proof of consistency doesn't cause a problem if the system is unable to reason from it — well, yeah, if the system is unable to reason from it then it's not at all clear that it's really a proof of consistency.

well, yeah, if the system is

well, yeah, if the system is unable to reason from it then it's not at all clear that it's really a proof of consistency.

Right. In my flip example, no further inference is possible from the alleged consistency assertion. Because you can make no inferences, there's no reason to interpret those English words as asserting anything at all, even if an assertion of consistency was the meta-level intention.

By analogy, in Hewitt's DirectLogic, he's chosen the turnstile with the meta-level intention that it should represent provability. In DirectLogic, there are evidently things you're allowed to do with the turnstile. So it's not meaningless like the assertion in my flip example. But my point is that we still have to look at the axioms of DirectLogic to figure out what it means, rather than appeal to the meta-level intention.

Not formalizable?

It is not formalizable.

It was formalized by Hilbert and Bernays in the 1939 Grundlagen der Mathematik, Vol II. Russell O'Connor, in his paper on his formalization of the first theorem in Coq, addresses the feasibility of continuing on with the formalization of the second theorem; it doesn't look like there's any good reason to think this could not be done, with some work. These points would seem to contradict you, unless I'm reading your evaluation of Gödel's proof too broadly.

What do you mean by "the most generous reading anyone has come up with"? Hewitt states without qualification that Gödel's proofs were invalid. This is a more generous reading than those of others who filled in the details and generalized the results?

Also, could you elaborate on Hewitt "quite correctly agreeing with" Wittgenstein? I tried to engage him on this point but got nowhere; if you understand what he's saying perhaps you could explain it to me? Back when I addressed Hewitt about this I read up on some Wittgenstein scholarship pertaining to this point. There seem to be two camps: one saying Wittgenstein disagreed with Gödel's first theorem, and was wrong, and the other saying that he agreed with Gödel and was making a subtle/insightful observation. Hewitt is the only person I've seen make the claim that Wittgenstein disagreed with Gödel's first theorem, and was correct. I didn't think you saw eye-to-eye with Hewitt on the first theorem, so I'm quite surprised that you would say he was correct to agree with Wittgenstein. Could you elaborate?

Gödel's "proofs" formalized doesn't make them apply to Math

The fact that Gödel's "proofs" have been formalized doesn't mean that they apply to Mathematics or even closed mathematical theories like ℕ.

The only way to preserve Gödel's proofs is to retreat to strange first-order modal provability logics that do not provide a suitable mathematical foundation for Computer Science. This retreat was adopted by Gödel in his response to Wittgenstein's criticism that "self-referential" propositions lead to inconsistency in Mathematics (in addition to insinuating that Wittgenstein was "crazy").

Gödel's proofs are in standard non-modal first-order logic

You had better wake up to the fact that Gödel's proofs work in what essentially everybody else treats as the standard logic used for computer science — classical first-order logic with Peano arithmetic. Its problems are well known; people have their reasons to use it anyway.
So, at best, you can say "Gödel's theorems apply to logics that satisfy standard assumptions, but I give up those assumptions".
(In fact, lots of PL people prefer intuitionistic higher-order logic, but Gödel's theorems still apply there, and those people are aware of the dominance of classical first-order logic.)

Gödel's results hold only in modal provability logic (not Math)

Gödel's results hold only in modal first-order provability logic (not Mathematics).

Please see Provability Logic

Wittgenstein correct: "self-ref" prop mean inconsistency in Math

Wittgenstein was correct in writing that "self-referential" propositions mean inconsistency in Mathematics. In particular, he wrote that allowing the proposition "I am not provable." into mathematics results in an inconsistency:

Let us suppose [that Gödel's results are correct and that
therefore]  I prove  the improvability (in Russell’s system) of
[Gödel's “self-referential” proposition ] P; [i.e., ⊢⊬P where P⇔⊬P]
then by this proof I have proved P [i.e., ⊢P].
    Now if this proof were one in Russell’s system [i.e., ⊢⊢P]—I
should in this case have proved at once that it belonged [i.e., ⊢P]
and did not belong [i.e., ⊢¬P because ¬P⇔⊢P] to Russell’s system.
    But there is a contradiction here! [i.e., ⊢P and ⊢¬P]

re Grundlagen der Mathematik

It was formalized by Hilbert and Bernays in the 1939 Grundlagen der Mathematik, Vol II.

A bit of trivia: Goedel cites that in his 1964 postscript to the translation of "On Undecidable Propositions of Formal Mathematical Systems" in the Martin Davis book ("The Undecidable").

I don't read German and don't have a translation at hand, but here is a key passage from a deep overview of the Grundlagen (emphasis added):

4.3 Unprovability of consistency. For a formalism F that is consistent and satisfies the restrictive conditions, the proof of the first incompleteness theorem shows the formula ~B(m,q) to be unprovable. However, it also shows that the sentence ~B(m,q) holds and is provable in F, for each numeral m. The second incompleteness theorem is obtained by formalizing these considerations, i.e. by proving in F the formula ~B(m,q) from the formal expression C of F's consistency. That is possible, however, only if F satisfies certain additional conditions, the so-called derivability conditions. Hilbert and Bernays conclude immediately "in case the formalism F is consistent no formalized proof of this consistency, i.e. no derivation of that formula C, can exist in F." (p. 284)

In the notation being used there, this amounts to proving, within the system itself, that q is the numbering of the proposition ~B(m,q) itself.

"David Hilbert and Paul Bernays, Grundlagen der Mathematik I and II; a landmark", Wilfred Sieg & Mark Ravaglia [2004] (Technical Report No. CMU-PHIL-154]

"I am not provable." is allowed in first-order provability logic

The sentence "I am not provable." is allowed in first-order provability logic (which is not suitable for the mathematical foundations of Computer Science).

re 1st theorem, Hewitt, Wittgenstein

Also, could you elaborate on Hewitt "quite correctly agreeing with" Wittgenstein? Back when I addressed Hewitt about this I read up on some Wittgenstein scholarship pertaining to this point. There seem to be two camps: one saying Wittgenstein disagreed with Gödel's first theorem, and was wrong, and the other saying that he agreed with Gödel and was making a subtle/insightful observation. Hewitt is the only person I've seen make the claim that Wittgenstein disagreed with Gödel's first theorem, and was correct. I didn't think you saw eye-to-eye with Hewitt on the first theorem, so I'm quite surprised that you would say he was correct to agree with Wittgenstein. Could you elaborate?

I think I can do some justice to all parties here:

1) Goedel Thm 1 gives a method for constructing an unprovable proposition. The general form of the proposition is an assertion that no number exists which satisfies a certain equation. If we could prove the proposition to be true, the proof itself could be "encoded" as a number. The number would actually satisfy the equation. The system under study would contain a contradiction: a proof that no number of a certain sort exists, along with an existence proof that such a number exists.

2) In theorem 1 we are standing outside the system we're talking about. The conversion of a hypothetical proof into a number is something we see from the outside. The system itself is not assumed to prove the relation between a hypothesized proof and its numbering.

3) In theorem 2, Goedel asks us to consider systems that, internally, prove the relationship between a proof, and a numbering of that proof that occurs w/in the proof itself. (i.e. the target theory has variables that range over its own propositions and we can construct a fixpoint sentence that logically asserts that one of its terms is its own encoding).

4) Wittgenstein looking at theorem 2 said (I paraphrase): Well, duh, if you allow sentences to refer to themselves that way you get all kinds of paradoxes. "Everything I write today is false". Yadda yadda. We know that. Such self-referential sentences are basically meaningless. Don't act so surprised if you get contradictions after deciding to use such sentences.

5) Hewitt shows that self-referential sentences of the sort entailing theorem 2 of Goedel can be excluded by types -- by restricting how sentences can be constructed to exclude overt self-reference. Conceptually, this is a formal way to take Wittgenstein's advice to avoid those kinds of sentences.

6) Hewitt correctly points out that Goedel's 1st theorem proof is wrong if you try to express it in terms of Direct Logic domains like Sentences and Propositions. It's wrong because, even meta-mathematically, the rules for constructing Sentences still apply. I appear to differ from Hewitt slightly because I also correctly point out that Goedel's 1st theorem proof doesn't literally have to use those domains of Direct Logic. It could operate over strings.

It's not much of a difference. In my view, the fidelity of the first theorem's meta-math to the subject system is an informal, empirical truth only. It is not some essential formal math result. You can tell that Goedel's first theorem tells you something about a subject system because it looks right, not because there's a rigorously formalized proof.

My take on Goedel's 1st is a generous reading of the 1st theorem, contrasted with Hewitt's reading, which I think is probably closer to what Goedel himself believed about it.

Thanks for your reply,

Thanks for your reply, which, on the whole, was very reasonable. I can see what you mean now by saying Hewitt has a point in agreeing with Wittgenstein. I don't agree that Wittgenstein was responding to the 2nd theorem, but I also don't really care to argue about what his intended meaning really was. When Hewitt invokes Wittgenstein, it's to back up the claim about the 1st theorem you outlined in (6), as far as I can tell. This is why I was confused when you said he had a point about Wittgenstein. Your response here clears things up for me. A quick point: while I tend to think having a computer-verified formalized proof should increase our confidence in a result, that wasn't my point in bringing such things up; my point was that no one is likely to call such a proof hand-wavy or complain about lack of rigor.
Edit: On second thought, Wittgenstein's quotation does touch on the 2nd theorem, but my point about Hewitt's use of it to argue about the 1st theorem stands, I think.

Point 3's wrong, point 6 is interesting

Thanks for the analysis. This is helpful.
I agree with your point 6 — if your logic has a "syntax" (sentences form a datatype) you can encode them with strings/integers and prove the first theorem using integers.
In point 3, you say "the target theory has variables that range over its own propositions". That's not what Gödel does, though maybe it makes sense in direct logic. You only need integer variables and integer functions — so points 4 and 5 don't apply.
The conversion from sentences to Gödel numbers is still done in the metatheory. In the object theory you only have arithmetic functions on Gödel numbers, like the IsProof predicate (originally called Bew) —
https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Construction_of_a_statement_about_.22provability.22. The proof that IsProof describes proof-checking is done in the metatheory, so it is typically informal (unless you formalize the metatheory in Coq).

If you are concerned about the metatheoretic steps (how do you trust the metatheory?), we can discuss why people do finitist metatheory at a foundational level. That discussion would also explain why logics with uncountable sets of sentences wouldn't be accepted as foundations on their own. This is all indeed somewhat hand-wavy, but to do any sort of math you need some starting point, the same way our discussion presupposes some shared understanding of English; Wittgenstein also discussed why this works well enough. Nowadays, I'd say that finitist metatheory is metatheory you can run on a computer, with concepts simple enough to be safe from paradoxes or strange phenomena.

Finally, Wikipedia also discusses Wittgenstein's criticism — most people don't buy his criticism, even though some subtler points seem to be under discussion — see the discussion of ω-incompleteness. But that's much fancier and subtler than anything we've discussed here.
https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Wittgenstein

If s:String◅Sentence▻, then ⌈s⌉:Sentence

If s:String◅Sentence▻, then ⌈s⌉:Sentence.

However, the Y fixed-point procedure doesn't exist in Mathematics because of types. Consequently, Gödel's alleged construction of a "self-referential" proposition fails in Mathematics. If they existed, "self-referential" sentences would lead to contradictions in Mathematics that can result in security holes.

Unfortunately, the Wikipedia article is incorrect on many counts and there is no way to correct it :-(

Russell Paradox

I am not really sure there is a (Russell) paradox at all. We can interpret the sentence "the set of all sets that are not members of themselves" as follows: we have sets of pointers to member sets, and we have a set of pointers to all sets. Search the set of all sets and create a new set that contains a pointer to any set found that does not contain a pointer to itself (it cannot include itself in the search, as it does not exist when the search happens). There is a distinction between mathematical objects and the algorithms by which they are constructed, and nothing can exist which you cannot construct.
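
A sketch of that construction in Haskell (the id-table representation and the name russell are my own inventions, purely for illustration):

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- A "universe": each set is an id mapped to the ids of its members.
    type Universe = Map Int [Int]

    -- Build the "Russell set" from a snapshot of the universe. It
    -- cannot occur in its own search, because it does not exist yet
    -- when the search happens.
    russell :: Universe -> (Int, Universe)
    russell u = (new, Map.insert new members u)
      where
        new     = if Map.null u then 0 else fst (Map.findMax u) + 1
        members = [ s | (s, ms) <- Map.toList u, s `notElem` ms ]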

The claim is mathematical objects can be represented by data-structures, just like we already know algorithms can be represented by programs.

The claim is mathematical objects can be represented by data-str

The claim is mathematical objects can be represented by data-structures

Including all the terms of formalized mathematical discourse.

Including Algorithms

Which means we can treat an algorithm as an object that can be manipulated, as well as execute it. This suggests meta-programming is fundamental. We should have a way to manipulate the syntax tree of the language itself as a data type.

I am not sure whether the restriction on fixed points is too restrictive. There is no reason an algorithm cannot refer to its own output from the previous timestep.

Only syntax

Mathematical objects that can be represented by data structures are the exception, not the rule. All of those are "just syntax", and lots of math isn't syntax. However, the few mathematicians who care about foundations have created "formalized mathematical discourse" and crafted it so that it's just syntax.

Mathematical Objects as Data Structures

What mathematical objects cannot be represented as data structures? If you can write it down, you can represent it as a data structure.

Mathematical objects as data structures defined where?

Are you talking about data structures defined in the metatheory or in the object language?

In the metatheory, of course, all object language entities are just syntax - a particular data structure in the metalanguage. The price that you must pay for this convenience is the potential for manipulating syntax in ways that don't respect the rules that define the object language.

However, from within the object language, you can't observe the fact there's a metalanguage in the first place - let alone the fact the object language is itself defined in terms of metalanguage syntax. For example, from within the lambda calculus, the only thing you can do with a function is apply it. Application itself is "God-given", in the sense that it works in mysterious ways that you really can't fiddle with. (Even though there would be nothing mysterious about it if you could "jump" into the metalanguage and see how application is implemented in terms of substitution and yadda yadda.)

Computer Science needs unified foundations (no meta-system)

Computer Science needs unified foundations that can be understood and operated by computer systems. There is no magical "meta-system" that can only be understood and operated by humans.

In terms of digital computation, it's Actors all the way up and down ;-)

Metalanguages

The type system can prevent the metalanguage breaking the rules of the embedded language. Given lambda calculus we could implement a representation of lambda calculus along with an interpreter for it in lambda calculus.
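
For instance, here is a minimal sketch in Haskell (standing in for the host language; de Bruijn indices, call-by-name, nothing fancy):

    -- The untyped lambda calculus as an ordinary data type.
    data Term = Var Int          -- de Bruijn index
              | Lam Term
              | App Term Term

    -- Shift free variables at or above cutoff c by d.
    shift :: Int -> Int -> Term -> Term
    shift d c (Var n) | n >= c    = Var (n + d)
                      | otherwise = Var n
    shift d c (Lam t)   = Lam (shift d (c + 1) t)
    shift d c (App a b) = App (shift d c a) (shift d c b)

    -- Substitute s for variable j in t.
    subst :: Int -> Term -> Term -> Term
    subst j s (Var n) | n == j    = s
                      | otherwise = Var n
    subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
    subst j s (App a b) = App (subst j s a) (subst j s b)

    -- Call-by-name evaluation to weak head normal form; may diverge,
    -- exactly as the embedded calculus permits.
    eval :: Term -> Term
    eval (App a b) = case eval a of
      Lam t -> eval (shift (-1) 0 (subst 0 (shift 1 0 b) t))
      a'    -> App a' b
    eval t = t

The embedded terms are just data; nothing eval does with them can threaten the host's type system.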

That's right, but...

As far as I can tell, this is only possible when either:

(0) The embedded language is less expressive than the metalanguage, e.g., implementing the simply typed lambda calculus in Agda, which still leaves you with the problem of formally defining the metalanguage.

(1) The metalanguage is inconsistent as a logic, e.g., this recent LtU post. I'm okay with an inconsistent metamathematical language (capable of carrying out arbitrary computation), but not with inconsistent mathematics.

Turing Complete.

The metalanguage is not a logic; it is a computational language. What is consistency for a computational language? Technically, computational types are finite, and so consistency can be exhaustively proved. Completeness can also be checked exhaustively for finite integer rings.

The embedded language can be equally expressive, consider an interpreter for lambda calculus in lambda calculus. Once a language is Turing complete, it can embed any Turing complete language.

It seems you are confusing mathematical logic with programming languages?

My bad

You're right. I wasn't precise in my use of language. Here's what I meant:

What's consistency for a computational language?

I was thinking of something along the lines of "only consistent logics can be embedded and interpreted in it". For example, in the embedded STLC example, it was possible to define an eval function in Agda. You couldn't do the same if you tried to embed the untyped lambda calculus. So Agda is an example of what I meant by "consistent metalanguage", whereas the language of rules of inference is an example of what I meant by "inconsistent metalanguage" (because you can define formal systems that entail contradictions from nothing).
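
To make that concrete without leaving Haskell: with GADTs one can embed a tiny typed object language whose eval cannot go wrong on types. This is only an illustrative fragment of my own, not the Agda development, and Haskell's own general recursion still lurks in the metalanguage, which is exactly the caveat in play here:

    {-# LANGUAGE GADTs #-}

    -- A tiny typed object language, indexed by its result type.
    data Exp t where
      Lit :: Int -> Exp Int
      Add :: Exp Int -> Exp Int -> Exp Int
      Lam :: (Exp a -> Exp b) -> Exp (a -> b)
      App :: Exp (a -> b) -> Exp a -> Exp b
      Val :: t -> Exp t   -- lets eval re-inject host values

    -- Every well-typed term evaluates without a type error.
    eval :: Exp t -> t
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Lam f)   = \x -> eval (f (Val x))
    eval (App f a) = eval f (eval a)
    eval (Val x)   = x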

Once a language is Turing complete, it can embed any Turing complete language.

<strike>Yep, but if you make a type system for it, that type system is inconsistent as a logic. Which was my intended point.</strike>

EDIT: No, my last paragraph didn't make sense. Sorry. (And, apparently, I can't use the <strike> tag.)

Finite Integer Rings and Consistent Type Systems

We can prove both completeness and consistency by exhaustive methods for finite integer rings.
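
As a concrete example of the exhaustive method, a throwaway Haskell sketch (checkRing is a made-up name): brute-force verification of the commutative-ring laws for ℤ/n.

    -- Brute-force check of the commutative-ring laws for Z/n.
    -- The carrier is finite, so every law is decidable by enumeration.
    checkRing :: Int -> Bool
    checkRing n = and $
        [ add a (add b c) == add (add a b) c | a <- xs, b <- xs, c <- xs ]
     ++ [ mul a (mul b c) == mul (mul a b) c | a <- xs, b <- xs, c <- xs ]
     ++ [ mul a (add b c) == add (mul a b) (mul a c)
                                             | a <- xs, b <- xs, c <- xs ]
     ++ [ add a b == add b a | a <- xs, b <- xs ]
     ++ [ mul a b == mul b a | a <- xs, b <- xs ]
     ++ [ add a 0 == a && mul a 1 == a | a <- xs ]
      where
        xs      = [0 .. n - 1]
        add a b = (a + b) `mod` n
        mul a b = (a * b) `mod` n

For instance, checkRing 12 evaluates to True: consistency established by exhibiting and checking a finite model, rather than axiomatically.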

I am not sure about the assertion that the type system would be inconsistent; it seems odd. If we start with simply typed lambda calculus, surely we can still implement a data type that represents lambda calculus (which itself could be simply typed) and also an interpreter for that lambda calculus.

First, it's clear that implementing something in a language does not make its type system inconsistent as a logic. So if Haskell had a type system consistent as a logic, and we could implement a Haskell interpreter in it, then I don't see that the type system would become inconsistent.

I am not sure I see the relevance of Agda. I think Agda is overly restrictive. We are not trying to make the meta-language a logic. The meta-language should be a Turing complete computational language and could be strongly typed without causing problems. I think the point is mathematicians are computational not axiomatic, if that makes any sense.

Haskell's type system is inconsistent as a logic

Most of this subthread does assume you're trying to use Curry-Howard (like in Agda).

Haskell (and any Turing-complete functional language) allows defining a fixpoint combinator fix : forall T. (T -> T) -> T. If you read this type as a proposition via Curry-Howard, it allows proving everything, hence makes Haskell's type system inconsistent as a logic. Agda, instead, isn't Turing-complete.
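
Spelled out, in standard Haskell (nothing exotic here):

    -- Definable because Haskell permits unrestricted recursion.
    fix :: (t -> t) -> t
    fix f = f (fix f)

    -- Read through Curry-Howard, fix says: from "T implies T",
    -- conclude T, for every T whatsoever. So every type is inhabited:
    anything :: t
    anything = fix id
    -- anything never yields a value (it diverges), but it type-checks
    -- at every type, which is what "inconsistent as a logic" means.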

We are not trying to make the meta-language a logic.

"Finitist methods" are about having a computational metalanguage that can be used for proofs, at least informal ones. If you want to use "computerized finitist methods", you want a metalanguage that is a logic so that you can do proofs about your object language.

You can try to allow recursion only in programs but not in proofs, but this is not so easy to get right; there are some approaches to this, but I think each has its own disadvantages.

If we start with simply typed lambda calculus, surely we can still implement a data type that represents lambda calculus (which itself could be simply typed) and also an interpreter for that lambda calculus.

Careful there. Simply-typed lambda calculus (without general recursion) is not Turing complete, and cannot implement an interpreter for itself. This is a general limit — Eduardo is right, even though he might have skipped a couple of steps (I'm sure I did too). The best discussion I know of these issues is in "Typed Syntactic Meta-programming".

So, implementing a self-interpreter does not make a type system inconsistent as a logic — it just shows (indirectly) that the inconsistency was already there.

Metalanguage does not need to be a logic

I think a Turing-complete metalanguage is all that is required. I am not even sure it needs to be typed (perhaps it can be typed, but I think that's a distraction). You can write an axiomatic solver in the Turing-complete language. This is much like the mathematician who interprets axioms: the mathematician is "Turing complete" and exists in time (with a state before and after reading each axiom). For example, Agda might initially be written in C. That C is the "real metalanguage".

So there are no paradoxes

if you only have algorithms and nothing by definition or declaration.

Mathematical Objects

I would have thought it obvious that there are no paradoxes in computation. X = X + 1 would be a paradox declaratively, but as an algorithm it's fine.

You can of course declaratively define data structures (mathematical objects, including algorithms themselves).

I'd much rather hear about practical computational logic

from you than about Gödel from Hewitt.

Seriously, nothing on LTU seems to be practical in the slightest.

It's as if they're a bunch of mathematicians who only pretend to care about computation.

Y-Combinator Type Inference

Types are obviously mathematical objects, as are algorithms. We can define an algorithm mapping algorithms to types (type inference), and one for comparing algorithms and types (type checking). I wonder what should happen if you execute a type. I presume that, like the algorithmic expression '1', it evaluates to itself (at the next time step).

More abstractly, we have some data types at time T, some other data types at time T+1, and an algorithm that generates the data at T+1 from the data at T. This sounds a lot like category theory, except we don't allow endofunctors.

It also seems clear that equality is an algorithm (is that extensional equality?).

I still don't see a problem with the Y combinator: it is a potentially non-terminating algorithm that takes a mathematical object representing an algorithm as its argument. The type of the Y combinator is a mathematical object, derived from the definition of the Y combinator by the type inference algorithm.
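
For what it's worth, Haskell makes that trade-off concrete: the self-application inside Y is not typable directly, but it becomes typable once routed through a recursive data type, so whether Y "exists" is precisely a question of which fixed points the type discipline admits. A sketch (the newtype name Rec is my own):

    -- Plain Hindley-Milner rejects the self-application in Y:
    -- y f = (\x -> f (x x)) (\x -> f (x x))   -- does not type-check

    -- An explicitly recursive type re-admits it:
    newtype Rec a = Rec (Rec a -> a)

    y :: (a -> a) -> a
    y f = g (Rec g)
      where g (Rec x) = f (x (Rec x))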