When are Actors appropriate?

So there have been conversations about what is appropriate talk on LtU. Personally I am not offended by brusque commentary. On the other hand, while I respect Hewitt, I do think that the 'actors everywhere' thing has not gotten any better, only worse. I posted about process calculi, the first reply railroaded it towards actors, and sure enough, people took the bait. :-(

I believe that conversations should be allowed to go off on tangents, e.g. talking about quantum. We all have subjective, Justice Stewart-style opinions about when a respectable tangent has crossed over into frankly trolling behaviour, even if the troller doesn't do it to be a troll.

I'm asking everybody who is getting involved in each and every actor thread to please stop when the topic did not in fact start out talking about actors. Personally I think it is OK if you go and start a new parallel topic where you draw the link and continue the conversation there. I am not asking anybody to stop talking; I just would very much appreciate it if we could all try to keep the torrent of actor threads contained in their own places, so that other threads can live or die on their own.

(At the very least, realize that by forcing a thread to go onto 2 pages you have broken my ability to get to the new posts. Hardy har-har.)

Thanks for your support.

I see it the same way. At

I see it the same way. At some level it's understandable if people have particular standpoints they bring with them everywhere, because they're not going to learn a new standpoint overnight, but Hewitt's papers have been especially derailing.

the way you see it is wrong

In a very short time I've greatly expanded my knowledge of process calculi and their semantic theories thanks to actually taking Hewitt's comments seriously.

Re: the way you see it is wrong

In a very short time I've greatly expanded my knowledge of process calculi and their semantic theories thanks to actually taking Hewitt's comments seriously.

You might or might not remember all this, but when Hewitt started posting again recently, I read the papers and started defending them. The papers now have some pretty interesting arguments I hadn't seen a couple of years ago, and people were dismissing them in ways that I didn't find fair or convincing.

Now three things have happened:

1) You've been taking up that defense mantle with such vigor that I find barely anything to add on that side.

2) Thanks to that defense I was doing, I've expanded my knowledge of direct logic (the Ketonen and Weyhrauch kind), Gödel's biography, finitary foundationalism, ordinal analysis, and weak arithmetics. If I want an end-all, be-all foundation for mathematics, at this point I consider F arithmetic to be a very promising seed. I'm not sure I have a purpose in mind for Classical Direct Logic anymore, since it does not prove the consistency of its own axiomatization (only of its Proposition objects).

3) Some serious challenges to Hewitt's claims have come up that I haven't seen defenses against. I'm thinking in particular of our constructions of a diagonalization and a fixpoint combinator.

Now that I've had less reason to be an eager consumer of Hewitt's papers, I see you and Hewitt continue to post bursts of comments to half-related topics, which then spark the same subthreads we've seen everywhere else. It's repetitive (due to repeated terminology mistakes and our stubbornness from fatigue) and hard to follow (due to extremely long and interrelated threads).

I look forward to when these threads fall out of fashion again, and I sincerely hope we aren't driving too many people away in the process. Countably Infinite's departure last time still sticks with me.

Ross Angle, let's zoom in on the interesting bit.

3) Some serious challenges to Hewitt's claims have come up that I haven't seen defenses against. I'm thinking in particular of our constructions of a diagonalization and a fixpoint combinator.

Show me what you are talking about, please. I must have missed that. It sounds wrong from the vague description you give there, but what is it specifically?

The challenges I mentioned

Thanks for asking. To keep the conversation in context, I've posted replies to those threads:

A diagonalization (flawed, but which makes me suspect there's a validity or design flaw in the non-existence theorem Hewitt was putting forward).

A fixpoint combinator that still seems like it should work.

Maybe you'll convince me yet. :)

re challenges you mentioned

I replied over there. You have to look at your version of Y and reconcile it with the rules of construction for Sentences on page 27 of the Direct Logic paper. Unless I am missing something, you cannot use your definition to create a Goedel-style Sentence.
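
(For reference, this is the classic untyped fixpoint combinator at issue; Ross's Direct Logic version lives in the linked thread, so take this as just the general schema:)

    Y ≜ λf.(λx. f (x x)) (λx. f (x x))
    Y f = f (Y f)    (every f gets a fixed point)

In a typed discipline the self-application x x is ill-typed, and the page 27 construction rules play the analogous role one level up, at the level of Sentences.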

also ross:

Keep in mind that Direct Logic has no problem proving (without need for a meta-logic!) its own inferential incompleteness.

The problem with Goedel's second-theorem hand-wave is that it would seem to imply that DL's proof of self-consistency can be leveraged to produce a contradiction in DL.

Alternative sources

As said, I've tried reading some of Hewitt's papers (in 2011), and gave up. What I didn't try was finding other sources on the same topics. Would somebody (not Hewitt) recommend good alternative (not from Hewitt) sources on direct logic?

More recent information on ActorScript and Direct Logic

More recent information on ActorScript and Direct Logic can be found
here.

Comments and suggestions for improvement are greatly appreciated.

So…

… that would be a "no", then?

Additional information in "Inconsistency Robustness"

There is additional information in "Inconsistency Robustness" by College Publications.

For example, I highly recommend the articles on inconsistency robust logic by John Woods and Eric Kao.

The book is in press and will be available soon, e.g., on Amazon, etc.

unavailable

The web site for that doesn't make it look like it is available at all. I emailed them to ask. Meanwhile I did turn up a document on IR by Woods.

Woods' article online is a draft preprint

Woods' article online is a draft preprint.

It differs significantly from his chapter in the book.

Bad

Not a good intro, bordering on the embarrassing. Yet, somehow, trying to make sense of it helped me understand some interesting points.

The consistency proofs on page 2 seem especially ludicrous — the "contradiction" step only has a contradiction if you assume consistency in the first place. (|- T) and (|- not T) don't contradict each other. That's a standard problem with Hewitt's proof, so this paper won't help. I haven't found it discussed in the LtU thread on his proof.

In fact, I've realized what might be the confusing thing: Direct Logic has too much reflective power! You can get the contradiction by adding the step that `(|- T) => T` (called either integrity [1] or reiteration [2]), but that's broken. You can only prove this rule sound if you're assuming consistency in the first place, so this proof boils down to a convoluted variant of "since Direct Logic assumes consistency, it proves consistency". Not so interesting. This also doesn't provide a counterexample to Gödel's theorems, not without an external soundness proof for the logic.
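
To spell out the derivation I mean (my reconstruction of the proof being criticized, not a quote from the paper):

    1. Suppose, toward contradiction, that for some T both (|- T) and (|- not T).
    2. By "integrity", (|- T) => T and (|- not T) => not T.
    3. Hence T and not T, a contradiction; discharge the assumption and conclude consistency.

Step 2 is where soundness, and hence consistency, gets smuggled in: without it, an inconsistent theory proves both (|- T) and (|- not T) with no internal contradiction at all.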

Also, reflection principles typically threaten consistency by Gödel's theorems. But at least now we know that those theorems don't apply, because Direct Logic lacks a syntax in the standard sense.

What's funny is that Hewitt seems to include this "integrity" rule in his appendix on "Classical Natural Deduction". OK, that's maybe not wrong, since he uses Jaśkowski's system instead of Gentzen's; but please, title it "Jaśkowski's Natural Deduction", since Gentzen's is the usual standard: good writing shouldn't require the reader to jump around so much to make sense of it.

[1] https://docs.google.com/file/d/0B79uetkQ_hCKbkFpbFJQVFhvdU0/edit
[2] http://arxiv.org/pdf/0812.4852v103.pdf

Use HAL. arXiv articles are obsolete :-(

Use HAL. The arXiv articles are obsolete and there is no way to update the articles or even alert readers that they are out of date :-(

Consistency proof of Math: no formal assumption of consistency

By inspection, the consistency proof (using proof by contradiction) of Mathematics does not formally assume consistency.
Of course, Jaśkowski natural deduction is the one that mathematicians use in practice and it is commonly taught in Computer Science education.
Gentzen "natural deduction" is a philosophical curiosity that is not fundamental to Computer Science.

In Computer Science, soundness of Mathematics is a very important principle that must be available for use by computer systems for a multitude of purposes.
In practice, Computer systems need all the power of Direct Logic.

The nonexistent "self-referential" sentence ("I am not provable.", used by Gödel in his results) leads to contradictions in Mathematics.
Fortunately, Gödel's attempted construction of a "self-referential" sentence fails for Mathematics in Direct Logic because its notation is rigorously defined using types in the articles on HAL.

Another post about Godel, with apologies to LtU

You can only prove this rule sound if you're assuming consistency in the first place, so this proof boils down to a convoluted variant of "since Direct Logic assumes consistency, it proves consistency"

I don't think this is the full story. You're not supposed to be able to get around Godel by just adding an axiom that your logic is consistent. If you carry out such a plan with an ordinary encoding, you'll end up with an inconsistent system. (Note: Of course, you can start with system L and add an axiom Con(L) that asserts the consistency of L, ending up with a new system L' that is only inconsistent if L was. I'm talking about carefully constructing L' = L+Con(L').)
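
To sketch the construction I mean (recalled from the literature on self-referential consistency statements, so treat the details with care):

    By the diagonal lemma, pick a sentence A with  L ⊢ A ↔ Con(L + A),
    and set L' = L + A. Then L' ⊢ Con(L'). If L' were consistent,
    Godel's 2nd theorem gives L' ⊬ Con(L'); but Con(L') is an axiom of L'
    (up to provable equivalence). So L' is inconsistent, even if L was fine.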

I think the reason that there's no contradiction in DirectLogic is that DL doesn't fully axiomatize what provability means inside DL. So there is no way to get a handle on it as asserting the existence of something recursively enumerable from inside the logic. It also means an onlooker is allowed to interpret turnstile as meaning something else. Here's a relevant passage from the Stanford Encyclopedia of Philosophy:

Giving a rigorous proof of the second theorem in a more general form that covers all such sentences, however, has turned out to be very complicated. The basic reason for this is that, unlike in the first theorem, not just any, merely extensionally adequate provability predicate works for the formalization of the consistency claim. The manner of presentation makes all the difference. For example, Rosser's provability predicate mentioned above would not do; one can prove the “consistency” of F in F, if consistency is expressed in terms of Rosser's provability predicate. One must thus add some further conditions for the provability predicate in order for the proof of the second incompleteness theorem to go through. Following Feferman (1960), it is customary to say that whereas the first theorem and its relatives are extensional results, the second theorem is intensional: it must be possible to think that Cons(F) in some sense expresses the consistency of F—that it really means that F is consistent.

It would be nice if there was a simple variant of Godel's theorem that formalized these ideas.

I agree that this doesn't seem particularly interesting. What's the point of postulating a nebulously defined "mathematics" and proving that its theorems are not recursively enumerable? That doesn't seem to model "the real situation" in any useful way. The broad goals of DirectLogic seem admirable, but this aspect seems rather useless.

Here's a test for the usefulness of a logic that can purportedly prove its own consistency: Can you use this feature to prove correct an implementation of the logic on general purpose hardware? That would be useful, as it would allow you to "cross compile" to another architecture in an environment where you're only allowed to deliver proven assembly code. I claim that you cannot do that with DirectLogic. The problem is that while DirectLogic has symbols in its grammar for provability, it doesn't internalize the meaning of those symbols as provability. It therefore won't be able to fully reason about how a data structure representing terms of the logic would map onto its internal concept of provability. And that's why this approach shouldn't count as bypassing Godel. It's just word games and doesn't actually bypass the practical consequences of Godel's second theorem.

[This post was aimed at Tom as much as you]

Direct Logic formalizes what is known about Mathematics (⊢)

Direct Logic formalizes what is known about Mathematics (⊢) on a rigorous basis so that it can be incorporated in the foundations of Computer Science.

The problems with Gödel's work stem from his assumption of the existence of a non-existent "self-referential" proposition. Consequently, his alleged proof of the inferential incompleteness theorem (First Incompleteness Theorem) is invalid. Church/Turing provided a valid proof using computational undecidability.

Variants of the 2nd theorem

Matt, you might be interested in some recent work by Harvey Friedman on the 2nd theorem, e.g., here. He's interested in finding a good way to formalize the notion of "a sentence that adequately formalizes the consistency of a system," and proposes an interesting necessary condition on such sentences for a handful of systems. For PA, the condition on a candidate consistency sentence A is that PRA+A proves all the theorems PA proves, where PRA is PA with the induction schema restricted to quantifier-free formulas. All of the standard constructions satisfy this criterion, and there's a direct proof that any such A cannot be provable in PA. I don't have any idea if these notions can be extended to a broader context than first-order logic, but I thought they were interesting nonetheless. I especially liked the resemblance to Hilbert's program: the theorems of a stronger system are "safe" (provable in a weaker system) under the knowledge of consistency of that system.
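
Formalizing the condition as stated (my paraphrase of Friedman, so the fine print may differ):

    A is an adequate consistency statement for PA only if
        for every sentence θ:  PA ⊢ θ  implies  PRA + A ⊢ θ,
    and the result is that no such A is provable in PA.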

Thanks for the link!

Cool, thanks for the link! I need to be careful not to look too hard at it lest I end up down another rabbit hole, but it is an interesting question whether this approach might generalize. I don't really see how to do it, either.

Friedman's work is *only* for first-order logic :-(

Friedman's work is *only* for first-order logic, the limitations of which are crucial to his results :-(

The Peano/Dedekind categorical axiomatization is much stronger than first-order PA.

In my experience, Hewitt

In my experience, Hewitt comes across as not participating in discussions here at all, but as spamming about his ideas in purely one-way proselytizing, which would leave no room for him to learn anything, in exchange for making it difficult to use LtU for anything except discussing Hewitt. As I've remarked before (in some words or other), if he's going to be a useful contributor to LtU, he has to either change how he comes across (if it's a misimpression) or change how he is (if it's not).

Mostly agree

Mostly agree; I find myself avoiding LtU because of the incessant Actor / Inconsistency Robustness tangents.

don't universalize your experience

spamming about his ideas in purely one-way proselytizing

He is terse and witty and interactive. I do not mean to deny that your impression is otherwise, only that you are (understandably) wrong.

Which would leave no room for him to learn anything, in exchange for making it difficult to use LtU for anything except discussing Hewitt.

In Berkeley, some real big jerks blame the struggling performance of retail businesses on freaks and travellers, who generally operate outside of consumerism whenever they can.

The jerks are observably idiots: retail struggles because it has so little to offer the actual community.

I think there is an analogy here. I don't see LtU overflowing with a lot of deep, interesting stuff, arguably with Hewitt as an exception if you are ever patient enough to figure out what he's talking about.

Most of what appears to be going on in academic PLT appears to be treading water, bluffing while working obscure and uninteresting technicalities, etc. The field, not just the web site, is largely stagnant.

By contrast, the deeper I dig into Hewitt's work, the richer and more fertile it appears.

BTW, not really a direct reply, but this paper he recently mentioned is pretty tasty so far (I'm halfway through) and might(?) help you better appreciate the structure of the abstractions he's bringing to bear and what the motivation for those abstractions is.

http://www.pcs.usp.br/~coin-aamas06/10_commitment-43_16pages.pdf

I'd love to see a real serious critique of Hewitt's work. So far, you Hewitt critics are offering anti-intellectual garbage, instead.

dis/agree

Yes, I want to learn about wonderful new things. If you can get into Hewitt's stuff and figure it out for humanity, I would seriously appreciate that. No kidding. Some folks have done that before, e.g. how actors turned into E, and I for one am very grateful to all involved.

In the meanwhile, his style of posting might be seen as killing off a lot of other good things on LtU.

You tend to do a better job than Hewitt in terms of style, but I suspect the horse has left the barn in terms of some not-insignificant portion of the LtU population being willing to put up with anything related to Hewitt's posts. Again, that is very unfortunate; I wish it weren't so.

The point is not that I think Hewitt should shut up ever. I don't. I just think the final result of his style is trollish, and unfortunately people seem to fall for it, and things snowball.

Every Kant needs a Fichte

After all, we owe Scheme to Hewitt's lack of the "gift for exposition" that Kant regretted he did not have.

Here is what happened.

raould, you posted a link to a description of a book you say you checked out of the library and could not much understand.

The book does not appear to be available on-line.

You quoted an excerpt that vaguely categorizes the book but gives no substantive hint at its original contributions.

Never mind; we can at least make some vague guesses that, OK, this is coming from a Scott-Milner-Parker tradition and is going to work out some technicalities of some damn eddy of that line of thought.

The work in question doesn't appear to have been widely cited or used. It's been a while. It's really not too clear why we should care about it but...

The way you described it made it sound like you keep hearing about this "process calculi" stuff and wanted to dig in and started with this book. Probably(?) you have had some exposure to process calculi before but you wanted to learn more.

Process calculi arise out of some contesting lines of theory starting in the 70s and continuing to today.

Awareness of the history of that thought is pretty damn useful to wrapping your head around books like the one you posted about.

One of the principal researchers from the foundational era (just a few decades back) of the theory in that book offered some comments relating it to the alternative approach to which Milner was responding.

That's pretty nice of that guy if you ask me but, anyway....

Then some jerky comments came along in response and Hewitt defended himself succinctly and with some wit.

Then you confused your non-understanding of the material at hand with some kind of flaw of Hewitt and publicly called for his exile from society.

I'm sorry, but, shame on you.

it is to sigh, as they say in French

I acknowledge my failure to present my other OT in any clear manner. My fault, I accept it.

But I think it has become pretty clear that even when nothing mentions actors, suddenly a post appears that pushes the conversation towards actors.

I suspect most folks on LtU have reached the point where they see that as trolling, even if not done with any malicious intent -- the results are the same, overall.

Certainly this is all subjective, but I am guessing that most folks have put the Hewitt actor threads into their mental kill file. Unfortunately, with the user interface we have here on this old Drupal site, we don't have a nice way to personally avoid stuff that has gone overboard. I don't think actors should never be mentioned; I just personally think that the topic has been rather ruined by the Hewitt habit on LtU, much as I respect him and his work. I mean the world of CS is better off with the meat of his contributions than with anything I have ever done or will do.

But LtU is being drowned in it, and I see that as wrong. That's why I respectfully proposed people voluntarily moving all Hewitt-actor stuff to their own threads. I hope that those who are involved in the Hewitt-actor threads can understand why I suggested this. It keeps the H-a conversations alive, without driving most other people away.

thanks for your support.

p.s. I don't see where I've called for anybody to be banished from society. I do see where I've asked for people to consider what is going on here on LtU and how to try to adjust it. Hyperbole isn't really going to help at the moment.

p.p.s. I do see your point of there being an actual historical connection between actors and process calculi. The sad thing is that Hewitt's posts on LtU have broken most anybody's interest. I for one have a really hard time not seeing it as yet another railroading [1] [2]. That's overall very sad, really, for everyone involved.

p.p.p.s.

It isn't just what you say. It is how you say it. God knows I'm enough of an a*hole to know that (as in I always say it wrong). If Hewitt would flip the style from "I will post a long thing only to actively re/derail the conversation into talking only about actors, and I will only respond with terse koans to everything else" into "I will be as terse as Marco when I want to point out connections with actors, and I will be helpfully educationally kindly usefully verbose when people are trying to engage with me further" then there really would be no problem.

that's petulance

Society does not conform to your demands.

"even when nothing mentions actors"

But I think it has become pretty clear that even when nothing mentions actors, suddenly a post appears that pushes the conversation towards actors.

Are you also opposed to mentions of set theories in response to posts that don't explicitly mention them?

nope

One last try from my side :-)

I think I understand your perspective as I've had it many times before, and probably will find myself in situations where I will have it again.

Personally I think there are nuances of usability involved in deciding when and how to communicate. To try to stuff it down to a 2D spectrum, I'd suggest that on one side is e.g. somebody who only writes "FUCK" every other word, and not because they have a condition which causes them to do it, but out of pure disestablishmentarianism. On the far other side is somebody who always says please and thank you and is a clear, concise communicator. In the middle are all sorts of pathologies. Each is more or less subjectively bearable based on who is reading it and where they are reading it. Overall the way Hewitt has posted stuff has, unfortunately I think for several people here, been not good. It hasn't furthered much constructively. A person might be right, but they might also be poorly communicating, and perhaps even outright accidentally destructive in the way they communicate.

I see LtU overall, for a long time, as having suffered from the way in which Hewitt has communicated. I wish it were likely that the style would respectfully adjust, not 'conform', but I don't expect it.

I don't expect that what I'm saying will gain traction with you, and that you will continue to perceive it as me only trying to shut people up or make them conform. I get it; I've felt that way sometimes too. That isn't my main intent.

I'm sorry for the gulf.

Excellent work here; crack the related censorship at Wikipedia?

Thomas,

Since you have done such an excellent job of defeating censorship here, would you like to try cracking the corresponding censorship at Wikipedia in the article Incompleteness Theorems ;-)

Cheers,
Carl

re wikipedia

I am not so much a masochist. :-P

For what it's worth …

(I originally started with a paragraph characterising people's intentions which was necessarily speculative and, I think, didn't necessarily add anything.)

For what it's worth, I agree with raould that, when I see one of Hewitt's posts, I stop reading. That is an entirely personal, non-expert reaction. It is not to say that they are not worthwhile contributions; it is entirely possible that the fault is (some or all) mine for not finding their worth.

However, from my (again) entirely non-expert point of view, it seems to me that Hewitt's posts aggressively advance his own point of view and push aside others'. Whether they are meant to do so or not, I read them that way, as un-civil discourse (regardless of their technical merit). I take Thomas Lord's point that it may seem that the theoretical folks are just as enthusiastic about pushing their own agenda; but, while I have seen many comments in which a theoretical point of view is offered, I have never seen one that seems to suggest that a practical point of view is not also worthwhile. That is, the theoretical comments that I have seen seem almost always to be intended additively, and I think that is an important difference.

I take Thomas Lord's point

I take Thomas Lord's point that it may seem that the theoretical folks are just as enthusiastic about pushing their own agenda

Pushing your own agenda is fine, since we want to hear an author's view, but how you push it matters. In the 9 years I've been here, I don't recall any paper's author acting like Hewitt. I would expect terse, possibly unhelpful replies from an author who is understandably too busy to provide a more thorough explanation, but Hewitt's high posting rate suggests that isn't the issue. Better to have 1 or 2 long posts elaborating on a few concrete issues than 50 terse one-line posts simply repeating what you've already said many times.

Which isn't a commentary on Hewitt's work, which I haven't read, simply his PR, which is unhelpful at best. I'm glad Thomas has taken up that mantle, as at least he interprets and elaborates in a helpful manner. Hopefully it can be settled one way or another once and for all.

It is what it is

Proof by reflexivity.

I think at least half of the problem is that Hewitt speaks a different language than most everyone else, so that one of his short comments can spark a 20 post subthread of comic misunderstanding.

Not to deny the phenomenon,

Not to deny the phenomenon, but I doubt that 'more than half'. Looks to me like just one of several obstacles obscuring others' view of errors in his reasoning; poor communication of misguided ideas can be much more of a problem than poor communication of correct ones.

It's an impish MIT style

The 70s, man, you had to be there or at least close to it.

judgements like "poor communication".

Judgements like "poor communication" are problematic. For example, one way to look at it is that the effectiveness of the communication is not necessarily the same at all time or space scales.

I subjectively have had the experience of being pretty sure he was advanced into dotage, followed by a retrospective recognition that I was completely wrong in that impression. The experience of that flip was an experience of a whole lot of previously said things that seemed crazy suddenly now being pretty darn concise and lucid.

That is the way the style I dubbed an MIT style works.

"Words are like fingers, pointing at the moon," the saying goes.

Effective Communication

Effective communication gets people to understand. If the audience does not understand, the fault is entirely with the speaker; to try to blame the audience for the speaker's failure to get them to understand is never the right thing to do.

religious issues

If the audience does not understand, the fault is entirely with the speaker,

I don't subscribe to that religion.

Untenable position

I do not understand how anyone interested in educating others could have that point of view. It is so fundamentally self-evident that it is the duty of the teacher to explain their ideas to others that I don't believe you really think that, and that you are being rhetorical.

Clearly the definition of effective communication is that the audience understands, and I note you do not dispute this; but then you effectively say you do not subscribe to effective communication?

The people who explain their ideas best are the ones society remembers, and whose ideas become widespread. If Hewitt's ideas do not gain popularity it is nobody's fault but his own, for not explaining them well enough.

To blame the audience for not understanding is like a failed politician blaming the voters for not voting for them.

re If the audience does not understand, the fault is entirely wi

If the audience does not understand, the fault is entirely with the speaker,

the duty of the teacher to explain their ideas to others

I do not believe that "fault" and "duty" are really useful concepts to bring in here. (How about "sin"? "virtue"? Where do we stop?)

To blame the audience for not understanding

Nobody is doing that, either.

"Academic politics" = good writing

Indeed, it can be reasonable to leave the responsibility of understanding with the producers or with the consumers — you can argue either way. However, once you focus on modern scientific communities, you have less freedom. Hewitt complained at times of complicated "academic politics" interfering with his publications. I'm sure clarity is an issue, and who should do what is a matter of "academic politics".

Generally, the job of a scientist is to discover and communicate. Today in particular, readers are overloaded by information, hence simplifying the readers' job through good writing is overall more important — readers will abandon you for stuff which is written better. The US dominance might also have helped — the modern style we learn seems unique to the Anglo-Saxon world. Finally, and luckily, the whole system evolved to favor good writing — unlike some other disciplines, in CS it's hard to publish (at venues worth reading) without decent writing, and writing training is part of PhD training.

Program committees and calls for papers explicitly enforce a high level of clarity — which also makes sense for a paper that might be read a lot, if you weight the time of writers and readers equally.
Moreover, to publish you have to make sure your reviewers understand your paper, so they don't reject it because they misunderstood something you didn't explain well, or because they have important questions that you forgot to answer. It's the same with proofs.

Of course, in your training as a PhD student you also get taught that all of this is a good thing. Reality is more complex: Researchers of older generations have more varied opinions, some research schools still act differently, and not everything that is published is so clear — some things are only clear to experts. I also understand that things were probably different when Hewitt started.

Still, Hewitt's writing is an extreme case, because he combines unclear writing with surprising claims, and seems to resent his work's obscurity in ways I don't recognize in other obscure writers.

For instance, Gödel's theorems aren't trivial, but they are well-understood, and we know very well that you can only elude them with a logic where you can't do enough math (or where proof-checking is undecidable).
Similarly, first-order logic has been used for tons of applications in CS. Maybe this was a hack, OK, but claiming it's inadequate without comment won't get your paper accepted.

Overall, scientific communities define rules and conventions to do good science; if you reject those conventions, you're rejecting that community, and they'll typically reject you back.

re "Academic politics" = good writing

No. Hewitt's writing is pretty good. It's dense. Parts of it are generationally anachronistic (older people get his references better).

The big thing is that there's a lot crammed in there. He has a fairly encyclopedic knowledge of some key branches in the history of mathematical thought, and he explains a bunch of stuff by locating it with reference to that history.

For instance, Gödel's theorems aren't trivial, but they are well-understood,

We have seen, right here on LtU, ample evidence to the contrary.

and we know very well that you can only elude them with a logic where you can't do enough math (or where proof-checking is undecidable).

It's easy to go wrong trying to informally paraphrase what Goedel allegedly proved.

First-order logic is an inadequate foundation for CS

First-order logic is an inadequate foundation for Computer Science.

Please see section on "Limitations of first-order logic" in Inconsistency Robustness in Foundations

I don't think there's

I don't think there's anything controversial about "poor communication" in this context. If almost everybody finds him hard to understand, he's not communicating well. The subjective experience you describe, even if you're correct in your perception, doesn't interfere with the judgement.

re If almost everybody

If almost everybody finds him hard to understand, he's not commmunicating well.

"Almost everybody" is a very large set, especially if we take a 4 dimensional view. I'm not buying your certainty.

In this context? It looks

In this context? It looks pretty straightforward. You believe you understand what he's saying and he's right. Do you think there's anyone else around here who believes those two things (of themselves)? They haven't been much in evidence; and you'd think they would be, because he's making claims that if true would be a big deal.

Shutt: an invitation. (re he's making claims that if true...)

he's making claims that if true would be a big deal.

I don't disagree with that but I also think that is a vague generalization.

Listen, you personally have really impressed me with some of the stuff you've written that has nothing to do with Hewitt. It's a thin wire and a long distance between us but at least in your presence on LtU I think I do recognize an intellectual.

I will be happy to contemplate and respond to any specific criticism of his work that you want to go over. For me, this is a good way to approach the texts and the ideas -- to see for myself how they hold up to critical "attack". Bring it on. You believe I'm wrong in my judgment about this work? Great! I'll be happy to join you in that belief if you can show me, specifically, what you mean.

LtU is for discussing papers, after all ;-)

You believe you understand what he's saying and he's right. Do you think there's anyone else around here who believes those two things (of themselves)?

It's ambiguous. It's certainly possible and plausible. I don't worry about that kind of question.

Not necessarily

he's making claims that if true would be a big deal.

I don't think that's necessarily true. I'm not sure any of the claims he's making about DirectLogic are wrong. His claims about Godel's results being invalid because of "fixed points" seem wrong. But it wouldn't surprise me if the stuff about math proving its consistency or the uncountability of actors turns out to be right if properly interpreted. However, I suspect that they also won't be particularly interesting when properly interpreted, either. For example, I still suspect you can interpret his top level turnstile as more-or-less an assertion of truth, rather than provability. Of course, when I went to investigate, I got bogged down on the uncountability of sentences (which still appears to me to be a gimmick). I also think the documentation around this project is still too lacking to efficiently get to the bottom of issues like this.

nekkid turnstile

I still suspect you can interpret his top level turnstile as more-or-less an assertion of truth

I think what you are missing here is that Direct Logic is reflective in a higher order way. As Hewitt says: "DL is its own meta-logic."

That's a pretty compelling idea for a proposed "foundation". We don't need an infinite tower of meta-logics. DL is, among other things, about itself.

We can do things directly in DL, without stepping outside to some separate meta-logic: talk about various classes of propositions, about proofs, and so on. Propositions and proofs are first class objects built into the axioms of DL.

Given that, well, you are well aware that there is a deep distinction between the provable and the true.

For example, it may be true that a given program will never halt even if no proof of that fact exists.

Just stylistically, that's what nekkid turnstile is for: to allow reflection specifically on the provable.

But more deeply: I think(??) I understand you to still wonder if there is a first-order theory model of DL, at least for all intents and purposes.

I am very naive about model theory. Can you tell me a bit about how one might prove the non-existence of a first-order model? Then I can try to figure out if DL can be proved not to have a first-order model.

His claims about Godel's results being invalid because of "fixed points" seem wrong.

The meat of his claims about Goedel's fixpoint concerns the second incompleteness theorem. I recommend you find a translation of the full paper and read that part. It's a complete hand-wave. It is utterly non-rigorous. It is sloppy as hell. It is not formalizable. It's an informal hand-wave at best. Read it (it's only a few pages) and get back to me. What Hewitt is doing there is giving it the most generous reading anyone has come up with and, quite correctly agreeing with but being more precise than Wittgenstein, dispelling the hand-wave.

The problem with Goedel's second is that people have this slogan that informally goes "If a system is powerful and proves itself consistent, it is inconsistent!"

The slogan is false. It is an over-generous interpretation of Goedel's second.

Hewitt's contribution in this area is .... rigor.

For a more formal treatment of Godel's 2nd

See here.

(Going a bit Hewitt myself on it: To me it proves that "discrete thought" is effective, an evolutionary trait, but flawed in the limit. Which tells us, since we assume some kind of consistency of reality, that reality is a continuum. For as far as these approximations apply to "continuous" thought.)

(Well. That's how I read it. Yeah, I am pretty weird. I haven't even started yet.)

it's worse than that (he's dead, Jim)

I agree it pretty much arises from the discreteness of formal systems. But you can't avoid it by announcing that your discrete elements have magical immunity to being counted. My view in my blog post on this stuff was, and so far still is, that to get around the discreteness aspect of Gödel you'd have to tamper with the concept of discrete propositions, or something equally shattering; rather than tampering merely with what the propositions are about.

[...] it's more than a little mind-bending to try to construct a notion of truth that's inherently not separable into discrete propositions; indeed, one might wonder if the consequences of the attempt could be rather Lovecraftian. So for the nonce I'll keep looking for profundity around Gödel's results without abandoning the concept of a proposition.

I think he made it worse, actually

My reasoning is that I can restrict myself to a subset of his syntax which is countable and try the Godel argument.

So, having uncountably many sentences doesn't help; it actually makes his position worse.

re it's worse than that

John, if you believe that Goedel's 2nd theorem applies to Direct Logic then you can rigorously prove it by producing a contradiction. You won't be able to do so, because of the rules for constructing Sentences (which I've linked several times now; page 27 of that paper).

If you merely believe that DL is inferentially incomplete, good news: you can prove that pretty simply in DL itself.

if you believe that Goedel's

if you believe that Goedel's 2nd theorem applies to Direct Logic then you can rigorously prove it by producing a contradiction.

It should be clear, on a moment's consideration, that that doesn't follow. It requires the additional stipulation that Direct Logic be rigorously defined, lack of which was in the past sharply criticized; whether it's now well-defined is one of the things I'd expect to look for. Of course, it also requires that DL be 'sufficiently powerful', which from some past discussions I suppose could also warrant keeping a weather eye out.

Confusion about language and model

Additionally, there seems to be some confusion about language and model. If the model is uncountable it doesn't imply the terms of the language are. Right?

(I really want to get out of this discussion by now. Someone else, better informed, can debate this.)

DL is well defined.

It requires the additional stipulation that Direct Logic be rigorously defined, lack of which was in the past sharply criticized; whether it's now well-defined is one of the things I'd expect to look for. Of course, it also requires that DL be 'sufficiently powerful', which from some past discussions I suppose could also warrant keeping a weather eye out.

DL is well defined in the appendices of the damn paper

Re formal treatment of Goedel's 2nd

If you want to formalize Goedel's second, you need a formal system in which Sentences (which may contain free variables) and Propositions (which may not) are first class objects.

Hewitt's system (see page 27) imposes strict rules on how Sentences may be constructed.

There does not appear to be any way to carry out Goedel's construction without violating those rules.

DL is inferentially incomplete and can prove itself so. It can do so more directly than Goedel's first theorem because it does not need to "encode" sentences. But you cannot build a proposition of the sort Goedel suggests in the 2nd theorem, a proposition that would lead to a contradiction.

If you say so

God. That's fine with me. If you can block it in that manner, that could imply that the construction of a Godel proof is impossible within the logic. (It remains a bit of a question what you could do if you try to do a Godel argument outside of the logic, or within the logic with an encoding of sentences.)

But after claims about Y combinators (for which I don't think I'll find supporting evidence in literature) and claims about uncountability (which I think doesn't exclude the existence of a countable core such that a Godel construction could apply; also seems to confuse language and model), I am personally not going to read through it anymore. (I have to admit I am also not that interested in DL.)

But be my guest and keep discussing it with Matt M.

If you want to formalize

If you want to formalize Goedel's second, you need a formal system in which Sentences (which may contain free variables) and Propositions (which may not) are first class objects.

Actually, no, you don't. It suffices that integers are first class objects.

re if you want to formalize

If you want to formalize Goedel's second, you need a formal system in which Sentences (which may contain free variables) and Propositions (which may not) are first class objects.

Actually, no, you don't. It suffices that integers are first class objects.

You simply don't understand Goedel's second theorem. If you want to read it together I'd be happy to go over it with you.

You're mistaken. Though I

You're mistaken. Though I admit to a trivial misstatement, in that one has to reason about integers and also functions on integers (or something equivalent).

no I'm not. re You're mistaken

Goedel's second requires you to construct a sentence which you can prove to be unprovable, but that formally, within the system at hand, implies the sentence and so constitutes a proof of the sentence.

Goedel's second requires a reflective logic.

Since Goedel did not think in terms of types, he was unable to contemplate how a logic could be safely reflective in the way that Hewitt, who decades later thinks in terms of types, can think about reflection. Goedel couldn't imagine how to avoid certain kinds of recursive definitions of a sentence. Hewitt points that out in retrospect.

As for Hewitt's general approach to categorical construction: Have you ever read any of Conway's "On numbers and games"? I strongly recommend, in this context, the first couple of chapters plus the bit between parts 0 and 1... where Conway addresses foundations.

Conway freely makes use of transfinite inductions (for fun!) and Hewitt freely excludes them (for computer science!) but those two guys are on the same page in terms of how to build foundations.

If the formal system can

If the formal system can reason about the behavior of arithmetic functions, and if some arithmetic function happens, coincidentally, to mirror the behavior of the system reasoning about arithmetic functions, the system can forbid anything else, like reflective sentences or fixpoints, all it wants; they're irrelevant. If it can't reason about arithmetic functions, it's pretty lame. If its behavior in doing so doesn't happen to coincide with the behavior of some arithmetic function, it's uncomputable, which is a different kind of lame.
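
Concretely, the standard construction needs no reflective syntax at all:

    Prf(p, s): "p codes a derivation of the sentence coded by s";
               primitive recursive, since proof-checking is mechanical.
    Prov(s) ≝ ∃p. Prf(p, s), an ordinary arithmetic formula.

Every statement here is about numbers; a system that can reason about arithmetic functions can express them whether or not its syntax permits sentences that mention themselves.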

re if the formal system can

Hi Shutt.

Goedel's first theorem meta-mathematically proves the incompleteness (or else inconsistency) of any axiom set strong enough to do arithmetic.

DL, importing proof methods from Turing or Church, proves its own incompleteness (or else inconsistency).

Goedel's second theorem, which is really kind of a hand-wave pseudo-theorem, imagines a logic in which the first theorem is not done meta-mathematically, but is instead done directly in the system we are talking about. To accomplish this, the second theorem requires that we construct a sentence which can formally prove, within the system, that one of its terms is its own integer "encoding".

Goedel's second theorem also assumes the system is self-provably self-consistent.

From the two assumptions (self-provably self-consistent and containing a self-referential sentence in that sense) Goedel describes how to derive a contradiction.

What Goedel could not see, because he lacked awareness of how types could be used here, is the possibility of a system that could formally (even if not dispositively) prove its own self-consistency, yet in which the self-referential sentence was excluded by a static syntax check.

Hewitt has pointed out that such a system is possible, and is actually (once you get a few simple things) natural, simple, powerful, ample to formalize classical maths, etc.

Part IV of Goedel's paper is a hand-wave. Hewitt sorted it out. He's not kidding (and is right) when he points at a few sentences from Wittgenstein's record that show Wittgenstein kind of grasped how to sort it out, even if he didn't manage to formalize it rigorously as Hewitt has done.
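
For the record, the rigorous treatment came later, via the Hilbert-Bernays-Löb derivability conditions rather than Goedel's own Part IV (sketching from the standard literature):

    D1:  if F ⊢ φ then F ⊢ Prov(⌜φ⌝)
    D2:  F ⊢ Prov(⌜φ→ψ⌝) → (Prov(⌜φ⌝) → Prov(⌜ψ⌝))
    D3:  F ⊢ Prov(⌜φ⌝) → Prov(⌜Prov(⌜φ⌝)⌝)

Given these plus a fixed point G with F ⊢ G ↔ ¬Prov(⌜G⌝), one derives F ⊢ Con(F) → G, so a proof of Con(F) yields a proof of G and hence inconsistency in F. The dispute over DL is whether its Sentence construction rules deny the premises (the fixed point G, or the conditions themselves) for the naked turnstile.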

p.s.: this is all "preliminary" in the sense of: yeah, yeah, Hewitt has a clean maths foundation, but really, the practical applicability of this to engineering distributed, concurrent, and decentralized computing infrastructure appears quite fertile.

Seems utterly wrong

What Goedel could not see, because he lacked awareness of how types could be used here, is the possibility of a system that could formally (even if not dispositively) prove its own self-consistency, yet in which the self-referential sentence was excluded by a static syntax check.

This is the kind of statement Hewitt makes with which I most strongly disagree. This characterization seems totally wrong to me. Godel's theorem doesn't require the kind of fixpoint Hewitt blocks with types! See my post here. The reason that proof can't be made to go through in DirectLogic has nothing to do with types. Look at the types involved - they're trivial.

The only obstruction to carrying out that proof in DirectLogic is that DL doesn't understand that propositions, formulas and theorems (of the naked turnstile) are enumerable. And of course Godel's theorem falls through if you take away those elements. The enumerability of theorems is the crux of the idea.

Also, one point on which you and John may be talking past each other: Godel's second theorem still applies to DirectLogic in this case, but using a homebrew provability construct defined within Nat instead of the naked turnstile. If it applied to the naked turnstile, I gather that would lead to inconsistency in DirectLogic.

ℕ does not allow "self-referential" props because of types

The Peano/Dedekind categorical axiomatization of ℕ does not allow "self-referential" propositions because they cannot be constructed using nonexistent fixed points that are disallowed by types. Also note that because of types, ℕ is much stronger than what can be axiomatized in any first-order theory.

Church/Turing provided the first correct proof that the closed mathematical theory ℕ is inferentially incomplete using the convention that the theorems of the closed mathematical theory ℕ are computationally enumerable. Because of Church's Paradox, the theorems of Mathematics are not computationally enumerable.

Strings are computationally enumerable. However, for generality in Computer Science, sentences and propositions are not countable, in order to accommodate sentences and propositions that incorporate Actors.

It is very important for Computer Science that the following are rigorously defined and axiomatized using types:

Strings
Sentences
Propositions

ℕ does not need "self-referential" props

The Peano/Dedekind categorical axiomatization of ℕ does not allow "self-referential" propositions because they cannot be constructed using nonexistent fixed points that are disallowed by types

But Godel's theorem doesn't require "self-referential" propositions. See the link I provided above. Your response in that thread was that formulas aren't enumerable. I agree that's an effective response, but it also gets at the heart of my concern with your approach: you haven't effectively axiomatized the notion of provability in DL. Thus meta-level claims that DL proves its own consistency fail, because the DL turnstile doesn't actually correspond to provability as it is usually understood.

Gödel's incompleteness "proofs" rely on a "self-ref" proposition

Gödel's incompleteness "proofs" manifestly rely crucially on the existence of a nonexistent "self-referential" proposition constructed using a nonexistent fixed point.

Consequently, Gödel's incompleteness "proofs" are incorrect.

I don't think so

Your presentation of Godel that I've seen you post several times relies on the existence of fixpoints. Godel's original presentation and the presentation I linked to in my post above rely on a diagonalization technique instead. The existence of an inconsistency proving fixpoint, in the presence of an assumed self-consistency proof, was a conclusion rather than an assumption.

Gödel's "proofs" relied on existence of "I am not provable."

Gödel's incompleteness "proofs" relied on existence of the self-referential proposition "I am not provable." constructed using a nonexistent fixed point (sometimes called the Diagonal Lemma).

re I don't think so

I have an idea for cracking this confusion:

Your presentation of Godel that I've seen you post several times relies on the existence of fixpoints. Godel's original presentation and the presentation I linked to in my post above rely on a diagonalization technique instead. The existence of an inconsistency proving fixpoint, in the presence of an assumed self-consistency proof, was a conclusion rather than an assumption.

There are two ways to understand Goedel's 1st theorem.

Way 1: Goedel gives you a computational procedure. If you perform that computation, you can produce an unprovable proposition or else a contradiction. The proposition asserts that a certain equation has no solutions. If you had a proof of the proposition, that no number is a solution to that equation, you would be able to use Goedel's computation to translate the proof into a number that... satisfies the equation. If your system is free from contradictions, no proof of that proposition is forthcoming.

Way 2: You have, let's say, built such a proposition. You have an equation with no solution (hopefully) and a proposition that asserts the equation has no solution. You know that if a proof were found a contradiction would arise. So, you conclude that the proposition is true. I don't mean that you informally, looking at the model, believe it faithful to the target system and so say the proposition is informally true. I mean you have a formal result that links the model in the equation to the truth or falsity of the proposition containing the equation.

Nothing Hewitt or I are saying detracts from understanding the 1st incompleteness theorem in "way 1". Of course, interpreting it that way is unsatisfying compared to "way 2".

The problem with "way 2" is that even though you are reasoning metamathematically, you are still reasoning over domains like Sentences and Propositions in the target system. You are still constrained by their rules of construction. The construction rules for Sentences in DL don't give you any way to formalize "way 2".

In your earlier post (other topic) about the diagonalization you think should go through, essentially you are asking to relax the rules for constructing sentences.

Basically, you are asking for syntax "trees" that contain cycles because, given those, and feeding some particular ones to rules of inference, you can do all kinds of Goedelian things. The domain of trees used here, though, doesn't allow cycles.

Nearing exhaustion

Nothing Hewitt or I are saying detracts from understanding the 1st incompleteness theorem in "way 1".

I think you're speaking for yourself here. Hewitt has been pretty clear, I think, that he doesn't see either of Godel's incompleteness theorems as valid.

Basically, you are asking for syntax "trees" that contain cycles because, given those, and feeding some particular ones to rules of inference, you can do all kinds of Goedelian things.

No, not at all. I'm asking for syntax "trees" that can be inspected as data, because that's what expressions are. I similarly want proofs that can be inspected as data, because that's what proofs are. Without adding axioms that allow you to understand turnstile in terms of the existence of enumerable proofs, how can you claim at the meta-level that it means provability?

Note that syntax trees will not contain cycles. Godel doesn't use some oddball variant of PA that allows cycles. Rather, he uses ordinary PA, which doesn't include cycles, and shows that the assumption of a provability predicate would lead to a problematic cycle. That problematic cycle doesn't exist, though, because PA is consistent and cannot encode its own provability relation. That's why using types to prevent self-reference doesn't make any sense - PA doesn't allow self-reference!
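
To make that concrete, here is the standard diagonal construction (sketched from memory), which builds a plain finite tree:

    Let sub(m, n) be the primitive recursive function taking the code m of a
    formula θ(x) to the code of θ(n̄), i.e. θ with the numeral for n substituted
    for x. Given any φ(x), define θ(x) := φ(sub(x, x)) and let n = ⌜θ⌝.
    Then ψ := θ(n̄) satisfies  PA ⊢ ψ ↔ φ(⌜ψ⌝),  since sub(n, n) computes
    exactly ⌜ψ⌝.

ψ is an ordinary formula, a finite syntax tree that merely talks about a number which happens to code it. There is no cycle anywhere.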

I think we may have to agree to put this aside at some point. I don't feel that we're making progress towards a resolution. Do you see what I'm saying about the meaning of turnstile being so important? Do you agree with my point that if you don't include enough axioms to pin down its meaning then you aren't warranted to claim at a meta-level that it represents provability?

re nearing exhaustion

I think you're speaking for yourself here. Hewitt has been pretty clear, I think, that he doesn't see either of Godel's incompleteness theorems as valid.

"Way 1" of understanding Goedel's 1st doesn't require formally valid proofs about the subject system. That's what distinguishes it from "way 2".

I similarly want proofs that can be inspected as data, because that's what proofs are. Without adding axioms that allow you to understand turnstile in terms of the existence of enumerable proofs, how can you claim in the meta-level that it means provability?

The DL rules of inference.

Note that syntax trees will not contain cycles. Godel doesn't use some oddball variant of PA that allows cycles.

In the second theorem he hand-waves concerning an imaginary variant of PA that would allow such cycles. Throughout his life he apparently never made this result precise.

Do you agree with my point that if you don't include enough axioms to pin down its meaning then you aren't warranted to claim at a meta-level that it represents provability?

You can introduce a naked turnstile before proposition P by showing a proof of P. If you assume the provability of P you can assume P.

I'm not sure what else you would mean by provability.

Hiatus

"Way 1" of understanding Goedel's 1st doesn't require formally valid proofs about the subject system. That's what distinguishes it from "way 2".

Right, but Hewitt doesn't seem to acknowledge way 1.

The DL rules of inference.

Sure, OK. We're on the same page here. Most inference rules can be viewed as axioms or vice versa. But it's important that the axioms + inference rules pin down the meaning of provability.

In the second theorem he hand-waves concerning an imaginary variant of PA that would allow such cycles.

So you're interpreting all of Hewitt's claims about the invalidity of Godel's theorems as referring exclusively to some commentary Godel published about a kind of general theorem he never actually proved? I have a hard time reading him that way.

You can introduce a naked turnstile before proposition P by showing a proof of P. If you assume the provability of P you can assume P.

Well, to really pin it down, you need that provability is equivalent to the existence of a certain finite proof object.

I am getting exhausted by this thread, so I'm going to take a break for a while.

Every proof has a finite number of steps

Of course, every proof has a finite number of steps and furthermore there is a total computable procedure Proof?:

∀[p:Proof, s:Sentence]→  ├p ⌊s⌋ ⇔ Proof?.[p, s]=True
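For concreteness, here is a minimal sketch in Python of a total proof checker for a toy Hilbert-style system. The axiom set and formula encoding are invented for illustration and are not Direct Logic's; the point is only that checking a finite list of steps is a terminating loop, so a Proof? procedure can be total and computable:

# Toy system (illustration only): a proof is a finite list of formulas,
# each of which must be an axiom or follow from two earlier lines by
# modus ponens.  The loop visits each step once, so it always terminates.

AXIOMS = {"A", "A->B"}  # hypothetical axioms, just for the example

def proof_checks(proof, sentence):
    for i, step in enumerate(proof):
        ok = step in AXIOMS or any(
            earlier + "->" + step == other        # modus ponens
            for earlier in proof[:i] for other in proof[:i]
        )
        if not ok:
            return False
    return bool(proof) and proof[-1] == sentence

assert proof_checks(["A", "A->B", "B"], "B")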

Gödel's "I am not provable" comes from a nonexistent fixed point

Gödel's alleged Mathematical sentence "I am not provable" comes from a nonexistent fixed point (sometimes called the Diagonal Lemma) that doesn't exist because of types. His results were for Principia Mathematica, which was intended as the foundation of all of Mathematics. Unfortunately, Principia Mathematica had some defects in its types that have been corrected in Direct Logic.

Church/Turing correctly proved inferential incompleteness (sometimes called the "First Incompleteness Theorem") without using a nonexistent "self-referential" proposition. The Church/Turing theorem and its proof are very robust.

Of course syntax trees (e.g. Actors of type Sentence) in Direct Logic can be inspected as data. However, "self-referential" sentences cannot be constructed because of types.

Bare turnstile (i.e. ⊢ without a sub-scripted theory) is very important in Direct Logic where it means what it is supposed to mean, namely, provability in Mathematics.

Church's Paradox means theorems of Math not enumerable

Church's Paradox means that the theorems of Mathematics are not computationally enumerable. According to Church:

in the case of any system of symbolic logic, the set of all provable
theorems is [computationally] enumerable...  any system of symbolic
logic not hopelessly inadequate ... would contain the formal theorem
that this same system ... was either insufficient [theorems are not
computationally enumerable] or over-sufficient [that theorems are
computationally enumerable means that the system is inconsistent]...
    This, of course, is a deplorable state of affairs...
    Indeed, if there is no formalization of logic as a whole, then
there is no exact description of what logic is, for it is in the very
nature of an exact description that it implies a formalization. And
if there is no exact description of logic, then there is no sound basis
for supposing that there is such a thing as logic.

Uncountable = uncomputable

You should agree that (most) other presentations of logic have countable sets of sentences/propositions — even when types are involved. Moreover, Gödel's theorems only hold if proof-checking is decidable. If proof-checking is decidable, those sets are countable (more precisely, a terminating program only looks at a finite amount of data — so it either only has finite input, or it looks only at a finite part of its input).

Strings are countable; sentences and propositions are not

Strings are countable, but sentences and propositions are not countable.

Proof checking is nevertheless computationally decidable.

5 words

When Hewitt makes these assertions he never bothers with even five more words to say why they would be the case.

A person who isn't hip to Thomas Lord's claim that this is the "70's way" would be forgiven for assuming that Hewitt doesn't know the definition of countably infinite.

Theorems of Math are not computationally enumerable

Mathematics proves that it is open in the sense that it can prove that its theorems cannot be provably computationally enumerated:
Theorem ⊢Mathematics is Open
Proof. Suppose to obtain a contradiction that it is possible to prove closure, i.e., there is a provably computable total procedure TheoremsEnumerator:[ℕ]↦Proposition such that it is provable that

∀[Ψ:Proposition]→  (⊢ Ψ)  ⇔  ∃[i:ℕ]→TheoremsEnumerator.[i]=Ψ

A subset of the theorems consists of those proving that certain procedures [ℕ]↦ℕ are total.
Consequently, there is ProvedTotalsEnumerator:[ℕ]↦([ℕ]↦ℕ) that enumerates the provably total computable procedures [ℕ]↦ℕ that can be used in the implementation of the following procedure:

Diagonal.[i:ℕ]:ℕ ≡ 1+(ProvedTotalsEnumerator.[i]).[i]

However,

• Diagonal is a provably computable total procedure because it is implemented using provably computable total procedures.
• Diagonal is not a provably computable total procedure because it differs from every other provably computable total procedure.

The above contradiction completes the proof.

[Franzén 2004] argued that mathematics is inexhaustible because of inferential undecidability of mathematical theories. The above theorem that mathematics is open provides another independent argument for the inexhaustibility of mathematics.
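To picture the construction, here is a rough Python sketch. The enumerator is the object the proof supposes into existence for the sake of contradiction, so the stub below is hypothetical and cannot actually be implemented:

# Hypothetical: returns, for each i, the i-th provably total procedure
# of type [N] |-> N.  Assumed to exist only to derive the contradiction.
def proved_totals_enumerator(i):
    raise NotImplementedError("exists only for the sake of contradiction")

def diagonal(i):
    # Differs from the i-th enumerated procedure at argument i.
    return 1 + proved_totals_enumerator(i)(i)

# If diagonal were provably total, it would appear at some index k, and
# then diagonal(k) == 1 + diagonal(k), which no natural number satisfies.
# That is the contradiction the proof exploits.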

uh

While I don't get your notation, I know that to use a diagonalization the elements have to be infinite.

If you posit that propositions are finite in length then you can't use that argument.

One might point out that while an infinite proposition might be true, one could never prove it unless it succumbs to induction - and that's only possible if there's a finite form of the same proposition.

Thomas claimed that you had a different argument

His claim was that you allowed your proofs to contain what one might call unconstructable reals. That you considered them the completion of infinite processes that involve randomness (and thus the program generating them can be finite).

re Thomas Claimed

No I didn't but I'll clarify.

DL is able to directly prove that proofs can't be enumerated. The argument is borrowed, I've read, from an observation Curry made in a communication to Goedel. (Hope I remember that correctly.)

If you could enumerate the proofs, that means you could write a procedure that given a natural number, 0, 1, 2, .... the procedure would return you the Nth proof. And furthermore, that every possible proof would eventually show up in this list.

Curry pointed out, and Hewitt formalizes, a contradiction that arises if proofs could be enumerated that way.

------------------

Hewitt has also mentioned that terms can be abstracted as actors and the existence of actors that do things like compute infinite lists of random bits proves there are uncountably many possible actors, hence terms.

I've elaborated on why that is potentially a practical observation.

------------------

DL does not insist that terms are abstracted as actors. There are not axioms that describe how to do this. (Perhaps there is a ("sub-")theory defined somewhere about this but not in the papers under discussion.)

DL is open and its axioms are general enough to permit terms to be abstracted as actors, which is another way to see that propositions (hence proofs) are not countable.

As I understand it, that paradox is invalid

A diagonal has to be a proposition of infinite length, but a countable infinity of propositions would only have propositions of finite length, so it's no contradiction that the diagonal is not a member.

If you start with the assumption that propositions can have infinite length, then you could use a more abstract diagonal to prove that they have the order of the continuum.

I've addressed this before.

Explaining diagonalization

I'm sitting in a parking lot at Safeway and I'm almost frustrated enough to post a top level "start a new topic" post to explain Cantor's diagonalization to Thomas Lord and Hewitt.

I know how all this works because decades ago (or was it only a decade and a half) there was a 13-year-old troll on USENET's sci.math who misunderstood Cantor and was posting abuse every day saying that mathematics was wrong. Since sci.math was unmoderated, all people could do was to try to explain to him how he was wrong.

So between that and a short book on the subject I came out with a solid idea of how countable infinities work and differ from C, the continuum.

I'm going to assume that you know the outline. Start with the set of natural numbers, then generate its power set: all of the subsets, including the infinite ones.

From this, each number on the line from 0 to 1 can be constructed in bijection with the subsets: for each subset, the number is the sum of 2^(-1-n), where n ranges over the natural numbers in that subset.

To this you can apply the diagonalization argument to show that it can't be put in a one-to-one correspondence with the counting numbers.

But stop for a second and imagine what would happen if instead of constructing reals with sums of 2^(-1-n) you lost the -1 and the minus sign and constructed sums of 2^n. That's what this 13-year-old did. After all, that's pretty much what binary integers look like.

Where he got confused is that while this DOES generate all of the natural numbers, it also generates another set: it's not a continuum because of the natural numbers, it's a continuum because of all of the numbers that are infinite that it also generates.

Say that we're displaying a number backwards, from 2^0 to 2^1, 2^2, ...

0000... where "..." contains an infinite number of 1s is an infinite number. 1000... is also an infinite number if its continuation has an infinite number of 1s in it; it doesn't ever matter where they are. And so on.

If you want to generate only natural numbers then you have to restrict your subsets to only be FINITE subsets.

The difference between the power set and the set of all FINITE subsets is a set that, interpreted this way, contains only infinite numbers; it is uncountable.

If instead of using natural numbers as our base set, we took finite mathematical propositions as our base set, then the set of all finite subsets of this generates a countable set. But the power set generates a continuum.

A USELESS continuum. Every finite proposition is in a countable set. The difference set between the continuum and the finite sets of propositions consists only of infinite sets of propositions. If you wanted to prove proposition AND proposition AND proposition etc., you never could.

Like the infinite natural numbers, you can create a set with the order of the continuum, but it turns out to be an extension that loses the useful properties of the smaller set.

The useful integers are finite and countable.
The meaningful propositions are finite and countable.

To really explain this I'd have to give examples of counting sets.

I leave it to the student to refresh themselves on this.
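As a worked illustration of the split described above, here is a small sketch in Python (my encoding, for illustration): finite subsets of the naturals correspond exactly to natural numbers via sums of 2^n, which is why they are countable, while an infinite subset would need infinitely many 1 bits and has no code at all.

# Finite subset S of N  <->  the natural number with bit n set iff n is in S.
def encode(finite_subset):
    return sum(2 ** n for n in finite_subset)

def decode(code):
    return {n for n in range(code.bit_length()) if (code >> n) & 1}

assert encode({0, 2, 5}) == 37        # 1 + 4 + 32
assert decode(37) == {0, 2, 5}
# The subsets left out -- those needing infinitely many 1 bits, like the
# set of all even numbers -- are exactly what pushes the full power set
# up to the continuum.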

re explaining diagonalization

DL doesn't have sets of the kind you are used to.

For example, I can tell you that "the powerset of the set of all natural numbers" is not a thing that exists in DL.

In fact, "the set of all natural numbers" is not a thing that exists in DL.

Natural numbers are objects whose type is ℕ

Sets of natural numbers are objects whose type is Set◁ℕ▷

The axioms for sets in DL don't provide any way to define the "universal" set of some type.

Sigh, you learned nothing

I might as well be typing at a rock.

Do you concede that your idea of "proposition" can be infinitely long?

If so it was trivially uncountable. Also trivially useless as a proposition.

If you don't admit those things, then you never understood the terms you use. I see little sign that you or Hewitt do.

I give up. How long until everyone else does?

re sigh

I might as well be typing at a rock.

Your comments are very confused.

I give up.

Good.

Of course he was talking about actors not propositions.

But are you claiming both? That actors and propositions can be made of an infinite number of symbols AND that the symbols can include unconstructable reals?

Diagonal and ProvableComputableTotal

Diagonal is a function, so it is definitely in ProvableComputableTotal. The assumption that it is different from any other function is wrong.

If ProvableComputableTotal enumerates all finite streams of symbols of the Unicode alphabet used in constructing programs, and rejects any syntactically invalid programs, then:

ProvableComputableTotal.[Diagonal]

Is simply true, with no contradictions.

re Diagonal and ProvableComputableTotal

ProvableComputableTotal.[Diagonal]

Is simply true,

The codomain of ProvableComputableTotal is [ℕ]↦ℕ, not Boolean.

It's a predicate too, somehow

I was confused by that myself, but Keean's one dot away from using the same notation as Hewitt's "¬ProvableComputableTotal[Diagonal] because..."

Unserializable proofs

Diagonal is a function, so it is definitely in ProvableComputableTotal. The assumption that it is different from any other function is wrong.

Suppose it's in there, like you say. Then Diagonal is provably total, and it's ProvableComputableTotal.[i] for some i. Now if we try to compute Diagonal.[i] according to the definition of Diagonal, the only possible integer result must satisfy the equation (n = n + 1). So there can't be any result but nontermination -- but Diagonal is provably total, so it can't be nontermination either!

Hence, we get a contradiction, and the proof by contradiction continues to go through as planned.

If ProvableComputableTotal enumerates all finite streams of symbols of the Unicode alphabet used in constructing programs, and rejects any syntactically invalid programs [...]

The proof assumes nothing about how proofs are encoded. If you add a new assumption that proofs are encoded as finite strings in a finite alphabet, you get a contradiction.

Among all the assumptions we could reject to avoid that contradiction, I think the two most contentious assumptions are these:

  • Proofs can be encoded as finite strings in a finite alphabet.
  • If we know there's a proof of something, we know it's true.

Formal mathematics tends to reject the second assumption. Classical Direct Logic rejects the "finite alphabet" part instead.

I prefer Classical Direct Logic's choice here. The first assumption has limited value since I'm willing to use network addresses and cryptography, and I'll need something like the second assumption if I want to "eval" newly received proof objects on the fly.

Enumerable != The Enumerating Function is Total.

I don't think I have understood this correctly:

Diagonal[i:ℕ]:ℕ ≡ 1+(ProvableComputableTotal.[i]).[i]

ProvableComputableTotal is type (N -> Proposition), so given (i:N), it returns the (i)th proposition. Assuming . takes precedence over +, we must assume that all propositions take a single argument of type N and return an N, which is no problem, and then we add one to it?

In the case that Diagonal is in ProvableComputableTotal, then for some (i) ProvableComputableTotal.[i] is Diagonal, so in that case the above reduces to:

Diagonal[i:ℕ]:ℕ ≡ 1+Diagonal.[i]

So the paradox would appear to be that if Diagonal is provable total, then it is not provable total? The problem is "provable total" is an impossible function due to the halting problem?

Is it possible that ProvableComputableTotal can enumerate all total-functions, but is itself not a total-function? Then the definition of Diagonal is not total by definition (as it calls a non-total-function, and total-functions must only call other total-functions). It would seem enumerable and total are different things. Isn't the above just a proof that enumerable and total are different concepts?

Further, total functions are a subset of partial functions (they are partial functions that just happen to be totally defined). Partial functions are countable (as that is pretty much the definition of a Turing machine), therefore total functions are countable (a subset of a countable set is countable).

Given that, it is easy to see that ProvableComputableTotal must be a non-total function; but that does not stop the set of total-functions being countable. Therefore enumerable != the enumerating function is total.

Edit: In fact I think it is possible to enumerate all functions, and that a subset of those are total, so must be enumerable, but we cannot know which due to the halting problem.

Misreadings

Edit: I was correcting some misconceptions, but it looks like you've got it now. Reformulating my response...

It would seem enumerable and

It would seem enumerable and total are different things. Isn't the above just a proof that enumerable and total are different concepts?

"Recursively enumerable set" is a term with a particular meaning. I think you'd fare better looking it up than listening to me try to describe it in full, but basically: If an algorithm can generate a sequence that eventually lists every element in the set, but doesn't list any element outside the set, then the set is recursively enumerable.

So the paradox would appear to be that if Diagonal is provable total, then it is not provable total? The problem is "provable total" is an impossible function due to the halting problem?

The point of the proof is to show that Classical Direct Logic's theorems can't be recursively enumerated. By way of contradiction, the proof supposes that they can be. ProvableComputableTotal uses that enumeration of theorems to do its own recursive enumeration of provably total procedures. (Actually, I'm not quite sure how we're supposed to leap to having procedures if all we had before were the theorems, but I think it might involve parsing the theorem's syntax and reinterpreting it.)

Is it possible the ProvableComputableTotal, can enumerate all total-functions, but is itself not a total-function?

The proof Hewitt posted says it's total, but it says that without much explicit justification. I think the idea is that if you call it with index 4, it runs the recursive enumeration of theorems until it sees a fourth termination proof, which is always going to happen sooner or later.

Partial functions are countable (as that is pretty much the definition of a Turing machine)

I think they won't be countable in this system, if only because of embedded Actors.

Given that it is easy to see that ProvableComputableTotal must be a non-total function, but that does not stop the set of total-functions being countable, therefore enumerable != the enumerating function is total.

Hmm, I'm not sure I follow, but don't make the mistake of leaping from "countable" to "[recursively] enumerable."

The set of Turing machines is countable and recursively enumerable. The set of Turing machines that diverge is countable but not recursively enumerable.

Terminology

So I am saying that ProvableComputableTotal is countable but not enumerable then, because we can enumerate all partial functions but we cannot define a terminating filter to find all total functions (as it is the halting problem).

If the partial functions of a Turing machine are countable, and as ActorScript is implementable on a Turing machine, then they are countable (but may not be enumerable). If we cannot implement DL on a Turing machine, it's a bit useless for any computational use.

Does CDL not allow partial functions then?

In any case, is this relevant to the larger argument? We can enumerate all the partial functions, which includes all the total functions. In which case we can Godel-number all the partial functions, which means all the total functions have Godel numbers; we are just not sure from the Godel number whether the function is total or partial.

Does that not mean the important property is countable, not enumerable?

gah I figured this out

In Hewitt's world every possible function has a distinct symbol, not just functions that you can specify in finite form, so just the fragment of a string f where f names a function has the order of the continuum.

The argument about propositions is a fallacy though, he's assuming that he starts with a list of proofs where every total function (over naturals) is in the list, and tries a diagonal on it. But who would think that you could have a proof for every unspecifiable function? That's an odd assumption.

He's not talking about propositions in the normal sense, he's not talking about symbols in the normal sense.

He's made a new definition of both that is so expansive that he's not reasoning about mathematics as normal people understand it, because in it mathematical objects aren't specified by their properties and reasoned about; they're all mystically addressed directly.

For him the string "f" specifying one function can't be stored in finite form. Not even a single function can be named in finite space.

re terminology

By hypothesis, ProvableComputableTotal and Diagonal are procedures, not functions.

Does CDL not allow partial functions then?

A function f:ℕ ↦ ℕ is in some sense a partial function of the type ℕ ⨆ ℝ.

The DL paper gives typing rules for functions but does not axiomatize them otherwise.

Suppose we define a function f in DL, in terms of a non-total procedure.

One implication is that if we construct an object x in the domain of f, we can not automatically say we have constructed f.[x]. (This would require proof.)

What if we can in fact prove that the procedure defining f does not terminate for input x? Does that mean we have proved f is "not a function?"

The basic answer appears to be "no" for the reason that there are no axioms of totality for functions.

Conceptually, we could say that "f" is a function but our definition of it is incomplete.

Diagonalise partial functions

So we can diagonalise partial functions, and partial functions are countable (irrespective of support for Real numbers), by numbering their bit-vector representations.

enumerating partial procedures

You can enumerate all deterministic procedures (including non-total procedures).

Given Diagonal ≡ DetProcedure.[i]

DetProcedure.[i].[i] must diverge.

You can enumerate the definitions of non-deterministic procedures but some of these denote an uncountable number of actual procedures.

Procedures are their definitions.

This is where we disagree: there are not uncountably many procedures. A procedure is its definition. There are procedures that return possibly infinite streams, but returning a different stream is not necessarily a different procedure. A procedure that returns a random bitstream is a single procedure.

Well Keean

One of the "proofs" in this thread assumed that there is a function for every possible ℕ->ℕ mapping.

That's not a list of functions that can be defined in finite source, that's a whole order of the continuum. The real numbers can be mapped to a subset of these mappings.

All that crap about mapping reals to unfinished streams has nothing on this. Hewitt assumes that the symbol for a single function has to have as much information as a completed, unspecifiable real.

This is not mathematics as we're used to, because here every function has a concrete address and the functions are not limited to definable ones. This is way off the deep end.

What they're calling math is much much larger than math. They don't name functions or numbers and reason about them, they mystically address them directly.

mappings N -> N are countable?

I think functions from N -> N might be countable. If we start with all functions that only change the first element of N, there is a single function which is the identity. If we consider functions that only change the first two you only have swap and id.

So for functions that only change the first 'm' elements of N, leaving the rest unchanged, there are (m!) permutations. So if we only consider functions that change the first 'm' elements we have:

sum (n = 1 .. m) n!

possible functions. As m increases the sum is ever increasing but always finite, so doesn't this make the members of N -> N countable? For any finite 'm' we know how many unique functions mapping [0 .. m] -> [0 .. m] there are.
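To make the enumeration concrete, here is a sketch of it in Python (my rendering of the scheme above); note that every function it yields has finite support.

from itertools import count, permutations

def finite_support_permutations():
    # Stage m yields the permutations of [0 .. m); read each tuple as a
    # function N -> N that fixes every n >= m.  Each stage is finite
    # (m! tuples), so the whole stream is countable.  (Some functions
    # repeat across stages -- the identity, for one -- but repeats
    # don't affect countability.)
    for m in count(1):
        for prefix in permutations(range(m)):
            yield prefix  # i -> prefix[i] for i < m, i -> i otherwise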

They can't be

take the real number line from 0-1

Excuse me for using a notation that's more familiar to me

(define (real_to_natural_function r)
    (lambda (i) (nth-digit r i)))
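;; nth-digit is an assumed helper: (nth-digit r i) stands for the i-th
;; digit of the expansion of r.  For an arbitrary real it has no finite
;; implementation; the mapping is mathematical, not computable.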

There, now you can turn the reals into unique ℕ->ℕ mappings, and of course the reals aren't countable.

What's wrong with your countable set is

that it misses an entire class of functions.

You NEVER get to a single function that changes an infinite number of inputs.

With that caveat, yes, you do get a countable set. It's a subset in the same way that the set of FINITE subsets of the naturals is a countable set, while the whole power set gives you the continuum and a way to construct the reals.

I talked about this in another post.

There is a continuum of functions that change an infinite number of inputs, and you don't generate A SINGLE ONE OF THEM, EVER.

Not one-to-one

Yes, you are right, it's not a one-to-one correspondence. Let's stick to programming languages that can describe programs that can actually exist.

re a procedure is its definition

A procedure that returns a random bitstream is a single procedure.

It is hard to reconcile that with the denoted range of the procedure:

GenF.[] → Memoize.[[x:ℕ] → (0 either 1)]

GenF always returns a procedure actor. With probability 1, no two of the procedure actors it returns are the same. The procedure actors that are semantically permissible to return from GenF are uncountable.
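For anyone not reading the ActorScript, here is a rough Python analogue of GenF (the names are invented, and Python's random module stands in for the nondeterministic choice):

import random

def gen_f():
    memo = {}  # private to the function being returned
    def bits(x):
        # Choose the bit at index x randomly on first query, then stick
        # to that choice forever -- the memoization.
        if x not in memo:
            memo[x] = random.randint(0, 1)
        return memo[x]
    return bits

f = gen_f()
assert f(42) == f(42)   # memoized: the same index always answers the same
g = gen_f()             # an independent function; with probability 1 it
                        # differs from f at some index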

This is the first time I've seen a memoized version used

Yes, that changes it.

The old version had no state.

Weird Semantics

Well, you can say I am not allowed to look at the source code for a function when I compare them for identity, yet I can see the source code above.

So you return a function that randomly chooses 1 or 0, but because of memoisation sticks to that random choice for any given number.

So you memoise a potentially infinite number of functions?

Actually it does not matter if you compare the source code, or the binary object code output by the compiler. Hence we do not have uncountable functions, as we can count the binary object code.

But his functions can't be defined in finite code.

I just finished explaining that.

You will never finish the comparison.

Use the source

I can compare the source code, which is finite.

An expandable arbitrary precision version is realizable

It occurs to me that you can still enumerate all functions out to an arbitrary high N, and if you use this sort of stream thing, you can change your mind and add more entries later.

If you get rid of this uncountable stuff, is what's left useful?

er, except for his non determinist ones

where the code doesn't predict output anyway.

Non determinism

I think non-determinism is fine, I have no problem with (0 either 1), but infinite memoisation is just a fantasy; you could never implement this. This is not a programming language.

re infinite memoisation

infinite memoisation is just a fantasy,

We usually model our programming languages as Turing machines.

You are suggesting that all of computer science be refounded exclusively on something less powerful, such as linear bounded automata.

That might be interesting but you will have to do a lot of work to make a case that it's a useful approach.

Intel have done the work.

Intel have already done the work for me. That web browser you are using to post, and this site itself, are the result of finite state machines. There are no infinities anywhere in there. The Turing machine is bearable because it only has an 'infinite' tape, with finite symbols. We can allow this because it models the fact that a computer can always have more (but a finite amount of) memory.

To get into uncountable functions: it's just not possible to implement them, so you will write some code, and think you have done a wonderful job, and have no model to understand why it fails. It is a poor model for programming if it allows unrealisable programs to be expressed in its type system. In fact it is unsound, as the definition of soundness is that the type system prevents programs that will fail from compiling. Your memoisation function will fail, and so a type system that accepts it is unsound.

people do things

Intel ... There are no infinities anywhere in there.

Nothing in the specification or implementation of an Intel processor imposes an upper bound on the amount of storage it can access.

Unbounded but finite.

The processor is limited to 64-bit addresses due to the width of the address register in the programmer's model, although real chips currently only use the bottom 48 bits. This limits the amount of physical memory directly connectable. You could use paging tricks to page in more, for example using (2^64 - 1) bytes to select an external address, and write a byte to one of 2^(2^64 - 1) addresses. However there is never infinite memory, just an unbounded finite amount.

In reality people are not going to build custom hardware like that, so for normal use, the limit is the number of memory slots on the PC motherboard, and the maximum memory addressable in each slot. The dual Xeon motherboards that can address 96Gb have probably the largest addressable memory commonly available at the moment.

unbounded PCs

A PC can use an unbounded number of mass storage devices. Humans can manufacture storage devices much, much, much faster than a PC can fill them. A thorough model of what a PC can do is, very literally, a Turing machine.

You don't gain anything by saying "that's not infinite, that's unbounded!" In constructive math, infinities are unbounded constructions.

At this point, you are complaining about the word choices of the technical vocabulary of math, rather than thinking about what the math is actually saying.

Uncountable is worse than unbounded, by the way

If you're just unbounded then you don't have infinite elements, just unbounded finite ones. If you have uncountable elements then you have mostly infinite elements. :/

Countable infinities are made of finite elements. When you say "infinity is not a number" then you're in a countable realm. When you say "almost all of my numbers are infinite" then you're in the uncountable realm.

re countable, uncountable

If you have uncountable elements then you have mostly infinite elements.

If we define a suitable equivalence relation, each real number can be identified with an infinite list of bits.

The Real procedure in Hewitt's paper returns an object that occupies finite *space* but that satisfies a useful set of axioms for an infinite list of bits. The GenF function I gave has similar properties.

We can not exhaustively enumerate or even just count the lists Real.[] might return. They are uncountable.

finite elements

Mathematics is agnostic as to whether the universe contains a finite or an infinite amount of information.

When you say "infinity is not a number" then you're in a countable realm. When you say "almost all of my numbers are infinite" then you're in the uncountable realm.

Those are OK hand-waves but of course those concepts can be made rigorous and programs can compute with certain objects drawn from an uncountable type.

Hand waves

I deliberately gave rules of thumb; they help intuition.

Direct Logic or whatever you call it needs to split off a useabl

Direct Logic or whatever you call it needs to split off a usable subset - or, alternatively, it needs to be totally limited to a usable subset - it's not clear to me if the continuum part buys you anything. It needs normal mathematics, Mathematics for mortals and machines; it needs propositions that can be proved or disproved.

Mathematics is agnostic as to whether the universe contains a finite or an infinite amount of information.

It's possible to prove a proposition that has a finite amount of information.

It is impossible to prove a proposition that has an infinite amount of information - if you think you've done so, then you've actually found a finite form for that proposition, i.e., it never actually had an infinite amount of information.

I'm going to use the word "realizable" to indicate mathematical objects that can be specifically referred to with some sort of finite specification. I'm going to use the word "unrealizable" to refer to all other mathematical objects, those objects from continua each of which requires an uncompressable infinity of information to refer to.

By defining mathematics as having propositions and proofs and functions that can not be specified without an infinite amount of information you've created something new, something with the form of mathematics but which is not mathematics.

Now it's not unusual for mathematics to reason about unrealizable objects, but it does so by relying directly on realizable propositions and only realizable propositions.

By defining a "mathematics" that is full of unrealizable propositions, unrealizable functions, unrealizable "actors" you've expanded "mathematics" to something new that, for the most part, you can never use. If you want to use this mathematics, you'll have to reason about it indirectly, without actually using it, in just the same way that people reason about unrealizable real numbers without specifying any individual unrealizable member of that set.

Or maybe you don't need that unrealizable part at all. Maybe there isn't a use for unrealizable propositions. Can anyone find a motivating example?

i thought you quit, Josh.

Direct Logic or whatever you call it needs to split off a useable subset

Since you know next to nothing about DL, its motivations, the history, what the theory says, and why, and since by your own admission you can not follow discussions of it, why are you opinionating on it?

it's not normal mathematics either because

he's combining the computer science idea that everything has an address and can be referred to directly with uncountable things.

And then he's doing metamathematics talking about these propositions that, as you said, are useless because they can't be realized.

But metatheory about things you can't realize is fine. So maybe there's something useful in it.

But Hewitt doesn't distinguish between the various classes of realizable and unrealizable when he posts, he just says "Actors" this and "Actors" that, so it's confusing and makes you wonder if he's confused.

Existing on the roadmap

So I am saying that ProvableComputableTotal is countable but not enumerable then [...]

[...many paragraphs later...]

Does that not mean the important property is countable, not enumerable?

ProvableComputableTotal only exists as part of the contradiction we build during the proof-by-contradiction. If you agree with the conclusion that the theorems/proofs quantified over by CDL can't be recursively enumerated, then ProvableComputableTotal doesn't exist.

It sounds like you would still disagree with Hewitt's use of uncountable syntax. Frankly, I don't know why Hewitt's using uncountable syntax. (I know why I am: I attribute each piece of syntax to an authoritative person/organization who maintains its meaning, and I don't intend to restrict these authorities to being countable.)

[...] because we can enumerate all partial functions but we cannot define a terminating filter to find all total functions (as it is the halting problem).

Hmm, that reasoning doesn't work for me, since we can recursively enumerate the halting Turing machines. I find the ProvableComputableTotal argument more convincing.
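For instance, by dovetailing. Here is a sketch in Python, where halts_within(i, k), meaning "machine i halts within k steps", is an assumed primitive; it is decidable by step-bounded simulation, so every call returns:

from itertools import count

def enumerate_halting(halts_within):
    # Dovetail over (machine, step budget) pairs so that every halting
    # machine is eventually listed, without ever running any single
    # machine for an unbounded number of steps.
    seen = set()
    for n in count():
        for i in range(n + 1):
            if i not in seen and halts_within(i, n - i):
                seen.add(i)
                yield i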

If the partial functions of a Turing machine are countable, and as ActorScript is implementable on a turing machine, then they are countable (but may not be enumerable).

The proof is showing it's not possible to recursively enumerate the theorems/proofs that CDL lets us quantify over. For any given closed formalization (e.g. a finite implementation on a Turing machine), the theorems/proofs we can construct are recursively enumerable... but we can still quantify over more theorems/proofs than that.

I think this is where you'd say those extra theorems/proofs don't exist. I can't say that's wrong, but perhaps you could get on the same page as Hewitt if you thought of these as features that "exist" on the project roadmap, rather than existing in any one particular version of the implementation.

Does CDL not allow partial functions then?

Hewitt gave Factorial as an example of a partial function.

The proof Hewitt posted says

The proof Hewitt posted says [ProvableComputableTotal] is total, but it says that without much explicit justification.

It hypothesizes that ProvableComputableTotal is computable and total in order to obtain a contradiction from that hypothesis.

ProvableComputableTotal

ProvableComputableTotal is not computable, as it is equivalent to the halting problem. If you do this instead with ProvableComputablePartial (a superset of PCT) you can diagonalise this without encountering the problem.

you can diagonalise this without encountering the problem.

Yes, if you are talking only about deterministic procedures.

If i is the number of the resulting Diagonalize, then Diagonalize.[i] must diverge.

Even non-deterministic procedures have a definition

If we have a procedure that takes a number of different actions based on a random primitive, that makes it non-deterministic, but it is still a single procedure, with a single definition.

Constructing ProvableComputableTotal

It hypothesizes that ProvableComputableTotal is computable and total in order to obtain a contradiction from that hypothesis.

What the proof rejects is the original hypothesis that the theorems/proofs of CDL can be computationally enumerated.

I don't see ProvableComputableTotal being quite the same as that hypothesis. ProvableComputableTotal is just one alleged consequence of it, and I only barely follow the reasoning in between. Here are two ways I'm thinking the gap might be filled in:

  • Perhaps I can parse a termination theorem to extract the definition of the terminating procedure, and then perhaps I can construct the procedure from that definition.
  • Perhaps I can somehow recursively enumerate all procedures along with their corresponding termination propositions (which may or may not be provable). Then perhaps I can test equality of these propositions to the recursively enumerated provable theorems.
    • (This one seems unlikely. I think procedures are supposed to be uncountable, right?)

the deliberately incomplete structure of CDL

AFAICT,

CDL gives a theory of types with primitive types Boolean, ℕ, Sentence, Proposition, Proof, and Theory. From previously constructed types, new types can be constructed for disjoint unions, functions, and so forth.

Axioms of construction are given for ℕ.

A strong axiom of induction is given for ℕ and as a result it has but one model.

Axioms of construction for typed sets are given but there is no set-specific strong axiom of induction. Sets are established by a characteristic function.

(Therefore, btw, I was wrong earlier when I said in a nearby thread that DL lacked a universal set for ℕ, but I think I was still correct that it lacks a universal set for Set◁ℕ▷. We don't have a provably universal powerset of the naturals because we have no categorical induction axiom for Set◁ℕ▷. We do have a universal set for ℕ itself because the categorical induction axiom for naturals lets us prove that all naturals are found in the set given by the characteristic function that always returns true. Hope I have that right now.)

The paper doesn't give (or I missed) axioms for Boolean but these are not too difficult to come up with. :-)

Rules of construction are given for typed expressions and sentences, which can be abstracted to form propositions. No axiom of induction is given for these.

A conservative but natural set of rules of inference is given.

Here is one big point:

Unlike what people are used to:

CDL does not give a strong axiom of induction for types. The set of types is open-ended. You can extend DL with new types.

CDL does not define any particular construction rules for functions, families, etc. You can invent new ways to introduce objects of this kind.

CDL is rigorously open-ended/incomplete.

One crude reason you can not enumerate the proofs of CDL is because CDL does not give you enough specification of proofs to be able to do that! (At the same time, you can enumerate all of the natural numbers in CDL, because the axioms for natural numbers give you enough to do that.)

The core of CDL is deliberately ambiguous in an unusual way. It is not simply that there are multiple possible models of CDL. The ambiguity is deeper than that. There is no model for CDL -- only for CDL plus additional axioms (which we could call the "sub-theories" of CDL).

The "propositions of CDL are uncountable" because it is possible to define theories within CDL in which, for example, we have a kind of constructive way to choose a truly random real number.

If you find uncountable propositions unpalatable, stick to some other sub-theory of CDL.

CDL is not the kind of "foundation" wherein we have specified, once and for all, all the axioms we need.

CDL is the kind of "foundation" wherein we have axioms atop which we expect to add additional axioms and definitions.

Yes, but the gap

Reading this, I see I'm already on pretty much the same page as you are, and I hope others can take heed...

But what does all of that have to do with the reasoning gap I'm concerned about?

re Yes, but the gap

what does all of that have to do with the reasoning gap I'm concerned about?

I take you to refer to this alleged gap:

What the proof rejects is the original hypothesis that the theorems/proofs of CDL can be computationally enumerated.

I don't see ProvableComputableTotal being quite the same as that hypothesis.

I assumed you were just momentarily confused since the hypothesis about ProvableComputableTotal is exactly a statement that the theorems/proofs of CDL can be computationally enumerated.

Why do you believe "ProvableComputableTotal [is not] quite the same as that hypothesis."?

ProvableComputableTotal

ProvableComputableTotal enumerates provably total procedures, not theorems/proofs. If all we assume is a way to recursively enumerate the theorems/proofs, then where do we get the procedures from?

re ProvableComputableTotal

If all we assume is a way to recursively enumerate the theorems/proofs, then where do we get the procedures from?

Any simple axiomatization of a Turing-complete, enumerable type of procedures of type [ℕ]↦ℕ.

We could create a sub-theory for a very restricted lisp-dialect, for example.

I think that just makes

I think that just makes things more complicated. If you're talking about defining a closed theory, then don't we have to prove that it lets us define all the provably total procedures of CDL (completeness)? Secondly, how do we turn theorems in that sub-theory into total procedures that we can apply to get contradictions in the main theory (soundness)?

I'd rather avoid coming up with a new programming language with full completeness and soundness proofs (if that's even possible) just to fill in a gap in one proof. Instead, how about using the built-in procedure-defining mechanism Hewitt's been showing us?

I half-expect there's some way to extract a total procedure when we have a termination theorem. If so, then how do we recursively enumerate those termination theorems, when all we've assumed is that we can recursively enumerate the theorems of CDL? After all, we can't trust a subset of an r.e. set to be r.e.

you only need some axioms (re I think that just...)

I'd rather avoid coming up with a new programming language with full completeness and soundness proofs

You don't need any of that to fully formalize the proof from Church. Just a few uncontroversial axioms.

Damn it, I figured it out ... a fallacy and ?metahypermathematics

Since I didn't know the symbols you were using I assumed a meaning from what you were claiming the result was... I assumed you were taking a direct route to what you wanted to prove.

Ok I understand now, and I am sure this is a fallacy.

Your assumption that a list of all provably total functions from ℕ->ℕ covers all functions from ℕ->ℕ is incompatible with the assumption that proofs are of finite length!!!! You can't even specify those functions in a finite form let alone prove that they're total!

Doh. That says nothing about whether provable propositions of finite length are countable, because your set of propositions wasn't, as you intended, a subset of finite provable propositions; it's from a larger set. You just assumed that every possible total function has a proof that it's total. Doh doh doh.

Of course a list of all mappings from ℕ->ℕ has the order of the continuum; that was obvious because a subset of those functions can be mapped to the real number line [take f(n) to be the nth digit of the real].

If you're claiming that every possible ℕ->ℕ function is an actor, then you're claiming that actors can't be specified in a finite form at all. They're not programs at all. Well, yes, if you start with the assumption that your set of functions forms a continuum, then you can't specify an arbitrary unique one any more than you can specify an arbitrary unique real.

But since you can't even specify members of a continuum in finite form, you can't make proofs about them in a finite form either!!!

Statements that you can't make aren't theorems! This isn't any different than if you replaced functions with reals.

God!

In your system, just the fragment of a string "f", where f is a function that you want to talk about, requires the order of the continuum; it requires more information than a member of a countable infinity.

But there has never been a mathematical proof addressing a specific object that can't be specified in some sort of finite form. And if all you specify is that there exists an f (which is what we do), then you haven't actually had to address it.

What you call "mathematics" is something that has never existed, can never exist.

Also the fact that you assume that there can be a proof of totality for every function is an assumption of completeness that isn't warranted. And that's a separate issue!

Now if you want a metamathematics that isn't about mathematics, but about a hypermathematics that is a superset of actual mathematics, that's interesting, but it has all of these strange properties only because you assumed your way into a larger territory than what mathematicians have ever been able to work with, by definition!

What you call mathematics is uhm metahypermathematics?

Goedel's first theorem

Goedel's first theorem meta-mathematically proves the incompleteness (or else inconsistency) of any axiom set strong enough to do arithmetic.

That seems reasonably safe to say, although one has to watch out for what one means by the phrase "do arithmetic". So far I haven't gotten the impression we were having a problem over the first theorem, though, so it may be safe to let sleeping dogs lie, on that point.

Goedel's second theorem, which is really kind of a hand-wave pseudo-theorem, imagines a logic in which the first theorem is not done meta-mathematically, but is instead done directly in the system we are talking about.

The technical part of this, I have no problem with; it's one of the key points brought out by my blog post. The part about "hand-wave pseudo-theorem" is, on consideration, interesting. If someone wants to believe the second theorem is on shaky ground, they'd likely favor such a characterization. If someone sees the theorem as an extremely robust result, i.e., its validity not sensitive to minor variations in formulation, they would be likely to use different words that almost, but not quite, mean the same thing.

To accomplish this, the second [theorem] requires that we construct a sentence which can formally prove, within the system, that one of its terms [is] its own integer "encoding".

Note particularly that I've corrected "its its" to "is its"; though I'm confident enough to make the correction, I'm also wary enough to flag it out.

The system doesn't have to "know" which arithmetic function coincides with the system's own reasoning about arithmetic functions.

I conjectured, when you had that violent allergic reaction to my blog post on the subject, that the trouble might have been caused by my linking you into the middle of the treatment so that you missed the earlier part that explained the objectives of the treatment. I'm now dubious of that conjecture. It now seems more plausible that the trouble arose because I chose to explore those facets of the situation that would clarify things for me, and the set of facets needing clarification to achieve clarity for you, whatever overlap there is, involves some facets outside the set I needed. I focused on minimizing the assumptions needed, and minimizing the reasoning used; and I was particularly interested in avoiding assumptions about how the system being studied actually works. Even when treating the second theorem, which (as you point out) moves key reasoning into the system being meta-mathematically reasoned about, the assumptions I used are of the form 'if the system deduces this, then it also deduces that', which is indifferent to how the system deduces either. That treatment asks that if the system deduces that the arithmetic function confirms a proposition (reminder: although the function coincides with the system, the system doesn't have to "know" that) — if the system deduces that the function confirms a proposition, then the system confirms the proposition. Were that not so, the system would be deducing things about the function that aren't true.

The thing that's really striking about Gödel's approach, the reason he's so very famous, is its robustness — that he completely cut the ground out from under any attempt to make self-proof of consistency "safe" by any type-like means, by pointing out that the system under study doesn't have to reason self-referentially at all. It only has to reason about the behavior of arithmetic functions. Entirely non-self-referentially.

Gödel's incompleteness "proofs" are incorrect because of types

Gödel's incompleteness "proofs" are incorrect because they make use of a nonexistent "self-referential" proposition allegedly constructed using a nonexistent (because of types) fixed point. The confusion is that although strings can be encoded as Gödel numbers, sentences and propositions cannot be so encoded because of types (and also because there are too many of them).

The Church/Turing inferential incompleteness theorem for ℕ and similar closed mathematical theories is very robust. Note that the Church/Turing proof of inferential incompleteness does not make use of nonexistent "self-referential" propositions.

re Goedel's robustness

The thing that's really striking about Gödel's approach, the reason he's so very famous, is its robustness

If Goedel were still with us, I'm not sure he would agree with you (and this has nothing to do with Hewitt per se).

In the 30s he hand-waved the second theorem and promised a rigorous presentation later. He never delivered.

By 1964 he's nodding to Turing for having produced a more general and simpler result and is at least a bit clearer that systems can not provide self-consistency proofs *of a very particular kind*.

Hewitt's account of the

Hewitt's account of the history of these ideas would necessarily be colored by his fundamental misconceptions.

Shutt: put up?

would necessarily be colored by his fundamental misconceptions.

That tautology applies to anyone and everyone. It can be used as an underhanded way to discredit someone without actually saying anything substantive about them.

If you are going to suggest that Hewitt has "fundamental misconceptions" I hope you will lay them out specifically and clearly.

I wrote a long and detailed

I wrote a long and detailed post, which you completely ignored in favor of seizing on the high-level observation at the end of it.

re: long and detailed post, which you completely ignored

I hope you have seen I gave a detailed reply to that in reaction to your fair complaint.

thx

I have seen, and appreciate, and mean to reciprocate. (I'm painfully aware I need to budget my time on LtU better than I've done just lately.)

History requires viewpoints

History requires viewpoints. The view from "nowhere" does not exist.

However, histories can be improved!

In particular, suggestions for improvement are greatly appreciated for the histories in Inconsistency Robustness in Foundations.

John Shutt: what do you mean about my "fundamental misconceptions"?

John,

Exactly what do you think are my (in your words) "fundamental misconceptions"?

Thanks!
Carl

A fair question fairly

A fair question fairly asked, and for the first time in months I find myself inclined to reply to you. Also a question whose answer should not be dashed off quickly or casually.

detailed reply to John S. (re Goedel's first theorem)

The technical part of this, I have no problem with; it's one of the key points brought out by my blog post. The part about "hand-wave pseudo-theorem" is, on consideration, interesting. If someone wants to believe the second theorem is on shaky ground, they'd likely favor such a characterization. If someone sees the theorem as an extremely robust result, i.e., its validity not sensitive to minor variations in formulation, they would be likely to use different words that almost, but not quite, mean the same thing.

In the seminal paper ("On formally undecidable propositions of Principia Mathematica and related systems") the second theorem is presented in section "{4}". A proof is given only in outline, not in detailed construction. At the end of the paper a rigorous version is promised in a later paper, but this never appeared.

(I am here using the version from the Martin Davis book.)

The theorem is described in relation to a recursively enumerable set of formulas, χ. The theorem holds that "The SENTENCE which asserts that χ is consistent is not χ-provable."

The vagueness in this informal treatment of the alleged proof concerns the question of what it means for a proposition in χ to "assert that χ is consistent".

Let w be the alleged proposition in χ that asserts that χ is consistent.

Let u be a proposition which is unprovable in χ, where u is of the sort that if it had a proof, we could encode that proof as a number and the number would be an existence proof that u is false. (In Goedel's notation, u is called "¬ Bew (17 Gen r)".)

The outline of the proof then assumes a theorem can be proven. Namely that:

  w → u

From this, Goedel concludes that a proof of w gives rise to an obvious proof of u. Since u is known to be unprovable, if we prove u we can then go on to derive a contradiction. So, concludes Goedel in the 2nd theorem, w can not be proved.

The handwave comes on full force when Goedel tries to informally convince us that u can be constructed in χ in such a way that we can prove u ⇔ ¬ u.

Goedel has earlier in the paper, for the 1st theorem, shown how to construct that u as a fixpoint we could informally notate as:

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y) where the existence of an n that encodes a proof for the proposition encoded as y implies ¬ u

Goedel's 2nd theorem hand-wave runs into trouble if we can have a system in which:

1) That definition of u is illegal.

2) The system is able to prove its own self-consistency.

So to speak: a system can prove w so long as it can not express u.

Even when treating the second theorem, which (as you point out) moves key reasoning into the system being meta-mathematically reasoned about, the assumptions I used are of the form 'if the system deduces this, then it also deduces that', which is indifferent to how the system deduces either.

Hewitt's stance, and I see no problem with it, is that if you can construct a proposition u such that w → u (as above), then your system is broken.

As evidence that it is broken: you are forced to somehow not admit the trivial (even if not dispositive) self-consistency proof that DL has.

Non-recursive

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y) where the existence of an n that encodes a proof for the proposition encoded as y implies ¬ u

I'm not sure what you're suggesting is wrong with this. It's a non-recursive definition, after all:

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y)

Using this definition, we can immediately conclude the following:

the existence of an n that encodes a proof for the proposition encoded as y implies ¬ u

re non-recursive

u ≡ ∀ n (n does not encode a proof of the proposition encoded as y)

Using this definition, [....]

The definition contains a free variable, y. Therefore:

Using this definition, we can immediately conclude the following:

  the existence of an n that encodes a proof for the proposition
  encoded as y implies ¬ u

is not false so much as meaningless.

y

Yes, your definition contains a free variable, y. I didn't put it there.

We're probably better off debating from my similar post here, where you did provide full definitions for most of the free variables.

re y

Looking back at the earlier thread, you looked at this notation and said it was not a recursive definition... you said that there was no fixpoint here:

X := q ∈ K where (X = Rq(q))

The fixpoint semantics of this definition are the only thing giving q and X a specific meaning. Do you not see that X is defined in terms of itself there? That it is not some arbitrary q from the set K but is instead a very specific example of q in K?

Not replying here

Sorry, I'm not going to reply to this here. Dragging pieces of that thread's discussion here will have the effect (if not the intention) of censoring the on-topic conversation in this thread.

re not replying here

I hope you will consider that you could have simply responded substantively in the thread where you say that conversation belongs; instead you decided to act like a state censor and scold me for responding to you where you most recently commented on the topic.

(You'll get this in 20 years, probably.) What are you, twenty-something?!

Thread maintenance

Sorry, as soon as I logged off I realized my comment looked like it was entirely directed at you. I had been short on time, and I had wanted to make up for my mistake of replying here by stating my reason for me, you, and others to consider replying on the other thread instead of this one. But I effectively used your post as an example! Oops.

Meanwhile, I didn't have time to post a reply to the technical content. I do still hope to do that at some point (in the other thread).

decided to act like a state censor

If I encourage some future comments to move from one thread to a more relevant one that's a click away... there's hardly any censorship damage in that. It might even improve the accessibility of both threads' information.

Turnstile

I think what you are missing here is that Direct Logic is reflective in a higher order way

I understand that it wants to be reflective. I see two alternatives for what the naked turnstile could mean:

1. It refers to provability in DirectLogic. The problem with that is that by Godel you then can't expose a full working understanding of this fact to the logic. And, also, Hewitt seems to prove within DL that the axioms of the naked turnstile aren't recursively enumerable. But the axioms of DL, whatever they are, will be recursively enumerable. So there's a soundness issue, too.

2. It refers to provability in a nebulously defined "mathematics" which includes all things people might eventually reason to be true. For example, maybe it includes the conclusions of ZFC, plus the Godel sentence of ZFC, plus the Godel sentence of that system, plus the Godel sentences of that sequence of systems, etc. etc. In that case, you might as well take the relation to be truth, instead of provability. In fact, I think I've heard the result stated that if you were to continue adding Godel statements for every ordinal, you would have completeness. Does anyone recognize that theorem?

Can you tell me a bit about how one might prove the non-existence of a first-order model?

Well, first order logic is complete, by an earlier theorem of Godel. So if you manage to prove that there doesn't exist a first order model, you will have proven the logic inconsistent :).

But Hewitt has made it clear that he intends a second order model and I'm not specifically interested in a first order model. I've seen constructions of second order theories that are consistent but don't have a model. They were all constructions that I don't think would be very illuminating for approaching DL, though.

"If a system is powerful and proves itself consistent, it is inconsistent!"

I don't think DirectLogic is a counterexample to even this informal statement, because it doesn't have enough axioms to pin down the turnstile enough to let us know what it's supposed to mean. Again, this is rather flip, but if you start with Arithmetic and add the string "This system is consistent" as an "axiom" (in the form of an English comment that doesn't interact with any of the other rules), that obviously doesn't break the consistency of the system. Surely you wouldn't let me get away with claiming that such a system proved its own consistency.

And this is why I'm concerned with models of his naked turnstile. Figuring out what the model could be lets you understand what his axioms tell you the symbol means, rather than relying on the documentation telling you what it's supposed to mean. And in this regard, Godel was much more careful than Hewitt seems to have been.

I understand that it wants

I understand that it wants to be reflective.

I recall at our annual town meeting a few years ago, the planning board was pushing a poorly thought out bylaw amendment; someone raised a concern about what it would do, and someone from the board responded (not for the first time that meeting) with something along the lines of 'the intent is ...'. My mumbled remark to my father, which he (unlike me) had the nerve to get up and say, was, 'I'm not worried about what you intend it to do; I'm worried about what it'll do that you don't intend it to do.' (As I recall, that particular motion was defeated.)

if you start with Arithmetic and add the string "This system is consistent" as an "axiom" (in the form of a English comment that doesn't interact with any of the other rules), that obviously doesn't break the consistency of the system. Surely you wouldn't let me get away with claiming that such a system proved its own consistency.

Actually, it doesn't matter how the system establishes its own consistency, as long as it does; an axiom is just as good as any other means. The problem is with what it does after establishing it.

I'm not sure if you caught

I'm not sure if you caught my meaning. I didn't put quotes around "This system is consistent" to signify translating that string into quantifiers. I mean literally the rule is:

-------------------------------------
    |- This system is consistent

This inference rule does not interact with the rest of the logic at all, so you can continue to prove all of the other things you could in PA [and nothing more].

Now, can I claim that this system proves its own consistency? Of course not. But what Hewitt has done looks to me like a more sophisticated form of this construction.

(There were two ways I could read your post. One of them called for this reply and the other didn't need a reply. Sorry if you meant the other one :).

I'm actually still not sure

I'm actually still not sure if I've correctly read your reading of what I wrote, but... if you're saying that a proof of consistency doesn't cause a problem if the system is unable to reason from it — well, yeah, if the system is unable to reason from it then it's not at all clear that it's really a proof of consistency.

well, yeah, if the system is

well, yeah, if the system is unable to reason from it then it's not at all clear that it's really a proof of consistency.

Right. In my flip example, no further inference is possible from the alleged consistency assertion. Because you can make no inferences, there's no reason to interpret those English words as asserting anything at all, even if an assertion of consistency was the meta-level intention.

By analogy, in Hewitt's DirectLogic, he's chosen the turnstile with the meta-level intention that it should represent provability. In DirectLogic, there are evidently things you're allowed to do with the turnstile. So it's not meaningless like the assertion in my flip example. But my point is that we still have to look at the axioms of DirectLogic to figure out what it means rather than appeals to the meta level intention.

Not formalizable?

It is not formalizable.

It was formalized by Hilbert and Bernays in the 1939 Grundlagen der Mathematik, Vol. II. Russell O'Connor, in his paper on his formalization of the first theorem in Coq, addresses the feasibility of continuing on with the formalization of the second theorem; it doesn't look like there's any good reason to think this could not be done, with some work. These points would seem to contradict you, unless I'm reading your evaluation of Gödel's proof too broadly.

What do you mean by "the most generous reading anyone has come up with"? Hewitt states without qualification that Gödel's proofs were invalid. This is a more generous reading than those of others who filled in the details and generalized the results?

Also, could you elaborate on Hewitt "quite correctly agreeing with" Wittgenstein? I tried to engage him on this point but got nowhere; if you understand what he's saying perhaps you could explain it to me? Back when I addressed Hewitt about this I read up on some Wittgenstein scholarship pertaining to this point. There seem to be two camps: one saying Wittgenstein disagreed with Gödel's first theorem, and was wrong, and the other saying that he agreed with Gödel and was making a subtle/insightful observation. Hewitt is the only person I've seen make the claim that Wittgenstein disagreed with Gödel's first theorem, and was correct. I didn't think you saw eye-to-eye with Hewitt on the first theorem, so I'm quite surprised that you would say he was correct to agree with Wittgenstein. Could you elaborate?

Gödel's "proofs" formalized doesn't make them apply to Math

The fact that Gödel's "proofs" have been formalized doesn't mean that they apply to Mathematics or even closed mathematical theories like ℕ.

The only way to preserve Gödel's proofs is to retreat to strange first-order modal provability logics that do not provide a suitable mathematical foundation for Computer Science. This retreat was adopted by Gödel in his response to Wittgenstein's criticism that "self-referential" propositions lead to inconsistency in Mathematics (in addition to insinuating that Wittgenstein was "crazy").

Gödel's proofs are in standard non-modal first-order logic

You had better wake up to the fact that Gödel's proofs work in what essentially everybody else treats as the standard logic used for computer science — classical first-order logic with Peano arithmetic. Its problems are well-known; people have their reasons to use it anyway.
So, in the best case, you can say that "Gödel's theorems apply for logics that satisfy standard assumptions, but I give up these assumptions".
(In fact, lots of PL people prefer intuitionistic higher-order logic, but Gödel's theorems still apply, and those people are aware of the dominance of classical first-order logic.)

Gödel's results hold only in modal provability logic (not Math)

Gödel's results hold only in modal first-order provability logic (not Mathematics).

Please see Provability Logic

Wittgenstein correct: "self-ref" prop mean inconsistency in Math

Wittgenstein was correct in writing that "self-referential" propositions mean inconsistency in Mathematics. In particular, he wrote that allowing the proposition "I am not provable." into mathematics results in an inconsistency:

Let us suppose [that Gödel's results are correct and that
therefore]  I prove  the improvability (in Russell’s system) of
[Gödel's “self-referential” proposition ] P; [i.e., ⊢⊬P where P⇔⊬P]
then by this proof I have proved P [i.e., ⊢P].
    Now if this proof were one in Russell’s system [i.e., ⊢⊢P]—I
should in this case have proved at once that it belonged [i.e., ⊢P]
and did not belong [i.e., ⊢¬P because ¬P⇔⊢P] to Russell’s system.
    But there is a contradiction here! [i.e., ⊢P and ⊢¬P]

re Grundlagen der Mathematik

It was formalized by Hilbert and Bernays in the 1939 Grundlagen der Mathematik, Vol II.

A bit of trivia: Goedel cites that in his 1964 postscript to the translation of "On Undecidable Propositions of Formal Mathematical Systems" in the Martin Davis book ("The Undecidable").

I don't read German and don't have a translation at hand, but here is a key passage from a deep overview of the Grundlagen (emphasis added):

4.3 Unprovability of consistency. For a formalism F that is consistent and satisfies the restrictive conditions, the proof of the first incompleteness theorem shows the formula ~B(m,q) to be unprovable. However, it also shows that the sentence ~B(m,q) holds and is provable in F, for each numeral m. The second incompleteness theorem is obtained by formalizing these considerations, i.e. by proving in F the formula ~B(m,q) from the formal expression C of F's consistency. That is possible, however, only if F satisfies certain additional conditions, the so-called derivability conditions. Hilbert and Bernays conclude immediately "in case the formalism F is consistent no formalized proof of this consistency, i.e. no derivation of that formula C, can exist in F." (p. 284)

In the notation being used there, this amounts to proving, within the system itself, that q is the numbering of the proposition ~B(m,q) itself.

"David Hilbert and Paul Bernays, Grundlagen der Mathematik I and II; a landmark", Wilfred Sieg & Mark Ravaglia [2004] (Technical Report No. CMU-PHIL-154]

"I am not provable." is allowed in first-order provability logic

The sentence "I am not provable." is allowed in first-order provability logic (which is not suitable for the mathematical foundations of Computer Science).

re 1st theorem, Hewitt, Wittgenstein

Also, could you elaborate on Hewitt "quite correctly agreeing with" Wittgenstein? Back when I addressed Hewitt about this I read up on some Wittgenstein scholarship pertaining to this point. There seem to be two camps: one saying Wittgenstein disagreed with Gödel's first theorem, and was wrong, and the other saying that he agreed with Gödel and was making a subtle/insightful observation. Hewitt is the only person I've seen make the claim that Wittgenstein disagreed with Gödel's first theorem, and was correct. I didn't think you saw eye-to-eye with Hewitt on the first theorem, so I'm quite surprised that you would say he was correct to agree with Wittgenstein. Could you elaborate?

I think I can do some justice to all parties here:

1) Goedel Thm 1 gives a method for constructing an unprovable proposition. The general form of the proposition is an assertion that no number exists which satisfies a certain equation. If we could prove the proposition to be true, the proof itself could be "encoded" as a number. The number would actually satisfy the equation. The system under study would contain a contradiction: a proof that no number of a certain sort exists, along with an existence proof that such a number exists.

2) In theorem 1 we are standing outside the system we're talking about. The conversion of a hypothetical proof into a number is something we see from the outside. The system itself is not assumed to prove the relation between a hypothesized proof and its numbering.

3) In theorem 2, Goedel asks us to consider systems that, internally, prove the relationship between a proof and a numbering of that proof that occurs within the proof itself. (i.e. the target theory has variables that range over its own propositions and we can construct a fixpoint sentence that logically asserts that one of its terms is its own encoding).

4) Wittgenstein looking at theorem 2 said (I paraphrase): Well, duh, if you allow sentences to refer to themselves that way you get all kinds of paradoxes. "Everything I write today is false". Yadda yadda. We know that. Such self-referential sentences are basically meaningless. Don't act so surprised if you get contradictions after deciding to use such sentences.

5) Hewitt shows that self-referential sentences of the sort entailing theorem 2 of Goedel can be excluded by types -- by restricting how sentences can be constructed to exclude overt self-reference. Conceptually, this is a formal way to take Wittgenstein's advice to avoid those kinds of sentences.

6) Hewitt correctly points out that Goedel's 1st theorem proof is wrong if you try to express it in terms of Direct Logic domains like Sentences and Propositions. It's wrong because, even meta-mathematically, the rules for constructing Sentences still apply. I appear to differ from Hewitt slightly because I also correctly point out that Goedel's 1st theorem proof doesn't literally have to use those domains of Direct Logic. It could operate over strings.

It's not much of a difference. In my view, the fidelity of the first theorem's meta-math to the subject system is an informal, empirical truth only. It is not some essential formal math result. You can tell that Goedel's first theorem tells you something about a subject system because it looks right, not because there's a really formalized proof.

My take on Goedel's 1st is a generous reading of the 1st theorem contrasted with Hewitt's reading which I think is probably closer to what Goedel himself believed about it.

Thanks for your reply,

Thanks for your reply, which, on the whole, was very reasonable. I can see what you mean now by saying Hewitt has a point in agreeing with Wittgenstein. I don't agree that Wittgenstein was responding to the 2nd theorem, but I also don't really care to argue about what his intended meaning really was. When Hewitt invokes Wittgenstein, it's to back up the claim about the 1st theorem you outlined in (6), as far as I can tell. This is why I was confused when you said he had a point about Wittgenstein. Your response here clears things up for me. A quick point: while I tend to think having a computer-verified formalized proof should increase our confidence in a result, that wasn't my point in bringing such things up; my point was that no one is likely to call such a proof hand-wavy or complain about lack of rigor.
Edit: On second thought, Wittgenstein's quotation does touch on the 2nd theorem, but my point about Hewitt's use of it to argue about the 1st theorem stands, I think.

Point 3's wrong, point 6 is interesting

Thanks for the analysis. This is helpful.
I agree with your point 6 — if your logic has a "syntax" (sentences form a datatype) you can encode them with strings/integers and prove the first theorem using integers.
In point 3, you say "the target theory has variables that range over its own propositions". That's not what Gödel does, though maybe it makes sense in direct logic. You only need integer variables and integer functions — so points 4 and 5 don't apply.
The conversion from sentences to Gödel numbers is still done in the metatheory. In the object theory you only have arithmetic functions on Gödel numbers, like the IsProof predicate (originally called Bew) —
https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Construction_of_a_statement_about_.22provability.22. The proof that IsProof describes proof-checking is done in the metatheory, so it is typically informal (unless you formalize the metatheory in Coq).

If you are concerned about the metatheoretic steps (how do you trust the metatheory), we can discuss why people do finitist metatheory at a foundational level. That discussion would also explain why logics with uncountable sets of sentences wouldn't be accepted as foundations on their own. This is all indeed somewhat hand-wavy, but to do any sort of math you need some starting point, the same way our discussion presupposes some shared understanding of English; Wittgenstein also discussed why this works well enough. Nowadays, I'd say that finitist metatheory is metatheory you can run on a computer, with concepts simple enough to be safe from paradoxes or strange phenomena.

Finally, Wikipedia also discusses Wittgenstein's criticism — most people don't buy his criticism, even though some subtler points seem to be under discussion — see the discussion of ω-incompleteness. But that's much fancier and subtler than anything we've discussed here.
https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Wittgenstein

If s:String◅Sentence▻, then ⌈s⌉:Sentence

If s:String◅Sentence▻, then ⌈s⌉:Sentence.

However, the Y fixed-point procedure doesn't exist in Mathematics because of types. Consequently Gödel's alleged construction of a "self-referential" proposition fails in Mathematics. If they existed, "self-referential" sentences would lead to contradictions in Mathematics that could result in security holes.

Unfortunately, the Wikipedia article is incorrect on many counts and there is no way to correct it :-(

Russell Paradox

I am not really sure there is a (Russell) paradox at all. We can interpret the sentence "the set of all sets that are not members of themselves" as follows: we have sets of pointers to member sets, and we have a set of pointers to all sets. Search the set of all sets and create a new set that contains a pointer to any set found that does not contain a pointer to itself (it cannot include itself in the search, as it does not exist when the search happens). There is a distinction between mathematical objects and the algorithms by which they are constructed, and nothing can exist which you cannot construct.
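
To make this concrete, here is a small sketch in Haskell (the three-set universe is made up purely for illustration):

  -- A finite "universe" of named sets; each set holds pointers (names)
  -- to its members.
  type Name = String
  type Universe = [(Name, [Name])]

  universe :: Universe
  universe = [("a", ["a"]), ("b", ["a"]), ("c", [])]

  -- Build the "Russell set": pointers to every existing set that does
  -- not contain a pointer to itself. The search runs before the new
  -- set exists, so the new set never gets asked about itself.
  russell :: Universe -> [Name]
  russell u = [name | (name, members) <- u, name `notElem` members]

Here russell universe evaluates to ["b","c"]; a paradox only appears if you insist the result was already a member of the universe it searched.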

The claim is mathematical objects can be represented by data-structures, just like we already know algorithms can be represented by programs.

The claim is mathematical objects can be represented by data-str

The claim is mathematical objects can be represented by data-structures

Including all the terms of formalized mathematical discourse.

Including Algorithms

Which means we can treat an algorithm as an object that can be manipulated, as well as execute it. This suggests meta-programming is fundamental. We should have a way to manipulate the syntax tree of the language itself as a data-type.

I am not clear whether the restriction on fixed points is overly restrictive. There is no reason an algorithm cannot refer to its own output from the previous timestep.

Only syntax

Mathematical objects that can be represented by data structures are the exception, not the rule. All of those are "just syntax", and lots of math isn't syntax. However, the few mathematicians caring about foundations have created "formalized mathematical discourse" and crafted it so that it's just syntax.

Mathematical Objects as Data Structures

What mathematical objects cannot be represented as data structures? If you can write it down, you can represent it as a data structure.

Mathematical objects as data structures defined where?

Are you talking about data structures defined in the metatheory or in the object language?

In the metatheory, of course, all object language entities are just syntax - a particular data structure in the metalanguage. The price that you must pay for this convenience is the potential for manipulating syntax in ways that don't respect the rules that define the object language.

However, from within the object language, you can't observe the fact there's a metalanguage in the first place - let alone the fact the object language is itself defined in terms of metalanguage syntax. For example, from within the lambda calculus, the only thing you can do with a function is apply it. Application itself is "God-given", in the sense that it works in mysterious ways that you really can't fiddle with. (Even though there would be nothing mysterious about it if you could "jump" into the metalanguage and see how application is implemented in terms of substitution and yadda yadda.)

Computer Science needs unified foundations (no meta-system)

Computer Science needs unified foundations that can be understood and operated by computer systems. There is no magical "meta-system" that can only be understood and operated by humans.

In terms of digital computation, it's Actors all the way up and down ;-)

Metalanguages

The type system can prevent the metalanguage breaking the rules of the embedded language. Given lambda calculus we could implement a representation of lambda calculus along with an interpreter for it in lambda calculus.
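
For concreteness, here is a minimal sketch in Haskell (standing in for the typed metalanguage): the object language is an ordinary data type and the interpreter an ordinary function over it.

  -- Untyped lambda calculus represented as data in the metalanguage.
  data Term = Var String | Lam String Term | App Term Term

  -- Closures pair a function body with the environment it closed over.
  data Value = Closure String Term [(String, Value)]

  -- Environment-based call-by-value interpreter; Nothing signals an
  -- unbound variable.
  eval :: [(String, Value)] -> Term -> Maybe Value
  eval env (Var x)   = lookup x env
  eval env (Lam x b) = Just (Closure x b env)
  eval env (App f a) = do
    Closure x b cenv <- eval env f
    v                <- eval env a
    eval ((x, v) : cenv) b

The type system stops us from building ill-formed Terms, which is the sense in which the metalanguage can be kept from breaking the rules of the embedded language. (Of course eval can still diverge, e.g. on the term Ω.)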

That's right, but...

As far as I can tell, this is only possible when either:

(0) The embedded language is less expressive than the metalanguage, e.g., implementing the simply typed lambda calculus in Agda. Which still leaves you with the problem of formally defining the metalanguage.

(1) The metalanguage is inconsistent as a logic, e.g., this recent LtU post. I'm okay with an inconsistent metamathematical language (capable of carrying out arbitrary computation), but not with inconsistent mathematics.

Turing Complete.

The metalanguage is not a logic; it is a computational language. What is consistency for a computational language? Technically computational types are finite, and so consistency can be exhaustively proved. Completeness can also be checked exhaustively for finite integer rings.

The embedded language can be equally expressive, consider an interpreter for lambda calculus in lambda calculus. Once a language is Turing complete, it can embed any Turing complete language.

It seems you are confusing mathematical logic with programming languages?

My bad

You're right. I wasn't precise in my use of language. Here's what I meant:

What's consistency for a computational language?

I was thinking of something along the lines of "only consistent logics can be embedded and interpreted in it". For example, in the embedded STLC example, it was possible to define an eval function in Agda. You couldn't do the same if you tried to embed the untyped lambda calculus. So Agda is an example of what I meant by "consistent metalanguage", whereas the language of rules of inference is an example of what I meant by "inconsistent metalanguage" (because you can define formal systems that entail contradictions from nothing).

Once a language is Turing complete, it can embed any Turing complete language.

<strike>Yep, but if you make a type system for it, that type system is inconsistent as a logic. Which was my intended point.</strike>

EDIT: No, my last paragraph didn't make sense. Sorry. (And, apparently, I can't use the <strike> tag.)

Finite Integer Rings and Consistent Type Systems

We can prove both completeness and consistency by exhaustive methods for finite integer rings.
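
To make "exhaustively proved" concrete, here is a sketch in Haskell that brute-force checks a few ring laws for arithmetic mod n; over a finite carrier the quantifiers are just finite loops:

  -- Check associativity and commutativity of +, and distributivity,
  -- for the integers mod n, by enumerating every case.
  checkRing :: Int -> Bool
  checkRing n = and
    [ add (add a b) c == add a (add b c)               -- + associative
        && add a b == add b a                          -- + commutative
        && mul a (add b c) == add (mul a b) (mul a c)  -- distributive
    | a <- xs, b <- xs, c <- xs ]
    where
      xs      = [0 .. n - 1]
      add x y = (x + y) `mod` n
      mul x y = (x * y) `mod` n

checkRing 7 returns True after checking all 343 cases; a broken law would show up as False. The check is a terminating computation, which is the whole point of restricting attention to finite structures.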

I am not sure about the assertion that the type system would be inconsistent; it seems odd. If we start with simply typed lambda calculus, surely we can still implement a data type that represents lambda calculus (which itself could be simply typed) and also an interpreter for that lambda calculus.

First, it's clear that implementing something in a language does not make its type system inconsistent as a logic. So if Haskell had a type system consistent as a logic, and we could implement a Haskell interpreter in it, then I don't see that the type system would become inconsistent.

I am not sure I see the relevance of Agda. I think Agda is overly restrictive. We are not trying to make the meta-language a logic. The meta-language should be a Turing complete computational language and could be strongly typed without causing problems. I think the point is mathematicians are computational not axiomatic, if that makes any sense.

Haskell's type system is inconsistent as a logic

Most of this subthread does assume you're trying to use Curry-Howard (like in Agda).

Haskell (and any Turing-complete functional language) allows defining a fixpoint combinator fix : forall T. (T -> T) -> T. If you read this type as a proposition via Curry-Howard, it allows proving everything, hence makes Haskell's type system inconsistent as a logic. Agda, instead, isn't Turing-complete.
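
Spelled out as a two-line sketch (nothing here is Haskell-specific beyond the syntax):

  -- Definable because Haskell allows unrestricted recursion:
  fix :: (t -> t) -> t
  fix f = f (fix f)

  -- Via Curry-Howard, this type claims a proof of *every* proposition
  -- t. Operationally it just diverges, which is how the "proof" cheats.
  anything :: t
  anything = fix id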

We are not trying to make the meta-language a logic.

"Finitist methods" are about having a computational metalanguage that can be used for proofs, at least informal ones. If you want to use "computerized finitist methods", you want a metalanguage that is a logic so that you can do proofs about your object language.

You can try to allow recursion only in programs but not in proofs, but this is not so easy to get right; there are some approaches to this, but I think each has its own disadvantages.

If we start with simply typed lambda calculus, surely we can still implement a data type that represents lambda calculus (which itself could be simply typed) and also an interpreter for that lambda calculus.

Careful there. Simply-typed lambda calculus (without general recursion) is not Turing complete, and cannot implement an interpreter for itself. This is a general limit — Eduardo is right, even though he might have skipped a couple of steps (I'm sure I did too). The best discussion I know of these issues is in "Typed Syntactic Meta-programming".

So, implementing a self-interpreter does not make a typesystem inconsistent as a logic — it just shows (indirectly) that the inconsistency was already there.

Metalanguage does not need to be a logic

I think a Turing complete metalanguage is all that is required. I am not even sure it needs to be typed (perhaps it can be typed, but I think that's a distraction). You can write an axiomatic solver in the Turing complete language. This is much like the mathematician who interprets axioms being "Turing complete" and existing in time (the mathematician has a state before and after reading/inputting each axiom). For example Agda might be initially written in 'C'. This is the "real metalanguage".

So there are no paradoxes

if you only have algorithms and nothing by definition or declaration.

Mathematical Objects

I would have thought it obvious that there are no paradoxes in computation. X = X + 1 would be a paradox declaratively, but as an algorithm it's fine.

You can of course declaratively define data structures (mathematical objects, including algorithms themselves).

I'd much rather hear about practical computational logic

from you than about Gödel from Hewitt.

Seriously, nothing on LTU seems to be practical in the slightest.

It's as if they're a bunch of mathematicians who only pretend to care about computation.

Y-Combinator Type Inference

Types are obviously mathematical objects, as are algorithms. We can define an algorithm mapping algorithms to types (type inference), and one for comparing algorithms and types (type checking). I wonder what should happen if you execute a type? I presume like the algorithmic expression '1' it evaluates to itself (at the next time step).

More abstractly, suppose we have some data types at time T, some other data types at time T+1, and an algorithm that generates the data at T+1 from the data at T. This sounds a lot like category theory, except we don't allow endofunctors.

It also seems clear that equality is an algorithm (is that extensional equality?)

I still don't see a problem with the Y combinator: it is a potentially non-terminating algorithm that takes a mathematical object representing an algorithm as its argument. The type of the Y combinator is a mathematical object, derived from the definition of the Y combinator by the type inference algorithm.

Importance of types in practice

Types are of fundamental importance to the practice of computer science and engineering as they are crucial to performance, security, readability of programs, etc.

For Computer Science, it is practically important to clean up the confusion caused by Gödel's work and to put types on a firm rigorous foundation.

There is a distinction

There is a distinction between mathematical objects and the algorithms by which they are constructed, and nothing can exist which you cannot construct.

That's not the default in mathematics; if you want a mathematics founded on this idea, you should look at the various forms of constructive/intuitionistic logic. You might also want to look at predicativity, for your bit "it cannot include itself in the search as it does not exist when the search happens". And all these are part of the tricks that finitist (meta)mathematics uses to be reasonably safe from paradoxes.

Mathematicians Manipulate Written Descriptions.

I agree, but I am saying something stronger than that. I think any statement that can't be constructed cannot exist as a mathematical object. After all, we can write down all mathematical statements, which means they are constructible, and the mathematician performs algorithms on the written statements.

Mathematicians manipulate social descriptions

After all we can write down all mathematical statements

Writing a mathematical statement sometimes requires coining new terminology. Coining terminology sometimes requires ongoing communication between the person who's trying to coin the terminology and the people who are trying to understand it.

So we can't write down all mathematical statements, because we can't artificially construct all possible societies.

Actually we can

Look, the argument is whether the data required is finite.

The data required to teach someone any new concept is finite.

Pictures are finite. Sound is finite. Language is finite.

If a person could conceivably learn a new terminology from a lifetime of instruction then we can set a finite limit on how much data it takes to give someone a lifetime of instruction.

You can't even argue that we need infinite symbols. Humans have limited resolution eyes and ears, so all possible finite length writings and sounds can be encoded in finite space as bitmaps or sampled sounds.

We can't yet put touch into a recorded medium nor interaction, but those aren't inherently infinite either.

Fallacy — infinities are confusing

First, of course, we can't write down *all* possible mathematical statements in a finite time — they're a countable infinity. But any mathematical statement can be written down, any proof can be written down (mathematicians require that from logics they use) and then checked. And I think that's enough for Keean's argument.

Furthermore, we can in fact write a program which, left running indefinitely, will eventually produce all mathematical theorems. You just loop over possible proofs of some length (there's a finite amount of those), then the ones of the next length, and so on. You check each of these proofs, and print the proved theorem if the proof is valid. (This is the hardest thing to do — printing all possible definitions is easier; printing all sensible definitions is a different matter).
At each point, you will only have a finite amount of math, but the program will go on. Proving that each valid theorem will appear is trickier, because it depends on the search strategy — you're exploring an infinite tree, so you only get to every point if you use BFS, not DFS. That's why I ordered proofs by length — that's a breadth-first visit.

Proving that each true theorem will appear is yet a different matter, because you need each true theorem to have a proof — and this fails for Peano arithmetic, no matter how you define it. In first-order logic, you have theorems T that only hold for actual naturals, but your axioms apply also to non-standard naturals, so that T can't be provable. With most other logics, you can have axioms that are only modeled by actual naturals, but you cannot prove all semantic consequences of the axioms.
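
Here is the shape of that enumerator in Haskell, with the proof checker left abstract (checkProof is hypothetical — the point is only that any real proof checker is a computable function):

  -- All candidate proofs in length order: a breadth-first visit of an
  -- infinite tree, so every candidate is reached at some finite step.
  candidates :: [String]
  candidates = concatMap ofLength [0 ..]
    where
      alphabet   = ['a' .. 'z']  -- stand-in for the real proof syntax
      ofLength 0 = [""]
      ofLength n = [c : s | c <- alphabet, s <- ofLength (n - 1)]

  -- Hypothetical checker: validates a candidate proof and returns the
  -- theorem it proves, if any.
  checkProof :: String -> Maybe String
  checkProof = undefined

  -- Left running forever, this enumerates every provable theorem.
  allTheorems :: [String]
  allTheorems = [thm | s <- candidates, Just thm <- [checkProof s]]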

Thankfully, modern mathematics can settle philosophical questions like that rather precisely.

enumerating all theorems (not)

Furthermore, we can in fact write a program which, left running indefinitely, will eventually produce all mathematical theorems.

That seems rather improbable.

Surely, mathematics in general can prove theorems about each individual constructible real (included in the theorem by construction).

Yet the constructible reals are not computably enumerable.

Consequently, the provable theorems of mathematics must not be computably enumerable.

Subsets of recursively

Subsets of recursively enumerable sets need not be recursively enumerable.

Finite but unbounded is a subtle concept

because it isn't finite in the sense that real world objects are finite.

Math that can't be written down

First, of course, we can't write down *all* possible mathematical statements in a finite time — they're a countable infinity. But any mathematical statement can be written down, any proof can be written down (mathematicians require that from logics they use) and then checked. And I think that's enough for Keean's argument.

I was saying "all" in an attempt to match Keean's use of "all," and I think that's the same as what you describe. So, while this clarification is nice to have out in the open, it doesn't shift my position.

We can't meaningfully write down a mathematical statement unless we designate a meaning for it, and that's a social process. To designate a meaning, we have several options, but defining all our terminology in terms of more terminology isn't one of them.

We can almost give up meaning by saying that we're merely dealing with arbitrary symbols that can be manipulated according to abstract inference rules. However, to really give them meaning that way, we have to give a meaning to things like "symbol" and "inference rule." We happen to be in a culture where "symbol" and "inference rule" are thriving memes, but that doesn't give them absolute Platonic existence. Our culture may yet adapt to interpret these memes in ways we don't currently anticipate.

For instance, Hewitt's favorite way to interpret these concepts is that mathematical sentences may contain embedded real numbers. In a system I'm designing, definitions all go through cryptographic code-signing; this pragmatically enforces that everyone gets to invent names that nobody else can clobber, which means names are not as easy to define and use as you might be used to. But for any given real number, or any given name, it's pretty easy to say "that one!" and make mathematical claims about it.

If you say that "any mathematical statements can be written down," it only serves to exclude Hewitt's and my systems from being "mathematical." I'm okay with describing my system in ways other than "mathematical," but Hewitt's been more insistent.

Then reals would be countable

In logics, proof theory works a bit like you describe, but there's more.
Usually statements don't really exist as mathematical objects unless you're doing metamathematics. But statements (or rather, predicates) can describe mathematical objects. Let's take descriptions of reals then.
As we know, the set of reals is uncountable. You can also take finite descriptions of reals — say, "x^2 = 2 and x > 0" is a description of sqrt(2). Since these descriptions are syntactic objects (strings of symbols which are well-formed predicates that hold for a single real), they're clearly countable. Hence, most reals can't be described. Reals aren't an exception.

However, even if you don't believe in reals (and such people exist), but you believe in finitist mathematics, I can show you with finitist methods* that proofs cannot produce nonsense. That was Hilbert's program.

*Well, not quite: because of Gödel's theorem, you need to step a bit outside, and use methods which are less finitist. So a logic with an uncountable set of statements wouldn't help the original Hilbert's program.

(There's also a theory of real numbers in constructive mathematics, but I don't know it well, and I'm not sure it produces an uncountable set of reals — or even what exactly "uncountable" means in that context).

Reals aren't countable but statements are because

there doesn't have to be a one-to-one correspondence between reals and statements. We have statements that describe, for instance, a segment of the number line, which itself contains an uncountable number of reals.

So we never describe any uncountable real directly, we only describe a set that contains it and others.

Yes, undescribeable mathematical objects are fine

Agreed, no paradox here. I reacted to (one interpretation of) the parent's comment (by Keean):

I think any statement that can't be constructed cannot exist as a mathematical object.

In your example, we only describe sets of "uncountable reals", but those reals without description are still "mathematical objects". I guess you already agree, so this is for Keean's benefit.

The Set Of Reals Exists.

Those Reals are not mathematical objects. The second order logical axioms that describe the set of reals are mathematical objects. I think this confusion that somehow being able to describe/imagine something makes it real is at the heart of the problem. Real numbers simply do not exist. You can give examples of specific irrational numbers that belong to other countable sets (power series expansions). But the result of the second order description of reals is simply counting how many angels can dance on the head of a pin. Can you write down a proof of the existence of "a real number"?

Half by mass or volume?

I probably should have phrased that as 'a large part' rather than 'half'. I didn't mean it objectively.

re errors in his reasoning;

It'll be more interesting when you show some. :-P

Over the years we've pointed

Over the years we've pointed out several; admittedly, they're very spread out over time (and therefore hard to collect) because we don't often get a clear glimpse of what he's doing. The most recent was that egregious conceptual mistake about completed infinities.

re Over the years we've pointed out several;

Over the years we've pointed out several [significant errors]

I don't think so. I do agree that we thought we did that.

The most recent was that egregious conceptual mistake about completed infinities.

I haven't seen you convincingly point out a mistake in that area.

Please do so if you can.

When you wrote about that earlier, I was left with the impression that you were missing a kind of dualism of atemporal and temporal semantics, both of which "pun" in the same formal system.

I don't think so. I do agree

I don't think so. I do agree that we thought we did that.

I believe that you don't think so. But the demonstrations of error weren't problematic; and since then, the one attempt you made to argue specifics was just plain wrong, so the preponderance of evidence suggests you've simply let your desire to believe in him get the better of you. (No, I'm not retracting my intent to read his paper. I still hope to.)

I haven't seen you convincingly point out a mistake in that area.

I've pointed it out. You didn't see it, and so weren't convinced, and I have doubts that my pointing it out again is going to be useful since I'm not sure what went wrong before. However, I'm willing to try. Once more, anyway; unless we can keep the ball in the air with new questions or something, there must be a point of diminishing returns.

The essence of Hewitt's reasoning on this point, as I understand your explanation of it (speaking of communication breakdowns), is that you can specify the creation of an actor that nondeterministically spits out bits, and then uniquely associate that actor with a particular infinite sequence of bits that it spits out (representing, we may assume for convenience, a real number in the interval from zero to one). The unique association is supposed to justify identifying the actor with the real number, which is practically certain to be irrational (the odds of it being rational would be aleph-null over aleph-one).

Except that there is no meaningful sense in which the actor can be identified with the real number. You can't replace the one with the other in a representation, because you can't represent the real number except by gesturing to that particular actor and saying 'the real number whose representation is spit out by that actor', which isn't replacing the actor at all. There isn't even any useful sense in which the entire infinite sequence is contained in the actor at any given moment that we look at it, because at that moment it has only spit out a finite number of those bits, and the definition of what it's going to do explicitly does not define what it'll do.

So the real number ostensibly associated with the actor does not, in any meaningful way, exist. You can conjecture a world line for the actor spitting out the entire sequence, but there are a couple of problems with that: on one hand, the probability that this actor would not be destroyed by something over an infinite period of time is, well, about the same as the probability that a random real between zero and one would be rational; and on the other hand, if such a completed world line exists it has no interaction with the program beyond what the incompleted form of the actor would have, so the association is, literally, not meaningful.

There isn't even any useful

There isn't even any useful sense in which the entire infinite sequence is contained in the actor at any given moment that we look at it, because at that moment it has only spit out a finite number of those bits, and the definition of what it's going to do explicitly does not define what it'll do.

I've seen this objection a few times, but I'm not sure how this definition of the reals is appreciably different from any other constructive definition of the reals.

I'm not sure how this

I'm not sure how this definition of the reals is appreciably different from any other constructive definition of the reals.

It's not constructive. You couldn't do it more than once and get the same answer; that's what "nondeterministic" means; the only way to reproduce it is to record what you got the first time.

re I'm not sure how this

It's not constructive. You couldn't do it more than once and get the same answer; that's what "nondeterministic" means; the only way to reproduce it is to record what you got the first time.

Ok, at this point you are not talking about the reals at all. You are talking about the constructible reals which is but a subset.

The actor that produces a lazy infinite list of bits, with no tail of all 0s or all 1s, is a kind of effective version of a choice axiom for non-dyadic real fractions.
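
For what it's worth, the shape of that "bit-spitting actor" can be suggested with a lazy stream in Haskell, using the standard System.Random (an analogy only, not Hewitt's construction):

  import System.Random (newStdGen, randomRs)

  main :: IO ()
  main = do
    g <- newStdGen
    -- An infinite stream of bits, fixed only as far as it is forced.
    -- A different run draws a different stream; the only way to
    -- reproduce one is to record what was observed.
    let stream = randomRs (0, 1) g :: [Int]
    -- Any program can only ever observe a finite prefix of it.
    print (take 16 stream)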

Shutt: real numbers and axioms

Except that there is no meaningful sense in which the actor can be identified with the real number.

That's incorrect. You can define (programmatically, even) the algebraic structure of those infinite actor outputs. These definitions are usefully isomorphic to every other definition of reals. By analogy, consider systems like Emacs calc, Maxima, and Wolfram's-thing-whatever-its-called.

You can define

You can define (programmatically, even) the algebraic structure of those infinite actor outputs.

Programmatically there is no algebraic structure of the infinite actor outputs; only finite prefixes can ever be used by a program, and those are uncorrelated with what comes after them.

re programmatically there is no algebraic structure

Programmatically there is no algebraic structure

Your confidence that arbitrary precision maths libraries can not exist is interesting but fails an empirical check.

only finite prefixes can ever be used by a program,

Nothing I have said anywhere contradicts that very obvious fact.

Progress?

We may be making progress.

only finite prefixes can ever be used by a program,
Nothing I have said anywhere contradicts that very obvious fact.

That you think that obvious is the first encouraging thing I've heard in all this. That you don't see the relevance to what you're saying is only ordinarily frustrating (there's plenty of frustration to go around, here). But there's an interesting anomaly in your other remark that might help.

Programmatically there is no algebraic structure
Your confidence that arbitrary precision maths libraries can not exist is interesting but fails an empirical check.

You've misquoted me, in a way that could be quite useful by providing a clue into where we're failing to connect. I did not say "Programmatically there is no algebraic structure"; I said (emphasis added) "Programmatically there is no algebraic structure of the infinite actor outputs". Do you see the difference? Each finite prefix of an actor output has, programmatically, an algebraic structure; off hand I can't imagine how anyone would disagree with that. But the infinite output of these actors does not have, programmatically, an algebraic structure (for the twin reasons I named in the remainder of that sentence).

banish metaphysics (re Progress?)

Certain effective aspects of programmatically-reified actor computations can be correctly, mathematically modelled by the classical real numbers.

I said (emphasis added) "Programmatically there is no algebraic structure of the infinite actor outputs". Do you see the difference?

dominating LtU

Another thing in this area is that I don't think Hewitt is really spamming or dominating LtU so much as other people have so little of any interest to say. PLT as a field of research is mostly in a moribund state.

I also think there's something not quite right about younger people complaining he isn't explaining things in ways they can just grasp without much effort.

respectfully

> complaining he isn't explaining things in ways they can just grasp without much effort.

For me, that (what I see as poor communication style) would be fine on e.g. his own blog. Or yours. Or heck even mine. I do not want Hewitt's work or words to disappear in toto. I would very much enjoy and appreciate it if (he or) you or anybody else who has the ability to be better at communicating did start threads where they clarify what he is getting at.

But I do not see the way the communication has been practiced to date as being constructive for, or respectful of, LtU. Hence my attempt to point my finger at the moon, and ask everybody to voluntarily chip in.

All this is of course my personal $0.02 as somebody who has been the un/intentional trouble maker in many places before; takes one to know one; etc.

raould, in the news biz we have a slogan

I guess it was Justice Brandeis who gets paraphrased a lot as "the best cure for bad speech is more free speech".

My unsolicited advice:

Post interesting things.

Make interesting comments.

Avoid trying to rally the community to single out and ostracize someone you wish to repress.

There is no ostracizing

There is no ostracizing there, just a request to keep topics on topic. Not every post needs to be drowned in 90% actors & inconsistency robustness. Even if the content were good quality, it would still be inappropriate to spread it to every post on the site.

Fighting in the war room

Maybe we should start a conversation about how a conversation about Actors flooded the conversation about flooding conversations with conversations about Actors.

To be fair, it hasn't flooded this conversation nearly as much as the conversation about foundations of mathematics has.

Maybe we should start a

Maybe we should start a conversation about how a conversation about Actors flooded the conversation about flooding conversations with conversations about Actors.

No need; this conversation provides its own self-proof.

on the one hand

i find the irony actually nigh too much to bear. it is just utterly stunning. not only due to the people i'd expect it from, but also from the people who are feeding the trolls in places where i was hoping people would learn not to feed them.

maybe the core problem is that drupal makes it hard for people to start a whole new thread? (or that people are lazy?)

on the other hand, i hope that however much this thread gets such things, it will at least mean that other threads are not.

Sorry

I was actually mindful of this when I posted on this thread, and almost didn't post for this reason. I've otherwise kept my engagements in the discussion limited to threads Hewitt started, so they didn't feel OT. My rationalization for posting here was basically the last thing you said: at least I'm not derailing some thread not originally about Hewitt. But I'll refrain from any more posts in this thread. I also feel bad about the fact that what I love most about LtU are the very interesting research papers that people post about, and the on-topic discussion that follows, but I've limited all my engagement to this Hewitt stuff.

Fwiw

Fwiw, I'd been thinking it's consistent with the purpose of this thread to use it as a contained place to try to clear up some Hewitt-induced misapprehensions, because clearing up those is probably the single most effective measure that could be taken to reduce Hewitt's ability to destroy other threads.

I apologize for any troll-feeding

On reflection, I probably am one of the troll-feeders. If I have been feeding trolls, let me know (even privately) and I'll stop. There's a fine line between troll-feeding and giving people yet another chance (something which I've been doing at times).

Not just troll feeding

I'm pretty sure Tom is not a troll, but some of these conversations have been going on for far too long with seemingly little progress made so I've decided to stop participating.

Not Tom

To be sure, I didn't talk about my answers to Tom. Otherwise, point taken.

re Tom

What I find particularly frustrating is that it seems like very few people who insult Hewitt's work or Hewitt have made a serious attempt to try to understand it. A decent number look at it briefly and decide it is too confusing or hard to follow and then go into attack mode. Then, on LtU, they act as if it is Hewitt's (or someone else's) duty to examine their confused statements, correct them, and keep doing that until the commenters have "read the papers without actually looking at them".

That is just not the way that scientific, or mathematical, or even rigorous engineering discourse works.

I should like nothing better than to find on LtU others who would read the materials together with me, acting as a kind of reading group that tries to help one another sort out the meaning.

I find nothing of the sort, though.

Rigorous math

That is just not the way that scientific, or mathematical, or even rigorous engineering discourse works.

Rigorous mathematics is definition, theorem, proof subjected to peer review. AFAIK, the last article Hewitt posted has either not undergone or not passed peer review and to my eye does not meet the standard for rigorous mathematics. I still haven't seen a complete presentation of the rules of Direct Logic and I don't think you can piece it together from what Hewitt's referenced here.

Reaction from LtU members has varied, but I think it's reasonable for people who can't understand his arXiv or HAL article links to complain about the way they have been presented. Particularly when the responses you get from him often seem evasive or non-responsive to the questions asked. It's very frustrating!

Despite all of this, I have personally read through most? of the material he's posted here. Some of it several times. My opinion is still that it has problems. I think the characterization of Godel is off in ways I've described at length: pinning down the meaning of the turnstile seems key. Other issues: he doesn't seem to appreciate that first-order semantics are just a tool in a mathematician's toolbox for understanding a formal system, the construction of uncountable sentences still seems like a gimmick to me, and the idea of inconsistency robust logic seems like a bad idea to me or at best something I'm just not interested in.

All of this is just my opinion, and I don't have any interest in insulting Hewitt. When you indicate that you thought there was something important there, I gave it a hard second look. But I just can't see it and so I'm moving on. Again, I'll keep reading what you write, so you could one day convince me, but I'm pretty skeptical at this point.

Researching ├

There is still more work to be done researching ├.

By a formalized proof, Mathematics (├) proves its own consistency. However, the proof does so without regard to the content of Mathematics. For example, Mathematics includes the Peano/Dedekind categorical axiomatization of the natural numbers. The consistency proof doesn't lead to any technical problems as long as there are no inconsistencies, e.g.,
* the Peano/Dedekind categorical axiomatization doesn't infer any contradictions
* there are no "self-referential" propositions in Mathematics

What is mathematics?

Claiming that mathematics proves its own consistency means that it proves there is no proof of false in mathematics. DirectLogic has a turnstile in its notation, but how do we know it means "proves in mathematics"? You've stated the importance of the Peano/Dedekind axioms categorically defining the naturals, but is it not just as important that the rules of DirectLogic categorically define provability in mathematics? What constitutes a valid proof in mathematics?

What is known about Mathematics and what can be known?

Classical Direct Logic is a formalization of Mathematics, e.g., Church's paradox etc. If more is known about Mathematics than what is currently formalized in Classical Direct Logic, then it should be added.

By definition, Mathematics is consistent if and only if a contradiction cannot be inferred. The definition of consistency can be formalized in Classical Direct Logic and used to prove the consistency of Mathematics.

It is possible to define and prove categoricity of the Peano/Dedekind axiomatization because it is not as powerful as Mathematics. It is not known how to define categoricity for Mathematics.

Where does it end?

Classical Direct Logic is a formalization of Mathematics, e.g., Church's paradox etc. If more is known about Mathematics than what is currently formalized in Classical Direct Logic, then it should be added.

This plan can never be finished. If you give me a formal system for DirectLogic that supposedly formalizes mathematics, I can mechanically produce a new sentence that DirectLogic doesn't prove but that is true if DirectLogic is consistent. A mathematician who understands this can never formalize his own understanding of mathematics.

Formalising Mathematics is Done.

I think the mistake is to think you can axiomatically formalise mathematics. Axioms are mathematical objects and a set of axioms can describe a system of logic, but there are transformations on axioms that can only be done by a mathematician. However, all these transformations can be encoded as algorithms. So we can formalise mathematics as data structures and algorithms. Some algorithms are logic interpreters for first- or second-order logic, and some data structures are axioms in those logics.

We can understand the algorithm that implements the logic as a formal specification of that logic, and data structures of the format accepted by that algorithm as the mathematical objects in that logic.
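As a minimal sketch of that view (Haskell, with invented names): the axioms are plain data, and the logic's meaning is the checking algorithm that operates on them.

  -- A minimal sketch: axioms are data structures, and the logic is
  -- the algorithm that manipulates them. All names are invented.
  data Prop = Atom String | Imp Prop Prop deriving (Eq, Show)

  type Axioms = [Prop]

  -- One inference rule (modus ponens) as an algorithm over the data.
  modusPonens :: [Prop] -> [Prop]
  modusPonens known =
    known ++ [q | Imp p q <- known, p `elem` known, q `notElem` known]

  -- "Derivable within n steps" is then a computation, not a primitive.
  derivable :: Axioms -> Prop -> Int -> Bool
  derivable axs goal n = goal `elem` (iterate modusPonens axs !! n)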

No

For every algorithm, there is an axiomatization that derives the same things that the algorithm computes, and vice versa. This idea was considered as a way around Gödel long ago and found to be a dead end.

Godel is irrelevant

I am not really saying anything controversial, only that all mathematics can be written down, and that mathematicians manipulate the written descriptions using algorithms, and also define their meanings using algorithms.

This was what I found controversial

I think the mistake is to think you can axiomatically formalise mathematics.

Mathematicians

Well Turing machines are more powerful than axiomatic systems due to the existence of memory and time. Turing extended the language of mathematics, and did things not previously possible. One could argue that in the past mathematicians interpreted axioms and gave them meaning, but now axioms can be given meaning by their interpretation on a Turing machine. Can axioms give meaning to axioms?

Axiom systems as machines

[...] but now axioms can be given meaning by their interpretation on a Turing machine. Can axioms give meaning to axioms?

Axiom systems strike me as being pretty much the same thing as nondeterministic machines. I say they're nondeterministic because at every time step, they let a mathematician choose among specific ways to rewrite the tape.

So we have memory, time, and even some I/O interaction with the mathematician. The question remains whether the machine's design is suitable for your purposes, but I think this can be explored in a way that's incremental, with cross-pollination between machines and formalisms.
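Here is a toy rendering of that analogy (Haskell, invented names; a sketch, not a definitive model): the machine offers every way a rule could fire at each step, and the mathematician is modelled as a choice function over the offers.

  -- Sketch: an axiom system as a nondeterministic machine. A state is
  -- what has been derived so far; a move offers ways to extend it.
  type State = [String]
  type Move  = State -> [State]

  -- All offers at the current step, across every rule.
  fire :: [Move] -> State -> [State]
  fire moves s = concatMap ($ s) moves

  -- The mathematician supplies the choice among offers at each step.
  run :: ([State] -> State) -> [Move] -> State -> Int -> State
  run choose moves s n
    | n == 0 || null offers = s
    | otherwise             = run choose moves (choose offers) (n - 1)
    where offers = fire moves s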

Axioms and Time

Do axiomatic systems include time? No axiomatic description of logic I have (in my limited experience) seen has a temporal element (or memory). Logic assumes propositions are true for all time, and something is either provable or it is not provable.

A computation system can incorporate the idea that no proof of P has been found at time T, but a proof has been found at time T+1.
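A sketch of that temporal reading (Haskell; checkProof and the candidate stream are assumed names, not any real system): "proved by time t" just means some candidate among the first t checks out, so it grows monotonically with t.

  -- Provability indexed by time: at each tick one more candidate
  -- proof has been examined. checkProof is an assumed decidable check.
  provedByTime :: (proof -> Bool) -> [proof] -> Int -> Bool
  provedByTime checkProof candidates t = any checkProof (take t candidates)

  -- "No proof at time t, but a proof at time t+1" is then:
  --   not (provedByTime chk cs t) && provedByTime chk cs (t + 1)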

"Logic" is not very restrictive

Not quite.

To be sure, your examples are about the general question: is a formal system as powerful as computation? Or even: is a formal system with axioms and inference rules as powerful?

There's a bunch of temporal logics. Then, Hoare logics (and variants) discuss the meaning of programs. Then, it's been shown that with structural operational semantics you can formalize, with axioms and inference rules, program evaluation with all the features that have been tried. Turing-complete languages can be easily modeled.

You're right that logic is *usually* about timeless truths, but that's a purely aesthetic choice.

The above does *not* prove much, but as long as you have decidable proof-checking and your proofs are syntactic objects (IIRC), the proof of Gödel's theorems goes through. And everybody but Hewitt agrees that giving up those requirements makes your "logic" of little use.

Looking at it the other way?

Are there any axiomatic systems that cannot be implemented on a Turing machine? Which can you build (easily), therefore which would make the better starting point for constructive mathematics?

Linear logic

Logic assumes propositions are true for all time, and something is either provable or it is not provable.

Have you seen linear logic? The inference rules use up and introduce resources, so the things that are "true" change over time as we follow a proof from start to finish.
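A toy illustration of that resource reading (Haskell, invented representation): the context is a multiset of resources, and using A ⊸ B consumes an A and produces a B, so what is available changes as the proof proceeds.

  import Data.List (delete)

  -- A linear context as a multiset of named resources.
  type Ctx = [String]

  -- Using the linear implication a ⊸ b consumes one a, produces one b.
  useLolli :: String -> String -> Ctx -> Maybe Ctx
  useLolli a b ctx
    | a `elem` ctx = Just (b : delete a ctx)
    | otherwise    = Nothing  -- the resource was already used up

  -- e.g. useLolli "dollar" "coffee" ["dollar"]  ==  Just ["coffee"]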

Re: linear logic

Yes, I have seen linear logic, and I did not immediately associate it with time. It had elements that can only be used once. It seems it could model time, now that I think about it.

re I think the mistake is to think you can axiomatically formali

I think the mistake is to think you can axiomatically formalise mathematics.

DL formalizes a theory of types including type restrictions on the construction of mathematical propositions.

The definition of DL does not pretend to list a complete set of axioms from which all of math can be mechanically derived.

Defining DL

DL does not formally define DL. You need to implement DL on a Turing machine to formally define it.

Implement turing machines using turtles

DL does not formally define DL. You need to implement DL on a Turing machine to formally define it.

So what you are saying is that I should head to Radio Shack and get a soldering iron, right?

Termination

Look on the bright side, once your soldering actor receives the "all done" message that is guaranteed to arrive eventually you will have proven that DL can be simulated by a Turing Machine, thus resolving many of the issues in this conversation.

Head to Radio Shack

Well that would be one way. You probably don't need to build your own computer; Knuth's approach of defining an imaginary machine is good enough. If you can implement DL in MMIX, that would be a formal specification. In my opinion a formal specification needs to be something that can be executed by a machine; there is too much ambiguity in human language and people's interpretation of things. I don't think DL needs any of the message-order semantics from actors, so at least you won't need a network of Turing machines.

Implement it

Any implementation at all would help.

Formalizing a system in Coq (or Agda) helps a lot nowadays — you can define the system and prove any theorems about it, and readers (almost) only need check the definitions and the theorem statements to see what you're doing.

I find that formalization, while technically hard, tends to be quite instructive for the authors as well (just like implementation).

re "this plan can never be finished"

This plan can never be finished. If you give me a formal system for DirectLogic that supposedly formalizes mathematics, I can mechanically produce a new sentence that DirectLogic doesn't prove but that is true if DirectLogic is consistent.

If a theory τ (obtained by adding axioms to DL) is closed, then by diagonalization you can prove that there is a true proposition (⊢ ⊨τ Φ) that can not be proved in τ (⊢ ⊬τ Φ).

You can carry out a proof of that in DL.

DL can prove of itself, by contradiction via diagonalization, that it is not a closed theory. As with the self-proof of consistency, the self-proof of openness is not enough to put to bed the question of DL's consistency.

I can mechanically produce a new sentence that DirectLogic doesn't prove but that is true if DirectLogic is consistent.

You can do that very easily, in fact. First, define within DL a closed subtheory, τ, then do your construction for that subtheory. (This is laid out starting on page 13 and going pretty much through the end of the main body of the paper.)

How should we interpret the fact that DL lets you do that?

Hewitt:

Information Invariance is a fundamental technical goal of logic consisting of the following:

1. Soundness of inference: information is not increased by inference

2. Completeness of inference: all information that necessarily holds can be inferred.

That a closed mathematical theory T is inferentially undecidable with respect to [an undecidable proposition Ψ in τ] does not mean incompleteness with respect to the information that can be inferred because (by construction) ⊢ (⊬T Ψ), (⊬T ¬Ψ).

I think I can correctly say also that DL does not attempt to give a set of axioms to which every theory can be reduced. In fact, DL presumes that the "theories of practice" people actually used within mathematics contain many inconsistencies among themselves. Since we want to include all these theories of practice as sub-theories of DL, DL can not be reduced to a single, universal set of axioms. (DL is open.)

DL does offer a theory of types that includes strict restrictions on how, e.g., propositions can be constructed from earlier constructed objects.

What is "mathematics"?

I still can't figure out what "mathematics" and the naked turnstile are supposed to be. This quote from Hewitt seems to contradict your understanding of DL being open:

Classical Direct Logic is a formalization of Mathematics, e.g., Church's paradox etc. If more is known about Mathematics than what is currently formalized in Classical Direct Logic, then it should be added.

Reasoning about arbitrary other theories doesn't require openness. You just prove implication statements that have the theory as a hypothesis. Pretty much all logics support this.

And in practice, if DL is to serve as a logic for use in computer science, then it will be implemented on a computer and as such will not be open. "Open logic" is a contradiction in terms.

re "what is mathematics?"

I still can't figure out what "mathematics" and the naked turnstile are supposed to be.

Schematically and informally (and prepared to be Hewitt-scolded if I botch this):

A mathematical theory is defined by the definitions of the theory's types, along with axioms.

A theory τ is an extension of a theory ρ if τ is formed from ρ by the addition of new types and/or new axioms.

A reflective theory includes as mathematical objects its own types, propositions, proofs, etc.

A meta-theory is a reflective theory that includes its own extensions as mathematical objects.

A theory, M, is suitable as a foundational theory of mathematics if every mathematical theory we come up with can be usefully rendered as an extension of M.

DL is claimed to be a nice foundational theory of mathematics, especially from the perspective of computer science.

"Open logic" is a contradiction in terms.

I am using "open" to mean a theory in which the proofs can not be computationally enumerated. For example, any consistent theory of the constructable reals must be "open" in this sense.

open/closed/undeclared

Something interesting, btw: Hewitt mentions that group theory could be defined as a DL extension using the group theory axioms along with an axiom that the resulting theory is closed.

That amounts to saying "proofs of group theory can be constructed in the following ways ..... and there are no other proofs of group theory."

DL contains no "there are no other proofs in DL" axiom (afaict).

Tortoise and Achilles, indeed.

Closed

I am using "open" to mean a theory in which the proofs can not be computationally enumerated.

So was I. DL, as implemented by computer, will be closed.

(I'm off for the night.)

coin flips and programs (re DL by computer is closed)

So was I. DL, as implemented by computer, will be closed.

In DL:

If σ:Type, x:σ, and f:Boolean^σ, then f[x]:Proposition.

Any total, computable function from σ to Boolean gives us a proposition for each object of type σ.

Consider an ActorScript program like:

  GenerateReal.[]:ℕ → Memoize◁ℕ ↦ Boolean ▷.[ [x:ℕ] → (0 either 1) ]

Each call to GenerateReal returns a total computable function from which infinitely many Propositions may be constructed.

A computer program that implements DL can use a return value from GenerateReal to form a proposition but no computer program can enumerate all possible return values of GenerateReal!
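For readers who don't speak ActorScript, here is a rough analogue of the idea (Haskell; this is my sketch, not Hewitt's code): each call draws a fresh, lazily memoized stream of coin flips and returns what behaves thereafter as a single total function from ℕ to Boolean.

  import System.Random (newStdGen, randoms)

  -- Rough analogue of GenerateReal: a lazily memoized infinite list
  -- of coin flips, exposed as a total function Int -> Bool.
  generateReal :: IO (Int -> Bool)
  generateReal = do
    gen <- newStdGen
    let bits = randoms gen :: [Bool]  -- flips, memoized by laziness
    return (\n -> bits !! n)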

subsets of recursively enumerable sets, again

This comment has no force as an argument that DL cannot be closed. Even if we grant that you've identified a set of proofs of propositions that is not recursively (i.e., computably) enumerable, that would have no bearing on whether the set of all proofs is recursively enumerable or not. After all, the naturals for example are recursively enumerable, but "most" subsets of the naturals are not. If I identify a non r.e. subset of the naturals, that does not allow me to exclaim "consequently, the naturals can't be r.e.!" This is an error, probably from (incorrectly) transferring the intuition that a subset of a countable set is countable. There is nothing surprising about some subset of a r.e. set failing to be r.e. If you want to argue that the set of all proofs in DL is not recursively enumerable, you'll need to find a different argument.

re subsets of recursively enumerable sets, again

This comment has no force as an argument that DL cannot be closed.

I'll clarify:

The proof that DL is not closed uses a direct formalization of Church's paradox. That is given in the paper starting on page 13 (which is PDF page 14).

In answer to Matt M., a proof-by-existence that DL-as-implemented-by-computer is not closed, is given via the procedure GenerateReal.

GenerateReal illustrates that a computational coinflip can be used to constructively define an uncountable set together with a constructively valid axiom of choice for that set.

Mathematics is enumerable

The statement you have written does not create anything new; it is a membership test. Given finite symbols from which mathematical objects can be constructed, some of those objects will pass this test, and others not. What you have said is equivalent to claiming that because there are infinitely many subtle shades of colour, the number of sentences in English is not enumerable. However, the number of sentences in English is clearly enumerable irrespective of this argument. Why is this, when you are free to invent words for new colours at any time?
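The enumerability claim is easy to make concrete (Haskell sketch): list all strings over a finite alphabet in order of length, and every sentence appears at some finite position.

  -- Every finite string over a finite alphabet occurs at a finite
  -- position in this list, so the strings are enumerable.
  allStrings :: [a] -> [[a]]
  allStrings alphabet = concat (iterate extend [[]])
    where extend ss = [c : s | c <- alphabet, s <- ss]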

Recorded mathematical results are enumerable

One can in principle enumerate all the theorems that humanity ever proves, presumably.

What GenerateReal shows, in terms of denotations of programs, is a constructive way to establish an uncountable class of propositions, and an effective way to choose an arbitrary member from that class.

It is thus constructively impossible to enumerate all of the propositions.

Converting computation to mathematical functions

If σ:Type, x:σ, and f:Boolean^σ, then f[x]:Proposition.

Any total, computable function from σ to Boolean gives us a proposition for each object of type σ.

Hmm, is there really a rule that lets us convert procedures [σ]↦Boolean to mathematical functions Boolean^σ? I imagine this would at least require a totality proof, or even a proof of purity from side effects.

converting procedures to functions

is there really a rule that lets us convert procedures [σ]↦Boolean to mathematical functions Boolean^σ?

Not as a built-in axiom to Classical Direct Logic (afaict). It would be hard to add such an axiom in general since, as you note, we would have to admit only the provably total, complete, referentially transparent procedures.

To convert procedures to functions you could define a (sub-)theory extending DL (and reasoned about in and using DL).

This sub-theory would include axioms that certain functions exist (which abstract the actor procedures defining the functions).

If you turn out to be wrong and your procedure is not referentially transparent, you can derive contradictions in the sub-theory.

If you turn out to be wrong because your procedure diverges, I suppose you could "hang" your implementation of DL.

As an engineering rule of thumb, where possible, you could write the procedure (or configure the actor communication substrate) to return a value quickly or else throw an exception. In this case, if you are wrong that the procedure can always return quickly, a proof that exposes that mistake will again produce a contradiction.

re: coin flips and programs (re DL by computer is closed)

This post will be separated from its parent by a page break with my settings, so let me include the context:

So was I. DL, as implemented by computer, will be closed.

In DL:

If σ:Type, x:σ, and f:Boolean^σ, then f[x]:Proposition.

Any total, computable function from σ to Boolean gives us a proposition for each object of type σ.

Consider an ActorScript program like:

  GenerateReal.[]:ℕ → Memoize◁ℕ ↦ Boolean ▷.[ [x:ℕ] → (0 either 1) ]

Each call to GenerateReal returns a total computable function from which infinitely many Propositions may be constructed.

A computer program that implements DL can use a return value from GenerateReal to form a proposition but no computer program can enumerate all possible return values of GenerateReal!

Honestly, it's hard for me to take this line of argument seriously. This is like an argument that the proofs of DirectLogic aren't enumerable because DirectLogic has Unicode or SoundBlaster support. It's just obviously a red herring. We could trudge through all of the relevant statements about Gödel's theorem, updating them to account for terminology that allows propositions with arbitrary embedded Actors and, after a whole lot of work, we'd eventually find a new way to phrase the statement that refers to what's actually finitely provable by DirectLogic, the computer program. But I have zero interest in doing this.

DL types vs other objects

We could trudge through all of the relevant statements about Gödel's theorem, updating them to account for terminology that allows propositions with arbitrary embedded Actors and, after a whole lot of work, we'd eventually find a new way to phrase the statement that refers to what's actually finitely provable by DirectLogic, the computer program.

You can define a sub-theory in which you define some class of provably total, deterministic actors. You could add an axiom that no terms exist which do not correspond to such actors.

In that case, your proof could go through, in DL, about that sub-theory.

The theorems of Mathematics are *not* computationally enumerable

By the argument of Church's paradox, the theorems of Mathematics are not computationally enumerable.

Limited Symbols

We can only write a limited range of symbols, so all proofs of a given length are enumerable. The most important proofs are the shortest (and most elegant), and proofs become less useful the longer they get. Therefore the utility of a proof tends to zero, and there are no useful infinitely long proofs. So the set of useful proofs is enumerable.

Further, I just don't agree that real numbers exist. I have never seen one written down, and I never will. No proof will ever be written that relies on one.

Proofs of finite length are uncountable

Proofs of finite length are uncountable because there are uncountably many Actors. Of course, it is computationally decidable whether a Proof is correct.

It will always be possible to discover new axioms for Mathematics because theorems of Mathematics are not computationally enumerable.

Cantor famously proved that the real numbers are uncountable.

Actors are Imaginary

If there are uncountably many actors, then I have to conclude that actors do not exist. There will be some other set, let's call it "realisable actors", that will be much more useful than the imaginary notion of actors.

Cantor proved that real numbers do not exist :-)

If you have proved you cannot count the number of fairies dancing on the head of a pin, what does it matter if fairies do not exist?

Don't add more nonstandard terminology

According to mainstream mathematics, reals do exist and are uncountable. IIUC, the set of actors is as big as the set of infinite bitstreams, which is as big as the reals from 0 to 1.
Hard-core constructivists might agree with you, but there are constructive reals.

Now, while all of that can be an interesting matter of discussion, that's settled enough, so let's please not mix the discussions, it wouldn't help.

Still no.

"the set of actors is as big as the set of infinite bitstreams, which is as big as the reals from 0 to 1."

If you restrict actors to have no inputs, only outputs, and no "random" instruction, then the set of outputs has fewer elements than the set of programs (because some programs will have the same outputs), and the set of programs is at most countably infinite (and at least countably infinite, since we can easily create countable infinities of distinct programs).

What allowing a random instruction gives you is not an uncountable number of actors; what it does is break the correspondence between actors and their outputs.

Whether the outputs are "uncountable" doesn't really matter then.

the correspondence of actors to outputs

As a kind of "philosophical" note:

In a foundational theory, one that can be specialized for various needs, what do you gain by prohibiting abstracting terms as (run-time) actors?

We can write programs that do rigorous, automated mathematics using terms of that kind. But you want a foundation for math that prohibits this?

I have no idea what you're saying but I do see

that the argument was that referring to "an actor" is either referring to a program and its state, which is a countable infinity, or it is referring to a completed infinite output (which can't be addressed or related to a program and its state), or you get confused between the two and start talking about "world lines" like you did before.

re I have no idea what you're saying

I have no idea what you're saying

You could ask.

"an actor" is either referring to a program and it's state, which is a countable infinity, or it is referring to a completed infinite output (which can't be addressed or related to a program and its state) or you get confused between the two and start talking about "world lines" like you did before.

If an actor can make coin flips then its state is distinguished from the state of other similar actors by the fact that they occupy distinct regions of space-time. (If they occupied exactly the same region of space-time then they would be the same actor with the same coin-flip outcomes.)

i have no idea what you're saying... you get confused

<bristle>You got the first part right.</bristle>

"occupy[ing] distinct regions of space-time." hasn't been part

of set theory in the past.

You would have to formalize that.

space-time and set theory

"occupy[ing] distinct regions of space-time." hasn't been part of set theory in the past.

You would have to formalize that.

Not so. If a computing system that "does DL" has an actor that it wants to reify in mathematics as a function, it can introduce a theory in which the existence of the reified function is an axiom.

If the programmer is wrong and the reified actor doesn't actually implement a total, computable function then minimally, the resulting theory will contain contradictions and, potentially, the bug will hang that instance of the DL theorem prover.

The knowledge about how coin-flips and world-lines interact is for the programmer who is physically constructing a computing system. Internally to DL, that knowledge isn't necessary.

There is obviously a lot here I know nothing about but also

a lot here which is strange.

Doing metareasoning about algorithms that contain unspecifiable real numbers isn't normal math.

... I guess no one is going to explain it all, and I can't read mathematical papers, so I guess I should just give up.

"reify in mathematics as a function" don't know what this means

"If a computing system that 'does DL'" don't know what this means

"reified actor doesn't actually implement a total, computable function then minimally, the resulting theory will contain contradictions and, potentially, the bug will hang that instance of the DL theorem prover." I had no idea... nor do I know why

"The knowledge about how coin-flips and world-lines interact is for the programmer who is physically constructing a computing system."

Uhm but you were basing reasoning and classifications on "world lines"!!!

good questions (there is obviously a lot)

"If a computing system that 'does DL'" don't know what this means

As an example, a program might define some data structures for representing (some) propositions, take some propositions as assumptions and/or goals, and try to prove something by forward-or-backward chaining (according to Classical Direct Logic, etc.).

Maybe the program is trying to prove that Socrates is mortal and has a negative bank balance.
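A minimal sketch of such a program (Haskell, invented representation, not anything from the papers): propositions as data, and forward chaining to a fixed point.

  -- Sketch: forward chaining over Horn-style rules.
  type Fact = String
  data Rule = Rule [Fact] Fact  -- premises, conclusion

  step :: [Rule] -> [Fact] -> [Fact]
  step rules facts =
    facts ++ [c | Rule ps c <- rules
                , all (`elem` facts) ps, c `notElem` facts]

  chase :: [Rule] -> [Fact] -> [Fact]
  chase rules facts
    | next == facts = facts
    | otherwise     = chase rules next
    where next = step rules facts

  -- chase [Rule ["Socrates is human"] "Socrates is mortal"]
  --       ["Socrates is human"]
  --   == ["Socrates is human","Socrates is mortal"]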

"reify in mathematics as a function" don't know what this means

Concretely: a program might hold a reference to a procedure actor and from that reference, build new propositions that treat that actor as an oracle for computing the values of some DL function.

Conceptually, in DL theory, there is an abstract proposition containing use of an abstract function.

The concrete computer program is creating data structures to represent the proposition and function.

The data structure for a DL function might happen to use (as an internal implementation detail) an actor procedure that can compute values of the function for given arguments.

"abstraction" and "reification" are complements, in this usage. Viewed abstractly, a DL function term could be a procedure actor. Conversely, a procedure actor can in some cases be reified as a DL term.

"reified actor doesn't actually implement a total, computable function then minimally, the resulting theory will contain contradictions and, potentially, the bug will hang that instance of the DL theorem prover." I had no idea... nor do I know why

For a procedure actor to actually compute values of a DL function, it must return a value for every possible argument (it must be total). It must also always return the same value for the same argument.

A DL procedure actor isn't guaranteed to have those properties. It might hang rather than return a value. On different calls, it might return different values for the same arguments.

If a program reifies a bogus DL procedure as a function, something will break. Either we can then prove contradictions, using the (bogus) "function", or our automated theorem prover will just hang at some point.

"The knowledge about how coin-flips and world-lines interact is for the programmer who is physically constructing a computing system."

Uhm but you were basing reasoning and classifications on "world lines"!!!

A programmer can accurately believe that his memoized-flip computes a total, computable function of type Boolean^ℕ without needing an elaborate mathematical formalization of his assumptions about physics.

When he reifies the memoized-flip actor as a DL function, he can simply make it a logical assumption (for the purpose of proof-making) that the resulting function is in fact really a function.

p.s.: Above I'm describing as if there is no need to formalize reification of actors to functions. And there isn't in some practical scenarios. There can, however, be (sub-)theories in DL that reason about the mathematics of when it is OK and when not to reify procedure actors as functions.

Look, what does it break to be more precise?

Say this:
1) An actor is not its output.
2) the set of actors that refer to other actors is at most a countable infinity. Let's say I call those actors0
3) for theoretical reasons, it's useful to reason, when possible, about completed infinities, including the output of actors.
4) actors that have a random instruction can produce a continuum of completed infinite outputs; let's say I call those actors0'
5) actors that can refer to the completed infinite results of other actors are another kind of actor. If they refer to actors0, then their results can be compressed to the actual actor and state, and they are still countable, and compressible to actors0
6) if they refer to the output of actors0' then they're not countable, call them actors1

I got a phone call to answer, but I think I made my point.

re Look, what does it break to be more precise?

I got a phone call to answer, but I think I made my point.

You're almost there, afaict.

Pretend that DL was designed by someone who thoroughly understands all those hairs you want to split. Suppose that person wanted a mathematical foundation that is sensitive to all those hair splits, and still general enough to be a foundation for "mathematics (in general, especially for computer science)".

Assume DL is such a system and then try to figure out why it works.

If you find a place where it fails at that, that would be interesting.

Sigh, so what is the point here?

You've created hypothetical algorithms that refer to real numbers that no one can specify.

If you could specify a specific real number in any way, then it's not a member of the set that can't be specified by a countable infinity of specifications.

...
So you've hypothesized these algorithms (which can never be realized) as mathematical objects. Call them hyperalgorithms. Not one of them is an object that a mathematician could create individually, not one is an object that he could specify individually, because if he could it wouldn't be a member of this set.

And in some way this invalidates Gödel (beyond me), even though one could easily filter your statements down to a subsystem of non-hyperalgorithms that mathematicians actually work with.

[edit] you know, if Hewitt talked about these things in a non-ambiguous language, distinguishing the very different things I delineate, we wouldn't be wasting time in this long thread. I suggest that you stop defending the confusing, ambiguous language he uses.

Ambiguity of talking about special objects as if they weren't

Part of the damn problem is that Hewitt lumps together, as "actors", all programs that communicate with an unlimited number of other programs.

Then you take very special objects (programs that have no inputs, put out a single infinite string, and use a "random" instruction) and equate the output in this very specialized circumstance with "the actor".

Gah. There are nonsensical religious dogmas that are less confused!!!

[edit] The way these objects relate to less limited objects needs to be made clear. If all sets of outputs are "an actor", what about inputs? What about code: if two "actors" have different code but the same output on some hypothetical occasion, are they "the same actor"? What about ambiguity of message order?

Each CreateReal.[ ] is one of uncountably many Actors

Concurrent computation (inherently indeterminate) can be exponentially faster than the parallel nondeterministic lambda calculus.

Indeterminism is an inherent part of this performance advantage which has become critical because of many-core distributed computer systems.

Thus, each CreateReal.[ ] is one of uncountably many Actors.

See Inconsistency Robustness in Foundations.

Example please

If Reals exist, please reply with an example.

In order for actors to be uncountable

then they have to refer to reals that can not be referred to in any finite format, not even as a metafunction that generates them.

So any real that can be specified, by definition, is not a member of that set.

The random instruction is a stupid trick in a way because it lets us sort of specify.

One could argue that letting a random function run for eternity will not specify any specific real, that the probability is an infinitesimal.

So we need an infinity larger than infinity.

I want to say something like "stupid pet tricks" but maybe I'm wrong. i=sqrt(-1) might seem a stupid trick to someone. We're interpreting random in some totally magical way. Is that useful? I have no idea.

Types and objects (re Actors are Imaginary)

In Direct Logic, ℕ is a type, not a set.

We can construct the type ℕ, and we can construct each natural number 0, 1, 2, ....

We have no way to construct "the set of all natural numbers".

The type ℕ is enumerable because we can define a total, computable function EnumerateNaturals:ℕ^ℕ.

(The identity function will do for a definition of EnumerateNaturals but there are other possibilities.)

Similarly, we could constructively define a type NaturalMultiplesOfThree, and we can construct objects of that type 0, 3, 6, 9, ....

It so happens we can also enumerate the objects of that type: EnumerateMultiplesOfThree:NaturalMultiplesOfThree^ℕ

We can construct a type for functions from naturals to booleans: ℕ ↦ Boolean.

We can not enumerate the objects of that type (as a diagonalization argument shows). At a minimum, that means that the type is not "computationally enumerable".

Of course, we can construct at least some objects of the type. For example, we can construct any total, computable function from naturals to booleans. (A familiar diagonalization argument proves that these are not computably enumerable.)
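That familiar diagonalization is one line (Haskell sketch): given any purported enumeration e of functions from naturals to booleans, the diagonal differs from e k at argument k, so e misses it.

  -- The diagonal function disagrees with the k-th enumerated function
  -- at argument k, so no enumeration e can cover all of them.
  diagonal :: (Int -> Int -> Bool) -> (Int -> Bool)
  diagonal e n = not (e n n)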

In DL, we also say that the type Boolean^ℕ is not well founded. Specifically, we can define a function that is syntactically of the required type (Boolean^ℕ), yet which is not total.

We can also define functions of type Boolean^ℕ which are total, computable but which are not provably total, computable.

Finally, if we allow a constructive definition of a function to include a coin flip, we get something akin to an axiom of choice for the type Boolean^ℕ.

In particular, we can (as needed) choose (by construction) any countable subset of Boolean^ℕ, although using this method, we can not in general prove equality of two randomly chosen functions in Boolean^ℕ.

Of course, with a suitable definition of equality, Boolean^ℕ is isomorphic to ℝ.

To sum up:

The type Boolean^ℕ, a syntactic type, exists.

The elements of Boolean^ℕ are not enumerable (indeed they are uncountable).

If coin flips are allowed, we can construct random, countable choices of objects of type Boolean^ℕ.

Types of uncountable functions.

I don't think there is any argument that you can have a type for (N -> Boolean) that is uncountable. But that does not mean the imagined functions actually exist.

If you have an untyped language, you can only write a countable number of functions. Types do not magically create more functions than you can write in an untyped language. So (N -> Boolean) is a subset of the untyped programs (which are countable), and therefore must also be countable, by the following algorithm:

- enumerate all untyped programs.
- infer types for each program in turn.
- test each for membership of (N -> Boolean).

We have just enumerated all members of (N -> Boolean).
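As a sketch of that proposed enumeration (Haskell; hasType stands in as an assumed oracle for the membership test, which is the step a real checker cannot in general decide):

  -- The proposed scheme is just: enumerate program texts, keep those
  -- that pass the membership test. hasType is an assumed oracle.
  enumerateTyped :: [prog] -> (prog -> Bool) -> [prog]
  enumerateTyped programs hasType = filter hasType programs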

choice (re types of uncountable functions)

I don't think there is any argument that you can have a type for (N -> Boolean) that is uncountable. But that does not mean the imagined functions actually exist.

Historically, mathematicians distinguished "constructable" values on the one hand, from unconstructable values.

Constructable values, informally speaking, are ones we can work out using pen and paper starting from a few seed values and following some list of rules.

Unconstructable values are alleged mathematical objects that can't be constructed.

The classical Axiom of Choice (of ZFC) lets us take a mathematical step of declaring, by axiomatic assumption, some variable x to be an arbitrary member of a non-empty set.

We can define x that way even if that means x might stand for some "unconstructable" value.

Intuitionist / constructivist math, broadly speaking, rejects that move -- partly because of the kinds of realist arguments you are making (and partly in hopes of avoiding paradoxes).

Yet:

If we assume that the either operator is "constructive", that traditional landscape changes. We can now randomly choose certain values that were formerly considered "unconstructable".

Does that mean that all of the real numbers "exist" in some philosophical sense? Personally, I don't know or care.

I only know that in computing systems we can randomly pick countable lists of formerly unconstructable values without restriction.

If you have an untyped language, you can only write a countable number of functions.

If you have a programming language, you can only write a countable number of procedure definitions in that language.

A single procedure definition can formally denote an uncountable number of possible return values.

Finite State Machines.

The stack space available for the return value is finite, so the return values are countable as bitstreams.

In a real machine no values are normally returned at all; instead the FSM is left in a different state. As the registers and memory are finite, the machine is a finite-state machine. No matter what fancy language or type system shenanigans you try, the underlying machine is still an FSM.

So any operator implemented in such an FSM does not magically make unconstructable Reals constructable, even a nondeterministic random generator (which might exist as digitized noise from a quantum tunnelling diode - already used in hardware RNGs without causing a philosophical crisis).

re FSMs

You are right that people reason about programs in terms of universal machines, but in practice people can only build approximations of these.

When people do this carefully, as in constructive math like Hewitt's, you can interpret the countable infinities as processes that compute successive approximations of limits.

Interpretation is not reality

You cannot interpret processes that compute successive approximations of limits as anything other than processes that compute successive approximations of limits.

They are never infinite. Yes, I can pass around the process instead of a value, so that I can use it in arbitrary precision maths. But it's not the same thing as the value itself.

In a lazy language like Haskell, these things are commonplace, and don't even need to involve Reals. A classic example is a Fibonacci generator function that outputs an infinite list of Fibonacci numbers.
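That classic, for reference:

  -- The classic lazy Fibonacci stream: a finite program denoting an
  -- infinite list, of which any run inspects only a finite prefix.
  fibs :: [Integer]
  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)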

Arbitrary precision is not the same thing as infinite precision.

no no no no no!

Anything of finite length is countable, as long as the symbols it's made from are from a finite set! Hell, if the alphabet of symbols is a countable infinity, then the set of strings of them is still countable!

Countable doesn't mean that you can constructively enumerate the elements in a way that excludes invalid ones and duplicates. If you can count up a set that INCLUDES them all, then they're countable even if some elements aren't valid.

The proof that rationals are countable is a construction that includes duplicates.

Look, if you have a finite alphabet of symbols then an integer in the same base as the number of symbols of the alphabet can be read digit by digit as a string of those symbols. Some of those strings will be valid sentences. If I count up then I have a countable sequence of all strings. Exclude the invalid strings, and I have a countable sequence of all sentences.

Now if the symbols are from a countably infinite set, then you can still enumerate them all by interleaving the sequence of strings from larger and larger alphabets.

Say you define the power P of a string as "size of alphabet"^"number of symbols in the string". Then for any power P, there is a finite position X in the sequence where all strings of that power have already been enumerated.

That's countable.
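The interleaving can be written out (Haskell sketch, assuming the inner lists are infinite): walk the diagonals, so the element at row i, column j is reached at diagonal i+j, a finite position.

  -- With rows !! i = the strings over the first i+1 symbols, this
  -- enumerates every string over a countably infinite alphabet.
  diagonals :: [[a]] -> [a]
  diagonals xss = [xss !! i !! (d - i) | d <- [0 ..], i <- [0 .. d]]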

The reason that's not an uncountable power set is that we're excluding infinite strings. The usual power set -> continuum constructions become countable sets if we don't allow infinities. Counting TO infinity but never completing doesn't get you uncountable.

Let me add two points:
1) I'm surprised some people here don't get basic Cantor. This is the level of math that hobbyists on the internet love because it's so easy. This is basic level stuff you should get in a low level class.
2) it all sounds useless to me. It comes down to whether, when we construct sets from infinite processes, we allow those processes to complete. BUT we've defined "countable" as "never completes", so the fact that we DO allow completed infinities in some contexts doesn't change the fact that when we start with integers, we DON'T allow them to complete, because we say "infinity isn't an integer"...

Basically, the usual way of doing things isn't really "we can complete infinities" but rather "natural numbers are typed to not be complete, but other infinities are"...

In the end we could have typed things differently and still come up with a consistent system; it would just be a DIFFERENT consistent system where even natural numbers have the order of the continuum and there's more than one positive infinity (infinity being the part of the natural numbers that has the order of the continuum).

Symbols not countable

Anything of finite length is countable, as long as the symbols it's made from are from a finite set!

Ergo, Hewitt can't be using a finite (or even countable) set of symbols. As we know, Hewitt's symbols include arbitrary Actors, which Hewitt's saying are uncountable.

That sounds like circular reasoning

If you assume that actors are uncountable then actors are uncountable.

But unless he has another reason for them to be uncountable, this contradicts any sensible definition of "actor".

A CreateReal.[ ] is one of uncountably many Actors

A CreateReal.[ ] is one of uncountably many Actors.

Paradigm shifts are very difficult

Thomas,

Paradigm shifts are very difficult and confusing because new unfamiliar stuff appears and some previously standard stuff is cast off or subtly modified.

Normally people operate fairly well just using pattern recognition and don't have to resort to detailed technical argumentation (which is very difficult if not impossible for most people). But paradigm shifts upset things until a new paradigm becomes established long enough that older ways of thinking have largely disappeared.

Also, scientific articles (which are written mainly for experts to advance the state of the art) are not the best way to learn a paradigm shift. The current tools at LtU and many contributors here do not seem to be up to the admittedly very tough job. It is very difficult even for professionals at the highest level to advance the state of the art (what counts scientifically) in the book "Inconsistency Robustness" .

LtU has been helpful to me in that I get ideas on how to advance things even from comments and statements that are mostly wrong; just as I learn from misunderstandings and questions of students and colleagues. Finding limitations is fundamental to making further advances.

So don't be discouraged. You have made some important contributions here and are starting to get the hang of it. If you persist in your investigations, you will be rewarded.

I will be happy to answer any particular questions that you might have.

Regards,
Carl

PS. For a humorous view, see Lewis Carroll, "What the Tortoise Said to Achilles".

What I find particular

What I find particularly frustrating is that it seems like very few people who insult Hewitt's work or Hewitt have made a serious attempt to understand it. A decent number look at it briefly and decide it is too confusing or hard to follow and then go into attack mode. Then, on LtU, they act as if it is Hewitt's (or someone else's) duty to examine their confused statements, correct them, and keep doing that until the commenters have "read the papers without actually looking at them".

That is just not the way that scientific, or mathematical, or even rigorous engineering discourse works.

But that is the way scientific discourse works. Matt M already answered about peer review, and I (a mere PhD student) elaborated earlier on information overload. Many CFPs explicitly demand clarity. The OOPSLA 2012 CFP made this painfully clear:

The length of a submitted paper should not be a point of concern for authors. Authors should focus instead on addressing the criteria mentioned above, whether it takes 5 pages or 15 pages. It is, however, the responsibility of the authors to keep the reviewers interested and motivated to read the paper. Reviewers are under no obligation to read all or even a substantial portion of a paper if they do not find the initial part of the paper interesting.

http://splashcon.org/2012/cfp/due-april-13-2012/378-oopsla-research-papers

Even though other CFPs are not as extreme, I've learned that's good writing advice.

Improving scientific articles is challenging

Improving scientific articles is challenging.

Readability is certainly an issue.
Writing good abstracts and introductions is very difficult.
Authors initially mainly get help from colleagues (including students).
After an article has been accepted at a prestigious journal (e.g. Science, Nature, CACM), the professional editorial staff help with article organization and sentence wording, but can't do much for content.

The system of using unpaid anonymous reviewers is not adding much value either in terms of content or editorial improvement :-(
After a number of prominent scandals, leading journals are rethinking how to proceed.

Is it really a problem?

Not every thread in every topic interests me. Many do, and everybody has their own personalised ranking. Nobody has a ranking that agrees with anyone else's, but we all feel that they should be roughly consistent with each other. Stick in the idea that somehow relevant information should be more prominent than the noise, and I would imagine there is an analogue of Arrow's Theorem that applies to every forum in existence. Previous attempts to resolve this?
* Hiding / collapsing threads.
* Killfiles.
* Tagging / hiding.
* The pgdn key.

Being a little pragmatic, one of these is currently implemented on LtU and the rest will never be (barring some bizarre late-22nd century upload of the database to a sentient drupal successor). The nice thing about skipping through stuff that I don't want to read is the interestingly serendipitous way I discover that I am more interested in a particular topic than I thought at first. It causes reevaluations of interest and discovery of random tidbits. This is the classic tradeoff between searching and browsing interfaces. I would highly recommend it to anyone who has trouble with one particular topic taking over a thread or topic.

Once the reading experience is providing more pleasure there are simple approaches to improving the writing experience.
* Don't put time and energy into discussions that you do not value.
* Write more *other* things that do interest you.

Over the years the coverage of Actors has slowed down a bit. Fewer unpublished reports get posted and the number of threads related to it seems to have reduced. I haven't checked, but was 2013 peak-Hewitt? Personally I like a lot of the discussions it has provoked. Hewitt can be terse and cryptic at times, and repetitive at others. But he has said a number of interesting things over the years. With a background in process calculi, and an interest in the foundations of maths, I find that most of the discussion appeals to me. Where it spills over into other areas (engineering of X is solved by actors), well, I rely on the pgdn key as a technical solution.

Peak Hewitt

<delurk>
2013 might have been "Peak Hewitt", but it seems that 2015 is becoming "Peak Discussions About Hewitt". That should start triggering alarm bells; it's usually around this point, especially in the absence of technical solutions beyond <pgdn>, and particularly when threadspamming hits the usability of the medium as hard as it does here, that "community mainstays" start quietly drifting away.

Personally, I'm finding more and more that seeing posts by Hewitt or Thomas Lord causes me to switch off.
</delurk>

Put it in Wikipedia

Let's make a Wikipedia page about "Peak Hewitt." I see no reason why this couldn't become yet another Internet theme.

Good luck with Wikipedia; censorship there is *very* strong :-(

Good luck with Wikipedia; censorship these days is very strong :-(

In the early days, I contributed to Wikipedia including writing the article on "Actor Model" (which is still there) and other articles.
But then the censors banned me :-(

See Corruption of Wikipedia.

Please no

I would very kindly suggest that here is not the place to air personal persecution complexes. I'm pretty sure marco was joking.

What gave me away?

What gave me away? ;-)

human decency

I would very kindly suggest that here is not the place to air personal persecution complexes.

This is disgusting and shame on you.

Marco made -- and let's be generous and say -- a joke. The joke was despicably ugly. It proposed inciting an internet-wide mob to direct hatred at Hewitt.

An "anonymous coward," apparently, Marco -- as a joke (let's say) -- suggested we get as many people as possible to gang up on and socially maltreat Hewitt. This is in a context where similar "jokes" have famously created tremendous grief and suffering as mobs get out of control.

Somewhat gracefully, Hewitt rebuffed the "joke" in kind.

You, Ben, started "policing" comments in the wrong place.

I would very kindly...

Not at all. You would dress your obnoxious comment in polite language. That is different. There is no way to "kindly" say what you said.

I interpreted "Peak Hewitt"

I interpreted "Peak Hewitt" as the number of posts Hewitt makes, not the posts against him.

" I see no reason why this couldn't become yet another Internet"

In its context, this is hate speech:

"I see no reason why this couldn't become yet another Internet theme."

In its context, this is

In its context, this is hate speech...

No, it isn't. Why the sudden flamemode on?

inciting crowds towards social isolation is hate speech

Marco's proposal objectifies Hewitt, a live human being with some of the most interesting things to say about programming language theory among all LtU commenters, as a target of ridicule and bullying.

Marco's proposal (even if a joke) is an example of a recently more common trend in how people behave on the Internet in social forums. That form of bullying has produced outcomes like people needing police protection when mobs start making certain kinds of threats, suicides, and more.

The Internet as popularly used makes it easy for people to engage in such bullying without suffering personal consequences from it.

One way we can help to resist the problem is to call it out where we can usefully do so.

This is not bullying

I have no idea what you are talking about. This isn't bullying. It's not even close. Are you suggesting that Hewitt needs police protection from LtU now? Please don't cheapen the discussion here with such exaggerations and comparisons to hate speech.

The issue at hand has been one of behavior; repeated, persistent, intentional derailing of threads towards Hewitt's work, by himself and one or two others. The negative reaction to years of this behavior could hardly be construed as bullying. Besides, I get the impression that LtU people are not exactly the bullying type; strong opinions and heated discussions tend to stay rather impersonal, even if they misunderstand or fundamentally disagree with other people's positions and paint with an overly broad brush.

I don't see anyone suggesting Carl is a bad person, is stupid, or that he kills puppies. Moreover, I don't see anyone even really suggesting that Actors are bad or stupid or that they kill puppies. However, I do get the sense that some people are frustrated about how quickly threads get derailed towards his work and the ensuing amount of noise.

Maybe you experience this as persecution, but IMHO it's triggered by a long-term pattern of behavior that some people find annoying. Marco's joke notwithstanding, about the strongest language I've seen here is "that's annoying, please stop." and "you're trolling".

"derailing" a thread

Hewitt's response comments are generally on-topic, usually very brief, and not necessarily easy to grasp without putting some effort into the work he talks about.

Hewitt's top-level comments are generally on-topic and of a similar character, though not necessarily so brief.

Some of them have provoked lengthy discussion of varying quality.

The only sense I can make of the phrase "derailing a thread" in this context is that you wish other people would not have these discussions.

Further, in this LtU topic, some people are offering a social strategy to achieve your aim of preventing people from having those discussions. The social strategy is to ask everyone but Hewitt to single Hewitt out and assign to him a second class status in this LtU society.

bullying language

about the strongest language I've seen here is "that's annoying, please stop." and "you're trolling".

Or how about the one accusing him of having a persecution complex?

"please stop" is a peculiar thing to say when you consider that the person saying it could instead say nothing and ignore the thread.

"you're trolling" can be fair accusation if it is true which does not appear to be the case here.

Trying to incite everyone-but-Hewitt to treat Hewitt as a lesser among equals is bullying. It's especially embarrassing from a perspective that recognizes the works he talks about as being really quite interesting and PLT-relevant on a broad range of subjects.

re: trolling

On the whole, as I have said from the top of this whole topic -- if not clearly enough, I do sincerely apologize -- I agree that the main problem is that people reply to Hewitt, not that Hewitt posts in the first place. Too bad the "zen" approach to interactions with people here ends up with very trollish looking outcomes.

Implied Agreement

Part of the problem is that not replying imparts a kind of implied agreement, which would be misleading in a public forum that may be used as a reference.

Please read what I actually

Please read what I actually wrote. I suggested that this is not the place to air grievances against Wikipedia. Or do you disagree? I was under the impression that LtU was some kind of forum for discussing programming languages.

If LtU allows personal attacks, it should also allow defense

Ben,

It's interesting that you chose not to criticize Marco.

If LtU allows personal attacks, then in fairness, it should also allow those attacked to defend themselves.

Regards,
Carl

For what it's worth

...wikipedia isn't a place where experts are made welcome. I'm somewhat surprised an article by yourself on the actor model has managed to survive (it being, of course, "original research"). You probably didn't manage to annoy enough admins to get it nuked.

I find that wikipedia is best kept as a stress relief tool. If you find yourself embroiled in a situation like this one, head off to wikipedia and make some edits. The trick to it is to make grammar-fixing edits on relatively uncontroversial subjects (thus keeping your head down from the "serious" wiki-police), but let a few of them subtly (or not so subtly) alter the meaning as well as the grammar. If you add a few spurious references to nonexistent, long-out-of-print books, that helps.

If you want to really nail those edits down as "truthy", though, start editing them *back* to the correct state with a different persona, and then use the old persona to complain bitterly that the new persona is a sockpuppet of Banned User:CarlHewitt and watch "important" wiki-role-players rush to reinstate your lies.

It needs access to a couple of proxies, but adding blatant falsehoods to wikipedia is quite a fun thing to do. Whatever you do, don't start believing wikipedia's somehow important.

wikipedia's pervasive inconsistency

It needs access to a couple of proxies, but adding blatant falsehoods to wikipedia is quite a fun thing to do.

Anti-social, and yet unpreventable given the way they have structured the institution.

LtU is doing a lot better than Wikipedia :-)

LtU is doing a lot better than Wikipedia :-)
But the LtU ecosystem is fragile and could easily be destroyed :-(

There are some excellent suggestions for improving LtU here.

PS. People should not sabotage Wikipedia articles.
Instead, they should put the effort into fixing Wikipedia's broken administration.