When are Actors appropriate?

So there have been conversations about what is appropriate talk on LtU. Personally I am not offended by brusque commentary. On the other hand, while I respect Hewitt, I do think that the 'actors everywhere' thing has not gotten any better, only worse. I posted about process calculi and the first reply railroaded it towards actors, and sure enough, people took the bait. :-(

I believe that conversations should be allowed to go off on tangents, e.g. talking about quantum. We all have subjective opinions, like Justice Stewart, about when it has crossed from respectable tangent into frankly trolling behaviour, even if the troller doesn't do it to be a troll.

I'm asking everybody who is getting involved in each and every actor thread to please stop when they are on topics that did not in fact start out talking about actors. Personally I think it is OK if you go and start a new parallel topic where you draw the link and leave the conversation there. I do not need anybody to stop talking; I just would very much appreciate it if we could all try to keep the torrent of actor threads contained in their own places, so that other threads can live or die on their own.

(At the very least, realize that by forcing a thread to go onto 2 pages you have broken my ability to get to the new posts. Hardy har-har.)

Thanks for your support.

Importance of types in practice

Types are of fundamental importance to the practice of computer science and engineering as they are crucial to performance, security, readability of programs, etc.

For Computer Science, it is practically important to clean up the confusion caused by Gödel's work and to put types on a firm rigorous foundation.

There is a distinction

There is a distinction between mathematical objects and the algorithms by which they are constructed, and nothing can exist which you cannot construct.

That's not the default in mathematics; if you want a mathematics founded on this idea, you should look at the various forms of constructive/intuitionistic logic. You might also want to look at predicativity, for your bit "it cannot include itself in the search as it does not exist when the search happens". And all these are part of the tricks that finitist (meta)mathematics uses to be reasonably safe from paradoxes.

Mathematicians Manipulate Written Descriptions.

I agree, but I am saying something stronger than that. I think any statement that can't be constructed cannot exist as a mathematical object. After all we can write down all mathematical statements, which means they are constructible, and the mathematician performs algorithms on the written statements.

Mathematicians manipulate social descriptions

After all we can write down all mathematical statements

Writing a mathematical statement sometimes requires coining new terminology. Coining terminology sometimes requires ongoing communication between the person who's trying to coin the terminology and the people who are trying to understand it.

So we can't write down all mathematical statements, because we can't artificially construct all possible societies.

Actually we can

Look, the argument is whether the data required is finite.

The data required to teach someone any new concept is finite.

Pictures are finite. Sound is finite. Language is finite.

If a person could conceivably learn a new terminology from a lifetime of instruction then we can set a finite limit on how much data it takes to give someone a lifetime of instruction.

You can't even argue that we need infinite symbols. Humans have limited resolution eyes and ears, so all possible finite length writings and sounds can be encoded in finite space as bitmaps or sampled sounds.

We can't yet put touch into a recorded medium nor interaction, but those aren't inherently infinite either.

Fallacy — infinities are confusing

First, of course, we can't write down *all* possible mathematical statements in a finite time — they're a countable infinity. But any mathematical statement can be written down, any proof can be written down (mathematicians require that from logics they use) and then checked. And I think that's enough for Keean's argument.

Furthermore, we can in fact write a program which, left running indefinitely, will eventually produce all mathematical theorems. You just loop over possible proofs of some length (there's a finite amount of those), then the ones of the next length, and so on. You check each of these proofs, and print the proved theorem if the proof is valid. (This is the hardest thing to do — printing all possible definitions is easier; printing all sensible definitions is a different matter).
At each point, you will only have a finite amount of math, but the program will go on. Proving that each valid theorem will appear is trickier, because it depends on the search strategy — you're exploring an infinite tree, so you only get to every point if you use BFS, not DFS. That's why I ordered proofs by length — that's a breadth-first visit. Proving that each true theorem will appear is yet a different matter, because you need each true theorem to have a proof — and this fails for Peano arithmetic, no matter how you define it. In first-order logic, you have statements T that only hold for actual naturals, but your axioms apply also to non-standard naturals, so that T can't be provable. With most other logics, you can have axioms that are only modeled by actual naturals, but you cannot prove all semantic consequences of the axioms.
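As a rough sketch of that breadth-first enumeration (Python, with a toy alphabet and a placeholder proof checker standing in for a real, decidable one; nothing below is any particular logic):

    from itertools import product

    # Toy alphabet for encoding candidate proofs; a real logic has its own
    # (finite) syntax. Everything here is a placeholder for illustration.
    SYMBOLS = ["0", "s", "=", "(", ")"]

    def is_valid_proof(candidate):
        # Placeholder for the decidable proof-checking step: accept strings
        # of the form "t=t" as a stand-in for "a valid proof".
        parts = candidate.split("=")
        return len(parts) == 2 and parts[0] == parts[1] and parts[0] != ""

    def conclusion_of(proof):
        # Placeholder: a real checker would extract the proved theorem.
        return proof

    def enumerate_theorems():
        # Breadth-first over proof length: every finite proof is reached
        # after finitely many steps (a depth-first walk of this infinite
        # tree would not guarantee that).
        length = 1
        while True:
            for letters in product(SYMBOLS, repeat=length):
                candidate = "".join(letters)
                if is_valid_proof(candidate):
                    yield conclusion_of(candidate)
            length += 1

    # Example: print the first few "theorems" the toy enumeration finds.
    gen = enumerate_theorems()
    print([next(gen) for _ in range(3)])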

Thankfully, modern mathematics can settle philosophical questions like that rather precisely.

enumerating all theorems (not)

Furthermore, we can in fact write a program which, left running indefinitely, will eventually produce all mathematical theorems.

That seems rather improbable.

Surely, mathematics in general can prove theorems about each individual constructible real (included in the theorem by construction).

Yet the constructible reals are not computably enumerable.

Consequently, the provable theorems of mathematics must not be computably enumerable.

Subsets of recursively

Subsets of recursively enumerable sets need not be recursively enumerable.

Finite but unbounded is a subtle concept

because it isn't finite in the sense that real world objects are finite.

Math that can't be written down

First, of course, we can't write down *all* possible mathematical statements in a finite time — they're a countable infinity. But any mathematical statement can be written down, any proof can be written down (mathematicians require that from logics they use) and then checked. And I think that's enough for Keean's argument.

I was saying "all" in an attempt to match Keean's use of "all," and I think that's the same as what you describe. So, while this clarification is nice to have out in the open, it doesn't shift my position.

We can't meaningfully write down a mathematical statement unless we designate a meaning for it, and that's a social process. To designate a meaning, we have several options, but defining all our terminology in terms of more terminology isn't one of them.

We can almost give up meaning by saying that we're merely dealing with arbitrary symbols that can be manipulated according to abstract inference rules. However, to really give them meaning that way, we have to give a meaning to things like "symbol" and "inference rule." We happen to be in a culture where "symbol" and "inference rule" are thriving memes, but that doesn't give them absolute Platonic existence. Our culture may yet adapt to interpret these memes in ways we don't currently anticipate.

For instance, Hewitt's favorite way to interpret these concepts is that mathematical sentences may contain embedded real numbers. In a system I'm designing, definitions all go through cryptographic code-signing; this pragmatically enforces that everyone gets to invent names that nobody else can clobber, which means names are not as easy to define and use as you might be used to. But for any given real number, or any given name, it's pretty easy to say "that one!" and make mathematical claims about it.

If you say that "any mathematical statement can be written down," it only serves to exclude Hewitt's and my systems from being "mathematical." I'm okay with describing my system in ways other than "mathematical," but Hewitt's been more insistent.

Then reals would be countable

In logics, proof theory works a bit like you describe, but there's more.
Usually statements don't really exist as mathematical objects unless you're doing metamathematics. But statements (or rather, predicates) can describe mathematical objects. Let's take descriptions of reals then.
As we know, the set of reals is uncountable. You can also take finite descriptions of reals — say, "x^2 = 2 and x > 0" is a description of sqrt(2). Since these descriptions are syntactic objects (strings of symbols which are well-formed predicates that hold for a single real), they're clearly countable. Hence, most reals can't be described. Reals aren't an exception.

However, even if you don't believe in reals (and such people exist), but you believe in finitist mathematics, I can show you with finitist methods* that proofs cannot produce nonsense. That was Hilbert's program.

*Well, not quite: because of Gödel's theorem, you need to step a bit outside, and use methods which are less finitist. So a logic with an uncountable set of statements wouldn't help the original Hilbert's program.

(There's also a theory of real numbers in constructive mathematics, but I don't know it well, and I'm not sure it produces an uncountable set of reals — or even what exactly "uncountable" means in that context).

Reals aren't countable but statements are because

there doesn't have to be a one-to-one correspondence between reals and statements. We have statements that describe, for instance, a segment of the number line, which itself contains an uncountable number of reals.

So we never describe any individual such real directly; we only describe a set that contains it and others.

Yes, undescribable mathematical objects are fine

Agreed, no paradox here. I reacted to (one interpretation of) the parent's comment (by Keean):

I think any statement that can't be constructed cannot exist as a mathematical object.

In your example, we only describe sets of "uncountable reals", but those reals without description are still "mathematical objects". I guess you already agree, so this is for Keean's benefit.

The Set Of Reals Exists.

Those Reals are not mathematical objects. The second order logical axioms that describe the set of reals are mathematical objects. I think this confusion that somehow being able to describe/imagine something makes it real is at the heart of the problem. Real numbers simply do not exist. You can give examples of specific irrational numbers that belong to other countable sets (power series expansions). But the result of the second order description of reals is simply counting how many angels can dance on the head of a pin. Can you write down a proof of the existence of "a real number"?

Half by mass or volume?

I probably should have phrased that as 'a large part' rather than 'half'. I didn't mean it objectively.

re errors in his reasoning;

It'll be more interesting when you show some. :-P

Over the years we've pointed

Over the years we've pointed out several; admittedly, they're very spread out over time (and therefore hard to collect) because we don't often get a clear glimpse of what he's doing. The most recent was that egregious conceptual mistake about completed infinities.

re Over the years we've pointed out several;

Over the years we've pointed out several [significant errors]

I don't think so. I do agree that we thought we did that.

The most recent was that egregious conceptual mistake about completed infinities.

I haven't seen you convincingly point out a mistake in that area.

Please do so if you can.

When you wrote about that earlier, I was left with the impression that you were missing a kind of dualism of atemporal and temporal semantics, both of which "pun" in the same formal system.

I don't think so. I do agree

I don't think so. I do agree that we thought we did that.

I believe that you don't think so. But the demonstrations of error weren't problematic; and since then, the one attempt you made to argue specifics was just plain wrong, so the preponderance of evidence suggests you've simply let your desire to believe in him get the better of you. (No, I'm not retracting my intent to read his paper. I still hope to.)

I haven't seen you convincingly point out a mistake in that area.

I've pointed it out. You didn't see it, and so weren't convinced, and I have doubts that my pointing it out again is going to be useful since I'm not sure what went wrong before. However, I'm willing to try. Once more, anyway; unless we can keep the ball in the air with new questions or something, there must be a point of diminishing returns.

The essence of Hewitt's reasoning on this point, as I understand your explanation of it (speaking of communication breakdowns), is that you can specify the creation of an actor that nondeterministically spits out bits, and then uniquely associate that actor with a particular infinite sequence of bits that it spits out (representing, we may assume for convenience, a real number in the interval from zero to one). The unique association is supposed to justify identifying the actor with the real number, which is practically certain to be irrational (the odds of it being rational would be aleph-null over aleph-one).

Except that there is no meaningful sense in which the actor can be identified with the real number. You can't replace the one with the other in a representation, because you can't represent the real number except by gesturing to that particular actor and saying 'the real number whose representation is spit out by that actor', which isn't replacing the actor at all.

There isn't even any useful sense in which the entire infinite sequence is contained in the actor at any given moment that we look at it, because at that moment it has only spit out a finite number of those bits, and the definition of what it's going to do explicitly does not determine what it'll do. So the real number ostensibly associated with the actor does not, in any meaningful way, exist.

You can conjecture a world line for the actor spitting out the entire sequence, but there are a couple of problems with that: on one hand, the probability that this actor would not be destroyed by something over an infinite period of time is, well, about the same as the probability that a random real between zero and one would be rational; and on the other hand, if such a completed world line exists it has no interaction with the program beyond what the incompleted form of the actor would have, so the association is, literally, not meaningful.

There isn't even any useful

There isn't even any useful sense in which the entire infinite sequence is contained in the actor at any given moment that we look at it, because at that moment it has only spit out a finite number of those bits, and the definition of what it's going to do explicitly does not determine what it'll do.

I've seen this objection a few times, but I'm not sure how this definition of the reals is appreciably different from any other constructive definition of the reals.

I'm not sure how this

I'm not sure how this definition of the reals is appreciably different from any other constructive definition of the reals.

It's not constructive. You couldn't do it more than once and get the same answer; that's what "nondeterministic" means; the only way to reproduce it is to record what you got the first time.

re I'm not sure how this

It's not constructive. You couldn't do it more than once and get the same answer; that's what "nondeterministic" means; the only way to reproduce it is to record what you got the first time.

Ok, at this point you are not talking about the reals at all. You are talking about the constructible reals which is but a subset.

The actor that produces a lazy infinite list of bits, with no tail of all 0s or all 1s, is a kind of effective version of a choice axiom for non-dyadic real fractions.

Shutt: real numbers and axioms

Except that there is no meaningful sense in which the actor can be identified with the real number.

That's incorrect. You can define (programmatically, even) the algebraic structure of those infinite actor outputs. These definitions are usefully isomorphic to every other definition of reals. By analogy, consider systems like Emacs calc, Maxima, and Wolfram's-thing-whatever-its-called.

You can define

You can define (programmatically, even) the algebraic structure of those infinite actor outputs.

Programmatically there is no algebraic structure of the infinite actor outputs; only finite prefixes can ever be used by a program, and those are uncorrelated with what comes after them.

re programmatically there is no algebraic structure

Programmatically there is no algebraic structure

Your confidence that arbitrary precision maths libraries can not exist is interesting but fails an empirical check.

only finite prefixes can ever be used by a program,

Nothing I have said anywhere contradicts that very obvious fact.

Progress?

We may be making progress.

only finite prefixes can ever be used by a program,
Nothing I have said anywhere contradicts that very obvious fact.

That you think that obvious is the first encouraging thing I've heard in all this. That you don't see the relevance to what you're saying is only ordinarily frustrating (there's plenty of frustration to go around, here). But there's an interesting anomaly in your other remark that might help.

Programmatically there is no algebraic structure
Your confidence that arbitrary precision maths libraries can not exist is interesting but fails an empirical check.

You've misquoted me, in a way that could be quite useful by providing a clue into where we're failing to connect. I did not say "Programmatically there is no algebraic structure"; I said (emphasis added) "Programmatically there is no algebraic structure of the infinite actor outputs". Do you see the difference? Each finite prefix of an actor output has, programmatically, an algebraic structure; off hand I can't imagine how anyone would disagree with that. But the infinite output of these actors does not have, programmatically, an algebraic structure (for the twin reasons I named in the remainder of that sentence).

banish metaphysics (re Progress?)

Certain effective aspects of programmatically-reified actor computations can be correctly, mathematically modelled by the classical real numbers.

I said (emphasis added) "Programmatically there is no algebraic structure of the infinite actor outputs". Do you see the difference?

dominating LtU

Another thing in this area is that I don't think Hewitt is really spamming or dominating LtU so much as other people have so little of any interest to say. PLT as a field of research is mostly in a moribund state.

I also think there's something not quite right about younger people complaining he isn't explaining things in ways they can just grasp without much effort.

respectfully

> complaining he isn't explaining things in ways they can just grasp without much effort.

For me, that (what I see as poor communication style) would be fine on e.g. his own blog. Or yours. Or heck even mine. I do not want Hewitt's work or words to disappear in toto. I would very much enjoy and appreciate it if (he or) you or anybody else who has the ability to be better at communicating did start threads where they clarify what he is getting at.

But I do not see the way the communication has been practiced to date as being constructive for, or respectful of, LtU. Hence my attempt to point my finger at the moon, and ask everybody to voluntarily chip in.

All this is of course my personal $0.02 as somebody who has been the un/intentional trouble maker in many places before; takes one to know one; etc.

raould, in the news biz we have a slogan

I guess it was Justice Brandeis who gets paraphrased a lot as "the best cure for bad speech is more free speech".

My unsolicited advice:

Post interesting things.

Make interesting comments.

Avoid trying to rally the community to single out and ostracize someone you wish to repress.

There is no ostracizing

There is no ostracizing there, just a request to keep topics on topic. Not every post needs to be drowned in 90% actors & inconsistency robustness. Even if the content were good quality, it would still be inappropriate to spread it to every post on the site.

Fighting in the war room

Maybe we should start a conversation about how a conversation about Actors flooded the conversation about flooding conversations with conversations about Actors.

To be fair, it hasn't flooded this conversation nearly as much as the conversation about foundations of mathematics has.

Maybe we should start a

Maybe we should start a conversation about how a conversation about Actors flooded the conversation about flooding conversations with conversations about Actors.

No need; this conversation provides its own self-proof.

on the one hand

i find the irony actually nigh too much to bear. it is just utterly stunning. not only due to the people i'd expect it from, but also from the people who are feeding the trolls in places where i was hoping people would learn not to feed them.

maybe the core problem is that drupal makes it hard for people to start a whole new thread? (or that people are lazy?)

on the other hand, i hope that however much this thread gets such things, it will at least mean that other threads are not.

Sorry

I was actually mindful of this when I posted on this thread, and almost didn't post for this reason. I've otherwise kept my engagements in the discussion limited to threads Hewitt started, so they didn't feel OT. My rationalization for posting here was basically the last thing you said: at least I'm not derailing some thread not originally about Hewitt. But I'll refrain from any more posts in this thread. I also feel bad about the fact that what I love most about LtU are the very interesting research papers that people post about, and the on-topic discussion that follows, but I've limited all my engagement to this Hewitt stuff.

Fwiw

Fwiw, I'd been thinking it's consistent with the purpose of this thread to use it as a contained place to try to clear up some Hewitt-induced misapprehensions, because clearing up those is probably the single most effective measure that could be taken to reduce Hewitt's ability to destroy other threads.

I apologize for any troll-feeding

On reflection, I probably am one of the troll-feeders. If I have been feeding trolls, let me know (even privately) and I'll stop. There's a fine line between troll-feeding and giving people yet another chance (something which I've been doing at times).

Not just troll feeding

I'm pretty sure Tom is not a troll, but some of these conversations have been going on for far too long with seemingly little progress made so I've decided to stop participating.

Not Tom

To be sure, I didn't talk about my answers to Tom. Otherwise, point taken.

re Tom

What I find particularly frustrating is that it seems like very few people who insult Hewitt's work or Hewitt have made a serious attempt to try to understand it. A decent number look at it briefly and decide it is too confusing or hard to follow and then go into attack mode. Then, on LtU, they act as if it is Hewitt's (or someone else's) duty to examine their confused statements, correct them, and keep doing that until the commenters have "read the papers without actually looking at them".

That is just not the way that scientific, or mathematical, or even rigorous engineering discourse works.

I should like nothing better than to find on LtU others who would read the materials together with me, acting as a kind of reading group that tries to help one another sort out the meaning.

I find nothing of the sort, though.

Rigorous math

That is just not the way that scientific, or mathematical, or even rigorous engineering discourse works.

Rigorous mathematics is definition, theorem, proof subjected to peer review. AFAIK, the last article Hewitt posted has either not undergone or not passed peer review and to my eye does not meet the standard for rigorous mathematics. I still haven't seen a complete presentation of the rules of Direct Logic and I don't think you can piece it together from what Hewitt's referenced here.

Reaction from LtU members has varied, but I think it's reasonable for people who can't understand his arXiv or HAL article links to complain about the way they have been presented. Particularly when the responses you get from him often seem evasive or non-responsive to questions asked. It's very frustrating!

Despite all of this, I have personally read through most? of the material he's posted here. Some of it several times. My opinion is still that it has problems. I think the characterization of Gödel is off in ways I've described at length: pinning down the meaning of the turnstile seems key. Other issues: he doesn't seem to appreciate that first-order semantics are just a tool in a mathematician's toolbox for understanding a formal system, the construction of uncountable sentences still seems like a gimmick to me, and the idea of inconsistency robust logic seems like a bad idea to me or at best something I'm just not interested in.

All of this is just my opinion, and I don't have any interest in insulting Hewitt. When you indicate that you thought there was something important there, I gave it a hard second look. But I just can't see it and so I'm moving on. Again, I'll keep reading what you write, so you could one day convince me, but I'm pretty skeptical at this point.

Researching ├

There is still more work to be done Researching ├.

By a formalized proof, Mathematics (├) proves its own consistency. However, the proof does so without regard to the content of Mathematics. For example, Mathematics includes the Peano/Dedekind categorical axiomatization of the natural numbers. The consistency proof doesn't lead to any technical problems as long as there are no inconsistencies, e.g.,
* the Peano/Dedekind categorical axiomatization doesn't infer any contradictions
* there are no "self-referential" propositions in Mathematics

What is mathematics?

Claiming that mathematics proves its own consistency means that it proves there is no proof of false in mathematics. DirectLogic has a turnstile in its notation, but how do we know it means "proves in mathematics"? You've stated the importance of the Peano/Dedekind axioms categorically defining the naturals, but is it not just as important that the rules of DirectLogic categorically define provability in mathematics? What constitutes a valid proof in mathematics?

What is known about Mathematics and what can be known?

Classical Direct Logic is a formalization of Mathematics, e.g., Church's paradox etc. If more is known about Mathematics than what is currently formalized in Classical Direct Logic, then it should be added.

By definition, Mathematics is consistent if and only if a contradiction cannot be inferred. The definition of consistency can be formalized in Classical Direct Logic and used to prove the consistency of Mathematics.

It is possible to define and prove categoricity of the Peano/Dedekind axiomatization because it is not as powerful as Mathematics. It is not known how to define categoricity for Mathematics.

Where does it end?

Classical Direct Logic is a formalization of Mathematics, e.g., Church's paradox etc. If more is known about Mathematics than what is currently formalized in Classical Direct Logic, then it should be added.

This plan can never be finished. If you give me a formal system for DirectLogic that supposedly formalizes mathematics, I can mechanically produce a new sentence that DirectLogic doesn't prove that is true if DirectLogic is consistent. A mathematician who understands this can never formalize his own understanding of mathematics.

Formalising Mathematics is Done.

I think the mistake is to think you can axiomatically formalise mathematics. Axioms are mathematical objects and a set of axioms can describe a system of logic, but there are transformations on axioms that can only be done by a mathematician. However all these transformations can be encoded as algorithms. So we can formalise mathematics as data-structures and algorithms. Some algorithms are logic interpreters for first or second order logic, and some data structures are axioms in those logics.

We can understand the algorithm that implements the logic as a formal specification of that logic, and data structures of the format accepted by that algorithm as the mathematical objects in that logic.
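As a minimal sketch of that idea (axioms as data structures, an interpreter as the algorithm), here is a toy forward-chaining closure under modus ponens in Python; the rule format and the example axioms are made up for illustration and are not Direct Logic or any real proof system:

    # Axioms and facts as plain data; a tiny "interpreter" that closes a set
    # of facts under modus ponens over implication strings of the form "a->b".

    axioms = {"p", "p->q", "q->r"}   # atomic facts and implications, as strings

    def modus_ponens(facts):
        # One pass of modus ponens.
        derived = set(facts)
        for f in facts:
            if "->" in f:
                a, b = f.split("->", 1)
                if a in facts:
                    derived.add(b)
        return derived

    def close(facts):
        # Iterate to a fixed point; terminates because the vocabulary is finite.
        while True:
            nxt = modus_ponens(facts)
            if nxt == facts:
                return facts
            facts = nxt

    print(sorted(close(axioms)))   # ['p', 'p->q', 'q', 'q->r', 'r']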

No

For every algorithm, there is an axiomatization that derives the same things that the algorithm computes, and vice versa. This idea was considered as a way around Gödel long ago and found to be a dead end.

Godel is irrelevant

I am not really saying anything controversial, only that all mathematics can be written down, and that mathematicians manipulate the written descriptions using algorithms, and also define their meanings using algorithms.

This was what I found controversial

I think the mistake is to think you can axiomatically formalise mathematics.

Mathematicians

Well Turing machines are more powerful than axiomatic systems due to the existence of memory and time. Turing extended the language of mathematics, and did things not previously possible. One could argue that in the past mathematicians interpreted axioms and gave them meaning, but now axioms can be given meaning by their interpretation on a Turing machine. Can axioms give meaning to axioms?

Axiom systems as machines

[...] but now axioms can be given meaning by their interpretation on a Turing machine. Can axioms give meaning to axioms?

Axiom systems strike me as being pretty much the same thing as nondeterministic machines. I say they're nondeterministic because at every time step, they let a mathematician choose among specific ways to rewrite the tape.

So we have memory, time, and even some I/O interaction with the mathematician. The question remains whether the machine's design is suitable for your purposes, but I think this can be explored in a way that's incremental, with cross-pollination between machines and formalisms.
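Here is a toy rendering of that analogy in Python, where the rules determine which single steps are available from the current state and an external chooser (standing in for the mathematician) picks one; the rules are invented purely for illustration:

    # "Axiom system as nondeterministic machine": available_steps lists the
    # possible next moves (the nondeterminism), and `choose` picks one of them.

    rules = {
        "r1": ({"p"}, "q"),        # from p, derive q
        "r2": ({"p"}, "s"),        # from p, derive s
        "r3": ({"q", "s"}, "t"),   # from q and s, derive t
    }

    def available_steps(facts):
        # All rule applications possible from the current state.
        return [(name, concl) for name, (premises, concl) in rules.items()
                if premises <= facts and concl not in facts]

    def run(facts, choose):
        # Repeatedly let `choose` pick among available steps until none remain.
        while True:
            steps = available_steps(facts)
            if not steps:
                return facts
            _, concl = choose(steps)
            facts = facts | {concl}

    print(run({"p"}, choose=lambda steps: steps[0]))   # derives q, s, then t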

Axioms and Time

Do axiomatic systems include time? No axiomatic description of logic I have seen (in my limited experience) has a temporal element (nor memory). Logic assumes propositions are true for all time, and something is either provable or it is not provable.

A computation system can incorporate the idea that no proof of P has been found at time T, but a proof has been found at time T+1.

"Logic" is not very restrictive

Not quite.

To be sure, your examples are about the general question: is a formal system as powerful as computation? Or even, is a formal system with axioms and inference rules as powerful?

There's a bunch of temporal logics. Then, Hoare logics (and variants) discuss the meaning of programs. Then, it's been shown that with structural operational semantics you can formalize, with axioms and inference rules, program evaluation with all the features that have been tried. Turing-complete languages can be easily modeled.
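For a concrete, if toy, picture of what formalizing evaluation with rules can look like, here is a minimal small-step sketch in Python for addition expressions; it is my own illustration, not any particular published semantics:

    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    def step(e):
        # One small-step transition, mirroring the usual SOS inference rules.
        if isinstance(e, Add):
            if isinstance(e.left, Num) and isinstance(e.right, Num):
                return Num(e.left.value + e.right.value)   # n1 + n2 -> n
            if isinstance(e.left, Num):
                return Add(e.left, step(e.right))          # reduce on the right
            return Add(step(e.left), e.right)              # reduce on the left
        raise ValueError("no rule applies")

    def evaluate(e):
        while not isinstance(e, Num):
            e = step(e)
        return e.value

    print(evaluate(Add(Num(1), Add(Num(2), Num(3)))))      # 6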

You're right that logic is *usually* about timeless truths, but that's a purely aesthetic choice.

The above does *not* prove much, but as long as you have decidable proof-checking and your proofs are syntax (IIRC), the proof of Gödel's theorems goes through. And everybody but Hewitt agrees that giving up those requirements makes your "logic" of little use.

Looking at it the other way?

Are there any axiomatic systems that cannot be implemented on a Turing machine? Which can you build (easily), therefore which would make the better starting point for constructive mathematics?

Linear logic

Logic assumes propositions are true for all time, and something is either provable or it is not provable.

Have you seen linear logic? The inference rules use up and introduce resources, so the things that are "true" change over time as we follow a proof from start to finish.
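A very informal illustration of the resource reading (my own sketch in Python, not a real linear-logic implementation): treat the context as a multiset, and let the linear implication consume one resource and produce another.

    from collections import Counter

    def apply_lollipop(context, a, b):
        # Use the linear implication a -o b once: consume one `a`, produce one `b`.
        if context[a] == 0:
            raise ValueError("no copy of %r left to consume" % a)
        updated = context.copy()
        updated[a] -= 1
        updated[b] += 1
        return updated

    state = Counter({"coin": 2})
    state = apply_lollipop(state, "coin", "coffee")   # spend a coin, get a coffee
    print(state)   # Counter({'coin': 1, 'coffee': 1})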

Re: linear logic

Yes, I have seen linear logic, and I did not immediately associate it with time. It has elements that can only be used once. It seems it could model time, now that I think about it.

re I think the mistake is to think you can axiomatically formali

I think the mistake is to think you can axiomatically formalise mathematics.

DL formalizes a theory of types including type restrictions on the construction of mathematical propositions.

The definition of DL does not pretend to list a complete set of axioms from which all of math can be mechanically derived.

Defining DL

DL does not formally define DL. You need to implement DL on a Turing machine to formally define it.

Implement turing machines using turtles

DL does not formally define DL. You need to implement DL on a Turing machine to formally define it.

So what you are saying is that I should head to Radio Shack and get a soldering iron, right?

Termination

Look on the bright side, once your soldering actor receives the "all done" message that is guaranteed to arrive eventually you will have proven that DL can be simulated by a Turing Machine, thus resolving many of the issues in this conversation.

Head to Radio Shack

Well that would be one way. You probably don't need to build your own computer; Knuth's approach of defining an imaginary machine is good enough. If you can implement DL in MMIX, that would be a formal specification. In my opinion a formal specification needs to be something that can be executed by a machine; there is too much ambiguity in human language and people's interpretation of things. I don't think DL needs any of the message-order semantics from actors, so at least you won't need a network of Turing machines.

Implement it

Any implementation at all would help.

Formalizing a system in Coq (or Agda) helps a lot nowadays — you can define the system and prove any theorems about it, and readers (almost) only need check the definitions and the theorem statements to see what you're doing.

I find that formalization, while technically hard, tends to be quite instructive also for the authors (just like implementation).

re "this plan can never be finished"

This plan can never be finished. If you give me a formal system for DirectLogic that supposedly formalizes mathematics, I can mechanically produce a new sentence that DirectLogic doesn't prove that is true if DirectLogic is consistent.

If a theory τ (obtained by adding axioms to DL) is closed, then by diagonalization you can prove that there is a true proposition (⊢⊨τΦ) that can not be proved in τ (⊢⊬τΦ).

You can carry out a proof of that in DL.

DL can prove of itself, by contradiction via diagonalization, that it is not a closed theory. As with the self-proof of consistency, the self-proof of openness is not enough to put to bed the question of DL's consistency.

I can mechanically produce a new sentence that DirectLogic doesn't prove that is true if DirectLogic is consistent.

You can do that very easily, in fact. First, define within DL a closed subtheory, τ, then do your construction for that subtheory. (This is laid out starting on page 13 and going pretty much through the end of the main body of the paper.)

How should we interpret the fact that DL lets you do that?

Hewitt:

Information Invariance is a fundamental technical goal of logic consisting of the following:

1. Soundness of inference: information is not increased by inference

2. Completeness of inference: all information that necessarily holds can be inferred.

That a closed mathematical theory T is inferentially undecidable with respect to [an undecidable proposition Ψ in τ] does not mean incompleteness with respect to the information that can be inferred because (by construction) ⊢(⊬T Ψ) and ⊢(⊬T ¬Ψ).

I think I can correctly say also that DL does not attempt to give a set of axioms to which every theory can be reduced. In fact, DL presumes that the "theories of practice" people actually use within mathematics contain many inconsistencies among themselves. Since we want to include all these theories of practice as sub-theories of DL, DL cannot be reduced to a single, universal set of axioms. (DL is open.)

DL does offer a theory of types that includes strict restrictions on how, e.g., propositions can be constructed from earlier constructed objects.

What is "mathematics"?

I still can't figure out what "mathematics" and the naked turnstile are supposed to be. This quote from Hewitt seems to contradict your understanding of DL being open:

Classical Direct Logic is a formalization of Mathematics, e.g., Church's paradox etc. If more is known about Mathematics than what is currently formalized in Classical Direct Logic, then it should be added.

Reasoning about arbitrary other theories doesn't require openness. You just prove implication statements that have the theory as a hypothesis. Pretty much all logics support this.

And in practice, if DL is to serve as a logic for use in computer science, then it will be implemented on a computer and as such will not be open. "Open logic" is a contradiction in terms.

re "what is mathematics?"

I still can't figure out what "mathematics" and the naked turnstile are supposed to be.

Schematically and informally (and prepared to be Hewitt-scolded if I botch this):

A mathematical theory is defined by the definitions of the theory's types, along with axioms.

A theory τ is an extension of a theory ρ if τ is formed from ρ by the addition of new types and/or new axioms.

A reflective theory includes as mathematical objects its own types, propositions, proofs, etc.

A meta-theory is a reflective theory that includes its own extensions as mathematical objects.

A theory, M, is suitable as a foundational theory of mathematics if every mathematical theory we come up with can be usefully rendered as an extension of M.

DL is claimed to be a nice foundational theory of mathematics, especially from the perspective of computer science.

"Open logic" is a contradiction in terms.

I am using "open" to mean a theory in which the proofs can not be computationally enumerated. For example, any consistent theory of the constructable reals must be "open" in this sense.

open/closed/undeclared

Something interesting, btw: Hewitt mentions that group theory could be defined as a DL extension using the group theory axioms along with an axiom that the resulting theory is closed.

That amounts to saying "proofs of group theory can be constructed in the following ways ..... and there are no other proofs of group theory."

DL contains no "there are no other proofs in DL" axiom (afaict).

Tortoise and Achilles, indeed.

Closed

I am using "open" to mean a theory in which the proofs can not be computationally enumerated.

So was I. DL, as implemented by computer, will be closed.

(I'm off for the night.)

coin flips and programs (re DL by computer is closed)

So was I. DL, as implemented by computer, will be closed.

In DL:

If σ:Type, x:σ, and f:Boolean^σ, then f[x]:Proposition.

Any total, computable, function from σ to Boolean gives us a proposition for each object of type σ.

Consider an ActorScript program like:

  GenerateReal.[]:ℕ → Memoize◁ℕ ↦ Boolean ▷.[ [x:ℕ] → (0 either 1) ]

Each call to GenerateReal returns a total computable function from which infinitely many Propositions may be constructed.

A computer program that implements DL can use a return value from GenerateReal to form a proposition but no computer program can enumerate all possible return values of GenerateReal!
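For readers who don't follow the ActorScript notation, here is a rough Python analogue of the memoized coin flip; this is my own approximation of the idea, not Hewitt's code: each call returns a function from naturals to booleans whose bits are chosen at random on first use and then fixed forever by memoization.

    import random

    def generate_real():
        memo = {}
        def bit(n):
            if n not in memo:
                memo[n] = random.choice([False, True])   # the "coin flip"
            return memo[n]
        return bit

    f = generate_real()
    print(f(0) == f(0))   # True: memoization makes repeated queries consistent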

subsets of recursively enumerable sets, again

This comment has no force as an argument that DL cannot be closed. Even if we grant that you've identified a set of proofs of propositions that is not recursively (i.e., computably) enumerable, that would have no bearing on whether the set of all proofs is recursively enumerable or not.

After all, the naturals, for example, are recursively enumerable, but "most" subsets of the naturals are not. If I identify a non-r.e. subset of the naturals, that does not allow me to exclaim "consequently, the naturals can't be r.e.!" This is an error, probably from (incorrectly) transferring the intuition that a subset of a countable set is countable. There is nothing surprising about some subset of an r.e. set failing to be r.e.

If you want to argue that the set of all proofs in DL is not recursively enumerable, you'll need to find a different argument.

re subsets of recursively enumerable sets, again

This comment has no force as an argument that DL cannot be closed.

I'll clarify:

The proof that DL is not closed uses a direct formalization of Church's paradox. That is given in the paper starting on page 13 (which is PDF page 14).

In answer to Matt M., a proof-by-existence that DL-as-implemented-by-computer is not closed, is given via the procedure GenerateReal.

GenerateReal illustrates that a computational coin flip can be used to constructively define an uncountable set together with a constructively valid axiom of choice for that set.

Mathematics is enumerable

The statement you have written does not create anything new; it is a membership test. Given finite symbols from which mathematical objects can be constructed, some of those objects will pass this test, and others not. What you have said is equivalent to claiming that there are infinitely many subtle shades of colour, and therefore the number of sentences in English is not enumerable. However, the number of sentences in English is clearly enumerable irrespective of this argument. Why is this, when you are free to invent words for new colours at any time?

Recorded mathematical results are enumerable

One can in principle enumerate all the theorems that humanity ever proves, presumably.

What GenerateReal shows in terms of denotations of programs is a constructive way to establish an uncountable class of propositions, and an effective way to choose an arbitrary member from that class.

It is thus constructively impossible to enumerate all of the propositions.

Converting computation to mathematical functions

If σ:Type, x:σ, and f:Boolean^σ, then f[x]:Proposition.

Any total, computable, function from σ to Boolean gives us a proposition for each object of type σ.

Hmm, is there really a rule that lets us convert procedures [σ]↦Boolean to mathematical functions Boolean^σ? I imagine this would at least require a totality proof, or even a proof of purity from side effects.

converting procedures to functions

is there really a rule that lets us convert procedures [σ]↦Boolean to mathematical functions Boolean^σ?

Not as a built-in axiom to Classical Direct Logic (afaict). It would be hard to add such an axiom in general since, as you note, we would have to admit only the provably total, complete, referentially transparent procedures.

To convert procedures to functions you could define a (sub-)theory extending DL (and reasoned about in and using DL).

This sub-theory would include axioms that certain functions exist (which abstract the actor procedures defining the functions).

If you turn out to be wrong and your procedure is not referentially transparent, you can derive contradictions in the sub-theory.

If you turn out to be wrong because your procedure diverges, I suppose you could "hang" your implementation of DL.

As an engineering rule of thumb, where possible, you could write the procedure (or configure the actor communication substrate) to return a value quickly or else throw an exception. In this case, if you are wrong that the procedure can always return quickly, a proof that exposes that mistake will again produce a contradiction.

re: coin flips and programs (re DL by computer is closed)

This post will be separated from its parent by a page break with my settings, so let me include the context:

So was I. DL, as implemented by computer, will be closed.

In DL:

If σ:Type, x:σ, and f:Boolean^σ, then f[x]:Proposition.

Any total, computable, function from σ to Boolean gives us a proposition for each object of type σ.

Consider an ActorScript program like:

GenerateReal.[]:ℕ → Memoize◁ℕ ↦ Boolean ▷.[ [x:ℕ] → (0 either 1) ]
Each call to GenerateReal returns a total computable function from which infinitely many Propositions may be constructed.

A computer program that implements DL can use a return value from GenerateReal to form a proposition but no computer program can enumerate all possible return values of GenerateReal!

Honestly, it's hard for me to take this line of argument seriously. This is like an argument that the proofs of DirectLogic aren't enumerable because DirectLogic has Unicode or SoundBlaster support. It's just obviously a red herring. We could trudge through all of the relevant statements about Gödel's theorem updating them to account for terminology that allows propositions with arbitrary embedded Actors and, after a whole lot of work, we'd eventually find a new way to phrase the statement that refers to what's actually finitely provable by DirectLogic, the computer program. But I have zero interest in doing this.

DL types vs other objects

We could trudge through all of the relevant statements about Gödel's theorem updating them to account for terminology that allows propositions with arbitrary embedded Actors and, after a whole lot of work, we'd eventually find a new way to phrase the statement that refers to what's actually finitely provable by DirectLogic, the computer program.

You can define a sub-theory in which you define some class of provably total, deterministic actors. You could add an axiom that no terms exist which do not correspond to such actors.

In that case, your proof could go through, in DL, about that sub-theory.

The theorems of Mathematics are *not* computationally enumerable

By the argument of Church's paradox, the theorems of Mathematics are not computationally enumerable.

Limited Symbols

We can only write a limited range of symbols, so all proofs of a given length are enumerable. The most important proofs are the shortest (and most elegant), and proofs become less useful the longer they get. Therefore the utility of a proof tends to zero, and there are no useful infinitely long proofs. So the set of useful proofs is enumerable.

Further, I just don't agree that real numbers exist. I have never seen one written down, and I never will. No proof will ever be written that relies on one.

Proofs of finite length are uncountable

Proofs of finite length are uncountable because there are uncountably many Actors. Of course, it is computationally decidable whether a Proof is correct.

It will always be possible to discover new axioms for Mathematics because theorems of Mathematics are not computationally enumerable.

Cantor famously proved that the real numbers are uncountable.

Actors are Imaginary

If there are uncountably many actors, then I have to conclude that actors do not exist. There will be some other set, let's call it "realisable actors", that will be much more useful than the imaginary notion of actors.

Cantor proved that real numbers do not exist :-)

If you have proved you cannot count the number of fairies dancing on the head of a pin, what does it matter if fairies do not exist?

Don't add more nonstandard terminology

According to mainstream mathematics, reals do exist and are uncountable. IIUC, the set of actors is as big as the set of infinite bitstreams, which is as big as the reals from 0 to 1.
Hard-core constructivists might agree with you, but there are constructive reals.

Now, while all of that can be an interesting matter of discussion, that's settled enough, so let's please not mix the discussions, it wouldn't help.

Still no.

"the set of actors is as big as the set of infinite bitstreams, which is as big as the reals from 0 to 1."

If you restrict actors to have no inputs, only outputs, and you restrict them to not have a "random" instruction, then the set of outputs has no more elements than the set of programs (because some programs will have the same outputs), which is at most countably infinite (and also at least countably infinite, since we can easily create countably many distinct programs).

What allowing a random instruction gives you is not an uncountable number of actors; what it does is break the correspondence between actors and their outputs.

Whether the outputs are "uncountable" doesn't really matter then.

the correspondence of actors to outputs

As a kind of "philosophical" note:

In a foundational theory, one that can be specialized for various needs, what do you gain by prohibiting abstracting terms as (run-time) actors?

We can write programs that do rigorous, automated mathematics using terms of that kind. But you want a foundation for math that prohibits this?

I have no idea what you're saying but I do see

that the argument was that referring to "an actor" is either referring to a program and its state, which is a countable infinity, or it is referring to a completed infinite output (which can't be addressed or related to a program and its state), or you get confused between the two and start talking about "world lines" like you did before.

re I have no idea what you're saying

I have no idea what you're saying

You could ask.

"an actor" is either referring to a program and it's state, which is a countable infinity, or it is referring to a completed infinite output (which can't be addressed or related to a program and its state) or you get confused between the two and start talking about "world lines" like you did before.

If an actor can make coin flips then its state is distinguished from the state of other similar actors by the fact that they occupy distinct regions of space-time. (If they occupied exactly the same region of space-time then they would be the same actor with the same coin-flip outcomes.)

i have no idea what you're saying... you get confused

<bristle>You got the first part right.</bristle>

"occupy[ing] distinct regions of space-time." hasn't been part

of set theory in the past.

You would have to formalize that.

space-time and set theory

"occupy[ing] distinct regions of space-time." hasn't been part of set theory in the past.

You would have to formalize that.

Not so. If a computing system that "does DL" has an actor that it wants to reify in mathematics as a function, it can introduce a theory in which the existence of the reified function is an axiom.

If the programmer is wrong and the reified actor doesn't actually implement a total, computable function then minimally, the resulting theory will contain contradictions and, potentially, the bug will hang that instance of the DL theorem prover.

The knowledge about how coin-flips and world-lines interact is for the programmer who is physically constructing a computing system. Internally to DL, that knowledge isn't necessary.

There is obviously a lot here I know nothing about but also

a lot here which is strange.

Doing metareasoning about algorithms that contain unspecifiable real numbers isn't normal math.

... I guess no one is going to explain it all, and I can't read mathematical papers, so I guess I should just give up.

"reify in mathematics as a function" don't know what this means

"If a computing system that 'does DL'" don't know what this means

"reified actor doesn't actually implement a total, computable function then minimally, the resulting theory will contain contradictions and, potentially, the bug will hang that instance of the DL theorem prover." I had no idea... nor do I know why

"The knowledge about how coin-flips and world-lines interact is for the programmer who is physically constructing a computing system."

Uhm but you were basing reasoning and classifications on "world lines"!!!

good questions (there is obviously a lot)

"If a computing system that 'does DL'" don't know what this means

As an example, a program might define some data structures for representing (some) propositions, take some propositions as assumptions and/or goals, and try to prove something by forward-or-backward chaining (according to Classical Direct Logic, etc.).

Maybe the program is trying to prove that Socrates is mortal and has a negative bank balance.

"reify in mathematics as a function" don't know what this means

Concretely: a program might hold a reference to a procedure actor and from that reference, build new propositions that treat that actor as an oracle for computing the values of some DL function.

Conceptually, in DL theory, there is an abstract proposition containing use of an abstract function.

The concrete computer program is creating data structures to represent the proposition and function.

The data structure for a DL function might happen to use (as an internal implementation detail) an actor procedure that can compute values of the function for given arguments.

"abstraction" and "reification" are complements, in this usage. Viewed abstractly, a DL function term could be a procedure actor. Conversely, a procedure actor can in some cases be reified as a DL term.

"reified actor doesn't actually implement a total, computable function then minimally, the resulting theory will contain contradictions and, potentially, the bug will hang that instance of the DL theorem prover." I had no idea... nor do I know why

For a procedure actor to actually compute values of a DL function, it must return a value for every possible argument (it must be total). It must also always return the same value for the same argument.

A DL procedure actor isn't guaranteed to have those properties. It might hang rather than return a value. On different calls, it might return different values for the same arguments.

If a program reifies a bogus DL procedure as a function, something will break. Either we can then prove contradictions, using the (bogus) "function", or our automated theorem prover will just hang at some point.

"The knowledge about how coin-flips and world-lines interact is for the programmer who is physically constructing a computing system."

Uhm but you were basing reasoning and classifications on "world lines"!!!

A programmer can accurately believe that his memoized-flip computes a total, computable function of type Boolean^ℕ without needing an elaborate mathematical formalization of his assumptions about physics.

When he reifies the memoized-flip actor as DL function, he can simply make it a logical assumption (for the purpose of proof-making) that the resulting function is in fact really a function.

p.s.: Above I'm describing things as if there is no need to formalize reification of actors to functions. And there isn't in some practical scenarios. There can, however, be (sub-)theories in DL that reason about the mathematics of when it is OK and when not to reify procedure actors as functions.

Look, what does it break to be more precise?

Say this:
1) An actor is not its output.
2) the set of actors that refer to other actors is at most a countable infinity. Let's say I call those actors0
3) for theoretical reasons, it's useful to reason, when possible, about completed infinities, including the output of actors.
4) actors that have a random instruction can produce a continuum of completed infinite outputs; let's say I call those actors0'
5) actors that can refer to the completed infinite results of other actors are another kind of actor. If they refer to actors0, then their results can be compressed to the actual actor and its state, and they are still countable, and compressible to actors0
6) if they refer to the output of actors0' then they're not countable; call them actors1

I got a phone call to answer, but I think I made my point.

re Look, what does it break to be more precise?

I got a phone call to answer, but I think I made my point.

You're almost there, afaict.

Pretend that DL was designed by someone who thoroughly understands all those hairs you want to split. Suppose that person wanted a mathematical foundation that is sensitive to all those hair splits, and still general enough to be a foundation for "mathematics (in general, especially for computer science)".

Assume DL is such a system and then try to figure out why it works.

If you find a place where it fails at that, that would be interesting.

Sigh, so what is the point here?

You've created hypothetical algorithms that refer to real numbers that no one can specify.

If you could specify a specific real number in any way, then it's not a member of the set that can't be specified by a countable infinity of specifications.

...
So you've hypothesized these algorithms (which can never be realized) as mathematical objects. Call them hyperalgorithms. Not one of them is an object that a mathematician could create individually; not one is an object that he could specify individually, because if he could, it wouldn't be a member of this set.

And in some way this invalidates Gödel (beyond me), even though one could easily filter your statements down to a subsystem of non-hyperalgorithms that mathematicians actually work with.

[edit] You know, if Hewitt talked about these things in an unambiguous language, distinguishing the very different things I delineate, we wouldn't be wasting time in this long thread. I suggest that you stop defending the confusing, ambiguous language he uses.

Ambiguity of talking about special objects as if they weren't

Part of the damn problem is that Hewitt refers to programs that communicate with an unlimited number of other programs as "actors".

Then you take very special objects (programs that have no inputs, put out a single infinite string, and use a "random" instruction) and equate the output, in this very specialized circumstance, with "the actor".

Gah. There are nonsensical religious dogmas that are less confused!!!

[edit] The way these objects relate to less limited objects needs to be made clear. If all sets of outputs are "an actor", what about inputs? What about code: if two "actors" have different code but the same output on some hypothetical occasion, are they "the same actor"? What about ambiguity of message order?

Each CreateReal.[ ] is one of uncountably many Actors

Concurrent computation (inherently indeterminate) can be exponentially faster than the parallel nondeterministic lambda calculus.

Indeterminism is an inherent part of this performance advantage, which has become critical because of many-core distributed computer systems.

Thus, each CreateReal.[ ] is one of uncountably many Actors.

See Inconsistency Robustness in Foundations.

Example please

If Reals exist, please reply with an example.

In order for actors to be uncountable

then they have to refer to reals that cannot be referred to in any finite format, not even as a metafunction that generates them.

So any real that can be specified, by definition, is not a member of that set.

The random instruction is a stupid trick in a way, because it lets us sort of specify one anyway.

One could argue that letting a random function run for eternity will not specify any specific real, that the probability is an infinitesimal.

So we need an infinity larger than infinity.

I want to say something like "stupid pet tricks" but maybe I'm wrong. i = sqrt(-1) might seem a stupid trick to someone. We're interpreting "random" in some totally magical way. Is that useful? I have no idea.

Types and objects (re Actors are Imaginary)

In Direct Logic, ℕ is a type, not a set.

We can construct the type ℕ, and we can construct each natural number 0, 1, 2, ....

We have no way to construct "the set of all natural numbers".

The type ℕ is enumerable because we can define a total, computable function EnumerateNaturals : ℕ ↦ ℕ.

(The identity function will do for a definition of EnumerateNaturals, but there are other possibilities.)

Similarly, we could constructively define a type NaturalMultiplesOfThree, and we can construct the objects of that type: 0, 3, 6, 9, ....

It so happens we can also enumerate the objects of that type: EnumerateMultiplesOfThree : ℕ ↦ NaturalMultiplesOfThree.
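
For concreteness, a minimal sketch in Haskell (DL's own notation differs; the lowercase names here are just for illustration):

  -- The identity function enumerates the naturals.
  enumerateNaturals :: Integer -> Integer
  enumerateNaturals n = n

  -- A total, computable enumeration of the multiples of three.
  enumerateMultiplesOfThree :: Integer -> Integer
  enumerateMultiplesOfThree n = 3 * n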

We can construct a type for functions from naturals to booleans: ℕ ↦ Boolean.

We can not enumerate the objects of that type (as a diagonalization argument shows). At a minimum, that means that the type is not "computationally enumerable".

Of course, we can construct at least some objects of the type. For example, we can construct any total, computable function from naturals to booleans. (A familiar diagonalization argument proves that these are not computably enumerable.)
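
That diagonalization step can be sketched directly, again in Haskell with names of my own choosing. If some hypothetical enumerate listed every total, computable function from naturals to booleans, the function below would itself be total and computable, yet it differs from enumerate k at argument k, so it can appear nowhere in the list:

  -- Given any purported enumeration of total functions, build a total
  -- function that disagrees with the k-th entry at argument k.
  diagonal :: (Integer -> (Integer -> Bool)) -> (Integer -> Bool)
  diagonal enumerate n = not (enumerate n n)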

In DL, we also say that the type ℕ ↦ Boolean is not well founded. Specifically, we can define a function that is syntactically of the required type (ℕ ↦ Boolean), yet which is not total.

We can also define functions of type ℕ ↦ Boolean which are total and computable, but which are not provably total and computable.

Finally, if we allow a constructive definition of a function to include a coin flip, we get something akin to an axiom of choice for the type ℕ ↦ Boolean.

In particular, we can (as needed) choose (by construction) any countable subset of ℕ ↦ Boolean, although using this method we cannot in general prove equality of two randomly chosen functions of type ℕ ↦ Boolean.

Of course, with a suitable definition of equality, ℕ ↦ Boolean is isomorphic to ℝ.

To sum up:

The type ℕ ↦ Boolean, a syntactic type, exists.

The elements of ℕ ↦ Boolean are not enumerable (indeed, they are uncountable).

If coin flips are allowed, we can construct random, countable choices of objects of type ℕ ↦ Boolean.

Types of uncountable functions.

I don't think there is any argument that you can have a type for (N -> Boolean) that is uncountable. But that does not mean the imagined functions actually exist.

If you have an untyped language, you can only write a countable number of functions. Types do not magically create more functions than you can write in an untyped language. So (N -> Boolean) is a subset of the untyped programs (which are countable), and therefore must also be countable, by the following algorithm:

- enumerate all untyped programs.
- infer types for each program in turn.
- test each for membership of (N -> Boolean).

We have just enumerated all members of (N -> Boolean).

choice (re types of uncountable functions)

I don't think there is any argument that you can have a type for (N -> Boolean) that is uncountable. But that does not mean the imagined functions actually exist.

Historically, mathematicians distinguished "constructable" values, on the one hand, from unconstructable values on the other.

Constructable values, informally speaking, are ones we can work out using pen and paper starting from a few seed values and following some list of rules.

Unconstructable values are alleged mathematical objects that can't be constructed.

The classical Axiom of Choice (of ZFC) lets us take a mathematical step of declaring, by axiomatic assumption, some variable x to be an arbitrary member of a non-empty set.

We can define x that way even if that means x might stand for some "unconstructable" value.

Intuitionist / constructivist math, broadly speaking, rejects that move -- partly because of the kinds of realist arguments you are making (and partly in hopes of avoiding paradoxes).

Yet:

If we assume that the either operator is "constructive", that traditional landscape changes. We can now randomly choose certain values that were formerly considered "unconstructable".

Does that mean that all of the real numbers "exist" in some philosophical sense? Personally, I don't know or care.

I only know that in computing systems we can randomly pick countable lists of formerly unconstructable values without restriction.

If you have an untyped language, you can only write a countable number of functions.

If you have a programming language, you can only write a countable number of procedure definitions in that language.

A single procedure definition can formally denote an uncountable number of possible return values.

Finite State Machines.

The stack space available for the return value is finite, so the return values are countable as bitstreams.

In a real machine no values are normally returned at all; instead the FSM is left in a different state. As the registers and memory are finite, the machine is a finite-state machine. No matter what fancy language or type-system shenanigans you try, the underlying machine is still an FSM.

So any operator implemented in such an FSM does not magically make unconstructable Reals constructable, not even a nondeterministic random generator (which might exist as digitized noise from a quantum tunnelling diode, already used in hardware RNGs without causing a philosophical crisis).

re FSMs

You are right that people reason about programs in terms of universal machines, but in practice people can only build approximations of these.

When this is done carefully, as in constructive math like Hewitt's, the countable infinities can be interpreted as processes that compute successive approximations of limits.

Interpretation is not reality

You cannot interpret processes that compute successive approximations of limits as anything other than processes that compute successive approximations of limits.

They are never infinite. Yes, I can pass around the process instead of a value, so that I can use it in arbitrary-precision maths. But it's not the same thing as the value itself.

In a lazy language like Haskell, these things are commonplace, and don't even need to involve Reals. A classic example is a Fibonacci series generator function that outputs an infinite list of Fibonacci numbers.
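
For reference, that classic example is a one-liner in standard Haskell (nothing beyond the Prelude is assumed):

  -- A lazily evaluated, conceptually infinite list of Fibonacci numbers.
  fibs :: [Integer]
  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

  -- Any finite run only ever forces a finite prefix:
  -- take 10 fibs  ==>  [0,1,1,2,3,5,8,13,21,34]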

Arbitrary precision is not the same thing as infinite precision.

no no no no no!

The set of all finite-length strings is countable, as long as the symbols they're made from come from a finite set! Hell, even if the alphabet of symbols is countably infinite, the set of finite strings over it is still countable!

Countable doesn't mean that you can constructively enumerate the elements while excluding invalid ones and duplicates. If you can count up a set that INCLUDES them all, then they're countable even if some elements aren't valid.

The proof that the rationals are countable is a construction that includes duplicates.

Look, if you have a finite alphabet of symbols, then an integer in the same base as the number of symbols in the alphabet can be read digit by digit as a string of those symbols. Some of those strings will be valid sentences. If I count up, then I have a countable sequence of all strings. Exclude the invalid strings, and I have a countable sequence of all sentences.

Now if the symbols are from a countably infinite set, then you can still enumerate them all by interleaving the sequences of strings from larger and larger alphabets.

Say you define the power P of a string as "size of alphabet" ^ "number of symbols in the string". Then for any power P, there is a finite position X in the sequence by which all strings of that power have already been enumerated.

That's countable.
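
A sketch of that enumeration for a finite alphabet, in Haskell (the function names are mine; the countably infinite alphabet case needs the interleaving described above): list all finite strings shortest-first, so every finite string shows up at some finite position.

  -- All finite strings over a given alphabet, shortest first.
  allStrings :: [a] -> [[a]]
  allStrings alphabet = concatMap stringsOfLength [0 ..]
    where
      stringsOfLength 0 = [[]]
      stringsOfLength n = [ c : s | c <- alphabet, s <- stringsOfLength (n - 1) ]

  -- take 7 (allStrings "ab")  ==>  ["","a","b","aa","ab","ba","bb"]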

The reason that's not an uncountable power set is that we're excluding infinite strings. The usual power-set and continuum constructions become countable sets if we don't allow infinite strings. Counting TO infinity but never completing doesn't get you uncountable.

Let me add two points:
1) I'm surprised some people here don't get basic Cantor. This is the level of math that hobbyists on the internet love because it's so easy. This is basic-level stuff you should get in a low-level class.
2) It all sounds useless to me. It comes down to whether, when we construct sets from infinite processes, we allow those processes to complete. BUT we've defined "countable" as "never completes", so the fact that we DO allow completed infinities in some contexts doesn't change the fact that when we start with the integers, we DON'T allow them to complete, because we say "infinity isn't an integer"...

Basically, the usual way of doing things isn't really "we can complete infinities" but rather "natural numbers are typed to not be complete, but other infinities are"...

In the end we could have typed things differently and still come up with a consistent system; it would just be a DIFFERENT consistent system where even the natural numbers have the order of the continuum and there's more than one positive infinity (infinity being the part of the natural numbers that has the order of the continuum).

Symbols not countable

The set of all finite-length strings is countable, as long as the symbols they're made from come from a finite set!

Ergo, Hewitt can't be using a finite (or even countable) set of symbols. As we know, Hewitt's symbols include arbitrary Actors, which Hewitt says are uncountable.

That sounds like circular reasoning

If you assume that actors are uncountable then actors are uncountable.

But unless he has another reason for them to be uncountable, this contradicts any sensible definition of "actor".

A CreateReal.[ ] is one of uncountably many Actors

A CreateReal.[ ] is one of uncountably many Actors.

Paradigm shifts are very difficult

Thomas,

Paradigm shifts are very difficult and confusing because new unfamiliar stuff appears and some previously standard stuff is cast off or subtly modified.

Normally people operate fairly well just using pattern recognition and don't have to resort to detailed technical argumentation (which is very difficult if not impossible for most people). But paradigm shifts upset things until a new paradigm becomes established long enough that older ways of thinking have largely disappeared.

Also, scientific articles (which are written mainly for experts to advance the state of the art) are not the best way to learn a paradigm shift. The current tools at LtU and many contributors here do not seem to be up to the admittedly very tough job. It is very difficult even for professionals at the highest level to advance the state of the art (what counts scientifically) in the book "Inconsistency Robustness".

LtU has been helpful to me in that I get ideas on how to advance things even from comments and statements that are mostly wrong; just as I learn from misunderstandings and questions of students and colleagues. Finding limitations is fundamental to making further advances.

So don't be discouraged. You have made some important contributions here and are starting to get the hang of it. If you persist in your investigations, you will be rewarded.

I will be happy to answer any particular questions that you might have.

Regards,
Carl

PS. For a humorous view, see Lewis Carroll, "What the Tortoise Said to Achilles".

What I find particular

What I find particularly frustrating is that it seems like very few people who insult Hewitt's work or Hewitt have made a serious attempt to understand it. A decent number look at it briefly, decide it is too confusing or hard to follow, and then go into attack mode. Then, on LtU, they act as if it is Hewitt's (or someone else's) duty to examine their confused statements, correct them, and keep doing that until the commenters have "read the papers without actually looking at them".

That is just not the way that scientific, or mathematical, or even rigorous engineering discourse works.

But that is the way scientific discourse works. Matt M already answered about peer review, and I (a mere PhD student) elaborated earlier on information overload. Many CFPs explicitly demand clarity. The OOPSLA 2012 CFP made this painfully clear:

The length of a submitted paper should not be a point of concern for authors. Authors should focus instead on addressing the criteria mentioned above, whether it takes 5 pages or 15 pages. It is, however, the responsibility of the authors to keep the reviewers interested and motivated to read the paper. Reviewers are under no obligation to read all or even a substantial portion of a paper if they do not find the initial part of the paper interesting.

http://splashcon.org/2012/cfp/due-april-13-2012/378-oopsla-research-papers

Even though other CFPs are not as extreme, I've learned that's good writing advice.

Improving scientific articles is challenging

Improving scientific articles is challenging.

Readability is certainly an issue.
Writing good abstracts and introductions is very difficult.
Authors initially mainly get help from colleagues (including students).
After an article has been accepted at a prestigious journal (e.g. Science, Nature, CACM), the professional editorial staff help with article organization and sentence wording, but can't do much for content.

The system of using unpaid anonymous reviewers is not adding much value either in terms of content or editorial improvement :-(
After a number of prominent scandals, leading journals are rethinking how to proceed.

Is it really a problem?

Not every thread in every topic interests me. Many do, and everybody has their own personalised ranking. Nobody has a ranking that agrees with anyone else's, but we all feel that they should be roughly consistent with each other. Stick in the idea that somehow the relevant information should be more prominent than the noise, and I would imagine there is an analogue of Arrow's Theorem that applies to every forum in existence. Previous attempts to resolve this?
* Hiding / collapsing threads.
* Killfiles.
* Tagging / hiding.
* The pgdn key.

Being a little pragmatic, one of these is currently implemented on LtU and the rest never will be (barring some bizarre late-22nd-century upload of the database to a sentient drupal successor). The nice thing about skipping through stuff that I don't want to read is the interestingly serendipitous way of discovering that I am more interested in a particular topic than I thought at first. It causes reevaluations of interest and the discovery of random tidbits. This is the classic tradeoff between searching and browsing interfaces. I would highly recommend it to anyone who has trouble with one particular topic taking over a thread or topic.

Once the reading experience is providing more pleasure, there are simple approaches to improving the writing experience.
* Don't put time and energy into discussions that you do not value.
* Write more *other* things that do interest you.

Over the years the coverage of Actors has slowed down a bit. Fewer unpublished reports get posted, and the number of threads related to it seems to have declined. I haven't checked, but was 2013 peak-Hewitt? Personally I like a lot of the discussions it has provoked. Hewitt can be terse and cryptic at some times, and repetitive at others. But he has said a number of interesting things over the years. With a background in process calculi and an interest in the foundations of maths, I find that most of the discussion appeals to me. Where it spills over into other areas (engineering of X is solved by actors), well, I rely on the pgdn key as a technical solution.

Peak Hewitt

<delurk>
2013 might have been "Peak Hewitt", but it seems that 2015 is becoming "Peak Discussions About Hewitt". That should start triggering alarm bells; it's usually around this point, especially in the absence of technical solutions beyond <pgdn>, and particularly when threadspamming hits the usability of the medium as hard as it does here, that "community mainstays" start quietly drifting away.

Personally, I'm finding more and more that seeing posts by Hewitt or Thomas Lord causes me to switch off.
</delurk>

Put it in Wikipedia

Let's make a Wikipedia page about "Peak Hewitt." I see no reason why this couldn't become yet another Internet theme.

Good luck with Wikipedia; censorship there is *very* strong :-(

Good luck with Wikipedia; censorship these days is very strong :-(

In the early days, I contributed to Wikipedia, including writing the article on "Actor Model" (which is still there) and other articles.
But then the censors banned me :-(

See Corruption of Wikipedia.

Please no

I would very kindly suggest that here is not the place to air personal persecution complexes. I'm pretty sure marco was joking.

What gave me away?

What gave me away? ;-)

human decency

I would very kindly suggest that here is not the place to air personal persecution complexes.

This is disgusting and shame on you.

Marco made -- and let's be generous and say -- a joke. The joke was despicably ugly. It proposed inciting an internet-wide mob to direct hatred at Hewitt.

An "anonymous coward," apparently, Marco -- as a joke (let's say) -- suggested we get as many people as possible to gang up on and socially maltreat Hewitt. This is in a context where similar "jokes" have famously created tremendous grief and suffering as mobs get out of control.

Somewhat gracefully, Hewitt rebuffed the "joke" in kind.

You, Ben, started "policing" comments in the wrong place.

I would very kindly...

Not at all. You would dress your obnoxious comment in polite language. That is different. There is no way to "kindly" say what you said.

I interpreted "Peak Hewitt"

I interpreted "Peak Hewitt" as the number of posts Hewitt makes, not the posts against him.

" I see no reason why this couldn't become yet another Internet"

In its context, this is hate speech:

"I see no reason why this couldn't become yet another Internet theme."

In its context, this is

In its context, this is hate speech...

No, it isn't. Why the sudden flamemode on?

inciting crowds towards social isolation is hate speech

Marco's proposal objectifies Hewitt, a live human being with some of the most interesting things to say about programming language theory among all LtU commenters, as a target of ridicule and bullying.

Marco's proposal (even if a joke) is an example of a recently more common trend in how people behave in social forums on the Internet. That form of bullying has produced outcomes like suicides, people needing police protection when mobs start making certain kinds of threats, and more.

The Internet as popularly used makes it easy for people to engage in such bullying without suffering personal consequences from it.

One way we can help to resist the problem is to call it out where we can usefully do so.

This is not bullying

I have no idea what you are talking about. This isn't bullying. It's not even close. Are you suggesting that Hewitt needs police protection from LtU now? Please don't cheapen the discussion here with such exaggerations and comparisons to hate speech.

The issue at hand has been one of behavior: repeated, persistent, intentional derailing of threads towards Hewitt's work, by himself and one or two others. The negative reaction to years of this behavior could hardly be construed as bullying. Besides, I get the impression that LtU people are not exactly the bullying type; strong opinions and heated discussions tend to stay rather impersonal, even if they misunderstand or fundamentally disagree with other people's positions and paint with an overly broad brush.

I don't see anyone suggesting Carl is a bad person, is stupid, or that he kills puppies. Moreover, I don't see anyone even really suggesting that Actors are bad or stupid or that they kill puppies. However, I do get the sense that some people are frustrated about how quickly threads get derailed towards his work and the ensuing amount of noise.

Maybe you experience this as persecution, but IMHO it's triggered by a long-term pattern of behavior that some people find annoying. Marco's joke notwithstanding, about the strongest language I've seen here is "that's annoying, please stop." and "you're trolling".

"derailing" a thread

Hewitt's response comments are generally on-topic, usually very brief, and not necessarily easy to grasp without putting some effort into the work he talks about.

Hewitt's top-level comments are generally on-topic and of a similar character, though not necessarily so brief.

Some of them have provoked lengthy discussion of varying quality.

The only sense I can make of the phrase "derailing a thread" in this context is that you wish other people would not have these discussions.

Further, in this LtU topic, some people are offering a social strategy to achieve your aim of preventing people from having those discussions. The social strategy is to ask everyone but Hewitt to single Hewitt out and assign to him a second-class status in this LtU society.

bullying language

about the strongest language I've seen here is "that's annoying, please stop." and "you're trolling".

Or how about the one accusing him of having a persecution complex?

"please stop" is a peculiar thing to say when you consider that the person saying it could instead say nothing and ignore the thread.

"you're trolling" can be fair accusation if it is true which does not appear to be the case here.

Trying to incite everyone-but-Hewitt to treat Hewitt as a lesser among equals is bullying. It's especially embarrassing from a perspective that recognizes the works he talks about as being really quite interesting and PLT-relevant on a broad range of subjects.

re: trolling

On the whole, as I have said from the top of this whole topic -- if not clearly enough, I do sincerely apologize -- I agree that the main problem is that people reply to Hewitt, not that Hewitt posts in the first place. Too bad the "zen" approach to interactions with people here ends up with very trollish looking outcomes.

Implied Agreement

Part of the problem is that not replying imparts a kind of implied agreement, which would be misleading in a public forum that may be used as a reference.

Please read what I actually

Please read what I actually wrote. I suggested that this is not the place to air grievances against Wikipedia. Or do you disagree? I was under the impression that LtU was some kind of forum for discussing programming languages.

If LtU allows personal attacks, it should also allow defense

Ben,

It's interesting that you chose not to criticize Marco.

If LtU allows personal attacks, then in fairness, it should also allow those attacked to defend themselves.

Regards,
Carl

For what it's worth

...wikipedia isn't a place where experts are made welcome. I'm somewhat surprised an article by yourself on the actor model has managed to survive (it being, of course, "original research"). You probably didn't manage to annoy enough admins to get it nuked.

I find that wikipedia is best kept as a stress-relief tool. If you find yourself embroiled in a situation like this one, head off to wikipedia and make some edits. The trick to it is to make grammar-fixing edits on relatively uncontroversial subjects (thus keeping your head down from the "serious" wiki-police), but let a few of them subtly (or not so subtly) alter the meaning as well as the grammar. If you add a few spurious references to nonexistent, long-out-of-print books, that helps.

If you want to really nail those edits down as "truthy", though, start editing them *back* to the correct state with a different persona, and then use the old persona to complain bitterly that the new persona is a sockpuppet of Banned User:CarlHewitt and watch "important" wiki-role-players rush to reinstate your lies.

It needs access to a couple of proxies, but adding blatant falsehoods to wikipedia is quite a fun thing to do. Whatever you do, don't start believing wikipedia's somehow important.

wikipedia's pervasive inconsistency

It needs access to a couple of proxies, but adding blatant falsehoods to wikipedia is quite a fun thing to do.

Anti-social and yet unpreventable the way they have structured the institution.

LtU is doing a lot better than Wikipedia :-)

LtU is doing a lot better than Wikipedia :-)
But the LtU ecosystem is fragile and could easily be destroyed :-(

There are some excellent suggestions for improving LtU here.

PS. People should not sabotage Wikipedia articles.
Instead, they should put the effort into fixing Wikipedia's broken administration.