Strengthening Process Calculi

Mingsheng Ying: Topology in process calculus: approximate correctness and infinite evolution of concurrent programs.

Since process calculi come up now and then, I borrowed this book from the library and tried to read it. I cannot claim to have grokked it, but I was excited reading through the example applications toward the end. I (think I) am hoping this kind of work can find its way into the mainstream somehow, someday.

Communication and concurrency are essential in understanding complex dynamic systems, and there have been many theories to deal with them such as Petri nets, CSP and ACP. Among them, process calculus is one of the most important and mathematically developed models of communication and concurrency. Various behavior equivalences between agents, such as (strong and weak) bisimilarity, observation congruence, trace equivalence, testing equivalence and failure equivalence, are central notions in process calculus.

In the real applications of process calculus, specification and implementation are described as two agents, correctness of programs is treated as a certain behavior equivalence between specification and implementation, and then the proof of correctness of programs is a task to establish some behavior equivalence between them.

The goal of this book is to provide some suitable and useful concepts and tools for the understanding and analysis of approximate correctness of programs in concurrent systems. Throughout this book the focus is on the framework of process calculus, and the main idea is to construct some natural and reasonable topological structures which can reveal suitably a mechanism of approximate computation in process calculus and to work out various relationships among processes which are compatible with these topological structures.

Process calculi: Actors with channels (modeled as Actors)

There are fundamental differences between the Actor Model and the π–calculus:
⃝ The Actor Model is founded on physics whereas the π–calculus is founded on algebra.
⃝ Semantics of the Actor Model is based on message orderings in the Computational Representation Theorem. Semantics of the π–calculus is based on structural congruence in various kinds of bi-simulations and equivalences.

Communication in the π -calculus takes the following form:
⃝ input: aChannel[aMessage].anExpression is a process that gets a message from a communication channel aChannel before proceeding as anExpression, binding the message received to the identifier aMessage. In ActorScript [Hewitt 2010a], this can be modeled as follows: Let aMessage←aChannel.get[ ]⬤ anExpression
⃝ output: ū[aMessage].anExpression is a process that puts a message aMessage on communication channel u before proceeding as anExpression. In ActorScript, this can be modeled as follows: u.put[aMessage]⬤ anExpression
The above operations of the π-calculus can be implemented in Actor systems using a two-phase commit protocol [Knabe 1992; Reppy, Russo, and Xiao 2009]. The overhead of communication in the π–calculus presents difficulties to its use in practical applications.
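The channel-as-actor modeling above can be sketched in a few lines of Python. This is an illustration with invented names (`Channel`, `put`, `get`), not ActorScript, and it omits the two-phase commit protocol the cited papers use for a faithful synchronous rendezvous:

```python
# Sketch of a pi-calculus-style channel modeled as an actor-like object
# holding a mailbox of undelivered messages. Names are illustrative.
import queue
import threading

class Channel:
    """A channel as an object that buffers messages in a mailbox."""
    def __init__(self):
        self._mailbox = queue.Queue()

    def put(self, message):
        """output: u.put[aMessage] -- deliver a message to the channel."""
        self._mailbox.put(message)

    def get(self):
        """input: aChannel.get[] -- block until a message is available."""
        return self._mailbox.get()

a_channel = Channel()
threading.Thread(target=a_channel.put, args=("aMessage",)).start()
print(a_channel.get())  # -> aMessage
```

Note that `queue.Queue` makes `put` asynchronous (buffered); a faithful π-calculus send would additionally block the sender until the receiver takes the message, which is exactly where the synchronization overhead discussed in the text comes in.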

Process calculi (e.g. [Milner 1993; Cardelli and Gordon 1998]) are closely related to the Actor Model. There are similarities between the two approaches, but also many important differences (philosophical, mathematical and engineering):
⃝ There is only one Actor Model (although it has numerous formal systems for design, analysis, verification, modeling, etc.) in contrast with a variety of species of process calculi.
⃝ The Actor Model was inspired by the laws of physics and depends on them for its fundamental axioms in contrast with the process calculi being inspired by algebra [Milner 1993].
⃝ Unlike the Actor Model, the sender is an intrinsic component of process calculi because they are defined in terms of reductions (as in the lambda calculus).
⃝ Processes in the process calculi communicate by sending messages either through channels (synchronous or asynchronous), or via ambients (which can also be used to model channel-like communications [Cardelli and Gordon 1998]). In contrast, Actors communicate by sending messages to the addresses of other Actors (this style of communication can also be used to model channel-like communications using a two-phase commit protocol [Knabe 1992]).

There remains a Great Divide between process calculi and the Actor Model:
⃝ Process calculi: algebraic equivalence, bi-simulation [Park 1980], etc.
⃝ Actor Model: futures [Baker and Hewitt 1977], Swiss cheese, garbage collection, etc.

Whose, or which, physics?

Whose, or which, physics?

Physics includes relativistic and quantum

Physics includes relativistic and quantum.

It also includes fluid

It also includes fluid dynamics

marco... please knock off the noise?

It also includes fluid dynamics

Hewitt points with lucid hints and you are mocking him on the grounds of your (professed, anyway) inability to pick up his hints and see what he is pointing at. Just listen or ignore for a while, please?

(We already discussed in an earlier recent thread why QM + relativity give support to the concept of non-deterministic Actor computations)

I am not the one claiming to

I am not the one claiming to understand the "laws of physics."

re: I am not the one claiming to

I am not the one claiming to understand the "laws of physics."

I am at a loss to see what you are offering the discussion at all, in fact.

That's not up to you

I wanted a) a precise answer, preferably where he uses QM, if so, and b) I can do without making up new theories as we go along.

You came up and invented a theory on Dedekind cuts which supposedly is what Hewitt intended.

A post on LtU is suddenly going to explain it? I'd rather stick to what's published.

So, what's your contribution then. Exactly?

He's pointing out that

He's pointing out that the emperor has no clothes.

marco: re QM

I wanted a) a precise answer, preferably where he uses QM, if so, and b) I can do without making up new theories as we go along.

The definition of "Real.[]" describes an uncountable set.

QM + relativity (with an assumption of unbounded time) prove that a real machine can be built that chooses a truly random element from the set described by "Real.[]". Every time you start the machine, it randomly chooses a new element from that set.

The Free Will Theorems from Conway and Kochen help to work out how minimal our assumptions about QM and relativity can be.

Every time you start the

Every time you start the machine, it randomly chooses a new element from that set.

No, it doesn't. Every time you run the machine forever it would randomly choose a new element from that set. Which is something you can do exactly never. You will always have not yet finished. The idea that by starting to run the machine you have already chosen the complete future evolution of it is a misapprehension.
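The objection can be made concrete with a sketch: a machine that "chooses a random real" as an unending stream of binary digits only ever yields finite prefixes. This is a Python illustration with invented names, not anything from the papers under discussion:

```python
# A "random real" machine as a lazy stream of random bits: at any moment
# only a finite prefix has been produced, so no complete element of the
# uncountable set has ever been chosen.
import random
from itertools import islice

def random_real_bits(seed=None):
    """Yield an unending stream of random binary digits."""
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 1)

machine = random_real_bits(seed=0)
prefix = list(islice(machine, 16))  # any observation is a finite prefix
print(prefix)
```

Every observation of the running machine is a finite (hence rational) approximation; the "complete future evolution" is never in hand.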

John I get this

I don't get why they are calling this sequence of 1s and 0s "an actor", let alone your point that one unfinished, countable sequence of 1s and 0s is not a continuum.

Though that second mistake is slightly funny in that it could be called Hewitt's version of the axiom of choice.

I see you've started an infinite process ...

It sounds like this to me as well, so maybe it does to many other readers:

The idea that by starting to run the machine you have already chosen the complete future evolution of it is a misapprehension.

Maybe it's a feature of the human brain that likes to invoke an auto-pilot once you start a process. You get in your car, and suddenly you're at work after N minutes of fugue. Or linguistically, one form of metaphor uses a part to represent the whole, so starting something is like having it finished as far as naming it is concerned (and mental habits of shorthand can fire on auto-pilot unless you self criticize a lot).

Sometimes when you ask someone how they reached conclusion Y, they explain very carefully the evidence for X. Exploring further, you find that X reminds them of Y, and they somehow made the leap that since X therefore probably also Y, even without direct relation. When I was young I just thought people were crazy, but I kind of get it now. And I find myself making the same sorts of mistakes too eventually, when I'm honest with myself. (Meaning: no criticism is intended, because the idea is only to characterize what things sound like.)

They not only posit having finished an infinite process

but in order to claim that it represents the order of the continuum, they imagine having completed it an uncountably infinite number of times.

It's sheer force of will. As long as you're going to imagine having finished an infinite process, and as long as you're going to imagine having done it an infinite number of times, you might as well imagine having done it to the highest possible order.

It's a new form of the axiom of choice... If you ASSUME that something has an order, then if you sample from your assumed set then it has the order you wanted.

What no one will answer me is how Hewitt and Thomas get from this to "shows that there are an uncountable number of actors".

Uhm Real.[ ] is ONE actor.

re "You will always have not yet finished. "

Every time you start the machine, it randomly chooses a new element from that set.

No it doesn't.

The worldline containing the instant of turning on the machine, including its future, describes the new element from the uncountable set.

What describes the worldline?

What describes the worldline?

re what describes a worldline

As an example, any axiomatized account of special relativity, or even a conservative subset.

An example of its use in computer science is in the somewhat famous pair of papers by Leslie Lamport, "The mutual exclusion problem: Parts I and II", published originally in JACM, April 1986, v33 #2

Interpretations of QM

I think this is at best misleading. In the classical QM interpretation, when the wave-function collapse happens, it happens in a single universe; hence the machine can only be switched on once, and only ever produce one number that you can never completely read (only ask for successively better approximations).

In the multi-universe interpretation, switching the machine on could cause a split into a finite number of universes (because you cannot assemble an infinite-qubit quantum machine; it would take infinite time to assemble even in an infinite universe, and we believe the universe is finite in any case), but communication between such universes is impossible, so from within any universe the result appears the same as in the classical case.

Prove it

QM + relativity (with an assumption of unbounded time) prove that a real machine can be built

QM doesn't prove you can create a realistic fair machine in the mathematical sense; on the contrary, I would suppose that every physicist would explain to you that the ideal of a fair coin flip machine cannot be reached without a certain error margin.

Hewitt claims his model is based on physics. I don't know why he claims that, but that claim by itself is problematic. Especially when adding that to the mix of math and logic. At minimum I want to know what he means, or intends, with that.

Reality, physics, the laws of physics, special theories of physics, stochastic processes, math, logic, they all (can) mean different things.

At least the other paper restricts itself to fair processes, which is a mathematical technical term. (Which I am not entirely certain what it means within the context of that paper.)

I know this is cruel

but Hewitt's claims like "based on physics" are so vague, that I sometimes get this odd feeling that he had an insight a long time ago and is now trying to promote it, except that he can't remember what it was.

I suspect

I suspect he means stochastic process in the mathematical sense.

re I know this is cruel

I know this is cruel but Hewitt's claims like "based on physics" are so vague, that I sometimes get this odd feeling [....]

That places you in an interesting position.

You have choices.

1) You could blather on endlessly about your suspicion that Hewitt is off his nut, never offering anything more robust than your "feels". You should not do that on LtU, imo. It would be just your endless blather and nothing more.

2) You could try to pin down your intuition about what is wrong and then try to formulate objective questions, referring to the works at hand, to test your reading against actuality.

You are so far doing a good job at (1).

except that he can't remember what it was.

You have no room to point fingers in that way given your performance.

it's up to him to explain what he means.

People usually don't have trouble explaining an idea that they came up with.

As for my performance, you're not managing to answer some thoroughly basic questions.

re it's up to him

People usually don't have trouble explaining an idea that they came up with.

Hewitt doesn't. You haven't convinced me you have even bothered to read the paper so I think you should stop talking or do better. You are off topic and rude.

This is an academic forum, not an internet chat room.

Some people are here because we have an interest

in computer languages rather than an interest in mathematical papers or even the ability to read them.

If you refuse to answer basic questions, rather than being revered as knowledgeable, you might be judged incapable. I'm not your student; you lack the authority to castigate me for not being up to some specific grade.

DL is foundational for PLT

Some people are here because we have an interest in computer languages rather than an interest in mathematical papers

That's why Hewitt's posts are appropriate to the subject matter of LtU.

Suggest reading "Actor Model of Computation" on HAL

I suggest reading Actor Model of Computation especially starting on page 10.

marco on QM

QM doesn't prove you can create a realistic fair machine in the mathematical sense;

Conway and Kochen's axioms in the two versions of the Free Will Theorem do prove this. None of those axioms contradict QM, so far as I know. Two of the axioms have overwhelming empirical evidence on their side. The third is not subject to empirical proof but amounts to a belief in causality.

Yah.

And those are two mathematicians, not physicists. The noteworthy difference being that physicists mostly think in terms of models and reality, and mathematicians sometimes claim math is reality.

If you really think you can construct a real fair machine then do it.

It still doesn't explain what the role of QM or relativity is apart from that Hewitt seems to want to model stochastic processes. At minimum I would expect QM to pop up in the formal treatment of his model.

And those are two mathematicians, not physicists. The noteworthy difference being that physicists mostly think in terms of models and reality, and mathematicians sometimes claim math is reality.

You also show no evidence of having read any of the source materials under consideration. You are trolling.

Disagreeing isn't insulting anyone except the narrow-minded. Moreover, I haven't seen QM in the work presented below, and I have claimed that, contrary to your claim, you cannot create a fair real coin-toss machine.

I am pointing out that the model isn't reality and most physicists know that.

I find Hewitt's claim about his model being based on physics pretty vacuous.

You ran out of arguments so now you use an ad hominem.

I actually see no reason anymore to read any of Hewitt's work.

What I CAN make sense of

I do intend to slog through the actorscript docs when I have time, but that won't be for a week.

That's where I can judge something. Computer languages are something I can learn from anyway,

Josh: re my ragging on you

I do intend to slog through the actorscript docs when I have time, but that won't be for a week.

Cool. My harshness towards your rhetoric is not meant to treat you as "my student" (as you supposed) or to otherwise abuse you. It is meant to help sharpen your efforts or more precisely, guide anyone who identifies with the way you've been approaching the subject.

It's a different subject

I'm interested in actorscript because it's an unusual idea, putting a layer on a bunch of computer languages, and I'm interested in language design.

re: "I [...] see no reason anymore to read any of Hewitt's work"

Does that mean you have nothing more to say on the matter?

Or will you continue to opine critically on that which you wilfully decide to not examine?

You assume too much

I never said I didn't examine it. That was you.

re: He's pointing out that

He's pointing out that the emperor has no clothes.

On the contrary. He is asserting that and he can't back up his assertion in a rigorous way, only by insult and redirection.

The point of the

The point of the emperor-has-no-clothes thing is that it's obvious. Not, evidently, to you, and I'd be happy to help you understand this stuff better if I could figure out quite where you're going wrong, but to the rest of us it's pretty obvious.

Shutt: re obvious

to the rest of us it's pretty obvious.

Fantastic. I invite you to go through the paper with me in a close reading and you can show me where you think it fails.

I admit there's some trickiness to that project because the paper is multi-disciplinary, not only describing a formal system but situating it in a history of mathematical thought.

Let's get down to brass tacks, if you are serious about the grave insult you level at the work.

You've repeatedly thrown

You've repeatedly thrown around accusations of insult. It's a pretty narrow word, that you're using broadly.

It would be nice if your optimistic description of Hewitt's paper turned out to be accurate. My past experiences with his writing induce an expectation far more pessimistic; but either way it'll have to wait until I can set aside a massive block of time and energy for the enterprise. I see no reason, however, to hold off in the meantime from pointing out on LtU flaws in things said on LtU.

Insults are pretty much irrelevant except that they detract from LtU.

However, constructive comments and concrete suggestions are gratefully appreciated.

The Actor Model

Includes all flavors of ice cream and syrup toppings.

It includes uncountable combinations of toppings.

An Actor can have aspects

Although most Actors do not have "syrup toppings", many do have aspects ;-) See page 20 of ActorScript

Actors: Processes with names (modeled as channels)

Gul Agha (one of the more prominent Actor researchers aside from Carl himself) has a nice paper showing how a model of Actors can be embedded inside a typed asynchronous version of the π-calculus (http://formal.cs.uiuc.edu/papers/ATactors_festschrift.pdf). The resulting model both sheds some light on the relationship between Actors and process calculi, and allows the definition of a behavioral equivalence on Actor configurations (which presumably allows Actor systems to be analyzed and perhaps model-checked).

Channels in π-calculus: not a good foundation for Actors

Channels in π-calculus are not a good foundation for the semantics of the Actor Model.

The π-calculus is problematical because:
* inclusion of the composition operator (|)
* types are not Actors
* types are not parameterized
* there are no interfaces

Also, channels impose an undesirable indirection in the semantics.

Actors as channels: not a good foundation for process calculus

One could equally argue that the Actor model is not a good foundation for representing process calculi, because the asynchronous address-based communication of Actors requires the undesirable indirection of things like two-phase commit protocols to model process semantics. As for the rest, the addition of types to the π-calculus (not something that is part of the classical π-calculus) isn't necessarily required to model Actor systems, although Agha seems to think it is. I haven't studied it in enough depth to know.

Regardless, the point is that the Actor model and the π-calculus are different approaches with different goals. Agha says as much in his paper. Similar papers have been published showing how to translate one kind of process calculus into another, or Petri Nets into a process calculus and vice versa, all in the interests of exploring the relationships between those formalisms and understanding their relative strengths and weaknesses for expressing different kinds of systems. Since this thread started with an article about process calculus, it seemed to me that the most relevant discussion of the Actor model (if we must discuss it at all in this thread) would be one that looked at it through the lens of process calculus.

Channels are the downfall of process calculi

Channels are the downfall of process calculi:
* channels impose unneeded indirection in communication
* channels impose undesirable latency in communication

Could you expand?

Could you expand on those points?

• It seems to me that sometimes indirection is desirable (for example, different channels may represent different interfaces or services). If such indirection is not required, then it's simple enough to provide each process with only a single channel (which becomes equivalent to the "address" of that process). Since process calculi generally provide abstract models of a system rather than a direct implementation there's no reason a single-channel model could not be implemented in terms of address-based Actors.
• I fail to see how channels (which are essentially just a modelling convention for naming different interfaces) impose any kind of latency on communication.

If channels really are the downfall of process calculi, it seems odd that so many of them, developed by so many different researchers, have all included some kind of channel mechanism. Presumably there are some benefits to the notion of channels.

[Edit: Actually, it's probably worth noting that "channels" are not really fundamental to all process calculi. The more fundamental concept is an "event", which represents some form of interaction or communication. The idea of channels can be layered on top of events to provide a little structure. CCS and the π-calculus do that explicitly so that it's possible to model "input" and "output" actions, but in a sense it's a kind of syntactic sugar.]

Channels (i.e. "put" and "get"): fundamental to process calculi

Channels (i.e. put and get) are fundamental to process calculi.

Milner needed some place to store the communications in order to found process calculi on algebraic laws.

They're really not

They're really not, and I'm not sure why you keep saying that they are. Even in the π-calculus, which does tend to talk about "channels", the reduction semantics make it pretty clear that a communication is a single action or event shared between two state transition systems that represent the communicating processes, rather than anything to do with "putting" and "getting". Nor are the communications "stored" anywhere. An event is a single transition that occurs in both "sender" and "receiver" processes simultaneously (at least in synchronous calculi). From what I recall, ACP and CSP don't even mention channels in their definitions. Although there are "input" and "output" actions, they are essentially just syntactic sugar over raw events, with "channels" existing largely to group related events. One of the classic CSP examples is model of a vending machine selling chocolates or coffee. In that example the events are things like inserting a coin or pressing a button--there is no "putting" or "getting" of messages, and the concept of "putting" or "getting" a button press doesn't even seem meaningful.

they really are

For example: "get" and "put" are found in Milner's CCS in the form of complementary events. In a composition of machines R and S, machine R (semantically) "puts" an event ("α(x)") and S "gets" a complementary event ("–α(x)"). In a multi-part composition, ("R|S|T|...") either end of the channel ("{α, –α}") can be shared by multiple composed machines.

One of Hewitt's remarks here is that Actor semantics is comparatively simpler or cleaner or more elegant. In CCS (e.g.) machines connect to one end of a channel and channels connect machines on either end. In Actors, Actors reference other actors directly; channels are trivially modeled if you need them in a certain case for some kind of synchronization. Actor semantics appear to have few kinds of entities and relations between kinds of entities.

Hewitt's other remark is that:

channels impose undesirable latency in communication

He can correct me if I'm wrong but I believe he is referring to the cost of synchronization.

In CCS (for example), a primitive machine cannot "put" a second event until the first event it "puts" has been received by some other machine. A non-deterministic primitive machine can try to put multiple events at once, but synchronization is still required insofar as only one of the non-deterministically attempted puts can succeed.

Guess we'll have to agree to disagree

You might be correct if you were going to restrict "process calculus" to mean "CCS and π-calculus". Although even then I'd argue that your interpretation of the semantics of processes is incorrect. For other process calculi the idea of complementary events isn't even a part of the semantic model and exists, if it exists at all, only as syntactic sugar. Regardless, the semantics of the composite R|S machine is not a separate "put" and "get", it's a single shared simultaneous transition. There is no "putting" or "getting" in the sense of placing something in a conceptually separate channel and later retrieving it. You could view it (and implement it) as a procedure call. Or as a signal transition in a wire. Or as a button press. Actions/events in a process calculus are simply an abstract representation of some kind of interaction.

[Edit: by way of additional clarification, the operational semantics of CCS states that if R = a.R1 and S = -a.S1 then the composite system R|S will have the transition R|S -- τ --> R1|S1, where τ can be thought of as representing the occurrence of the a/-a event. CSP has similar rules, but leaves the synchronizing event visible instead of making it the hidden τ action (and would not have a co-name, since the concept doesn't exist in CSP). The fact that a communication represents a single state transition shared by both the "sender" and "receiver" is essential to the various behavioral equivalences used in the process calculi.]
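That rule can be rendered concretely in a few lines. The following is a purely illustrative Python encoding (the representation and names are mine, not Milner's formal calculus): a process is a dict mapping action labels to successor processes, with co-names written with a leading `-`.

```python
# Minimal CCS-style transition sketch. A process is a dict mapping action
# labels to successor processes; co-names carry a leading '-'.

def co(action):
    """Complement of an action label: 'a' <-> '-a'."""
    return action[1:] if action.startswith('-') else '-' + action

def compose_transitions(r, s):
    """Transitions of R|S: either side moves alone, plus a hidden 'tau'
    step for each complementary pair a / -a (the single shared event)."""
    moves = []
    for a, r1 in r.items():              # R moves, S unchanged
        moves.append((a, (r1, s)))
    for a, s1 in s.items():              # S moves, R unchanged
        moves.append((a, (r, s1)))
    for a, r1 in r.items():              # synchronization: one tau transition
        if co(a) in s:
            moves.append(('tau', (r1, s[co(a)])))
    return moves

# R = a.R1 and S = -a.S1, with R1 and S1 the inactive process
R1, S1 = {}, {}
R, S = {'a': R1}, {'-a': S1}
print(compose_transitions(R, S))
# the result contains ('tau', (R1, S1)): the communication is one shared
# transition of the composite system, not a separate put followed by a get
```

The point the sketch makes is exactly the one in the comment above: the a/-a synchronization appears as a single τ transition of R|S, with nothing stored in between.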

One of Hewitt's remarks here is that Actor semantics is comparatively simpler or cleaner or more elegant.

The Actor model has actors and messages. Process calculi have processes and events. From what I can see, the major differences are that Actor messages are asynchronous (no blocking on a send) while (in synchronous calculi) a process will not proceed until the other participants in an event are all ready to engage in the event, and that Actor composition is defined by the scope of name/address references (Actors cannot send messages to Actors they don't hold a reference to) whereas process composition is defined by the scope of parallel composition statements (processes cannot participate in events with processes they are not composed with). Synchronization "channels" can be trivially modeled by Actors when necessary. Asynchronous communication can be trivially modeled using buffering processes when necessary. Or you could just use an asynchronous process calculus, as Agha did in constructing a process calculus model of Actor semantics. Is there something I'm missing here? Because I'm really struggling to see how Actors are semantically simpler or cleaner than process calculi. If anything, the two are so similar in their nature that it's hard to see why we're arguing about this at all.

In CCS (for example), a primitive machine cannot "put" a second event until the first event it "puts" has been received by some other machine.

True in the case of CCS (I'll skip right past your use of the word "puts" there). Untrue in the case of (for example) the asynchronous π-calculus. Also hardly relevant, since an asynchronous style of communication can be modeled in a synchronous process calculus by introducing buffering processes. Which is not unlike the way an Actor system may model synchronous communication by introducing handshake messaging, or introduce message queuing by adding a queuing Actor.

Hoare, in section 7.3.4 of his Communicating Sequential Processes book, says:

For many years now, I have chosen to take unbuffered (synchronised) communication as basic. My reasons have been
1. It matches closely a physical realisation on wires which connect processing agents. Such wires cannot store messages.
2. It matches closely the effect of calling and returning from subroutines within a single processor, copying the values of the parameters and the results.
3. When buffering is wanted, it can be implemented simply as a process;
...
Of course, none of these arguments carry absolute conviction. For example, if buffered communication were taken as primitive, this would make no logical difference in the common case of alternation of subroutine call and return; and synchronisation can be achieved in all other cases by following every output by input of an acknowledgement, and every input by output of an acknowledgement.
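Hoare's closing remark, that synchronisation can be recovered from buffered communication by acknowledgements, can be sketched as follows. This is a Python illustration with invented names (`sync_send`, `sync_receive`), not CSP:

```python
# Synchronous (rendezvous-style) send built on asynchronous mailboxes by
# following every output with input of an acknowledgement, per Hoare's remark.
import queue
import threading

def sync_send(data_box, ack_box, message):
    data_box.put(message)   # asynchronous output ...
    ack_box.get()           # ... then block until the receiver acknowledges

def sync_receive(data_box, ack_box):
    message = data_box.get()  # asynchronous input ...
    ack_box.put(None)         # ... then acknowledge, releasing the sender
    return message

data_box, ack_box = queue.Queue(), queue.Queue()
received = []
t = threading.Thread(
    target=lambda: received.append(sync_receive(data_box, ack_box)))
t.start()
sync_send(data_box, ack_box, "rendezvous")  # returns only after the receive
t.join()
print(received)  # -> ['rendezvous']
```

The converse direction, also mentioned in the thread, is symmetric: asynchrony is recovered in a synchronous calculus by interposing a dedicated buffering process between sender and receiver.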

No such thing as simultaneity at different places.

There is no such thing as simultaneity at different places.

Hoare[1985] was correct that the Actor model of communication is basic:

1. It matches closely a physical realisation of sending a message on
a wire. Such wires cannot store messages.
2. It matches closely sending messages on an X86 or Arm core by placing
a request message in registers and receiving the result in a register.


Puts and gets redux

Sure. And if I wanted to model a non-simultaneous situation I could introduce a buffering process of some kind to represent the physical distance between the communicating processes. But when my finger presses a button the finger and the button are in the same place, and the event represented by that button press occurs at the same time for finger and button. Similarly, if we're talking about modeling computational processes instead of physical ones, unless you're talking about a distributed system all of the computational processes are in "the same place" and an assumption of simultaneity can simplify things.

Or, if you prefer a default that doesn't involve simultaneity, you could use an asynchronous process calculus. Or the Actor model. It really depends on what you're trying to model, and what assumptions make the most sense from the perspective of (a) simplicity of the model, and (b) tractability of the analysis.

It matches closely sending messages on an X86 or Arm core by placing a request message in registers and receiving the result in a register.

So an Actor "puts" a message in a register, and some other Actor "gets" that message from the register so that it can generate a result that it then "puts" in a register?

Assumption of simultaneity at different places is incorrect

An assumption of simultaneity at different places is incorrect. Consequently, the || operator in process calculi is a dubious computational primitive, both theoretically and in practice, because it needs something like channels to make it real.

There are no put and get communications in the Actor model when the following implementation mechanism is used invisibly by the infrastructure: a message is sent on an X86 or Arm core by placing a request message in registers and perhaps later receiving the result in a register or responding to a thrown exception.

Also, there is no requirement to use the above mechanism in the Actor Model. For example, the mechanism is typically not used when dealing with primitive hardware Actors.

We are in violent agreement (except where we disagree)

I never said that assuming simultaneity in different places was correct. Perhaps you misread what I wrote?

I agree with you that systems involving processes or actors in different places are better modeled by non-simultaneous send/receives. In those situations an Actor-based model or an asynchronous process calculus probably makes sense. The trade-off is that by forgoing simultaneity you make the state space of the system substantially larger, which in turn makes analysis of that state space that much more difficult. The synchronous process calculi seem to favor a more tractable analytical model, with non-simultaneous effects modeled as necessary instead of built into the model. You've obviously opted for a different default. I think they're both valid choices. You seem to think your choice is the only valid one. I'm not sure what else there is to say.

There are no put and get communications in the Actor model when the following implementation mechanism is used invisibly by the infrastructure

Ok, so you claim that the put/get is an implementation detail rather than something inherent in the semantics of the Actor model. I claim the same about synchronous process calculi. Obviously you disagree. Again, I'm not sure what more there is to say.

Assumption of simultaneity at different processes is a mistake.

An assumption of simultaneity at different processes is a mistake. Consequently, the || operator in process calculi is a dubious computational primitive both theoretically and in practice because it needs something like channels to make it real.

Also, a similar mistake was made in the design of Petri nets in that a token simultaneously disappears from different places.

process calculus v. actors (re put and get etc.)

Regardless, the semantics of the composite R|S machine is not a separate "put" and "get", it's a single shared simultaneous transition.

The machine:

    αr


is a "get" on the "a" channel. If an "a" event arrives, the machine goes to state "r". If no "a" is ever to arrive, the machine is dead-locked. If this machine is composed in parallel with others ("(R||S)\α"), then this machine is basically waiting in a "get" until (and if) some other part of the composition puts an "a" event.

The separate "put" on the "a" channel is asynchronous because it can be semantically non-deterministic what else happens before an "a" event happens.

You need a temporal separation of the send and receive to explain non-deterministic behaviour and deadlock.

You could view it (and implement it) as a procedure call.

In a parallel machine like:

  (R||S)||T


an event put on a channel by any of the three machines may be received by the other two. The other two machines proceed in parallel and the outcome can be non-deterministic.
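The nondeterminism described above can be sketched as a toy state machine in Python (my own construction, not output from any process-calculus tool): one machine offers an "a" event, either of two other machines may synchronize on it, and the composite therefore has two possible successor states.

```python
# Toy sketch (not standard tooling): one machine offers "a"; either of
# two others may take it, so the composite has two possible next states.
sender = {"p0": [("a", "p1")], "p1": []}
recv1  = {"q0": [("a", "q1")], "q1": []}
recv2  = {"t0": [("a", "t1")], "t1": []}

def successors(state):
    """Pairwise synchronizations on a shared label from a composite state."""
    p, q, t = state
    out = []
    for lab, np_ in sender[p]:
        out += [(np_, nq, t) for l2, nq in recv1[q] if l2 == lab]
        out += [(np_, q, nt) for l2, nt in recv2[t] if l2 == lab]
    return out

print(successors(("p0", "q0", "t0")))
# -> [('p1', 'q1', 't0'), ('p1', 'q0', 't1')]
#    which receiver engages is not determined by the model
```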

Is there something I'm missing here? Because I'm really struggling to see how Actors are semantically simpler or cleaner than process calculi.

It's a pretty subjective question but I think I'm with Hewitt at least insofar as Actors are easier on the intuition. This makes sense to me: "The Actor Model is founded on physics whereas the π–calculus is founded on algebra."

An actor address is a routable address to another actor. An actor is a single-threaded server capable of making choices by coin flip and capable of sparking up other servers and sharing the addresses of servers it knows about. I feel pretty comfortable programming servers and using any and every available tool to reason about their compositions.

Process calculi are one tool for reasoning about some kinds of composition of some kinds of servers. From a utilitarian perspective, they feel to me like a special case compared to actors.

Again: this is a subjective question you're putting.

Hoare, in section 7.3.4 of his Communicating Sequential Processes book says: [...]

Actor communications are neither of Hoare's cases. They are neither unbuffered-synchronous nor buffered-asynchronous.

It gets a little hard to see because Actor messages are conventionally described as a send and a reply, always in pairs.

In a way, though, both sends and replies are really just asynchronous sends, one actor to another, that can arrive in any order after any amount of time.

If a message is sent, the sender knows the type of a hypothesized eventual return immediately. If the return type is Void, no reply will happen. A hypothesized reply of non-Void type can be reified as the (only waited-for-upon-demand) reply itself.
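That "waited-for-upon-demand" reply reads, to me, like a future or promise. A rough Python analogy (my interpretation of the comment above, not ActorScript semantics):

```python
# Sketch: the sender gets a handle for the hypothesized reply at once,
# keeps working, and only blocks if and when it demands the value.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as pool:
    reply = pool.submit(lambda: 21 * 2)  # "send"; returns immediately
    # ... the sender can continue doing other work here ...
    print(reply.result())                # blocks on demand -> 42
```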

Subjectively speaking

If no "a" is ever to arrive, the machine is dead-locked.

How is this any different than an Actor that responds to only a single kind of message waiting on a message that never arrives?

The separate "put" on the "a" channel is asynchronous because it can be semantically non-deterministic what else happens before an "a" event happens. You need a temporal separation of the send and receive to explain non-deterministic behaviour and deadlock.

You've lost me now. Process calculi (except the asynchronous ones) are synchronous. When "a" happens, it happens in both "sender" and "receiver" at the same time. Things can happen before "a" or after "a", but not between "a". It's an atomic event.

The other two machine proceed in parallel and the outcome can be non-deterministic.

Depends on the process calculus. I think CCS may operate that way (I'm not that familiar with it). In CSP an event that all three processes can synchronize on engages all three processes simultaneously. Which, admittedly, is also not "like a procedure call", but it's not like the scenario you've outlined either. The "like a procedure call" comment was a naive attempt to demonstrate (for a two-process composition) that there is no "putting" or "getting". It breaks down for multi-process compositions (although there is still no putting or getting in those compositions).

From a utilitarian perspective, they feel to me like a special case compared to actors.

I see them more as duals. Each can model the other. Depending on what you're trying to achieve one will be more useful than the other. I have no problem using the Actor model where it makes sense. But I've seen and worked with systems where a process calculus model was easier to deal with. YMMV.

Again: this is a subjective question you're putting.

Fair enough. Although I wasn't the one who introduced the subjective judgement about "simpler and cleaner".

In a way, though, both sends and replies are really just asynchronous sends, one actor to another, that can arrive in any order after any amount of time.

See, this is where I get confused. You talk about "puts" and "gets" in "channels" being temporally separated (even though they are actually intended to represent an atomic action), but you don't see temporal separation in a message arrival that can occur some time after it is sent, or potential nondeterminism in an undefined arrival order?

answers to Allan M. re process calc v actors

How is this any different than an Actor that responds to only a single kind of message waiting on a message that never arrives?

Actors can also block but that is not what we were talking about. You missed the context. The significance of "αr" in a process calculus is that it designates a "get" operation which is separate from a corresponding "put" operation (separate logically and in temporal ordering of execution). You earlier said that there was no separate "get" and "put" in process calculi. I have shown you that there are.

You've lost me now. Process calculi (except the asynchronous ones) are synchronous. When "a" happens, it happens in both "sender" and "receiver" at the same time. Things can happen before "a" or after "a", but not between "a". It's an atomic event.

In process calculi, after both the "put" and the "get" have taken place, the communication can complete at any time. Neither participating machine does anything until then. It is also possible that within a single (composed) machine, multiple matched pairs of "put" and "get" can happen concurrently (i.e., in no fixed causal order). Outcomes from such a machine can be observably non-deterministic about the order in which concurrent get/put pairs are handled.

See, this is where I get confused. You talk about "puts" and "gets" in "channels" being temporally separated (even though they are actually intended to represent an atomic action), but you don't see temporal separation in a message arrival that can occur some time after it is sent, or potential nondeterminism in an undefined arrival order?

Yes, you are confused. I'm trying to articulate this well but it isn't easy. You aren't confused by much, afaict. Here is a patch for this one:

Yes, you are right. If actor A sends a message to B (or sends a reply to an earlier message from B), there is a non-zero amount of time between the send from A and the receipt at B.

Actor transmissions are asynchronous, though. After sending a message, an actor immediately keeps going. After sending a reply, an actor updates its state and (usually) settles into an idle condition waiting for the next message.

Both systems temporally separate transmission from receipt.

(You earlier suggested only actors did this, if I recall correctly.)

Actor Model uses a customer Actor for request and response.

The Actor Model uses a customer Actor for modeling request and response.

Runtime failures are always a possibility in Actor systems and are dealt with by runtime infrastructures. Message acknowledgement, reception, and response cannot be guaranteed although best efforts are made. Consequences are cleaned up on a best-effort basis.

Robustness is based on the following principle:
If an Actor is sent a request, then the continuation must be one of the following two mutually exclusive possibilities:
1. to process the response resulting from the recipient receiving the request
2. to throw a Messaging exception

Just sitting there forever after a request has been sent is a silent failure, which is unacceptable. So, in due course, the infrastructure must throw a Messaging exception as governed by the policies in place if a response (return value or exception) to the request has not been received.

Ideally, if the continuation of sending a request is to throw a Messaging exception, then the sender of the response to the request also receives a Messaging exception saying that the response could not be processed.

If desired, things can be arranged so that Messaging exceptions are very special and can be distinguished from all other exceptions

See page 7 of Actor Model of Computation

re Robustness (Hewitt)

Robustness is based on the following principle:
If an Actor is sent a request, then the continuation must be one of the following two mutually exclusive possibilities:
1. to process the response resulting from the recipient receiving the request
2. to throw a Messaging exception

Am I correct in understanding that there is no response to a message whose reply type is Void, and that a sender of such a message can continue immediately, without receiving a reply, and without an exception occurring?

Am I correct in understanding that a sender of a message whose reply type is not Void may proceed before a reply is received, until such time as evaluation becomes strict in the content of the reply? (And a "customer actor" is a model for the handle on the not-yet-received reply?)

In ActorScript, all requests require response; one-way allowed

In ActorScript, all requests require a response for continuation; but one-way communication is allowed instead. If a type has a cacheable message, then the response can be obtained from the cache. Cache invalidation is work in progress.

See page 48-49 of ActorScript.

AllanM || TomL

I find it somewhat amusing that a thread about communication is plagued by so many communication failures. I can't tell if I'm not expressing myself clearly enough, not understanding what you're saying, or if we mean different things by the words we're using.

The significance of "αr" in a process calculus is that it designates a "get" operation which is separate from a corresponding "put" operation (separate logically and in temporal ordering of execution). You earlier said that there was no separate "get" and "put" in process calculi. I have shown you that there are.

Yeah, that's where we disagree. You think you have shown that there are separate put and get operations, and that those things are temporally separate. I don't think you have. In fact, I've specifically outlined the operational semantics of a communication in the typical process calculus, and it's pretty clearly a single atomic transition. So either you're talking about a different, non-standard semantic model, you're expressing something that doesn't actually exist in the semantic model, or there's some other huge difference in how we're interpreting these things that I'm not seeing. I would love to see an explanation of how your put/get concept fits into the usual labelled transition system semantics of a process calculus.

In process calculi, after both the "put" and the "get" have taken place, the communication can complete at any time.

And this is what makes me think we're talking about entirely different things. There is no "put" and "get" in the semantics of the typical synchronous process calculus, there is only an atomic communication. Maybe you're talking about implementation details. If you are, those lie below the level of abstraction that the process calculi operate at, and may depend on how things are actually being implemented.

Let's consider a literal message-passing system, and see if that can help clear things up (or at least help explain where I'm coming from). Imagine a cluster of desks, with a bunch of workers doing "stuff". Sometimes, one worker needs to pass a message to another. So they hold out their hand with a note, and wait until the note is taken. Communication occurs at the precise instant that the note is taken. There's no "putting" of a message into a "channel" or any other kind of storage. Nor is there any "getting". There's an "offer" of communication, in the sense that one process is in a state in which it's ready to make a particular transition (either sending or receiving a note), and then that offer gets "accepted" when another process also enters a state in which it can make the same shared transition. The event/action/synchronization/whatever is exactly the instant at which the note changes hands. Perhaps what you're calling the "put" and "get" are the offer and acceptance? I don't see offer/acceptance as quite the same as put/get, since the "offer" just means "I'm ready to do something" (like hand over a note). I can "offer" several different events at once (and let someone else decide which one to accept, then retract the other offers). I can only "put" one message.

In CSP I might model such a system as follows:

   AllanM = doSomeWork -> note -> AllanM
TomL = doOtherWork -> note -> TomL
Workspace = AllanM || TomL


A few things to note (haha) here:

1. I haven't specified any directionality on the note-passing. In this case the event simply indicates that we have had a shared interaction. It doesn't matter who is the sender or receiver and the concept of "putting" or "getting" is irrelevant. I'm just trying to model the general behavior of the system. I suppose in that regard you might consider the physical analog to be more like a "fist bump" than an actual passing of notes. But see my next point.

2. I haven't included any data in the notes. They're just raw events. I could add data, which would force us to identify senders and receivers. This would look more like the physical passing of a note. The sender would be the one that chooses the content of the shared event, while the receiver would have to be prepared to accept any possible note event. I could write that as input and output events, or I could model it as a sender that is prepared to perform a single event and a receiver that is prepared to perform multiple events (a choice). They're semantically the same thing.
3. There are no channels. I don't care who takes my note, I just want it taken. We could add more workers to the workspace, and whoever grabs a note first would get it. If I wanted to be more specific about who processes which note I might choose to pass certain notes east and certain notes north. Again, I don't care who or what is north of me (it might be you, it might be Carl), I just know that whatever is to the north of me gets notes that have certain kinds of content. This is something like a channel in a process calculus. It isn't a storage location. It's just a way to group a bunch of related events. Obviously if you're the only one to the north of me (the only one "looking at that channel") then a channel starts to look a bit like an Actor address.
4. I haven't really talked about choice operators here. But I think you can imagine situations in which I hold out several notes at once, or am willing to take a note from anyone who's offering one. We could probably come up with similar ideas to handle multiway synchronization.

Hopefully at this point you're in agreement with me that "channels" are not "fundamental" to process calculus, since my example didn't even use them. You may quibble on the whole "put"/"get" thing, depending on whether or not you interpret "put"/"get" as my "offer/accept".

Now at this point you may object that it's silly for me to sit there with my hand out, waiting for you to take my note. And perhaps it is, depending on the context and the kind of work that I'm doing (although the waiting and the consequent synchronization of our actions have the advantage of pruning the state-space of the composite Allan/Tom system, so it's easier to understand or analyze the composite behavior). I have a couple of options to get around that:

1. I can introduce a "buffering process". In the physical message-passing example, this might be a message tray that sits between you and me. As a process, it is always ready to accept a new note and always ready to provide a note if one is available. We could model the tray in different ways depending on what kind of message ordering discipline we want. So, here at last is your "putting" and "getting". When I hold out my hand I "put" the note into the tray (a single event). Some time later you may "get" a note from the tray (a temporally separate event). There is no assumption of simultaneity except on events that represent a single physical action in a single place. Our communications are decoupled.

    Workspace = (AllanM || MessageTray) || TomL


(in reality we'd either have to use different "note" events in our process descriptions, or use some event renaming, to make this work, but I'll skip over that for now).
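The one-slot MessageTray can itself be sketched as a little transition system (my own toy encoding, not CSP syntax): once the tray is in the picture, "put" and "get" become two distinct events that necessarily alternate in time.

```python
# Toy sketch: a one-slot tray. "put" and "get" are two distinct,
# temporally separated transitions rather than one shared event.
tray = {"empty": [("put", "full")], "full": [("get", "empty")]}

def traces(depth):
    """All event sequences of the tray that are `depth` steps long."""
    frontier = [("empty", [])]
    for _ in range(depth):
        frontier = [(nxt, tr + [lab])
                    for state, tr in frontier
                    for lab, nxt in tray[state]]
    return [tr for _, tr in frontier]

print(traces(4))   # [['put', 'get', 'put', 'get']]
```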

2. I can simply hold my hand out for you to take a note, and continue on with my work concurrently. This is, I think, something like what the Actor model does. A CSP-ish model of this would look something like:
    AllanM = doSomeWork -> ((note -> SKIP) ||| AllanM)


So now sending a note doesn't block me from working. I do some work, and when I want to send a note I concurrently offer up the note while at the same time I go back to my business. When the note is taken the left-hand sub-process ceases to exist. Of course, I would need an infinite number of hands to make this work in practice, or some way to bound the number of notes I can have "in flight" at once.

My understanding of the Actor model is that the last example I gave captures the flavor of message sending in that model (although without Actor addresses--but I can use channels to add something like an address). Maybe. I say maybe because the model I described still assumes "simultaneity" of the note event when it actually happens (which Carl says is unrealistic), but any kind of buffering or intermediate storage that would model non-simultaneous events seems at odds with the way Carl has previously described Actor messages.

Are we close to communicating here?

re So they hold out their hand with a note, and wait until the n

So they hold out their hand with a note, and wait until the note is taken.

That is informally called "put".

Putting a hand out to receive a note, and waiting until the note is put, is called "get".

The rationale (see Milner's CCS book for example) for the algebraic semantics is expressed in mechanistic terms founded in that get/put concept.

Guess I'm still waiting to "get" it

Yeah, I had a feeling you'd quibble on that point. Look, I haven't read Milner's book in about a decade. Maybe he uses the terms "put" and "get", which is where all this is coming from. My experience with process calculi has largely been with other formalisms (mostly CSP) that don't use that terminology at all.

The original claim was that "channels are fundamental to process calculi". I'd like to think I've demonstrated that's not true (my example involved no channels). You may disagree. If so, I don't see us making much further progress there.

There was also a claim that process calculi require "put" and "get". I maintain that this is not the case. You will not find "put" or "get" actions in the semantics of a typical synchronous process calculus. You will not even find "offers" (my terminology, I hasten to add--I don't know if anyone else uses it). What you will find is labelled transition systems. And rules about when transitions can be taken. What I described as an "offer" simply means that a particular process is in a state in which it is able to take a particular transition. Possibly more than one transition. There is no distinction between whether the transition represents what you call a "put" or a "get". It is simply a transition that, if the process is composed with another process that has a corresponding transition, will not proceed until both processes can make the transition. That's it. No puts. No gets. Just transitions.
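The "just transitions" view can be made concrete with a tiny labelled-transition-system product in Python (my own sketch, not any particular tool's semantics): synchronization on a shared label is a single joint step of the composite state, with no intermediate "message in flight".

```python
# Minimal LTS-composition sketch: processes as {state: [(label, next)]}.
# A shared label fires only as one joint transition of both processes.
R = {"r0": [("a", "r1")], "r1": []}   # ready to engage in "a"
S = {"s0": [("a", "s1")], "s1": []}   # also ready to engage in "a"

def joint_steps(p, q, sp, sq, shared):
    """Transitions of (p || q) from composite state (sp, sq)."""
    out = []
    for lab, np_ in p[sp]:
        if lab in shared:
            out += [(lab, (np_, nq)) for l2, nq in q[sq] if l2 == lab]
        else:
            out.append((lab, (np_, sq)))
    out += [(lab, (sp, nq)) for lab, nq in q[sq] if lab not in shared]
    return out

print(joint_steps(R, S, "r0", "s0", {"a"}))
# -> [('a', ('r1', 's1'))]: one atomic step, no put/get in between
```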

Now, I will concede that if you were to implement such a composite transition system you might well end up using "puts" and "gets" into "channels" (although perhaps not--"offers" can be rescinded, while "puts" or "gets" cannot). But that is an implementation detail. There's nothing in the labelled transition system semantics that requires synchronous transitions to be implemented that way. The "puts" and "gets" you refer to are an implementation detail of process calculi to the same degree that they are an implementation detail of Actor communications. Again, maybe you disagree (Carl apparently does). If so, I'm not sure what else to say. I'll just go back to lurking, and wait to "get" it.

Simultaneity impossible: physically "put" & "get" are necessary

Since simultaneity is impossible in different processes, it is necessary to use something like put and get communications for a physical implementation of process calculi.

That's certainly one

That's certainly one possible approach.

Is there another possible implementation beyond "put" and "get"?

Is there another possible physical implementation beyond put and get communications?

Human beings sitting at desks, holding their hands out.

blurry data pub/sub seems less discrete than put/get

Maybe, when circumstances seem blurry because some essential piece is spread out, and doesn't seem clearly attached to a put or get.

Most higher level logical/abstract communication can be turned into put and get at a low level, but the flavor seems to change depending on when synchronization occurs. The part about sync is going to be hard to describe, but I'll try. At a high level, several puts might occur in a batch before being validated en masse. A reader might see partial output that is ignored when it doesn't pan out. I'll give an example that assumes some form of tuple space. (One node might use shared memory; multiple nodes might use a distributed version that need not be globally consistent.)

It has a stochastic quality, and can let a reader find a hit on something that was not actually an individual entry put by a writer (although this is exceedingly unlikely). But as long as a reader is satisfied with the bits, it doesn't matter. The semantics of get seem ruined by anonymization of content, erasure of sender, and an inability to establish a transaction boundary. While this example sounds weird, it might be that other examples of poor fit with put/get also sound weird.

There are hardware

There are hardware implementations that don't use "puts" or "gets" at all. Software implementations with suitable runtime infrastructure can probably do the same. Is there an Actor implementation that doesn't use puts and gets?

I recommend it, at least the interesting bits.

Maybe he uses the terms "put" and "get",

No, but he describes equivalent abstractions while developing a mechanistic interpretation of concurrency which the algebra is supposed to model.

What I described as an "offer" simply means that a particular process is in a state in which it is able to take a particular transition.

 r1 = βr2 + τr2
r2 = γNIL

where τ indicates an allowable ε transition
that can be taken any time depending on the "weather"
(in Milner's terms)


In state r1 some other part of the system might "put" a "c" (on the γ channel) and it will only eventually be picked up by the "get" in state r2.

If the first thing that happens is the ε transition to state r2, then this machine is waiting in a "get" until some other part of the system does a "put" on γ.

The "allowable transitions" in process calculi are like this. They model the relative timings of "puts" and "gets" (even though in his discussion of "mutual experimentation" Milner does not use those specific two words).

There is no "putting" of c

There is no "putting" of c onto a "channel". There's a transition system ready to perform a "c" transition (or, if you want to include the identifier, a γ.c transition). It might also be ready to perform "a" and "b" transitions for all we know (in the same way that r1 can initially perform a β or τ transition). If your process r1 takes the τ transition (which represents a hidden internal action of the process--thus the "state of the weather" comment) then both transition systems are now in a state where the "c" transition can happen. This requires no "putting" or "getting" on any channel unless (a) you stretch the terms "put", "get", and "channel" to an extent that makes their intuitive value as an informal description worthless, or (b) you're talking about a way to implement the transition system behavior (which may be done with puts and gets on channels).

If (a) then it doesn't matter what I say, because anything I describe you will label as a "put" or "get" regardless of how meaningful those words are. If (b), then, well, I've said before that puts and gets are implementation details but not a required part of the semantics.

Inefficient equiv of "put" and "get" required to implement ||

Since simultaneous change is impossible in different processes, the inefficient equivalent of put and get communications is required to physically implement || in process calculi. No such inefficient indirection is required for communication in the Actor Model.

Implementation detail

So you agree that what you're talking about when you talk of "puts" and "gets" is an implementation detail?

Not a detail: significant required inefficiency in computation

Significant required inefficiency in computation (e.g. using process calculi) is not a detail.

[Edited to more clearly make distinction between carrying out computation and modeling computation.]

See my other comment about "foundational". Foundational for implementation is not the same thing as foundational for modeling and analysis.

Also not a detail: unrealistic modeling of computation using ||

Unrealistic modeling of computation using || (i.e. imagining simultaneous change in different processes) is also not a detail.

Done

If you say so.

To me it looks like a simplifying abstraction that can be valid in certain contexts (in the same way that we imagine planets are point masses, resistors have no capacitance, and digital logic doesn't include transmission line effects). You obviously disagree. But that strikes me as a subjective judgement rather than anything for which either of us can offer objective proof. Which means this argument is unlikely ever to be resolved. And I'm sick of arguing about it. So I will quit for now.

Semantically Equivalent

During the transition the value "c" is communicated between the two processes. It is possible to throw away the value, or use some kind of void, but semantically this is still a synchronous write and a synchronous read of a value between two processes.

Barriers (which I think you are describing as offers/events?) are one way of implementing synchronous channels. Synchronous channels provide one technique for implementing barriers between processes. The get operation is a blocking wait state inside a process until there is a synchronisation with another process across the barrier. Yes, this is what happens in your note-passing example, and what you are describing is a get/put operation.

What Hewitt has said is roughly correct about simultaneous operations, but there is a caveat that he has not mentioned. It is impossible to execute two operations at the same time in different clock domains. It is entirely possible to execute two operations at the same time if they share a clock domain (i.e. if they are software processes on the same processor with common clock cycles). It is also entirely possible to ensure that two operations in different domains have a fixed order in time, most commonly using a multi-phase protocol to implement barriers - which are semantically equivalent to pairs of put and get operations.
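The multi-phase-protocol point can be illustrated with a stock barrier (a sketch using Python's standard library; real hardware protocols are of course more involved): neither thread proceeds past the synchronization point until both have arrived, which is how a "joint" step gets realized without a shared clock.

```python
# Sketch: a 2-party barrier forces both threads to reach the
# synchronization point before either proceeds past it.
import threading

barrier = threading.Barrier(2)
after = []                       # appended to only past the barrier

def worker(name):
    barrier.wait()               # blocks until both parties arrive
    after.append(name)

threads = [threading.Thread(target=worker, args=(n,))
           for n in ("allan", "tom")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(after))             # ['allan', 'tom']
```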

The meaning of "get"

The get operation is a blocking wait state inside a process until there is a synchronisation with another process across the barrier. Yes, this is what happens in your note-passing example, and what you are describing is a get/put operation.

If "get" means "process in a blocking state" then your definition of "get" is apparently quite different from mine.

This whole argument originated because Carl insisted that synchronous events as defined by the semantics of the typical synchronous process calculus involve a process "putting" something into a channel, and "getting" something from a channel. But now "put" apparently involves blocking with something in my hand, and "get" involves blocking until something is available. Even though I'm not putting things anywhere, and I only "get" something directly from the process I'm communicating with, not from a channel.

So yes, if you want to change the definition of "put" and "get" to mean "process blocks until other process is ready", then there are put and get operations involved. But those "put" and "get" operations don't actually involve putting anything anywhere or getting anything from anywhere (I suppose the note-taking example does involve "getting" something--directly from the other process, not from a "channel"--but the fist-bump example does not).

Let's simplify things further: if I stick my hand out waiting for a fist-bump, am I "putting" or "getting"? When someone fist-bumps me, are they "putting" or "getting"? Or does my hand being out simply signify that I'm in a state waiting for an event?

Just an abstraction

Process calculi simply assume that something sent will usually be received at some point, and abstract away everything else. That is modeled through a rewrite.

I liked your other reasoning more; it's just an abstraction to model message exchanging processes.

I liked your other reasoning more; it's just an abstraction to model message exchanging processes.

And the rest is implementation details...

Yeah, that's what I've been trying to say in a few different ways. I guess you're right that my other approach was better.

Yah

Exactly, implementation detail. May be RS-232, an email exchange, a neuron firing, or a biochemical process.

The key is that a message is exchanged (or not), usually nothing more is modeled.

What does it buy you: assumption of simultaneous change

What does the following buy you:


assumption of simultaneous change in different processes


Simpler analysis

A smaller state space, which makes the system easier to reason about or analyze.

Assuming simultaneous change is an alternative unreal space

Assuming simultaneous change in different processes is an alternative unreal space that can lead to the following problems:
* Livelock
* Serious inefficiencies
* Lack of appropriate modularity and security including denial of service

It would be very useful for someone to document how each of the above has actually occurred as a consequence of the erroneous assumption of simultaneous change in different processes :-( Simultaneous change in different processes is not a simpler conceptual space.

Unfortunately, it's quite easy to get into deep trouble by making false assumptions.

Done. Again

And it looks like I'm done on this subthread too, because we're back to the same subjective argument about whether a simplifying abstraction can be useful.

Global state space: not practical or useful for concurrency

For massive concurrency, the global state space model is:
* impractical
* not conceptually useful
* technically incorrect

Small state space

Do you mean simultaneous or independent? If two transition systems can execute without affecting each other then the size of the combined space behaves more like a sum than a product. This would seem to be what you are describing, with barriers / events / messages between the critical points?

Shared transitions

I was talking about the labeled transition system semantics of something like CSP, in which individual transition systems take independent transitions whenever they like, but shared transitions (what you're calling barriers/events/messages) are taken together (I think I used the word "simultaneously" to describe that at some point, which is why it's even in this conversation). The shared transitions (as opposed to separate message send and receive actions) prune the state space by restricting what the interacting transition systems can do on certain steps.

Simultaneous process changes (i.e. shared transitions): mistake

Simultaneous process changes (i.e. shared transitions) are a mistake for both modeling and implementation as discussed elsewhere in this LtU topic.

CSP

Last time I looked at the semantics of CSP was a long time ago, but in the operational semantics there are get and put operations that occur together in the shared transitions. This may be completely different if CSP is defined in a denotational style.

As far as I understand it, shared transitions (that is, those that are guarded on paired events in the semantics) are equivalent to barriers. A barrier is simply a state in which all of the participating processes wait until they are all released together. All synchronisation primitives require some form of wait until a mutually agreed point in time. If you consider a channel that is void (no values transmitted) then it is purely a synchronisation primitive, that is, a barrier for exactly two processes. The first process to enter the barrier could be either putting or getting (or fist bumping, for example); the synchronisation simply forms an agreement that everyone is past the entry point, after which the processes are released to continue transitions in any arbitrary order.

In this way there is not an interval of time that is being synchronised - one where the actions of the processes would be simultaneous within the interval. That would always require a shared clock to divide time into smaller barrier-regions. Instead there is a partial agreement that no process passed the boundary until they all arrived at it. This is a somewhat weaker, but practical, form of synchronisation.

In the case of a barrier the role of every process is symmetric (I think you mentioned this a few posts ago). When a value communication is tied to the synchronisation, the only difference between the "putter" and the "getter" is whether the value is transmitted or received. In terms of the synchronisation itself there is no difference in roles between the two sides, as symmetry in behaviour is required for proper synchronisation (e.g. the relative ordering of the getter and putter must make no difference).

CSP is a process calculus (now)

Though CSP started off as a programming language it is now widely used as a formal modelling language. You talk about CSP as if it still is a programming language.

Compare it to physics: you have theories of physics which allow you to develop better tools, like a hammer. You use physics to develop better hammers, and then buildings; conversely, you don't use hammers to create a better physics.

CSP is a process calculus (now), it doesn't make much sense to talk about it as if it is a programming language. CSP has a notion of exchanging messages, how that notion is implemented is irrelevant.

You're explaining physics as if it is a bunch of hammers.

CSP is ambiguous: programming language *and* process calculus

CSP (Communicating Sequential Processes) is ambiguous: both programming language and process calculus. There are those who are fans of the programming language and its derivatives who are dubious about the later developed process calculus that was given the same name.

Which programming language

Which programming language would that be? I only find verification tools for CSP on the wikipedia page. Looks like they mostly formally verify protocols and translate those to LTS or BDD, or whatever, models.

If there is a programming language for it, I've never heard of it. (Except that some constructs find their way into PLs like occam or go.)

CSP programming language in CACM [Hoare 1978]

According to Brookes:

Tony Hoare’s 1978 paper introducing the programming language Communicating Sequential Processes is now a classic. CSP treated input and output [e.g., put and get] as fundamental programming primitives, and included a simple form of parallel composition based on synchronized communication. Ideas from CSP have influenced the design of more recent programming languages such as Occam and Ada.


CSP is a process calculus (now)

We already discussed, and I acknowledged, that CSP once started out as a programming language.

Things change. I don't go around proclaiming that "nice" should still be equivalent to "foolish."

CSP is a process calculus, and has been for many, many years.

Occam

When we were taught CSP it was in two different contexts. Occam is one language for parallel systems (transputers) that implements the semantics of CSP (more or less). Then we saw the same language in formal modeling as CSP (with some translations to CCS). This was in the late 90s.

In either case there is a language defined as a particular syntax with a semantics that describes the execution of a system. If the semantics are defined as standard operational semantics then it looks more like a programming language; I've also seen similar process calculus models defined purely in terms of graph reductions, where it looks more like a system for modeling / analysis. Apart from the implementation details of Occam, which make it slightly different, I thought they were the same thing: CSP is a programming language in the sense that one can write programs in it?

CSP is a process calculus (now)

CSP is used as a specification and modelling language. If you visit the Wikipedia page you'll find it is used as the input language for formal verification tools which either translate the specification to a labeled transition system or a (relation represented as a) binary decision diagram.

It also influenced a number of programming languages. (Most of them hardly used, or hardly used anymore.)

You can sometimes blur the line between specification and programming languages but I certainly don't feel like opening that can of worms here.

In practice, specification and programming languages are two separate things, CSP is a process calculus used as a specification language, and that's good enough for me.

"it depends on the mean of is"

e.g.:

CSP is a process calculus used as a specification language, and that's good enough for me.

It has other aspects, too. Other people are talking about those.

Better Analogy

You have math. In math you can define the factorial function.

fac 5 = 120

You don't explain the definition of fac in terms of a programming language (as if it computes something). It doesn't take time, fac(5) simply is 120.

It's the same with CSP. Two processes can exchange a message. There is no notion of timers, "putters" and "getters", boundaries, etc.

Two processes exchanged a message, or they didn't. CSP simply doesn't model more. In reality, that may be an email exchange, a neuron firing, or a biochemical process. But you abstracted from all that, so you don't talk about "timers", like you don't talk about the work a computer needs to do to compute fac(5).

CSP isn't a programming language (anymore).

CSP semantics

I'm not sure which CSP semantics you've seen. Since at least Hoare's 1984 book it's had a denotational semantics. Roscoe's 1997 "Theory and Practice of Concurrency" includes mutually consistent denotational, algebraic, and operational semantics. All take a view of events that is roughly the same as the CCS labeled transition system semantics. Shared transitions involve two (or more) transition systems taking a step at the same "time" (in quotes because these are untimed systems). There's no concept of putting or getting, and no specified implementation. There are no channels, except as syntactic sugar over events.

Even values can be transmitted through synchronization without "putters" and "getters". If I hold out two fists, and you bump one, you've transmitted information (binary, but still data) to me. That didn't occur because I "got" anything from a channel. I was in a state where I could take one of two possible transitions, and your holding out a single fist (the only transition you're willing to take) makes a choice for me. The roles here are still symmetric--we're both just holding out fists that signify available transitions.
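One way to encode the two-fists idea (an assumed encoding of mine, not anything from the CSP literature) is to represent each side purely by the set of events it currently offers. Synchronisation is then just intersection of the offered sets; a side offering a single event resolves the other side's choice, and one bit is transmitted without any channel, putter, or getter.

```python
def synchronise(offers_a, offers_b):
    """Both sides offer sets of events; a synchronisation is any common event."""
    common = offers_a & offers_b
    assert common, "deadlock: no shared event is offered"
    # With more than one common event the choice would be external
    # (made by the environment); here there is exactly one.
    return common.pop()

me = {"left", "right"}    # both fists out: ready to take either transition
you = {"right"}           # one fist out: willing to take only one transition
assert synchronise(me, you) == "right"  # one bit of data, purely by selection
```

The roles stay symmetric: both sides do nothing but advertise which transitions they are willing to take.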

No promises

It has been a very long time, but I will dig through some old notes and see what I can find. The notes are about 15 years old and I have no idea where they are, so this may take some time to produce an answer :)

Edit: Ah, saved from trawling through boxes by Google! An Operational Semantics for CSP, Gordon Plotkin, 1983.
An Operational Semantics for CSP, S.D. Brookes, A.W. Roscoe and D.J. Walker, 1986.

In both cases these look like the big-step versions of the semantics. Evolution of the system is described as a single equation that involves a pair of discrete transitions, one on the sending and one on the receiving side. Informally these transitions would be the get and the put.

Although Marco believes that CSP is not a language I have seen many implementations of CSP embedded within other languages. To actually implement these semantics in a language requires a finer grained description of the system; the small-step semantics of the language. At this point it is necessary to define how the system (as a whole) detects that there is an available transition made of smaller steps in different processes. The most common way to implement this is with a multi-phase blocking protocol. I cannot think of another way to implement the synchronisation that would be efficient, in terms of constant factors in the processes doing the synchronisation, or in the number of messages exchanged between nodes executing the program. Which is certainly no guarantee that there is none, but many people have looked at the problem and implemented the same solution.
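One possible shape of such a blocking protocol (a minimal sketch of my own, using Python's threading.Event; a real implementation would also need choice, timeouts, and reuse) is a one-shot, two-phase handshake: the sender parks its value and blocks until the receiver has taken it, so neither side proceeds past the synchronisation point alone.

```python
import threading

class Rendezvous:
    """One-shot value-passing rendezvous built from blocking primitives."""
    def __init__(self):
        self._value = None
        self._ready = threading.Event()   # phase 1: sender has arrived
        self._taken = threading.Event()   # phase 2: receiver has taken the value

    def put(self, value):
        self._value = value
        self._ready.set()
        self._taken.wait()                # block until the get has completed

    def get(self):
        self._ready.wait()                # block until a put is pending
        value = self._value
        self._taken.set()
        return value

r = Rendezvous()
out = []
t = threading.Thread(target=lambda: out.append(r.get()))
t.start()
r.put("note")                             # returns only after the get
t.join()
assert out == ["note"]
```

Whether you read `put`/`get` here as channel operations or just as the two halves of a barrier is, as the thread shows, largely a matter of which level of abstraction you are standing at.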

From a denotational / big-step operational view we can describe the system as evolving simply by taking the transitions that pair up processes in synchronisation events. But this is quite an abstract viewpoint, and these semantics are inherently sequential. The execution ordering of the transitions may be confluent, modelling independence between computations in the processes in the system, but the actual execution semantics of the transition system are operating on a single machine with a global view of the system. Implementing the semantics of CSP inside a real system requires a semantics that can operate with only local information about the state of a process, and a relevant protocol to share information between processes for communication and synchronisation.

Looping back around to an original point (not the original point of the thread, just the comment that I hopped in to reply to): barriers and get/put are equivalent implementations of CSP in a parallel system. It is possible to consider a more abstract model of CSP as a single global transition system, and this is exactly how it is used in the modelling community. In that more abstract formulation the details that separate get/put and barriers are removed. It is then tempting to say "but there is no get/put, these are just paired transitions" and that is entirely true in one sense. But if you model a system with get/put at that level of abstraction you cannot see the individual gets and puts anyway, as they are a detail that has been abstracted away. At a more concrete level you have to look at how the system evolves as islands of local information: processes with real separation. At that level the differences between get/put and barriers are more obvious - but they are still broadly equivalent in operation (modulo communicating the value).

Anyway, perhaps this is useful in the context of your earlier comments about "not getting Milner" as the confusion seemed to lie in things that are roughly equivalent, but not quite the same at different levels of abstraction.

Thanks for taking the time

Thanks for taking the time to dig those articles up. I haven't looked at Plotkin's semantics before. The Brookes/Roscoe/Walker version looks fairly similar to the semantics that appears in Roscoe's later book.

Evolution of the system is described as a single equation that involves a pair of discrete transitions, one on the sending and one on the receiving side. Informally these transitions would be the get and the put.

Maybe we're reading those equations differently. For parallel compositions I see a rule that essentially says "if each process can make a transition from P to P', and the transition's event is a shared one, then both processes make the P to P' transition at the same time, at an occurrence of that event".

If you want to call the sub-transitions made by each component process "put" and "get"... well, ok, but those are not the temporally separate events that Thomas was talking about. Nor do they necessarily involve any putting into or getting from channels (Plotkin's does, Brookes' deals directly with events), as Hewitt claims.

The rest of what you say I pretty much agree with. There are implementation details that may involve some kind of blocking protocol if you're implementing on a distributed system. I've said that all along (although it's curious how the original claim--not yours--of "put and get with channels" has evolved to "no channels and just blocking but we'll call it put and get anyway"). I'm not convinced that such a scheme is required though. There are hardware implementations of process models that use state-monitoring instead of blocking protocols. The same thing could be done on single processor software systems. And CSP models that represent non-software systems, like fist bumping people, don't necessarily require an explicit protocol for their implementation.

Roscoe clearly states specification language

Both the mentioned articles seem to get rephrased in Roscoe's treatment of CSP. Both give an operational semantics to CSP, I didn't read them closely, but both seem to give CSP an abstract (trace) semantics over an LTS. That doesn't imply programming language at all.

I have never said it isn't a language, I have stated that CSP is a specification language, not a programming language. Moreover, Roscoe goes out of his way to make sure the reader understand that CSP describes specifications.

As he says it:

One of the fundamental features of CSP is that it can serve as a notation for writing programs which are close to implementation, as a way of constructing specifications which may be remote from implementation, and as a calculus for reasoning about both of these things – and often comparing the two. For this reason it contains a number of operators which would either be hard to implement in a truly parallel system, or which represent some ‘bad’ forms of behaviour, thus making them unlikely candidates for use in programs as such.

but the actual execution semantics of the transition system are operating on a single machine with a global view of the system.

This simply isn't true. The transition system may be widely distributed; for example, processes exchanging email over the Internet, or the distributed secure handling of bank transactions (where even a human could be modeled as a transition system), or neurons firing.

CSP can be used to model low-level systems, it doesn't imply it must be used as, or is, a low-level programming language. It is a specification language.

[EDIT: I agree with your analysis that certain implementations will need to make sure that certain details are met. But the only assumption of synchronizing two LTSs on a message is that a message send is received at some point. You don't need to implement explicit synchronization details for that. It's just an assumption; if that breaks down, then you're simply talking about unspecified behavior.]

[EDIT2: I guess a better way of phrasing that is that in CSP you usually give specifications of programs. That doesn't mean you're programming. My students often made the same mistake.]

Dangers of wandering into unreality

There are clear dangers of wandering into unrealities like the following:
* simultaneous change in different processes
* global states

How can these dangers be mitigated?

reality and signals

The classic Leslie Lamport papers really give the right mindset to get where you are coming from here, even though they don't develop your particular point explicitly.

The thing to be modelled here is components with space-like separation. Information takes noticeable time to propagate between two literally concurrent processes. Physical simultaneity is really the assertion of no causal relation between two events, along with observer-dependent ambiguity with regards to what order the events took place in.

One implication is that every semantic of instantaneous simultaneity between space-like separated events is really (when reduced to physical implementation) a description of some synchronization protocol involving multiple round-trips between the two world-lines that are "instantaneously" communicating.

The abstract semantics of instantly simultaneous communication between two physically separate, concurrent processes is really an abstract description of a process that takes a definite, non-instantaneous duration of time -- the time for that sync protocol -- which from time to time you describe as the inherent "overhead" cost of such semantics.

Lamport and Hewitt both take information propagating, in non-0 time, from one world line to another as fundamental, and any synchronization has to be modeled in terms of that.

p.s.: I think reality is liberating in the sense that once you stop pretending instantaneous communication is real, you stop committing yourself to just one sync semantic in a world that includes richer possibilities.

latency

Simultaneity can be causally related to some upstream event. E.g. if I clap, sound simultaneously reaches multiple surfaces at a given radius. What Hewitt fails to model is the latency: his models assume propagation times are not merely non-zero (which is physically realistic) but also unbounded and non-deterministic (which is not so physically realistic). He appears to conflate simultaneous change with instantaneous propagation of information.

In reality, physical systems frequently depend on predictable timings. Your car engine wouldn't run without them. Neither would your computer.

If our software is to model our physical machines, there is no particular reason we cannot depend on simultaneous change. Some software systems leverage this - e.g. cellular automata, time warp, FRP, Chuck. Doing so can even scale, so long as the system is expressive enough to model dynamic routing.
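The cellular automaton case mentioned above can be sketched in a few lines (my own illustration; rule 110 is an arbitrary choice). Every cell "changes at once" in the model, and the implementation achieves that on a sequential machine simply by computing the next generation from an immutable snapshot of the current one.

```python
RULE = 110  # Wolfram rule number: bit v of RULE is the output for neighbourhood v

def step(cells):
    n = len(cells)
    # Every new cell is a function of the OLD neighbourhood only, so the
    # update is logically simultaneous even though it runs sequentially.
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

gen0 = [0, 0, 0, 1, 0, 0, 0]
gen1 = step(gen0)
assert gen1 == [0, 0, 1, 1, 0, 0, 0]
```

The double-buffering trick (read only `cells`, write only the new list) is the whole mechanism by which "simultaneous change" is made dependable here.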

meta-suggestion re latency

In reality, physical systems frequently depend on predictable timings. Your car engine wouldn't run without them. Neither would your computer.

Car engines provide a great example.

Old enough car engines lack any digital computers. Nevertheless, as you say, their systems require the somewhat tight synchronization of multiple components.

One common car engine synchronization mechanism is the distributor. A mechanical rotor spins, opening and closing circuits as it goes, thus firing the spark plugs in the engine's cylinders according to a fixed, cyclic pattern. Relevant to what I was saying about the overhead of synchronization: a distributor must be designed so that each circuit, once closed, remains closed for a minimum delta of time, long enough for a strong enough current to flow to cause the spark to ignite the fuel-air mixture in a cylinder.

Along similar lines, and closely related, is the timing belt. The timed combustions in cylinders, whose spark is orchestrated by the distributor, rotate the crankshaft. Some of the energy of the crankshaft is used to turn a gear which is connected by the timing belt (a notched belt, typically) to the camshaft. In turn, the camshaft drives the opening and closing of valves that admit fuel to cylinders just-in-time for ignition, and release exhaust from cylinders after combustion.

That system, too, is really quite elegant. Look at the wikipedia entry for "timing belt", please, if you are not familiar with this hardware. Look carefully at the belt itself. The inner surface is notched in a square wave pattern:


_   _   _
| |_| |_| |_



I want to call your attention to the width of those notches, and note how they correspond to the gear that turns them.

The notches are needed to keep the belt from slipping (at all! between repairs) relative to the gear.

The materials involved, gear and belt, determine a minimum width for those notches. If they are too thin, the belt will slip anyway.

Please notice: the minimum time that can separate two events controlled by the timing belt is determined by the number of notches to pass a given point per unit time.

That belt width, interpreted as that granularity of synchronization, is the physical manifestation of the overhead of synchronous communication.

Is that neat?

If our software is to model our physical machines, there is no particular reason we cannot depend on simultaneous change

Now you know of some particular reasons, I hope.

A lovely analogy, and one

A lovely analogy, and one that ties synchronisation concerns in software directly to the same concerns in a physical machine. This makes me wonder about another aspect of this conversation, one which I believe I've tried to ask Hewitt about before, without really getting anywhere. I can't remember if you have already answered this somewhere earlier in one of the many conversations about this topic, so apologies if you already have.

How does unbounded nondeterminism match the behaviour of any machine implementable in our physics? Every machine that I have encountered either propagates a message in a finite length of time, or it fails. I have never seen an example of a machine that can delay a single message indefinitely, let alone many.

Some examples have been proposed: e.g. email servers. But this is a particularly bad example, as a mail server cannot delay a message indefinitely. Due to finite resources it abandons its attempt after a constant number of delivery attempts and drops the message. Message routing across the Internet was proposed as another example, but routers have finite-sized buffers, so they either deliver messages (because the outgoing link is free) or the buffer fills and all new messages fail.

Is there any reason to believe that unbounded nondeterminism is implementable on a real machine, or is it simply a theoretical consideration in the same vein as Hypercomputation?

Maybe that's why none of LTU's obsessions are relevant

If we started with a mathematics and logic where everything is assumed to be explicitly finite, perhaps we'd stop all of this silly obsessing over fixed points, incompleteness, "countability" ...

In a machine or network of machines where memory is finite, time is limited, people's patience is limited, and speed is limited, of course "logic" will be incomplete; who cares? Of course no message can wait forever - uhm, write your protocols to be fair and to handle failure. Etc.

Maybe if the model matched the problem (finite everything) the discussions would actually be relevant to computing.

But the real work of computing is in engineering, and I don't get the impression that people here are interested in that work. Do people talk about USING languages to accomplish anything? Do they talk about the process of engineering?

real machines and unbounded nondeterminism (reply to Andrew Moss)

How does unbounded nondeterminism match the behaviour of any machine implementable in our physics?

If you generate a stream of electrons and measure their squared spin then with probability 1, you'll eventually obtain both a 0 and a 1. Also, for every n ∈ ℕ, with probability 1, you will eventually obtain a run of n 0s followed by a 1 (and likewise, n 1s followed by a 0). (We have abstracted away such things as (a) the possible heat-death of the universe; (b) an empirical refutation of some pretty solidly observed (afaik) QM phenomena.)

Classically, a machine that performs fair flips of a coin has analogous statistical properties.
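The classical analogue is easy to simulate (a sketch of mine, seeded so it is reproducible): a fair coin, flipped repeatedly, eventually produces any finite pattern with probability 1, yet no finite bound on the number of flips can be given in advance.

```python
import random

rng = random.Random(0)  # seeded for reproducibility of the sketch

def flips_until_run(n):
    """Flip until we see n consecutive 0s followed by a 1; return the flip count."""
    history, count = [], 0
    while True:
        history.append(rng.randint(0, 1))
        count += 1
        if (len(history) > n
                and history[-1] == 1
                and history[-n - 1:-1] == [0] * n):
            return count

# Terminates with probability 1 for every n, but the count is unbounded:
# no n+1 flips are ever guaranteed to suffice.
assert flips_until_run(1) >= 2
assert flips_until_run(3) >= 4
```

This is exactly the tension the thread circles around: "terminates with probability 1" is a different property from "terminates within some fixed bound".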

As a semantic model, unbounded non-determinism is also a suitable model of the delivery time of a UDP packet. Unless there is some other source of information, systems cannot distinguish a packet that will never be delivered from a packet that has not yet been delivered.

Also, an unbounded nondeterministic semantic is appropriate for the outputs of some deterministic programs, in some contexts.

For example, if a black-box deterministically issues successive digits of *some* constructible irrational real number (but we don't know which one because for analytic purposes it is a "black box"), then unbounded nondeterminism is the least dangerous model of the statistical properties of digits output, compared to models that guess something like "Oh, after seeing 100 digits: that's probably pi".

Randomness

As I have pointed out before: We don't know whether we can build a truly random machine. Physically, it might be impossible; that's why I challenged you to construct one.

People have tried, we only know that some machines pass certain tests within a certain confidence interval.

science and randomness

We don't know whether we can build a truly random machine.

We have a scientific theory that predicts that we can. People do build machines of the sort that theory predicts. The theory and the machines are (by all reports) among the best empirically verified predictions that exist.

Knowledge that such machines are not only possible but in fact exist is, I would agree, contingent knowledge. It could be overturned by experiments that contradict the existing theory.

If you can come up with such experiments, you will be world famous and probably people will throw lots of money at you. (I'm not sure if that is incentive or discouragement, though.)

We don't have anything of the sort

We have a scientific theory that predicts that we can.

Says who? You? Back it up. We don't have anything of the sort.

The knowledge is that machines pass certain confidence intervals, and that -apart from that- we don't know whether we have random machines.

An ideal random number generator is a (mathematical) fiction. (Schindler and Killmann)

marco look into QM

The knowledge is that machines pass certain confidence intervals, and that -apart from that- we don't know whether we have random machines.

By extension, we don't even know what we know in the sense that our memories of past experiments may be false. But here you are just drifting into pretty trite epistemological sophistry.

Thomas look into Physical Random Number Generators

Bullcrap. You need to construct a physical machine which produces true random numbers.

I simply know you can't.

Don't turn your own ignorance on me.

re random number generators

You need to construct a physical machine which produces true random numbers.

I simply know you can't.

A homemade unit isn't hard to build, but for commercial purposes you might want to buy one like this:

http://www.picoquant.com/products/category/quantum-random-number-generator/pqrng-150-quantum-random-number-generator

As I said: Passes tests

The random numbers delivered by the PQRNG 150 have successfully passed the most stringent testing possible today.

It passes tests. As I said, nobody knows whether that machine is a true random generator. There simply are philosophical, theoretical, physical, and technological reasons why you cannot know (and why it is unlikely).

One of the things a machine like that likely does, for instance, is simply switch off for a while if for whatever reason the source of entropy is contaminated.

Nobody knows whether you can truly construct a coin-flip device; we only know that some devices pass tests, and - lastly - a device being sold as a true random machine is more likely propaganda than anything else.

It simply passes some (inconclusive) tests. If it were known to be truly random it wouldn't even list that it passes tests.

[ Edit: Stated differently: Most people accept that we likely cannot physically create truly random machines, but we can create machines which pass (inconclusive) tests. ]

Probability

Ok, that is a useful example, and it ties back directly to the busy beaver problem and Hewitt's claims of an integer counting problem that separates the class of actor computations from those achievable by a Turing Machine. Randomness is something real and tangible that provides a concrete handle on why this would be useful.

Unbounded non-determinism is not a useful model of UDP packets though. In a coin toss I could choose to model the outcome as heads, or unknown. This would be valid, but only as an approximation that loses most information. UDP packets do not travel in cycles. They follow straight-line paths with bounded possible delays at each point, or a possibility of failure. Bounded nondeterminism is a more useful model for UDP packet times. There are no year-old UDP packets living out their retirement in Florida that may choose to get delivered next year. We can choose a time t such that any packet not delivered within t is a definite failure. It is a series of events that can be thresholded with perfect accuracy.

As a follow-up, unbounded nondeterminism is also achievable with probabilistic Turing Machines. In particular the provided example of building a random string of unbounded length fits nicely onto a simple probabilistic Turing Machine. What are the differences between the Actor model and other forms of computation? Is there something in particular that makes it useful?

Are you sure?

Well, I am not sure either, but it looks to me that you're confusing non-determinism with probability. At minimum it looks like you have a different opinion on what non-determinism is.

Given a choice between three doors, with behind one a gift, you can write a non-deterministic algorithm which will always open the right door.

That looks foolish, but is convenient when writing specifications. Any sorting algorithm is equivalent to a (simple) non-deterministic algorithm which 'chooses' the ordered sequence.

You cannot solve that with probabilism. I don't have the feeling unbounded non-determinism is the same as probabilism at all.

Sure?

I would not go as far as being sure. My line of reasoning was that a machine with probabilistic transitions can either write a zero and loop, or write a one and terminate, with some probability assigned to both transitions that adds up to one. This is (as far as I understand it) one form of non-determinism.

This probably differs from Thomas' example in termination conditions, although I'm not exactly clear on the details of how.

The corresponding concept for unbounded nondeterminism on probabilistic Turing machines is termination with probability one. Isn't this the situation with UDP? Delivery time isn't semantically bounded, but the odds of a packet surviving for more than a few minutes are effectively zero.

Unbounded nondeterminism seems like it might be a useful simplifying fiction for proving termination results, but doesn't correspond to anything physically implementable and even if it did, there's no observable difference between "will terminate eventually" and "might not terminate".

A message can take a long time to be received

A message can take a long time to be received. Postal messages have been delayed for decades. A similar thing could happen if a mail server in a war zone temporarily went down for a long time.

UDP nits and unbounded nondeterminism

Saying that a UDP packet can take arbitrarily long certainly is an extreme abstraction that allows for possibilities like Internet nodes moving at relativistic speeds to one another, or nodes with wildly out of sync clocks.

In ActorScript there is a taming concept I haven't fully digested, but it goes like this: 1) If you send a message to an actor the reply may or may not ever be sent. 2) The receiving actor may, in lieu of a reply, throw you an exception. 3) The communications infrastructure, in lieu of a reply, may throw you an exception AND in this case the actual reply may or may not have actually been sent.

In other words, after the foundation lays down the law that messages and replies sent are eventually received, at a higher level we reconstruct ordinary, familiar concepts like time-outs on a message-reply transaction.

There are no year-old UDP packets living out their retirement in Florida that may choose to get delivered next year.

While true that is unfortunate because it paints a lovely picture.

As a follow-up, unbounded nondeterminism is also achievable with probabilistic Turing Machines. In particular the provided example of building a random string of unbounded length fits nicely onto a simple probabilistic Turing Machine. What are the differences between the Actor model and other forms of computation? Is there something in particular that makes it useful?

I hesitate to try to answer because that's an over-broad question and I don't want to try to reduce it all to slogans.

Unlike common Turing Machine models, actors feature multiple active computational elements (each an actor) and the dynamic creation and destruction (through abandonment) of computational elements.

Turing Machine models separate computational logic from unbounded state, the former to an automaton, the latter to a "tape". In the actor model, unbounded state and computational logic are both represented as the dynamic graph of concurrently-behaving actors.

The actor model has a unified theory of discrete communication from its most foundational abstractions through higher levels. TMs have nothing like this.

Magic

There is a big chunk of the Actor model that I just don't "get", probably as a result of building things up from the bottom. A guarantee of eventual delivery is then something that I don't have any understanding of, partly because I do not see how it could be achieved, but also because I do not see how to define it constructively. The distinction of "link temporarily out of service" (Hewitt has a nice example somewhere else in this thread of a mail server being switched off) and "link is permanently removed" does not seem to be observably different to an Actor that is using it.

Turing Machine models separate computational logic from unbounded state, the former to an automaton, the latter to a "tape". In the actor model, unbounded state and computational logic are both represented as the dynamic graph of concurrently-behaving actors.

If I've understood the implication correctly, that means that both program size and memory are bounded for individual Actors, so that use of unbounded resources (in either dimension) requires distribution. That is quite a thought-provoking idea; it seems similar to the field of Spatial Computation. This would provide a satisfying answer to my question of "what is the interesting difference" between Actors and other models.

Another analogy

Not to discount your analogy, but for the sake of comparison, consider a traffic light. Traffic lights operate at a level of abstraction where the speed of light is effectively instantaneous. Multiple drivers are informed of the new state of the intersection "simultaneously".

Not a good assumption: simultaneous change in auto drivers

It's not a good assumption that automobile drivers change simultaneously.

Brandon: glass houses

Hewitt's comment makes perfect sense and succinctly points out the error you made when you wrote:

Not to discount your analogy, but for the sake of comparison, consider a traffic light. Traffic lights operate at a level of abstraction where the speed of light is effectively instantaneous. Multiple drivers are informed of the new state of the intersection "simultaneously".

You have conflated the approximately instant travel of photons to retinas with the propagation of information to systems that control the driver's choices.

There is a synchronization time involved in traffic signals. You experience that most obviously if you are stuck behind someone who hasn't noticed that the light has changed.

Did you mean that you didn't understand why the comment was made?

No error

I made no such error because I did not make any statements about the drivers "changing" nor any statement about their reaction times. I made a statement about the traffic light's delivery of information, here acting as a semaphore, which is in fact the origin of the word.

I do not dispute the fact that there is propagation delay, nor the fact that it's an inherent part of physical systems. I only seek to make the point that simultaneity is a useful abstraction. You can model a traffic light as a semaphore and you can model drivers as independent processes experiencing propagation delay; these models are not at all incompatible.

Did you mean that you didn't understand why the comment was made?

I understand exactly why Hewitt made his comment. I don't think it adds any value at all, considering the fact that he's made the same point dozens of times in this thread. He seems to think we're all stupid, but really we're just trying to have a different discussion than the one he wants to have.

re we're just trying to have a different discussion

Nothing is stopping you from having a different discussion.

Having a different discussion does not require you to try to shut up other people. You aren't "just trying to have a different discussion". You are trying to prevent a discussion you apparently don't like.

Frustration

I'm not trying to shut down a discussion I don't like; I'm simply frustrated with Hewitt. You've done 100X more than he has to convey his message, because of how woefully ineffective he is at communicating. His lack of tact deeply irks me (and clearly many others).

The simultaneity assumption

The simultaneity assumption is valid only if you slow down the overall responsiveness of a system quite a bit compared to individual element changes. People's reaction times are on the order of 10 ms while a traffic light change will reach drivers waiting at the light in the 100ns range. Given this, the reaction time of individual drivers matters far more than the speed of light (in fact you'll see some drivers be almost always quicker on the draw). Then there are cases where the "message" is not seen or deciphered (a color blind driver, or the sun is behind the light, or the driver is distracted). Ignoring such real life conditions in one's model makes me think of the "Spherical Cow" joke!

Choosing an abstraction

You don't have to ignore the situation you're talking about. You just need to think about how to model it, and choose a level of abstraction appropriate to the problem (i.e., what should be considered a synchronous event and what shouldn't). For example, the light change could be treated as one observable synchronous event. The driver reactions are separate observable events which are, as you rightly point out, not synchronized with each other (and possibly susceptible to various faults, which can be modeled using the techniques I pointed to earlier). The question is whether you need to worry about the propagation delay of light in this context. I would suggest that for most problems you seek to analyze with a model of this sort the delay in arrival of the light between different drivers is so small as to be completely negligible relative to the driver reaction times.

In those situations in which you really worry about the difference in time between the arrival of the light signal at one driver and another, you can always insert an additional process that models the propagation delay (takes a light-signal-sent event as an input, and generates later light-signal-arrived events indicating the arrival of the signal at each driver). Or you could take an asynchronous-by-default approach if you prefer. It should be a question of what makes your analysis easier.
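That "additional process" can be sketched as a toy discrete-event simulation. The event names, driver names, and delay figures below are illustrative, not taken from any real model:

```python
import heapq

def simulate(events, propagation_delay, drivers):
    """Minimal discrete-event sketch: a 'delay process' consumes a
    light-signal-sent event and schedules a light-signal-arrived
    event for each driver after that driver's propagation delay."""
    queue = []  # (time, event) pairs, ordered by time
    for t, ev in events:
        heapq.heappush(queue, (t, ev))
    log = []
    while queue:
        t, ev = heapq.heappop(queue)
        log.append((t, ev))
        if ev == "light-signal-sent":
            for d in drivers:
                heapq.heappush(
                    queue,
                    (t + propagation_delay[d], f"light-signal-arrived@{d}"))
    return log

log = simulate([(0.0, "light-signal-sent")],
               {"alice": 1e-7, "bob": 2e-7},  # ~100 ns, illustrative
               ["alice", "bob"])
```

The synchronous "light changed" event stays a single event; the per-driver arrivals become separate, ordered events only where you choose to model them.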

There is no simultaneity assumption

As I have stated before, process algebras simply abstract this to: something sent (at some point) is received (at some point). (I think Roscoe somewhere even goes into that for some brief moment.)

Yah, that's a pretty high form of abstraction. Some things are likely also not easy to model or can't be modeled. But Hewitt's claim is that that would imply that the physical machinery should synchronize on that, which is a false reading, he simply didn't get the abstraction.

Neither is CSP a "programming" language; it has been a specification language for many years and has been used as such, with (lots of) supporting theories and even tooling.

I have no idea what his claims about "global state" are about, but I am pretty darned certain that'll prove to be another misreading.

Agree with what Tom says

In communication protocols such as TCP or reliable multicast, their designers have had to account for the fact that packets may have variable latency and may arrive out of order or not arrive at all. With multicore systems and ultra high clock rates some of these problems can also manifest on a single chip now -- the inter-core interconnect paths and possible per node buffering can make prop delays variable and lossy and out of order. CSP etc. have been very useful formal models but I wish there were formal means for designing reliable distributed systems (out of unreliable components -- Dijkstra's "new elephant built from mosquitoes" but possibly *not* humming in harmony!). Instead, at present we have to use adhocry. Debugging distributed systems is a nightmare (often the lowly "printf" is the tool of choice). We do not understand the chaotic behavior of global distributed systems like those of Google or Amazon or Microsoft, or how they respond to anomalous events. We don't have the equivalent of the "Control System Theory" to control runaway information systems.

I think figuring out models that can help with the design & characterization of such distributed systems ought to be far more interesting than arguing about which of the present models are better!

Start simple

It's both possible and straightforward to model message latency, message propagation times, and message delivery failures in a synchronous process calculus (chapters 12 and 14 of Roscoe's CSP book have examples of these things--I'd imagine other process calculi have similar examples). But you don't insert those problems everywhere all at once. You insert them, explicitly so it's clear they're there and it's clear what your timing or fault model is, in the parts of the model where it matters, which keeps the rest of the model simple (by which I mean "small state space"), making analysis (via, for example, model-checking) tractable.

Returning to the resistor/capacitor example I used previously, in designing a circuit you'll typically use ideal resistors or capacitors for your analysis unless you're dealing with a part of the circuit for which the internal series resistance of the capacitor matters. In parts of the circuit where the non-ideal characteristics of the circuit elements matter, you include more detailed models (built from ideal components--see, for example, "equivalent series resistance" in a capacitor) to account for those non-ideal characteristics. It keeps the overall circuit analysis simpler. The same principles apply to almost any kind of engineering modeling and analysis you care to name.

If we're going to name-drop Lamport, he has said that the way to start a specification or model is to start with something simple and add complexity as needed.

Not simpler: assuming simultaneous change in different processes

Assuming simultaneous change in different processes doesn't simplify. Instead, the false assumption introduces complexities and risk of erroneous inferences because it is unachievable in reality.

It's simple

I gave my definition of "simpler". In this context, it means having a smaller state space, which makes analysis easier. In that sense, synchronous events simplify things. I claim nothing more. If you don't want that kind of simplification, then don't use them. No one's saying the Actor model is wrong. It just offers a different set of pros and cons than other techniques. Which technique you choose will depend on the problem you're trying to solve.

Also not simpler: assuming a global state space

For large-scale systems, it's not simpler to assume a global state space with simultaneous change in different processes. Modeling using message passing is simpler, more realistic, and better suited for large-scale applications.

The assumption of simultaneous change in different processes is incorrect. Making incorrect assumptions can easily lead to making incorrect inferences.

Also not simpler: communicating with you

Plenty of people have used the assumptions you say are incorrect to model, design, and implement real systems. Someone finds utility in these assumptions, even if you don't. Assuming that a capacitor has no resistance is also incorrect, but it works surprisingly well in a lot of circumstances. But now I'm just repeating myself. Again. So I will stop.

If you might be tempted to

If you might be tempted to respond to another comment in the future, consider that every single conversation with Hewitt ends like this.

Focus on message passing instead of global state spaces

Focus on message passing instead of global state spaces in order to scale. Global state spaces are not scalable.

Communicating and Mobile Systems

Well, I finally got a chance to dig up my copy of Milner's "Communicating and Mobile Systems: the π-calculus". Chapter 4 (Concurrent Processes and Reactions), sections 4.1 and 4.2 say:

"Every complementary pair (a, -a) of labels will represent a means of interaction between black boxes; we think of the action a of one black box as pressing the button labelled -a on the other.
...
A complementary label-pair (b, -b) is not to be thought of as a buffer or channel having some capacity; it is a means for synchronized action, or handshake.
...
We might represent the synchronization by a shared transition between their transition graphs."

And goes on to describe how the shared transition resulting from b and -b is equivalent to a single internal transition τ. No mention of "puts" or "gets" or any similar mechanism. Just shared transitions.
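Milner's shared-transition reading can be rendered as a toy computation over transition graphs. The encoding below (dicts of state → moves, `-b` for the complementary label, the string `"tau"`) is my own illustration, not Milner's notation:

```python
def compose(p_trans, q_trans):
    """Toy composition of two labelled transition systems in the
    spirit of Milner's handshake: when one side can do `b` and the
    other can do `-b`, the composite takes a single internal `tau`
    step. Independent moves of either side are interleaved."""
    product = {}
    for p in p_trans:
        for q in q_trans:
            moves = []
            for label, p2 in p_trans[p]:
                moves.append((label, (p2, q)))          # P moves alone
                for label2, q2 in q_trans[q]:
                    if label2 == "-" + label or label == "-" + label2:
                        moves.append(("tau", (p2, q2)))  # shared transition
            for label2, q2 in q_trans[q]:
                moves.append((label2, (p, q2)))         # Q moves alone
            product[(p, q)] = moves
    return product

# P presses button b then stops; Q offers the complementary -b then stops.
P = {"p0": [("b", "p1")], "p1": []}
Q = {"q0": [("-b", "q1")], "q1": []}
prod = compose(P, Q)
```

In `prod[("p0", "q0")]` the handshake shows up exactly as the quote describes: not a put into a buffer followed by a get, but one shared `tau` transition of the composite system.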

Signal message operations with "put" and "get"

In the π-calculus, a process can perform a "put" signal message operation on a channel (called a "label" in the π-calculus) and a different process can perform a "get" signal message operation on the same channel, where "put" and "get" are the conventional terminology used to indicate the directionality of message flow. Nothing happens unless both operations are allegedly simultaneously successful, in which case one process has done a "put" and the other a "get".

It means what you model

From the pi-calculus page, on sending a message:

Typically, this models either sending a message on the network or a goto c operation.

The keyword is 'typically', which I liberally assume you mean above as well. You wrongly generalize 'typical' to mean 'always.'

That Milner had a number of examples in his head and provides typical scenarios doesn't imply 'generally' or 'always.'

Nothing holds me back from modeling email exchanges, or drivers in front of a traffic light, where 'simultaneous change' means nothing else than that a message send was received. And I am pretty certain people specify examples like that.

'Typical' doesn't imply 'always.'

ultimate vs least modeling power

This is the most important bit of the CSP/Actor distinction from an engineering perspective:

synchronization of our actions have the advantage of pruning the state-space of the composite Allan/Tom system

Just like we don't usually program with raw beta reduction against an unadorned lambda calculus, we shouldn't expect programmers to accept the full complexity of the actor model unless absolutely necessary. Practically speaking, modern CPUs are quite good at providing the illusion of simultaneity upon request.

I tend to think of CSP as a reasonable default choice for intra-process concurrency and asynchronous message sends as the default choice for inter-process distributed communication. However, it only takes an explicitly bounded context to turn inter into intra. That is, from the perspective of e.g. one unix process, it's a distributed system, but from the perspective of some script running several sub processes, it can leverage pipes as synchronizing channels to shrink the state space.
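A synchronizing channel of the sort described can be sketched in Python. This is an illustrative toy built from two blocking queues, not how any real CSP runtime or pipe is implemented:

```python
import threading
import queue

class Rendezvous:
    """Unbuffered channel sketch: send() blocks until a receiver has
    taken the value, and recv() blocks until a sender offers one, so
    a communication behaves like a shared event rather than a
    buffered put followed by a later get."""
    def __init__(self):
        self._items = queue.Queue(maxsize=1)
        self._acks = queue.Queue(maxsize=1)

    def send(self, value):
        self._items.put(value)
        self._acks.get()          # wait until the receiver has taken it

    def recv(self):
        value = self._items.get()
        self._acks.put(None)      # release the blocked sender
        return value

ch = Rendezvous()
received = []
t = threading.Thread(target=lambda: received.append(ch.recv()))
t.start()
ch.send("hello")                  # returns only after recv() has run
t.join()
```

The pruning effect mentioned above comes from exactly this blocking: neither side proceeds past the communication alone, so the composite state space has no "message in flight" states.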

As scientists, we should hunt down the ultimate source of power and expression. However, as engineers, we should follow the principle of least power, even to the point of preferring simpler-still concurrency models, such as communication through an atomic compare-and-swap cell.

Engineering is constrained by physics and economics

There is no simultaneity at different places. Attempting to achieve semi-simultaneous effects can be very expensive, e.g., two-phase commit, etc.

On a many-core x86 or Arm, the equivalent of compare and swap (which can fail to succeed) is all that we have although it can be expensive. However, there are new hardware instructions for Actors under development that can help :-)

Engineering uses abstraction

Engineering may be constrained by physics, but it also uses abstractions. We design circuits using ideal resistors and ideal capacitors unless the circuit behavior depends critically on non-ideal characteristics, because it makes the design process easier. We design digital logic using Boolean abstractions and ignore transmission-line effects until we can't ignore them. We design orbits based on Keplerian mechanics and point-mass assumptions, and add non-ideal "perturbations" as necessary.

Engineering: pay careful attention to physics and economics

Engineering must pay careful attention to physical and economic realities.

And then

And then confidently choose to ignore them when their effects are negligible. See free body diagrams and supply vs demand charts.

No extra overhead in foundational communication primitives

There should be no extra overhead in foundational communication primitives.

That's what I get for breaking my own rule

I promised myself I wouldn't engage you. Trying to communicate with you is deeply frustrating, but at this point, I have nobody to blame but myself.

Implementing || requires equivalent of "put" and "get"

Implementing || requires the equivalent of put and get communications, thereby imposing inefficiencies that are not present in implementing Actors.

A perfect example

This comment is a perfect example of why it is exhausting and fruitless to attempt to communicate with you. It's a total non-sequitur.

Implementation efficiency: important in computer engineering

Implementation efficiency can be important in computer engineering ;-)

Communication Efficacy: Important In All Fields

Communication efficacy can be important in all fields ;-)

Foundational for?

That depends on whether you mean "foundational" for implementation or "foundational" for modeling . Real implementation-level capacitors are typically modeled in terms of ideal resistors and capacitors (and sometimes inductors) because it makes for a model that's easier to analyze.

Analysis of hypothesis of simultaneous change in different procs

An analysis of the hypothesis of simultaneous change in different processes would be very useful:
* Origin: How did the hypothesis originate? For example, what are its roots before computer science?
* Scientific foundations: What are the scientific principles underlying the hypothesis? For example, how is it related to the global state hypothesis?
* Development: How is the hypothesis developing? For example, which scientific groups are pursuing it and what do they hope to achieve?
* Consequences: What are the consequences of the hypothesis? For example, why has it not gained traction in mainstream programming?

If a channel is desired, use an Actor channel

If a channel is desired, use an Actor channel with the desired functionality. (There are many kinds of channels.)

Of course, an Actor can implement multiple interfaces (sometimes called aspects) without implementing a channel interface, which usually includes put and get messages.

Why?

Why, if I'm modelling a system using a process calculus, would I wish to include Actors? That just adds an additional, unnecessary layer of complexity. Nor do channels (or "events", or "actions") in process calculi typically include put and get messages. Events or actions are messages (or communications, or interactions). Any "putting" or "getting" (if such actions are even relevant to the kind of message being modeled--not the case if the event in question represents, for example, a button being pressed) is an implementation detail that is at a level of abstraction below that of many process calculus models. Unless, of course, one chooses to explicitly model the behavior of a communication channel.

I really don't understand your point here. Can you model systems with Actors? Yes. Can you model systems with process calculi? Yes. Which is better? It depends on what you're trying to model. There are certainly some kinds of systems that lend themselves better to modeling via Actors. There are other systems that lend themselves well to be modeled in terms of (some kind of) process calculus, particularly if you're interested in establishing behavioral equivalences (as Agha did) or performing some kind of model-checking. Indeed, different kinds of systems lend themselves to modeling via different kinds of process calculi (which might explain why there are so many of them now). And we haven't even touched on the people who prefer something like Colored Petri Nets to either Actors or process calculi.

trolling

I believe Hewitt to be a smart guy with useful things to share, but apparently one who doesn't always present himself well. Unfortunately, when it comes to LtU at least, I would thus encourage everybody to not engage with him on any topic that did not start with actors as the original post. I'd have said such on other people's threads, but I started this one, so I feel like I have a little more grounds for sticking my nose in.

A channel requires "put" and "get" for communication

A channel requires put and get for communication to get things into and out of the channel.

Talking past each other

We seem to be talking past each other. What you mean by "channel" is not what I mean.

"Puts" and "gets" may be required in an implementation of a channel-based communication scheme, particularly one that involves buffered communication (then again, an Actor implementation presumably requires "sends" and "receives"--at least that's been true of every Actor implementation I've seen). But "channels" in process calculi are typically just a syntactic labeling that groups together a bunch of related actions. You could achieve the same kind of grouping in an Actor system by including a "tag" in each message that identified the message as belonging to, for example, group "A" or group "B".

Not so great: requiring "receive commands" and/or "message tags"

The following proposals are not great requirements:
* "receive" commands
* tags on messages

Also, the process calculi seem to be lacking:
* type interfaces

livelock

Well, we don't seem to be making much progress here.

Neither of the things you list as "proposals" was actually any such thing. I'm not sure why you would interpret them that way. Nor are Actor addresses really relevant to process calculi, although Agha showed how Actor addresses could be included in a process calculus model. I'm not familiar enough with the research on type theory applied to process calculi to say anything at all about type interfaces.

About the only thing I think we've established so far is that the Actor model and process calculi are different. Since you don't seem interested in engaging in any discussion beyond reiterating those differences, I think I'm going to stop here. I'll try to confine all further comment on this thread to discussion of process calculi, since that's what the thread is supposed to be about.

Sending message to address of an Actor, requires knowing type

Sending a message to the address of an Actor requires knowing the type.

There doesn't seem to be any such requirement on communication in process calculi.

Message Types and Rejection

How do you make sure the sender's 'knowledge' of the type of the actor it is sending a message to is the same as the receiver's? For example, when updating the system it is possible some actors get updated and others not (crash or power outage during update). What happens if this results in the sender and receiver having different types for the same message? Surely any assumption that the sender knows the correct types to send will result in a security hole, so there needs to be a protocol where the types are encoded into the message and the receiver can reject the message if they don't match what it is expecting. If this happens, what should the sender do if it receives a rejection instead of a result?

fail with hopefully a breadcrumb trail for debugging

You're right that it's infeasible to expect all senders and receivers to be in sync with the same type scheme corresponding to an identical version, and that implies a hole in arguments that types themselves prevent all problems. My answer to this is not Actor-specific:

If this happens what should the sender do if it receives a rejection instead of a result?

Fail if a reply was needed. Some messages are one-way and need not involve replies. Even when a protocol says a reply occurs, if the sender doesn't in fact depend on a reply, it might proceed anyway. Size of sender failure-cascade depends on how much is broken by the faulty transaction: whether it was a small thing, or something critical. Severity may depend on whether the sender judges itself now broken, versus just one of its resources being down.

If sender and receiver can be updated separately, it implies they are in sufficiently different domains that a loosely coupled relationship might be expected, where failure is handled better, with less severe service downgrade in the sender. How much to log for auditing later for diagnosis might vary with relationship semantics.

If they are that loosely coupled, and there's a trust boundary between, without authentication the receiver might pessimistically treat a sender's message as an attack intended to waste receiver resources, and ignore it as quickly as possible without sending a reply. My point is that encoding all the nuances in types seems a little hard.
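One rough rendering of such a rejection protocol in Python, with all field names and the doubling "computation" hypothetical:

```python
def receive(message, expected_type):
    """Sketch of the rejection protocol discussed above: the sender
    encodes its view of the message type; the receiver checks it
    against its own expectation and rejects on mismatch, instead of
    trusting that the sender's type knowledge is up to date."""
    if message.get("type") != expected_type:
        return {"status": "rejected", "reason": "type mismatch"}
    return {"status": "ok", "result": message["payload"] * 2}

# Sender and receiver agree on the (made-up) type tag:
ok = receive({"type": "IntRequest/v2", "payload": 21}, "IntRequest/v2")
# A stale sender still speaks v1 and gets rejected:
rej = receive({"type": "IntRequest/v1", "payload": 21}, "IntRequest/v2")
```

On `rejected`, the sender then follows the policy above: fail if it depended on the reply, otherwise proceed and log.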

A message can be encrypted for the intended type of recipient

This is an excellent question :-)

A message can be encrypted for the intended type of recipient.

Performance

This would not likely be accepted in a general purpose programming system due to the performance overhead on function calling / message passing.

re Performance

This would not likely be accepted in a general purpose programming system due to the performance overhead on function calling / message passing.

Which is why nobody uses https.

People do use HTTPS

People do use https, and I did not say that it was too much overhead for a communication channel. What I was talking about is that with Actors, message passing replaced function/method calls as the general programming abstraction. My point was that the overhead of encrypting the arguments to every call to 'add' to prevent type errors when an application uses the maths library is too much overhead.

In fact I think message queues and inter-thread communication are too much overhead for this kind of thing. The compiler is obviously going to want to optimise these to plain function calls, or even inline the code.

My concern with this is that optimisation is hard. Much like functional programming, which can theoretically match imperative languages' performance with enough optimisation but doesn't in practice, improvements will happen, but absolute parity is probably unobtainable.

My conclusion at the moment is that I would want to use the actor model at a higher level, writing each actor in C++ for example.

Type encryption needed only for untrusted links and/or endpoints

Type encryption is needed only for untrusted links and/or endpoints.

There is no automatic improvement in efficiency by using low level "imperative" programming languages like C or C++.

Unless you can provide benchmarks, this claim of being able to match C++ performance is unsubstantiated. Many people have claimed it; few have actually been able to do it.

If actors are not imperative, then they will have the same performance problems as functional languages. All the actor examples I have seen so far look functional.

What about DLL versions where the interface types may have changed? I can replace the DLL as an attack on the security of your program.

actor programs can have mutable state (re figures please)

All the actor examples I have seen so far look functional.

Here's a stack:


Interface IntegerStack with
push[x : Integer] ↦ Void,
pop[]  ↦ Integer▮

Actor MyStack[]
aList:[Integer*] := []。
implements IntegerStack using
push[x: Integer] → Void afterward aList := [x, \|/aList] ⦶
pop[] → aList � [ ] ⦂ Throw EmptyStack[ ] ⦶
[first, \|/rest] ⦂ first afterward aList := rest ⍰ §▮

[*] the list notation I used here is not "standard"


Strong types and discipline: to be competitive with C and C++

In order to be competitive with C and C++, a programming language must be very strongly typed and rigorously disciplined :-)

Below, is a parameterized strongly typed version of Thomas's code above:

Interface Stack◅aType▻ with
push[aType] ↦ Void,
pop[] ↦ aType▮

Actor CreateStack◅aType▻[]
aList:[aType*] := []。
implements Stack◅aType▻ using
push[x] → Void afterward aList := [x, \|/aList] ⦶
pop[] →  aList � [ ] ⦂ Throw EmptyStack[ ] ⦶
[first, \|/rest] ⦂
first afterward aList := rest ⍰ §▮


Memory Layout

What is the memory layout of this stack? This just looks like a usual functional singly-linked list... very inefficient. How would I define collection types like vector, map and set, each of which controls its own memory layout, allocation and deallocation, which is needed for efficiency?

Memory layouts are up to compilers, primitive Actors & runtimes

Memory layouts are up to compilers, primitive Actors (e.g. lists, sets, vectors, maps, etc.), and run-times with hints that can be provided by programs.

Defining Collections

If you cannot define new collection types efficiently, then I don't see how this can be the foundation of computing. I would need to go to C++ to implement the efficient vector, and then add it to the compiler as a primitive component. This is not good enough for a general-purpose language.

A general-purpose language needs to have types for hardware words, bits, etc., and needs to model a random-access machine (that is, at least model memory as an array).

I should be able to define new efficient collections on top of the machine model as libraries in the language itself.

I had also assumed actors were pure, so that messages can be sent to new copies of each actor. With impure, stateful actors this is obviously wrong, and you will need to queue messages to actors to avoid problems with the actor being busy. This would further reduce performance.

Micro-management is not the solution to PL efficiency

Micro-management is not the solution to programming language efficiency in massively concurrent systems.

Security and good abstractions (e.g. lists, sets, vectors, maps, etc.) are paramount. Sometimes a runtime will need to change a memory layout dynamically and invisibly. Edit: For example, in the above, Stack◅aType▻ could be implemented using FlexibleArray◅aType▻.

Efficiently managing contention is a fact of life in massively concurrent systems, e.g., distributed and many-core.

Bits, Memory and Real CPUs.

To paraphrase Stepanov, we like bits, we like real processors like Intel, we like memory. Really this is the stuff of computing, and you really don't want to hide it. It is not micro management, but very elegant algorithms can be written. Just have a read of Stepanov's "Elements of Programming". Stepanov's approach seems far more like the fundamentals of computing. Stepanov gives a style which manipulates bits, pointers, memory etc, but also has very powerful abstractions that let you build elegant designs that are almost entirely abstract, apart from the very bottom level.

I like the actor model for concurrency, but without the memory model and the ability to manipulate bits, it doesn't seem to be what I am looking for. I don't even think it can handle generic abstractions (like type classes in Haskell, or C++ Concepts).

Security paramount in PL; then relevant abstractions

These days, security is the paramount consideration in programming languages; then relevant abstractions (which must be efficient) for massive concurrency.

Logic

By using logic to prove things about the programs and memory access we can have the performance and the security.

Further, in my experience of optimisation, micro-management is the best solution. Compiler hints are harder, as you feel like you are trying to find the right magic incantation to persuade the optimiser to output the kind of code you want.

I think the generic optimiser is actually an impossibility: for some use case, every optimiser is a pessimiser. A domain expert with low-level access will write better code than any optimiser. (There are certain operations compilers are good at, like register allocation, instruction selection and packing, that they clearly do better than people by hand, so I am not suggesting people program in assembly language.)

Clear examples of this are the C++ and Fortran libraries for matrices and linear maths.

Micro-management doesn't make sense for massive concurrency

Micro-management doesn't make sense for massive concurrency; just like it doesn't make sense in massive organizations.

Runtime, storage, and communications efficiency are important, of course.

If you are really big and run lots of servers in parallel for your service, even a small optimisation, one that most people would dismiss as too small to bother with, can save you millions of dollars. Actually the economics is that the bigger you are the more money you save from micro-optimisations, so the more budget you have to pay for people to do them.

Besides, only one expert needs to write the optimised linear-maths kernels, and every user of the library benefits. To make things clearer, I am not talking about processor-specific optimisations; actually the level of C is good enough.

Abstractions are key to PL; not bit manipulation

Abstractions are key to programming languages; not bit manipulation.
For example, using flexible arrays:

Interface Stack◅aType▻ with
push[aType] ↦ Void,
pop[] ↦ aType▮

Actor CreateStack◅aType▻[]
aVector ← CreateFlexibleVector◅aType▻.[]。
implements Stack◅aType▻ using
push[x] → aVector.pushTop[x] ⦶
pop[] →  aVector.empty?.[] �
True ⦂ Throw EmptyStack[] ⦶
False ⦂ aVector.popTop[] ⍰ §▮

FlexibleVector◅aType▻ has
empty?[] ↦ Boolean,
pushTop[aType] ↦ Void,
popTop[] ↦ aType▮
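The representation point can be made concrete in C++ (my own sketch, not ActorScript): the same Stack interface backed by a contiguous vector, mirroring the FlexibleVector-based implementation above. The observable behaviour matches the list-backed stack; only the memory layout differs.

```cpp
#include <stdexcept>
#include <utility>
#include <vector>

// Sketch: Stack backed by a contiguous vector, analogous to implementing
// Stack◅aType▻ with a FlexibleVector◅aType▻. Same interface as the
// list-backed version, different memory layout.
template <typename T>
class Stack {
    std::vector<T> aVector;   // plays the role of the FlexibleVector
public:
    void push(T x) { aVector.push_back(std::move(x)); }   // pushTop
    T pop() {                                             // popTop
        if (aVector.empty()) throw std::runtime_error("EmptyStack");
        T top = std::move(aVector.back());
        aVector.pop_back();
        return top;
    }
    bool empty() const { return aVector.empty(); }        // empty?
};
```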


How Do You Write The Compiler?

You have abstractions with no foundation. Without a memory model you are limited to whatever collections the compiler happens to have built-in. How can you write the compiler?

Besides computation is based on bits. The bit is the fundamental unit of information, and any language that does not let you get at it is fundamentally limited in a significant way. From bits we add logic gates, and arithmetic.

You end up with two models of computation: the one used in the CPU, and a different one used in the language. This can never be efficient, due to the translation between them. I also think that any competing model has too much ground to make up to ever be competitive with the prevailing hardware model (see how specialist functional CPUs failed).

Interpreters and Compilers implemented using meta abstractions

Interpreters and Compilers can be implemented using meta abstractions.

As an example of a fully typed meta-interpreter, see ActorScript.

PS. Booleans are abstractions ;-)

How can I implement a compiler that needs to allocate memory and manage the memory layout of objects in a language that provides no support for such concepts? A general-purpose language needs to be able to implement a compiler for itself (or a compiler for another language) to be general purpose.

The meta-interpreter seems to rely on an underlying language like Java or C# to work.

A Boolean is an abstraction, but a bit is a real object. (for example we can have a Heytingean instead of a Boolean, using a Heyting algebra instead of a Boolean algebra, but a bit is still a bit).

Edit: In order to be able to implement a compiler, a language must support both its own abstractions and those provided by the hardware. If you want this to be type-safe, the type system has to be able to give types to the operations of the machine model. Then you can do a sequence of typed-program to typed-program transformations to compile the language.

very low level actors re How Do You Write The Compiler?

two models of computation, the one used in the CPU, a different one used in the language.

I think you could build an "actors all the way down" system that let you mix very low level and high level code safely and usefully.

I claim it is uncontroversially easy to add types for machine ints of various widths, and floats.

What of addresses and "raw memory"? The main issue here would seem to be safety, and for that there appears to be a ready answer. Something (this is pseudocode) like:

   Interface Memory with
     putByte[addr : Integer, Bytes] ↦ Void,
     etc.


Implementations may or may not perform range checking, for example, depending on the security requirements of the block of memory in question.
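As a C++ sketch of that idea (the `putByte` name follows the pseudocode above; `getByte` and the checked implementation are my own assumptions): an abstract memory interface, with one implementation that does perform range checking for untrusted callers.

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Sketch of the memory interface above. putByte comes from the pseudocode;
// getByte and the checked variant are illustrative additions.
struct Memory {
    virtual void putByte(std::size_t addr, std::uint8_t value) = 0;
    virtual std::uint8_t getByte(std::size_t addr) const = 0;
    virtual ~Memory() = default;
};

// An implementation that *does* perform range checking, for code whose
// security requirements demand it. An unchecked sibling could simply index
// a raw pointer instead.
class CheckedMemory : public Memory {
    std::vector<std::uint8_t> bytes;
public:
    explicit CheckedMemory(std::size_t size) : bytes(size, 0) {}
    void putByte(std::size_t addr, std::uint8_t value) override {
        bytes.at(addr) = value;          // .at() throws on out-of-range
    }
    std::uint8_t getByte(std::size_t addr) const override {
        return bytes.at(addr);
    }
};
```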

If you want the very, very lowest levels in this style, a micro-kernel architecture might do nicely.

A primal actor that manipulates raw memory and machine instructions to instantiate high level actors must have a kind of magic trick up its sleeve: it must have a primitive capability to produce a reference to an actor represented at some address. I would compare that to the traditional need in micro-kernels for the kernel to be able to create processes out of raw bits.

At the same time, I claim it is valuable to have a language for high level actors that doesn't include any of the low level stuff. (It is OK for an extended version of the language to have low level stuff but the non-extended version should also stand on its own.)

I justify that last claim by observing that in Actorscript, the traditional notion of computational complexity is pretty much absent. In its place there is something related but distinct: measures of the number of events over the course of a computation, and the possible orderings in which they occur. There is a minimum duration between all pairs of ordered events but it determines only a lower bound, not an upper bound on execution time.

Given the relaxation of the assumption that a program is recognizably a series of primitive steps, each of which takes some basic quantum of time, the possibilities greatly expand for how ActorScript programs can be transformed and interpreted.

In the context of that high level understanding of actors, where time is about the ordering of events rather than the number of instructions executed, low level "machine" types would completely lose their concrete character. They would appear simply superfluous and awkward.

Interfaces and Performance

I see nothing wrong with a memory interface as such, but there needs to be provision for allocation and deallocation for writing the garbage collector. A program cannot assume it has access to the whole memory of the machine; it needs to know how much memory there is, and what to do on a failed allocation.

Unfortunately, I just don't see counting events instead of instruction cycles as a realistic measure of performance. I understand that actors optimise for parallel execution, but inter-CPU communication is very inefficient (due to the cache misses); it really only works with "shared-nothing" designs. Many programs simply don't parallelise that well, and even if they do, sequential execution can be faster. I don't think parallelising at the "function" level is going to be a performant approach. High-performance parallel systems tend to build the parallelism at a much higher level of abstraction. Look, for example, at protein folding: each client takes a work block and has a complete implementation of the algorithm. Breaking this into smaller chunks on each node, say at the function level, would be vastly less efficient due to the increased communication overhead.

The actor model is elegant because "everything is an actor", but in reality most of those actors need to be converted into function calls or possibly inlined. This seems a really hard problem. There are so many possible message-passing / function-call / inlining possibilities, and the optimum configuration probably depends on runtime information (like the number of nodes). There are also many balancing issues, like one actor outputting events faster than another can process them, leading to a queue that gets bigger and bigger.

Right now, I don't believe anyone can write an efficient compiler from "actors-all-the-way-down". I think therefore it is better to leave the algorithm parallelisation to a human. I think I would use the actor model as an abstraction for inter-process communication, but have a low-level imperative language for writing the actors. I think I would also want to use a different type-system, with type erasure to avoid having to deal with runtime-type-information.

fixing some misunderstandings (re interfaces and performances)

I see nothing wrong with a memory interface as such, but there needs to be provision for allocation and deallocation for writing the garbage collector.

Is there some reason a garbage collector could not be written using raw addresses and the ability to modify memory, including storing instructions there, and creating actors out of raw bits, as I sketched?
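For what it's worth, that low-level footing can be sketched in C++ (a toy of mine, not a real collector): a bump allocator that carves objects out of a raw byte region, the kind of primitive an allocator/collector pair is built on.

```cpp
#include <cstddef>
#include <cstdint>

// Toy bump allocator (illustrative sketch only): hands out objects from a
// raw byte region. A real garbage collector would be layered on top of
// something like this, reclaiming the region when collection runs.
class BumpAllocator {
    std::uint8_t* base;
    std::size_t   size;
    std::size_t   used = 0;
public:
    BumpAllocator(std::uint8_t* region, std::size_t n)
        : base(region), size(n) {}

    void* alloc(std::size_t n) {
        // Align to 8 bytes, as a collector would for object headers.
        std::size_t aligned = (used + 7) & ~std::size_t(7);
        if (aligned + n > size) return nullptr;  // exhausted: collect here
        used = aligned + n;
        return base + aligned;
    }

    void reset() { used = 0; }  // a trivial "collect everything" step
};
```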

Unfortunately I just don't see counting events instead of instruction cycles as a realistic measure of performance. [....]

Good, because it is not a measure of "performance". It is a measure of logical dependencies and their orderings. (The rest of that quoted paragraph seems to take the first sentence as a premise of relevance.)

The actor model is elegant because "everything is an actor", but in reality most of those actors need to be converted into function calls or possibly inlined. This seems a really hard problem.

OK. I'm pretty sure I don't agree but OK.

There are also many balancing issues, like one actor outputting events faster than another can process them, leading to a queue that gets bigger and bigger.

Actors don't have queues between them. One actor can provide a queue to schedule messages from other actors but programs impose that structure explicitly or not at all.

Right now, I don't believe anyone can write an efficient compiler from "actors-all-the-way-down".

OK. Via an analogy, "pre-scheme" suggests it might not be too hard.

I think I would use the actor model as an abstraction for inter-process communication, but have a low-level imperative language for writing the actors.

Arguably, that is the status quo of all existing programs, though they vary wildly in how well they make use of that perspective on their structure.

Performance of message passing

Okay, so we don't agree about the performance of message passing. Is there some assumption I am making that is wrong? Let's take the factorial example. This is going to result in N inter-thread messages for calculating N factorial recursively. When passing a message we need to make sure there is not another message being handled already, so there would be a mutex? That is already going to be slower than a plain function call. If actors were pure we could skip the mutex, as every thread could use thread-local stack storage. Having seen examples of stateful actors, we must assume only one message can be handled at a time by an actor, and hence the mutex locks will make performance much slower than the simple recursive function. If we can have pure actors then we can skip the mutex, and if we construct messages on top of the stack we have something that is the same as a normal function call: the sender constructs the message on top of the stack and hands ownership of it to the receiver, which processes it and "returns" to the sender after freeing the memory.
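To make the worry concrete, here is one C++ framing of the "gate" (my own sketch, not a claim about any particular actor runtime): a stateful actor whose message handler is serialized by a mutex, next to the plain counter that an unshared actor could, in principle, be compiled down to.

```cpp
#include <mutex>

// Sketch of the "gate" under discussion: a stateful actor whose message
// handling is serialized by a mutex, so concurrent senders cannot
// interleave inside the handler.
class CounterActor {
    std::mutex gate;   // one message processed at a time
    long count = 0;
public:
    long increment() {                        // the "message" increment[]
        std::lock_guard<std::mutex> lock(gate);
        return ++count;
    }
};

// The plain-call equivalent that a compiler could reduce an *unshared*
// actor to: no lock, just a mutation — the cost gap being debated.
class PlainCounter {
    long count = 0;
public:
    long increment() { return ++count; }
};
```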

What machine level operations do you see happening when sending a message more generally where senders might be several, and actors are impure?

Regarding writing the GC, you still need to deal with getting memory from the OS, which (including mmapping files and shared memory) may be non-contiguous. Still, interfaces for these features could be added, so it's not too big a deal. However, the proposed interface for memory with integer addresses is less safe than typed pointers, and nowhere near using linear types to track the type of data stored at each address.

General message passing can be as performant as for procedures

Message passing in general can be as performant as procedure calling, because to the sender it is basically the same.

PS. In general, it is a good idea to pass a message in the registers.

How do you avoid the mutex lock?

Can you explain how? If Actors are impure, then they must gate messages to ensure that no more than one message is processed at a time. In a multi-threaded system this will involve a mutex lock. How do you intend to avoid this performance-limiting lock?

re no more than one message is processed at a time

If Actors are impure, then they must gate messages to ensure that no more than one message is processed at a time.

The extent to which an actor is shared is trivially determined statically in many common cases.

(Your worries here remind me of "The myth of the expensive procedure call" that helped give rise to the RABBIT thesis.)