## Implementing the communication semantics of actors

It would be interesting to see an implementation of the actor semantics for communication and concurrency. Personally I would like to see a description of the execution flow that is concrete enough to show any similarities and differences from other models. There have been some threads on LtU that have languished on these details. Generally it appears (to me, at least) that there is an interest in seeing a description detailed enough that it could serve as the basis of an implementation.

In particular, if I squint a little at the SimpleCounter example (which sadly I cannot cut and paste here directly) then I see an FSM with access to an RNG, a timer, and a message queue. In previous discussions (from memory) I believe that Carl has stated several times that there is no queue. It seems a little unclear exactly what kind of RNG would be used, with respect to fairness and justice. To examine the semantics of the message passing and execution flow within an actor - that is, erase all value computations and look only at an abstract model of concurrency and communication - I would sketch out the following as a starting point:

Each actor is a set of message handlers and a set of incoming messages.

• To send a message is an asynchronous operation - add the message to the target's message set.

• To start a computation is a synchronous operation - fire the asynchronous message and start a timer.

The execution of an actor proceeds locally as a series of steps in which either:

• A timer has expired and a synchronous computation propagates failure instead of receiving a value.

• Pick a random message from the set and process it.
• If the message handler returns a value then fire it back as an asynchronous message.
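For concreteness, the sketch above could be rendered as the following toy Python machine. All names here are my own invention, and the seeded RNG is only a stand-in for whatever arbitration the model actually requires:

```python
import random

class AbstractActor:
    """An actor as a set of message handlers plus a set of pending messages."""
    def __init__(self, handlers):
        self.handlers = handlers   # message tag -> handler function
        self.pending = []          # undelivered messages (an unordered multiset)

class System:
    def __init__(self, seed=0):
        self.actors = {}
        self.rng = random.Random(seed)   # stand-in for the arbiter / "RNG"

    def spawn(self, name, handlers):
        self.actors[name] = AbstractActor(handlers)

    def send(self, target, tag, payload=None):
        # Asynchronous send: just add the message to the target's message set.
        self.actors[target].pending.append((tag, payload))

    def step(self):
        """One step: pick a random pending message anywhere in the system
        and run the matching handler. Returns False when nothing is pending."""
        busy = [a for a in self.actors.values() if a.pending]
        if not busy:
            return False
        actor = self.rng.choice(busy)
        tag, payload = actor.pending.pop(self.rng.randrange(len(actor.pending)))
        actor.handlers[tag](self, payload)
        return True

# Example: a counter that re-sends itself "go" until "stop" arrives.
state = {"count": 0, "running": True}

def go(system, _):
    if state["running"]:
        state["count"] += 1
        system.send("counter", "go")

def stop(system, _):
    state["running"] = False

system = System(seed=42)
system.spawn("counter", {"go": go, "stop": stop})
system.send("counter", "go")
system.send("counter", "stop")
while system.step():
    pass
```

Note that handler return values are simply discarded here; firing them back as asynchronous replies would be the obvious next refinement.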

From this starting point I would then ask:

• Is this an accurate description - are these true statements about how the system behaves, and where they are not, how does the system differ in behaviour?
• Is this a complete description - where are the holes in this description that allow a divergence from the behaviour of actors?

### Filling in the blanks

I'll bite once again. In my view, Hewitt proposes a number of underspecified formalisms he then makes wild claims about.

Of course, Hewitt is always right with his claims since he never becomes precise enough to discover whether his claims may hold.

You can try to fill in the blanks, but that really is Hewitt's task, and -moreover- a fruitless endeavor since Hewitt will simply claim you didn't get it right, he's open to suggestions, it is future research, you should read his papers, it is only to be discussed by academics, or whatever card he plays as an out on not being rigorous.

It doesn't make sense to discuss unsubstantiated claims over an underspecified formalism where you start guessing what Hewitt might mean. It's Hewitt's task to make his formalisms precise but most of his claims only (may) hold since he is imprecise, so I suspect he never will.

### Nothing to bite

As a continuation of the thread on prep that I was causing to veer onto a tangent, this thread has started precisely as a discussion of how to fill in any blanks that are present. It is entirely pointless to start such a discussion by tossing around allegations about what different people may or may not do. A contribution from you would be more welcome if it were actually about the topic of discussion, rather than your feelings about another poster. Would it take a weak or strong notion of fairness for an infinite number of marcos to express a constructive contribution to this discussion?

You're always free to have any discussion you feel like, but from my perspective, these are not allegations but a very mild summary of Hewitt's behavior on this platform.

We wouldn't be having this discussion if he had worked out his theories and got them commonly accepted.

If you feel like, discuss, just see how far you'll get.

### Thanks for your constructive contributions

Andrew,

Thanks for your constructive contributions to these discussions.

Cheers,
Carl

### Actors are implemented

marco, I generally agree with you, but IIUC (I'm no expert) the theory of actors is actually accepted in the community — there are old papers on actors by multiple authors, and nowadays implementations abound (ActorScript is a different story). For instance, in the Scala community actor libraries are used in completely non-academic contexts. So people should be able to figure this one out.

Now, I'm still not an expert, and I don't know whether all those libraries are accurate implementations of actors, but such a discussion should be easier.

### actors != Actors

Carl has very specific notions that, put together, lead to non-obvious and non-trivial properties. For example, I suspect this requires restrictions beyond what Erlang allows. I don't understand them, but yes, agree that a reference implementation seems overdue. Heck, even a model in say PLT Redex or Alloy.

### Actors are intended to model *all* digital computation

Actors are intended to model all digital computation thereby placing many constraints on the Actor Model.

### Echo chamber

Carl, intent aside, even a trivial reference implementation would enable more people to more constructively communicate with you.

### This is most constructive

This is the most constructive comment I've read on LtU in a long time, and very good advice to take to heart!

### Like Quantum Mechanics, the Actor Model can't be "implemented"

However, there are lots of implementations of "actors." :-)

### Simulation

Any physical process can be simulated. If the actor model is physically realisable then it can be simulated.

### Second Order Semantics

My limited take on this is: the description of the Actor Model is in second-order logic. As such it defines a single model (up to isomorphism). Such a thing is not concretely realisable. You can build a first-order model, but that would only be one possible model, and not all the possible models.

So you can build systems that conform to the specification, but you cannot build the specification.

Another way to think of this is that the second-order definition of the natural numbers defines a single model (up to isomorphism), but I cannot actually write any of these numbers down. To write them down I have to choose a representation like:

succ(succ(zero))

or

s(s(z))


These are both implementations of natural numbers, and they are different; however, they are isomorphic, and hence compliant with the second-order specification.

So this is why you cannot implement the actor model, as when you choose a representation you are no longer talking about all possible representations (which the specification does).

### Quantum: simulate electron; Actor Model: simulate account

In Quantum Mechanics, an electron can be simulated. In the Actor Model, a financial account can be simulated.

### While true, these statements

While true, these statements are not exhaustive. We can also build a simulation of QM within which the behaviour of any energy/matter can be shown. Should we not be able to build a simulation of the Actor Model in which the behaviours of actors are shown?

### Actor Model should be able to simulate any digital computation

The Actor Model should be able to simulate any digital computation.

### Why don't you just separate out a

Why don't you just separate out a useful-to-engineering, physically realizable subset?

I'm sure you've already done that. You're just being coy.

And I get that the feature set of the computing version of Actors is more designed for modeling electronic systems than it is a good feature set for programming or implementing them.

If you expect programmers to use Actors then you should separate out a third set of semantics that's optimized for programming rather than modeling.

I'm getting tired of reading pronouncements that are more like the Tao Te Ching than like engineering.

Here, call them:
• Mathematical actors
• CS Modeling actors
• Programming actors
And always keep it clear which you're talking about.

I recommend for programming actors you allow various limitations on messages, such as being delivered in some kind of order, or allowing you to specify which messages have a possibility of failing delivery (or failing execution) and having ways to wait for success or roll back on failure, or propagate failure, or run different code on failure.

### The goal of ActorScript is to specify implementations of Actors

The goal of ActorScript is to specify implementations of Actors including exception processing, timeout processing, lists for ordering, as well as many other capabilities.
Suggestions for improvement are greatly appreciated.

However, as with IP packets, if message M1 is sent before M2, then M2 can be received before M1.

### Inconsistent specifications or Delphic Obscurity?

From the terse little comments of Hewitt's that I've read, it has sounded to me as if what he says is trivially self-contradictory.

I dealt with this by assuming that he wasn't communicating well or couldn't keep the context of the conversation straight; i.e., whatever he said, since it made no sense, I assumed it was just wrong.

If I remember correctly he's said:
1) there is only one message in transit at a time [God knows the context, if that means in a specific program or between any two actors ever, or in some abstract sense, ever]
2) messages are not buffered
3) messages from the same source to the same target can be executed out of order

1) sounds like a bad idea; it implies that the source must unnecessarily wait after sending to be sure that the message is received.
2) sounds like a worse idea and makes actor programs almost synchronous. In such a model, if you send two messages to the same target then you have to wait for the first message to be received and processing on it to begin before you can send the second.
3) seems impossible if 1 and 2 hold. If two messages can't be sent at the same time, and so the newer can't pass the older one in transit, and if they aren't buffered, so the newer one can't wait in parallel with the older one, then it is not possible for them to be executed out of order.

### Confusions

I've had the same misconceptions myself, so perhaps I can offer a little clarity. My initial starting point was to think of actors as processes that sent messages to one another. Conceptually similar to most message passing implementations: there is a representation of a message that is "owned" by the system in transit between two processes. The processes have private memory spaces so once a message is in transit they have handed authority for it over to the system.

This is not the kind of system that the actor model describes, although it is one possible implementation of it.

A more helpful mental picture is to think of the runtime in a simple imperative language that uses the stack to implement recursion. Each instance of a procedure has an activation record that stores local data. These activation records are essentially private memory spaces for independent execution contexts. Normally these pieces of execution occur in sequence, not in parallel. But when a call is made, not every expression in the program that follows is dependent on the result being passed back, and there is an opportunity for parallel execution.

Calling a procedure / activating a new record is also a possible implementation of an actor, and one that might be more helpful conceptually. There are no messages in transit - just frames being built from parameters and return values being calculated by consuming them. Some of these activations / program paths / procedure calls may be valid to execute in parallel, so we can think of an implicit fork being inserted on the call.

The system is then responsible for choosing which order to process these execution contexts in. Suspending / delaying one will not affect the execution of any others that were determined to be independent before forking. So the program defines a partial order of executions over these actors, and the system is free to pick any total order that is compatible.

1. There are basically no messages in transit at any time, but there may be suspended computations where the number of active "threads" is greater than the number of processors.
2. Messages are not required to be buffered - there is no mailbox / message queue per actor. The system as a whole may need to hold some kind of buffer as mentioned in point 1. If the system is large enough (number of processors > number of concurrent actors -> bandwidth in the communication graph is lower than some constant defined by the implementation) there should be no buffering or waiting and the computation fans out across the possible resources kind of like a fork bomb.
3. There are different orders in different contexts: each actor has a partial / total ordering of messages received. The system as a whole may hold a different partial / total ordering of messages processed that must be compatible with all of the smaller orders. Points 1 and 2 are compatible with point 3 in some cases - it is not strictly impossible.
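The "program defines a partial order, the system picks any compatible total order" picture can be sketched in a few lines of Python. This is a toy of my own, not a claim about any real implementation:

```python
import random

def any_compatible_total_order(prereqs, seed=0):
    """Pick one total order compatible with a given partial order.
    `prereqs` maps each task to the set of tasks that must run first,
    i.e. the fork structure fixed before scheduling begins."""
    rng = random.Random(seed)
    done, order = set(), []
    remaining = set(prereqs)
    while remaining:
        # tasks whose prerequisites have all executed
        ready = [t for t in remaining if prereqs[t] <= done]
        t = rng.choice(sorted(ready))   # the system's free choice
        order.append(t)
        done.add(t)
        remaining.remove(t)
    return order

# A program forks B and C after A; D needs both results.
prereqs = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
order = any_compatible_total_order(prereqs, seed=1)
# Whatever the random choices, A comes first and D comes last;
# B and C may run in either order.
```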

### Actor implementations are a work in progress

Actor implementations are a work in progress.

The following points are relevant:
1) At any time, a large number of messages can be in transit in the infrastructure.
2) In order to avoid silent failures, the infrastructure (in accordance with policies in place) can throw an exception for any request for which a response has not been received.
3) For similar reasons as packet switched networks, if a message M1 is sent before another message M2, then M2 can arrive before M1.

### Thanks for the new topic :-)

Andrew,

Thanks for the new topic.

In order to better organize the discussion, I will respond with individual posts on a number of subtopics.

Regards,
Carl

### Random Number Generator (RNG) not required

The Actor Model does not require a Random Number Generator (RNG).

Instead, indeterminacy (as in physics, e.g. quantum theory) arises out of the model.

### Implementation of CreateUnbounded reiterated

For convenience, the implementation of CreateUnbounded is re-iterated below:

CreateUnbounded.[ ]:Integer ≡
Let aCounter ←  SimpleCounter.[ ]｡    // let aCounter be a new Counter
Prep □aCounter.go[ ]｡  // send aCounter a go message and concurrently
□aCounter.stop[ ]▮ // return the result of sending aCounter a stop message


CreateUnbounded.[ ] is an unbounded integer.

As a notational convenience, when an Actor receives a message it can send an arbitrary message to itself by prefixing it with “..” as in the following example:

Actor SimpleCounter[ ]
count ≔ 0,   // the variable count is initially 0
continue ≔ True｡
implements Counter using
stop[ ] →
count  // return count
afterward continue ≔ False¶ // continue is updated to False for the next message received
go[ ] →
continue ❓
True ⦂ Prep count ≔ count+1⚫｡    // increment count
hole ..go[ ] ⍌  // In a hole of the cheese, send go[ ] to this counter
False ⦂ Void▮  // if continue is False, return Void

Interface Counter with
stop[ ] ↦ Integer,
go[] ↦ Void▮



### No timer manifest in implementation of CreateUnbounded

There is no timer manifest in the implementation of CreateUnbounded.

### Implementation of CreateUnbounded is not a finite state machine

The implementation of CreateUnbounded is not a finite state machine.

### No message queue manifest in implementation of CreateUnbounded

No message queue is manifest in the implementation of CreateUnbounded.

### Every message sent to a functioning Actor will be received

Every message sent to a functioning Actor will be received. I.e., if an Actor continues to function, then every message that it is sent will be received.

### Order of Sending

Is it possible that aCounter receives 'stop' before 'go'? I would guess not, because sending is a partial order, but usually concurrency implies both tasks start at the same time. This would mean (and I hope I am not getting ahead of myself) that the semantics of 'small circle' are not concurrency but an ordering where order is preserved if there are side-effects, and order is not necessarily preserved (i.e. concurrent) only if there are no side-effects?

### From the comments it can get the stop first

in which case stop returns 0
and the go, just returns void

### Are Actors a Good Model for Computation

I am going to start a new topic, as I don't want a repeat of the last question I asked about this example. It follows on from the thought that if I don't know which message will be received first, does this make it harder to write useful programs.

### Unbounded number of go[ ] messages can be received

An unbounded number of go[ ] messages can be received before the stop[ ] is received.

### Non-deterministic ordering

There is always at least one go message that has not been processed when the actor processes a message. The choice of ordering is completely non-deterministic: every possible ordering can be chosen, without respect to fairness. At any point in the execution there are exactly two messages that have not yet been received. One ordering of the possible pair chooses the go message next. This results in a fixed point: the configuration of the whole system (the set of not-yet-received messages) is the same as in the previous activation, and the next set of possible orderings is the same as before. This fixed point allows an execution of infinite length and violates your first condition of unbounded nondeterminism.

The claim that this program is valid under non-deterministic ordering of messages contradicts the requirement that every message is received in a finite number of steps.
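To make the fixed point concrete: a scheduler that is free at each step to pick any not-yet-received message can legally pick go every time. Here is a toy Python sketch of the CreateUnbounded message set under such a scheduler (names are mine, not from any actor implementation):

```python
def run(scheduler, max_steps=1000):
    """Simulate CreateUnbounded's pending-message set under a scheduler
    that picks which pending message is received next."""
    pending, count, running = ["go", "stop"], 0, True
    for _ in range(max_steps):
        if not pending:
            break
        msg = scheduler(pending)
        pending.remove(msg)
        if msg == "go" and running:
            count += 1
            pending.append("go")   # the counter re-sends go to itself
        elif msg == "stop":
            running = False
    return count, running

# An adversarial, but step-wise legal, choice: always deliver 'go' if present.
always_go = lambda pending: "go" if "go" in pending else "stop"
count, running = run(always_go, max_steps=1000)
# After any finite number of steps, 'stop' is still undelivered:
# count == 1000 and running is True.
```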

### Whence "without respect to fairness"?

The choice of ordering is completely non-deterministic: every possible ordering can be chosen without respect to fairness.

Where are you getting this possibility? You yourself prove that it's contradictory with the requirement of unbounded nondeterminism. Are you arguing from the standpoint of an implementation rather than the requirements, then?

### From the published work on the actor model.

From Carl's comments on it over the years. From the descriptions of the partial orders of message receptions in an actor system. I am not producing this possibility - it has been argued all along. Read some of Marco's exchanges with Carl in the past five years.

### I haven't found what you're referring to

I'm sorry, I'm probably opening my mouth without knowing lots of that background information. Five years ago, I only read a few LtU threads here or there when I followed a link from somewhere else. I haven't ever looked up Actors in depth--mostly just read about them on LtU. So, I don't know what kind of established research on the Actor model would prepare me to disagree with what Hewitt currently says the Actor model is.

If I look at the LtU thread "What is computation? Concurrency versus Turing's Model" from November 2010, I still see Hewitt talking about how unbounded nondeterminism lets the Actor model model an always-halting machine that computes integers of unbounded size. So as far as I can tell, this isn't a new claim or even a new example. Is there some old conversation or paper you have in mind in particular?

### Indeed

I know which thread you mean, as I read it again recently. The example of createunbounded is not new at all: Carl has presented it as an example that exists within the actor model, but not on a Turing Machine. The conclusion of that 2010 thread was inconclusive in only one respect: the NDTM formulation was not guaranteed to terminate, although there is only a single non-terminating trace possible. That was close enough to a guarantee of termination for most commentators, but it was not entirely conclusive, on a minor technicality.

The new claim, not discussed at the time, is that the example program in the actor model suffers from exactly the same issue with exactly the same divergent trace. That is: the example does not show a difference between TMs and actors. AFAIK this has not been raised in the following five years of discussion, although the claim of a difference in computational power has been made repeatedly. I think that claim now requires some substantiation beyond the example presented as the contradiction shows that this program cannot be a valid interpretation of the semantics of the actor model.

### Isn't the point that the

Isn't the point that the semantics of the actor model are not to repeatedly randomly pick which of the remaining messages is delivered? Rather, the semantics is approximately like this: when a message is sent, a delivery time is picked at random from the set of unpicked delivery times. Thus the 'stop' message will always be delivered after finite time and thus after at most finitely many 'go' messages have been sent.

It seems plausible to me that this is a convenient property to have in a model, though I don't think it's particularly interesting or important.
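That reading can be sketched concretely. The sketch below is my own and deliberately draws slots non-uniformly (roughly geometrically), since a uniform draw from all of N is impossible; the point is only that stop lands in some finite slot, bounding the number of gos:

```python
import random

def run_with_delivery_slots(seed):
    """Toy reading of 'pick a delivery time at random from the unpicked
    times': each send draws a fresh, never-reused delivery slot, and
    messages are processed in slot order."""
    rng = random.Random(seed)
    used = set()

    def draw_slot():
        # Skip used slots; otherwise advance with probability 1/2, so the
        # draw is geometric-ish and always terminates.
        t = 0
        while t in used or rng.random() < 0.5:
            t += 1
        used.add(t)
        return t

    go_slot = draw_slot()      # the initial go[] message
    stop_slot = draw_slot()    # stop[] lands in some finite slot
    count = 0
    # Deliver in slot order: each go delivered before stop sends a new go.
    while go_slot < stop_slot:
        count += 1
        go_slot = draw_slot()
    # Only finitely many slots precede stop_slot, so this loop always ends.
    return count, stop_slot

count, stop_slot = run_with_delivery_slots(seed=7)
```

If the initial go happens to draw a later slot than stop, the count is 0, matching the observation elsewhere in this thread that stop can arrive first and return 0.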

### circular argument?

If we can pick some unbounded but finite time in the future, we certainly can use this to produce an unbounded but finite integer. But how would we do the former? Actors semantics seem to simply assert it is possible, something like an axiom of choice. But (relevant to the OP topic) I don't know of any physical process to implement those semantics that can be distinguished from the potentially non-terminating NDTM.

OTOH, it also seems valid to interpret actors model non-determinism as freedom for the implementation (rather than a requirement). For example, it's perfectly valid under actors model semantics if CreateUnbounded ALWAYS returns 0 in a particular implementation. (Similarly, it's valid if CreateReal always returns an infinite stream of zeroes, unless ActorScript asserts some probabilistic properties on the choice construct.) Hence, we could reasonably argue that actors model is trivial to implement. But we should not assume all the properties Hewitt claims have a physical or computational realization.

### Yes

I've recently posted a comment addressing your first point, but I can't find it. Physically, the NDTM is exactly what we can implement, but it can still be a useful fiction to assume eventual message delivery. You can even model actors using NDTMs, but just have to be careful to interpret statements about eventual delivery as being true "except on a set of measure zero". The advantage is that reasoning about eventual delivery is easier than computing probabilities and showing that the odds of non-delivery goes to zero.

I also agree with your second point. In general, it looks like you can model actors and message delivery with a global queue such that messages are delivered in FIFO order, one per tick. Slow and inefficient, but works as a reference implementation. Has Hewitt argued that a physical manifestation of ActorScript is supposed to allow all possible traces present in the mathematical model? That would be an odd constraint to insist upon.

[Ok, I really need to stay out of these actor threads for a while. If you see me post to one, please yell at me. ]
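The global-FIFO-queue reference semantics mentioned above might look like this sketch (my own naming, not anyone's actual API). Note that it realizes exactly one of the model's traces: under FIFO delivery, each re-sent go lands behind stop, so the counter always stops at 1.

```python
from collections import deque

class FifoActorSystem:
    """Reference semantics: every send goes on one global FIFO queue and
    exactly one message is delivered per tick, in sending order."""
    def __init__(self):
        self.queue = deque()
        self.handlers = {}

    def spawn(self, name, handler):
        self.handlers[name] = handler

    def send(self, target, msg):
        self.queue.append((target, msg))

    def run(self, max_ticks=10000):
        ticks = 0
        while self.queue and ticks < max_ticks:
            target, msg = self.queue.popleft()
            self.handlers[target](self, msg)
            ticks += 1
        return ticks

state = {"count": 0, "running": True}

def counter(system, msg):
    if msg == "go" and state["running"]:
        state["count"] += 1
        system.send("counter", "go")   # re-sent go queues behind stop
    elif msg == "stop":
        state["running"] = False

fas = FifoActorSystem()
fas.spawn("counter", counter)
fas.send("counter", "go")
fas.send("counter", "stop")
fas.run()
# state["count"] == 1: exactly one go is delivered before stop.
```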

### I do not believe so

But it is not a particularly strong belief. If the actor model is a semantics for computation then it should decide which programs are valid for execution on a physically realizable process. It should answer the question: given a particular program, is T a possible trace of the program's execution?

I am certainly no mathematician, but roughly speaking the actor model claims that for the createunbounded program the set of traces is compact and complete for all traces up to, but not including omega. The argument about whether or not that is a feasible claim has played out at length in one of the previous threads.

My interest is much more specific than that: is there an effective decision process such that given the set of outstanding messages a possible ordering of them can be judged as valid or rejected. This decision would actually need to be encoded in a simulation of a machine that implements the actor semantics.

The claim above roughly translates to: there does not exist a decision procedure that can judge which possible orderings of the outstanding messages are valid in the actor model. After I finish reading Clinger I will translate it into a more appropriate language, as it is currently somewhat fuzzy as stated.

### Unpicked delivery times

Rather, the semantics is approximately like this: when a message is sent, a delivery time is picked at random from the set of unpicked delivery times.

When I read this a few days ago an objection sprang to mind, which was that it is not possible to take a uniform sample from N (which presumably is the set of future delivery times). If it was possible then the output of createunbounded would be needed to do so, so it seems that the implementation requirements are a bit circular.

Reading gasche's latest submission today I came across this lovely pun on one of the slides:

sto·cha·stic /stō-ˈkas-tik/ adj. fancy word for "randomized"

Which seems particularly apt, as one of the definitions of randomized is "shuffled". What if the set of unpicked delivery times becomes a simple list containing the total order of undelivered messages? Picking the delivery time is inserting the new message into a random position in the list. This would remove the need to decide whether a message is delivered by flipping a coin, and would avoid admitting an infinite trace where no delivery occurs. It seems plausible as a concrete way to enforce the guarantee of eventual delivery for any program that does not enter livelock.
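A toy sketch of that shuffled-list reading (my own code). One caveat worth noting: with only two messages ever in flight, each insertion amounts to a coin flip on whether the new go lands ahead of stop, so delivery of stop is only almost sure rather than certain:

```python
import random

def run_shuffled(seed, limit=100000):
    """Undelivered messages live in one totally ordered list; a new send
    is inserted at a random position, and delivery always takes the front."""
    rng = random.Random(seed)
    pending = []

    def send(msg):
        pending.insert(rng.randrange(len(pending) + 1), msg)

    send("go")
    send("stop")
    count = 0
    for _ in range(limit):
        msg = pending.pop(0)
        if msg == "go":
            count += 1
            send("go")          # the counter re-sends go to itself
        else:
            return count        # stop delivered after finitely many steps
    return None                 # stop still pending after `limit` steps

result = run_shuffled(seed=7)
```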

### Flipping coins by way of shuffling

When I read this a few days ago an objection sprang to mind, which was that it is not possible to take a uniform sample from N (which presumably is the set of future delivery times).

That crossed my mind too, but "picked at random" conspicuously doesn't say "uniformly." It could be a hypothetical operation that not only has a zero probability of taking infinite time, but in fact has no possibility of taking infinite time. Not that we could tell the difference.

Picking the delivery time is inserting the new message into a random position in the list. This would remove the need to decide if a message is delivered by flipping a coin, and admitting an infinite trace where no delivery occurs.

In the example of CreateUnbounded.[], we keep removing and adding "go" messages, so that would be just like flipping a coin.

### In "Prep", "｡" separates preparations from return expression

In Prep, "｡" separates preparations from the return expression.

### demystifying the "message queue" question

To demystify a bit:

"The Actor Model is a mathematical theory that treats “Actors” as the universal primitives of digital computation." - Actor Model of Computation: Scalable Robust Information Systems

That should be understood quite literally, I believe. A weakness of the recent presentations is that this theory is not plainly and compactly exhibited!

Nevertheless, we have some key axioms that describe message sending and message receipt:

"The Actor model makes use of two fundamental orders on events [Baker and Hewitt 1977; Clinger 1981, Hewitt 2006]:

1. The activation order (⇝) is a fundamental order that models one event activating another (there is energy flow from an event to an event which it activates). The activation order is discrete:

  ∀[e1, e2 ∈ Events] → Finite[{e ∈ Events | e1 ⇝ e ⇝ e2}]


There are two kinds of events involved in the activation order: reception and transmission. Reception events can activate transmission events and transmission events can activate reception events.

2. The reception order of a serialized Actor x (→x) models the (total) order of events in which a message is received at x. The reception order of each x is discrete:

  ∀[r1, r2 ∈ ReceptionEventsx] → Finite[{r ∈ ReceptionEventsx | r1 →x r →x r2}]


The combined order (denoted by ↷) is defined to be the transitive closure of the activation order and the reception orders of all Actors. So the following question arose in the early history of the Actor model: “Is the combined order discrete?” Discreteness of the combined order captures an important intuition about computation because it rules out counterintuitive computations in which an infinite number of computational events occur between two events (à la Zeno).

Hewitt conjectured that the discreteness of the activation order together with the discreteness of all reception orders implies that the combined order is discrete. Surprisingly, [Clinger 1981; later generalized in Hewitt 2006] answered the question in the negative by giving a counterexample: [....]

The resolution of the problem is to take discreteness of the combined order as an axiom of the Actor model:

  ∀[e1, e2 ∈ Events] → Finite[{e ∈ Events | e1 ↷ e ↷ e2}]


- ibid.

The non-determinism inherent in the Actor model arises simply because the axioms of message transmission and receipt do not allow a definite delivery order to be determined deductively.

Instead, these axioms only assure that messages are received by a given actor in a total order, that they are sent by all actors in a partial order, and that the transitive closure of these two orders contains only a finite number of events between any pair of events.
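On a finite set of events these orders can be checked mechanically. Here is a small Python sketch of my own that computes the events strictly between two events in the combined order (the transitive closure of the activation edges plus each actor's reception order); the discreteness axiom quoted above asserts that this set is finite for every pair:

```python
from itertools import product

def combined_between(activation, receptions, e1, e2):
    """Events strictly between e1 and e2 in the combined order."""
    events = set()
    edges = set(activation)
    # each reception order is a list of reception events at one actor
    for order in receptions:
        edges |= {(a, b) for a, b in zip(order, order[1:])}
    for a, b in edges:
        events |= {a, b}
    # Floyd-Warshall-style transitive closure over the finite event set
    reach = set(edges)
    for k, i, j in product(events, repeat=3):
        if (i, k) in reach and (k, j) in reach:
            reach.add((i, j))
    return {e for e in events if (e1, e) in reach and (e, e2) in reach}

# Transmission s1 activates reception r1, r1 activates transmission s2,
# s2 activates reception r2; actor x receives r1 before r2.
activation = [("s1", "r1"), ("r1", "s2"), ("s2", "r2")]
receptions = [["r1", "r2"]]
between = combined_between(activation, receptions, "s1", "r2")
# between == {"r1", "s2"}
```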

Of course, if you thought you were asking about how to implement ActorScript on your workstation so you can play with an ActorScript interpreter and get a feel for it, the mathematical explanation is informative but gives little guidance.

The most general answer to "How can ActorScript be implemented?" is that it can be implemented in any way that satisfies the axioms, at least with regards to ActorScript programs that can actually be constructed on that implementation.

Because message passing is taken as fundamental in the Actor Model, there cannot be any required overhead, e.g., any requirement to use buffers, pipes, queues, classes, channels, etc. Prior to the Actor Model, concurrency was defined in low level machine terms.

It certainly is the case that implementations of the Actor Model typically make use of these hardware capabilities. However, there is no reason that the model could not be implemented directly in hardware without exposing any hardware threads, locks, queues, cores, channels, tasks, etc. Also, there is no necessary relationship between the number of Actors and the number threads, cores, locks, tasks, queues, etc. that might be in use. Implementations of the Actor Model are free to make use of threads, locks, tasks, queues, global, coherent memory, transactional memory, cores, etc. in any way that is compatible with the laws for Actors [Baker and Hewitt 1977].

-ibid.

### Levels of abstraction

The most general answer to "How can ActorScript be implemented?" is that it can be implemented in any way that satisfies the axioms

I think that this is a key point. And one that is often missed in discussions of the Actor model. The basic Actor model is a fundamental and very general set of axioms that are intended to define how "things that interact through message passing" might behave. There are many possible implementations that may satisfy the axioms. Presumably a queue-based system could satisfy the axioms, as might a bag-based messaging system.

That said, I think that some of the questions about the Actor model that seem to get repeatedly raised have to do with whether or not some of the axioms are realistic and genuinely implementable. Guaranteed delivery seems to be a particular sticking point. But at that point we get derailed into implementation-level details to do with time-outs, exceptions, and various other ways of dealing with the reality that messages will not always be delivered. Which we could probably ignore if we stuck with Actors as a model. But instead we're told that Actors are both a model, and a full-blown implementation (that apparently includes some undefined run-time infrastructure to handle time-outs and exceptions). Unsurprisingly, people get confused by the shift in levels of abstraction.

### re people get confused by the shift in levels of abstraction.

[....] people get confused by the shift in levels of abstraction.

Very well said (the whole comment).

A constructive and potentially very interesting exercise might be to pick a particular communications medium (e.g. email or HTTP or hard-wired circuits), come up with some mathematical theory that describes that form of communication and its failure modes, and then show how objects of type Actor can be constructed while satisfying the additional constraints of that medium.

### Instead of arguing over what Hewitt intended

it would be more useful to just come up with useful semantics that are similar and embed them in a language.

1) because Actors don't have to return values except by sending a new message, we can have (the equivalent of) locking without any possibility of deadlocks. No more than one lock is held at a time in a chain of execution. That's a great advantage.
2) A model that simple can be optimized so that the cost of locking is (on average) much less than a normal critical section.
3) The model can be transparently extended to network communications rather than multiprocessing.

That's a hell of a lot of advantages over common methods. We don't need anything more from Hewitt, we don't have to care about the rest.

I would say we should:
1) implement this at a low level in concurrent systems so that we can see performance improvements
2) Even Hewitt sees "ActorScript" as an add-on to existing languages. We don't need any language to experiment with being "actors all the way down"; it can be a tool that we mix with existing tools.

We shouldn't be arguing "what did Hewitt mean"; we should be arguing "what is the most useful way, or ways?"

The fact that messages don't have to arrive in order simplifies things, but we probably want to include the easy-to-implement guarantee that "messages from a single source to the same target arrive in order".

Why not?
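For what it's worth, that per-sender ordering is cheap to recover even on top of an unordered transport. The sketch below is my own, with hypothetical names (nothing here is from Hewitt's papers): each sender stamps its messages with a sequence number, and the receiver holds early arrivals in a reorder buffer.

```python
# Sketch (hypothetical): restoring per-sender FIFO order on top of an
# unordered transport, using sequence numbers and a reorder buffer.

class OrderedInbox:
    def __init__(self):
        self.expected = {}   # sender -> next sequence number due
        self.pending = {}    # sender -> {seq: message} held back, out of order

    def receive(self, sender, seq, message):
        """Accept one (possibly out-of-order) message and return whatever
        is now deliverable, in original send order."""
        self.pending.setdefault(sender, {})[seq] = message
        nxt = self.expected.get(sender, 0)
        ready = []
        while nxt in self.pending[sender]:
            ready.append(self.pending[sender].pop(nxt))
            nxt += 1
        self.expected[sender] = nxt
        return ready

inbox = OrderedInbox()
assert inbox.receive("a", 1, "second") == []                  # held back
assert inbox.receive("a", 0, "first") == ["first", "second"]  # both released
```

This is essentially what TCP does per stream, restricted to a single sender/target pair.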

### Time-out contracts

If one wanted a useful system that could handle a process stalling or a machine dropping out of a network, then you'd have to have some sort of delivery contract.

One can imagine optimizable kinds of contracts with simple triggers and possibly more complex tests to run if the triggers go off.

Simple triggers could be either time-out or some sort of phase change (a phase of processing ends, all unfinished messages relating to that phase have to be handled).

The question of how you handle a process that gets out of sync - if it finally responds to canceled messages - is interesting too. Perhaps there can be some idea of messages having a pedigree - i.e. Message B is a response to Message A, and if A is canceled then B is invalid... Or should the actor that received the canceled message revert?
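One way to make the pedigree idea concrete - purely my own sketch, with hypothetical names - is to taint every message whose ancestor was canceled, so late responses to canceled requests are dropped rather than acted on:

```python
# Sketch (hypothetical): message "pedigree" - a response inherits the fate
# of the message it answers, so canceling A also invalidates B, a response to A.

class PedigreeFilter:
    def __init__(self):
        self.canceled = set()

    def cancel(self, msg_id):
        self.canceled.add(msg_id)

    def admit(self, msg_id, parent_id=None):
        """Reject a message if it, or the message it responds to, was canceled."""
        if msg_id in self.canceled or parent_id in self.canceled:
            self.canceled.add(msg_id)   # taint the whole lineage
            return False
        return True

f = PedigreeFilter()
f.cancel("A")
assert f.admit("B", parent_id="A") is False   # late response to canceled A
assert f.admit("C") is True                   # unrelated message passes
```

The "revert" alternative would instead need the receiving actor to checkpoint its state before each reception, which is a much heavier contract.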

### Josh, we formed this thread

Josh, we formed this thread explicitly to discuss the Actor model, not your separate ideas.

### We have to discuss speculative topics as subthreads

Because LtU's policies state:
"It is a forum for informed professional discussion related to existing work. "

I am not, technically, allowed to create a purely speculative topic as a separate thread.

So you'll have to put up with people who want to have useful discussions within other topics.

### A message sent to the same Actor before another could arrive OoO

A message sent to the same Actor before another could arrive Out of Order; just like two packets on the Internet. Uniformly requiring otherwise can impose overhead.

I think that within a shared-memory model, on current hardware, delivering in order when it's the same source would be cheaper.

Messages on the same processor are just jumps.

Messages to other processors might be cheapest as appends to a ring-buffer that is only written by the processor doing the writing and only read by the processor that will process them (maybe even one buffer per actor?).

Although if you used an atomic pushdown stack you could get rid of having to deal with overflows - but then you'd also have to use atomic instructions (very slow). In that case, messages would always be in reverse time order, but they could be batch-removed and then read in reverse order to fix it.
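A rough sketch of that stack-based mailbox (mine; a lock stands in for the atomic compare-and-swap/exchange a real implementation would use, since Python has no CAS) shows the batch-remove-and-reverse trick:

```python
import threading

# Sketch (mine): a mailbox built on a pushdown stack. Senders push with one
# atomic operation; the reader detaches the whole stack in one exchange,
# then reverses the batch once to recover send order.

class StackMailbox:
    def __init__(self):
        self._top = None            # linked stack of (message, rest) pairs
        self._lock = threading.Lock()

    def send(self, msg):
        with self._lock:            # stands in for an atomic push
            self._top = (msg, self._top)

    def drain(self):
        with self._lock:            # stands in for an atomic exchange with empty
            node, self._top = self._top, None
        batch = []                  # messages come off newest-first...
        while node is not None:
            msg, node = node
            batch.append(msg)
        batch.reverse()             # ...one reversal restores time order
        return batch

m = StackMailbox()
for i in range(3):
    m.send(i)
assert m.drain() == [0, 1, 2]
```

Note the stack never overflows the way a fixed-size ring can, which is the trade-off mentioned above.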

For writing to a network, well what's significant overhead depends on the design of the network.

### Overhead depends on acknowledgment latency for first message

The overhead depends on the latency of the acknowledgment that the first message has been received before the second message can be sent.

### lock step messages

Well it's interesting. So actors serialize acknowledgement going back to a specific actor - that actor can't proceed until its previous message is acknowledged.

Obviously, for a shared bus/memory system, you don't have to wait - there could be no message failure except an extreme hardware failure that would bring down the computer.

For other systems, waiting for ack suggests that a system can only have high throughput if it has other work or has a huge number of actors running in parallel.

An alternative would be to have different levels, not requiring lock-step acknowledgement for some messages, such as communicating over TCP where messages are serialized and the error, if it comes will be for the stream not a specific message.

Given that you have a model, lock step, that minimizes how far you can get ahead of an error, do you have an error-handling framework for that? Also, what if the acknowledgement got lost but the actor on the other side DID get the message and acted on it?

### No lock step in Actor Model

In general (e.g. where Internet packet communication might be involved), to guarantee that the first message is received before the second, there must be some ack for the first message before the second message is sent.

### How is that not lock-step

within the actor (not relative to other actors)?

I realize that multiple actors run in parallel, so it's not holding back others from sending messages.

### Do you mean "received" or "acted on"?

One reason a message may not be acted on is if the previous message handler never terminates.

When you say "acknowledged" do you mean "successfully sent across a network" or do you mean "the target actor started acting on the message?"

### A message must be received and processed in order to be acked

In terms of modeling, a message must be received and processed in order to be acked.

(Sorry for the cross posting. This topic was forked.)

### No ack needed for indeterminate message ordering

For other systems, waiting for ack suggests that a system can only have high throughput if it has other work or has a huge number of actors running in parallel.

Right. I think Hewitt's saying the Actor model doesn't guarantee an ordering on message arrival, so it doesn't need to wait on an ack, so it doesn't have poor throughput in the sense you describe.

Also what if the acknowledgement got lost but the actor on the other side DID get the message and acted on it?

Hmm, I might not know what I'm talking about, but:

The ack message itself would enjoy guaranteed eventual delivery. However, this situation relies on one Actor successfully actually sending the ack message and the other Actor successfully processing the fact that it has been received. Implementations of Actor designs might be a somewhat different Actor than intended due to bugs in the implementation, such as being implemented on hardware that suffers a power outage.

### Oh you mean he meant manual ack not automatic

I see.

It did sound like that, but before that he wrote "The overhead depends on acknowledgment latency that the first message has been received so that the second message can be sent." Which implies that the second message can't be sent until there is an ack on the first.

I guess the more likely interpretation is that when he talks about acks he means manually acking.

It's frustrating that these short posts are ambiguous.

He also said elsewhere that there's a guarantee of eventual delivery, which implies that an ack is expected, eventually.

Ok, so he recommends a mode where all acknowledgement is manual?

The lock-step mode that I thought he meant would make error recovery easier, so it's not useless.

Also, one answer to my earlier question: "Also what if the acknowledgement got lost but the actor on the other side DID get the message and acted on it?" - if messages are signed/checksummed/have-counters then we can say that the receiver is the final arbiter of what messages it got, since it can verify on its own that they're real. The other side can resend a message as well, and the receiver can detect if it's a duplicate.

But everything is easier if you use a serial protocol like TCP where the whole stream is kept in order and correct.
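The counter idea might look like this (my own toy rendering, hypothetical names): the receiver records which per-sender counters it has already processed, so a resend after a lost ack is recognized as a duplicate rather than acted on twice:

```python
# Sketch (hypothetical): the receiver as "final arbiter". Each sender stamps
# its messages with a monotonically increasing counter; the receiver records
# which counters it has processed, so a resend after a lost ack is detected
# as a duplicate instead of being applied twice.

class DedupReceiver:
    def __init__(self):
        self.seen = {}      # sender -> counters already processed
        self.log = []       # effects actually applied

    def deliver(self, sender, counter, payload):
        seen = self.seen.setdefault(sender, set())
        if counter in seen:
            return False    # duplicate resend: safe to re-ack, not re-apply
        seen.add(counter)
        self.log.append(payload)
        return True

r = DedupReceiver()
assert r.deliver("a", 0, "debit $5") is True
assert r.deliver("a", 0, "debit $5") is False   # resend after lost ack
assert r.log == ["debit $5"]
```

TCP performs the same duplicate suppression with stream sequence numbers, which is why the serial-protocol route feels easier.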

### Answering Andrew Moss' two questions

Is this an accurate description?

I think you have sketched something that, if implemented, could almost but not quite satisfy actor axioms.

Bug 1: The use of a timeout clock is a special case that does not apply to actors-in-general. Actors-in-general are only constrained by an axiom that says that between the sending of a message to an actor and that message's receipt, only finitely many messages are sent or received globally. There is not an upper bound on the time to process a message. There is only a finiteness requirement on how many message-related events happen before a sent message is received.

Note: the timeout clock might be perfectly suitable for Actors in a special-case environment. It's not an invalid implementation technique. Just not a fully general one.

Bug 2: An Actor can not simply "pick a random" next message to execute. In the ordinary sense of "pick a random", that would allow some message to never be picked, which would violate the aforementioned finiteness axiom.
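A toy simulation (mine, not from any of the cited papers) makes the difference concrete: under an oldest-first pick, the number of receptions before any given message is delivered is bounded by the backlog ahead of it, whereas a naive random pick from the bag admits schedules that defer one message indefinitely:

```python
import random

# Toy illustration (mine) of Bug 2: compare an oldest-first pick with a
# naive random pick from the message bag, while fresh work keeps arriving.

def random_pick(bag, rng):
    return bag.pop(rng.randrange(len(bag)))

def oldest_first(bag, rng):
    return bag.pop(0)

def steps_until_delivered(policy, rng, backlog=10, limit=10_000):
    """Receptions before "stop" is picked; one new message arrives per step."""
    bag = [f"old{i}" for i in range(backlog)] + ["stop"]
    for step in range(1, limit + 1):
        if policy(bag, rng) == "stop":
            return step
        bag.append(f"new{step}")    # the bag never empties
    return None                     # never delivered within the limit

assert steps_until_delivered(oldest_first, random.Random(0)) == 11  # backlog + 1
assert steps_until_delivered(random_pick, random.Random(0)) is not None
```

A random pick is only *probabilistically* fair: every finite prefix of the bad schedule (one that keeps passing over "stop") is admissible, so nothing in the mechanism itself enforces the finiteness axiom.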

Is this a complete description - where are the holes in this description that allow a divergence from the behaviour of actors?

It is incomplete in that the system you describe could accurately implement some but not all actors.

From a utilitarian perspective, the system you describe could be a perfectly useful specialized dialect of an actor language, that facilitates writing programs for (a big class of) special cases.

### Outdenting

I will try to respond to several different points; currently they are in different branches of this topic. I would like to tie several of them together, hence replying to the least-common-ancestor of the posts :) I think this responds to some (but not all) of Carl's responses, at least partially.

Thomas’ pointer to [Baker and Hewitt 1977] and the key sections within it has cleared up a lot of my confusion. I can see now that my first attempt at sketching an implementation contained more bugs than sense. My confusion was over what an actor within the system is, how it interacts with the system, and what storage it contains. So, roughly all of its definition. I jumped back to [Hewitt 1974] to see some more examples, and for an introduction to the semantics. I think that Allan’s point about the shift in abstraction level creating the potential for confusion is entirely central.

A second attempt at sketching out the execution semantics would begin with a definition of an actor:

• A source script and some local state as a continuation.
• A unique name

That is, there are no messages in the configurations of the system; rather, message passing is an operational step on the overall configuration. This configuration would be a set of actors as above.

Rather than jumping rashly into an attempt to define how these configurations would evolve, I need the clarification of an example to understand the creation / execution and destruction of these activation records / actors. The simplest example that I can stretch towards would be fibonacci, expressed in a source script something like this?

Actor Fibonacci[n]
  n?
    1 -> 1
    2 -> 1
    otherwise -> Fibonacci(n-1) + Fibonacci(n-2)


I believe that this does define the computation but it feels quite abstract as the message passing / activation order is somewhat implicit. In order to unpack this a little I would attempt to compile it into an intermediate form that is hopefully more concrete. This would look something like this (apologies for abuse of syntax where appropriate) :

Actor Fibonacci[] :
  Request[n] ->
    n?
      1 -> 1
      2 -> 1
      otherwise ->
        partial = 0
        Fibonacci(n-1)
        Fibonacci(n-2)
  Response[n] ->
    state?
      0 -> partial = n
      otherwise -> partial + n


An attempt at describing a trace of the system would then resemble:

Initial request: Fibonacci[3]
Create an actor: Fibonacci#1 with no initial state
Process the message: Fibonacci#1.Request[3]

The execution of this handler would then create the local state partial for Fibonacci#1 and then “send” two messages into the system. The system would implement this send by creating two new actors Fibonacci#2 and Fibonacci#3 (I use an arbitrary numbering for the names here as it would depend on the partial ordering). Each would be waiting to process their initial requests.

Fibonacci#1 would then exist as an actor with name and partial state but no current execution, as it is waiting for a message. Meanwhile Fibonacci#2 can process its request, which creates an immediate response. At this point I should note that I’ve left the details of return names/addresses implicit for now. In some order the system could then choose to process the response inside Fibonacci#1 or the request in Fibonacci#3. The partial state is used to implement a join, as described in [II.6, Hewitt and Baker, 1977], although in this particular instance the information about arrival ordering is deliberately discarded as the plus operator is symmetric. The system would continue with Fibonacci#2 and Fibonacci#3 being deleted from the configuration as they terminate with a result, and then ultimately producing the result from Fibonacci#1 and terminating.
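As a sanity check on this trace, here is a small sequential simulation of the sketch (entirely my own construction; the names and structure are illustrative, not taken from [Hewitt and Baker 1977]): activations are spawned per request, the scheduler pops messages from a global bag in arbitrary order, and the partial slot implements the join:

```python
import random

# Toy simulation (mine) of the Fibonacci trace: each activation is an actor
# with a `partial` join slot, messages live in a global bag, and the
# scheduler pops them in arbitrary order to mirror unordered delivery.

actors, bag = {}, []
counter, result = 0, None

class Fib:
    def __init__(self, customer):
        self.customer, self.partial = customer, None

def send(target, kind, value):
    bag.append((target, kind, value))

def spawn(customer):
    global counter
    counter += 1
    name = f"Fibonacci#{counter}"
    actors[name] = Fib(customer)
    return name

def step(rng):
    global result
    target, kind, n = bag.pop(rng.randrange(len(bag)))  # arbitrary pick
    if target == "caller":              # final result reaches the requester
        result = n
        return
    a = actors[target]
    if kind == "Request":
        if n <= 2:                      # base cases answer immediately
            send(a.customer, "Response", 1)
            del actors[target]
        else:                           # fork two sub-requests; await the join
            send(spawn(target), "Request", n - 1)
            send(spawn(target), "Request", n - 2)
    elif a.partial is None:             # first Response arrives: remember it
        a.partial = n
    else:                               # second Response: complete the join
        send(a.customer, "Response", a.partial + n)
        del actors[target]              # activation terminates with its result

def run(n, seed=0):
    global result
    result = None
    send(spawn("caller"), "Request", n)
    rng = random.Random(seed)
    while bag:
        step(rng)
    return result

assert run(3) == 2 and run(7) == 13
```

Whatever order the scheduler chooses, the symmetric join makes the final value deterministic, which is the point of the partial-state trick.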

tl;dr Yes, I was barking up a tree in the wrong forest the first time, perhaps this would be closer to the right tree?

### Delightful

Looks great, will have a read.

### re unbounded non-determinism

It may be helpful to understand unbounded non-determinism as a property of programs not (necessarily) implementations. (Following [Clinger AITR-633].)

For example:

Consider the Actor program CreateUnbounded.

Actor semantics let us prove:

1. All faithful implementations of the program return a natural number.

2. There is no upper bound on what number might be returned.

A faithful implementation of the program might always return the number 8. Other faithful implementations, including those that are non-deterministic, are also possible.

It might or might not be the case that every implementation of the program has an upper bound on what number might be returned.

Nevertheless, if we are only concerned with what can be proved from the semantics of faithful implementations in general -- if we are only concerned with what can be proved from the program alone, without regard to a specific implementation -- then there is no upper bound on what number can be returned (yet we can prove that a number will be returned in every case).
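Since the text of CreateUnbounded isn't reproduced in this thread, here is my own minimal rendering of the standard unbounded-counter example, with the caveat that a random scheduler is only probabilistically fair and so approximates, rather than implements, the guaranteed-delivery axiom:

```python
import random

# My own rendering of the unbounded-counter example. A counter is sent both
# an `inc` and a `stop` message; handling `inc` re-sends another `inc` to
# itself. Guaranteed delivery means `stop` arrives after finitely many
# receptions, so every run returns a natural number - yet no single bound
# covers all runs. Caveat: the random pick below is only probabilistically
# fair, so this simulation approximates the guaranteed-delivery axiom.

def create_unbounded(rng):
    bag, count = ["inc", "stop"], 0
    while True:
        msg = bag.pop(rng.randrange(len(bag)))
        if msg == "stop":
            return count           # this run halts with a natural number
        count += 1
        bag.append("inc")          # keep counting until stop is delivered

results = {create_unbounded(random.Random(seed)) for seed in range(100)}
assert all(r >= 0 for r in results)   # every run returned a natural number
assert len(results) > 1               # different schedules return different numbers
```

An implementation that always delivered `stop` first would be the "always returns 8"-style faithful implementation (here: always returns 0); the unboundedness lives in the set of admissible schedules, not in any one of them.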

### testing and debugging

Proofs are great. It would be wonderful if they were more widely used.

Testing and debugging and empirical feedback seem to be primary ways software developers achieve working software systems. It seems difficult to test a subprogram if, when run in a testing context, it tends always to return 8... yet might return something very different when run in context of your actual program (e.g. due to extra interactions with a message or thread scheduler, use of an optimizer, etc.). My experience is that this is not an unusual deviation.

### re testing and debugging

OK?

testing and debugging

In contexts where the emphasis is on testing, presumably the test environment should be faithful to the field conditions at which tests are directed.

### that would be nice

Yes, but should be is an agenda seldom followed, due to hardware (and legacy software) constraints. In software designed to scale, one usually must simulate two or more orders of magnitude times the number of physical boxes at hand. And some devs wrote lots of the code to use "real" interfaces defined by the OS, so not using hardware is impeded by all the hoops needed to virtualize what could have simply been abstract. So it would take the equivalent of a major system port to undo direct platform API dependency.

Then it's common for QA teams to run suites of tests whose degrees of freedom are already exhausted, so tests already do not imitate expected field conditions. Bizarre reported numbers cause you to say "but you need to run tests corresponding to what customers will actually do", but rewriting the test suites does not happen, because they grew organically over several years to cover an armature of weird first assumptions. You don't hear stories about what real world QA organizations do, because ... um, that would be bad marketing.

Every non-ideal layer with visible cruft has another layer underneath, with its own non-ideal cruft. As a result, the less you are allowed to do at the top, exercising this stuff, the less data you collect about potential faults, and ignorance is not bliss in this case. In summary, faithfulness to realistic field conditions is a bit exotic and seldom the case.

### In 30 years

doing real-world development, I can't say I've ever seen testing that was faithful to realistic field conditions.

Even my own.

### Trials

With some of our software that has to be reliable, we conduct full-scale trials where we get volunteers to role play and conduct days of testing. You still don't cover all eventualities, so you have to simulate some problems that you are aware of as risks. For example randomly rebooting network hardware, pulling out cables. This is a kind of simulated real life use plus chaos monkey approach.

### Programs cannot exist

Programs cannot exist without precise, and concrete, semantics. Neither the values computed, nor the feasible traces that yield those values are well defined in the absence of precise semantics.

If there is a mechanism to implement guaranteed delivery of a message over an unbounded interval, then semantics that include that guarantee are physically realisable. If no such mechanism exists then the semantics describe a Zeno machine. If we do not know of such a mechanism then we do not know if the semantics describe something that is computable, or a form of hyper-computation.

Actor semantics claim that all messages are delivered eventually. None of the documents that I have read so far (including Clinger) put forward an argument that this claim is physically realisable. My interpretation of this state of knowledge is that we have no proof that the actor model is physically realisable, and that any claims that it does model a real form of computation are currently unsubstantiated. I would like to find some substantiation, or some evidence that such a mechanism is impossible.

I doubt that your first point can be proven with any degree of rigour. It is based on the claim that the interval until the receipt of the stop message is bounded, but only because it is claimed (somewhat axiomatically) that it must be. As a comparison consider a NDTM to which we add the claim that all programs terminate. I literally mean that we add the claim, without any further details, to the computational model that describes the NDTM. Would this produce a well-defined computational model that would describe a set of Turing-capable total computations, or would it describe a form of hyper-computation?

Roughly speaking, this is what happens when we add a Halting Oracle to a machine. But we cannot then claim that the resulting model is physically realisable. In the case of the oracle we know that we are dealing with a magic box that cannot exist. In the case of a guarantee on delivery times in a non-deterministic system we do not seem to know if this is a magic box or not.

If we remove the magic box then we have a system that is physically realisable (for example the implementation that always returns 8) - but it is certainly less powerful. We do not know if anything that is physically realisable can be faithful to the semantics that include the magic box.

Please contrast this with the definition of a Turing Machine - that is physically realisable in principle. Sure, we cannot make an infinite tape, but we do have a series of smooth approximations to an infinite tape that allows us to build a series of machines that approximate the model (execute a larger number of feasible programs correctly) with ever-increasing precision. This does not appear to be the case with the guarantee of delivery - it is not only a magic box, but it is also a magic black box.

In the absence of any argument that this magic black box can exist, I wonder if a computational model that relies upon it is actually well-defined and can be said to define the semantics of any program at all?

### Minimal Specification

Here is my limited take on this, from what I have been able to understand so far.

Actor semantics are a specification for a machine, not the machine itself. You are free to implement a machine that is more deterministic than the specification, but then you lose optimisation opportunities.

Actor semantics are sort of the answer to the question "what parallelisable machine specification gives the greatest opportunity for optimisation whilst still allowing complete specification of algorithms?" Is this the most general model?

With regards to messages never arriving, this is a violation of the model semantics, but it can happen in reality. This is just like division by zero in most languages, the model does not handle it, so you have to throw a runtime exception. In an implementation of actor semantics, if a message goes missing you might throw an exception and halt the whole parallel machine. As this is costly, you might spend a lot of time in the implementation making sure this is unlikely to happen.

In this respect it is like the analog circuit model for electronics, it is the most general model, but that makes it very difficult to design large systems with, so I would want to restrict the semantics to something more manageable (communicating sequential processes for example) for as much of the system as possible. This can of course be defined in the actor model, but not the other way around.

### If a response is not received, then an exception is thrown

If a response is not received for a request, then an exception is thrown by the infrastructure in accord with the policies in force.

The justification is that silent failures should not be tolerated.

### Do all messages require a response?

I thought the response was optional. I guess I should have said, if you can detect that a message has gone missing (lack of response for example), it would throw a runtime exception.

### re do all messages require a response?

Every message received includes an implicit continuation parameter.

### How long to wait?

How long should an actor wait for its continuation to be called before throwing an exception? It would seem hard to know, especially when sending a message to an actor that is doing something complicated, like proving Fermat's last theorem.

### Infrastructure throws exceptions based on policies in place

If a response is not received for a request, the infrastructure throws an exception based on policies in place.

### Request requires a response; one-way message doesn't

Request requires a response or the infrastructure will throw an exception. However, a one-way message doesn't require a response.
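A sketch of how such an infrastructure policy might look (my own construction; the names and the default deadline are illustrative, not from ActorScript): each request carries an implicit reply continuation, the infrastructure bounds the wait and throws so that failure is not silent, and a one-way message skips the wait entirely:

```python
import queue
import threading

# Sketch (mine): requests carry an implicit reply continuation; the
# "infrastructure" enforces a policy deadline and throws, so a silent
# failure is not tolerated. One-way messages require no response.

class NoResponse(Exception):
    pass

def request(actor_inbox, payload, timeout_s=1.0):
    reply = queue.Queue(maxsize=1)            # the implicit continuation
    actor_inbox.put((payload, reply.put))
    try:
        return reply.get(timeout=timeout_s)   # policy in force: bounded wait
    except queue.Empty:
        raise NoResponse(payload)             # failure surfaced, not swallowed

def one_way(actor_inbox, payload):
    actor_inbox.put((payload, None))          # no response required

# A toy echo actor serving a single request:
inbox = queue.Queue()
def echo():
    payload, k = inbox.get()
    if k is not None:
        k(("echo", payload))
threading.Thread(target=echo).start()
assert request(inbox, "hi") == ("echo", "hi")
```

The open question upthread remains: any concrete deadline is a policy choice, since nothing in the model bounds how long the target may compute before replying.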

### semantic precision re Programs cannot exist

Andrew, good questions. I'll try to answer plainly.

This is the part I most want to answer:

I doubt that your first point can be proven with any degree of rigour. It is based on the claim that the interval until the receipt of the stop message is bounded, but only because it is claimed (somewhat axiomatically) that it must be. As a comparison consider a NDTM to which we add the claim that all programs terminate. I literally mean that we add the claim, without any further details, to the computational model that describes the NDTM. Would this produce a well-defined computational model that would describe a set of Turing-capable total computations, or would it describe a form of hyper-computation?

Actor message delivery is characterized by these definitions and axioms.

Definitions:

A reception event is the receipt of a message by an Actor.

The reception order of a serialized Actor x ("⇛x") is an order on the reception events of Actor x.

   Notation: if e1, e2 ∈ Events are reception events at Actor x then:

      e1 ⇛x e2

   means receipt of e1 causally precedes e2.

Note: each reception order ⇛x is defined to be a total order.

A transmission event is the sending of a message by an Actor.

The causality order of events (⇝) is induced by these rules:

   For e1t, e1r ∈ Events:

   If e1r is the reception event triggered by transmission e1t then:

      e1t ⇝ e1r

   On the other hand, if e1t is a transmission sent while handling the message received at e1r then:

      e1r ⇝ e1t

   And of course, for any events:

      ∀e,f,g ∈ Events → (e ⇝ f ⇝ g) ⇒ (e ⇝ g)


The combined order of events (↷) is defined:

      ∀[e,f,g ∈ Events] → (e ⇛ f) ⇒ (e ↷ f),
                          (e ⇝ f) ⇒ (e ↷ f),
                          (e ↷ f ↷ g) ⇒ (e ↷ g)


Fundamental Axiom:

Here is the controversial Axiom attributed to Clinger:

  Axiom: The combined order is discrete.

∀[e,g ∈ Events] → (e ↷ g) ⇔ Finite[{f ∈ Events | e ↷ f ↷ g}]


Lemmas:

Originally, Hewitt and Baker posited these lemmas as axioms and tried to prove the Clinger axiom, but Clinger showed no such proof is possible.

  Lemma: Causality order is discrete.

      ∀[e,g ∈ Events] → (e ⇝ g) ⇔ Finite[{f ∈ Events | e ⇝ f ⇝ g}]

  Lemma: Reception order is discrete.

      ∀[e,g ∈ ReceptionEventsx] → (e ⇛ g) ⇔ Finite[{f ∈ ReceptionEventsx | e ⇛ f ⇛ g}]

(Both proofs are trivial.)


Unbounded nondeterminism

On the basis of those definitions and the fundamental axiom, it follows that the program CreateUnbounded (see an earlier comment) always eventually returns a natural number. It can also be proved that there is no upper bound on what number may be returned.

Normal axiomatizations of (for example) non-deterministic Turing machines do not share this property. A non-deterministic Turing machine that could return an integer of unbounded size might instead never halt.

Physical Realism

Can we physically implement a non-deterministic Turing machine?

Yes and no.

Yes we can, because we can build a finite automaton and we can keep building more and more storage (as needed) to connect to that automaton.

No we can't, on the other hand, because we won't actually build more storage for the machine forever.

Similarly, can we physically implement the Actor model?

Yes we can because we can always keep building out the network used to transmit messages, and the storage for the states of individual actors.

No we can't because of course we won't build forever.

Real implementations of either system have no upper bound on their size but implementations will fail when they run out of capacity.

Nevertheless, Actors are more physically realistic!

Why? The actor model captures the concept of computation taking place at physical nodes that are joined by communication.

The reception order (⇛x) represents temporal order in a physical frame of reference corresponding to a single actor. It is the order of (reception) events on a "world-line", in relativistic terms. (The universe has more than one world-line, hence the need to index reception orders by an actor.)

The causality order (⇝) represents a critical physical constraint on the propagation of information spatially and over time.

The fundamental axiom (that the combined order is discrete) captures the notion that we are talking about computation that can be physically constructed. (The universe may or may not be a continuum, but our machines are discrete things, so to speak.)

So now, to this question:

I doubt that your first point can be proven with any degree of rigour. It is based on the claim that the interval until the receipt of the stop message is bounded, but only because it is claimed (somewhat axiomatically) that it must be. As a comparison consider a NDTM to which we add the claim that all programs terminate. I literally mean that we add the claim, without any further details, to the computational model that describes the NDTM. Would this produce a well-defined computational model that would describe a set of Turing-capable total computations, or would it describe a form of hyper-computation?

No, Clinger's axiom that message-related event ordering is discrete is not equivalent to the assumption that all non-deterministic Turing machines halt. For example, the Fundamental Axiom for Actor messages does not imply that all programs halt.

I'm not at all sure how you could fix the proposed NDTM axiom to avoid absurdity but why bother? Turing machines are an awkward abstraction.

### Very interesting food for

Very interesting food for thought, I shall need some time to digest this properly. Thanks for taking the time to answer in such depth, you've laid out the connections back to the original material with incredible clarity. This is very helpful.

### Clinger's model

I am making somewhat slow progress towards understanding Clinger's model. Currently I do not see how CreateUnbounded would be a legal program within that model. I will now attempt to assemble some thoughts into something vaguely coherent:

1. The global ordering g is not static.
During execution actors can send new messages. These are inserted into the arrival and activation orderings for the target actor. The global ordering is then updated to remain an irreflexive partial order that embeds (is consistent with) all of the activation/arrival orderings in the system. The order g is actually a series of orderings g0, g1, … that are updated during execution.

2. "Update", or the definition of gn from gn-1, must satisfy each of the axioms and laws in the system.
Any update to g, that is any execution step in the semantics of the system, must obey these rules. Not every program is legal: only programs with executions that meet these constraints are well-defined.

3. The execution of the transmission of the start message:
i) creates an interval between the current global time, and some future time.
As the global time is a partial order the end-points of this interval may not be uniquely defined in execution steps, but will uniquely identify a set of execution points that have no causal relations between them (e.g. they execute weakly in the same time step).
ii) The interval must have bounded finite length (by Clinger's Fundamental Axiom).
iii) There must not be a trace of unbounded length that has a causal relation to both ends of the interval (constraint [5] on pg.28).

Aside: point iii) is interesting because it would appear to restrict where open recursion can occur for a program to be legal within the model.

In the case of CreateUnbounded open recursion is not the issue. The update of g caused by sending the start message is the issue. It increases the number of causally-dependent points in g' within an interval of bounded duration. This violates constraint [5] directly:
• There is an accumulation point within the interval.
• For any valid projection E there are more points within the interval after each execution step.
• Without an explicit bound on the number of times this step can occur this potentially creates an infinite number of images of E within the bounded interval.

Clinger's model does not admit a program with a potentially unbounded number of interactions within the interval of a message's guaranteed delivery. The claim that this program is valid and that the model proves that it always terminates is not true. This program does not meet the constraints imposed on execution by the model.

Emphasis: I have not begun to dig into the Hewitt-Baker model in any depth so I would not make any claims at all about programs in that model.

### re Clinger's model

1. The global ordering g is not static.

Sure, assuming non-determinism (e.g. the time when (and if) a given message is sent is not necessarily determined).

Changes to the global ordering of events are "monotonic" in the sense that for events a,b, if a ↷ b is true before time t, then a ↷ b is true after time t as well. Sending a new message doesn't alter the previously determined ordering of messages.
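A minimal sketch of that monotonicity may help. The following Python is hypothetical (the `Ordering` class and `add_edge` are my own names, not from Clinger's paper): $$g$$ grows by adding precedence pairs, is kept transitively closed, and rejects any update that would break irreflexivity. Pairs are never removed, so once a ↷ b holds it keeps holding:

```python
# Hypothetical sketch: g as a growing irreflexive partial order.
# Names (Ordering, add_edge) are illustrative, not from the papers.

class Ordering:
    def __init__(self):
        self.pairs = set()  # (a, b) means a precedes b ("a ↷ b")

    def precedes(self, a, b):
        return (a, b) in self.pairs

    def add_edge(self, a, b):
        """Produce g_n from g_{n-1} by asserting that a precedes b."""
        if a == b or self.precedes(b, a):
            raise ValueError("update would break irreflexivity")
        new = set(self.pairs)
        new.add((a, b))
        # keep the order transitively closed:
        # everything before a now precedes everything after b
        before_a = {x for (x, y) in self.pairs if y == a} | {a}
        after_b = {y for (x, y) in self.pairs if x == b} | {b}
        new |= {(x, y) for x in before_a for y in after_b}
        self.pairs = new  # pairs only ever grow: the order is monotonic

g = Ordering()
g.add_edge("send:start", "recv:start")  # transmission precedes reception
g.add_edge("recv:start", "send:reply")  # reception precedes the reply it triggers
assert g.precedes("send:start", "send:reply")  # earlier facts persist
```

Sending a new message only ever adds pairs, which is the sense in which the series $$g_0, g_1, \ldots$$ never alters a previously determined ordering.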

During execution actors can send new messages.

Right. That is an "activation event" (I usually say "transmission event" because I think that's clearer.)

[Newly sent messages] are inserted into the arrival and activation orderings for the target actor.

I can't tell from the way you said that if you have the right idea.

Every message corresponds to two events which we could dub the transmission event and the reception event.

Call the two events et and er.

Call the sending actor A and the receiving actor B.

By definition:

  et ⇝ er
ⓘ et causes er


There is a total order of all events received at actor B.

  ∀ [er' ∈ ReceptionEventsB] ⟶
      (er = er') ∨ (er ⇛B er') ∨ (er' ⇛B er)

ⓘ a pair of reception events at the same actor are either the same event,
or one arrives before the other.


If actor B, while handling er, transmits a new message, with transmission event et', then:

  er ⇝ et'
ⓘ reception events cause the transmissions they trigger


In the combined order:

   et ↷ er ↷ et'
ⓘ transmission at A precedes reception at B which in turn precedes a transmission it triggers
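A minimal executable sketch of these relations, assuming my own encoding of events as tuples (the names `message`, `triggers`, and `precedes` are illustrative, not from Clinger's paper): it records the activation relation "⇝" and computes "↷" as its transitive closure:

```python
# Hypothetical sketch: each message gives rise to a transmission event
# at the sender and a reception event at the receiver.

causes = set()  # the causality relation "⇝"

def message(sender, receiver, tag):
    et = (sender, "transmit", tag)
    er = (receiver, "receive", tag)
    causes.add((et, er))          # by definition: et ⇝ er
    return et, er

def triggers(er, et_new):
    causes.add((er, et_new))      # receptions cause the transmissions they trigger

def precedes(a, b, seen=frozenset()):
    # "↷" (precedes) is the transitive closure of "⇝"
    return any(x == b or (x not in seen and precedes(x, b, seen | {x}))
               for (w, x) in causes if w == a)

et, er = message("A", "B", "m1")    # A sends m1 to B
et2, er2 = message("B", "A", "m2")  # while handling m1, B sends m2
triggers(er, et2)                   # er ⇝ et2
assert precedes(et, et2)            # et ↷ er ↷ et2
assert not precedes(et2, et)        # no event precedes its own causes
```

The two asserts are exactly the combined-order claim above: transmission at A precedes the transmission at B that its reception triggers, and never the other way around.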

The global ordering is then updated so that it remains an irreflexive partial order that embeds (is consistent with) all of the activation/arrival orderings in the system. The order g is actually a series of orderings g0, g1, … that are updated during execution.

The combined order ("↷", "precedes") is, indeed, an irreflexive partial order of events in general.

The transmission timing and origin of a message determines, once and for all, the complete set of events that precede that transmission. Further, the transmission of a message determines the existence of one future event, the corresponding reception event, which the transmission precedes.

I think you have a misunderstanding and believe that transmission determines the total set of events that will precede the eventual reception.

That is not the case. Only the actual reception itself determines what precedes it.

By analogy, suppose that the postal mail never loses a letter. Nevertheless, the day on which I send you a letter doesn't determine the day on which you receive it.

Any update to g, that is any execution step in the semantics of the system, must obey these rules. Not every program is legal: only programs with executions that meet these constraints are well-defined.

I can't tell what you mean but it sounds confused. No program can send a message backward in time.

3. The execution of the transmission of the start message:

i) creates an interval between the current global time, and some future time.

I don't know what it means to "create an interval between the current global time, and some future time".

Transmission does not determine when reception will occur. We allow there to be unpredictable variations in arrival time.

As the global time is a partial order the end-points of this interval may not be uniquely defined in execution steps, but they will uniquely identify a set of execution points that have no causal relations between them (i.e., in a weak sense, they execute in the same time step).

That is incorrect in a subtle way. You're on the right track, I think.

The timing of a message transmission does not determine the set of events which are not causally related (aka "space-like separated") to the reception.

In terms of the math, you can not (in the general case) infer from the fundamental axiom and an actor program, the set of events that are necessarily concurrent (not ordered by "↷") with a given message reception.

The timing of each transmission and reception adds new information to the computational universe.

The actor universe is deeply non-deterministic.

ii) The interval must have bounded finite length (by Clinger's Fundamental Axiom).

OK, but it is not determined at the time of transmission.

iii) There must not be a trace of unbounded length that has a causal relation to both ends of the interval

That is a less formal restatement of the fundamental axiom.

Aside: point iii) is interesting because it would appear to restrict where open recursion can occur for a program to be legal within the model.

Hah. That's not correct but here I think I do understand what you mean.

I think you are confusing the axioms, which are about 1-way message transmission, with the higher-level concept of a message from which either a reply or an exception is guaranteed.

ActorScript programs are free to hang before sending a reply. (However, if a reply is sent its transmission will be an activation event and its arrival at the actor waiting for the reply will be a reception event.)

(constraint [5] on pg.28)

Don't lose track of the rhetorical purpose of the list of constraints.

They are nothing more or less than an informal rationale for the Fundamental Axiom (which is more clearly restated as above in this LtU thread).

The axiom asserts simply that the combined order ("↷", "precedes") is discrete.

Informally, intuitively, at a given time a computation involves only finitely many actors. In a given interval of real time, those actors can only undergo a finite number of transmission and reception events.
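Stated compactly (my paraphrase of the discreteness claim, not a quotation from Clinger): for any two events $$e_1$$ and $$e_2$$,

$$\forall\, e_1, e_2 \;:\; \bigl|\{\, e \mid e_1 \curvearrowright e \curvearrowright e_2 \,\}\bigr| < \infty$$

That is, between any two events related by "↷" there are only finitely many intervening events. This is exactly the sense in which the combined order is discrete.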

In the case of CreateUnbounded open recursion is not the issue. The update of g caused by sending the start message is the issue. It increases the number of causally-dependent points in g' between an interval of bounded duration. This violates constraint [5] directly;

I am having some trouble with what you mean by "It increases the number of causally-dependent points in g' between an interval of bounded duration."

I think you mean that transmitting a "start" message in the program brings into existence, at some future time, a corresponding message reception after some finite interval. Since (you argue) that increases the number of events in an interval twice as long, we have an "accumulation point"?

If that's a fair summary, the answer is no: that is not an accumulation point, because no matter what, the number of events between (by "↷") any two events remains finite.

Without an explicit bound on the number of times this step can occur this potentially creates an infinite number of images of E within the bounded interval.

The Fundamental Axiom allows you to prove there is a limit on the number of times that step can occur within a finite interval.

Clinger's model does not admit a program with a potentially unbounded number of interactions within the interval of a message's guaranteed delivery.

Yes but I don't think you are drawing the right conclusions from that.

You can conclude that no computation begins with an already given infinite set of actors.

You can conclude that in a given interval of time, no actor can create any but a finite number of new actors.

You can conclude that between event transmission and reception there is a non-zero interval of time.

The claim that this program is valid and that the model proves that it always terminates is not true.

The proof is pretty trivial and follows directly from the Fundamental Axiom so you must have made a mistake.

Emphasis: I have not begun to dig into the Hewitt-Baker model in any depth so I would not make any claims at all about programs in that model.

I think you are on the right track.

### Static ordering

I think that we diverged roughly on the first point; whether or not the global activation ordering is defined (statically) by the program or (dynamically) by the execution of the program. In reading the work by Clinger (and also comments from Carl) I have assumed that the execution order is mutated by actions in the program: the semantics of the actor model define an update function on a partial order.

I'm reading your response as a claim that the ordering is fixed, independently of the execution of the program. If I've understood your point properly, then how can this function from a script to an ordering be total for a Turing-complete language?

This would be the difference between the global ordering for CreateUnbounded allowing an arbitrary integer in response, or the program (and its execution semantics, up to and including the global ordering that would result) being ill-defined in the model. To be slightly more precise, that does seem to mash together three separate possibilities:
1. There are no validity constraints on the precise combination of activations and receptions in the model, and the function from script to ordering is total.
2. There are validity constraints in the model, which CreateUnbounded satisfies, and there is a well-defined total ordering for the program that is defined from the script (not a series of mutations produced by the execution of the actor).
3. There are validity constraints in the model, which CreateUnbounded does not satisfy, and the program has no defined ordering in the (partial) function from scripts to orderings.

I thought that we differed between possibility (2) and (3) before your response, and so now I am confused as I've read your response as closer to (1). Is this correct?

### re static ordering

Actor laws constrain but do not, in general, fully determine the ordering of events.

Legal executions of CreateUnbounded (meaning, executions that satisfy the Actor laws) may return any natural number.

Which of the possible event orderings a particular run of the program exhibits depends on the weather.

The role of the Fundamental Axiom is that without it, we could not prove that CreateUnbounded returns.
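As a closing illustration, here is a hypothetical Python sketch (`run_create_unbounded` is my own construction, not ActorScript) of why legal executions may return any natural number: a counter keeps re-sending itself "go", while a "stop" message sent at the start is guaranteed to be delivered eventually. Random choice stands in for the nondeterministic arrival order; in the model itself, termination follows from the Fundamental Axiom rather than from probability:

```python
import random

# Hypothetical model of CreateUnbounded: every run terminates, but no
# single bound covers all runs, so any natural number can be returned.

def run_create_unbounded(rng):
    pending = ["go", "stop"]   # both messages are in flight from the start
    count = 0
    while True:
        # nondeterministic delivery: pick the next message to arrive
        msg = pending.pop(rng.randrange(len(pending)))
        if msg == "stop":
            return count       # "stop" is eventually delivered
        count += 1
        pending.append("go")   # handling "go" transmits another "go"

# different schedules (seeds) return different, always-finite, counts
results = {run_create_unbounded(random.Random(seed)) for seed in range(100)}
```

Each individual run halts with some finite count, which is the shape of the argument above: the Fundamental Axiom rules out infinitely many events before the delivery of "stop", but places no uniform bound across executions.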