Parallelism and Concurrency in the Actor Model

The Actor model is a mathematical theory that treats “Actors” as the universal primitives of concurrent digital computation. The model has been used both as a framework for a theoretical understanding of concurrency and as the theoretical basis for several practical implementations of concurrent systems. Unlike previous models of computation, the Actor model was inspired by physical laws. It was also influenced by the programming languages Lisp, Simula 67 and Smalltalk-72, as well as ideas from Petri nets, capability-based systems and packet switching. The advent of massive concurrency through client-cloud computing and many-core computer architectures has galvanized interest in the Actor model.

An Actor is a computational entity that, in response to a message it receives, can concurrently:
• send messages to addresses of Actors that it has;
• create new Actors;
• designate how to handle the next message it receives.

There is no assumed order to the above actions, and they could be carried out concurrently. In addition, two messages sent concurrently can be received in either order. Decoupling the sender from the communication it sends was a fundamental advance of the Actor model, enabling asynchronous communication and control structures as patterns of passing messages.

The Actor model can be used as a framework for modeling, understanding, and reasoning about a wide range of concurrent systems. For example:
• Electronic mail (e-mail) can be modeled as an Actor system. Mail accounts are modeled as Actors and email addresses as Actor addresses.
• Web Services can be modeled with endpoints modeled as Actor addresses.
• Objects with locks (e.g. as in Java and C#) can be modeled as Actors.
• Functional and Logic programming can be implemented using Actors.

Actor technology will see significant application for integrating all kinds of digital information for individuals, groups, and organizations so their information usefully links together. Information integration needs to make use of the following information system principles:
• Persistence: Information is collected and indexed.
• Concurrency: Work proceeds interactively and concurrently, overlapping in time.
• Quasi-commutativity: Information can be used regardless of whether it initiates new work or becomes relevant to ongoing work.
• Sponsorship: Sponsors provide resources for computation, i.e., processing, storage, and communications.
• Pluralism: Information is heterogeneous, overlapping and often inconsistent. There is no central arbiter of truth.
• Provenance: The provenance of information is carefully tracked and recorded.

The Actor Model is intended to provide a foundation for inconsistency robust information integration.

See the article at the following location: Parallelism and Concurrency in the Actor Model


Anything else about concurrency and parallelism?

The above post is a continuation of the discussion here: Anything else about concurrency and parallelism?

Sorry this is long -- probably better done in one shot.

Let me apologize in advance for saying nothing new myself. I likely won't say anything you didn't already know. I was hopeful at first I might get you to say something new to me. But after reviewing the paper again and planning remarks now, I stopped expecting a surprise. My best hope might be asking what certain phrases mean.

Thanks for reposting that summary; it's a verbatim copy of the whole first page of the linked paper. The internal structure of the paper seems odd: fewer than nine pages of content out of a total page count of 33, before supplementary material is added in the form of acknowledgements, a bibliography, and appendices like ten pages of historical background.

(If I praise a remark on an earlier page, someone might think I endorse slightly weird bits, like the "interaction creates reality" section on page six, next to the surprising appearance of iOrgs, which is not obviously part of the Actor model. There's no way for me to put those parts in context, let alone comment or approve, so some parts of the paper I'm just ignoring after a skim.)

I wish your paper had more definition, argument, development, and explanation. It seems mostly an itemized catalog of Actor model concepts and related ideas, heavy on bullet lists whose context-free phrases require a reader to supply the missing associations. I'm not getting an insight about concurrency or parallelism I didn't already have. I suspect that if you expanded on something a bit more, there'd be something to think about.

Elsewhere I said I'd summarize an idea to provide context for comments about your paper. After thinking about it longer, it just sounds like a concrete instance of a system with entities like actors, but it wasn't designed to match the Actor model per se. However, it's reasonable to say "that's just an instance of an actor model", since the generalization present in the model applies. There's no reason you'd find it useful to hear another concrete example described, so I'll keep it brief at the end.

Here are some specific questions I have about the paper. What does quasi-commutativity mean? Searches give me relatively useless abstract math links. Does it mean more than being able to process multiple messages at once? (Some protocols, like IKEv2 in IPsec, have an idea of window size: the number of concurrent requests an endpoint is willing to support at once before a reply is strictly required ahead of more new requests. Since some operations are semantically independent, their order doesn't matter, and this sort of thing might match the idea you meant.)
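
To make my guess concrete, here's a toy illustration in C (all names invented): two requests commute when servicing them in either order leaves the endpoint in the same state.

typedef struct endpoint { int a; int b; } endpoint_t;

/* Semantically independent requests: each touches a different field, so
   servicing order does not matter and both can sit in the window at once. */
static void handle_set_a(endpoint_t *e, int v) { e->a = v; }
static void handle_set_b(endpoint_t *e, int v) { e->b = v; }

/* handle_set_a then handle_set_b gives the same final state as the reverse
   order, which is (I assume) the flavor of quasi-commutativity meant. */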

What makes the Actor model a mathematical theory of computation? This sounds interesting, and I'd like to get it, but so far I'm only getting a justification that sounds like "because math." Or maybe a vaguely Descartian, "I axiomatize, therefore I math." The concept seems stubbed out.

What does "inherent overhead" mean in the following:

Because message passing is taken as fundamental in the Actor model, there cannot be any inherent overhead, e.g., any requirement for buffers, pipes, queues, classes, channels, etc.

Managing resources and defining behavior when they are exhausted is part of a concrete instantiation, but this principle stated so abstractly sounds like "implementations don't have to make sense." I assume you didn't mean that. The paragraph afterward almost clarifies, but I'm only getting the idea you mean there's no requirement for a specific way of doing it concretely. Sure, okay, but "inherent overhead" is still undefined unless you mean the model doesn't dictate detail. Does "required" fit better than "inherent"?

Does page five say anything about the Actor model? Most of it is applications of moving bytes around, which can be done without an Actor model (unless you want to argue everything is the Actor model).

Is there a short layman's summary of the Computational Representation Theorem on page seven? I get no hint of what it means. What does "cannot be implemented" mean in the following bold-italic phrase? I suspect you mean there's no guaranteed global consistency.

concurrent systems can be represented and characterized by logical deduction but cannot be implemented

I want to stop here, but I owe detail on that idea I mentioned. To make this shorter, let me phrase it like a recipe for baking a cake from ingredients, which is more construction than explanation. Start by admiring tools like Erlang and techniques like green-thread based coroutines. Now assume a library implemented in the C programming language, because you need something embeddable in a C runtime, because the currently shipping product uses C and forbids calling anything but C inside it. Instead of turtles all the way down, it's C all the way down. If drama is necessary, cue a terrifying fall into a spiral of doom like James Stewart's panic attacks in Hitchcock's Vertigo. Actually, that would make a good YouTube video about teaching Stewart about continuations. I guess a comic book version is easier.

Next write a library with no static external dependencies, because it calls only symbols it defines. Everything about the environment is a virtual call that must be supplied dynamically (from the library's point of view; the executable can still link a specific environment binding statically). The result is portable when it depends only on things you inject at runtime. For example, all C library calls dispatch through a vtable that defines what each method is required to do. You can satisfy those function pointers any way you like; anything with a Linux flavor can be replaced another way on Windows, for example. A host environment instantiates a VM instance running in this library, provides all memory used, and polls a device-style API on the VM, which advertises non-blocking behavior: the VM makes no blocking system calls. (Blocking calls get queued out to a worker thread pool, or whatever else the host env supplies to service blocking calls.) Give the VM a lightweight process model with fibers implemented as green threads, whose scheduler processes queues of many sorts.
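
As a rough sketch of the injection idea (all struct and function names here are invented stand-ins, not any real library's API): every capability the VM might need goes through a table of function pointers the embedder supplies.

#include <stddef.h>

/* Hypothetical host-environment vtable. The VM itself links to nothing;
   it only calls through this table, which the embedder fills in. */
typedef struct host_env {
    void *(*alloc)(void *ctx, size_t n);            /* all memory comes from the host        */
    void  (*release)(void *ctx, void *p);
    void  (*submit_blocking)(void *ctx,             /* hand a blocking call to a worker pool */
                             void (*work)(void *arg), void *arg);
    long  (*now_usec)(void *ctx);                   /* monotonic clock                       */
    void  *ctx;                                     /* opaque host context                   */
} host_env_t;

typedef struct vm vm_t;                             /* opaque VM instance */

extern vm_t *vm_create(const host_env_t *env);      /* host injects everything here          */
extern int   vm_poll(vm_t *vm);                     /* device-style API: runs ready fibers,
                                                       returns without ever blocking         */

A Linux host fills these slots one way (say malloc wrappers and a pthread worker pool); a Windows host, or the host later embedded in product XYZ, fills the same slots differently, and the library itself never changes.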

The lightweight processes look a lot like actors. Each requires at least one fiber. There are no address space semantics; those you attach instead to a vat concept bound to each VM instance. (Plan to let VM instances share immutable space mapped by vats, so an untrusted sandbox VM can share large amounts of common immutable runtime, provided the sandbox cannot modify any of the shared state.) Let each fiber have a continuation, consisting of at least one stack activation frame, which executes the next time the fiber is scheduled, which only happens when the fiber is unparked. Fibers park instead of blocking, where park means "green-block". When a fiber makes an async call, it parks until the async reply. If a fiber needs to make a blocking system call, it instead parks while waiting for the async reply from the host env, which services the request, perhaps via a worker thread pool. Each lightweight process responds to outside stimuli only via unparking fibers. For example, to handle signals or messages, one or more fibers park waiting for a signal or message to wake them up. The minimum space cost to fork a new lightweight process is the size of a lightweight process header struct, a fiber header struct, and at least one stack activation frame, and these come from the VM's vat, which allocates everything from pools.
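
The minimum-cost-to-fork claim can be pictured with header structs roughly like these; the types and fields are invented for illustration, not taken from any particular codebase.

#include <stdint.h>

/* Hypothetical headers: a lightweight process owns fibers; a fiber owns a
   continuation made of at least one activation frame, all from the vat's pools. */
typedef struct frame {
    struct frame *caller;                 /* frame to return into                      */
    void (*resume)(struct frame *self);   /* code run next time the fiber is scheduled */
    /* frame-local variables follow */
} frame_t;

typedef struct fiber {
    struct fiber *next_ready;             /* scheduler ready-queue linkage             */
    frame_t      *continuation;           /* at least one frame                        */
    uint8_t       parked;                 /* 1 = waiting on an async reply or signal   */
} fiber_t;

typedef struct lwp {                      /* lightweight process header                */
    uint32_t id;
    uint32_t generation;                  /* identity for messages and for kill        */
    fiber_t *fibers;                      /* at least one fiber                        */
} lwp_t;

/* Park instead of block: record that the fiber is waiting and yield to the
   scheduler; unpark makes it runnable again when the async reply arrives. */
static void fiber_park(fiber_t *f)   { f->parked = 1; }
static void fiber_unpark(fiber_t *f) { f->parked = 0; /* push onto ready queue */ }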

To actually test such a VM library, next create a specific host environment ABC in the operating system where you want it to run. On Linux, you wire up as much non-blocking I/O as possible and write daemons as lightweight processes that listen for new connections, which typically fork new lightweight processes. Write a shell to service command lines and scripts, and attach a shell to connections on a port specified when you launch the app. By connecting to a VM daemon this way, you can start new lightweight processes in the VM as if it were a toy user-space operating system. While you're at it, write an HTTP daemon lightweight process in the VM that services connections to an HTTP port specified on the command line or in a config file, so you can browse the state of the VM from a browser. Given this host env wrapped around the VM, you can create a peer-to-peer client for other instances of the same VM anywhere else you launched it, or clients for a different host env. For example, in your flagship product XYZ, embed a VM instance with a host env connecting the VM to XYZ interfaces so it can service async operations in XYZ for new features. But now from an ABC command line, you can drive this highly constrained VM inside XYZ, provided the XYZ runtime accepts connections from ABC. So even though it's really hard to drive through all the possible states using normal XYZ loads, you can reach them all by white-box testing through ABC scripts distributed on as many nodes as you like.

Programming languages come up in several places. A shell language was already mentioned; that's one, even if it's not that interesting. Next is a C-to-C compiler, because you want to be able to write lightweight processes as if they were standalone C apps, but compile them to continuation-passing style (CPS) that runs on the library's VM, which requires coroutines to follow a specific style of ABI for non-blocking execution and cooperative pre-emption under the VM scheduler's control. Finally, you can compile any other language to C whose interfaces know about VM primitives, then run that in turn through the C-to-C compiler. This lets you write VM coroutine tools in whatever syntax you like. For example, if you like Lisp, you can write your scripts in Lisp but have them run in a non-blocking C runtime suitable for high connection count servers. The language part isn't all that interesting in this context though.
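
A hedged before/after sketch of what such a C-to-C compiler might emit: a body written as if it blocked gets split at the parking point into continuation functions over a frame. The frame layout and helper names (async_read_line, process) are invented to match the earlier sketch, not the output of a real compiler.

/* Written by the programmer, as if blocking:
       int body(void) { int n = read_line(buf); return process(n); }
   Compiled to continuation-passing style, split at the parking point: */

typedef struct body_frame {
    void (*resume)(struct body_frame *f);  /* where the scheduler continues this fiber */
    char buf[256];
    int  n;                                /* filled in by the async completion        */
} body_frame_t;

extern void async_read_line(char *buf, body_frame_t *wake);  /* parks; host env services it */
extern int  process(int n);

static void body_step2(body_frame_t *f)
{
    (void)process(f->n);                   /* runs after the reply unparks the fiber   */
}

static void body_step1(body_frame_t *f)
{
    f->resume = body_step2;                /* designate the continuation               */
    async_read_line(f->buf, f);            /* returns immediately; fiber is now parked */
}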

No required overhead in Actor message passing

Thanks for your comments. Below is an improved version:

Because message passing is taken as fundamental in the Actor model,
there cannot be any required overhead, e.g., any requirement to use
buffers, pipes, queues, classes, channels, etc. Prior to the Actor
model, concurrency was defined in low level machine terms.

thanks

You shouldn't have gone to so much trouble.

You are *very* welcome.

Actually, we should be thanking you! Your comments helped to improve the article :-)

Concurrency: axiomatized but not implemented using logic

Concurrent systems can be axiomatized using mathematical logic (including the lambda calculus) but in general cannot be implemented.

The following Actor system can compute an integer of unbounded size:

Unbounded ≡ 
   Start[ ]→
    Let aCounter←CreateCounter.[ ]◊
      Prep aCounter.Go[ ], 
           answer←aCounter.Stop[ ]◊ 
        answer

CreateCounter.[ ] ≡
  Actor thisCounter with count≔0, continue≔true◊ 
    implements Counter using
     Stop[ ]→ count afterward continue≔false¶
     Go[ ]→ continue ¿ True ↝ Exit thisCounter.Go[ ] afterward count≔count+1;
                       False ↝ Void ?

By the semantics of the Actor model of computation [Clinger 1981] [Hewitt 2006], sending Unbounded a Start message will result in returning an integer of unbounded size.

The procedure Unbounded above can be axiomatized as follows:

∀[aRequest:Request]→ 
   Unbounded sent aRequest Start[ ] 
      ⇒ ∃[anInteger:Integer]→ SentResponse aRequest Returned[anInteger]

Theorem. There are non-deterministic computable functions on integers that cannot be implemented by a non-deterministic Turing Machine or by a non-deterministic Logic Program.

Proof. The above Actor system implements a non-deterministic function that cannot be implemented by a non-deterministic Turing machine or by a non-deterministic Logic Program.

performance and locality

The concrete model I described does not follow this rule about locality:

Locality and security mean that, in processing a message, an Actor can send messages only to addresses for which it has information by the following means:
1. that it receives in the message
2. that it already had before it received the message
3. that it creates while processing the message

That's in your section about locality and security, and appears to be about the inability to forge a capability in the form of an actor's address. If this is required for an actor system, the lightweight process model doesn't qualify, since it doesn't try to make forging IDs cryptographically difficult by default.

However, several things that improve locality are a good idea for both performance and debugging. The latter (debugging) is easier to describe. If every object has a generation number that is (say) 8, 16, or 32 bits in size, then a message can only be received when a sender knows both ID and generation number. This greatly reduces the odds of receiving a message targeting an old entity and then having it serviced by a new one re-using the same ID. (Alternatively, you can think of the ID as including the generation number in addition to the internal bookkeeping name, so ID re-use is rare and thus mistakes are usually revealed.)
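
A minimal sketch of that check, with invented struct and field names: the message carries both the ID and the generation the sender captured, and delivery is refused if the slot has since been reused.

#include <stdbool.h>
#include <stdint.h>

typedef struct lane_slot {               /* hypothetical per-ID bookkeeping entry */
    uint16_t generation;                 /* bumped each time the ID is reused     */
    bool     live;
} lane_slot_t;

typedef struct message {
    uint32_t target_id;
    uint16_t target_generation;          /* generation captured by the sender     */
    /* payload ... */
} message_t;

/* Deliver only when the ID still refers to the entity the sender meant. */
static bool deliver_ok(const lane_slot_t *slots, const message_t *m)
{
    const lane_slot_t *s = &slots[m->target_id];
    return s->live && s->generation == m->target_generation;
}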

Locality is also important for performance when using a model that encourages naive context switching in proportion to fine granularity in organization. For code clarity and isolation of responsibilities, a coder can aim to perform small steps with clear purpose before handing off data for further work via message or pipeline. This can hide what you want done from a scheduler, unless producer/consumer relationships have been revealed so a related consumer can be scheduled next, while the same data is still warm in the cache. Without any clue, a scheduler can context switch too eagerly for the sake of fairness, even when fair scheduling reduces performance, for example by thrashing the L2 or L3 cache.

Especially when you are I/O bound, you want data to stay continuously in the hands of fibers prepared to touch it next, as long as it's in cache. A lightweight process runtime can support this with the moral equivalent of process groups in the VM, so a pipeline job can keep running until that job's timeslice is exhausted, even if different fibers park and unpark in that job during one timeslice. This takes advantage of warm cache, just like passing args from function call to function call, even if indirectly passed through a shared pipe for immutable data. But it requires CPU resource accounting across multiple schedulable entities, so they share a timeslice.

In a more complex vein, you also want space resource accounting done at larger granularity than individual lightweight processes, to prevent context switching on bounded buffer limits. If a producer and consumer share the same resource budget, data handoff causes zero net change in usage and should make neither park from hitting a size limit. In a shell for a lightweight process VM, you would want a job pipeline to share one "process group" for accounting purposes, so you can still enforce size limits without inducing context switches away from related pipeline members.
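
To make the accounting idea concrete, here's a rough sketch with invented names: producer and consumer lanes in one job point at the same budget record, so handing data off inside the job never trips a size limit by itself.

#include <stdbool.h>
#include <stddef.h>

typedef struct job_budget {              /* hypothetical per-job accounting record */
    size_t used;
    size_t limit;
} job_budget_t;

/* Charging newly produced data against the job; failure means the producer parks. */
static bool job_charge(job_budget_t *b, size_t n)
{
    if (b->used + n > b->limit)
        return false;
    b->used += n;
    return true;
}

static void job_credit(job_budget_t *b, size_t n)
{
    b->used -= n;
}

/* Handing data between lanes: if both lanes share the budget, the transfer is
   a no-op for accounting and never forces a park on a size limit. */
static void job_handoff(job_budget_t *from, job_budget_t *to, size_t n)
{
    if (from != to) {                    /* only cross-job transfers move the charge */
        job_credit(from, n);
        (void)job_charge(to, n);
    }
}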

Actor Laws: devil is in the details

With respect to Actor Laws, the devil is in the details as to how the Actor Model can be used to model a system. There is extensive discussion in the paper referenced at the beginning of this forum topic.

p.s. memory conflict in lane model

(Keep in mind I wanted to talk about concurrency and parallelism, then you posted about actors again. But while I noted the concrete model I described amounts to an instance of the Actor model, I'm not really talking about actors as such. Instead I'm talking about a lightweight process model. Whether it corresponds to actors or not is irrelevant to me; only whether something interesting occurs in concurrency matters. So if you feel an urge to say "this is not actors", be forewarned my reply is just "I don't care.")

In my model I use the term lane to denote a lightweight process, so the word process is reserved for talking about what occurs in a host operating system. (It ends up being necessary to talk about OS processes a lot, so re-using the same overloaded term is tedious.) If we tried to use the acronym LWP for light-weight process, it would be pronounced "loop", which is a bad idea. On a road for cars, or in a swimming pool, lanes are a loose guide for avoiding conflict: you stay in your lane so you don't interfere with peers. A lane is a tenuous container, loosely marked, with minimal identity, hosting something else within it (fibers in my case). A lane is a collection of one or more fibers with identity, so it can belong to a job, receive messages, or be killed as the task it represents. A lane is just a bookkeeping nexus. Resource limit accounting is done at job granularity, where a job is a collection of lanes in a group, typically a pipeline if constructed from a shell command line.

In effect, a lane is just a set of associated fibers with shared identity, the same way an OS process is a set of threads with shared identity. When you kill an OS process, or a lane, you want the resources it was using to get cleaned up and freed. This is part of the purpose of a lane in a lightweight process VM: a firebreak limiting the scope of damage when killed, so terminating a lane with prejudice is a simple and practical affair, done easily and cheaply. You could still ruin other lanes or processes if they depend on killed services not easily replaced, especially if you never planned for this. But the lane is the scope in which interference, or the lack of it, is handled.

In a shared mutable memory system, cleaning up memory resources can be complex. That's why I spent more time thinking about what happens to memory than anything else. With no separate address space, a lane no longer magically reclaims all memory when killed. Instead it must be done systematically, by queuing state that needs release so it gets freed incrementally by some form of sweeper. The bulk of this amounts to fiber continuation stacks, which themselves hold all the roots that need release. After a lane is killed, code in its fibers can no longer run because all authority is revoked, and this means all roots must be visible through standardization or reflection. In my case I like incremental (async) refcounting to collect garbage from zombie activation frames left behind by dead lanes.
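
To sketch what I mean by a sweeper (names invented): killing a lane only moves its fibers' frames onto a pending queue, and the sweeper later drops references a few at a time from the scheduler loop.

#include <stddef.h>

typedef struct obj { int refcount; } obj_t;     /* hypothetical refcounted object */

typedef struct zombie_frame {
    struct zombie_frame *next;
    size_t  nroots;
    obj_t **roots;                     /* roots made visible via standard layout or reflection */
} zombie_frame_t;

static zombie_frame_t *pending;        /* appended to when a lane is killed */

static void release(obj_t *o)
{
    if (--o->refcount == 0) {
        /* return the object's memory to its pool */
    }
}

static void sweep_some(size_t budget)  /* called incrementally, never all at once */
{
    while (pending != NULL && budget > 0) {
        zombie_frame_t *z = pending;
        while (z->nroots > 0 && budget > 0) {
            release(z->roots[--z->nroots]);
            --budget;
        }
        if (z->nroots == 0)
            pending = z->next;         /* frame drained; its memory goes back to the pool */
    }
}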

Not everything referenced by a lane's fibers is reclaimed when a lane exits, because releasing a reference only means you are done with it, not that anyone else still holding an independent reference is. Immutable state passed between lanes can live as long as outstanding references stay around in fibers still alive. A single mistake, anywhere, made by a fiber that did not honor the immutable status of shared state could cause memory corruption that is very hard to debug, and this is one of the principal frightening things about a lightweight process VM running C-style tasks that rely on convention for immutability.

Mutation is an opportunity to interfere with concurrent code using the same state, so lanes either need to designate state as immutable, or else use one of several possible sorts of synchronization mechanism to disambiguate access conflicts. Collectively you might call them all "locks", even if the specific flavor varies from one to another. When a killed lane's references are released, that must include all locks the lane held when it died; otherwise system deadlocks can occur.

To cooperatively verify that immutable objects are not modified, shareable state should have an associated change count that increments whenever the state is updated. Once state becomes immutable, its change count must never alter, so a copy of the original change count can be captured at initial reference time and compared later to assert nothing has since changed. This ought to occur often and pervasively in fibers working with shared state, to provide empirical evidence about correct behavior based on expected invariants. Failure of the pre-condition ("no one has changed my immutable input data") should kill a lane, or perhaps even kill a VM if the debug level has been set that high.
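
A rough sketch of the check, with invented names: capture the change count when first taking a reference, then assert it before relying on the data.

#include <stdint.h>

typedef struct shared_buf {            /* hypothetical shareable object header */
    uint32_t change_count;             /* incremented on every mutation; frozen once immutable */
    /* data follows */
} shared_buf_t;

typedef struct immutable_ref {
    const shared_buf_t *buf;
    uint32_t expected;                 /* captured at initial reference time */
} immutable_ref_t;

extern void lane_fail(const char *why);   /* hypothetical hook: kill the lane (or the VM) */

/* Pre-condition: "no one has changed my immutable input data". */
static void assert_unchanged(const immutable_ref_t *r)
{
    if (r->buf->change_count != r->expected)
        lane_fail("immutable input was mutated");
}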

When a VM has an assertion failure, it marks a magic field in the VM to say "I am dead" and returns to the host environment. Henceforth, calls to the VM error out with "I'm dead" status, until the host environment decides to reinitialize all VM state to reboot the VM. Or, if a developer is debugging, one would like all VM state to stay completely unchanged for inspection (perhaps from another VM still alive).

Part of this description aims to convey concern for avoiding conflict among lanes, and another part aims to show the same can be done between VM state and host environment state. When you put all your eggs in one basket and do something very complex, something will be wrong and diagnostics are very important.

Indeterminacy and Composability are inherent to the Actor Model

Indeterminacy and Composability are inherent to the Actor Model. But periodically there have been complaints. For example, see Actors are overly non-deterministic.

Partly, the source of discomfort comes from a feeling of loss of control. Many programmers are control freaks and to them indeterminacy represents an extreme loss of control. (Einstein voiced similar feelings when quantum mechanics was developed.) It pains control freaks that programs now operate in a humongous many-core networked environment in which the use of resources is not under the micro-control of their programs.

Also, people hate change (in their bones) and some demand to know why functional composition (as in the lambda calculus) is not sufficient--which critics have translated into the slogan "Actors do not compose." However, Actors exhibit scalability and modularity in a more general way: organizational composition. Of course, functional composition is a special case of organizational composition :-)

However, there is an even larger emerging challenge to control freaks: Inconsistency Robustness. See Inconsistency Robustness 2014.

What does 'organizational

What does 'organizational composition' mean, precisely? What is your operational definition, independent of the Actor model? Functional (more generally, categorical or algebraic) composition models are very formal, and their utility is well founded. But your 'organizational composition' seems at best ad hoc and at worst a meaningless buzz phrase. Can you provide a definition and a convincing argument that 'organizational composition' is meaningful and useful?

I feel you're misrepresenting the position against pervasive indeterminacy, but I don't feel it would do anyone any good to engage you on that subject.

Indeterminacy: *necessary* not omnipresent in concurrent systems

Indeterminacy is *necessary* but not omnipresent in concurrent systems. We aim to manage (not to maximize) indeterminacy!

Functional composition is easily axiomatized because it is an overly simple model of computation. Because functional composition is easily axiomatized, we can prove that Functional programs (e.g. lambda calculus) are less powerful and in practice can be exponentially slower than programming with Actors. Nevertheless, despite their limitations, Functional Programs and Logic Programs are useful idioms for Actor systems.

People understand very well the concepts and principles of organizational composition because of their extensive personal participation in human organizations and their use of electronic organizations. Being much more powerful than functional composition means that organizational composition is not so easily axiomatized. However, because something is not easily axiomatized (e.g. quantum mechanics) does not mean that it is meaningless.

What is your evidence that organizational composition is a "meaningless buzz phrase"?

What is your evidence that

What is your evidence that organizational composition is a "meaningless buzz phrase"?

I have been unable to identify a clean definition - formal, objective, precise. It seems you are unable to provide one.

Really, "meaningless buzz phrase" is the default when you put phrases in bold without even defining them. That's marketing, not CS.

Organizational composition is *not* a "meaningless buzz phrase"

Your accusation that organizational composition is a "meaningless buzz phrase" is without foundation.

Research on organizational composition is an active research topic. Casting aspersions at ongoing research programmes is not productive.

There are some preliminary results on organizational composition and related topics in: Parallelism and Concurrency in the Actor Model

Research is always ongoing.

Research is always ongoing. You shouldn't make bold claims before you're prepared to defend them. If your claims have a valid foundation, I would like to see it. Starting with a precise definition of 'organizational composition'.

I am suspicious of more than a few claims from that actors model brochure you linked me to. A couple examples:

Because humans are very familiar with the principles, methods, and practices of human organizations, they can transfer this
knowledge and experience to iOrgs.

Have you actually researched this? How familiar is the average human with the structure of large human organizations? What makes you believe that intuitions regarding human organizations will transfer effectively to inhuman ones?

iOrgs achieve scalability by mirroring human organizational structure.

What makes you believe that human organizations are especially scalable? (Much less composable?) From what little I know of them, it seems human organizations go through terrible growing pains, and often fail in the effort.

Organizations scale, but the limits are unknown

Organizations scale, but the limits are unknown.

The wording of the second quotation can be improved as follows:

iOrgs achieve scalability using methods and principles similar to
those used in human organizations.

Thanks!

Simple Question

What does the Actor model do for us that we aren't already doing? When I look around I see concurrency everywhere. It permeates the natural world, the biological, the cultural, and most of all the world of computing. It is working! It always has worked! Why are we talking about this?

The science of concurrent computation is in its infancy

The science and engineering of concurrent computation is in its infancy.
The Actor Model is a good start on foundations, but its engineering development and deployment is just beginning.

First, You didn't answer my

First, you didn't answer my question. Second, the science of concurrency is as old as computing.

Actor Model provides a foundation for Concurrent Computation

The Actor Model provides a foundation for Concurrent Computation.

Initial models of computation (e.g. Lambda Calculus and non-deterministic Turing Machines) did not provide for concurrency.

Hartmanis and Stearns

I see the foundation to be serial/parallel decomposition. Basically it comes down to algebra and a little common sense about connecting and synchronization. The original handbook on the subject is "Algebraic Structure Theory of Sequential Machines" by Hartmanis and Stearns, from 1966. For anyone who wants a mathematical theory of synchronization, monads are a good start.

CCA

Causal commutative arrows are a very nice foundation for concurrency and synchronization, especially if used with a reactive model.

Cellular automata are also a nice foundation for concurrency with locality properties and flexible interactions.

Hewitt seems to assume (or define) that 'indeterminacy is necessary for concurrency'. I consider indeterminacy to be an orthogonal issue.

Indeterminacy is fundamental to concurrency

Indeterminacy is fundamental to concurrency just as indeterminacy is fundamental to (quantum) physics.

Parallel computation (e.g. Functional Programs and Logic Programs) is a useful programming idiom, but it doesn't implement general concurrent computation.

There are many ways to fail to address issues of concurrency :-(

Indeterminacy is fundamental

Indeterminacy is fundamental to concurrency just as indeterminacy is fundamental to (quantum) physics.

Indeterminacy is not fundamental to QM. See for instance the de Broglie-Bohm interpretation of QM which is fully deterministic. Analogously, indeterminacy is only fundamental to some interpretations of "concurrency".

Indeterminacy in Actors is local in accord with standard QM

Indeterminacy in Actors is local in accord with the standard theory of quantum mechanics.

The de Broglie–Bohm theory is explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the whole configuration of the universe. Because the known laws of physics are all local, and because nonlocal interactions combined with relativity lead to causal paradoxes, an overwhelming consensus of physicists find the de Broglie–Bohm theory unacceptable.

Because the known laws of

Because the known laws of physics are all local, and because nonlocal interactions combined with relativity lead to causal paradoxes, an overwhelming consensus of physicists find the de Broglie–Bohm theory unacceptable.

QM is non-local in configuration space, period. de Broglie-Bohm simply makes this explicit. Causal paradoxes are not inevitable because you still need to be able to extract information from configuration space in order to create a paradox. de Broglie-Bohm has been extended to relativistic mechanics without paradox. Your claims that QM requires indeterminism are simply false.

Furthermore, I doubt you've performed or read any empirical studies on how acceptable de Broglie-Bohm is to the "majority of physicists", not that that's even a meaningful measure of value or truth. That's the last I'll say about physics.

The value of indeterminism to concurrency is a completely unrelated question to its value in physics. You're not doing yourself any favours by muddying the waters with such weak analogies.

It's worse than that

The QM situation is, I would suggest, even more ambivalent than that, as whether or not it's inherently non-local depends on how one defines locality; I'll refrain from linking to my blog post about that, as I can only agree that these properties of QM shed no light on the part of PL design ostensibly under discussion here.

Interaction creates reality locally

Interaction creates reality locally. Also, indeterminacy is very real in computation, e.g., implementations using hardware arbitration. See discussion in the following: Formalizing common sense

Programming languages need to implement and help manage indeterminacy in computation.

Unfortunately, because of the weakness of our mathematical formalisms, correlations are often calculated using mathematical tricks that make use of non-local regions of space-time. But this does not by itself mean that physical interactions are non-local. Instead of giving up on causal locality, we need better mathematical models for physics.

Of course, there are some outlier mathematical models of physics that deny indeterminacy and locality of causation. Such models are best ignored.

Muddling QM

Now that, I must admit, sounds like a straightforward misunderstanding of the relation between QM and locality. In addition to the already-noted inappropriate muddling of QM and concurrency.

Quantum physics: relevant to theory and practice of computation

Quantum physics is relevant to the axiomatization and implementation of computation.

According to [Laudisa and Rovelli 2008]:

• Quantum physics discards the notions of absolute state of a system
and absolute properties and values of its physical quantities.
• State and physical quantities refer always to the interaction, or
the relation, among multiple systems. 
• Nevertheless, quantum physics is a complete description of reality

Furthermore, according to [Rovelli 2008]:

quantum mechanics indicates that the notion of a universal
description of the state of the world, shared by all observers, is a
concept which is physically untenable, on experimental grounds. 

In this regard, [Feynman 1965] offered the following advice:  Do not
keep saying to yourself, if you can possibly avoid it, "But how can
it be like that?" because you will go "down the drain" into a blind
alley from which nobody has yet escaped.
