Are Actors a Good Model for Computation?

Actors seem to allow messages to be received in a different order from the order in which they were sent. This sounds a lot like the problems with simultaneity that occur in relativity: effectively, messages may take different paths from sender to receiver.

Being an electronics engineer as well as a software developer, I find that the actor model sounds a lot like an asynchronous circuit, and those are really difficult to design so that they work properly. There is a reason why synchronous design dominates computer architecture, and even that is hard to get right. In synchronous design all messages sent are received at the next clock tick; messages form a stream over pipes. Even so, when designing chips we often build a model in 'C' to test the chip against, because it is so much easier to get sequential code correct.

The ideal solution is to take sequential code like 'C' and compile it to a parallel implementation. The reason, in my experience, is that humans are just not good at parallel design, and this is even more true of asynchronous design than of synchronous design.

I am not disputing that actors model communicating processes running on different nodes of a distributed network well. My concern is more that this architecture is only used for very regular problems, where a common kernel (or a few different node types, as in a map/reduce machine) is distributed, because it is difficult to program this way.

Are actors a good model for general computation, or should they only be used when necessary (and when might it be necessary)?


Are Actors a Good Topic for Forum Posts?

The answers to both of these questions are the same. The only way to stop actor overload is to stop discussing it.

Open Minded

I am interested in genuine answers to this question. Being open minded, I am wondering if I should be shifting the focus of my own work in language design. Initially I was sceptical of ActorScript, but there are a couple of interesting bits, namely the type system without universal quantification (and possibly the second order nature of it, and the inclusion of natural numbers), and the actor model itself for parallelism.

My concerns at the moment are regarding "actors-all-the-way-down", the dependently typed nature of "types-are-actors", and whether it serves as a good model for computation. I already intend to take the good bits and investigate how they would affect my language design (experimenting with adopting various features of the type system, and using the actor model for inter-process communication). The question for me at the moment is: should I take ActorScript as a whole, or just the bits I like? That question seems to come down to whether actors make a fundamentally good model of computation, one that is easy for people to think about, easy to write correct algorithms in, and easy to implement on fast, commonly available hardware.

Not specific to Actors

Out-of-order message handling is actually fairly common in concurrency models, as long as there are some constraints on how "out-of-order" they can be. If you have two actors that interact with a third actor, then even if the first sends a message before the second, the first actor can't know that its message was sent first unless they somehow synchronize.

If your software does depend on this level of ordering then you need to make that explicit in the design and not just depend on it being right by accident.
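
One common way to make that ordering explicit is sketched below in Haskell (my own illustration, not code from any particular actor system): tag each message with a sequence number at the sender, and have the receiver release messages strictly in sequence, buffering anything that arrives early.

    import qualified Data.Map.Strict as Map
    import           Data.Map.Strict (Map)

    -- Receiver-side state: the next sequence number we are willing to
    -- release, plus any messages that arrived ahead of their turn.
    data Reorder a = Reorder
      { expected :: Int
      , buffered :: Map Int a
      }

    initial :: Reorder a
    initial = Reorder 0 Map.empty

    -- Accept one (sequence number, payload) pair; return the payloads that
    -- can now be delivered in order, plus the updated state.
    accept :: (Int, a) -> Reorder a -> ([a], Reorder a)
    accept (n, x) st = drain st { buffered = Map.insert n x (buffered st) }
      where
        drain s = case Map.lookup (expected s) (buffered s) of
          Nothing -> ([], s)
          Just y  ->
            let s'        = s { expected = expected s + 1
                              , buffered = Map.delete (expected s) (buffered s) }
                (ys, s'') = drain s'
            in  (y : ys, s'')

If sender and receiver already share a FIFO link, none of this is needed, which is the point: pay for ordering only where the design actually relies on it.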

As to your final question, I think if you need to model concurrency rather than parallelism, then the Actor Model has proven to be reasonably effective.

There are lots of actor systems, I bet none of them use

Hewitt's lack of order on messages.

Say I have something like "actors" that have the property that only one message is handled by a given object at a time and the system is optimized for this.

Further, I guarantee (only) that two messages from the same source to the same target will be seen in the same relative order in which they were sent.

This seems like a situation that is perfectly practical, no matter what Hewitt says, and easier to reason about.

Hewitt "actors" as written don't seem like a good idea, you never want to create a combinatorial problem for yourself.

But these, whatever they are, sound useful.
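
A minimal sketch of that kind of object in Haskell (the names Actor, spawn and send are mine, for illustration only): each actor owns a mailbox drained by a single thread, so only one message is handled at a time, and messages written by one sender are dequeued in the order that sender wrote them.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forever)

    -- An "actor" is just a mailbox; one thread drains it, so messages are
    -- handled one at a time, and a single sender's messages arrive in the
    -- order it wrote them.
    newtype Actor msg = Actor (Chan msg)

    spawn :: (msg -> IO ()) -> IO (Actor msg)
    spawn handle = do
      mailbox <- newChan
      _ <- forkIO (forever (readChan mailbox >>= handle))
      pure (Actor mailbox)

    send :: Actor msg -> msg -> IO ()
    send (Actor mailbox) = writeChan mailbox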

If you really did want to write for a system like Hewitt's, then all of your context-dependent information would have to be in the messages. That is, you wouldn't so much have stateful objects; you'd have stateless objects passing around stateful messages.

That's an interesting idea to play with but very exotic.

Huge implementation difference

I created my language based on the modified Actor model you suggest (queued messages and guaranteed sequential delivery from any single other actor). With this system, I believe I have created automatic concurrency/multi-core programming without locks or program annotations.

I believe queued messages and some sequential guarantees are essential to an efficient implementation of the Actor model, which is, strictly speaking, quite different from the Actor model envisioned by Mr. Hewitt.

Channels

This sounds a lot like Haskell's Channels.

Messages might be similar but details matter

Please note that I said "create" rather than "invent" for my particular version of message passing. I designed my message passing system without knowing how other programs did it or anything about Actors. I am sure many others have come up with the same sort of solution as independently as I did. My messages can require a response (function style), call other actors (continuation-passing style), or just process and stop the message chain (log style).

I am not familiar with Haskell Channels but I wouldn't think they are very similar to mine. When Smalltalk was originally created, it sent messages for "if" statements and loops. This turned out to be quite slow, so they switched the messages to function calls, which arguably are one version of the class of all message types. My messages are implemented using Maps of variables where the order, number and type of arguments has full flexibility. My message model is used at a very high level, so I can have this kind of flexibility without the excessive performance hit found in Smalltalk. At the local level, I use function calls as they are needed for efficiency, and messages only at the macro level.

Sequential Streams

I am sure it's not identical, but I think the queued synchronous messages you used are a lot easier to program with, and I think the developers of Channels for Haskell agree. A Channel in Haskell is a sequential stream of messages. See pp. 25-28 in http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/mark.pdf
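
For reference, the Chan type from Control.Concurrent.Chan (standard in GHC) is exactly such an unbounded FIFO, and getChanContents even presents it as a lazy sequential stream. A small, illustrative example (my own, not taken from the paper):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, writeChan, getChanContents)

    main :: IO ()
    main = do
      chan <- newChan
      -- Producer: messages are queued in the order they are written.
      _ <- forkIO (mapM_ (writeChan chan) [1 .. 10 :: Int])
      -- Consumer: the channel is observed as a sequential stream.
      stream <- getChanContents chan
      print (take 10 stream)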

My post was about being in good company using synchronous message queues.

Good ideas should be shared without jealousy

I did get that message passing in the Haskell way (and my way) is a good thing. My response could have been phrased in a more positive manner, sorry.

A message must be received and processed in order to be acked

In terms of modeling, a message must be received and processed in order to be acked.

CS needs foundations for massive concurrency

Computer Science needs foundations for massive concurrency in the context of the following:
* many-core computer architectures
* huge datacenters
* IoT
* requirement for no single point of failure

Hardware follows software.

I think that outside of specialist uses, like uniform architectures (Google's map/reduce, and parallel scientific simulations), the highly parallel grid computer is of limited use. Certainly HPC is one market, but a relatively small one.

I think for general purpose computing the cost of software development is the primary driver. People will buy the hardware that gives them the best applications, and the application developers will develop for the most popular hardware that is easiest to develop for.

You can see the evidence for this in the development of languages like Swift by Apple, and C# by Microsoft. Further evidence can be seen in the comparative success of x86_64 over Itanium.

So highly parallel CPUs will only sell to the mainstream market if developing software for them can be made as easy as for sequential processors.

Actors Criticism

I think there is a lot of reasonable criticism of actors.

A lot of my early language designs and implementations (between 2005 and 2010) were based around the actor model. I'm familiar, comfortable, and experienced with the actor model.

But since mid 2010 I've favored more composable (algebraically) and predictable (deterministic up to network disruption) programming models. Large scale programming is a lot easier if you can reliably test and debug subprograms and services, which isn't feasible for highly non-deterministic models. The massive degree of non-determinism exhibited by actors systems isn't essential for scalability or performance.

At scale, some inspiring alternative models are Bloom and CALM, CRDTs, lightweight time warp, blackboard systems (above CRDTs). We can leverage temporal logics to model time and modal logics to model spatial distribution of information and computation (and substructural logics to model finite resources). My reactive demand programming is something of a sweet spot merging large scale techniques with the 'small' scale, pure, deterministic FP and FRP. (FP is fine up to the scale of entire virtual machines and simulated sub-networks, but we cannot readily model pure computations over an unreliable physical network.)

Multitudes of issues

There are multitudes of issues in the above linked postings.

It might be good to address them one at a time.

If your single line

If your single line follow-up post "Indeterminacy is inherent to massive concurrency" with identical text body is an example of addressing them one at a time, then addressing them one at a time probably isn't good.

re debugging non-deterministic models

Large scale programming is a lot easier if you can reliably test and debug subprograms and services, which isn't feasible for highly non-deterministic models.

Can you clarify?

For example, what you've written seems to me to imply that "it isn't feasible" to "test and debug" any modestly complex web service.

That seems absurd but then: in what sense is a web service not subject to a "highly non-deterministic model" or not a "subprogram [or] service"?

I've favored more composable (algebraically) and predictable (deterministic up to physical network disruption) programming models.

That also confuses me because it sounds as though you describe programming styles that you prefer where they are applicable, yet I don't see anything about actors that prevents you from using such styles.

Debugging becomes difficult

Debugging becomes difficult because observed bugs are difficult to reproduce or regression-test against. Testing becomes difficult because there are more possibilities to validate, and we cannot as easily test for negative conditions (e.g. that some bug has been eliminated). Testing and debugging are further hindered because non-deterministic doesn't imply probabilistic: there is no guarantee an implementation observes all the expressed outcomes. Bugs can be completely masked until seemingly unrelated changes (in the implementation, program code, physical networks, etc.) impact scheduling.

Most web services are not "highly non-deterministic". The web server, or transactions on the database backing it, frequently become a serialization bottleneck for concurrency. However, use of micro-services with lots of POSTs and PUTs between them could approach the levels of concurrency and non-determinism seen in many actors systems.

Quantity matters. Scale matters. When we compose non-deterministic subprograms, we get at least combinatorial growth in the number of outcomes from any given set of initial conditions and inputs. At some vague threshold we exceed our ability to keep an effective model of a system's behavior in our head, or effectively test and debug. Embracing fine-grained non-deterministic concurrency only helps us reach our limits that much more quickly.

I don't see anything about actors that prevents you from using such styles

My standards for a good user experience are considerably higher than "doesn't prevent me". What a model makes possible is much less interesting and nuanced a subject than what a model makes easy or difficult. While I could design algebraically composable actors with predictable behaviors, doing so requires extra foresight, discipline, cooperation of library developers, and so on. Any programming language deficiency can be resolved by discipline and foresight. Relying on discipline or foresight is a sign of deficiency.

re Relying on discipline or foresight is a sign of deficiency.

Relying on discipline or foresight is a sign of deficiency.

Wow.

We are fundamentally at odds.

Every engineering discipline that I know of does two things:

1. It tries to expand the design space without much worrying about expanding the chances to make intractable designs.

2. It tries to develop discipline and foresight to map out tractable, useful, subspaces of the design space.

Rarely, if ever, especially in large-scale and/or life-critical fields, do engineers expect to produce "fool-proof" design methodologies and tools.

There's no "never-fail" technique for designing a chemical processing plant, or a sewer system, or airplane, or...

If the combinatoric possibilities of a non-deterministic computing system have proved too hard to analyze, my first thought is that the way the design is using non-determinism involves some poor choices. I don't go immediately to the conclusion that non-determinism must be avoided in favor of supposedly easy-to-analyze models.

Strange difference.

Good and bad tools

Here's the worst case:
try to design concurrent, non-blocking algorithms when the most powerful tool you're allowed is atomic compare-and-exchange.

The combinatorial explosion makes algorithms impractical to analyze, and if you're diligent, you'll find that most of the published algorithms in that area have bugs. That's right, even the experts fail when they try.

Spend a few years at that, and you'll understand that you can't just accept the combinatorial explosion and expect to reason your way out of it, you need tools and practices to avoid it most of the time.

Actually I agree with both of you.

I think that engineers should be given access to the lowest possible level tools but that it's a mistake to rely on them overmuch.

Since software makes general purpose tools and metatools, if at all possible one should come up with tools that avoid problems or automatically solve them instead of leaving it to the programmer.

The problem with "messages can get out of order in all cases" is that it's horrible example of "You Aren't Gonna Need It!"

Yes, it's a general case. But it's a general case that's very expensive in terms of complexity and utterly unneeded in 99.99% of cases.

I am tempted to complain about other cases of YAGNI, but the main point is, don't make engineers pay for what they won't need.

Human attention is a

Human attention is a precious resource, and we shouldn't waste it. Requiring discipline and foresight isn't a good thing. But we don't know how to design a deficiency-free programming language. I have not advocated foolproof designs or never-fail techniques. Budgeting some routine discipline and foresight (e.g. for unsafe, explicit memory management) can contextually be a reasonable tradeoff if it means we gain something important in return (e.g. efficiency on commodity hardware). That's if we don't have a better solution; invention is the art of avoiding tradeoffs.

I don't follow your first point under engineering disciplines. If we're not worried about intractable or invalid designs, then the easiest way to maximize the design space is to remove discipline. Just throw infinite monkeys at the problem in a sufficiently general language. No discipline needed for that. Discipline seems to better fit the second point. And a good language/toolset makes the monkey's work a lot easier.

re Requiring discipline and foresight isn't a good thing.

Requiring discipline and foresight isn't a good thing.

I can't disagree strongly enough.

Tools should facilitate discipline and foresight, not replace it.

My (silly, exaggerated) advice to you: Avoid woodshops or anything like them.

Just throw infinite monkeys at the problem in a sufficiently general language. No discipline needed for that.

The business owners who do that kind of thing should be in jail. The monkeys, at re-education camp.

You misconstrue my position.

The position opposite "requiring discipline and foresight isn't a good thing" is closer to "we should use manual memory management BECAUSE it requires discipline, which is a good thing according to Thomas Lord". When weighing pros and cons for a proposed language feature, I would insist that requiring discipline is not a 'pro' - not a good thing - and should probably be a 'con' if the required discipline is for something routine. This is not the same as a rejection of discipline. I don't see how my position isn't obvious, even to you. You're attacking some silly straw man.

re misconstrue

I don't see how my position isn't obvious, even to you.

Tonally, I suppose I should rejoin that, coming from you, you could have stopped at "I don't see" and you would already have arrived at a tautology.

Nevertheless.

I believe you if you say I have misconstrued your position but that leaves me at a loss as to what your position is.

This is what I took to be your position:

Large scale programming is a lot easier if you can reliably test and debug subprograms and services, which isn't feasible for highly non-deterministic models.

I extract from that what I think is an assertion I would paraphrase:

(paraphrase): Reliably testing and debugging sub-programs is desirable but isn't feasible for highly non-deterministic models.

That seems to me to be patently false. Did you mean it as a joke?

Whether or not sub-programs in a highly non-deterministic model can be reliably tested and debugged seems to me to be entirely dependent on the design of the particular sub-programs.

You continued:

The massive degree of non-determinism exhibited by actors systems isn't essential for scalability or performance.

At scale, some inspiring alternative models are Bloom and CALM, CRDTs, lightweight time warp, blackboard systems (above CRDTs). We can leverage temporal logics to model time and modal logics [....]

(emphasis added.)

I am at a loss to comprehend how the Actor model is an alternative to the techniques that you enumerate. If anything, it seems to me that the Actor model is a fine foundation on which to define such techniques as you list. (Further, definition in the other direction -- Actors on top of those other models -- seems implausibly awkward at best.)

Removing the 'large scale'

Removing the 'large scale' in your paraphrase doesn't help. As I said above: Quantity matters. Especially in context of combinatorial explosions, limits of human comprehension, and economics of tradeoffs. Subprograms needn't be small. Consistency of design also tends to diminish at large scales (due to independent development, ad-hoc organic development and glue code, changing requirements), so appeals to design are much weaker than you seem to be assuming. I'm sure you could carefully design some subprograms for easy debugging. But those won't be representative of a normal programmer experience.

Regarding alternative models: Why assume I'd want to implement actors above whichever model I favor? If the alternative is sufficiently expressive, I'll need actors almost as frequently as I need to model SKI combinators above lambda calculus. To whatever extent actors model criticism is valid, it remains valid for actors above any other model. Which model makes a more convenient foundation for the other models seems a separate issue from which model gets real work done with the least fuss and mess.

But if foundations do matter, I'm confident you'll find implementing the alternatives above actors more awkward, and the other direction more plausible, than you seem to assume above. I say this from experience working both directions for a few of those alternatives. Even purely functional models of actors aren't difficult (modulo interference from a static type system): the full range of non-determinism can be expressed by separating or parameterizing a message scheduler.
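
For concreteness, here is a rough sketch of that last point in Haskell (my own illustration, not code from any of the systems mentioned): actor behaviours are pure functions, and the only source of non-determinism is the scheduler passed to step, so a deterministic scheduler yields a deterministic, testable system.

    import qualified Data.Map.Strict as Map
    import           Data.Map.Strict (Map)

    type Address = Int

    -- A purely functional actor behaviour: consume one message, return the
    -- next behaviour plus outgoing (destination, message) pairs.
    newtype Behavior msg = Behavior (msg -> (Behavior msg, [(Address, msg)]))

    -- A configuration: the actors plus the undelivered messages.
    data Config msg = Config
      { actors  :: Map Address (Behavior msg)
      , pending :: [(Address, msg)]
      }

    -- The scheduler chooses which pending message to deliver next. All of
    -- the system's non-determinism lives in this one parameter.
    type Scheduler msg =
      [(Address, msg)] -> Maybe ((Address, msg), [(Address, msg)])

    step :: Scheduler msg -> Config msg -> Maybe (Config msg)
    step choose cfg = do
      ((dest, m), rest) <- choose (pending cfg)
      Behavior f        <- Map.lookup dest (actors cfg)
      let (next, outs) = f m
      pure cfg { actors  = Map.insert dest next (actors cfg)
               , pending = rest ++ outs
               }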

Anyhow, I'm not going to invest more time or energy arguing on this subject. The ROI is depressing. I'm sure it's the same for you.

re Which model makes a more convenient foundation

seems a separate issue from which model gets real work done with the least fuss and mess

You are begging the question of what deserves the name "real work".

Plus, your complaint here is analogous to "criticizing" the lambda calculus on the ground that most guys in cubicles can get through their whole lives without hearing of it.

Whether or not sub-programs

Whether or not sub-programs in a highly non-deterministic model can be reliably tested and debugged seems to me to be entirely dependent on the design of the particular sub-programs.

If we interpret David's use of "feasible" as simply a measure of the efficient use of resources in a given market, it's sensible. C programs haven't become more "infeasible" to write if we stick to the types of programs we used to write, but it's certainly "infeasible" to write modern programs we expect with competitive times to market.

Suppose Rust successfully captured modern C++'s lifetime patterns and could check them statically, with little meaningful loss of expressiveness. One might expect that Rust would yield a meaningful competitive advantage, and if it were sufficient, at a certain point it would become infeasible to do new development in C++. C++ wouldn't have changed, but the world would have.

Wasting discipline or foresight is a sign of deficiency

Any developer, no matter how talented or motivated, has a limited amount of time and attention.

If you must devote that time and attention to maintaining implementation details which could be handled by using a different abstraction, that effort is wasted.

It's not a matter of making things "fool-proof", it's a matter of saving your higher faculties for things that matter.

re Wasting discipline or foresight is a sign of deficiency

If you must devote that time and attention to maintaining implementation details which could be handled by using a different abstraction, that effort is wasted.

OK, but that is not the issue here.

The issue is whether or not it is better to hobble tools rather than have tools that can be used for bad designs.

The issue is whether or not

The issue is whether or not it is better to hobble tools rather than have tools that can be used for bad designs.

It seems that the issue is whether those tools are truly hobbled at all. You seem to imply they are simply because they aren't as general as actors, but that isn't a meaningful argument.

By this argument, OCaml is "hobbled" compared to C and assembly, but I could build systems in OCaml I wouldn't even consider building in C or assembly, even if they were the only tools available. That doesn't inspire confidence in such a definition of "hobbled".

re hobbled tools

By this argument, OCaml is "hobbled" compared to C and assembly,

That's not a good analogy since C and assembly both lack facilities for abstraction with which OCaml could be added "as a library".

Two examples drbarbour offered were blackboard systems and CALM tools like Bloom.

Such models for concurrent programming are hobbled relative to actors because it would be useful and straightforward to express a blackboard system or CALM tools in (e.g.) ActorScript (but you couldn't reasonably express ActorScript in those tools).

That still isn't a

That still isn't a meaningful definition of "hobbled". If CALM tools accept the vast majority of programs in which we're interested, exclude the vast majority of programs which would have no use, and make it easy to compose good programs into larger good programs, then that's a good tool, not a hobbled one. You would have to argue that the CALM tools do not do this to conclude they are meaningfully hobbled.

ActorScript includes the entire set of well behaved and badly behaved programs, and the composition of any two given programs has few guaranteed properties of this sort, if I understand correctly. If one's goal is rapid development of working programs, it seems ActorScript would be the "hobbled" one given its relative disadvantages here. ActorScript might be a good tool to build other tools since it's so general, but that doesn't make it a good tool in the domain covered by CALM, etc.

The only way ActorScript would be unequivocally, universally better is if you believe that no matter what domain-specific abstraction you build, like CALM within ActorScript, either any actor plugged into said domain is inherently domain-safe (i.e. respects domain invariants), or you could somehow enforce that only domain-safe actors are plugged in. I don't think that's the case, but please correct me if I'm wrong.

How should ActorScript be "hobbled"?

Which constructs should be removed from ActorScript?

Sequential programming is good enough.

Sequential programming is adequate for most tasks. You don't want to have to deal with the complexities of distributed programming unless you have an application that really needs it. The actor model might be the best model of asynchronous processes we have, but we should not pay the price of that complexity if we don't need it.

No single points of failure in important applications

An important application should not have a single point of failure.

Is it worth the cost?

Most people make do with software like word processors and spreadsheets. They would gain very little from the increase in cost necessary to engineer such things to have no single point of failure.

The chance of failure in many applications is low enough that the remaining risk is not worth dealing with. You simply accept the failure because it happens so infrequently. There are obviously some applications that benefit from a redundant approach, but this can be done at a much higher level, like the Space Shuttle's three computers that majority vote on the correct actions, each independently running a sequential program.

Why should people have to put up with failing applications?

Why should people have to put up with failing applications?

Money

Because they don't want to pay more to get non-failing applications :-)

Edit: or to put it another way, we can already write reliable enough applications. It's possible to achieve four nines (99.99%) uptime using standard good engineering practice, and just writing software well. Five nines and higher is possible using databases with eventual consistency, and replicated stateless frontend servers hosted in a cluster.

"Hobbling" can result from

"Hobbling" can result from both adding or removing constructs. ActorScript seems very general, so some domains would require adding constructs to "hobble" it and ensure some invariants cannot be violated.

re re hobbled

If CALM tools accept the vast majority of programs in which we're interested,

As I understand it, it is a built-in assumption of CALM that it is not a general model of computation (hence its "points of order" concept). It wants some more general model of computation to escape to.

I don't pretend to thoroughly "get" Bloom (I've looked at the introductions). With that qualification, it appears to me that Actors are a very natural model in which to express this.

For example, the description of channel collections and the <~ operator in the intro material is practically begging for this. Consider this quote:

"Any Bloom statement with a channel on the lhs must use the async merge (<~) operator. This instructs the runtime to attempt to deliver each rhs item to the address stored therein. In an async merge, each item in the collection on the right will appear in the collection on the lhs eventually. But this will not happen instantaneously, and it might not happen atomically--items in the collection on the rhs may "straggle in" individually over time at the destination. And if you're unlucky, this may happen after an arbitrarily long delay (possibly never). The use of

In the Actor model: each value on the rhs is sent in an asynchronous message to the lhs. (The rest of the description for Bloom follows from this.)
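
A rough sketch of that correspondence in Haskell (my own phrasing of the quoted behaviour, not Bloom or ActorScript code): each item on the rhs becomes one asynchronous send to the lhs, with no guarantee about when, or in what order, the items arrive.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, writeChan)
    import Control.Monad (forM_, void)

    -- "Async merge": every rhs item is eventually delivered to the lhs
    -- mailbox, but each delivery is its own asynchronous send, so items
    -- may straggle in individually, in any order.
    asyncMerge :: Chan a -> [a] -> IO ()
    asyncMerge lhs rhsItems =
      forM_ rhsItems $ \item ->
        void (forkIO (writeChan lhs item))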

You challenge:

The only way ActorScript would be unequivocally, universally better is if you believe that no matter what domain-specific abstraction you build, like CALM within ActorScript, either any actor plugged into said domain is inherently domain-safe (i.e. respects domain invariants), or you could somehow enforce that only domain-safe actors are plugged in. I don't think that's the case, but please correct me if I'm wrong.

Actor interfaces are typed but it is true that an actor of type T might not satisfy additional axioms we'd like to associate with type T. (Example: We might want an axiom for type Collection◁eltType▷ that says if an element is added but not removed, the element is in the collection. The interface type Collection◁eltType▷ does not guarantee that the axiom holds true for every actor of type Collection◁eltType▷.)

A semantic constraint (like the axioms for Collection◁eltType▷) can be enforced at the point of definition or construction of an actor, at least if the constraints are themselves constructively expressed.

Analogy: a "Bloom block" satisfies CALM axioms by construction because such blocks are built only out of certain CALM constructs.

And example: a domain-specific ActorScript compiler could demand that a definition that implements Collection◁eltType▷ be accompanied by some proofs that various axioms are satisfied.

Thus, a domain-specific actor system could require that certain "plugged in" actors be specified by construction, or that they also implement a protocol establishing that they have unforgeable credentials from some trusted proof-checker.

(Of course, in some cases the desired axioms about a type are simple enough that they can be encoded entirely in the interface type.)
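
As a small illustration of the "by construction" idea (in Haskell rather than ActorScript, covering only the add-implies-member axiom; my own sketch): if the data constructor is hidden and the only way to obtain a Collection is through the exported operations, every value of the type satisfies the axiom.

    module Collection (Collection, empty, add, member) where

    -- The Collection constructor is not exported, so every Collection in
    -- existence was built with empty/add, and the axiom "an element that
    -- was added (and never removed) is a member" holds by construction.
    newtype Collection a = Collection [a]

    empty :: Collection a
    empty = Collection []

    add :: a -> Collection a -> Collection a
    add x (Collection xs) = Collection (x : xs)

    member :: Eq a => a -> Collection a -> Bool
    member x (Collection xs) = x `elem` xs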

In conclusion:

if you believe that no matter what domain-specific abstraction you build,

The phrase "no matter what" is too strong. The constraints (axioms) of the domain-specific abstraction have to be computationally verifiable.

"Rarely, if ever..."

> Rarely if ever, especially in large-scale and/or life-critical fields, do engineers ever expect to produce "fool-proof" design methodologies and tools.

To me that just proves engineers are all jerks. :-) Even more so in life-critical fields. I mean, we should all know we can't make it utterly fool proof but we should strive to make it not be asking for trouble.

Engineering simplifications

I don't go immediately to the conclusion that non-determinism must be avoided in favor of supposedly easy-to-analyze models.

And yet digital logic designers eschew asynchronous logic in favor of the easier-to-analyze synchronous model. Control system designers like to stick to linear models (instead of the more general nonlinear models) if they can, because they're easier to analyze and to get right. Safety critical software development often imposes limitations on the way a programming language can be used (see the MISRA rules for automotive software) for the same kind of reasons.

re engineering simplifications

And yet digital logic designers eschew asynchronous logic in favor of the easier-to-analyze synchronous model.

Conversely, there are the designers of NFS, of payment systems, of trading systems, of hospital inventory systems, of JIT logistics chains, .....

And yet digital logic designers eschew asynchronous logic in favor of the easier-to-analyze synchronous model.

I'm pretty sure that's a Moore's-law driven sweet spot that is expiring.

Necessity and Speculation

The first group you mention, NFS etc, use as little asynchronous design as necessary. It's harder to get right, so you use it strictly only where synchronous design won't work.

As to the second point, I don't like to use experience as an argument, but as someone who has experience of VLSI design, I don't think you will see asynchronous logic in computers any time soon. It's just too difficult. It's fine when you are designing a system with a few node types, but it gets exponentially harder the more you scale.

See this article from 2001: http://www1.cs.columbia.edu/async/misc/technologyreview_oct_01_2001.html

I guess they have given up by now. The conclusion of the article is correct though: we will do this only when necessary. Unfortunately for fans of asynchronous logic, Moore's law is nowhere near finished; with graphene transistors, and recent single-electron transistors where the gate is a single atom and can reliably switch individual electrons, there may be a lot more to come. You can also power down the clock to unused sections of the chip, and this is being done already.

This comes back to my point: you don't want pervasive asynchronicity, you want it only when strictly necessary.

So I think having actors in a language is good, but you want a sequential part of the language for the majority of the code, where synchronous is good enough.

An open question for me, is could a compiler of a sequential language use actors internally to parallelise sequential algorithms?

re The first group you mention, NFS etc, use as little asynchron

As a historic point:

The first group you mention, NFS etc, use as little asynchronous design as necessary.

Hmm. NFS was designed to be a stateless protocol running over UDP. Packets are expected to arrive out of order, to be dropped, and to be duplicated. Internally to a server, synchronization between request handlers can be pretty minimal.

NFS code is sequential.

The code running on each node is sequential 'C' code. The asynchronous design is limited to the network protocol; the code that sends and reassembles the packets is sequential.

It's worth noting that UDP NFS never worked that well, and most people used the alternative TCP protocol when it became available, as it was more reliable and had faster load times. By NFSv4, TCP had become the default protocol. Historically, the pervasive use of UDP might be seen as a mistake. Modern NFS is a stream-based design, with pipes, a bit like synchronous circuits.

Please don't make things up about NFS

The code running on each node is sequential 'C' code.

That is not necessarily true of either clients or servers. In practice, it was not always true. It was a design feature of the protocol that neither clients nor servers had to be sequential.

It's worth noting that UDP NFS never worked that well,

It really depended on what one used it for.

I am not making things up

I think you must misunderstand me, because I am not making anything up. The original NFS implementation was in 'C', it was in the original Sun version, and it is in Linux too. 'C' code is sequential. I don't mean that it does not use threads (although SunOS did not support threads, so it would have had to use processes) but that 'C' programs are sequences of statements that need to be executed in the correct order to preserve meaning. The implementation of the NFS server and client was written using plain-old-sequential programming, not actors, but "do...while", "repeat...until", "do A; do B; do C", etc. An interesting read: http://web.stanford.edu/class/cs240/readings/nfs.pdf

Let me revise my final statement: it is worth noting that TCP NFS works better than UDP, which is why it is now the standard.

Let's not lose focus though: you could model each NFS node (client or server) as an actor, and this is interesting. My whole dilemma with actors is about the "actors all the way down" part, and whether they make a good fundamental model for all computing. To me it seems the optimal mix is exactly this: to have plain-old-sequential code inside each actor. If you don't need concurrency, the default is a single actor, and therefore a sequential program inside that single actor.

re NFS

Keean, you made this claim:

The first group you mention, NFS etc, use as little asynchronous design as necessary.

The claim is simply false. In order to simplify NFS implementation, the designers took asynchrony as a strong constraint, and then even sacrificed traditional file system semantics to that constraint.

You are also incorrect when you say:

The implementation of the NFS server and client was written using plain-old-sequential programming,

First, in-kernel clients were not "plain-old-sequential programming".

Second, NFS clients included things like suites of shell scripts designed to operate, concurrently, over shared NFS file systems. Part of the trade, back then, was knowing how to manage the semantics of concurrent processes asynchronously communicating via an NFS file system!

Here:

The original NFS implementation was in 'C', it was in the original Sun version, and it is in Linux too. 'C' code is sequential.

Architecturally, an NFSv2 server is an RPC event loop. Messages to a server (dubbed "requests") arrive asynchronously. The RPC code acts as an Actor-style arbiter and sequentializes incoming messages.

Primitive server implementations act like a single actor, not handling the next message until a reply is sent for the current one.

More sophisticated server implementations dispatch messages to concurrent handlers. Expressed in terms of the Actor model, a more sophisticated server tail-sends each message to a handler and that handler will reply asynchronously. Meanwhile the server can receive the next incoming message. ("Swiss cheese.")

Concurrent handlers within a more sophisticated NFS server must synchronize at the FS level and/or a cache over the FS.

On the client-side, even in the early days, kernels would have multiple outstanding requests pending. If process A and then process B both called "read", for example, B did not have to wait for A's "read" to complete.

Still misunderstanding.

I see no evidence that the NFS design includes any more asynchronous design than was strictly necessary, and it seems you are misinterpreting what I mean by sequential programming. 'C' programming is clearly sequential, unlike ActorScript, for example. I am not sure what you mean about shell scripts, but writing concurrent processes that use NFS is hard, and NFS is not really designed to cope with that, so you may end up with garbage in the file (it says as much in the design docs I posted).

However I agree with everything you say from "Architecturally" onwards. Your understanding of how NFS works sounds exactly like my own. In which case it is clear you are misinterpreting what I mean when I say it was implemented sequentially.

My point was the code is 'C' code not ActorScript. Yes, the server can be modelled as an actor, but it is not composed of actors internally, and it is not actors all the way down. Inside, the code looks like a normal sequential 'C' program. Here's some code taken from the server:

    nfserr = nfserr_acces;
    if (!argp->len)
        goto done;
    nfserr = nfserr_exist;
    if (isdotent(argp->name, argp->len))
        goto done;
    hosterr = fh_want_write(dirfhp);
    if (hosterr) {
        nfserr = nfserrno(hosterr);
        goto done;
    }

    fh_lock_nested(dirfhp, I_MUTEX_PARENT);
    dchild = lookup_one_len(argp->name, dirfhp->fh_dentry, argp->len);
    if (IS_ERR(dchild)) {
        nfserr = nfserrno(PTR_ERR(dchild));
        goto out_unlock;
    }

Notice how one statement is executed sequentially after the other. This is what I mean by plain old sequential code.

not (re still misunderstanding)

I am not sure what you mean about shell scripts, but writing concurrent processes that use NFS is hard, and NFS is not really designed to cope with that, so you may end up with garbage in the file (it says as much in the design docs I posted).

Practitioners who wanted to write NFS-robust concurrent systems had a few options. One commonly used technique was to use a stateful locking server. The other commonly used technique was to leverage NFS' guarantees of idempotency and, sometimes, to (correctly) presume a guarantee of "rename" atomicity or semi-atomicity. (Atomicity meaning a rename takes place atomically; semi-atomicity means that the same file might exist under two names for an observable window of indefinite length.)

A late-in-history example of these design patterns can be found in the locking protocols used by the GNU Arch revision control system. (may it R.I.P.)

My point was the code is 'C' code not ActorScript.

While that certainly is a tautology it does not even pretend to support the claim you were arguing for. Regardless of your intent, you are moving the goalpost rather than admitting you were wrong in the claim I took objection to.

In any event, the C code of an RPC message handler translates very directly into pretty natural ActorScript code, assuming C-like types have been defined in advance.

Not moving goalposts

I am not moving any goalposts. The original point I made about 'C' code is the one I am still making.

Let's try and establish some common ground, and then maybe we can communicate. What do you think a sequential program is? Do you think a non-actor language like 'C', which is by definition a sequential language, can express the RPC call semantics of NFS?

Sorry Keean

Not moving goalposts

I am unconvinced.

Common Ground?

You don't answer the questions I asked in an attempt to establish common ground, so it's hard to believe you are genuinely trying to understand what I am saying. You deny that 'C' code is sequential, but haven't defined what you mean by sequential, so you can keep shifting the goalposts in your response rather than accepting you misunderstood.

Have a look at:
http://eujournal.org/index.php/esj/article/download/1108/1142

Here's a key extract:

Today’s microprocessors are the powerful descendants of the von Neumann computer. The so called von Neumann architecture is characterized by a sequential control flow resulting in a sequential instruction stream.

So just like the underlying CPU, which the C language model follows closely, it is completely correct to say: The C language has sequential control flow, and a sequential instruction stream. I don't think there is any reason to be confused when I refer to C as a sequential language.

I understood from the beginning that the RPC calls can occur asynchronously, so you explaining that to me adds nothing to the conversation.

'C' is a sequential language, but the NFS protocol is asynchronous. It is not hard to see that sequential things can model asynchronous things given the addition of context-switching and interrupts.

As for shifting goalposts, my original claim was that the design of NFS only used asynchronous design where necessary (and one might argue the eventual switch to TCP and a stateful server demonstrates the failure of that approach). To support my claim I stated that 'C' is a sequential language, and only the RPC mechanism (the protocol) was asynchronous. This proved to be the sticking point, which is why we are now discussing the sequential nature of 'C' code. One point depends upon the other; there are no moving goalposts, simply a chain of logic. If you accept C is sequential, my original point stands (else you are simply being inconsistent).

Not sure what you mean

Each process making a file-related call runs in its own context. If a number of processes are accessing NFS files, you may have a number of concurrent active RPCs. The `lower half' (interrupt-driven) kernel code may do a significant amount of processing, and it will be completely asynchronous as it can't rely on any user context (a response for user A's call may arrive while the kernel is processing user B's syscall). As it happens, I have done 3 RPC implementations. When I started, I had only just read Bruce Nelson's thesis on RPC, so the first version was a learning experience. The second version, a simpler & faster one, ran entirely asynchronously -- a finite state machine per RPC stream, and there may be a number of such concurrent RPC streams. This FSM design fell out as I tried to simplify the version one code and started factoring out common pieces. (Much later, I liked to use Scheme to design an FSM since it is just a bunch of tail-calls!)

Now I grant you that writing sequential code can be much easier but add in synchronization and you've lost the simplicity battle. So I would say sequential code is simpler when you are feeling your way around in some new domain and not much synchronization is involved! But understand the problem domain, dig deeper and often you'll find some FSMs.

TCP vs UDP is not very relevant to this particular point. Also note that TCP implementations are essentially state machines too.

Sequential Code, Asynchronous Protocol

Clearly I am not communicating my point very well. Yes the RPC calls can be made asynchronously, and I did mention earlier that the NFS server or client might be modelled by one big actor.

The point I am making is that the code that implements each RPC call in the server is sequential 'C' code. It's obvious really as it is written in 'C'. I am trying to contrast this with actors-all-the-way-down, where even an addition or increment operation would be an actor, and the NFS server would be built from such primitive actors, permitting pervasive asynchronicity and parallelism in the code from the very smallest fragment up.

I am not at any time trying to suggest that the RPC calls must be made sequentially. The protocol implemented by the sequential 'C' code is clearly asynchronous.

Just because something is in C...

Just because something is in C does not make it sequential! Kernel code in particular cannot be considered sequential, even if most of it is in C, because a) there are context switches, b) another thread (or interrupt handler) can very well access the same shared state, and typically synchronization is used to pass things around, and c) there may be a number of cores executing in parallel.

An RPC protocol is completely synchronous (a reply to call N must follow call N!). All that state machine requirement comes in when you use datagrams (as opposed to TCP) and you have to worry about lost packets, out-of-order packets, a lost network connection, the peer being lost or rebooted, etc. If you use TCP, the RPC state machine gets simpler, but you have just replaced the original RPC FSM with a more complex TCP FSM (for other obvious benefits)!

[edited to remove some unnecessary text]

logically sequential

You surprised me (a lot) by your use of the adjective sequential with an implicit definition as a synonym for physically uninterruptible, as if in a critical section under a mutex. Some folks expect logical, not physical, as implied by the definition Google offers now when searching on sequential: "forming or following in a logical order or sequence." As in: C code looks like it is sequential to a thread, modulo loops, branches, and jumps. That physical instruction order can change due to context switching does not change apparent logical order (despite it being really important to grasping critical sections).

Just because something is in C does not make it sequential! Kernel code in particular can not be considered sequential even if most of it is in C because...

Let me coin some terminology if necessary, or bend existing terms slightly. C code looks like a block-tree, where block here is short for basic block, meaning a logical sequence of instructions. C functions, loops, if-statements, etc. make a tree (actually a graph) of code blocks, where execution logically walks that tree sequentially except for branches and jumps between nodes corresponding to blocks.

(Here's the new terminology paragraph, if it becomes necessary to argue about what the last paragraph means. Say BBT is an acronym for bas-blo-tr, pronounced base-blotter, and is short for basic-block-tree. Or granting basic as obvious, just say blotter for blo-tr, which is short for block-tree.)

I think Keean is using sequential to mean "looks synchronous", meaning each continuation joins a subroutine return, so caller and callee do not run in parallel, pending async return of a callee. I have more to say about async organization, but it's not worth saying until we agree on terms to describe things.

(Edit: yes, path cycles mean code is a graph, not a tree, but terms sound better with tree in them, while graph is clunky.)

An OS kernel is a collection of threads

An OS kernel is a collection of threads and interrupt handlers. These act on shared state and mediate access using synchronization. Even if you consider a subsystem such as storage or networking, there may be a number of threads active and/or cooperating to get some job done. Something like NFS will make use of a number of these threads for any given operation. Given this, a statement such as "The (NFS) code running on each node is sequential 'C' code." simply doesn't make sense. Yes, if you narrow the scope to a single routine that, for example, looks at a received packet, it is sequential C code, but that does not capture the whole story. For that you have to wade through a lot of code, and a lot of it running concurrently.

All nitpicking aside, is it better/easier to program using a collection of threads than in a language based on the Actor model? Unlike a lot of you, I haven't made up my mind. I think one should be able to build useful abstractions in an actor language, so it is not as if you are going to have to program in an actor assembly language! What I do know is that the shared-state assumption doesn't hold in general for distributed systems. For such systems the building blocks are messages and programming logic at each node. With faster and faster processor clocks, even a single "CPU" with multiple cores can take on some of the same characteristics. Will it be Actors or some such newfangled model all the way down, or will we use different models for local and remote communication?

Being more a practitioner than a theorist I tend to think we will first come up with an implementation that works extremely well and only later will we be able to fit a theory around it (which may likely have been already invented as a branch of some abstract mathematics!)

agreed in sentiment

I agree with you almost completely, in whole and in part, except I'm not particularly sanguine about an actor language bringing abstractions to the table we don't already have. Better formulations are good though.

I want to write in a high level language, then see what "actually happens" in C as an intermediary language, because that's what I'll have to debug when the problem is figuring out how a particular piece of memory got a surprising bit value inside. (No, there's no amount of math that will stop surprising bit patterns from showing up in some memory locations, because ideal limits are not reached.)

But I'm perfectly agreeable with wanting to see a plan as a high level spec, as opposed to a bit-slinging scrum of crazy imperative trench warfare. The story about what happens should be about messages and handling them, so the actor mythos feels like the right one generally, as a protocol overview.

I get a sense from your prose that curse-of-knowledge is a problem, that you feel no local perspective is true, that one must always say what is known globally about all the software taken together, with all exceptions and provisos noted. That makes it hard to say things briefly, if everything terse is patently false.

RPC protocol is not synchronous

An RPC protocol is completely synchronous (a reply to call N must follow call N!)

RFC 1057:

NB: The xid field is only used for clients matching reply messages with call messages or for servers detecting retransmissions; the service side cannot treat this id as any type of sequence number.

So for example, NFS clients may issue concurrent requests and the order in which replies are received is not defined.

I was using "protocol" in a very narrow sense

I was using "protocol" in a very narrow sense but yes, you are right.

Some common ground.

Clearly we agree the protocol is neither sequential nor synchronous.

We need to get to the bottom of whether the 'C' language, and the underlying model of the microprocessor it is based on, are sequential. There is a third possibility here, which is that I am using the wrong word to describe the property of 'C' that I see as being different from ActorScript.

Obviously we must agree that 'C' and ActorScript are fundamentally different, otherwise ActorScript is not novel and is just another 'C'-like language.

So what is the fundamental difference between 'C' and ActorScript?

no we don't (re clearly we agree...)

Clearly we agree the protocol is neither sequential nor synchronous.

As far as I'm concerned you and I are not engaged in a conversation. We consequently do not agree about anything. Personally, I think you are writing distracting gibberish: that there is nothing there to agree with or disagree with.

We need get to the bottom of whether

"We need get to"? Perhaps you have a frog in your pocket.

So what is the fundamental difference between 'C' and ActorScript?

Wow. That's where you are moving the topic? Lovely.

Not Synchronous Dependent on Who is Talking?

I just reiterated something you posted to someone else: in your post entitled "RPC protocol is not synchronous", you state that it is not synchronous. When I state "Clearly we agree the protocol is neither sequential nor synchronous", you then disagree? So whether it is synchronous depends on whether you say it or I say it? What are you trying to do here?

My question regarding the difference between 'C' and ActorScript was an attempt to get you to define your position, rather than just attacking mine, so that we might attempt to find some common ground. This is a completely reasonable thing to do when two people are failing to understand each other. The only reason I can think of why you fail to answer this question is because you can see my point, but don't want to admit it.

Further, how can I be posting distracting gibberish in my own topic? I started this topic to discuss the issues I was having. Therefore I completely understand the issues I wanted to discuss. What are you trying to achieve with your posts?

What problem context defines the scenario?

After the sorta mean reply from Lord (which you are permitted to ignore), I'll play along slightly, since you started the topic and might have something you want to work out. I think you should have a specific issue in mind when looking for a fundamental difference with respect to it.

So what is the fundamental difference between 'C' and ActorScript?

Comparison needs a context for motivation. Why compare them? (The further apart two things are, the weirder it is to compare them, because it can feel like unfocused free association without purpose. Thus purpose making a question relevant ought to be voiced. I didn't see it above, and might sound like gibberish to some.)

On a continuum, C seems closer to assembler than to ActorScript, so it has a quality of not following very many rules, other than those imposed by convention. (That folks don't write async fiber-oriented code is just convention.) In contrast, ActorScript has a lot of high-level organization and agenda, one expects for a good purpose.

Problem Context

When designing a network protocol, you have to design asynchronously, and you might end up with something that looks like an actor.

The question is about how that actor is implemented internally. We could implement the actor in a language like 'C', or we could implement it by further breaking down into other actors, until we get down to primitive actors, like registers, adders, etc.

What fundamental differences would there be between the 'C' code and the ActorScript code. Clearly they both have primitive arithmetic and logic operations, so the difference must be in how they are put together. To me this difference would primarily be expressed in the control-flow, so perhaps I can be more specific, and say:

What is the difference in control-flow between 'C' and ActorScript?

lots of historical baggage with C

The question is about how that actor is implemented internally. We could implement the actor in a language like 'C', or we could implement it by further breaking down into other actors, until we get down to primitive actors, like registers, adders, etc.

Ah, path-of-least-resistance implementation strategies. That varies depending on who does it. Most junior C devs want to go low-level too fast, so you just want to slap them. Here's a bad Star Trek metaphor. You could have code with roles, where the captain tells the navigator to lay in a course to a new star system (so you can violate the prime directive), then say "make it so" after a new course is confirmed. Or you can do it the typical junior dev way, by opening the navigation panel to solder in new wires, because making a rational user interface is too fancy (and role separation is for wimps). This behavior is encouraged when the "nav panel" is just a big pile of wires from the last junior dev.

Eventually actors are going to run native processor code; it's just a matter of how quickly you go there, and whether you want internal detail to continue looking organized for as long as possible, preferably using the same metaphors. Let me compare this to Smalltalk.

Early Smalltalk (per green and blue books) had a set of primitive operations you could see invoked in code as <primitive: 3>, for example, to fire the magic operation numbered 3 in the virtual machine. That is, the primitives were enumerated and described, so the set of them could be grasped and studied as a whole. You could instead just let a compiler recognize some things as primitive, and just do it. Maybe you would use syntax to hint where you thought that should occur.

Most syntax in Smalltalk looks like message sends (actually synchronous method lookup via selector key derived from method name), including if-statements, which look like message sends to blocks. Often people report Smalltalk does everything with messages, including control flow, but that's not true. Instead a Smalltalk compiler merely recognizes a statically typed message send to a block, and inlines it, so it's not actually a message send. You end up with test-and-branch instructions just like elsewhere. So the purpose of blocks was just to express delayed evaluation, semantically, in the sense that block content does not evaluate unless reached.

It would depend on your design approach, whether to go low-level early or incorporate as many semantically high level actors in the internal decomposition of actor code. But at least the syntax and style would encourage you to stay high level longer. If you wanted that true in C as well, you would want a high level library in C that followed high level rules, and use that. If your library was foo, you might call that C-under-foo (and perhaps write foo/C) to mean C following some rules, instead of being a free-for-all.

Edit: I replaced a slightly vulgar word with wimp because it has probably gotten far more vulgar in the last thirty years, now that every word with at least one offensive sense is judged to always mean that one offensive sense.

Maybe I'm wrong

Maybe I'm wrong, not about 'C' but about ActorScript, and it is more sequential than I thought. If you could implement an NFS server as a single actor, then I have been thinking about this the wrong way.

However my understanding was that with ActorScript you have to compose expressions of other actors (much like in Smalltalk everything is a message) and that means every operation has asynchronous semantics which would make debugging and proving correctness hard.

Hence my use of 'C' to illustrate a language where individual operations like '+' do not have asynchronous semantics. To me it seems easier to write a correct sort algorithm in 'C' than in ActorScript, because you don't have to deal with the asynchronicity.

c vs. as

I am not sure how much you do/not know. Consider each actor as a little VM that runs a single C program. That program has an input message queue and an output message queue. It has a main event loop, where it gets to read one message, do stuff, and emit multiple messages. Then the loop starts again. Make several of these C-VMs, and that's a network of actors. I mean I assume I am right in saying it this way, cf. E, Erlang, etc. So yes, each actor internally is a single-threaded sequential event loop, nothing special in some sense at all. Since it can't read any other C-VM's data other than by message passing, it is also a little object that keeps its own data inside.
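
A minimal sketch of that picture in C might look like this (struct mailbox, mailbox_pop and mailbox_push are made-up names standing in for whatever the runtime provides; they are not a real API):

// Sketch only: the mailbox type and helpers are hypothetical.
enum tag { MSG_INC, MSG_GET, MSG_VALUE };
struct msg { enum tag tag; int payload; };
struct mailbox;                                // opaque message queue
struct msg mailbox_pop(struct mailbox *q);     // blocks until one message arrives
void mailbox_push(struct mailbox *q, struct msg m);

struct counter { int count; };                 // private state, never shared

void counter_run(struct counter *self, struct mailbox *in, struct mailbox *out) {
    for (;;) {                                 // the actor's event loop
        struct msg m = mailbox_pop(in);        // read exactly one message
        if (m.tag == MSG_INC)
            self->count += m.payload;          // only our own data is touched
        else if (m.tag == MSG_GET)
            mailbox_push(out, (struct msg){ MSG_VALUE, self->count });
    }                                          // then loop for the next message
}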

Actors all the way down

I have written Erlang, and RPC services in 'C', so I think I understand the actor model a little bit.

My comments are around 'actors all the way down' where each actor is not a little VM that runs a 'C' program but is in turn composed of smaller actors, all the way down until you get to primitive actors, like an actor for addition at the bottom.

My understanding is ActorScript is actors all the way down, and therefore a different model from 'C' and Erlang.

C could be Actors all the way down

C could be Actors all the way down too, with Actors for C addition.

Hewitt said elsewhere in this thread, "A goal is to model all digital computation using Actors. Consequently, it must be possible to model C, Erlang, etc."

What is it like to program?

Most statements in 'C' would not use the full power of actors. For example consider:

int fact(int n) {
    int x = 1;
    for(int y = 2; y <= n; ++y) {  // include n itself in the product
        x *= y;
    }
    return x;
}

There is no message passing, no message order issues, the algorithm is sequential and easy to read, understand etc.

There are two different questions here:

1) Do actors provide a good internal language for compilers that are compiling a sequential language like 'C'?

2) Do we want to write everything as actors (like everything is an object/message in Smalltalk)?

In this branch of the discussion I was contrasting what 'C' is like to program in compared to actors-all-the-way-down. Does actually writing everything as an actor make it harder to write simple sequential algorithms correctly (due to message ordering etc)? If we were writing as actors we might need to instantiate the actors we want, and then plumb them together into a 'circuit'.

There is a difference between programming with actors, and the compiler using actors to model computation.
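
As a rough sketch of what the "plumb them together" style above might feel like, here is the factorial from earlier written against a hypothetical multiplier actor, still in C (send_to and recv_from are invented names; I'm not guessing at ActorScript syntax). Each '*' becomes a request/reply exchange whose ordering now has to be managed explicitly:

// Sketch only: send_to and recv_from are invented helpers for talking
// to a hypothetical multiplier actor.
struct mailbox;                                // opaque channel to the multiplier
void send_to(struct mailbox *q, int a, int b); // ask the actor for a*b
int  recv_from(struct mailbox *q);             // wait for its reply

int fact_with_actors(int n, struct mailbox *multiplier) {
    int x = 1;
    for (int y = 2; y <= n; ++y) {
        send_to(multiplier, x, y);             // request: multiply x by y
        x = recv_from(multiplier);             // must wait here, or replies to
    }                                          // later requests could interleave
    return x;
}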

word

Sorry above, I wasn't trying to be rude, I was just lost as to what you were asking any more. :-) I too find that the difference between how we do things in Erlang vs. ActorScript-actors-all-the-way-down is interesting, and unclear. Thanks for re-stating your question again.

Both sequential, but one finite and one infinite

Obviously we must agree that 'C' and ActorScript are fundamentally different, otherwise ActorScript is not novel and is just another 'C'-like language.

I don't think they're very different in the sense of "sequential." I think ActorScript has sequentiality in how it processes f.[g.[x]], not to mention operators like "Prep" and "afterward." Perhaps code written in ActorScript is more likely to be non-sequential by accident, as opposed to code written in C; I can't say that for sure though.

So what is the fundamental difference between 'C' and ActorScript?

IMO, C is primarily interesting for programming unverified systems under finite memory. IMO, ActorScript is primarily interesting for programming verified systems when the formal requirements must be scalable to actually infinite numbers of composed subsystems.[1]

Whether either of these is a practical concern depends on your point of view, I think. I'm just representing my own point of view, and I'm interested in both, though I hope C and ActorScript aren't the last word on their respective topics. :)

---

[1] If every subsystem took turns (sequentiality), how would we specify a guarantee to process every single one of them sooner or later? Unbounded nondeterminism. If every subsystem defined its own types, how would we let them coordinate on these types? Infinite syntax.

Microprocessors and sequential execution

Do you agree that a microprocessor executes instructions sequentially, i.e. one after another, according to the address in the program counter (branches just create a dynamic sequence)?

Let's just state that I agree with Rys: sequential does not mean uninterruptible, and it does not mean single-threaded. Sequential means the processor runs one instruction after the other. One instruction finishes before the next starts. Given:

x = 0
x = x + 1
print x

There is no ambiguity about the value printed; it is always "1" because the instructions are in sequence, and they remain a sequence whether they are in an interrupt handler, an RPC handler, or any other thread, kernel or otherwise.

Consider what happens when

Consider what happens when there is an interrupt just before the print x that changes x. Or consider what happens when another core executes identical code on the same x. The final value of x depends on the exact interleaving of accesses from each core.

Nothing happens to x

If an interrupt occurs, nothing can happen to 'x' unless it is shared with the service routine. Here 'x' is not shared, so nothing outside the lexical scope you can see can change 'x'. The same applies to cores. This is because this is a sequential language; the model does not include asynchronous events like interrupts or cores sharing data. You need to do something outside the language to work around this, like using mutexes or atomic sections. Neither of these is part of the 'C' standard, so you need to rely on inline assembly or special primitive functions to define the operations necessary for mutexes or atomic sections.

If you were to share 'x' with an interrupt, then you would need to put accesses to it into an atomic block. Something like:

int x = 0;
define_interrupt(isr, &x);   // set interrupt handler and pass a reference to x

// ... do other stuff ...

interrupt_disable();         // atomic section: the handler cannot run until re-enabled
x = x + 1;
printf("%d\n", x);
interrupt_enable();

You would not want to use mutexes, in case the interrupt happens while the mutex is locked, which would result in a deadlock in the interrupt service routine and would probably lock up the whole machine. Using atomic sections is a quick-and-dirty way to deal with interrupts, much like the old Amiga operating system did.

Probably the safest way to deal with interrupts is to generate a new event structure in the interrupt and queue it as a message. Then you can deal with the messages synchronously by having an event loop. Just allowing the interrupt to asynchronously modify variables is not a good way to do things, because it is very hard to debug.
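
A sketch of that "interrupts become messages" pattern, with the usual caveat that the ring buffer below is only a toy (one ISR producer, one main-loop consumer, no memory barriers); a real system would use the platform's interrupt-safe primitives:

// Sketch only: handle_event and the queueing discipline are hypothetical.
#define QUEUE_SIZE 64                          // power of two so % wraps cleanly

struct event { int kind; int data; };

static struct event queue[QUEUE_SIZE];
static volatile unsigned head;                 // written by the ISR
static volatile unsigned tail;                 // written by the main loop

void handle_event(struct event e);             // hypothetical handler

void isr(int data) {                           // runs in interrupt context
    queue[head % QUEUE_SIZE] = (struct event){ 1, data };  // 1 = "interrupt" tag
    head++;                                    // publish after the entry is written
}

void event_loop(void) {                        // runs in normal context
    for (;;) {
        while (tail != head) {                 // drain pending events in order
            struct event e = queue[tail % QUEUE_SIZE];
            tail++;
            handle_event(e);                   // deal with the message synchronously
        }
        // ... do other work, or sleep until the next interrupt ...
    }
}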

what about at small scales?

I'm wondering if actors+locally-ordered-messages are a replacement for mutexes on small scale systems.

I would have to work out how various patterns of mutex use map into messages and actors.

Actors as mutexes

James Iry has a nice discussion of the relationship between Actors and locking (including an Erlang-based implementation of a mutex) on his blog.

In the interests of completeness, it's probably worth pointing out that these issues are not restricted to the Actor model. Pretty much all of his points also apply to process calculus models. In fact, some approaches to model-checking shared variable programs involve building process-calculus models in which shared variables are represented as processes that respond to set/get messages, and mutexes are represented as processes that respond to lock/unlock messages (see here for an example).

actors prevent races not deadlocks?

But I'm wondering if there's a programming style with them that prevents deadlocks too.

One where you never wait for a resource, you always send a message and it's always received and acted on eventually.

I.e. instead of "request resource", "use resource", "release resource",
you only ever have "use resource" in a single message.

[edit] the cool thing is that actors lock themselves (one message at a time), the question is how do we avoid needing more than one lock at a time?

Actors don't prevent races

Actors don't prevent races either. They do prevent races on shared variables (because there are no shared variables). But it's entirely possible to have a race on processing of messages in a multi-message transaction. In a way, that's the whole point of Hewitt's "unbounded integer" example: the number you get back is nondeterministic precisely because there is a race between the "go" and "stop" messages.
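
For anyone who hasn't seen it, the example can be paraphrased very loosely (in the same event-loop C style as the sketches above, not in ActorScript) as a counter that keeps sending itself "tick" messages until a "stop" message happens to win the race:

// Loose paraphrase only: the mailbox helpers are the same invented names
// as in the earlier sketches, not a real API.
enum cmsg { GO, TICK, STOP, RESULT };
struct msg { enum cmsg tag; int value; };
struct mailbox;
struct msg mailbox_pop(struct mailbox *q);
void mailbox_push(struct mailbox *q, struct msg m);

void counter(struct mailbox *self, struct mailbox *caller) {
    int n = 0;
    for (;;) {
        struct msg m = mailbox_pop(self);
        if (m.tag == GO || m.tag == TICK) {
            n++;                                             // count one more step
            mailbox_push(self, (struct msg){ TICK, 0 });     // and keep going
        } else if (m.tag == STOP) {                          // races with queued TICKs
            mailbox_push(caller, (struct msg){ RESULT, n }); // n is unbounded but finite
            return;
        }
    }
}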

That's preventing races in a practical sense

since the unit of processing represented by a message can be engineered to keep the processing sane and consistent, and free of the combinatorial analysis of low-level interleavings that shared variable access requires.

If they replace mutexes, it's important that the underlying actor system can be designed in a way with much less overhead than mutexes - at least in the case where communication is within one machine.

Sure

Sure. As I said, you avoid races on shared variables. But you still need to think about what you're doing to avoid races altogether. You don't just get "race-freedom" for free.

Combined messages

One incomplete-sounding answer to my question:
How do you use actors to prevent deadlocks? I imagine that one ends up with a style similar to functional programming:
one appends more and more data to the messages passed back and forth.

I.e. the context, the state, is in the messages, not the actors.

In my mind it sounds like the inverse of object oriented programming, instead of passing messages to objects, you pass collections of (unique) objects and commands between actors who modify them and pass them on without accumulating so much state themselves.

This suggests that fusing messages is useful.

Perhaps I'm imagining turning messages into something like SQL stored routines - send a whole routine rather than something small.

temporal logics etc

Are there logic-programming languages that use temporal logic to control things?

Or did you mean using it as a pencil and paper method for modeling?

Or perhaps you can put temporal or modal logic into frp?

temporal logic

Look into "Dedalus: Datalog in Time and Space", a paper which documents one such logic language and at least mentions some similar historical languages used in real control systems (e.g. for fly-by-wire). Dedalus's spiritual successor, Bloom, I already mentioned above. Synchronous reactive programming (e.g. Esterel, Lustre) was another early use of temporal logic, preceding FRP by over a decade, albeit constrained in a similar manner to FRP. A bunch of music synthesis languages (pure data, max/msp, ChucK, etc.) also use time-based control for concurrent processes.

Modal logic for spatial partitioning is a more recent study. I've read a couple theses and slide-decks on the subject, but haven't seen it used much in practice yet.

Temporal logic?

Synchronous reactive programming (e.g. Esterel, Lustre) was another early use of temporal logic, preceding FRP by over a decade, albeit constrained in a similar manner to FRP.

Small nitpick: I'm not sure I understand the precise meaning of "temporal logic" here. To me (and Josh, apparently) the term generally refers to modal logics in the style of LTL or CTL, which were explicitly shunned by the founders of synchronous reactive languages. Programs in these are written in imperative (Esterel) or functional (Lustre) style, with some twists due to the presence of logical time. Their formal verification tools do not rely on temporal logic either.

On the other hand, there is a recent line of work on LTL-inspired type systems for functional reactive programming, see LTL types FRP.

re: temporal logic

My meaning by 'uses temporal logic' is more informal, not specific to any particular temporal logic: if your language (and especially the expression of state change and concurrent behavior) is explicitly based around a logical model of time, you've at least the essence and inspirations of a temporal logic.

I'm sure you can formalize Lustre and Esterel without temporal logic. But IIRC, when/where I read about them (ten years ago, so my memory is unreliable at this point), they were presented in terms of temporal logic.

A lot here to look at and learn

First I was surprised to see "blackboard system" since that is usually mentioned as an AI technique not a control or dataflow technique.

You may have some insight that only you could provide examples for.

Also I hadn't heard of CRDTs, Bloom and CALM or lightweight time warp.

Indeterminacy is inherent to massive concurrency

Indeterminacy is inherent to massive concurrency.

Edit: Bringing indeterminacy back on the table as a point of focus.

Asynchronous is Too Hard

But is it too hard for humans to write correctly? Asynchronous circuits have indeterminate arrival order of messages, and that has caused them to lose out to synchronous circuits. Even synchronous circuits are hard to design correctly. The biggest advance in VLSI design, and in making it more accessible, has been languages like VHDL that allow sequential descriptions of logic to be compiled into parallel implementations. This is particularly common with FPGAs.

So the size of circuit it is possible to design increases as you go along this line:

asynchronous layouts < synchronous layouts < VHDL

I think it's easier to describe an algorithm as a sequence of steps. What we need are compilers that can parallelise a sequential algorithm.

Indeterminacy: exponentially faster than parallel lambda calc

Indeterminate computation can be exponentially faster than the parallel lambda calculus.

Ease of development more important

I would argue that ease of development is more important than performance. The desktop computer is already fast enough to run your word-processor or spreadsheet. The difficulty of programming GPUs is more of a problem than their lack of performance.

Ease of development is about having a model for algorithms that humans find easy to understand, and about how easy it is to get things correct relying on intuition.

actor-like semantics seem possible in older languages

I almost posted a followup to your last exchange with Thomas Lord, about C code in an NFS server being sequential (locally), but decided to go back to your original post, which I quote here. Since I'm mostly going to agree with your tone, there won't be much to learn from my remarks. (I find surprise a lot more educational.)

Note I'm not talking about Hewitt's flavor of actors per se. Rather, I mean anything that sends async messages with similar style of organization, including Erlang processes. So what actors are defined to do or not do won't be relevant to my remarks, since definitions are subject to change.

Actors seem to allow messages to be received in a different order from which they were sent.

Lack of message order seems a good idea, since intermediaries along the comm path might (be free to) make some messages slower than others (in the extreme case, pausing in a debugger for human latencies, for example). Constraints like "must be ordered" get more expensive as things scale and get complex, until eventually a difference in quantity becomes a difference in quality.

If message order between a sender and a receiver is needed, it's fairly easy (but perhaps tedious) to use sequence numbers in a protocol to make order visible so out of order messages are buffered. Doing this in a generic way is likely not hard, so it need not be done over and over for different protocols. Something declarative about sequence numbers and window sizes (to ensure bounded buffers) might inform implementation so detail is not foisted upon every developer.
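
As a rough illustration (not any particular protocol), the receive side only needs a bounded window of buffered packets to hand messages up in sender order; deliver and the drop-if-too-far-ahead policy below are made up for the sketch:

// Sketch of re-establishing sender order at the receiver using sequence numbers.
#define WINDOW 16

struct packet { unsigned seq; int payload; };

static struct packet pending[WINDOW];
static int           present[WINDOW];
static unsigned      next_expected;

void deliver(int payload);                     // hypothetical upper layer

void on_receive(struct packet p) {
    if (p.seq < next_expected)                 // duplicate or stale: drop it
        return;
    if (p.seq >= next_expected + WINDOW)       // too far ahead: drop, or tell
        return;                                // the sender to back off
    pending[p.seq % WINDOW] = p;               // park it in the window
    present[p.seq % WINDOW] = 1;
    while (present[next_expected % WINDOW]) {  // hand up in sender order
        deliver(pending[next_expected % WINDOW].payload);
        present[next_expected % WINDOW] = 0;
        next_expected++;
    }
}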

The ideal solution is to take sequential code like 'C' and compile it to a parallel implementation.

That matches what I talked about a lot a while back, but with a step transpiling to continuation passing style (CPS) while still in C syntax as a good idea before lower level compilation. The problem here is defining what happens when code parks while waiting on replies. Without an explicit fiber model that represents parked continuations, terminology used to describe system behavior would get a little crazy. (Like, when beginning programmers talk about thread semantics without reference to threads or how they behave, yielding cargo cult mysticism and worse.)
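
A toy of what "CPS while still in C syntax" could look like, with the caveat that async_read and struct cont are invented here; the point is only that waiting on a reply becomes a parked continuation rather than a blocked thread:

// Toy continuation-passing shape in plain C: a call takes a callback plus an
// environment and "parks" until the reply arrives.  async_read is hypothetical.
struct cont { void (*resume)(void *env, int value); void *env; };

void async_read(int fd, struct cont k);        // hypothetical: invokes k later

struct sum_env { int acc; int fd; };

void step(void *env, int value) {              // the parked continuation
    struct sum_env *s = env;
    s->acc += value;
    if (value != 0)                            // keep reading until a zero arrives
        async_read(s->fd, (struct cont){ step, s });
    // else: s->acc holds the total; hand it to whatever comes next
}

void sum_from(int fd, struct sum_env *s) {
    s->acc = 0;
    s->fd = fd;
    async_read(fd, (struct cont){ step, s });  // returns at once; the work
}                                              // continues when data arrives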

Are actors a good model for general computation, or should they only be used when necessary (and when might it be necessary)?

I'd say yes, but only when necessary. When local code looks (and is) sequential, this is clear and easy to understand. But as soon as local code interacts with something that might occur remotely, concurrently, and asynchronously, there's an awkward boundary. Given a fluid transition to actor style semantics there, a dev can organize code so it makes sense depending on roles needed.

Goal: model *all* digital computation using Actors

A goal is to model all digital computation using Actors. Consequently, it must be possible to model C, Erlang, etc.

the reverse works for me

I get that it's heretical, but I want to model all digital computation using C (and other low level languages), so it must be possible to model Actors, etc.

Modeling all digital computation using actors.

There are two questions here that I want to treat separately; could you reply to each separately, to start a thread about each:

1. The word "model" seems important. I agree that we can model 'C' with actors. Is this a good model? Is compilation and optimisation tractable? Should the core of my compiler be using primitive actors instead of functions, and how would this implement parallelisable code that is still as fast as straightforward function calls? Naively, it seems that we need to know the number of CPUs, and that we might have an intermediate actor representation that we JIT-compile based on the number of cores and their relative performance.

2. Do we want to directly program everything in actors? As it is harder, it would seem preferable to write sequential code where possible, synchronous parallel where necessary, and asynchronous parallel only when really needed. That this gets compiled to an intermediate actor representation is the subject of question (1). Here I am asking about making programming as easy and programmer-friendly as possible.

ActorScript versus Actor Model

With respect to your questions:

1. The stated goal is to model all digital computation directly using Actors. Procedure calls are not general enough to model all digital hardware.

2. The stated goal is that ActorScript can directly specify the implementation of any Actor system, regardless of whether it is sequential, parallel, or concurrent.
