Unifying Actors and Objects?

I'm starting on my honors project where I'll be investigating concurrent programming languages. I've been reading up on concurrency models and Actors seem interesting (I've been reading Gul Agha's papers). Also reading on CSP, but that's a topic for another day. My question is thus: actors seem to share some of the features of OO languages (messaging, actors are independent entities) so has there been a move to unify actors and OO-languages? From what I've seen even the more modern OO-languages like Io and Scala have Actors separate from their core sequential object model.

For my thesis I'm planning a model with prototype-based objects, but each object is also an Actor (backed by some sort of lightweight threading model). There are some parts of the Actor model I don't quite fully understand (in particular, I have no idea how to let an object-actor set its behavior), but I'm working my way through the literature. I hope to keep the flexibility of modern OO-languages (Ruby, JS) but add the actor model on as cleanly and seamlessly as I can.

Comments and suggestions welcome.
Thanks.

PS. I saw this discussion on Reactive Objects and I'll be reading through the paper.


Some work

Actors came from objects (in a sense). Hewitt's original work on Actors was influenced by a talk that Alan Kay gave on his ideas for object-oriented programming. The concurrency inherent in Kay's early vision of OO went away as Smalltalk developed, but it stayed in the Actors model. That said, there are some languages that unify objects and Actors. For example, you might look at Reia, or perhaps Eiffel (granted, its concurrency model is more like CSP than actors). On the theoretical side, Aki Yonezawa, among others, has been doing work on "Concurrent Objects" since the 70s. The tricky part in unifying objects and actors lies in reconciling inheritance with concurrent behavior - see here for example. Of course, this may be less of a problem in a prototype-based language.

Active Objects

I believe there is some overlap between actors and active objects, but I don't really know what I'm talking about.

Sounds (and looks) to me like nonsense

What the authors of the so-called Active Objects pattern are describing is a set of techniques for avoiding coupling in object-oriented programming languages, while providing features the language itself does not.

For example, one reason why an "active object" may control a set of objects is to ensure a bunch of interdependent entities don't deadlock one another when any arbitrary one chooses to block while they all share the same thread. So one solution is to make sure they don't share the same thread. You can do this in other ways, too; e.g., AspectJ in Action mentions using aspects to select which thread to run an object on, so that Swing background worker threads are used transparently. But this can be unsafe and depends on the order of aspect composition, i.e., the isolation provided by this cheap trick is strictly you-get-what-you-pay-for. It is much better to design an actor-based language from the ground up to avoid such nonsense.
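The thread-per-object idea can be sketched roughly like this (a toy Python sketch with invented names, not any particular library's API): each active object owns a mailbox and a worker thread, so one object blocking never stalls another.

```python
import queue
import threading

class ActiveObject:
    """Toy sketch: each instance owns a mailbox and a worker thread,
    so a blocking handler in one object never stalls another object."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            fn, args = self._mailbox.get()
            if fn is None:              # shutdown sentinel
                break
            fn(*args)                   # runs on this object's own thread

    def send(self, fn, *args):
        """Asynchronous invocation: enqueue and return immediately."""
        self._mailbox.put((fn, args))

    def stop(self):
        """Queue a sentinel after all pending work, then join."""
        self._mailbox.put((None, ()))
        self._thread.join()

class Counter(ActiveObject):
    def __init__(self):
        super().__init__()
        self.value = 0

    def increment(self):                # public API: just a send
        self.send(self._do_increment)

    def _do_increment(self):            # runs only on the worker thread
        self.value += 1

c = Counter()
for _ in range(10):
    c.increment()
c.stop()    # all queued increments have run once this returns
```

Since the mailbox is FIFO and the sentinel is enqueued last, `stop()` returns only after every earlier message has been processed.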

Actors and Active Objects

It might be worth pointing out that Akka is an actor system (and much, much more) that presents an actor API to Scala, but an active objects API to Java.

Active objects are just

Active objects are just objects that manage their own state machine lifecycles; the design patterns article linked off Wikipedia really fusses over implementation details (you can immediately conclude this by seeing that they are using Booch Notation to illustrate the pattern, which is a red flag that they are writing procedural code specs with diagrams, rather than building models with diagrams).

The articles above use various techniques like stereotypes to indicate there is a hardwired timing between the two objects. In other words, Stuff Can Go Horribly Wrong if two objects' flows of control intersect at certain points in the event stream; yet, the fact that two objects collaborate should be decoupled from how they interact. If they need an interaction specification, it should not be at the level of You Do This Then I Do That.

As I've said, I think you can gain more insight into language design by not reading this stuff (I suggested this by calling it nonsense in an effort to dissuade). It simply clouds the issue by encouraging constructs that support poor engineering methodologies: You Do This Then I Do That specification of a system is hard to wrap one's mind around, and simply doesn't scale, nor is it likely to draw the engineer's attention towards division of responsibility in the system being designed.

Scope of Critique

This is all well and good, but is it limited to the Wikipedia article or does it also extend to any, or all, specific implementations calling themselves "active objects" that you're aware of? In particular, your response was to a post in which I linked to Akka. Is Akka subject to your critique after a look at its code, subject to your critique by analogy to Wikipedia, or not subject to your critique because, presumably, it uses the term "Active Objects" to refer to an architecture you're happier with?

I don't know

I only heard about Akka when I saw Viktor Klang's implementation of the Dining Philosophers problem in response to Dale Schumacher's blog entry from a few weeks ago.

I asked Jonas Bonér on Twitter why he renamed ActiveObject to TypedActor, and he responded that people found the ActiveObject name confusing.

Rather than worry about the X ways that Y languages do Z, let me just say that I think the benefits of modularity need attention, especially once you start developing systems with irregular structures that can't be trivially parallelized; we already have optimizing compilers that know how to parallelize code with regular structures, and it is the complicated data structures that kick compilers in the ass by stressing their analysis abilities.

For example, by making the only dependency of a state machine the event stream, rather than other objects' states (which would turn the event stream into a direct function of other objects' states), the system is a lot easier to understand because it is more deterministic and does not require the total history of I/O to resolve why something happened. Techniques like Functional Reactive Programming use similar flow-based computing tricks. For this reason, I don't like actor languages with nested receive/react: such sequential coupling indicates a modularity problem to me. I've never seen good methodological reasons for nested receive/react; the only time I see it is in the context of performance, and even then I can't figure out why.

But the other practical benefit, besides determinism, is performance. There were research ideas for this in the late '80s and early '90s -- like Concurrent Aggregates (CA) -- which looked at the idea of multi-access data abstraction, essentially allowing multiple things to go on at once behind the actor abstraction in a very easy to understand manner. Nested receive/react isn't so important here, because the very idea of CA is to get away from serialized actor abstractions by using powerful ways to define orthogonal regions. (There were a handful of actor languages in the 80s that had the built-in engineering defect of serialized actor primitives, and most people conflate a language implementation they've seen with the raw Actor Model. Case in point: Erlang.) As far as I can see, Akka supports a variation on multi-access data abstraction: Software Transactional Memory ("readers don't block writers, writers don't block readers").

Nested Receive/React?

What do you mean by 'nested receive/react'?

Google Search :)

The first page of Google search results for 'nested receive/react' shows many links to Haller's work on Scala's built-in actors.

You do not need to support nested receive/react if you support partial specifications, and all guarded messages can be proven to at least try to fulfill the specification. See AI Memo 505, Pg 17, Section VI, Partial Specifications of a Hard Copy Server. In other words, rather than support partial functions, support partial descriptions. Haller uses the Scala type system in an interesting way to potentially achieve the same end goal, but I am not sure if it really intends the same end goal; I've not read Haller claim as much, and he only really refers to nested receive/react as a notational convenience.

Note that whenever Hewitt wants to serialize message sequences he creates serialized actors instead, and when a behavior is finished it migrates the actor synchronously to a new behavior, implicitly unlocking the actor. In this way Hewitt guarantees Run-to-Completion semantics (although he doesn't mention anything about RTC semantics in his papers).

Intuitively, this makes sense, because if you don't need a serialized actor, then the order in which the two messages are received is irrelevant; order does not matter. At best, nested receive/react is effectively a binary operator that gives you a way to internally re-arrange two messages. A system with first-class messages innately supports such message-reordering abstractions.

In addition to queued,

In addition to queued, asynchronous, non-blocking message sends to "actors", one might also support a synchronous, blocking message send to the same "actors" as a relatively "regular" object message send.

At first glance, I would imagine such a blocking message send would be queued as well, to maintain the actor/receiver's internal state and invariants, with the result returned to the blocked sender upon availability - somewhat in a "cooperative-multitasking" coroutine style supported by lightweight threads (which also support the asynchronous "actors"). The runtime overhead could be even lower if the language processor detects such synchronous "normal" message sends and compiles them to more efficient runtime behavior (code).
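The idea above can be sketched as follows (a hypothetical Python sketch; all names are invented): both the synchronous call and the asynchronous cast go through the same mailbox, so the receiver's invariants are preserved, and the synchronous sender simply blocks on a one-slot reply queue.

```python
import queue
import threading

class Actor:
    """Toy sketch: sync and async sends share one mailbox, so all
    handlers run serialized on the actor's own thread."""
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            method, args, reply = self._mailbox.get()
            result = method(*args)       # handlers run only on this thread
            if reply is not None:        # a synchronous caller is waiting
                reply.put(result)

    def cast(self, method, *args):
        """Asynchronous, queued, non-blocking send."""
        self._mailbox.put((method, args, None))

    def call(self, method, *args):
        """Synchronous send: queued like any other message; the
        sender blocks until the result comes back."""
        reply = queue.Queue(maxsize=1)
        self._mailbox.put((method, args, reply))
        return reply.get()

class Cell(Actor):
    def __init__(self):
        super().__init__()
        self._v = 0
    def _set(self, v):
        self._v = v
    def _get(self):
        return self._v

cell = Cell()
cell.cast(cell._set, 42)        # fire-and-forget
answer = cell.call(cell._get)   # queued after the cast, so it sees 42
```

Because the mailbox is FIFO, the synchronous `call` is ordered after the earlier `cast`, which is exactly the invariant-preserving behavior described above.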

S.

Humus

You may be interested in following Dale Schumacher's blog, about how actors are used to solve problems in his language, Humus.

E order

E language supports both asynchronous 'far sends' and synchronous 'near sends', with careful ordering semantics to avoid race-conditions involving fewer than three vats.

E - structured asynchronous programming, Erlang - goto

I would like to note that the E language's approach has much better usability than Erlang's because somewhat higher-level constructs are used. In E, asynchronous operations are easier to compose, since the language has a concept that represents the outcome of an asynchronous operation (the Promise), and it has relatively good support for it.

It somewhat puzzles me why language designers prefer the Erlang-like approach when designing new languages, rather than a more high-level one. Direct sending of a message to an actor is an asynchronous variant of goto. In contrast, E enables structured asynchronous programming. Goto is more flexible, but there was a reason why language designers abandoned it long ago, and the same reasons apply to direct message passing. The pain described in this presentation is just too similar to the pain of using gotos in sequential programming.
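The contrast can be illustrated with a toy promise (a Python sketch of the general idea, not E's actual API): the outcome of an asynchronous operation is a first-class value, so each step names the outcome of the previous one instead of an unstructured send-and-hope.

```python
class Promise:
    """Minimal sketch of a promise: a first-class placeholder for the
    outcome of an asynchronous operation, onto which further
    computation can be composed."""
    def __init__(self):
        self._value = None
        self._resolved = False
        self._callbacks = []

    def resolve(self, value):
        """Deliver the outcome; run any composed continuations."""
        self._value, self._resolved = value, True
        for cb in self._callbacks:
            cb(value)

    def then(self, fn):
        """Structured composition: returns a new Promise for fn's
        result, so chains of async steps read top-to-bottom."""
        out = Promise()
        def run(v):
            out.resolve(fn(v))
        if self._resolved:
            run(self._value)
        else:
            self._callbacks.append(run)
        return out

p = Promise()
doubled = p.then(lambda x: x * 2).then(lambda x: x + 1)
result = []
doubled.then(result.append)   # composed before the outcome exists
p.resolve(20)                 # now the whole chain fires
```

The chain `then(...).then(...)` is the "structured" part: control flow follows the data dependency, where a raw asynchronous send would scatter it across handlers.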

Also, the E vat maps to the traditional notion of an actor; objects in E represent a kind of sub-actor. Objects in the same vat can easily and relatively safely share mutable state (think of closures). Erlang-like actors are too coarse-grained and have problems with sharing mutable state.

I have created implementations of E ideas in Java, Groovy, and Scala (http://sourceforge.net/projects/asyncobjects). The Groovy and Scala versions look much more usable than the plain Java version.

Groovy and Scala versions are pre-alpha and available only in Git repository (git://asyncobjects.git.sourceforge.net/gitroot/asyncobjects/asyncgroovy and git://asyncobjects.git.sourceforge.net/gitroot/asyncobjects/asyncscala).

The biggest difficulty I have encountered is that language support is required to implement an asynchronous equivalent of tail recursion and to support causality tracing.

Join Calculus

Cω and other Join-calculus object models offer some interesting synchronization primitives for concurrent objects.
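A single join pattern (a "chord" in Polyphonic C# terms, roughly `Put(x) & Get() -> x`) can be sketched like this (a hypothetical Python sketch, not Cω's actual syntax): the body fires only once both channels have a pending message.

```python
import threading
from collections import deque

class JoinBuffer:
    """Toy sketch of one join pattern: an asynchronous put channel
    joined with a synchronous get channel. The join completes, and
    get returns, only when a put message is pending."""
    def __init__(self):
        self._lock = threading.Condition()
        self._puts = deque()

    def put(self, x):
        """Asynchronous channel: never blocks."""
        with self._lock:
            self._puts.append(x)
            self._lock.notify()

    def get(self):
        """Synchronous channel: blocks until the join can complete."""
        with self._lock:
            while not self._puts:   # join incomplete: wait for a put
                self._lock.wait()
            return self._puts.popleft()

buf = JoinBuffer()
buf.put("hello")
greeting = buf.get()   # join completes immediately: a put is pending
```

In a real join-calculus language the matching of pending messages against patterns is done by the compiler/runtime, not hand-coded with a condition variable as here; the sketch only shows the synchronization behavior.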

Join calculus resource

Is there a book or set of papers I can read to get a basic working knowledge of the Join Calculus? I'd like to know more about it (and the Pi Calculus too).

JoCaml and Polyphonic C#/C_omega

These two languages are a starting point. Recently Claudio Russo of Microsoft Research wrote a general .NET API for join-style concurrency, which became Concurrent Basic.

Historically, Microsoft Research has tried several actor-model- or join-calculus-inspired languages: Polyphonic C#, C_omega, and now Axum.

One of the keys to the join calculus is support for strong mobility. In other words, there is no assumption baked in about the topological structure of the network, and no two arbitrary edges of the network can be assumed to be statically connected. The upshot of this is that a system defined in terms of the join calculus will not allow you to shoot yourself in the foot by building a "static linker". (e.g., sometimes mobile nodes do not have the same transmission power, and sometimes some nodes in a network want to receive communications but remain radio-silent.)

For the Pi Calculus, you can read Milner's book as well as his more general follow-up title.

Besides Agha's papers, you can also purchase his Ph.D. thesis used (or, presumably, MIT has it for free on dspace somewhere). MIT recently reprinted this thesis, but it costs more to get the reprint than the original, with no real value added. Agha's thesis is sort of a refinement and continuation of Clinger's Ph.D. thesis under Hewitt.

Happy reading.

Tea Time

Tea Time, developed for Smalltalk (and since ported to Ruby and others), is a protocol for deterministic distributed replication of 'islands' of code (used in VPRI's Croquet project).

As a matter of course, one is able to schedule messages to be received after some soft-realtime delay (potentially 0 ms). This provides a queuing discipline, 'temporal tail recursion', and concurrency (without much parallelism, though you can have parallelism between islands).
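The scheduling discipline can be sketched as a virtual-time event queue (a toy Python sketch with invented names, not Croquet's API): messages carry a delivery time, and an island replays them in timestamp order, so replicas that see the same queue stay deterministic.

```python
import heapq

class Island:
    """Toy sketch of a TeaTime-style island: a deterministic
    virtual-time message queue."""
    def __init__(self):
        self._queue = []
        self._now = 0     # virtual time
        self._seq = 0     # tie-breaker makes the ordering total

    def send(self, delay, fn, *args):
        """Schedule fn to be delivered after a (possibly 0) delay."""
        heapq.heappush(self._queue, (self._now + delay, self._seq, fn, args))
        self._seq += 1

    def run(self):
        """Deliver all messages in timestamp order."""
        while self._queue:
            t, _, fn, args = heapq.heappop(self._queue)
            self._now = t
            fn(*args)

log = []
island = Island()

def tick(n):
    log.append((island._now, n))
    if n > 0:
        # 'temporal tail recursion': reschedule self after a delay
        island.send(10, tick, n - 1)

island.send(0, tick, 2)
island.run()
```

The `tick` handler re-sending to itself with a delay is the 'temporal tail recursion' the comment mentions: iteration happens through the message queue, not the call stack.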

This is a relatively simple means to unite synchronous objects and asynchronous actors.

What about the become construct?

I've been looking up some of the work previously referenced (especially Agha and Hewitt's theses) and one major hole that I can see in trying to unify actors and objects is the notion that Actors can specify their behavior for future messages (the become construct in Act3).

I'm trying to establish some sort of mapping from actors to objects (hopefully backed up by some denotational semantics) but I have no idea what become might look like in an OO system. My best guess so far is that it would be like an object being able to change what all references to itself pointed to. Alternatively, it could be done implicitly (if somewhat clumsily) by changing state within the object.

Neither Erlang nor the Scala actor library seems to support this concept.
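One way to picture become in an object setting (a toy Python sketch; all names are invented) is to make the behavior a first-class function that a handler can replace for future messages:

```python
class ActorObject:
    """Toy sketch: the object's behavior is a first-class function;
    a handler returning a new function is 'become' for all future
    messages."""
    def __init__(self, behavior):
        self._behavior = behavior

    def receive(self, msg):
        # a handler may return a replacement behavior (become),
        # or None to keep the current one
        self._behavior = self._behavior(self, msg) or self._behavior

def locked(actor, msg):
    if msg == "unlock":
        return unlocked       # become unlocked from the next message on

def unlocked(actor, msg):
    if msg == "lock":
        return locked

door = ActorObject(locked)
door.receive("unlock")        # door's behavior is now 'unlocked'
```

This is close in spirit to Act3's become: the identity (the reference clients hold) is stable, while the behavior bound to it changes.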

You seem to have described

You seem to have described the two main ways to implement become (and similar operations).

Early smalltalks indirected every access to an object through a global object table. The table allowed simple compacting GC, and a trivial implementation of become.

PCL (a fairly widespread, performance-oriented, implementation of CLOS) also uses indirection to implement CHANGE-CLASS (and class redefinition, etc.): each object contains a pointer to its class definition (which guides dispatch), and to a vector containing its data (slots).

I believe that some smalltalks take advantage of the fact that become is very rarely used and implement it by walking the heap to replace every reference.

Finally, as a compromise, you can use forwarding pointers to avoid indirection in the common case, without walking the heap to update pointers. If your GC already requires the runtime to support forwarding pointers, this might even come for free.
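The indirection approach can be sketched like this (a toy Python sketch with invented names): every client holds a handle (an object-table entry), so become is a single pointer swap instead of a heap walk.

```python
class Ref:
    """Toy sketch of Smalltalk-style indirection: clients hold a Ref,
    and every attribute access forwards through it, so become is one
    pointer swap visible to all holders at once."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # called only for names not defined on Ref itself
        return getattr(self._target, name)

    def become(self, new_target):
        self._target = new_target   # one swap; no heap walk needed

class Egg:
    def describe(self):
        return "egg"

class Larva:
    def describe(self):
        return "larva"

r = Ref(Egg())
stage_before = r.describe()
r.become(Larva())
stage_after = r.describe()   # every holder of r sees the change
```

The cost is the extra indirection on every access, which is exactly the trade-off the forwarding-pointer compromise above is designed to avoid in the common case.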

Actors changing behavior

Besides using explicit state variables, effectively an Erlang/Scala actor can "become" something else by having multiple receives. It's common to follow a pattern like the following (in Erlang-ish syntax):

state1() ->
   receive
       Message2 ->
           someAction(),
           state2();
       Message3 ->
           anotherAction(),
           state3()
   end.

state2() ->
   receive
       Message1 ->
           yetAnotherAction(),
           state1();
       Message3 ->
           moreAction(),
           state3()
   end.

...etc

That works well in Erlang because it supports full TCO. The JVM does not, so Scala's actor library achieves the equivalent by trampolining with exceptions, at some cost to performance. A library called Akka avoids the trampoline by going back to a "become" operation.
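The exception-based trampoline can be sketched roughly like this (a toy Python sketch of the stack-unwinding idea, not Scala's actual library code; all names are invented): instead of a tail call that would grow the stack, the handler throws, and the driver loop catches and installs the next behavior.

```python
class Become(Exception):
    """Raising Become unwinds the stack back to the driver loop, so a
    state switch costs no stack frames even without tail-call
    optimization."""
    def __init__(self, behavior):
        self.behavior = behavior

def run(behavior, messages):
    """Driver loop: dispatch each message to the current behavior,
    catching Become to install the next state."""
    for msg in messages:
        try:
            behavior(msg)
        except Become as b:
            behavior = b.behavior

trace = []

def state1(msg):
    trace.append(("s1", msg))
    if msg == "m2":
        raise Become(state2)   # instead of a tail call to state2()

def state2(msg):
    trace.append(("s2", msg))
    if msg == "m1":
        raise Become(state1)

run(state1, ["m2", "m1", "m2"])
```

No matter how many state switches occur, the stack depth stays constant: each transition is a throw back to the loop rather than a nested call.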

Smalltalk had "become." Here's a post where Gilad Bracha talks about it. A comment on that page led to Three Approaches to Object Evolution, by Cohen and Gil, that discusses ways to change and extend object behavior inside of a static typing discipline.

But is it necessary?

I'm wondering if a become construct is necessary in an OO-language. In Actor languages like Act3 it seems like changing behavior with become is the main way of representing and changing state. A new behavior essentially represents a new state for the Actor. However, if your state is represented by the contents of instance members (slots, etc.) which you can change directly, then the become construct is somewhat redundant.

Smalltalk's become: is of course extremely powerful and would probably make translating Actor code to OO code easier, but I can imagine that there might be security issues involved.

Become is how Hewitt models effects in an open system

Hewitt argues why this is necessary in AI Memo 727. Off the top of my head this is his earliest argument, and he reuses some justification provided by Clinger's thesis. He also explains in this memo the differences in serialized actor abstractions.

Smalltalk's become: doesn't really make any sense insofar as it is dependent upon some global knowledge, vis-à-vis the lookup table. What is your point of composition going to be for such a global variable? Hewitt's become is a local transformation. If you need global knowledge, then you are cheating in how you encode state.

This is one of the problems with languages that claim to implement some portion of Hewitt's model. For example, Joe Armstrong only discovered the Actor Model while writing his thesis and explaining the design decisions in Erlang. Too many people I run into assume that a given language (or implementation) is meant to be a literal interpretation of a concept, like become.

I'm wondering if a become

I'm wondering if a become construct is necessary in an OO-language.

If you read Gilad's thread, you'll see where I show that 'become' is essentially equivalent to (dynamic) aspects. Doing what you suggest seems like an unsound (or non-local) form of the aspect version. Anyways, I don't know non-Smalltalk 'becomes' (references would be great!), but I suspect you'll see a similar sliding scale as to what you see in aspects. Once you've made this connection, it's a short leap to say 'become' is a fundamental part of how we think about OO.

Unfortunately, traditional aspects for static languages are not modular in the sense of ocaps, so there's something strange about viewing them as fundamental to OO. I believe this can be solved as well in that we can limit advice to what you have a reference to. It would be great to see more work in this topic; I hit it a bit with my work on extending the proxy/membrane pattern ("object views") and with JS interpositioning ("ConScript"), but static OO languages have more interesting class systems going on -- I'd love to see an ocap-safe aspect extension to Joe-E. Interestingly, I suspect 'becomes' for Newspeak might just be this already!

I had always assumed that

I had always assumed that 'become:' simply reflected the fact that objects can progress through different classifications, with different states, properties, and responsibilities, during their lifetime; e.g.: think of egg -> larva -> pupa -> adult for an insect.

In effect, yes

The equivalent representation of Hewitt's Actor-oriented become in an Object-Oriented system would be the GoF State Pattern. Note that the GoF State Pattern is really about migrating between roles: It captures at the static structural relationship level the collaboration among an object and the responsibilities it must fulfill in different roles.

But it is important to realize become is there to capture effects in an open system. Encoding tricks like the one James Iry mentions demonstrate differences between Armstrong's actors and Hewitt's actors, since Hewitt does not couple implementation with specification. The emphasis in Hewitt's model is that become represents binding the actor to the effect, not assignment. In a purely object-oriented system, this logical relation would be implemented by the actor object "navigating" from one role to the next by traversing the static structural relationship.
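As a toy sketch of that State-pattern reading of become (invented names, using the insect lifecycle example from earlier in the thread): the context delegates to its current role object and "becomes" the next role by navigating to it.

```python
class Stage:
    """Toy GoF State pattern: each role knows which role comes next."""
    def advance(self, insect):
        raise NotImplementedError

class Egg(Stage):
    def advance(self, insect):
        insect.stage = Larva()    # migrate to the next role

class Larva(Stage):
    def advance(self, insect):
        insect.stage = Pupa()

class Pupa(Stage):
    def advance(self, insect):
        insect.stage = Adult()

class Adult(Stage):
    def advance(self, insect):
        pass                      # terminal role: no further migration

class Insect:
    """Context: stable identity whose behavior is delegated to its
    current role, Actor-become style."""
    def __init__(self):
        self.stage = Egg()

    def advance(self):
        self.stage.advance(self)

bug = Insect()
for _ in range(3):
    bug.advance()   # egg -> larva -> pupa -> adult
```

The insect's identity never changes; only the role it delegates to does, which is the "binding the actor to the effect, not assignment" emphasis described above.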

e.g.: think of egg -> larva -> pupa -> adult for an insect.

The use cases are meant to be more powerful than that. For example, some queries an agent may never be able to produce an exact answer for because the queries would require a stop-the-world reply. The answer we get back may never be consistent. Your example might be ridiculously easy to implement depending on how the agent derives those transitions; the emphasis needs to be on the behavior, not the states.