Worlds: Controlling the Scope of Side Effects

Worlds: Controlling the Scope of Side Effects by Alessandro Warth and Alan Kay, 2008.

The state of an imperative program—e.g., the values stored in global and local variables, objects’ instance variables, and arrays—changes as its statements are executed. These changes, or side effects, are visible globally: when one part of the program modifies an object, every other part that holds a reference to the same object (either directly or indirectly) is also affected. This paper introduces worlds, a language construct that reifies the notion of program state, and enables programmers to control the scope of side effects. We investigate this idea as an extension of JavaScript, and provide examples that illustrate some of the interesting idioms that it makes possible.
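The construct can be emulated, crudely, in today's JavaScript. Below is a minimal sketch: the sprout/commit vocabulary follows the paper, but the copy-on-write Map representation is my own assumption, not the authors' actual implementation (which works on object property lookup, not named cells).

```javascript
// Minimal emulation of worlds in plain JavaScript. A world holds its own
// copy-on-write view of a set of named cells; reads fall through to the
// parent world, writes stay local until commit().
function World(parent) {
  this.parent = parent;     // null for the top-level world
  this.writes = new Map();  // cells modified in this world only
}

World.prototype.sprout = function () {
  return new World(this);   // child world, initially sees the parent's state
};

World.prototype.get = function (name) {
  if (this.writes.has(name)) return this.writes.get(name);
  return this.parent ? this.parent.get(name) : undefined;
};

World.prototype.set = function (name, value) {
  this.writes.set(name, value);  // side effect scoped to this world
};

World.prototype.commit = function () {
  // Propagate this world's writes into its parent in one step.
  for (const [name, value] of this.writes) this.parent.set(name, value);
  this.writes.clear();
};

// Usage: a change made in a sprouted world is invisible to the parent
// until the child commits.
const top = new World(null);
top.set('x', 1);
const w = top.sprout();
w.set('x', 2);  // top.get('x') is still 1 at this point
```

Discarding `w` instead of committing gives the paper's backtracking idiom for free: the parent never sees the abandoned writes.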

This paper introduces a new programming construct of just the kind I love: it stimulates the imagination and provides simple, strong dynamic invariants that make programs easy to reason about.


call/cc, STM

Two things came to mind:

- the "sprout" method feels like a dual version of (call/cc (lambda (x) x)), current continuation and world state being roughly duals of one another.

- The commit mechanism reminded me of STM. It wouldn't surprise me if different worlds could be executed in parallel and synchronized only via STM-like commits.

I also have a feeling that commit might be dual to actually calling the continuation, or something like that.

Cool paper! It had me at the backtracking example. Making a largish python application exception-safe is pretty much impossible...

Delimited Continuations, Zipper File system

The zipper filesystem, discussed here back in 2005, has "commit"- and "refresh"-controlled views of the state of a file system.

This was indeed done with continuations, but delimited ones rather than a global call/cc. The Worlds paper and ZipperFS both have transaction-like commands to propagate or undo changes.


Cool. I can hardly believe my favorite language is now becoming a language to do PL research in.

... except

... except they went right up to the point of explaining everything but how they created the world object.

Edit: It helps if I read the footnotes:
Link to Sample

They explain it. It's a "sprout" method on an existing World (page 4), and the program starts in its own World.

I can't find...

I've looked through all the source, and still have no idea what the world object looks like. How does it segregate the scope of the base classes like Number? Is this all done in the compiler or via code? I guess this is what I was getting at earlier. I wanted to see how they do this.

Found it through some dynamic loads he makes with the include: the Worlds_Library.


If you want more details you can load up the online evaluator and use 'View Page Source' to see where the sources are too.


I've played with similar ideas, though in my design worlds were finer-grained, more elaborate models, which allowed for complete redefinition of world semantics from within the language itself, e.g. you can redefine methods for reading and writing cells, the atoms of worlds. This may seem to run into trouble with circularity, but this is where reifying worlds as objects in themselves comes to the rescue: if you want to redefine the semantics for world W1, you simply have the redefinition code execute within another independent world W2; independent, meaning that an access to W2 may not delegate, directly or indirectly, to W1, or you run afoul of circularity (which would manifest as infinite recursion). You can have one sibling world redefine the semantics for another sibling, or you can have a parent world redefine the semantics for a newly cloned child world; the commonest case I've found to be the latter, which is analogous to redefining the semantics of level N in a reflective tower of interpreters from level N+1.

With this design you can easily implement transaction logging and rollback and many other such things, without having them be hard-coded pieces of functionality. Reactive programming is simple: redefine the cell reader method to track dependencies and the cell writer method to propagate changes to dependents. Transparent persistence is possible without further ado, and also Smalltalk-style become() and freezing (protecting selected cells from change for some duration). Worlds do not even natively support delegation to other worlds in a child-parent fashion, or cloning; you implement this yourself as library code. An initial system-supplied world is all that's required to bootstrap. You have enormous flexibility in customizing both the back-end implementation strategies of worlds and their front-end features. An aggressively inlining and run-time specializing compiler makes it possible to compete in performance with a more traditional native implementation of the same functionality.
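One way to sketch the reactive-programming claim above in plain JavaScript (all names here are mine; the commenter's actual design is not public): a world object whose readCell/writeCell methods are redefined "from outside" so that reads track dependencies and writes propagate changes to dependents.

```javascript
// A world as a bag of cells with redefinable reader/writer methods.
function Cell(value) { this.value = value; this.dependents = new Set(); }

const world = {
  cells: new Map(),
  define(name, v) { this.cells.set(name, new Cell(v)); },
  readCell(name) { return this.cells.get(name).value; },
  writeCell(name, v) { this.cells.get(name).value = v; }
};

// Redefine the cell semantics without touching any user code:
// reads record the current observer as a dependent; writes re-run
// every dependent formula.
let currentObserver = null;
world.readCell = function (name) {
  const cell = this.cells.get(name);
  if (currentObserver) cell.dependents.add(currentObserver);
  return cell.value;
};
world.writeCell = function (name, v) {
  const cell = this.cells.get(name);
  cell.value = v;
  for (const dep of cell.dependents) dep();
};

// A derived cell, recomputed whenever a cell it has read is written.
world.define('a', 1);
world.define('b', 2);
world.define('sum', 0);
const recompute = () => {
  world.cells.get('sum').value = world.readCell('a') + world.readCell('b');
};
currentObserver = recompute;
recompute();               // sum = 3; registers recompute as a dependent
currentObserver = null;

world.writeCell('a', 10);  // re-runs recompute: sum becomes 12
```

This glosses over the hard parts of real reactive systems (glitch avoidance, dynamic dependency removal), which the reply further down rightly points out.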

My ideas surrounding this originally grew out of wanting to have the flexibility of a full reflective tower (in the Lisp tradition) but without the headaches. Limiting the semantic hacking to state, i.e. worlds, and reifying worlds turned out to make the problem easily tractable while letting me do all that I wanted to do. In my first attempt I did not fully reify worlds as objects: the contents of a world were reified, but not the world itself, which thus remained half-implicit. This meant that I had to add a bunch of apparatus to the language to handle the problem of circularity during redefinition. Once I came upon the idea of reifying the whole world as a unity, all these problems vanished: you could then gradually stage the changes in world semantics without effecting them immediately.

If either of the paper's authors comes across this thread, I'd be curious to hear if they have considered similar ideas, or what they think of them in any case.

This sounds very

Per: Your worlds-like project sounds very interesting. Is there any public information you can point us to? I get the feeling the project lives inside a business... :)

What's that sound?

you can easily implement transaction logging and rollback and many other such things, without having them be hard-coded pieces of functionality

Alarm bells always go off in my head when I see 'you can implement' pushed heavily as a feature. In my experience, it means that groups/libraries/frameworks/etc. will not implement it the same way and it will become almost as expensive to integrate code between them as it is to entirely rewrite several of the components. For small-scale or experimental languages, I can appreciate this flexibility. But greater interstitial stability is desirable for 'real' languages.

In any case, your approach reminds me very much of the Concept Oriented programming model, which also models the spaces in which objects live.

As far as reactive programming goes, I suggest you explore it further before claiming a solution to be 'simple'. Hooking those dependencies up is not a small problem, especially when starting with a language and syntax built for a different paradigm.

Isn't a World a monad?

I don't have much experience with monads, but a World strikes me as a monad with all the language's primitives lifted into it, and where commit is a tail call into the target monad (thus replacing the previous program/World with the new program/World).

Of course, many of the advantages of worlds for imperative languages simply mimic functional programming: produce new state, don't mutate old state; committing a world just installs all the new state in one atomic step.
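The analogy above can be made concrete with a small state-monad sketch in JavaScript (my construction, not anything from the paper): computations are functions from a state to a [result, nextState] pair, reads and writes are lifted primitives, sprouting runs a sub-computation against a copy of the state, and "commit" installs the inner state wholesale.

```javascript
// State monad: a computation is state => [value, nextState].
const unit = x => state => [x, state];
const bind = (m, f) => state => {
  const [value, next] = m(state);
  return f(value)(next);
};

// Lifted primitives over an immutable record of cells.
const read = key => state => [state[key], state];
const write = (key, v) => state => [v, { ...state, [key]: v }];

// sprout runs m against a copy of the state; the outer state is untouched.
const sprout = m => state => {
  const [value, innerState] = m({ ...state });
  return [{ value, innerState }, state];
};
// commit replaces the outer state with the sprouted world's state.
const commit = inner => _state => [inner.value, inner.innerState];

// Usage: the write inside the sprout is invisible until committed.
const prog =
  bind(sprout(write('x', 2)), inner =>
    bind(read('x'), before =>          // still sees the outer state
      bind(commit(inner), _ =>
        bind(read('x'), after => unit([before, after])))));

const [result] = prog({ x: 1 });       // result is [1, 2]
```

As the parent comment says, this only describes one possible interface; it doesn't pin down the semantics any more than a World object with get/set methods would.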

Worlds and Monads

Yes, you can easily expose this paradigm through, say, a version of the ST interface wherein STRefs have non-standard semantics. But by saying this you have only described an interface, not a semantics. Any arbitrary funkiness you can imagine that involves state could be exposed in like manner. Indeed, you could do the same in any object-oriented language by having a World interface with get() and set() methods that take cell tags as arguments. That loses you the syntactic transparency, but so does the monadic approach.

In Moggi's view, the whole point of computational monads was to allow the semantician to modularly extend the semantics of a pure lambda calculus, the object language, with various kinds of effects, among them state, from within the meta language. With Wadler's revision the distinction between object and meta language blurs and they become one and the same, so in that sense you are extending the language's semantics from within itself whenever you write a monad, with types now effecting the separation that was formerly provided by the object/meta dichotomy. The language is suddenly a semantics laboratory. Though this might sound odd, I think F#, in which monads may more transparently extend the standard semantics, takes this philosophical view of monads more seriously than does Haskell. And there is no reason why both approaches cannot peacefully coexist, at least in a call-by-value language.

An operational view of a monad is that it is an interface that lets you hack the semantics of a pure call-by-value lambda calculus; if your language provides mutable references out of the box, as F# does, then the logical extension is to also allow the mutable reference semantics to be hacked as well. This is exactly what I have done in the design I described elsewhere in this thread; I restricted the discussion to worlds to keep it simple, but in my language what I call simply "a semantics", explicitly reified as an object, encompasses both the standard monadic interface as well as mutable reference semantics and a few other things. In case you're wondering, my language is dynamically typed. You can define a new semantics from within an existing, independent semantics. The whole kernel of the language's proof of concept implementation is about a thousand lines of code. Modular, hackable, reified semantics I've found to be a very simple, elegant and expressive construct, though I suspect theory purists would find much to dislike.

(Sorry, that became more digressive than I intended.)

some related work

Here's an old paper I wrote on this subject:

J. G. Morrisett. Refining first-class stores. In Proceedings of the ACM SIGPLAN Workshop on State in Programming Languages, pages 73-87, 1993.

earlier, more comprehensive related work

These ideas seem to me to be largely prefigured in or subsumed by the notions of "object history", "pseudo-temporal environment", and "possibility" described in Dave Reed's 1978 dissertation (MIT-LCS-TR-205); moreover, Kay has previously presented other work (Croquet) based on Reed's ideas.

What then is the nature of the contribution made by the "Worlds" paper? Is it simply that one tiny fragment of the functionality enabled by these more primitive (and powerful) ideas can be conveniently wrapped up, rebranded, and added today to a familiar language like JavaScript?

It's how you use 'em

The examples are what I find interesting. There are zillions of nifty ideas that could be expressed in terms of monads / continuations / transactions / etc but that doesn't make them any less nifty :-)

It's like when VMware snapshots came along and suddenly you could fire up your Linux machine and write rm -rf /. Trivial in some sense but very refreshing in another :-)

recent, less comprehensive related work?

A variation on this idea of scoping side effects appeared this year at the ACM Dynamic Languages Symposium: contextual values, i.e. values that depend on the context in which they are read and modified. It is basically a generalization of the idea of thread-local values (and so has no such thing as a "commit"). Also, compared to worlds, the granularity is finer (per value) and more general (a "world" is but one kind of context). A construct similar to "in", called scoped assignment regions, is also described. However, worlds seem to give a better structure when it comes to reasoning about a given context.
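A minimal sketch of what a contextual value might look like in plain JavaScript (the ContextualValue name and per-context Map representation are my invention, not the published API): each context object gets its own binding, reads fall back to a default, and there is nothing to commit.

```javascript
// A value whose contents depend on the context it is accessed from,
// generalizing thread-local storage: any object can serve as a context.
function ContextualValue(defaultValue) {
  this.defaultValue = defaultValue;
  this.perContext = new Map();  // context object -> value in that context
}

ContextualValue.prototype.get = function (ctx) {
  return this.perContext.has(ctx) ? this.perContext.get(ctx)
                                  : this.defaultValue;
};

ContextualValue.prototype.set = function (ctx, v) {
  this.perContext.set(ctx, v);  // visible only from ctx; no commit step
};

// Two contexts (two threads, two worlds, ...) see independent values.
const cv = new ContextualValue(0);
const ctxA = {}, ctxB = {};
cv.set(ctxA, 1);  // cv.get(ctxB) still falls back to the default, 0
```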

(Thanks Michael and Greg for the pointers on early related work)