Programming with Algebraic Effects and Handlers

Programming with Algebraic Effects and Handlers. Andrej Bauer and Matija Pretnar, arXiv preprint.

Eff is a programming language based on the algebraic approach to computational effects, in which effects are viewed as algebraic operations and effect handlers as homomorphisms from free algebras. Eff supports first-class effects and handlers through which we may easily define new computational effects, seamlessly combine existing ones, and handle them in novel ways. We give a denotational semantics of eff and discuss a prototype implementation based on it. Through examples we demonstrate how the standard effects are treated in eff, and how eff supports programming techniques that use various forms of delimited continuations, such as backtracking, breadth-first search, selection functionals, cooperative multi-threading, and others.

Eff has been discussed here before, and it's nice to see some more progress and a much more complete introduction. The paper is intended for a general audience (well, a general audience of PL enthusiasts). It's quite clear and contains a wealth of examples.


This is very interesting.

This is very interesting. Since they can express both laziness and non-determinism, I wonder if this can be used to give a nice implementation of lazy non-determinism for functional logic programming (aka needed narrowing). Perhaps they explicitly disallow this:

If the thunk triggers an operation, eff reports a runtime error because it does not allow operations in resources. While this may be seen as an obstacle, it also promotes good programming habits, for one should not defer effects to an unpredictable future time. It would be even better if deferred effectful computations were prevented by a typing discipline, but for that we would need an effect system.

Operations are disallowed

Operations are disallowed only in resources. If you were to implement laziness with handlers, you could combine it with nondeterminism. Maybe I'll try and code up a short example demonstrating this.

That would be great! I have

That would be great!

I have a further question, if you don't mind. Resources feel very much like global mutable state. Handlers feel very much like multi-prompt delimited continuations (with your new corresponding to new_prompt allocation, handle corresponding to push_prompt, and each operation corresponding to a function that calls shift). What are the essential differences between them? Or are they roughly the same thing with a different syntactic sugar?

eff : shift/reset = while/if-then-else : goto

I am pretty sure eff could be simulated with a certain kind of delimited control, perhaps exactly what you mention is good enough. I think the difference is in how things are organized and which aspects of control are emphasized. For example, while code using explicit shift/reset often looks quite puzzling (to me, and I suspect I am not the only one), the same objectives can often be achieved in eff in a perfectly natural way. Perhaps the correct analogy is that

"eff" : "shift/reset" = "while/if-then-else" : "goto"

Nice

I share your view about the puzzling quality of shift/reset code, and if your analogy proves to be roughly correct, I think it's a very nice slogan.

IIRC, Balat, Di Cosmo and

IIRC, Balat, Di Cosmo and Fiore gave a normalization-by-evaluation algorithm for the lambda calculus with sum types (in their POPL 2004 paper, Extensional Normalisation and Type-Directed Partial Evaluation for Typed Lambda-Calculus with Sums, and Balat's 2009 NBE paper Keeping Sums Under Control) that made a really clever use of delimited control in order to handle the commuting conversions.

I'd be interested in seeing what this would look like with effect handlers.

Algebraic effects vs. delimited continuations

Andrzej Filinski has shown that every monadic effect (and algebraic effects are monadic) can be represented using continuations and state. Furthermore, by the Church–Turing thesis, every (Turing-complete) language is, in a sense, just new syntactic sugar for the same thing. General statements aside, the main differences that come to mind are:

  • The control is finer-grained: instead of a single shift applied to different functions, there are distinct operations, paving the way for more precise static analysis (which we have yet to implement).
  • The control is reversed: with delimited continuations, shift (the counterpart of an operation) determines what to do with the continuation, while in eff, the handler (the counterpart of reset) determines it.
  • This allows one to use the same effectful computation inside different handlers to obtain different results, which is the main idea of handlers. I do not know delimited continuations all that well: can you use shift without an enclosing reset?
  • Also, as Andrej mentioned, this frequently results in more elegant code.
  • There is a very natural mathematical semantics for both algebraic effects (Plotkin & Power) and handlers (Plotkin & Pretnar).
  • The semantics immediately leads to first-class effects and handlers. Again, I am not that familiar with delimited continuations: can you pass around shifts and resets as arguments?
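The third point, running the same effectful computation under different handlers, can be sketched in OCaml 5, whose native effect handlers follow the same operations-plus-handlers design as eff (the names Ask, comp, and with_constant below are mine, not from the paper):

```ocaml
open Effect
open Effect.Deep

(* Declare an operation, as one would declare an eff operation. *)
type _ Effect.t += Ask : int Effect.t

(* One effectful computation... *)
let comp () = perform Ask + perform Ask

(* ...interpreted by a handler that answers every Ask with n. *)
let with_constant n f =
  try_with f ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (a, _) continuation) -> continue k n)
        | _ -> None }

let () =
  assert (with_constant 1 comp = 2);
  assert (with_constant 21 comp = 42)
```

The same comp produces different results under different handlers, which is exactly the inversion of control described in the second bullet: the handler, not the operation, decides what happens to the continuation.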

You don't need delimited

You don't need shift/reset for delimited control; you can also use yield/run, which should be familiar to anyone who has used coroutines and generators.

Yield/run

For those interested, yield/run can be defined in eff as:

type ('i, 'o) yield = effect
  operation yield : 'o -> 'i
end

type ('i, 'o, 'r) iterator = Result of 'r | Susp of 'o * ('i -> ('i, 'o, 'r) iterator)

let run y = handler
  | y#yield x k -> Susp (x, k)
  | val x -> Result x

with run having the type ('i, 'o) yield -> 'r => ('i, 'o, 'r) iterator as expected. You can then write shift/reset as in the James & Sabry paper with

let shift f = y#yield f

let rec reset thunk =
  interp (with run y handle thunk ())
and interp = function
  | Result r -> r
  | Susp (f, k) -> reset (fun () -> f (fun i -> interp (k i)))

(I decided to fix an instance y : yield).

The code doesn't run due to a cyclic type error. I was too lazy to try the different solutions suggested in the paper, but if you turn off type-checking, it runs as expected (I tried the yin-yang puzzle and it works).
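For readers without an eff installation, roughly the same run handler can be sketched with OCaml 5's effect handlers. This is my own translation, not code from the thread; I specialize the yielded values to int for brevity, and note that OCaml's captured continuations are one-shot, unlike eff's:

```ocaml
open Effect
open Effect.Deep

type _ Effect.t += Yield : int -> unit Effect.t

(* Counterpart of the thread's iterator type, with 'o fixed to int. *)
type 'r iterator = Result of 'r | Susp of int * (unit -> 'r iterator)

(* Counterpart of run: reify a yielding computation as an iterator. *)
let run f =
  match_with f ()
    { retc = (fun r -> Result r);
      exnc = raise;
      effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Yield x -> Some (fun (k : (a, _) continuation) ->
            Susp (x, fun () -> continue k ()))
        | _ -> None }

(* Drive an iterator to the end, collecting the yielded values. *)
let rec collect it acc = match it with
  | Result _ -> List.rev acc
  | Susp (x, k) -> collect (k ()) (x :: acc)

let () =
  let it = run (fun () -> perform (Yield 1); perform (Yield 2); 0) in
  assert (collect it [] = [1; 2])
```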

Pure + Effects?

After reading the paper, I am somewhat unclear. Is Eff a pure language + effects allowing for impurity? Or is there a way to perform effectful actions outside the effect system?

If by "effect system" you

If by "effect system" you mean a type system for effects, then well, Eff doesn't have one (yet).

If by "effect system" you mean the way you perform effects in Eff, then yes, Eff is a pure language + effects. But isn't that statement true for any language? The C language is pure + effects, the effects being mutable state. Similarly, Eff is pure + effects, the effects being a much more powerful form of effects than mutable state.

I agree with Jules. May I

I agree with Jules. May I also add that operations are the only way one can perform effects in eff (unless you count divergence as an effect). This gives the programmer much greater control over effects and enables handlers.

Ah, right, my thinking was

Ah, right, my thinking was confused. I was asking if the language was pure plus effects, and I see now that, phrased that way, the question covers every language.

I believe I was trying to ask whether the use of effects is isolated in a way that is obvious either syntactically and/or via types. It sounds like the syntactic "isolation" via operations and handlers accomplishes some of this, but that there is no typing of the effects.

The only way to trigger an

The only way to trigger an actual effect is to use one of the predefined resources (currently only std#write and std#read). Anything else is "pure", i.e., even if you define a new effect and handlers for it, they will ultimately be translated to pure code (unless they somehow use std#read and std#write). However, at present the types do not express information about which operations might happen.

Caution about conflating shift with yield

One has to be careful saying that the familiar yield is equivalent to the delimited control shift. There are many variations of generators/coroutines, and most of them are strictly less expressive than shift. The "distilled" operators yield/run in the James and Sabry paper are emphatically (very) uncommon among all variations of generators. So, the yield that is as powerful as shift is not familiar.

First of all, many implementations of generators let us nest generators but not run them in parallel, side by side. Shift-implemented generators do run side by side (see the famous "same-fringe" problem).

Here is the second, subtler difference. I often liken shift to resumable exceptions, such as those in Common Lisp. Consider the following pseudo-code.

exception E of int * (unit -> unit)

try .....  raise E 1 ...
with E (x,resumption) -> print_int x; resumption ()

Here the exception carries not only the payload but also the resumption which, when invoked, continues the interrupted computation (which may raise another resumable exception, etc.). One can say that "raise E 1" is yield in disguise. The connection between generators and exceptions is in fact very deep; see

http://okmij.org/ftp/continuations/generators.html

Now consider the following code:

let r = ref (fun () -> ()) in
(try .....  raise E 1 ...
 with E (x,resumption) -> print_int x; r := resumption);
!r ()

The exception handler didn't resume the exception; it merely stashed the resumption away. The try block exited. Only then did we extract the resumption and try to resume the already dead try block. Can we do that? Common Lisp says no: attempts to resume an already exited condition block are errors. However, if we implement resumption with shift, we can resume the already exited block. As Peter Landin said, "You can enter a room once, and yet leave it twice." A captured continuation (resumption) is a regular function; therefore, we should be able to call it whenever we want and as many times as we want. For that reason, implementations of shift have to "copy the stack" one way or another.
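As an aside, OCaml 5's effect handlers sit between these two designs: a stashed resumption may be invoked once, even after the handled block has exited, but a second invocation is an error, so no stack copying is needed. A minimal sketch of stashing a resumption (the names E and stash are mine):

```ocaml
open Effect
open Effect.Deep

type _ Effect.t += E : int Effect.t

let stash = ref (fun () -> 0)

(* The handler does not resume; it stashes the resumption and returns. *)
let result =
  try_with (fun () -> perform E + 1) ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | E -> Some (fun (k : (a, _) continuation) ->
            stash := (fun () -> continue k 41);
            0)
        | _ -> None }

let () =
  assert (result = 0);       (* the handled block exited without resuming *)
  assert (!stash () = 42);   (* resuming the "dead" block once still works *)
  assert (try ignore (!stash ()); false   (* ...but only once *)
          with Continuation_already_resumed -> true)
```

So you can leave the room twice, but not three times: the one-shot restriction is what lets the runtime avoid copying stack fragments.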

So, the yield that is as

So, the yield that is as powerful as shift is not familiar.

True, but as a generalization of the familiar yield it should be relatively simple for most to grasp. Familiarity is only one benefit though, the other being the inversion of control that it brings to mind. Yield seems like a pretty simple concept to grasp, shift/reset less so.

But as you say, regardless of the surface semantics, the full generality of delimited continuations must copy the stack fragment, at least when a multishot invocation is detected. I'm thinking here of a possible runtime optimization along the lines of one-shot continuations.

Comparison to Extensible Interpreters?

I'm curious how the algebraic approach to handling effects compares to something like Oleg's take on extensible interpreters via delimited continuations (EDLS). They seem very closely related, in that Eff's effects are delegated to their respective handlers in a fashion similar to delimited continuations, where EDLS's extensible values are correlated via delimited continuations to their respective interpreters. I might be remembering EDLS incorrectly, but the operational details seem very similar.