Simple Generators v. Lazy Evaluation

Oleg Kiselyov, Simon Peyton Jones and Amr Sabry: Simple Generators:

Incremental stream processing, pervasive in practice, makes the best case for lazy evaluation. Lazy evaluation promotes modularity, letting us glue together separately developed stream producers, consumers and transformers. Lazy list processing has become a cardinal feature of Haskell. It also brings out the worst in lazy evaluation: its incompatibility with effects and its unpredictable, often extraordinary use of memory. Much of Haskell programming lore is about ways to get around lazy evaluation.

We propose a programming style for incremental stream processing based on typed simple generators. It promotes modularity and decoupling of producers and consumers just like lazy evaluation. Simple generators, however, expose the implicit suspension and resumption inherent in lazy evaluation as computational effects, and hence are robust in the presence of other effects. Simple generators let us accurately reason about memory consumption and latency. The remarkable implementation simplicity and efficiency of simple generators strongly motivates investigating and pushing the limits of their expressiveness.

To substantiate our claims we give a new solution to the notorious pretty-printing problem. Like earlier solutions, it is linear, backtracking-free and has bounded latency. It is also modular, structured as a cascade of separately developed stream transducers, which makes it simpler to write and test, and to precisely analyze latency, time, and space consumption. It is compatible with effects including IO, letting us read the source document from a file and format it as we read.

This is fascinating work that shows how to gain the benefits of lazy evaluation - decoupling of producers, transformers, and consumers of data, and producing only as much data as needed - in a strict, effectful setting that works well with resources that need to be disposed of once computation is done, e.g. file handles.

The basic idea is that of Common Lisp condition handling: use a hierarchical, dynamically scoped chain of handler procedures, which get called on the stack, without unwinding it, to parameterize code. Here the producer code (which, e.g., reads a file character by character) is the parameterized code: every time a datum (a character) is produced, it calls the dynamically innermost handler procedure with that datum (it yields the datum to the handler). This handler is the data consumer (it could, e.g., print the received character to the console). Through dynamic scoping, each handler may in turn have a super-handler, to which it may yield data. In this way, pipelines containing multiple transformers can be composed.
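
A minimal sketch of this handler-chain machinery, in Python for concreteness (the names `yield_to` and `with_handler` are mine, not the paper's):

```python
# A dynamically scoped stack of handlers: the innermost handler
# receives each yielded item, and may itself yield items outward.
_handlers = []

def yield_to(item):
    """Called by a producer: hand `item` to the innermost handler."""
    handler = _handlers.pop()
    # The handler runs with itself popped, so anything *it* yields
    # goes to the next handler out (its "super-handler").
    try:
        handler(item)
    finally:
        _handlers.append(handler)

def with_handler(handler, producer):
    """Run `producer` with `handler` installed as the innermost consumer."""
    _handlers.append(handler)
    try:
        producer()
    finally:
        _handlers.pop()

# Producer: yields characters one at a time.
def chars(s):
    for c in s:
        yield_to(c)

# Transformer: consumes each item, yields a transformed item outward.
def upcase(c):
    yield_to(c.upper())

collected = []
with_handler(collected.append,
             lambda: with_handler(upcase, lambda: chars("abc")))
print(collected)  # ['A', 'B', 'C']
```

Because a handler runs with itself temporarily popped, whatever it yields goes to the next handler out, which is what lets transformers be chained.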

I especially like the OCaml version of the code, which is just a page long, implementing a dynamically scoped chain of handlers. With that in place we can already write map and fold in this framework (fold, notably, using a loop and a state cell). There's more sample code as well.

This also ties in with the yield of mainstream languages such as Python and C#.
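
For comparison, here is the same producer/transformer/consumer decoupling with Python's built-in generators (a sketch of mine, not code from the paper): each stage pulls items on demand, so only as much data is produced as is consumed.

```python
def numbers():                     # producer: an unbounded stream
    n = 0
    while True:
        yield n
        n += 1

def squares(source):               # transformer
    for n in source:
        yield n * n

def take(k, source):               # consumer driving the pipeline
    out = []
    for x in source:
        out.append(x)
        if len(out) == k:
            break
    return out

print(take(5, squares(numbers())))  # [0, 1, 4, 9, 16]
```

Even though `numbers` is infinite, the pipeline terminates: the consumer stops pulling after five items.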

Nice, I was going to post

Nice, I was going to post this too.

One thing I wondered about in this paper was the preoccupation with implementation weight vs. expressivity. The restrictions adopted by the "simple generator" strategy make generators less powerful. Is a copying single-shot generator really so much slower that these tradeoffs become attractive?

I also liked the repmin

I also liked the repmin example, and how the solution was to turn the tree into a stream -- the opposite of the SSAX work Kiselyov did 10 years ago.

The repmin solution

The repmin solution presented in the paper still does two traversals over the stream, which somewhat defeats the purpose IMHO. The attraction of the original repmin is that it does two things in one traversal.

One more traversal...

I'm not sure I see the point. If I remember correctly, the lazy repmin builds a thunk structure that exactly mirrors the tree. So yes, you need only one traversal to build it, but forcing a result out of it is isomorphic to a second traversal.

(It's a bit like using continuation-passing style to make any program tail-recursive: yes, it's tail-recursive, but the memory that stack frames for non-tail calls would have used is now needed to build the chain of continuation closures. It's an interesting transform, but you still have to account for the memory usage of the continuation closures, just as here you should consider the control pattern corresponding to forcing the chained thunks.)
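
The CPS point can be made concrete with a small Python sketch of my own (Python does not actually eliminate tail calls, but the shape of the computation is what matters):

```python
# Summing a list, in direct style and in continuation-passing style.
def total(xs):
    if not xs:
        return 0
    return xs[0] + total(xs[1:])   # one stack frame per element

def total_cps(xs, k=lambda v: v):
    if not xs:
        return k(0)
    # The recursive call is now in tail position, but the closure
    # chain built through `k` holds exactly what the stack held.
    return total_cps(xs[1:], lambda v: k(xs[0] + v))

print(total([1, 2, 3, 4]))      # 10
print(total_cps([1, 2, 3, 4]))  # 10
```

The space cost has not disappeared; it has moved from stack frames into heap-allocated closures, just as the cost of the lazy repmin moves into chained thunks.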

Lazy repmin

No, in a lazy language repmin can link nodes to a new box for the min value, which isn't known until the whole tree has been traversed.
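
That trick can even be emulated in a strict language with an explicit mutable box where Haskell uses a lazy thunk (a sketch of mine, not from the paper): the single traversal builds a new tree whose leaves all share one box, and the minimum is written into the box afterward.

```python
# One-traversal repmin with an explicit shared box standing in for
# the lazily computed minimum.
class Box:
    def __init__(self):
        self.value = None

def repmin(tree):
    box = Box()
    def go(t):
        if isinstance(t, int):        # leaf
            return t, box             # every new leaf is the shared box
        l, r = t
        ml, l2 = go(l)
        mr, r2 = go(r)
        return min(ml, mr), (l2, r2)
    m, new_tree = go(tree)            # single traversal
    box.value = m                     # "tie the knot" after the fact
    return new_tree

out = repmin(((1, 5), (3, 2)))
print(out[0][0].value)  # 1
```

Every leaf of the result is the same `Box` object, so reading any leaf yields the global minimum.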

I didn't read it in detail,

I didn't read it in detail, but am I correct in thinking that this cannot express zip?

Zip

I assume by zip, in this case, you mean:

(Producer m x, Producer m y) -> Producer m (x,y)

Inability to zip (or split) streams seems to be a common weakness of many of Haskell's pipe/conduit/iteratee/etc. models. You can partially make up for it by clever use of `Either`.

Iteratees can do zip, and even more

Simple generators indeed cannot do zip: they can express nested but not parallel loops. The iterators of CLU (which was the first real language with yield) made this limitation clear. I believe the balance of simplicity of implementation vs. expressivity is often an illuminating question. For example, we gained a lot of insight investigating a deliberately restricted version of Turing machines: read-only Turing machines, or finite state automata. Primitive recursive functions are another example.
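
The nested-vs-parallel distinction can be made concrete with a Python sketch of my own (not code from the paper): a callback-style producer owns its own loop and cannot be suspended after one element, while a reified suspension point (Python's yield) lets two sources be advanced in lockstep, so zip is immediate.

```python
# Internal (callback-style) iteration: the producer owns the loop.
def each_internal(xs, handler):
    for x in xs:
        handler(x)
# Once each_internal(xs, h) starts, it runs to completion; there is
# no way to pause it after one element and resume another producer,
# so two such producers cannot be interleaved -- no zip.

# External iteration: the generator reifies its suspension point,
# so two sources can be advanced alternately.
def each_external(xs):
    for x in xs:
        yield x

def zip_streams(a, b):
    a, b = each_external(a), each_external(b)
    while True:
        try:
            x = next(a)
            y = next(b)
        except StopIteration:
            return
        yield (x, y)

print(list(zip_streams([1, 2, 3], "ab")))  # [(1, 'a'), (2, 'b')]
```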

Iteratees and their many variations are in a different league. They do capture and reify a part of their continuation; therefore, they can do zip. Iteratees can do much more: the ordinary ZipWith means reading two sources in lockstep, but iteratees can read two sources in arbitrary, statically unknown amounts -- for example, merging two sorted sources:

http://okmij.org/ftp/Streams.html#2enum1iter

One can also distribute one source to several sinks, which is simpler.

Thanks for the

Thanks for the clarification. Using type-level machinery or monad transformers to perform zip isn't something I had much considered.

Thanks, very interesting.

Thanks, very interesting. I'll then finally take the deep dive into iteratees.