Chunked sequences: Lookahead laziness

To add to Ehud's proposed Clojure department: A recent Reddit thread mentioned the release of Clojure 1.1 RC 1, which features, among other things, support for ‘chunked sequences’—essentially, lazy sequences where elements are evaluated in chunks of 32, rather than on-demand. (If the behaviour sounds familiar, it's apparently what lazy bytestrings already do in Haskell.)

As a mathematician, there are few things more dear to me than laziness—of the physical sort, at least, which is perhaps why the programming sort appeals so much to me. I lamented its demise, and Rich Hickey responded with some very nice background and explanation.

Oh well... I went ahead and

Oh well... I went ahead and created the category. Contributing editors can now file stories under "Clojure" (in the spotlight section).

I'll admit, I'm not sure I understand.....

So far, what I *think* I'm reading is that if one has a lazy list (sequence) and one accesses the "next" item of this lazy list, Clojure wants to compute the next 32 values, so that accessing the following 31 values of our "lazy sequence" is something akin to a constant-time indexing operation. Then, should we access element 33, we again compute 32 values in a sort of "strict" manner, and repeat.
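For what it's worth, the mechanism as I understand it can be sketched in Haskell (a toy model of my own, not Clojure's actual implementation; the chunk size is 4 here rather than 32 just to keep the example small):

```haskell
chunkSize :: Int
chunkSize = 4  -- Clojure uses 32; kept small here for illustration

-- A chunked sequence: a lazy list of chunks, where touching any element
-- of a chunk forces every element in that chunk.
data Chunked a = Nil | Chunk [a] (Chunked a)

fromList :: [a] -> Chunked a
fromList [] = Nil
fromList xs = Chunk c (fromList rest)
  where (c, rest) = splitAt chunkSize xs

-- Taking the first element realises the whole current chunk.
headC :: Chunked a -> a
headC Nil         = error "headC: empty sequence"
headC (Chunk c _) = foldr seq (head c) c  -- force every element to WHNF

toList :: Chunked a -> [a]
toList Nil          = []
toList (Chunk c cs) = c ++ toList cs
```

So `headC (fromList (1 : undefined : []))` blows up, whereas `head` on the plain list would happily return 1: forcing one element drags the rest of its chunk along with it.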

Is this the whole point, or am I missing something? To this strict-language guy, it sounds like a fine optimization on the whole (measured over tons of typical use cases), but am I missing some downsides to this scheme?

Or am I totally off base regarding what is going on? Love to have a clearer idea of both the mechanism and its relative wisdom.


One downside

One downside of this scheme is that if any of the 32 next values are bottom, you're done. I'm not sure how often people exploit that capability in practice, though.

That depends

On whether the language's actual semantics is that the next 32 values are evaluated, or whether it's only an optimization. In a pure language like Haskell, you can make that distinction: all the runtime has to do is ignore bottom if it comes up in the evaluation of 'extra' elements brought in by chunking, and set up an error to occur if the user actually tries to access them.
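In GHC terms, the "ignore bottom now, fail only on access" idea can be sketched for the *erroring* kind of bottom (a sketch with names of my own; it says nothing about non-terminating bottoms, which can't be caught this way):

```haskell
import Control.Exception (SomeException, evaluate, throw, try)

-- Speculatively force a chunk element. If it turns out to be an erroring
-- bottom, don't fail now; hand back a thunk that re-raises the error only
-- if the user actually demands the value later.
speculate :: a -> IO a
speculate x = do
  r <- try (evaluate x)
  case r of
    Right v -> pure v
    Left e  -> pure (throw (e :: SomeException))
```

`speculate (error "boom")` returns normally; the error surfaces only when the result is itself forced.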

Detecting non-termination

All the runtime has to do is ignore bottom ..

And to ignore it you have to detect it. Detecting non-termination is not that easy in practice. The work on optimistic evaluation for Haskell did something similar, but it was fairly complex as far as I could tell.

Detecting non-termination

Detecting non-termination is not that easy in practice.

One of those nice cases when theory and practice are in agreement. :-)

Since this is just meant to be an optimisation anyway, and nothing depends on the value actually being evaluated, I suppose that one could simply set some arbitrary limit n, and declare that, for chunking purposes, any element that takes more than n steps to reduce to normal form is “approximately bottom”?
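GHC won't count reduction steps for you, but a wall-clock budget gives a crude approximation of the same idea (again a sketch; `speculateFor` is my own name, and `timeout` can only interrupt computations that allocate):

```haskell
import Control.Exception (evaluate)
import System.Timeout (timeout)

-- Try to force a value within a time budget (in microseconds). A Nothing
-- result means "approximately bottom": too expensive to realise eagerly,
-- so a chunking runtime would leave it as an ordinary thunk instead.
speculateFor :: Int -> a -> IO (Maybe a)
speculateFor budget x = timeout budget (evaluate x)
```

Here `speculateFor 50000 (length [1..])` comes back `Nothing` after 50ms, while a cheap value comes back `Just` its result.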

Ignore bottom?

I don't follow: how does a runtime ignore bottom?

In the following, "taker" is guaranteed to terminate in Haskell only because the second element is guaranteed never to be evaluated.

-- Number of halving/tripling steps from n down to 1. Whether this
-- terminates for every positive n is exactly the (open) Collatz conjecture.
collatz :: Integer -> Integer
collatz n = collatzr n 0
  where
    collatzr 1 c = c
    collatzr n c
      | odd n     = collatzr (3 * n + 1) (c + 1)
      | otherwise = collatzr (n `div` 2) (c + 1)

-- take 1 never forces the second element, so taker terminates even
-- though collatz x is never known to.
taker :: Integer -> [String]
taker x = take 1 ("all good" : show (collatz x) : [])

In a chunked-sequence situation, taker might or might not terminate.

Don't even need unsolved problems

We don't even need unsolved problems for this purpose (although I like your example!): if acks = map (\x -> ack x x) [0..], where ack is the Ackermann function, then take 1 acks will terminate under any finitary evaluation scheme, but much more slowly if it's trying to fetch 32 values than if it's trying to fetch 1.
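Concretely, with the standard two-argument definition of the Ackermann function (already `ack 4 4` has more digits than could ever be written down, so a 32-element chunk would get stuck long before index 31):

```haskell
-- The two-argument Ackermann function: total, but growing so fast that
-- ack 4 4 is utterly out of computational reach.
ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))

acks :: [Integer]
acks = map (\x -> ack x x) [0 ..]
```

Under ordinary laziness, `take 1 acks` returns `[1]` immediately; a 32-wide chunk would have to realise `ack 4 4` and beyond.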

This, to me, is the point that seemed not to be discussed in the thread I linked: There's a benefit to laziness even in a total programming language, namely, avoiding terminating-but-expensive calculations.

Is this the whole point, or

Is this the whole point, or am I missing something?

I am no expert (which is why I appreciated that Rich Hickey came into the thread to comment), but, to me, the point of laziness is to delay expensive (including infinitely expensive) calculations for as long as possible, and this optimisation explicitly performs calculations before they are necessary. I think most of my reservations would go away if there were some sort of “expensiveness threshold” beyond which the look-ahead computation would be aborted (or saved for later).

On the other hand, it seems that laziness in Clojure serves a different philosophical purpose. Hickey says:

Second, there is another benefit to lazy sequences, of primary importance to Clojure: avoiding full realization of interim results.

The point—which I think I don't properly understand—seems to be not so much about delaying the computation of any individual result (which is how I think about laziness), but rather about saving on having to construct whole new sequences (apparently the only lazy objects in Clojure).

Finally, here's the real point for me: I have the impression that there's already lots of trouble reasoning about the space costs involved with laziness; how (without some explicit remedy to the points above) are we supposed to get a handle on the mixed space-time costs of chunked laziness?

Just for memory and bounding

Since I have considered a similar design for my language (sequence-only laziness, as-you-go memory usage), maybe I can help with understanding here.

Basically, when your sequence is the result of a generator, you don't want to keep all the calculated items around if you don't need them. In that context, laziness is seen mostly as a help in:
* reducing memory usage
* preventing infinite loops
If the generator is pure, you can discard old results aggressively and recalculate them when needed. If the items don't depend on each other, you can also use parallelism. Neither of these is laziness itself, but they are other tools that contribute in the same way to making sequences more useful.
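A small Haskell illustration of the memory point (my own example): if the consumer never holds on to the front of a generated sequence, each cell can be reclaimed as soon as it has been walked past, so the traversal runs in constant space no matter how long the sequence is.

```haskell
import Data.List (foldl')

-- Sum of squares over a generated sequence. foldl' consumes the list
-- strictly from left to right and keeps no reference to earlier cells,
-- so the whole of [1..n] is never resident in memory at once.
sumSquares :: Integer -> Integer
sumSquares n = foldl' (\acc x -> acc + x * x) 0 [1 .. n]
```

Holding a second reference to the full list (say, to print it afterwards) would defeat this, since the garbage collector could no longer drop the cells behind the fold.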