Two Thank Yous for Peter van Roy

For FractaSketch. My kids love it!

For "Logic programming in the context of multiparadigm programming: the Oz experience". I always knew Oz was interesting. But, without a lot of time to sit down and write programs, how does one come to grips with a new paradigm? This paper does the trick.

Thanks, Peter!


How about "Functional program

How about "Functional programming in the context of multiparadigm programming: the Oz experience" ?

It would be great to get some pointers to read on this subject also...

You're welcome

You're very welcome, Phil, and many thanks for the compliments. You have just made my day.

FractaSketch was a labor of love for many years. It was inspired by Mandelbrot's original 1977 book. I used to work on it evenings after my "real" work. If I may say so, I am especially proud of the linking and colorization features. A good graphic designer (or a dedicated teenager!) could make some incredible designs with it. Note also that the drawing levels go up to 11 in the toolbar (remember Spinal Tap?).

Laziness in Oz: WaitNeeded is the right primitive

Phil, I have to add an update about laziness in Oz. The ByNeed operation given in the "Oz experience" paper is actually not the real primitive. The real primitive is the WaitNeeded operation: {WaitNeeded X} suspends the current thread until X is "needed" by some other operation (in a precise sense). ByNeed can be implemented in terms of WaitNeeded. WaitNeeded is the right way to combine by-need synchronization with dataflow concurrency. We only understood this after the paper was published. See chapter 13 of CTM (section 13.1.13) for a formal definition of WaitNeeded and a programming example.
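
Roughly, and glossing over the details of the formal definition, the idea looks like this (taking ByNeed in its procedure form, with P a one-argument procedure that binds its argument; a sketch, not the literal library code):

  proc {ByNeed P X}
     thread
        {WaitNeeded X}  % suspend this thread until some operation needs X
        {P X}           % then run P, which binds X
     end
  end

So a call such as {ByNeed proc {$ X} X=1+1 end Y} leaves Y unbound until some other operation actually needs Y.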

Being Lazy

From a user's perspective, it's kind of interesting that different languages start off with different primitives but seem to point to the same end. As Andreas pointed out to me recently, Alice ML defines ByNeed in terms of Lazy as:

fun byneed f = lazy f ()

I suppose the downside is that these primitives, like Haskell's built-in laziness, put the burden on the core compiler and VM (i.e. they don't start from even simpler primitives).

Primitives for dataflow

To see what the real primitives are, you have to peel away the linguistic abstractions and look at the kernel language and its operational semantics. WaitNeeded is a real primitive because it has a natural place in the execution model of dataflow concurrency.

Depends

That very much depends on what operations you want to support and how you construct your formal semantics. The future calculus on which Alice is based (see Schwinghammer's diploma thesis and the paper by Niehren/Schwinghammer/Smolka) introduces laziness as a primitive with minimal effort, as a minor variant of threads. It's simpler than the model in CTM. I cannot say off-hand how the expressiveness compares, though.

Simplicity and expressiveness

It's simpler than the model in CTM.

It's a matter of taste; both are quite simple. Personally, I prefer an approach in the style of a process calculus rather than a lambda calculus. With WaitNeeded you can take an existing eager implementation and make it lazy exactly at the points where it should be lazy, just by adding WaitNeeded calls at those points without changing anything else. The points are not limited to function call boundaries.
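
To make that concrete, here is a rough sketch (EagerAppend and LazyAppend are just illustrative names): start from an ordinary eager list append and suspend on WaitNeeded at the one point where laziness is wanted.

  % plain eager append
  fun {EagerAppend Xs Ys}
     case Xs of nil then Ys
     [] X|Xr then X|{EagerAppend Xr Ys}
     end
  end

  % the same code, lazy at one chosen point: the tail R is only
  % computed once some consumer actually needs it
  fun {LazyAppend Xs Ys}
     case Xs of nil then Ys
     [] X|Xr then R in
        thread {WaitNeeded R} R={LazyAppend Xr Ys} end
        X|R
     end
  end

Nothing else in the eager code changes; the laziness is added exactly where the tail is constructed.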

(This is intended as a reply to Andreas Rossberg.)

Haskell laziness is about as primitive as they come

since it's lazy in almost every aspect. Alice allows you to apply laziness at the expression level, but I don't know that this answers the question of expressiveness.

Of course, the real question is whether a language should be lazy by default (Haskell, Clean) or whether laziness should be stated explicitly (Oz, Alice, etc.). Just wondering, though, whether some of the possible advantages that laziness provides to a compiler can be exploited without it being the default (e.g. combinator reduction, monads, etc.).
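
For what it's worth, the explicit end of that spectrum is pretty lightweight in Oz. A small sketch (Ints is just an illustrative name):

  % laziness is opt-in: only functions declared 'lazy' build their
  % result on demand; everything else stays eager
  fun lazy {Ints N}
     N|{Ints N+1}
  end

  % an eager consumer forces only the prefix it actually needs
  {Browse {List.take {Ints 0} 5}}  % shows [0 1 2 3 4]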

Simon Peyton Jones somewhat regrets

the lazy default in Haskell, as I recall, saying that this "hair shirt" was less important than originally thought, or words to that effect elsewhere.

Compare Eager Haskell.

Laziness forces you to be honest

He said that Haskell's primary differentiator is really functional purity (and not laziness, as it may appear initially). It is Haskell's functional purity that forced people to come up with new techniques for dealing with I/O, graph data, etc. Laziness just "kept them honest" by not allowing side-effects. (I don't recall him saying that he regretted lazy-by-default, though...)

...with drawbacks that he regrets

Neither he nor I said laziness was a/the "primary differentiator." Not being primary makes it all the more reasonable to modify.

I distinctly recall SPJ saying he regrets the lazy default somewhere, not to say laziness generally. Maybe it wasn't the hair shirt paper. I still remember him saying it somewhere. I just don't have the link.

Everyone knows that laziness has uses. The question is whether it should be the default. Research speaks louder than chance remarks anyway. SPJ has been researching along the lines of PvR, i.e., altering the lazy default in GHC:

Optimistic evaluation decides at run-time what should be evaluated eagerly, and provides an abortion mechanism that backs out of eager computations that go on for too long....

Eager Haskell [20, 19] was developed simultaneously, but independently, from our work. Its basic premise is identical: use eager evaluation by default, together with an abortion mechanism to back out when eagerness turns out to be overoptimistic.

Haskell is not lazy, only non-strict.

Haskell is not lazy by definition, only by implementation. Optimistic evaluation is still non-strict.

Optimistic evaluation turned out to give an overall improvement on the standard nofib benchmark suite, but it was slower on some of the tests because it speculatively performed computations that turned out to be unnecessary.

Speculative evaluation did have one nifty benefit: it allowed a 'standard' debugger that gave readable stack traces.

In my opinion, non-strict evaluation is harder to think about, but it gives more power and more options to the user. Non-strict evaluation can be asymptotically more efficient than strict evaluation when amortized computations are involved (see Okasaki's writings for more details). In my experience, the non-strict properties of Haskell allow me to do more stuff with less code.
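
As a tiny illustration of the "more stuff with less code" point (sketched in Oz syntax with explicit laziness to stay with the thread's running example; the names are just for illustration): a lazy producer plus an ordinary eager search does only as much work as the search demands.

  fun lazy {Squares N}
     N*N|{Squares N+1}
  end

  fun {FirstAbove Limit Xs}
     case Xs of X|Xr then
        if X > Limit then X else {FirstAbove Limit Xr} end
     end
  end

  {Browse {FirstAbove 1000 {Squares 1}}}  % 1024, after computing 32 squares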

On the other hand, I swapped some emails with Robert Ennals, one of the authors of the Optimistic Evaluation paper. He said,

My general feeling now, after studying lazy evaluation for a few years, is that, while it can be very useful sometimes, it usually isn't what you want, and thus that lazy languages are a bad idea.

Personally, I see only two important points in the for/against discussion with regards to non-strict evaluation.

  • For: Non-strict evaluation allows more flexibility and power than strict evaluation.
  • Against: Non-strict evaluation makes program behaviour harder to predict naively than strict evaluation.

Even so, I'm not yet convinced the second point is anything other than the expectation of strictness that is deeply ingrained in the current programming culture. Of course, the fact that strict behaviour is usually what people want and assume matches what Robert Ennals said. It's hard to separate the cultural issue from the technical one.

In any case, I think there's a lot more research to be done in the area of non-strict evaluation.

--Shae Erisson - ScannedInAvian.com

Who's on first [. | ?]

Interesting, and distracting. Person A says "X," then person B says "not Y, but Z" as if A had asserted Y, or implied that Y = X. On the heels of this admonition that X ≠ Y comes a third-party opinion on X. I'm lost here...

Fine distinctions should focus a topic, not expand it. The topic was eager vs. lazy default evaluation; strict vs. non-strict issues weren't raised.

It was not claimed that SPJ now wants strictness, or regrets laziness as such, but only that he somewhat regrets its choice as a default. And his (expert-experienced) opinion is a different topic from what the language spec permits or prohibits. Well and good that it permits so much; the question remains, what is the best evaluation default?

All major Haskell compilers use default laziness (if memory serves). The bold headline "Haskell is not lazy" therefore misleads. Most people hearing "Haskell" think of current implementations, not an abstract ideal with multiple potential realizations. Better to say that the spec does not require default laziness. Or, for a bold headline, "Platonic Haskell Isn't Lazy." Cheers.