April 1st special: The War of the Worlds

Conrad Barski has posted a sneak peek from his upcoming Lisp textbook/comic: Land of Lisp.

The first slides may seem unrelated, but boy does the message sting when you reach the ending...

FPers will be quick to note, of course, that this being April Fools' Day the whole thing is a joke and we can all go back to Haskell...


Purity, where is thy sting?

I can't help noticing that the comic's observations about purity miss the point. A better accusation against the defendants would be "guilty of having undeclared side effects". Which of course Haskell also allows, with unsafePerformIO.

As a number of Haskellers have observed, Haskell makes a fine imperative language — "the world's finest imperative language", according to Erik Meijer. It only asks that, ideally, you annotate the type of imperative functions - confess their sins, as it were. (Please, no yellow sleeve patches in future comics.) This would seem to satisfy the wizard's desire that "We just believe that everyone should have the right to cause side effects if they really want them."
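Concretely, the "confession" is nothing more than an IO in the signature. A minimal sketch (the function names are invented for illustration):

```haskell
import Data.Char (toUpper)

-- The confession: the effectful function wears IO in its type,
-- while the pure function's type promises no observable effects.
greet :: String -> IO ()     -- imperative, and says so
greet name = putStrLn ("Hello, " ++ name)

shout :: String -> String    -- no IO in the type: pure by construction
shout = map toUpper

main :: IO ()
main = greet (shout "world")   -- prints "Hello, WORLD"
```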

Of course, the classic complaint is that combining purity with a fully static type system introduces the dreaded (to a Lisper) B&D style of programming. But defending undeclared side effects seems like the harder sell. At the end of that road is ninja patching.

True enough. At various

True enough. At various points I thought that the target was going to be Ada (the language most often accused of being a B&D language), even though I knew better.

This Comic has been Interpreted...

...as more critical of Haskell than I intended. The Lisp guys in the cartoon are pretty bumbling, and their offhand complaint of "why box yourself in with Haskell?" isn't really a meaningful critique of Haskell. Note that they are responsible for creating a propaganda movie about Haskell, and that their use of unsafe coding techniques apparently killed the cat at the end of the cartoon :)

A serious critique of the monadic approach of Haskell is possible, but I agree my comic doesn't contain it.

But hey, wasn't Haskell an

But hey, wasn't Haskell an elaborate prank all along?!

Actually...

Actually, this is how I interpreted the comic, but I was sort of afraid to say so...

In any case, a world divided between Lisp and Haskell would be a big step forward. :)

Maybe voters prefer bumblers?

I was really responding to Ehud's point about the message stinging, but that was based on an assumption about the bit that stung.

I did notice the propaganda-film within propaganda-comic aspect, but to me the bumbling obviousness of the wizard appears more benign than the portrayed totalitarianism of the other side. The wizard's approach may kill the odd cat, but he means well and isn't really scary. That might explain the interpretations you've seen.

I was just making sure

I was just making sure people read through to the end.

Good point

It may also explain why people persist in preferring Lisp. ;)

Not scary?

Check out this little Scheme program by Matthias Felleisen. It's got mutually-recursive self-modifying higher-order procedures that use call-cc! That is totally wicked cool, but "isn't really scary" is not how I'd describe it. :)

Nice! But call/cc isn't

Nice!

But call/cc isn't scary. What I find scary in Scheme are type errors that manifest themselves late in the life cycle...

Category mismatch

I was talking about the behavior of the characters in the comic! Any resemblance to real programming languages appears to be entirely coincidental.

[P.S. Besides, a true Scheme wizard would not be nearly as blasé about side effects as the comic's wizard.]

"the world's finest imperative language"

Going off on a tangent here... Haskell people often say this bit about it being the finest imperative language, but as much as I love Haskell, I'm not sure I agree with it.

Monads are cool and important for a lot of reasons, but the way they are used in Haskell to model effects doesn't seem to be ideal to me. As long as your code does only one kind of effect, everything is fine. But what if you want to use state and IO in the same function? Ok, you use monad transformers, but this already feels somewhat clumsy and awkward. If there are more than two effects, well, good luck, it's going to be a fun ride.

So ok, you can say that you should never use more than two effects in the same function, and should avoid using more than one. I don't know whether this is reasonable. Also, having to lift all your functions to work in monads is a bit tedious too.

I don't much like invisible side effects, as in ML languages, where you can "hide" a side effect inside what looks like a pure function; but the current Haskell way of using monads may not be the end of the story. I hope it's not, at least.
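For instance, even the two-effect case already forces a transformer and explicit lifting. A minimal sketch, assuming the mtl library (the function name is invented):

```haskell
import Control.Monad.State

-- State plus IO in one function: StateT is needed, and the IO
-- actions have to be lifted into the combined monad by hand.
greet :: StateT Int IO ()
greet = do
  modify (+ 1)                                 -- the State effect
  n <- get
  lift (putStrLn ("call number " ++ show n))   -- the IO effect, lifted

main :: IO ()
main = evalStateT (greet >> greet) 0
-- prints "call number 1" then "call number 2"
```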

You need a type system that

You need a type system that handles effects. (For some reason this became clear to me the other day in the shower when I was adjusting the water temperature and reaching for the soap at the same time...)

I think there are advantages

I think there are advantages to a functional encoding of effects (e.g. with monads) over building the effects into your logic/type system (recently Disciple or Separation Logic). It makes the core of your language simpler and is generally more powerful (at least seems so to me - how do you use an effect system to implement software transactional memory?)

I agree with Andrei that Haskell is presently far from ideal as an imperative language, and the (relative) ease with which you can do some hard things (e.g. custom interpretation of monads) is often overshadowed by the difficulty of doing things that would be simple in a conventional imperative language. (EDIT: The newtype deriving stuff Derek concurrently posted about does make the situation much better - but it's still far from ideal IMO)

STM is a language change

How do you use monads to implement STM? The language has to change; in Haskell they used monads as a notational device for STM, but the semantics of the language was extended. Of course you can implement STM in Haskell without changing the semantics of the language, since it is Turing complete, but the same can be done in (Turing complete) languages with effect systems. Another thing to notice is that monads are a device available to any programming language: a monad is just a data type, and the do-notation is just syntactic sugar. For example, Disciple could have a Monad class and use return and >>= instead, with a special main that interprets the built monad and performs the effects. If the given monad uses control effects it will require additional interpretation (IIUC Disciple only tracks state-like effects, not things like shift/reset or retry from STM).

IIRC, monad is a term like

IIRC, monad is a term like group that is abused by convention. That is, it formally refers to a mathematical structure consisting of a set and some operations, but is often used to refer to just the set. That the monadic structure is explicitly identified with a typeclass in Haskell is irrelevant to most everything we're discussing here.

The relevant bit is that the operations on e.g. IO are not presented as side-effects to special functions that need a special type (effect) system to describe, but rather are presented in terms of the functional behavior of your code (i.e. the types of the continuations you pass in).

Whether or not STM requires a change to the underlying semantics of your language depends on what language you're talking about. If you already have state and some basic concurrency primitives, then you should be able to roll it yourself I think.

Semantics semantics

Maybe pedantic, but I also think you have to be careful to define 'semantics' to be able to say that you require a semantic change to implement e.g. concurrency. A language will have an 'internal semantics' that merely maps programs to their functional behavior without assigning any external meaning to IO, producing values polymorphic in runtime interpretation. This semantics doesn't require change.

Bathtub insights...

A very Archimedean moment!

Monads and monad transformers in Haskell

If you want to use more than two effects you simply use monad transformers again. Nowadays, with GHC's newtype deriving and the MonadState/MonadReader/etc. classes, it's pretty much free to do this with however many effects you want. For example, from XMonad we have,

newtype X a = X (ReaderT XConf (StateT XState IO) a)
    deriving (Functor, Monad, MonadIO, MonadState XState, MonadReader XConf)

This demonstrates using three "effects" at a time (in a particularly common arrangement no less). This automatically does all the lifting of functions except for the IO ones since they are not parameterized; however, liftIO is automatically created so you can lift an IO computation into an X one without needing to know how many layers there are.

If another monad "layer" was added, absolutely nothing would need to change except the "runX" function, though you'd probably add MonadFoo to the deriving clause.

Using more than two effects is not at all uncommon and, nowadays, is exactly as difficult as using two, which is not at all hard. One of my earliest Haskell programs used a monad "stack" four or five levels deep (and this was way before newtype deriving).

In a nutshell, nowadays monad transformers are no harder to use than monads themselves and are handled in a way that has rather good software engineering properties. Are there better ways of handling effects (in an explicit manner)? Possibly, probably even. Could even Haskell's usage of monads be improved? Yes, but it is very clear that this has been thought about and dealt with, with the weight of years of experience.
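For comparison, here is a self-contained toy version of the same pattern (a sketch assuming mtl and a modern GHC with GeneralizedNewtypeDeriving; App, runApp and step are invented names):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving, MultiParamTypeClasses #-}
import Control.Monad.Reader
import Control.Monad.State

-- Three "effects" -- environment, state, IO -- in one newtype.
newtype App a = App (ReaderT String (StateT Int IO) a)
  deriving (Functor, Applicative, Monad, MonadIO,
            MonadState Int, MonadReader String)

-- Only the runner knows how the layers nest.
runApp :: App a -> String -> Int -> IO (a, Int)
runApp (App m) conf s0 = runStateT (runReaderT m conf) s0

step :: App ()
step = do
  name <- ask                                  -- Reader effect, no lift
  modify (+ 1)                                 -- State effect, no lift
  n <- get
  liftIO (putStrLn (name ++ ": " ++ show n))   -- liftIO crosses all layers

main :: IO ()
main = do
  _ <- runApp (step >> step) "demo" 0          -- prints "demo: 1", "demo: 2"
  return ()
```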

Monad Transformers

Ok, you use monad transformers, but this already feels somewhat clumsy and awkward.

I am new to Haskell, but my experience with monad transformers has so far been smooth. I too found the "lift" count a horrible way to denote which monad in the stack I want to use, and the MonadIO and other such classes quite ad hoc.

But a suggestion from #haskell resolved all of my woes: just newtype your monad transformer stack, deriving Monad and MonadTrans (you may have to specify MonadTrans.lift manually). Then you can create "liftName1", "liftName2", and other such functions for your monad transformer, where Name1 and Name2 are not the type of the monad (as in, e.g., MonadState or MonadIO) but the actual environment or purpose of the monad, making the code readable.

My monad stack was for GUI code, and had a ReaderT for the GUI's environment, a StateT for the GUI state, and a StateT to represent the model the GUI is editing.

I had: liftModel, liftGuiState and liftEnvironment (actually I only had getEnv because that's the only thing I'd do with the environment).

This makes the code very reasonable, e.g.: liftGuiState State.modify modifier...

It does mean that each monad transformer comes with a short boilerplate (specifying the lifters, and a MonadTrans instance), but so far it does not seem like a big deal. Perhaps some new syntax or otherwise novel approach can get rid of that boilerplate too.
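A stripped-down sketch of that stack (assuming mtl; the concrete types and lifter bodies here are illustrative rather than my actual code):

```haskell
import Control.Monad.Reader
import Control.Monad.State

type Env      = String    -- the GUI's environment
type GuiState = Int       -- the GUI state
type Model    = [String]  -- the model being edited

type Gui a = ReaderT Env (StateT GuiState (State Model)) a

-- Lifters named after purpose, not after monad type:
getEnv :: Gui Env
getEnv = ask

liftGuiState :: State GuiState a -> Gui a
liftGuiState m = lift (StateT (return . runState m))

liftModel :: State Model a -> Gui a
liftModel = lift . lift

click :: Gui ()
click = do
  e <- getEnv
  liftGuiState (modify (+ 1))   -- clearly acts on the GUI state
  liftModel (modify (e :))      -- clearly acts on the model

main :: IO ()
main = print (runState (runStateT (runReaderT click "env") 0) [])
-- prints (((),1),["env"])
```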

This comic is offensive to

This comic is offensive to Haskell and must be taken down. I have started a petition to remove the insulting comic and I hope you will support freedom by signing it.

Chronological discrepancy

Should I adjust the timestamp on your comment for those of us east of Pacific time? ;)

great comic!

That was actually funny! And with a surprise ending!
That's all

Zap

It was neat to see Flakey Foont and Mr. Natural on a programming adventure, but Crumb's really let his crosshatching go.

every day

every day is april 1, from now on, until conditions change.

I don't believe you

I don't believe you-- your post was clearly time-stamped 2008-04-02, which according to you is April 1st. Also, this statement is false. That is all.

truth is

i've realized i do actually love big brother. (2+2=5 if you look at it right).

Many drawbacks of monads and monad transformers

One must keep in mind that monad transformers simply lack expressiveness and are fundamentally inefficient. First of all, the order of effects may matter: when adding transformers for the state and the backtracking effects, the order matters a great deal. Sometimes we need both persistent and backtrackable state, so we need two instances of StateT. We distinguish them via the number of 'lift' operations -- using unary notation. This is error prone and inconvenient, not to say ridiculous. One should also keep in mind that each layer adds overhead (e.g., a layer of closures for ReaderT and StateT) that is present even in code that does not use that particular effect. Please contrast with OCaml: code that does not use mutation or backtracking can be compiled without any knowledge of mutation and backtracking. In contrast, monadic code that performs no State or backtracking operations still has to thread the state throughout, even if that piece of code does nothing with the state.
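Concretely, the unary-'lift' bookkeeping looks like this (a sketch assuming mtl; the names are invented):

```haskell
import Control.Monad.State

-- Two State effects, distinguished only by how many times we 'lift'.
type Two a = StateT Int (State String) a

bump :: Two ()
bump = modify (+ 1)             -- zero lifts: the outer Int state

note :: String -> Two ()
note s = lift (modify (++ s))   -- one lift: the inner String state

main :: IO ()
main = print (runState (runStateT (bump >> note "x" >> bump) 0) "")
-- prints (((),2),"x")
```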

The most serious problem with monad transformers is that they simply
lack expressiveness. Monad transformers impose a fixed ordering of
effects, which is not sufficient in practice. Our paper on Delimited
Dynamic Binding (ICFP06) discusses the issue and points to Haskell
code. That lack of expressiveness was quite surprising to us when we
realized it. Incidentally, this result helps explain the popularity of
powerful flat monads, such as IO.

I think there are advantages to a functional encoding of effects
(e.g. with monads) over building the effects into your logic/type
system (recently Disciple or Separation Logic). It makes the core of
your language simpler and is generally more powerful

The more powerful claim is provably false. Please see the works of
Filinski and Wadler, Thiemann. In general, type-and-effect systems are
equivalent to (flat) monadic ones. In addition, if you use monad
transformers, you provably lose power.

As to simplicity, it seems that the unending stream of monad tutorials explaining once again what is essentially function composition (or A-normal form, to be more precise) suggests that perhaps the concept of a Monad isn't at all a good match for programming language practice. Monads may be deeply rooted in Category Theory -- but then Peano numerals too have the clearest mathematical foundations, and yet nobody seriously proposes to use them for practical numerical computations.

I'm extremely delighted to see the work by Ben Lippmeier. Finally
someone answers the call Filinski made back in 1994, in his paper
Representing Monads. Please see the Conclusion section of the paper,
especially the paragraph starting with the phrase ``But surely there
is more to "functional programming with escape and state" than monadic
effects. After all, monads provide only the lowest-level framework for
sequencing computations.'' I'm anxiously looking forward to the
further development of The Disciplined Disciple Compiler.

how do you use an effect system to implement software
transactional memory?

Very easily: the function atomically takes a thunk whose type dictates
that the thunk may not do any observable effect. If the thunk needs to
access or modify a global variable, it has to ask (using
continuations, for example) the `operating system', the central
arbiter. The arbiter decides on the appropriate policy, be it
optimistic or pessimistic. Please see the Zipper File System, which is naturally transactional.

Thanks for the detailed

Thanks for the detailed reply, Oleg. I'm actually aware of much of the work you cite, including (some of) Wadler and Filinski's work and your own ICFP06 paper. I deliberately advocated a 'functional encoding (e.g. monads)' so as not to limit my meaning to what Haskell currently does (I even made reference to the current situation in Haskell being far from ideal elsewhere in this thread).

One must keep in mind that monad transformers simply lack
expressiveness and are fundamentally inefficient.

I think there are two separate issues here:
1. How are effectful capabilities presented? Are effects modeled in terms of their functional behavior, or are they treated specially, as something outside of the language?
2. How are they implemented? E.g., is state implemented by threading values along?

The implementation issue is clear cut. You can't simulate state or multi-processing without gross inefficiency. And it doesn't even make sense to simulate effects like IO with pure functions when the effects are the entire purpose.

The more powerful claim [comparing a functional encoding to an effect system] is provably false...

Well, certainly if you get to choose what I meant by powerful ;).

how do you use an effect system to implement software
transactional memory?

Very easily: the function atomically takes a thunk whose type dictates
that the thunk may not do any observable effect. If the thunk needs to
access or modify a global variable, it has to ask (using
continuations, for example) the `operating system', the central
arbiter.

In other words, you go back to a functional interface for this part? Suppose you have a system that was written with very modest atomicity requirements - it just had to verify that atomic blocks were in fact atomic. So it was written in an effectful style, with 'beginAtomic' and 'endAtomic' producing effects that just manipulated counters in memory. How do you now provide implementations of 'beginAtomic' and 'endAtomic' that implement STM without changing their interfaces?

I don't see how to do that with effects. This is the sense in which I suspect monadic style to be more powerful - it's more resilient to reinterpretation.
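To sketch the kind of resilience I mean (every name here is hypothetical, and the second instance is only a stand-in for a real STM-backed one):

```haskell
{-# LANGUAGE FlexibleInstances #-}
import Control.Monad.State

-- The interface client code is written against, once.
class Monad m => MonadAtomic m where
  beginAtomic :: m ()
  endAtomic   :: m ()

client :: MonadAtomic m => m ()
client = beginAtomic >> endAtomic

-- Interpretation 1: the modest version that just counts begins and ends.
instance MonadAtomic (State (Int, Int)) where
  beginAtomic = modify (\(b, e) -> (b + 1, e))
  endAtomic   = modify (\(b, e) -> (b, e + 1))

-- Interpretation 2: a placeholder where a real implementation would do
-- its transactional bookkeeping instead of printing.
instance MonadAtomic IO where
  beginAtomic = putStrLn "begin"
  endAtomic   = putStrLn "end"

main :: IO ()
main = do
  print (execState client ((0, 0) :: (Int, Int)))   -- prints (1,1)
  client                                            -- prints begin, end
```

The interface never changed; only the interpretation behind it did.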

Drawbacks may be overstated

One must keep in mind that monad transformers simply lack
expressiveness and are fundamentally inefficient. First of all, the order
of effects may matter: when adding transformers for the state and the
backtracking effects, the order matters a great deal. Sometimes we
need both persistent and backtrackable state, so we need two instances
of StateT.

Picking the ordering certainly takes a little getting used to, though I've found a suitable intuition that I'm quite happy with[1]. The example of two instances of StateT strikes me as a feature rather than a bug though: I have no desire to be limited to only one form of state, or to hand-fuse its implementation with the rest of my side-effects!

We distinguish them via the number of 'lift' operations --
using unary notation.

I have to admit that this irritated me the first time I built a monad out of a number of transformers. After writing multiply-lifted versions of a few functions I decided it might just be an idea to name the layers on the stack and write accessor functions equal to an appropriate number of composed lifts - it's never been a problem since.

One should also keep in mind that each layer adds overhead (e.g., a layer of closures for ReaderT and StateT) that is present even in code that does not use that particular effect. Please contrast with OCaml: code that does not use mutation or backtracking can be compiled without any knowledge of mutation and backtracking. In contrast, monadic code that performs no State or backtracking operations still has to thread the state throughout, even if that piece of code does nothing with the state.

This is certainly the case given a typical implementation and naive compilation (though it can be argued that OCaml compilation is always aware of mutation!), and I have to admit that at present I don't entirely trust GHC to optimise it appropriately. That said, we are talking about highly stylised code and it should definitely be possible to eliminate much of the overhead. Indeed, it can be argued that the basic idea behind Boquist's GRIN is to implement a lazy language by building a (restricted) lazy monad on top of a (restricted) strict monad, then inlining the lazy monad away and letting rip on the resulting code.

The most serious problem with monad transformers is that they simply
lack expressiveness. Monad transformers impose a fixed ordering of
effects, which is not sufficient in practice. Our paper on Delimited
Dynamic Binding (ICFP06) discusses the issue and points to Haskell
code. That lack of expressiveness was quite surprising to us when we
realized it. Incidentally, this result helps explain the popularity of
powerful flat monads, such as IO.

It really shouldn't be a surprise - the issue is simply that you have two effects whose interaction in each others' presence is non-trivial and needs to be specified in terms of the other effect. As such, a full specification of the effects is mutually recursive! It then follows that we can't build them individually without the use of a letrec-like construct or an appropriate combinator to put them together - the only reason it is surprising is that our (yes, mine included) thinking about effects and constructs such as monads is still partly-formed and otherwise obvious things are therefore hidden from our view.

This isn't an issue with monad transformers at all - it's an issue with combining effects under an open-world assumption where new effects can be defined by a user. To put it another way, the issue comes not from the encoding but from the problem domain! It's not, however, an actual loss of expression so long as a monad transformer can be written offering the appropriate combination of effects. Rather, it's a net gain on the flat monad - an effect's extent is delimited by the function running its corresponding monad transformer.

As the paper points out, the effects could still be encoded as a stack of transformers - one per delimiter - though a static stack is not sufficient. This raises a question: can we maintain a suitable dynamic stack (whether of monad transformers or something merely related) and present it as one transformer from the outside while handing out references to layers on the inside?

I think there are advantages to a functional encoding of effects
(e.g. with monads) over building the effects into your logic/type
system (recently Disciple or Separation Logic). It makes the core of
your language simpler and is generally more powerful

The more powerful claim is provably false. Please see the works of
Filinski and Wadler, Thiemann. In general, type-and-effect systems are
equivalent to (flat) monadic ones. In addition, if you use monad
transformers, you provably lose power.

You haven't shown a loss of power above - is there a result I'm unaware of?

As to simplicity, it seems that the unending stream of monad tutorials explaining once again what is essentially function composition (or A-normal form, to be more precise) suggests that perhaps the concept of a Monad isn't at all a good match for programming language practice. Monads may be deeply rooted in Category Theory -- but then Peano numerals too have the clearest mathematical foundations, and yet nobody seriously proposes to use them for practical numerical computations.

That wasn't the value of simplicity being used in the comment quoted.

I'm extremely delighted to see the work by Ben Lippmeier.

I'm definitely looking forward to playing with it some - though I'm looking to use such a language as an underlying metalanguage rather than directly managing all my effects in it.

[1] Imagine you're playing, say, a board game. There are many rules, and on occasion as stated they may conflict (this is equivalent to effect interaction). As a means of resolving the conflict, they have been numbered - lower numbered rules take priority.

The monad transformer stack is a collection of rules, and the monad on the bottom of it is item 0 (this is why there's no IOT - however powerful your language, it can't produce programs that override the laws of physics!).

Monad Tutorials

As to simplicity, it seems that the unending stream of monad tutorials explaining once again what is essentially function composition (or A-normal form, to be more precise) suggests that perhaps the concept of a Monad isn't at all a good match for programming language practice.

The question I have about this comment is whether the popularity (and necessity) of monad tutorials is a function of monads' poor match for programming language practice, or a function of widespread programmer unfamiliarity with function composition and similar functional-language idioms. I'd bet the latter.

The vast majority of programmers have never used a closure, have never used recursion instead of looping, have never used a Hindley-Milner type system (or any type system more advanced than that of Algol-68), and have never used function composition. These basic ideas, which OCaml/Haskell/SML programmers take for granted, come from Mars as far as most programmers are concerned. So it's no surprise to me that more advanced ideas, or even the same ideas presented in a slightly non-standard way, require explanation.