Question about the Monad associativity law

A really dumb question...

The associativity law states (using Haskell syntax):

m >>= (\x -> k x >>= h) = (m >>= k) >>= h

What I don't understand is why "bind" can be associative. One use of monad is to encapsulate side-effects. And the order of operations is important when there are side-effects. If so, why does associativity hold?

...with wrong ideas that appeal to you...

This isn't actually a dumb question, as it reveals an important subtlety of monads. The short answer is that changing the association doesn't actually change the order of operations (it's "m, k, h" on both sides). It would be far stranger if bind were required in some sense to be commutative! But probably the short answer doesn't help that much...

In order to really understand this, it's important to remember that the monad operations (such as bind) are a distinct layer of operations from the "actual" operations (m, k, and h) being manipulated. All the associativity requirement says is that I should be able to build my composite operation in any order, and as long as it's made of the same basic operations in the same order, it'll be the same operation. At the risk of giving a wrong intuition, you can (sort of) think of it like building a list. If m, k, and h are lists and ++ is concatenation, you'd certainly expect that (m ++ k) ++ h == m ++ (k ++ h). This is the kind of associativity we're dealing with here.
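To make the list analogy concrete, here's a minimal sketch (the definitions of m, k, and h are mine, chosen for illustration) checking both groupings of the law in the list monad:

```haskell
-- A small check of the associativity law in the list monad.
m :: [Int]
m = [1, 2]

k :: Int -> [Int]
k x = [x, x * 10]

h :: Int -> [Int]
h x = [x + 1]

lhs, rhs :: [Int]
lhs = m >>= (\x -> k x >>= h)   -- [2,11,3,21]
rhs = (m >>= k) >>= h           -- [2,11,3,21]
```

Both sides apply m, then k, then h in the same left-to-right order; only the bracketing differs, and the results are equal.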

The reason it's important has little to do with the actual underlying operations (IO actions, lists, etc.). It has more to do with the expected semantics of nesting monad operations, Haskell's "do" notation, etc. I think you might find this post helpful (and hopefully more clear than what I've written).

Effects are associative

You have three things to do: m, k, and h.
So you either do
(m and k) ... then h
or
m ... then (k and h)

You're making a cake: mix batter, bake, eat.
You can think of this in two ways:
First I'm mixing batter and baking, and later I'll eat the cake.
Or
First I'm mixing the batter, and later I'll bake and eat.
But they both do the same thing. You can extend this by thinking that each stage produces a value (mixed batter, baked cake, full tummy). The operations are still associative.
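The cake analogy transcribes directly into IO. This is just a sketch (the stage names are mine); the point is that both bracketings perform the stages in the same order:

```haskell
-- Three stages, each producing a value for the next.
mixBatter :: IO String
mixBatter = putStrLn "mixing" >> return "batter"

bake :: String -> IO String
bake _ = putStrLn "baking" >> return "cake"

eat :: String -> IO ()
eat _ = putStrLn "eating"

-- Both of these print "mixing", "baking", "eating", in that order:
cake1, cake2 :: IO ()
cake1 = (mixBatter >>= bake) >>= eat
cake2 = mixBatter >>= (\b -> bake b >>= eat)
```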

No mystery.

A bit misleading

As in this case k may be reduced to id, leading to the same or even greater effect (if baking powder is involved).

SCNR

Great example

That was a terrific example, Lennart! I was going to give a mathy proof but the cake is far more instructive (and much shorter).

I like your example, too.

I like your example, too. There's a long standing tradition in Eastern philosophy of elucidating -- not proving -- complicated ideas with metaphors, and computer science would do well to take that up.

It's only sort-of

It's only sort-of associativity, precisely because of the lambda abstraction. It's saying something like:

doing m then doing k then doing h

is the same as

preparing the groundwork to do k then h, then doing m then finishing the job

I guess in the "side-effecting monads as shell scripts" analogy, you're saying: you can write the bits of the script in any order, as long as they appear in the correct order on the page (a bit like list associativity), and as long as, when you come to execute some of it, you're not trying to execute up to line n before you've written out lines 1 to (n-1).

Associativity isn't about order

Consider monoids first, which are slightly easier. Commutativity is about order. In a commutative monoid, say, we have AB=BA, so order doesn't matter. But in general, monoids aren't commutative and order does matter.

Associativity for monoids states that A(BC)=(AB)C. Note that the order of the elements doesn't change. In effect, associativity states that the way we choose to group the elements into a sequence of binary products shouldn't matter, but it says nothing about the order.

The same happens with a monad. There are such things as commutative monads where order doesn't matter (eg. the Reader monad), but much of the time order does matter. But all monads are associative. The associativity law for monads is about how we group the subexpressions, not about the order. This is more apparent if you look at the monad laws using Haskell do-notation.
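In do-notation the law reads as a regrouping, not a reordering. A sketch, generic in the monad (the function names are mine):

```haskell
-- Associativity says these two definitions are equal for every monad:
grouped :: Monad m => m a -> (a -> m b) -> (b -> m c) -> m c
grouped act k h = do
  y <- do x <- act
          k x
  h y

flat :: Monad m => m a -> (a -> m b) -> (b -> m c) -> m c
flat act k h = do
  x <- act
  y <- k x
  h y
```

The actions still run as act, then k, then h in both versions; only the nesting of the do-blocks differs. Equivalently, the law says Kleisli composition (>=>) from Control.Monad is genuinely associative.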

Exactly, just an axiom

As sigfpe states, it is an axiom about the associativity of the >>= operator.

This has nothing to do with execution order!

The problem is that no execution order is assumed in ordinary mathematics. That is, for an expression f(g(x)) there is no evaluation order for f or g; indeed, f(g(x)) simply is the result it denotes.

Evaluation order is normally assumed, and sometimes explicitly stated, in mathematical texts dealing with operational semantics. So this particular axiom says nothing about the evaluation order of actions, just about the equivalence of chained actions.

I go back to first

I go back to first principles to try to understand the mechanism. I have this observation. Could someone tell me if I'm correct or not? Here it is:

Monads work the way they do only because a call-by-need, call-by-name, or call-by-value reduction strategy is used, so that the expression inside an abstraction won't be evaluated. If we used normal-order reduction, monads wouldn't work as expected.

No, in monads that can't be

No. In monads that can't be implemented from within the language in question, evaluation isn't the mechanism that performs side-effects (so the mechanism that does perform them has to cause further evaluation when something depends on the result of an action).

Philippa, you have

Philippa, you have completely lost me... 8-}
Would you mind elaborating?

Okay, I'm going to use

Okay, I'm going to use Haskell as an example.

Some monads can be implemented completely within Haskell - State would be an example. Running a computation 'c' in the State monad amounts to evaluating 'runState c', and this'll work fine - modulo termination you'll get the same result with any evaluation order.

Some monads can't be implemented completely within Haskell - notably IO. If I fully evaluate 'print "Hello World!"', I don't get "Hello World!" on my screen - I get a value representing the action of printing it. If I run the program 'main = print "Hello World!"' then conceptually what happens is that 'print "Hello World!"' gets evaluated and the resulting action is executed. Execution is not the same as evaluation, and the behaviour of the monad doesn't rely on evaluation order.
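A small sketch of that distinction (the helper name is mine): evaluating an IO expression just yields an action value, and nothing happens until the runtime executes that action as part of main.

```haskell
hello :: IO ()
hello = print "Hello World!"

main :: IO ()
main = do
  let action = hello   -- 'hello' is evaluated to an action value here;
                       -- nothing is printed yet
  putStrLn "before"
  action               -- the runtime executes the action: now it prints
```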

Hmm... Let me ask this way:

Hmm... Let me ask this way: Would monads in Haskell still work the same if Haskell used normal order reduction?

Yes. Those monads

Yes. Those monads implemented in Haskell can't tell, those implemented outside the language don't care.
