Best Introduction To Monads For Newbies (& Especially Imperative Minds) I've Ever Read!!!

This is the best introduction to "Monads" for newbies and the
imperative-minded that I've ever read! For a long time I was searching
for such an excellent text on monads. Of course there are many good
texts out there, but all of them are about mathematics and how clever
monads are, and most of them are useless for a practical approach.
Take a look at this
http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html
!!


Still hard

Sorry, but this is hard to understand.
The syntax doesn't help, IMHO.
This kind of declaration is really ugly to read:
bind :: (a → StdGen → (b,StdGen)) → (StdGen → (a,StdGen)) → (StdGen → (b,StdGen))
for readers not used to functional notation.

In imperative languages, people use typedef to avoid such long declarations.

Maybe hard, but manageable

In imperative languages, people use typedef to avoid such long declarations.

No, in (certain) imperative languages people use typedef to avoid abominations like

template<class A, class B>
pair<B,StdGen> (*(*bind(pair<B,StdGen> (*(*)(A))(StdGen)))(pair<A,StdGen> (*)(StdGen)))(StdGen);

which are not hard but impossible to understand. ;-)

That's what I call "syntax that doesn't help"!

Edit: Not that

template<class T>
typedef pair<T,StdGen> Randomised(StdGen);

template<class A, class B>
Randomised<B> *(*bind(Randomised<B> *(*)(A)))(Randomised<A>*);

would be any more pleasant to read. And the typedef there isn't legal either...

Edit 2: Changed auxiliary type name to correspond to article.

type Randomised a

The type (from the article)

type Randomised a = StdGen -> (a, StdGen)

allows one to specify the type of bind as

bind :: (a -> Randomised b) -> Randomised a -> Randomised b

That's the point!

In imperative languages, people use typedef to avoid such long declarations.

In Haskell, people use Monads to avoid such long declarations. But some people find them hard to understand....

Keep working at it. It'll make sense in the end. :-)

Having said that...

... that type declaration is probably the hardest part of the article, and the paragraph just before it is garbled. It should read something like:

We now must work out how to compose two randomised functions, f and g. The first element of the pair that g returns needs to be passed in as an input to f. But the seed returned from g also needs to be passed into f. So we can give this signature for bind:

bind :: (a → StdGen → (b,StdGen)) → (StdGen → (a,StdGen)) → (StdGen → (b,StdGen))

Still confused? Okay, let's use the following type declaration, which is mentioned later in the article:

type Randomised a = StdGen → (a, StdGen)

This says that we can only know the true value [a] of a random value [Randomised a] after we have passed in the seed [StdGen]. When we pass in the seed [StdGen →], we get out a pair consisting of the true value and the next random seed [(a, StdGen)].

Now, recall that f is a function taking a deterministic input, of type a, and returning a non-deterministic output, of type Randomised b:

f :: a → Randomised b

Similarly, g takes input of some as-yet-unspecified type (call it c) and returns a non-deterministic a:

g :: c -> Randomised a

bind allows us to compose f and g. It does this by converting f into a function that accepts a non-deterministic input:

bind f :: Randomised a -> Randomised b

or in other words:

bind :: (a -> Randomised b) -> (Randomised a -> Randomised b)

Composing bind f with g gives:

bind f . g :: c -> Randomised b

Does that help?
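If it helps to see all the pieces executed, here is a minimal, self-contained sketch of the above. The names nextRand, addRand and twoSteps are made up for illustration, and the seed is a toy Int rather than the real StdGen from System.Random:

```haskell
-- A toy stand-in for StdGen so the example is self-contained.
type StdGen = Int

type Randomised a = StdGen -> (a, StdGen)

-- A toy pseudo-random step: returns the current seed as the "random"
-- value, and a new seed computed with a linear congruential formula.
nextRand :: Randomised Int
nextRand seed = (seed, (seed * 1103515245 + 12345) `mod` 2147483648)

-- bind converts f into a function that accepts a non-deterministic
-- input: it runs the Randomised a to get (x, seed'), then feeds both
-- the value and the new seed into f.
bind :: (a -> Randomised b) -> Randomised a -> Randomised b
bind f ra = \seed -> let (x, seed') = ra seed in f x seed'

-- A "randomised function": deterministic input, non-deterministic output.
addRand :: Int -> Randomised Int
addRand n = \seed -> let (r, seed') = nextRand seed in (n + r, seed')

-- bind addRand :: Randomised Int -> Randomised Int, so ordinary
-- function composition now works:
twoSteps :: Int -> Randomised Int
twoSteps = bind addRand . addRand

-- fst (twoSteps 0 1) == 1103527591: 0 plus the two "random" numbers
-- generated from seed 1 (first 1, then 1103527590).
```

The seed threading that made the original signature so long is now entirely hidden inside bind.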

Yes, thanks using the type

Yes, thanks using the type helps a lot.

Now I must try to understand the rest :-)
The first two examples were very easy to understand; the third caused me trouble, and I've not managed to grasp the end of the article.

As an additional question, why do you call the seed 'StdGen'?

Unpacking declarations

StdGen = standard generator (I think), corresponding to a type in the standard library. But here it's just a name that could represent any type of random number seed.

If you find it easier to work with the declarations that Peter has provided then you're at an advantage, as that's how real code is written. The reason I chose to write the article with unpacked declarations was that I (and presumably some others) was having trouble reading real source code, and I found that some things became much clearer when I had the entire unpacked definition in front of me.

Your comments have been valuable as I now have a good idea of what the hardest part is to understand. I may add in a diagram or use some colour coding to make the long declaration easier for a human to parse.

Edit: This is of course a reply to renox. There's something about the layout of these web pages that induces me to hit the wrong button to reply.

I think an explanation without code would be a better start.

Although I find the introduction very good, I think a shorter introduction without Haskell code would be better. Personally, I would use graphical shapes to illustrate the functions, of course after explaining the problems monads solve.

My way to think about monads

My way to think about monads is simply to look at them as a way to force execution order in a language without a specified evaluation order of arguments.

If there is no way to say in which order the arguments of the function f(x, y, z, ...) are evaluated, then we have a problem if those arguments create side effects (doing IO, for example). So how do we force a definite order? We use the fact that a function is evaluated after its arguments are.

So in f(g(h())), h has to be evaluated before g, and g before f. Now we have fixed the evaluation order to h->g->f. Monads simply encapsulate this behavior using HOFs, and also encapsulate the chaining of values through the calls.

That's it about monads; the rest is implementation details and 'theoretical mumbo-jumbo'. And because execution order is defined by the monad and not by the language itself, we can create monads which allow for more complex kinds of execution order, like backtracking etc.
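As a sketch of that idea (using a hand-rolled logging type invented for illustration, not a library one), here is a bind that forces h to run before g before f, with the log recording the order:

```haskell
-- A value together with a log of the steps that produced it.
data Logged a = Logged [String] a deriving (Show, Eq)

-- bind runs the first computation, then feeds its result to the next
-- one, concatenating the logs in that order. Because the second step
-- needs the first step's value, the order h -> g -> f is forced.
bind :: Logged a -> (a -> Logged b) -> Logged b
bind (Logged log1 x) f = let Logged log2 y = f x in Logged (log1 ++ log2) y

h :: Logged Int
h = Logged ["h"] 1

g, f :: Int -> Logged Int
g x = Logged ["g"] (x + 1)
f x = Logged ["f"] (x * 10)

-- The equivalent of f(g(h())):
result :: Logged Int
result = h `bind` g `bind` f
-- result == Logged ["h","g","f"] 20
```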

Not just sequencing...

Standard prefix: IANACT (I am not a category theorist)

A paper of McBride's, on what he calls "applicative functors", details a structure weaker than monads (and which therefore covers a superset of them: every monad is an applicative functor). These applicative functors sequence things, but they do so in a static manner. Monads allow more than just sequencing; the paper points out that they allow computed values to affect the control flow of the rest of the computation. Along with arrows, there's probably a decent slew of intermediate-powered structures between monads and pure computation.
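A quick way to see the difference is with Maybe (this Maybe illustration is mine, not from the paper): with the applicative operators the shape of the computation is fixed up front, while with bind an earlier result can choose what runs next:

```haskell
-- Applicative style: both arguments are combined by a fixed recipe;
-- no intermediate value can change the structure of the computation.
applicativeStyle :: Maybe Int
applicativeStyle = (+) <$> Just 1 <*> Just 2   -- Just 3

-- Monadic style: the value x, extracted at run time, decides which
-- branch (and hence which effects) come next.
monadicStyle :: Int -> Maybe Int
monadicStyle n =
  Just n >>= \x ->
    if x > 0 then Just (x * 2)
             else Nothing
-- monadicStyle 5 == Just 10, monadicStyle (-1) == Nothing
```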

1.5 out of two

My way to think about monads is simply to look at them as a way to force execution order in a language without a specified evaluation order of arguments.
...
Monads now simply encapsulate this behavior using HOFs and also encapsulate the chaining of values through the calls.

Sequencing is one of the foundational features of monads, but on its own it doesn't characterize monads properly. After all, CPS also enforces sequencing, and is much simpler. With sequencing alone, you would not be able to "encapsulate the chaining of values through the calls". That feature relies on the other foundational feature of monads, which is that they wrap values. It is the combination of these two foundational features which supports the full capabilities of monads:

1. Wrapping values in an arbitrary wrapper, specific to each type of monad (performed by 'unit' a.k.a. 'return').
2. Sequencing steps of a computation using an explicit binding/application operator ('bind'), specific to each type of monad.

Most interesting uses of monads depend on both of these features. The fact that monadic values are wrapped allows (among other things) state to be squirreled away "in the monad"; and the fact that 'bind' is invoked between each step of a monadic computation provides a place to perform operations on that squirreled-away state, and to perform any other operations that need to happen at each step.
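A tiny hand-rolled state monad (a sketch, not Control.Monad.State; the names Counted, unit, bind and run are invented here) makes both features concrete: 'unit' wraps a value, and 'bind' threads a hidden counter between the steps, performing its "operation at each step" by bumping the counter once per bind:

```haskell
-- A computation that, given a counter, produces a value and a new counter.
newtype Counted a = Counted (Int -> (a, Int))

-- unit (a.k.a. return) wraps a value without touching the hidden state.
unit :: a -> Counted a
unit x = Counted (\n -> (x, n))

-- bind threads the counter through both computations, and increments
-- it once at every step of the chain.
bind :: Counted a -> (a -> Counted b) -> Counted b
bind (Counted m) f = Counted $ \n ->
  let (x, n')    = m n
      Counted m' = f x
  in m' (n' + 1)

run :: Counted a -> (a, Int)
run (Counted m) = m 0

-- Two binds: the result is 22, and the counter "in the monad" says 2.
example :: (Int, Int)
example = run (unit 10 `bind` (\x -> unit (x + 1))
                       `bind` (\x -> unit (x * 2)))
-- example == (22, 2)
```

The caller never mentions the counter; it is squirreled away in the wrapper and touched only inside bind.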

I totally agree that my

I totally agree that my description is a bit short, but I think that if you want to explain a concept like monads to the 'imperative mind' (which most programmers still have) you first have to explain the foundation and motivation of the concept. And that is: sequencing (by using calls which have a definite execution order even in a pure functional language) and value-chaining through those calls.

If you mention things like 'category theory' and give a strict mathematical definition of the concept, most people will simply 'shut down' immediately, because they aren't used to strict mathematical thinking. Monads still sound quite 'esoteric' to most 'normal' programmers. Many of them still have difficulty really grokking closures. And don't even mention continuations. Maybe we even need a more 'down to earth' name than 'Monads' for it.

I admit I had the same problems in the beginning and it took some time for me to get the point. If someone simply had told me the above, I'm sure it would've taken less time.

And after you have explained the motivation and the basic principles, and those principles are really understood (which seems trivial to many of the people here, but is far from trivial for most programmers out there), then you can go into the details. And yes, at that level monads really are more than the basic concept - but not much more.

Confirmed

I can only agree: Karsten's sketch gives me a basic idea I can grasp and build on, while other explanations were like a slippery fish that evaded me as a whole whenever I tried to grasp them.

Moggi's view of computational monads

When I was learning about monads the papers that really nailed it for me were some of Moggi's original papers on the denotational semantics of lambda calculi extended with various kinds of side-effects. The distinction in this setting between object and meta-language makes a lot of things clearer when you try to intuit monads in terms of the Kleisli arrow interpretation (which is the only way to go, IMO). These papers are quite readable if you ignore the more heavy-weight domain theory and try to just focus on the interplay between object and meta-language.

If someone with no idea of monads asked me for three sources to read I'd mention some of Moggi's original papers, Wadler's papers and some of Dan Piponi's blog posts. The one mentioned in this thread is decent but I actually liked some of his other blog posts better since they were based on the Kleisli arrow interpretation.