Prove: 'Cont r a = (a -> r) -> r' forms a monad

I don't follow Haskell too much, and more often than not I disagree with Erik.

However, this came up during an interview so it's supposedly something I should know.

Prove that 'Cont r a = (a -> r) -> r' forms a monad.

Any takers?


So it's a typo?

Mwa. That you can do continuation passing in monadic style seems a no-brainer, since you can do everything™ in monadic style. It looks to me like Meijer had something different in mind, or the question didn't make it into the article correctly.

0 points so far! (In Eurosongthingy style.)


The interview question is to provide "bind" and "return" operators for the parameterized type (Cont r) type, and check that they satisfy monadic laws. "bind" is indeed not obvious.

return : a -> Cont r a
bind : (Cont r a) -> (a -> Cont r b) -> Cont r b
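Spelled out as a sketch in Haskell (my rendering, using a plain type synonym; `ret` avoids clashing with the Prelude's `return`):

```haskell
-- The continuation type from the question, as a synonym.
type Cont r a = (a -> r) -> r

-- return: wrap a value by handing it straight to the continuation.
ret :: a -> Cont r a
ret x = \k -> k x

-- bind: run the first computation with a continuation that feeds
-- its result into the second.
bind :: Cont r a -> (a -> Cont r b) -> Cont r b
bind m f = \c -> m (\x -> f x c)
```

For example, `bind (ret 1) (\x -> ret (x + 1)) id` evaluates to `2`.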

I am worried by the amount of incomprehension of category theory that you displayed in the FRPnow thread -- because it is not conducive to productive discussion and does not make LtU a better community to be part of. dmbarbour is not always easy to follow but his answers make sense to me. Yours do sometimes, but when "monads" or "category theory" are on the table they don't anymore. You could invest some time into learning about them; you have publicly said that you did not plan to, and this is just fine. But if you don't understand them and don't plan to invest more work into understanding them, maybe it would be best for you not to talk at length about them.

Yeah right.

Yeah right.

re: no-brainer

It's an interview question. Even so-called "no-brainers" like FizzBuzz can often filter a depressing proportion of candidates.

Showing Cont is a monad by defining return and bind (and thus showing knowledge of monads, type analysis, etc.), or pointing to the MTL library (and thus demonstrating knowledge of the problem and the available resources, and a healthy habit of looking up existing answers before reinventing old wheels)... either would probably be an acceptable answer in the context of an interview. Though the latter is by far the more dubious answer, and mine was hence rather tongue in cheek.

I certainly wouldn't bet on most people passing. :(

As gasche mentions, the definition for `bind` is not obvious. (It is simple, though.)

Oh, okay

No, I could have come up with those definitions. I understood the question in a different manner. I.e., show that that type is isomorphic to a monad and satisfies monad laws.


The type is not isomorphic to a monad, as a monad has a type like this:

(forall a b . m a -> (a -> m b) -> m b, forall c . c -> m c)

What you want to show is that the type can be substituted for 'm' in the above, and that the resulting operations satisfy the monad laws.

Maybe I am focusing too much on the words and not the intent. I get what you mean; I just think words are important. Certainly my compiler keeps complaining about the words I use in programs all the time :-)

You got it right

No, that's exactly why I found the assignment puzzling. I supposed that since continuations and monads are much alike, roughly something like control operators, the aim of the game was to show isomorphism.

But the only requirement was to show that there are unit and bind operators. I assume that by doing so we haven't shown an isomorphism?

(Actually, Erik uses the terms 'forms a monad'.)

It’s easy, just tedious.

unit = λx k. k x
bind = λm k c. m (λx. k x c)

bind (unit a) f           = f a [left identity]
bind (λk. k a) f          = f a [unit]
λc. (λk. k a) (λx. f x c) = f a [bind]
λc. (λx. f x c) a         = f a [β-reduce]
λc. f a c                 = f a [β-reduce]
f a                       = f a [η-reduce]

bind m unit                 = m [right identity]
bind m (λx k. k x)          = m [unit]
λc. m (λx. (λx k. k x) x c) = m [bind]
λc. m (λx. (c x))           = m [β-reduce]
λc. m c                     = m [η-reduce]
m                           = m [η-reduce]

bind (bind m f) g                       = bind m (λx. bind (f x) g)                 [associativity]
λc. (bind m f) (λx. g x c)              = bind m (λx. bind (f x) g)                 [bind]
λc. (λc′. m (λx′. f x′ c′)) (λx. g x c) = bind m (λx. bind (f x) g)                 [bind]
λc. m (λx′. f x′ (λx. g x c))           = bind m (λx. bind (f x) g)                 [β-reduce]
λc. m (λx′. f x′ (λx. g x c))           = λc. m (λx′. (λx. bind (f x) g) x′ c)      [bind]
λc. m (λx′. f x′ (λx. g x c))           = λc. m (λx′. bind (f x′) g c)              [β-reduce]
λc. m (λx′. f x′ (λx. g x c))           = λc. m (λx′. (λc′. f x′ (λx. g x c′)) c)   [bind]
λc. m (λx′. f x′ (λx. g x c))           = λc. m (λx′. f x′ (λx. g x c))             [β-reduce]


“bind” works by taking an action m and a function k; it returns a function that takes a final continuation c, runs m with a continuation that accepts m's result x, and passes that result on to k together with c. (Assuming for the sake of argument that these things actually do return…)
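The same unit and bind, packaged as a Haskell Monad instance; the newtype wrapper is what makes the instance declaration legal. This parallels, but is not literally, the Cont type shipped in the transformers library:

```haskell
-- The continuation type, wrapped so it can carry instances.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont m) = Cont (\c -> m (c . f))

instance Applicative (Cont r) where
  pure x = Cont (\k -> k x)                         -- unit
  Cont mf <*> Cont mx = Cont (\c -> mf (\f -> mx (c . f)))

instance Monad (Cont r) where
  Cont m >>= k = Cont (\c -> m (\x -> runCont (k x) c))  -- bind

-- Chaining two continuation-passing computations in do-notation.
example :: Cont r Int
example = do
  x <- pure 1
  y <- pure 2
  pure (x + y)
```

Running it with a final continuation, `runCont example show`, produces `"3"`.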

Douze points

Douze points


I wonder if LtU would be better or worse with voting. What we have is a nice balance between the linearity of a forum and Reddit-style threaded discussion.

I sometimes wonder too

But at the same time, you run the risk that all dissent is down-voted. And I am not that happy with LtU. I sometimes wonder whether what is published here should be seen not as masterpieces but as medals for sheer incompetence. That is, no relevant news from industry; and, personally, I am of the opinion that these days all relevant research is in industry, while academia is mostly busy with self-serving navel gazing.

Where's the latest news on C++, untyped languages like Python and Ruby, machine learning, or compilers targeting Google's tensor hardware?

(Anyway, it's a bit of a joke. It's Eurovision songfestival time, and this is the highest grade. You can Google it.)


Providing the full stepwise proofs of monad laws? In an actual interview, you'd likely get extra credit for even thinking to do so. And then you'd probably be stopped so they could move on to another question.


I’m familiar enough with this stuff at this point that the slow part is the physical act of writing it out.


I think this is what the interviewer was looking for. Just defining unit and bind does not make something a monad; those definitions have to satisfy left identity, right identity, and associativity.

What did we learn?

So, with an open mind, the next question:

What did we learn?

Personally, my takeaway is that there is an interpretation under which continuations are naturally chainable, and that if you study this in a pristine mathematical setting there is an algebraic proof of it.

Well, that, and that professors still want you to pass your algebra exam. So nothing much changed there.

Neat. What did I miss?

Left as exercise for the reader (FIXED)

Edit: OOPS! Sorry I shouldn't have posted an exercise from memory. This wasn't supposed to be hard, but when I tried to do it I got stuck. I forgot I need the return type of the continuation to be in the monad, too. And the quantifier over the return type isn't necessary -- it should probably be left a parameter as with Cont.


Show that for any monad m with bind and return (with correct types but satisfying the monad laws or not), you can define bind' and return' making the following type into a monad (satisfying the laws even if m didn't):

type CPm r a = Cont (m r) (m a)

If you're once bitten, twice shy and don't want to spend any more time on my nonsense, I'll post a solution tomorrow.

That's just 'stacking' stuff, right?

Hmm. I passed my algebra exam twenty years ago, so I am slow, but I bet it's just the same trick for introducing a unit' (by also using unit) and then deriving a bind' which looks mostly similar but also employs bind to chain stuff together.

That doesn't look too hard but I'll gladly leave it to others for now.

Edit: Something like?

unit' = λx k. k (unit x)

And the rest follows naturally? Hmm, it's 4:30am here. Tomorrow, maybe.


It wasn't supposed to be hard, but I suspect it may have been ... quite challenging... due to the fact that I bungled the problem statement. See above.

Uhm, why sorry?

It doesn't look hard. Just another algebra exercise where you need to introduce and chain stuff together. But I want to know more about some SMB worm at the moment, so I'll take a raincheck.


Sorry because my initial phrasing of the problem was wrong and probably impossible. But if you're not interested I'll skip the solution. You're right it's not hard. Very similar to the above from Jon.

Oh, I am interested

Interested enough to wanna know whether my initial estimate of unit' was right. And I guess you need to use a bind in bind'.

But I forgot how Haskell works, so I would need to check everything by hand instead of just running the type checker on it, which is too tedious for me.

I'm mobile today so can't

I'm mobile today so can't check either but I think your unit' is right. I think bind' should be:

bind' = λm k c. m (λma. bind ma (λx. k x c))
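In (assumed) Haskell syntax the construction might look like the following sketch. The exercise only assumes bind and return with the right types; using the Monad class for the inner monad here is just a convenience:

```haskell
-- CPm r a = Cont (m r) (m a): answers and intermediate values
-- are both wrapped in the inner monad m.
type CPm m r a = (m a -> m r) -> m r

-- unit': hand the continuation a returned value.
unit' :: Monad m => a -> CPm m r a
unit' x = \k -> k (return x)

-- bind': run the first computation with a continuation that uses
-- the inner monad's bind to extract x before passing it on to k.
bind' :: Monad m => CPm m r a -> (a -> CPm m r b) -> CPm m r b
bind' n k = \c -> n (\ma -> ma >>= \x -> k x c)
```

For instance, with Maybe as the inner monad, `bind' (unit' 1) (\x -> unit' (x + 1)) id` gives `Just 2`.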

Right Kan Extension

These are special cases of a "Right Kan Extension":
Ran g h a = forall b. (a -> g b) -> h b 

Cont r a = Ran (Const r) (Const r) a 
Codensity m a = Ran m m a
There's another version of the continuation monad which is simultaneously a comonad:
Cont a = Ran Id Id a
which expanded out is:
forall b. (a -> b) -> b
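These can be written down directly with a rank-2 newtype; a self-contained sketch along the lines of Data.Functor.Kan.Ran from the kan-extensions package (the synonym names below are mine):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- Right Kan extension: Ran g h a = forall b. (a -> g b) -> h b
newtype Ran g h a = Ran { runRan :: forall b. (a -> g b) -> h b }

-- Cont r a = Ran (Const r) (Const r) a
type Cont r = Ran (Const r) (Const r)

-- Codensity m a = Ran m m a
type Codensity m = Ran m m

-- The monad/comonad version: forall b. (a -> b) -> b
type SelfCont = Ran Identity Identity
```

For example, `runRan (Ran (\k -> k 42)) (Identity . (+ 1))` comes out as `Identity 43`.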


Thanks for the connection.


The "Yoneda Lemma" (which is a kind of church encoding of a functor) is another special case:

forall b. (a -> b) -> f b  

which corresponds to:

Yoneda f a = Ran Id f a

And ofc there's a dual "Left Kan Extension":

Lan g h a = exists b. (g b -> a, h b)

with its corresponding "Cocontinuation", "Density", and "Coyoneda"
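The existential can be encoded with a GADT; a sketch of the left Kan extension and the Coyoneda special case (the helper `lowerCoyoneda` is my addition, mirroring the kan-extensions package):

```haskell
{-# LANGUAGE GADTs #-}
import Data.Functor.Identity (Identity (..))

-- Left Kan extension: Lan g h a = exists b. (g b -> a, h b),
-- with the existential b hidden by the GADT constructor.
data Lan g h a where
  Lan :: (g b -> a) -> h b -> Lan g h a

-- It is a Functor in a, with no constraints on g or h.
instance Functor (Lan g h) where
  fmap f (Lan k hb) = Lan (f . k) hb

-- Coyoneda f a = Lan Identity f a, the "free functor" over f.
type Coyoneda f = Lan Identity f

-- Collapsing back into f needs an actual Functor instance for f.
lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Lan k fb) = fmap (k . Identity) fb
```

As a quick check, `lowerCoyoneda (fmap (+ 1) (Lan runIdentity (Just 1)))` gives `Just 2`.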

The hierarchy of abstract nonsense

Wondering whether, if you walk up the hierarchy of abstract nonsense, you'll just end up with one combinator/functor which does everything? That's how combinator rewriting ended: one combinator and one axiom which do 'everything'.

Hard to pick up interesting things

Just like if you put a monkey with a typewriter in a closed room: given enough time and lifespan, the monkey would end up writing all of Shakespeare's greatest works by himself. Of course, among all of the other combinations. The problem is that there are so many combinations that we could never pick out the interesting ones.

The same goes for a type system. A type system describes all of the possible algorithms that can ever exist. If we start enumerating them, one by one, we would end up with a lot of artificial general intelligence algorithms. One of the algorithms would be the most wonderful AGI personality one can imagine. And another would be the meanest AGI personality one could imagine. So, finally, maybe it is a good thing that this mean personality is hidden behind 2^n combinations. We lose the good one, but it is safe enough not to get the bad one.


A type system describes all of the possible algorithms that can ever exist.

Surely not. (Sorry to intrude on the thread.) Most type systems are about excluding vast swaths of algorithms.

int -> int

You can come a long way with integer arithmetic, so as long as your type system allows that, you don't exclude anything.


Type int->int suggests ability to write all computable functions. My interpretation of the word "algorithm" is about how things are computed rather than merely what they compute.

Completeness results

We can write an interpreter for Turing machines (or your favorite model of computation) in Turing-complete typed languages; would that not give completeness for the notion of algorithm that you have in mind? Even among the strongly normalizing calculi, we have rather strong results -- you have to sensibly weaken linear logic to get something that captures polynomial algorithms only, and System F is complete for any complexity class I've ever heard algorithmicians being interested in.

Is System F a Turing Tarpit?

... and System F is complete for any complexity class I've ever heard algorithmicians being interested in.

A Turing tarpit is a Turing-complete programming language in which any computable function could theoretically be written, but in which it is impractically difficult to do so.

Below-Turing Tarpit

System F (without non-termination extensions) is not Turing complete, but yes, the fact that its termination bound is humongous means that most algorithms can be expressed (if one is into set theory, there was a paper this year expressing System F normalization strength through (a variant of) bar recursion), but not that it is convenient to express them in this form.

Both ML-family languages and Haskell go beyond System F in day-to-day programming, with modules in ML and higher-kinded type constructors in Haskell, both of which belong to the stronger language F-omega.

There has also been interesting research on expressivity of languages capturing more restricted computation classes, see for example Pure Pointer Program with Iteration, Martin Hofmann and Ulrich Schöpp, 2008, which proposes a programming language for a strict subset of LOGSPACE that is more expressive than previous approaches. The point is that even with languages designed for restricted complexity (which are a special sort of strongly-normalizing languages), our understanding of the question of expressivity/practicality is often fine-grained, people went past the mere idea of a tarpit.

Mwa, I am not that nuanced

People in complexity theory might study different classes and have nuanced feelings about the tarpit but I don't really feel the inclination to follow them in that. I just want languages which allow me to more or less directly state pure or impure textbook algorithms. (Though my own language is an eager untyped term rewrite system.)

Not really my thing. Not worthless either.^1 Although I find the remark that ML and Haskell implement F-omega variants a bit of a stretch. But I haven't looked at Haskell's type checker for ages.

^1: Edit. That came out wrong, I was thinking what it would mean for my term rewrite language.


May I ask what is the difference between functional programming and term rewrite system?

Depends on who you ask, I guess.

It depends on who you ask, I guess. If you go really broad and see functional programming as a programming style even C is a functional language, and if you go really narrow only languages like Haskell are.

I call it a term rewrite system because even the runtime system, which I implemented in a bit of an old-school manner, tries to remain as faithful to a term rewrite system as possible. Although I turn the term inside out to get some performance out of it.

I guess...

... I need to learn a new word today :)

What would we call "algorithms" in functional languages? Function compositions? I kind of liked the word "algorithm"; it had a kind of cool electronic echo in my mind.



I've got

a better idea: a computing unit! Kids would love it! B]

[edit] No? Ok, term it is.

Agreed, I stepped into the Turing Tarpit

Yah, maybe that was a silly remark.

Roughly, if you look at the relation between types and terms, you should have

  • Lambda terms which are interesting and which cannot be typed in a type system X.
  • Typed terms in type system X which are uninteresting because they don't encode any useful algorithm.

Type any lambda

I don't think it is hard to have a type system that can type any lambda term, although I don't have a proof. I used random open lambda terms to test my type inference algorithm, and "Compositional Principal Typings" with recursive types enabled can infer a type for any lambda expression I have been able to generate.

However, I am not convinced of the usefulness of this approach; after all, if you can type any term, what safety does it give you? Yes, you could use the type system to add type classes and generic types to a language with type inference, but the types of most complex lambda terms devolved into something like "mu A . A -> A".
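A concrete illustration (my example, not from the thread): self-application, which plain Hindley-Milner inference rejects, becomes typeable once the recursive type mu a. a -> a is available. In Haskell the recursion has to be threaded through an explicit newtype:

```haskell
-- U is an iso-recursive rendering of mu a. a -> a.
newtype U = U (U -> U)

-- \x. x x, with the fold/unfold made explicit by the constructor.
selfApply :: U -> U
selfApply w@(U f) = f w
```

Applying `selfApply` to a wrapped `selfApply` reproduces the diverging omega term, so a type system this permissive accepts terms it can no longer guarantee anything interesting about.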

That would be mostly useless

What would the point be of a type system that could type any lambda terms? The whole point of the game is to exclude terms?

A possible use for typing everything

A possible use for being able to type everything is to detect useful terms which aren't typeable in another more restricted system, I guess.

Excluded types

I failed to mention that type systems also often exclude types which are interesting, syntactically valid, but don't meet the type system's criteria.

My pet peeve interesting type is

type automaton i o = i -> (o, automaton i o)

This fails to type check in lots of systems.
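In Haskell the usual workaround is to make the recursion iso-recursive with a newtype; a sketch (the field name `step` and the example automaton are mine):

```haskell
-- automaton i o = i -> (o, automaton i o), with an explicit wrapper
-- so the type checker accepts the recursion.
newtype Automaton i o = Automaton { step :: i -> (o, Automaton i o) }

-- A running-sum automaton, to show the type is inhabited.
summing :: Int -> Automaton Int Int
summing acc = Automaton (\i -> let acc' = acc + i in (acc', summing acc'))
```

Feeding it 1, 2, 3 produces the outputs 1, 3, 6.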


It seems to me, intuitively, that any type system must either exclude terms that are of interest, or include terms that are wrong, even if there were a clear dichotomy between the two kinds of terms (which, on consideration, there might not be). This feels more like an informal principle than a formal hypothesis, but at a minimum, if one chooses some particular class of actions to define as "wrong", the set of programs that would actually do those things is undecidable, whereas presumably a type system is only useful if typing of terms is decidable.

Last I know we are using undecidable type systems

I think they moved away from decidable type systems (in the general sense) towards 'in general decidable with a little help', because the former was too restrictive.