## Distributive laws for the Coinductive Solution of Recursive Equations

B. Jacobs, Distributive laws for the Coinductive Solution of Recursive Equations, Information and Computation, to appear.

This paper illustrates the relevance of distributive laws for the solution of recursive equations, and shows that one approach for obtaining coinductive solutions of equations via infinite terms is in fact a special case of a more general approach using an extended form of coinduction via distributive laws...

Turi and Plotkin first investigated such a situation where one functor G describes the syntax of a programming language and the other functor F the behaviour of programs (terms) in that language. Having a distributive law GF => FG means that the behaviour on terms is well-defined, and leads to results like: (coalgebraic) bisimilarity is an (algebraic) congruence. Hence distributive laws capture where "algebra meets coalgebra".
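
To see the shape of such a law concretely, here is a toy sketch (the Haskell names below are mine, not the paper's): take a syntax functor with a single binary operator and a behaviour functor describing streams; a distributive law then defines the operator's behaviour pointwise.

```haskell
-- Toy sketch of the Turi-Plotkin shape; names are mine, not the paper's.
-- Syntax functor:    G x = Plus x x     (a language with one binary operator)
-- Behaviour functor: F x = Out Int x    (stream behaviour: an output and a next state)
data G x = Plus x x  deriving (Eq, Show)
data F x = Out Int x deriving (Eq, Show)

-- A distributive law GF => FG: from the behaviours of the two arguments,
-- compute the behaviour of their sum, pointwise.
distLaw :: G (F x) -> F (G x)
distLaw (Plus (Out a x) (Out b y)) = Out (a + b) (Plus x y)
```

Read operationally, `distLaw` says a `Plus`-term outputs the sum of its arguments' current outputs and continues as the `Plus` of their next states: the behaviour on terms is determined by the behaviour of the parts.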

Interesting stuff.

Be warned: This paper is highly technical, and requires familiarity with the fundamental notions of category theory.

It's a shame to see important papers like this written so abstractly as to be unreadable by most language designers and implementors.

For example, one instance of a distributive law is the natural transformation from "arrays of dictionaries of t" to "dictionaries of arrays of t"; another is the (uncurried) zip :: ([t], [t]) -> [(t, t)]. The structure here is very interesting, and one can see the outline of applications, such as extensions to Haskell's monad comprehension constructs that deal with distributions.
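
Another transformation with the same composite-functor shape (my example, not from the article) is sequence from the Prelude, which moves the list functor past a monad. Whether it satisfies the full distributive-law axioms depends on the monads involved, but it shows the shape:

```haskell
-- sequence :: Monad m => [m a] -> m [a] swaps the composite [_] . m to m . [_].
-- For Maybe it collects results, failing if any element fails.
allJust, oneNothing :: Maybe [Int]
allJust    = sequence [Just 1, Just 2, Just 3]  -- Just [1,2,3]
oneNothing = sequence [Just 1, Nothing]         -- Nothing
```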

But the author immediately jumps to "λ-bialgebras", with maps a : T(X) -> X and b : X -> F(X), and I'm lost. I'd love to see a concrete example of this, because I don't see how to arrive at anything useful from an instance of, e.g., a :: [t] -> t.
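
For what it's worth, here is one standard reading of the shape a :: [t] -> t (a sketch of mine, not from the paper): an Eilenberg-Moore algebra of the list monad is essentially a monoid, with the algebra map folding a list by the monoid operation.

```haskell
-- A list-monad algebra a :: [t] -> t must send [] to a unit and
-- concatenation to a binary operation, i.e. t carries a monoid.
sumAlg :: [Int] -> Int
sumAlg = sum        -- the monoid (Int, +, 0)

concatAlg :: [String] -> String
concatAlg = concat  -- the monoid (String, ++, "")
```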

The general conclusion of the paper seems to be that certain guarded recursive equations over functions are guaranteed to have a unique solution. This is a topic I have been investigating on my own while implementing an experimental compiler, and I wish I could follow the author's reasoning to that point.
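
A minimal illustration of that kind of equation (my example, in a lazy language): a recursive stream definition whose recursive occurrences are guarded by a constructor pins down a unique infinite solution.

```haskell
-- Guarded: the recursive use of 'nats' sits under the (:) constructor,
-- so this equation has exactly one solution, the stream 0,1,2,...
nats :: [Integer]
nats = 0 : map (+ 1) nats
```

take 5 nats yields [0,1,2,3,4]; an unguarded variant such as xs = map (+ 1) xs is not productive and determines nothing.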

### Help

Maybe someone here can help. If something is unclear, why not raise it here for discussion.

### Example

Bart Jacobs asks me to point you to

http://www.cs.ru.nl/B.Jacobs/PAPERS/cmcs04-solutions.pdf

It has a worked example (Example 3.3).

The text of the link above is right, but the href has an extra <br> after the .pdf.

Done.

### An example algebra

So far, I've found the paper pretty opaque too, but maybe an example algebra will shed some light.

Consider this functor:

```haskell
data List a b = Nil | Cons a b

instance Functor (List a) where
  fmap f (Cons a b) = Cons a (f b)
  fmap f Nil        = Nil
```


The fixpoint of List a is the usual list type (that is, List a [a] is isomorphic to [a]). A List a-algebra over a carrier b is a function List a b -> b. You can do a bunch of things with such algebras, for example:

```haskell
fold :: (List a b -> b) -> [a] -> b
fold alg []     = alg Nil
fold alg (x:xs) = alg (Cons x (fold alg xs))
```


In fact, you can show that List a b -> b is more-or-less isomorphic to (b, a -> b -> b), which are the usual arguments to foldr.
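
That isomorphism can be witnessed directly (a quick sketch; the List type is repeated here so the snippet stands alone):

```haskell
data List a b = Nil | Cons a b

-- An algebra is the same data as a base case plus a step function,
-- i.e. the two arguments of foldr.
toPair :: (List a b -> b) -> (b, a -> b -> b)
toPair alg = (alg Nil, \x y -> alg (Cons x y))

fromPair :: (b, a -> b -> b) -> (List a b -> b)
fromPair (nil, _)  Nil        = nil
fromPair (_, cons) (Cons x y) = cons x y
```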

You can also use Maybe-algebras to implement primitive recursion, because the fixpoint of Maybe is isomorphic to the natural numbers.
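
Concretely (my sketch): Nat is the fixpoint of Maybe, and folding a Maybe-algebra Maybe b -> b over it gives recursion on the naturals, with Nothing supplying the base case and Just carrying the result for the predecessor.

```haskell
-- Natural numbers as the fixpoint of Maybe: Zero ~ Nothing, Succ ~ Just.
data Nat = Zero | Succ Nat deriving (Eq, Show)

-- Folding a Maybe-algebra over Nat.
foldNat :: (Maybe b -> b) -> Nat -> b
foldNat alg Zero     = alg Nothing
foldNat alg (Succ n) = alg (Just (foldNat alg n))

-- Example: doubling, as a Maybe-algebra.
double :: Nat -> Nat
double = foldNat alg
  where alg Nothing  = Zero
        alg (Just m) = Succ (Succ m)
```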

More generally:

```haskell
data Fix f = Fix (f (Fix f))

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)
```

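
To tie cata back to the list example (definitions repeated so the snippet stands alone):

```haskell
data List a b = Nil | Cons a b

instance Functor (List a) where
  fmap f (Cons a b) = Cons a (f b)
  fmap _ Nil        = Nil

data Fix f = Fix (f (Fix f))

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

-- A summing algebra, and a two-element list built with Fix.
sumAlg :: List Int Int -> Int
sumAlg Nil        = 0
sumAlg (Cons x s) = x + s

exampleSum :: Int
exampleSum = cata sumAlg (Fix (Cons 1 (Fix (Cons 2 (Fix Nil)))))
-- exampleSum == 3
```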

### Aha.

> A List a-algebra over a carrier b is a function List a b -> b

That makes much more sense now. Thanks.