Building Interpreters by Composing Monads


We exhibit a set of functions coded in
Haskell that can be used as building blocks to construct
a variety of interpreters for Lisp-like languages. The
building blocks are joined merely through functional
composition. Each building block contributes code to
support a specific feature, such as numbers, continuations,
function calls, or nondeterminism. The result of
composing some number of building blocks is a parser,
an interpreter, and a printer that support exactly the
expression forms and data types needed for the combined
set of features, and no more.
The data structures are organized as pseudomonads,
a generalization of monads that allows composition.
Functional composition of the building blocks implies
type composition of the relevant pseudomonads.

So actually it is about building interpreters by composing pseudomonads.

PS: I stumbled upon this paper while trying to factor an interpreter into a set of features (and yes, I tried to package them as monads).
After a day of fighting with undecidable instances and rigid type variables, I gave up and started googling - well, I was trying to reinvent the wheel.
Any comments on how pseudomonads relate to arrows (and other generalizations of monads) are appreciated.


Monad transformers

Aha, Monad Transformers and Modular Interpreters says that "pseudomonads are really
just a special kind of monad transformer".
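To make the connection concrete, here is a minimal monad transformer in the usual style - a sketch using only the Prelude (a polished version ships in the transformers library as Control.Monad.Trans.Maybe). A transformer packages one feature, failure here, as a layer over an arbitrary inner monad, which is roughly the role the paper's pseudomonads play:

```haskell
-- A minimal monad transformer: the failure feature layered over an
-- arbitrary inner monad m.
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Functor m => Functor (MaybeT m) where
  fmap f (MaybeT x) = MaybeT (fmap (fmap f) x)

instance Applicative m => Applicative (MaybeT m) where
  pure = MaybeT . pure . Just
  MaybeT mf <*> MaybeT mx = MaybeT ((<*>) <$> mf <*> mx)

instance Monad m => Monad (MaybeT m) where
  MaybeT x >>= f = MaybeT (x >>= maybe (pure Nothing) (runMaybeT . f))

-- Embed an inner-monad action into the transformed monad.
liftMaybeT :: Functor m => m a -> MaybeT m a
liftMaybeT = MaybeT . fmap Just
```

Layering this over the list monad, for instance, mixes failure with nondeterminism: `runMaybeT (liftMaybeT [1,2] >>= \x -> if x > 1 then pure x else MaybeT [Nothing])` yields `[Nothing, Just 2]`.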

Semantic Legos

You'll probably be interested in David Espinosa's work if you haven't run across it before. That said, while there are problems with monad transformers, the de facto consensus in the Haskell community seems to be that their low overhead in use outweighs their poor composability properties. Most other approaches never seemed to gain traction.

Arguably, Composing Monads Using Coproducts is the "correct" approach, but my impression when I read the paper a long while back was that I'd rather not have the explicit management overhead for the gain in composability. Part of my reason for this is that, in practice, there seem to be relatively few "primitive" monad transformers. Along those lines, though, my not-even-half-baked gut feeling is that a pushout is more what is desired. I'm sure both of these papers have been mentioned on LtU before. Of course, the categorical machinery applies equally to arrows.

More freedom, and more control

Arguably, Composing Monads Using Coproducts is the "correct" approach

Wouldn't that mean that composition is associative and commutative? While having these properties is often advantageous, I can imagine PL designers preferring more order than just a soup... err, set of monads. Isn't there some middle ground between a rigid chain of monad transformers and commutative composition? Maybe something like explicit fixpoints - just make the chain of monad transformers eat its own tail instead of a base monad... Not sure if I understand that fully, though :-)

[on edit: maybe what I meant is that instead of coproducts PL designers would prefer colimits of the diagrams they choose, thus having more control of how exactly the features of the language interact]

Stabs at answers

Wouldn't that mean that composition is associative and commutative?

Yes... but only up to isomorphism. The product of types is commutative and associative (in a completely lazy language, product types are categorical products), but (A,B) is a different, though isomorphic, type from (B,A). However, you are right that coproducts are just too "disjoint", which is part of my motivation for suggesting pushouts, which (with an initial monad) would give us all finite colimits. If I remember correctly, the paper only explicitly demonstrates the existence of coproducts for finitary algebras; assuming the construction generalizes to infinitary ones, we can see an obvious issue with combining the State monad with the Continuations monad: there are at least two interesting ways of combining these, but there is only one coproduct of them, which I'm fairly sure corresponds to neither.
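The ordering issue is easiest to see with a smaller pair of features, so here exceptions stand in for continuations (a sketch assuming the transformers package that ships with GHC; the function names are my own). The two stacking orders of State and Maybe are both sensible combinations, yet they disagree on whether a failure discards the state accumulated so far:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..), runMaybeT)
import Control.Monad.Trans.State (State, StateT, modify, runState, runStateT)

-- Order 1: StateT Int Maybe -- a failure in the inner monad wipes
-- out the state of the outer layer.
tickThenFail1 :: StateT Int Maybe ()
tickThenFail1 = modify (+ 1) >> lift Nothing

-- Order 2: MaybeT (State Int) -- the failure is recorded in the
-- result, but the state accumulated before it survives.
tickThenFail2 :: MaybeT (State Int) ()
tickThenFail2 = lift (modify (+ 1)) >> MaybeT (return Nothing)
```

Here `runStateT tickThenFail1 0` is `Nothing`, while `runState (runMaybeT tickThenFail2) 0` is `(Nothing, 1)` - and a single coproduct of the two monads could not capture both behaviours.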

Re the fixpoint part: I'm not sure what you are getting at there.

Spanning the composition

However, you are right that coproducts are just too "disjoint" which is part of my motivation for suggesting pushouts which (with an initial monad) would give us all finite colimits.

Correct. My CT is quite rusty - I had to open Category Theory Lecture Notes by Barr and Wells just to remember the relationship between pushouts, coproducts, and colimits.
I am glad, though, that I came to the same conclusion as you: we might need more precise composition tools than coproducts.
Could you elaborate on your vision of specifying the span needed for a pushout? I would naively imagine it to be a couple of monads A and B that we want to compose, plus a third monad C and morphisms C->A and C->B, but the exact meaning of these C->A and C->B eludes me...

Nevermind my comments on fixpoints, I must be playing too much with GHC :-)

Pushouts of monads

[editted]
The spans would, as in all pushouts, specify what parts of the monads we want to identify. Id springs to mind as a place to start as it is always available and makes no assumptions; however, it turns out that Id is the inital monad and therefore there is only one pushout from Id to two other monads, namely the coproduct of the two monads. Taking a different tack, let's start with some monads that are easier to deal with. Consider two free monads generated from two signatures, which we can think of as specifying a collection of operations. The pushout of id:M1→M1 and f:M1→M2 where f maps certain operations of M1 to certain operations of M2 (and preserving return) should be the monad that contains the operations of both monads except with a single new operation corresponding to the identified pairs of operations (this is similar to a generator and relator presentation of an algebra very unsurprisingly). This new operation would behave like both of the operations it joins simultaneously, as a trivial example identifying Nothing of the Maybe monad and Fail of a NonDet(erminism) monad would cause Nothing to behave like Fail and thus induce backtracking and allow Fail to be caught via catchMaybe, in this case, though, this would essentially simply embed Maybe into NonDet. I also believe that it should be possible to witness the image of a monad transformer as a suitable colimit, but I haven't stumbled upon the way to do that.

bump, apologies

Edited posts used to bring a thread into the active forum topics or recent posts lists, I believe (which I found annoying), but apparently not anymore. My edit above was more of a new post (and I'll handle it that way in the future). Apologies to the (probably large) majority of people who don't care about this thread.

I've been working on this paper, too.

I've been working on trying to understand monads and interpreters, and I've run into the same undecidable instances and rigid type variables.

One document I found helpful is Monadic Parser Combinators. It has a pretty complete discussion of a wide range of parsers and (functional) combinators. Like the Modular Interpreters paper, this paper uses Gofer as the implementation language.
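For flavour, the core idea of that paper fits in a few lines of Haskell. This is a sketch in the now-standard style - a parser as a function from input to a list of (result, remaining input) pairs - with names of my own choosing rather than the paper's:

```haskell
-- A parser returns every way of reading a prefix of the input,
-- paired with the input left over; the empty list means failure.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f (Parser p) = Parser (\s -> [ (f a, rest) | (a, rest) <- p s ])

instance Applicative Parser where
  pure a = Parser (\s -> [(a, s)])
  Parser pf <*> Parser pa =
    Parser (\s -> [ (f a, rest') | (f, rest) <- pf s, (a, rest') <- pa rest ])

instance Monad Parser where
  Parser p >>= f =
    Parser (\s -> concat [ runParser (f a) rest | (a, rest) <- p s ])

-- Consume one character, failing on empty input.
item :: Parser Char
item = Parser (\s -> case s of { [] -> []; (c:cs) -> [(c, cs)] })
```

Sequencing then comes for free from the Monad instance: `runParser (item >>= \c -> item >>= \d -> pure [c, d]) "abc"` gives `[("ab", "c")]`.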

An implementation of simple monads and combinators in a more recent version of Haskell is Designing and Using Combinators: The Essence of Functional Programming. Take a look at the source code in the Monads and monad transformers section.

Building Compilers by Combining Algebras

"Building Compilers by Combining Algebras" appeared on LtU about 8 months ago, but didn't seem to garner much interest.

I just looked at it -- it's

I just looked at it -- it's a very good read.

Attention

It's more likely things will be noticed if posted to the homepage by a contributing editor (hint, hint).

If BlogLines is any indicator...

...there are over 1500 subscribers to the front page, while the comments and forum topics are in the low two-digit range (unless there are additional RSS feeds that I'm unaware of).

You are correct

These are indeed the figures. It's worth keeping in mind that many people prefer to read the forum on the site and use the RSS feed only for the homepage - even though they do read the forum - so the numbers don't really indicate how much more visible the homepage is than the rest of the site.

Be that as it may, the number of people reading the main RSS feed is quite amazing!

Thanks

Thanks, I read that when you posted it (well, more like sampled/skimmed it, actually), and liked it. So many good papers, so little time...

LPS

Another link I forgot which the name of the second article reminds me of is LPS: A Language Prototyping System Using Modular Monadic Semantics.

Wow, thanks

I now have a lot of reading for the weekend :-)
Thanks to everybody!

One of the privileges of

One of the privileges of being a LtU contributing editor...