## Parameterized Notions of Computation

Parameterized Notions of Computation, Robert Atkey, JFP 2008.

Moggi's Computational Monads and Power et al.'s equivalent notion of Freyd category have captured a large range of computational effects present in programming languages. Examples include non-termination, non-determinism, exceptions, continuations, side-effects and input/output. We present generalisations of both computational monads and Freyd categories, which we call parameterised monads and parameterised Freyd categories, that also capture computational effects with parameters.

Examples of such are composable continuations, side-effects where the type of the state varies and input/output where the range of inputs and outputs varies. By also considering structured parameterisation, we extend the range of effects to cover separated side-effects and multiple independent streams of I/O. We also present two typed λ-calculi that soundly and completely model our categorical definitions, with and without symmetric monoidal parameterisation, and act as prototypical languages with parameterised effects.

Once you've programmed with monads for a while, it's pretty common to start defining parameterized families of monads -- e.g., we might define a family of type constructors for IO, in which the type additionally tracks which files the computation reads from and writes to. This is a very convenient programming pattern, but its theory is honestly a little sketchy: on what basis do we conclude that the indices we define actually track what we intend them to? And furthermore, why should we believe that (say) the monadic equational laws still apply? Atkey lays out a nice solution to these questions. He gives a categorical semantics for indexed, effectful computations, and then cooks up lambda calculi whose equational theories correspond to the equations his semantics justifies.
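As a rough illustration of the pattern (this sketch and all its names are mine, not Atkey's), one can index an IO wrapper by a type-level list of file names, so that sequencing two computations concatenates the files they may read:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, TypeOperators #-}
import GHC.TypeLits (Symbol)

-- Hypothetical sketch: an IO wrapper whose phantom index records
-- which files the computation may read.
newtype FileIO (files :: [Symbol]) a = FileIO { runFileIO :: IO a }

-- Sequencing concatenates the indices of the two computations.
type family (fs :: [Symbol]) ++ (gs :: [Symbol]) :: [Symbol] where
  '[]       ++ gs = gs
  (f ': fs) ++ gs = f ': (fs ++ gs)

pureIO :: a -> FileIO '[] a
pureIO = FileIO . pure

andThen :: FileIO fs a -> (a -> FileIO gs b) -> FileIO (fs ++ gs) b
andThen (FileIO m) k = FileIO (m >>= runFileIO . k)

-- The only primitive that extends the index.
readTracked :: FilePath -> FileIO '[name] String
readTracked path = FileIO (readFile path)
```

Here `readTracked` is the only constructor that grows the index, so a computation's type is an upper bound on the files it reads -- but nothing in the sketch checks that `name` actually matches `path`, which is exactly the sort of gap between the indices and what they're supposed to track that a semantics should close.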

The application to delimited continuations is quite nice, and the type theories can also give a little insight into the basics of how stuff like Hoare Type Theory works (which uses parameterized monads, with a very sophisticated language of parameters).

On a slightly tangential note, this also raises in my mind a methodological point. Over the last n years, we've seen many people identify certain type constructors whose usage is pervasive and greatly simplified with some syntactic extensions -- monads, comonads, applicative functors, arrows, and so on. It's not credible that we have exhausted the list of interesting types, and so together these examples constitute a good argument for some kind of language extension mechanism, such as macros. However, all these examples also raise the bar for when a macro is a good idea, because what makes them compelling is precisely that the right syntax yields an interesting and pretty equational theory in the extended language.


### Atkey on arrows

Relevant to the methodological point, another paper by Robert Atkey, What is a Categorical Model of Arrows?, clarifies the relationship between Freyd categories and arrows. It has been discussed here on LtU twice (one, two).

### ?

> It's not credible that we have exhausted the list of interesting types, and so together these examples constitute a good argument for some kind of language extension mechanism, such as macros. However, all these examples also raise the bar for when a macro is a good idea, because what makes them compelling is precisely that the right syntax yields an interesting and pretty equational theory in the extended language.

I'm pretty confused by what you're saying here.

First, neither monads nor arrows require macros, or even language extension - merely their notation does (unsurprisingly, since macros are about notation). Further, why do they raise the bar for when a macro is a good idea? There are many extremely elegant functions in Haskell, but they don't 'raise the bar for when a function is a good idea'. Finally, the equational theory of monads is expressed in terms of the combinators, not the syntax (at least in all the presentations I've seen).

Can you explain what you're getting at here?

### What I'm getting at is that

What I'm getting at is that there's nothing "mere" about notation. While it is certainly possible to describe the operations of new type constructors with a set of combinators, this is not an ideal way of doing so.

When we look at a normalization proof for a typed lambda calculus (or, alternately, the cut-elimination proof for its sequent calculus), we see that the proof has a modular structure when each type former has its own distinct introduction and elimination forms -- we can add and remove types from the system without disturbing the rest of the proof very much. However, adding constants (such as combinators) of non-base type makes the proof considerably less modular and more fragile, because they destroy the symmetry between introduction and elimination.

So this is a mathematical consideration arguing for adding new syntax to the language, whenever we introduce a significant new type constructor. For example, in the case of monads, Moggi suggested a syntax for introducing and eliminating monadic types, with an equational theory corresponding to the monad laws (roughly like Haskell's do-notation, only a little cleaner). Pragmatically, I find this syntactic presentation of the equational theory much easier to read, and my experience has been that it's a lot easier to teach to monad-newbies, as well.
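For reference, the equational theory in question is just the three monad laws; here is a quick sanity check at a concrete instance (Maybe), a sketch of my own rather than anything from the thread:

```haskell
-- The three monad laws, checked at the concrete instance Maybe.
-- This is the equational theory that do-notation packages up syntactically.
leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = (return 3 >>= f) == f 3                       -- return a >>= f  =  f a
rightIdentity = (m >>= return) == m                           -- m >>= return    =  m
associativity = ((m >>= f) >>= g) == (m >>= \x -> f x >>= g)  -- reassociation

m :: Maybe Int
m = Just 3

f, g :: Int -> Maybe Int
f x = Just (x + 1)
g x = Just (x * 2)
```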

Now, when we program in a programming language, we have to work with the existing types of the language, and expose interfaces in terms of a collection of types and functions. In other words, we're exposing our interfaces as a set of combinators, which brings up the infelicities mentioned above. So, one possible[*] solution to this is to extend the language, and to prove that a normalization theorem holds for the extension. This way, we can erase the difference between (e.g.) sum types as an abstract type and sum types as a primitive type constructor.

This "raises the bar", in the sense that we're asking language extension to do more work than just serving as an abbreviation, and that induces a correctness criterion/proof obligation.

[*] I don't actually know if this works, since I haven't worked it out in detail. It's on my ever-expanding post-thesis todo list. :-)

### Hmm

For most languages, there is no normalization proof (this is certainly true of Haskell), so it seems like a strange obligation to suggest that language extensions (such as monads) meet.

Also, why should we expect language extensions, in general, to correspond to new introduction or elimination forms for particular new classes of types? This happens to be the case for do-notation and arrow-notation, but it's not the case for, say, n+k patterns, to pick a random notational extension. We don't expect the authors of functions to have a new equational theory to go along with the function definition - why should we expect this of the authors of macros?

### Re: Normalization

There are strong normalization proofs for some very interesting subsets of Haskell, such as polymorphic delimited continuations. Of course you are always free to use general recursion or other computational effects such as error and lose those properties; but on the other hand, the strong normalization results are still partially applicable to these programs.

### Warning flags

Perhaps it's time for GHC to add a -fwarn-general-recursion flag?

### We'll add -Wmutation to GCC

We'll add -Wmutation to GCC next.

### Language Extension...

> This "raises the bar", in the sense that we're asking language extension to do more work than just serving as an abbreviation, and that induces a correctness criterion/proof obligation.

I understand your desire for symmetry in the language - it offers a sort of elegance and uniformity that becomes especially impressive if sufficiently powerful to be maintained through a wide variety of standard and 3rd party libraries.

In that sense, I'd "raise the bar" for standard library function definitions, procedures, service abstractions, etc. just as much as I would for macros. It all needs to fit together.

But, for the end user of a language, adding a bit of syntax to allow representation of a concept without all the syntactic noise surrounding it is a perfectly legitimate way to reduce both clutter and maintenance labor... just as much so as naming a value using a 'define', 'let', or 'where' clause. Adding words to a language is a language extension in every sense I understand the phrase. Ask any Forth programmer ^_^.

Rather than raising the bar for macros, we should lower it further... such that syntax extensions become no more difficult to read, write, comprehend, and modify than are function definitions. Luca Cardelli's work on extensible Attribute Grammars has been inspiring in this direction.

I haven't looked at the paper carefully yet, although I'm rather acutely aware of it. I've been playing with polymorphic delimited continuations a bit for about a month now, and I'm thoroughly convinced that they are Monads 2.0.

### sigfpe weighs in.

Dan Piponi just posted an article about this on his blog, with the modest proposal that all the excitement about monads is misdirected: the parameterized version is what Haskell really needs to support, and the work involved in the upgrade is mostly trivial. I can't really judge whether Dan is on the right track -- my Haskell is almost as good as my Dutch.

### Hey, that's pretty cool!

That's a nice article -- thanks for pointing it out. I should have known sigfpe would already be on the same track. :) However, I don't think the type signature he proposes is the right one. He suggests:

```haskell
class ParameterisedMonad m where
  return :: a -> m s s a
  (>>=)  :: m s1 s2 t -> (t -> m s2 s3 a) -> m s1 s3 a
```


For intuition, you can read `m s1 s2 t` kind of like an assertion in Hoare logic. You can read this type as saying "if the precondition is s1, then running this command will yield a postcondition state of s2, and return a value of type t".

The composition operation `(>>=)` he proposes then says that you can glue a command that takes s1 to s2 together with one that takes s2 to s3. This is too stringent; it requires the postcondition of the first command and the precondition of the second to be exactly the same. For example, imagine that the s indices tracked a set of open files. In this case you wouldn't want commands to be able to distinguish between different orders of opened files.
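For concreteness, the paper's running example -- state whose type changes over the course of the computation -- inhabits this signature directly. A minimal sketch, with `preturn`/`pbind` named to avoid clashing with the Prelude:

```haskell
-- An indexed state monad: the state has type s1 before the computation
-- runs and type s2 afterwards. These match the ParameterisedMonad
-- signature, renamed so they don't clash with the Prelude.
newtype PState s1 s2 a = PState { runPState :: s1 -> (a, s2) }

preturn :: a -> PState s s a
preturn a = PState $ \s -> (a, s)

pbind :: PState s1 s2 t -> (t -> PState s2 s3 a) -> PState s1 s3 a
pbind m f = PState $ \s1 -> let (t, s2) = runPState m s1
                            in  runPState (f t) s2

-- A command whose postcondition differs from its precondition:
-- it replaces an Int state with its String rendering.
render :: PState Int String ()
render = PState $ \n -> ((), show n)
```

After `render` runs, the state really is a String, and the indices force every subsequent command in the chain to expect one.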

Continuing with the analogy to Hoare logic, this is akin to the sequential composition rule. However, Hoare logic also has a rule of consequence, which lets you strengthen preconditions and weaken postconditions. This lets you compose two commands that don't have post- and pre-conditions that are exactly the same, and that capability is missing from this interface.

### I would think it would be

I would think it would be possible to add that in on top of these types as a separate weakening function. There should be no trouble defining two functions, `pre :: (s1 → s2) → m s2 s α → m s1 s α` and `post :: (s1 → s2) → m s s1 α → m s s2 α`, and then using these to make the types match exactly before using `(>>=)`.
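For a concrete indexed state type (a stand-in of my own; the thread doesn't fix an instance), both functions come out as one-liners:

```haskell
-- A hypothetical indexed state type to make the idea concrete. 'pre'
-- strengthens the precondition and 'post' weakens the postcondition,
-- mirroring Hoare logic's rule of consequence.
newtype PState s1 s2 a = PState { runPState :: s1 -> (a, s2) }

pre :: (s1 -> s2) -> PState s2 s a -> PState s1 s a
pre f m = PState (runPState m . f)        -- adapt the incoming state first

post :: (s1 -> s2) -> PState s s1 a -> PState s s2 a
post f m = PState $ \s ->                 -- adapt the outgoing state afterwards
  let (a, s1) = runPState m s in (a, f s1)
```

With these, something like `pre Left` or `post Left` plays the role of the rule of consequence before handing off to `(>>=)`.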

### Parametric polymorphism

Doesn't Haskell already take care of this? Hindley-Milner would assign the most general possible types (propositions, per Curry-Howard) to the arguments, and parametric polymorphism would take care of specializing them as needed by the ParameterisedMonad, right? (I'm in way over my head here...)

### Depends on how you represent...

...pre- and post-conditions.

For example, I could imagine using a pair of types, A and B, to represent a pair of conditions that might or might not hold. One could then use Haskell types like (A,B) to mean A holds and B holds, and Either A B to mean either A or B holds. In that case, if A were your post-condition and Either A B were your following pre-condition, Hindley-Milner isn't going to allow you to use join at this point. Even though logically A implies (A or B), Either A B is not more general (in the sense of unification) than A. But as Derek points out, you could fix this if you use something like pre or post in conjunction with the function Left :: A -> Either A B.