Monads vs Applicative Functors.

This article got me thinking about the structural problems with monads, namely that `bind: m a -> (a -> m b) -> m b` is opaque in its second argument. One thing I have become increasingly concerned with over the past few years is the separation of data and code, and this article made it clear to me that monads have a problem in this respect: at least in the case of parsers, they mix implementation details into the application of the parser. By focusing on the initial algebra for parsers, and implementing the combinators as an applicative functor (not a monad), the "application" of the parser is reduced to a static data structure, and the `map` functions only generate output.
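A minimal sketch of what "the application is a static data structure" means, using a hypothetical, simplified encoding (the real TypeScript port CPS-encodes a GADT instead, and this sketch drops the type parameter):

```typescript
// Hypothetical simplified encoding: the parser application is just a tree
// of tagged nodes; host-language functions appear only inside FMap nodes.
type P =
    | { tag: 'OneOf', chars: string }
    | { tag: 'FMap', f: (x: unknown) => unknown, p: P }
    | { tag: 'Product', a: P, b: P }
    | { tag: 'Either', a: P, b: P };

// A structural pass that never calls the embedded map functions:
const size = (p: P): number => {
    switch (p.tag) {
        case 'OneOf': return 1;
        case 'FMap': return 1 + size(p.p);
        case 'Product':
        case 'Either': return 1 + size(p.a) + size(p.b);
    }
};

const digit: P = { tag: 'OneOf', chars: '0123456789' };
const twoDigits: P = {
    tag: 'FMap', f: x => x,
    p: { tag: 'Product', a: digit, b: digit },
};
```

Because the application is plain data, passes like `size` (or anything else that walks the tree) work without ever inspecting the `FMap` functions.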

Using continuation passing to encode the existential types from the GADT, plus function overloading, I was able to port the applicative parser to TypeScript, which I have made available here, and as a Deno module here. I also added a couple of useful helper functions, based on Parsimmon's `seq` and `seqMap`, which use a little conditional-type magic to make working without operator overloading easier. The result is that we can write a simple parser in TypeScript (for floats) like this:

const digit = OneOf('0123456789');
const float1 = seqMap((a, b) => [...a, b], many1(digit), OneOf('.'));
const float2 = seqMap((a, b) => [...a, ...b], float1, many1(digit));
const float3 = seqMap((a, b, c) => [...a, b, ...c], choice(float2, many1(digit)), OneOf('e'), many1(digit));
export const float: Parser<number> = FMap(a => parseFloat(a.join('')), choice(float3, float2, float1));

Monads are of course useful, and things aren't really so black and white, but it seems interesting to consider what can be gained from the separation of data and code if we limit ourselves to applicative functors. The benefits include: a complete implementation of the parser initial algebra in 50ish lines of code; the ability to build your parser application from the nine basic combinators; and the freedom to swap in different implementations (with backtracking, error reporting, greater efficiency, etc.) while the application stays the same.

It also seems interesting to consider the initial algebra for parsers in the same way we consider Codd's five primitive operators for relational algebra. The ones used in the article are Fail/Empty/OneOf/Return/Forget/Map/Product/Either/Fix; is this minimal? There are clearly alternative formulations. For example, `OneOf` isn't the only possibility, but it has the advantage that it allows the set of accepted symbols to be generated for a parser application.
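As a sketch of that last point, the accepted first symbols fall out of a simple fold over the static structure. This uses the same hypothetical simplified encoding (ignoring Return/FMap/Forget/Fix):

```typescript
type P =
    | { tag: 'Fail' } | { tag: 'Empty' }
    | { tag: 'OneOf', chars: string }
    | { tag: 'Product', a: P, b: P }
    | { tag: 'Either', a: P, b: P };

// nullable: can the parser succeed without consuming any input?
const nullable = (p: P): boolean => {
    switch (p.tag) {
        case 'Fail': return false;
        case 'Empty': return true;
        case 'OneOf': return false;
        case 'Product': return nullable(p.a) && nullable(p.b);
        case 'Either': return nullable(p.a) || nullable(p.b);
    }
};

// firstSet: every symbol the parser can accept first
const firstSet = (p: P): Set<string> => {
    switch (p.tag) {
        case 'Fail':
        case 'Empty': return new Set();
        case 'OneOf': return new Set(p.chars);
        case 'Product': {
            const f = firstSet(p.a);
            return nullable(p.a) ? new Set([...f, ...firstSet(p.b)]) : f;
        }
        case 'Either': return new Set([...firstSet(p.a), ...firstSet(p.b)]);
    }
};
```

This is exactly the analysis that an opaque `bind` continuation would block.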

In the context of compiler architecture, it seems interesting that each pass of the compiler can be implemented as a recursive descent of the abstract syntax tree with case matching on node type. With this approach the parser also becomes a recursive descent with case matching over the syntax definition of the language, which seems an elegant solution.

Thinking about programming languages themselves: since we can thread state through an applicative functor, and we can assign to scoped variables (map), it seems interesting to think about what a functional language that uses applicative functors to manage state and IO would look like. Obviously existing programming languages can implement applicative functors, in the same way you can write object-oriented code in procedural languages, but they don't focus on it. Likewise there is nothing stopping someone from writing monadic code, but you would not want to encourage it. What would a language designed to encourage the programmer to think in an applicative, initial-algebraic way look like?


A Monad = A Pair of Functors?

I have been using the above TypeScript implementation of applicative parser combinators to try to implement a parser for lambda calculus using De Bruijn indices. The interesting thing is that whilst you can implement this with a monadic parser, you cannot implement it with the applicative parser without re-implementing the `parse` compiler function. This suggests that there is a combinator missing from the initial algebra.

Referring back to my earlier work on C++ parser combinators reminds me that we can consider a parser as an attribute grammar (thanks to Knuth), and that the `map` function of the applicative parser deals with synthesised attributes, but only some inherited attributes (the ones involving siblings, where `Product` is used to pass the result of a group of siblings to `FMap`).

So to avoid having to rewrite the `parse` compiler/interpreter function for different applications, which would defeat the point of separating the implementation of the parser combinators from the application of the parser, we need to thread inherited attributes with the parser state, and add a combinator to replace them.

First we change the type of the function returned by `parse` to be:

export type Parse<A,B> = (_: {cs: string, pos: number, args: A}) => {result: B, cs: string, pos: number}|null;

And we need to change the type of the parser combinators to be:

export type Parser<A,B> =
    | Fail<A,B>
    | Empty<A,B>
    | OneOf<A,B>
    | Return<A,B>
    | Forget<A,B>
    | FMap<A,B>
    | Product<A,B> 
    | Either<A,B>
    | Fix<A,B> 
    | Raw<A,B>
    | ArgMap<A,B>

And implement the new combinator `ArgMap`, which mirrors `FMap`: `ArgMap` maps the arguments passed to a combinator, and `FMap` maps the results returned by a combinator (perhaps `ResultMap` would be a better name?).

Finally we need a way to access the parent-inherited attribute, so we update `FMap` to include the `args` passed in as an extra argument to the `map` function.

An example application would look like this:

const testParser = parse(ArgMap(({n}:{n:number}) => ({n:n+1}), FMap((m:number, {n}:{n:number}) => n+m, Return(1))));

testParser({cs: '', pos: 0, args: {n:1}});

with the expected result:

{cs: '', pos:0, result: 3}
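To see why `3` comes out, here is a hedged sketch that collapses the AST and the `parse` compilation step into direct function definitions (the real implementation builds the static data structure first and compiles it separately):

```typescript
type State<A> = { cs: string, pos: number, args: A };
type Parse<A, B> = (s: State<A>) => { result: B, cs: string, pos: number } | null;

// Return ignores the input and yields a constant result.
const Return = <A, B>(value: B): Parse<A, B> =>
    ({ cs, pos }) => ({ result: value, cs, pos });

// FMap transforms the result, and also receives the current attributes.
const FMap = <A, B, C>(f: (b: B, args: A) => C, p: Parse<A, B>): Parse<A, C> =>
    s => {
        const r = p(s);
        return r ? { result: f(r.result, s.args), cs: r.cs, pos: r.pos } : null;
    };

// ArgMap transforms the attributes before they reach the sub-parser.
const ArgMap = <A0, A1, B>(f: (a: A0) => A1, p: Parse<A1, B>): Parse<A0, B> =>
    s => p({ cs: s.cs, pos: s.pos, args: f(s.args) });

const testParser = ArgMap(({ n }: { n: number }) => ({ n: n + 1 }),
    FMap((m: number, { n }: { n: number }) => n + m, Return(1)));
```

`ArgMap` bumps `n` from 1 to 2 on the way in, `Return` yields 1, and `FMap` adds the two on the way out.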

It seems interesting that we can achieve the same expressive power as monadic variable binding using a pair of applicative functors, one that maps the input and one that maps the output from each node in the parser AST. In a way this seems obvious from Knuth's work on attribute grammars.

Is there a category theoretic explanation of this relationship between the pair of functors, and the monad?

It seems interesting that we can take a monad, with the problems of its opaque second argument, and transform it into a pair of applicative functors, retaining the same power of expression, but keeping the structure of the parser application completely separate from the implementation of the parser combinators, such that we have an iterable abstract syntax tree for the parser application.

monads, lenses, arrows

There are a lot of ways to describe monads. To disentangle some of the overlapping features of Haskell's representation (e.g. with bind), we might use this:

class Functor f where
  fmap : (a -> b) -> (f a -> f b)
class (Functor f) => Applicative f where
  pure : a -> f a
  ap   : f (a -> b) -> f a -> f b
class (Applicative f) => Monad f where
  join : f (f a) -> f a

With this formulation, it's easy to view Monads as a metaprogramming API: an 'f' program can construct another 'f' program then directly run it with 'join'. In the context of grammars, this capability would correspond to some of the most powerful adaptive grammars. The same ones that are nigh impossible to wrap one's meat-brain around. Naturally, this is vastly more powerful than most use-cases will require. Excessive power is one of the common complaints about monads.
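As a concrete (if humble) instance of this formulation, here is the array functor in TypeScript, with `bind` recovered as `join` after `fmap`:

```typescript
const fmap = <A, B>(f: (a: A) => B, xs: A[]): B[] => xs.map(f);
const pure = <A>(a: A): A[] => [a];
const ap = <A, B>(fs: ((a: A) => B)[], xs: A[]): B[] => fs.flatMap(f => xs.map(f));
const join = <A>(xss: A[][]): A[] => xss.flat();

// bind = join . fmap f: the first program's values construct new programs,
// which join then runs -- exactly the metaprogramming step described above.
const bind = <A, B>(xs: A[], f: (a: A) => B[]): B[] => join(fmap(f, xs));
```

The applicative part (`pure`/`ap`) keeps the structure of both arguments fixed in advance; only `join` lets a computed value determine the structure of what runs next.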

Aiming for solutions that are more comprehensible and controllable, Applicatives are one obvious approach. I believe that Generalized Arrows provide a better viewpoint for this sort of question than the Monad/Applicative/Functor breakdown because all of the latter models *assume* an implicit host model of function `(->)` types, which might also be unnecessarily powerful (e.g. non-termination) or undesirably opaque for many use-cases.

We can also look into Lenses/Prisms, which can be expressed within many models.

Anyhow, there is no direct relationship between a "pair of functors" and monads. It's just that some popular use-cases for monads, such as modeling stateful programs, can be done with much more restrictive APIs.

Initial algebras and Arrows.

I think my particular interest in functors and category theory comes from a sort of topological viewpoint (perhaps leaning towards homotopy type theory, if I actually understood it). Much like Codd's 5 relational operators, the idea that we can reduce other complex things like parsers to an initial algebra seems very interesting to me, and perhaps a good way to structure a standard library for a language.

I am not really sure I 'get' arrows; after all, isn't an arrow just a function? From a type-system point of view `a -> b` is just an infix binary type constructor; it has no special properties, and could just as well be replaced with `Fn a b` or `Arr a b` without changing anything.

arrows and functions

Some meandering thoughts:

You say: "From a type system point of view `a -> b` is just an infix binary type-constructor, it has no special properties,". This isn't quite true, because `->` is a *concrete* constructor. It inherits many specialized properties from its instances. For example, in Haskell, an `a -> b` term is opaque and computing `b` is potentially divergent. These propositions aren't true of arrows in general.

If we want to abstract and generalize the arrow itself, that requires careful design. Generalized arrows, e.g. per Adam Megacz's thesis (Generalized Arrows are Multi-Level Languages), are one case of this design. (Note: Haskell's Control.Arrow does not generalize arrows to a sufficient extent for Megacz's use-cases.)

We can find specialized algebras for parsers that are pretty good for most use cases, and I believe most other problem domains will also have suitable algebras. But a minimal algebra of any problem domain probably won't include C++ or Haskell functions. In some sense, embedding arbitrary host functions within an initial algebra is a meta-layer instance of "you wanted a banana".

Aside: Categories can be viewed as a small aspect of generalized arrows. E.g. Haskell's Control.Category module describes an abstract arrow `cat a b` that provides `id` and sequential composition.

Type Constructors and Static Enumeration

In the end, the semantics of the language matching the syntactic type-system operations is entirely arbitrary. It would seem interesting to make the language actually use a generalised arrow as its 'arrow' type. So we could take a generalised arrow, define it to be '->' in the type system, and adjust the semantics to match.

Regarding bananas, the point of the above is that the parser structure is entirely declarative: the "host language" functions only map data out of the parser, or map data into the parser, and these map functions are pure. FMap is used to construct AST output nodes, and ArgMap is used to update the input to a sub-parser. The `parse` function takes that static parser definition and 'compiles' it, and that process does not depend on the definitions of any of the map functions. The map functions only evaluate when the 'compiled' parser is fed the input string.

So the point is you can analyse the structure of the parser without having to know anything about those 'map' functions, and that's something you can't do with the monadic version due to the opaque second argument.

I think this minimises the size of the gorilla that you get with your banana, because the initial algebra of the parser is independent of those map functions.

To try and put it another way, the static initial algebra of the parser defines the 'set' of acceptable inputs. You should be able to enumerate all possible valid input strings from that static definition with no knowledge of what is in the map functions. The map functions only affect the specific result you get for a specific input. ArgMap can only pass inherited attributes into the FMap functions. FMap can only map one result type into another, it cannot cause the parser to 'fail', hence both host functions together only affect the 'result' data, they cannot affect the set of valid inputs to the parser.

Having a statically enumerable set of valid inputs is one of the key features, along with the ability to generate the set of accepted symbols, or even to generate a complete state diagram of the parser automatically from the declaration.
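A sketch of that enumeration, using a simplified tagged-union encoding, with `Fix` as a recursion marker unrolled to a chosen depth so that an infinite language can still be sampled:

```typescript
type P =
    | { tag: 'Fail' } | { tag: 'Empty' }
    | { tag: 'OneOf', chars: string }
    | { tag: 'Product', a: P, b: P }
    | { tag: 'Either', a: P, b: P }
    | { tag: 'Fix', body: (self: P) => P };

// All strings the parser accepts, unrolling Fix at most `depth` times.
const language = (p: P, depth: number): Set<string> => {
    switch (p.tag) {
        case 'Fail': return new Set();
        case 'Empty': return new Set(['']);
        case 'OneOf': return new Set(p.chars);
        case 'Either': return new Set([...language(p.a, depth), ...language(p.b, depth)]);
        case 'Product': {
            const out = new Set<string>();
            for (const x of language(p.a, depth))
                for (const y of language(p.b, depth)) out.add(x + y);
            return out;
        }
        case 'Fix': return depth <= 0 ? new Set<string>()
            : language(p.body(p), depth - 1);
    }
};

// many1(OneOf('ab')) expressed with Fix:
const ab: P = { tag: 'OneOf', chars: 'ab' };
const many1ab: P = {
    tag: 'Fix',
    body: self => ({ tag: 'Either', a: { tag: 'Product', a: ab, b: self }, b: ab }),
};
```

Note how the map functions never appear: they cannot change the language, only the results.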

Edit: perhaps it needs to be clarified that 'helper' functions like 'choice' and 'second' in the code example below simply expand into the initial algebra; for example, `second` can be defined:

const second = (a, b) => FMap(([_,x]) => x, Product(a, b));

The parser cannot be compiled by the 'parse' function if it contains anything other than the initial algebra; if you look at the implementation of 'parse', it's just a recursive switch/case, made slightly more complex by the continuation passing used to encode the GADT type.
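For completeness, `choice` can be expanded the same way; a hypothetical definition as a right fold of `Either` with `Fail` as the unit:

```typescript
type P = { tag: 'Fail' } | { tag: 'OneOf', chars: string } | { tag: 'Either', a: P, b: P };
const Fail: P = { tag: 'Fail' };
const Either = (a: P, b: P): P => ({ tag: 'Either', a, b });
const OneOf = (chars: string): P => ({ tag: 'OneOf', chars });

// choice(p1, p2, p3) expands to Either(p1, Either(p2, Either(p3, Fail)))
const choice = (...ps: P[]): P => ps.reduceRight<P>((acc, p) => Either(p, acc), Fail);
```

After expansion, only initial-algebra nodes remain, so `parse` still sees nothing but the nine constructors.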

syntax is arbitrary

If it were easy, too, that'd be fantastic. ;)

statically enumerable set of valid inputs is one of the key features, along with the ability to generate the set of accepted symbols

These are indeed very nice features, but they can be achieved with backtracking monads, too.

Mostly we need to replace opaque 'read' primitives with 'expect' primitives that tell an external observer of the effects more about what tokens are anticipated.

FMap can only map one result type into another

Relatedly, Profunctor is what you're looking for if you want to also map over input types.

Opaque Monads

I don't think you can have static enumeration of all possible inputs with a monadic parser, because the second argument of the bind function `m a -> (a -> m b) -> m b` is a function, and hence can do anything, for example fail the parser. So when we fold over the parser application, we can't get at the RHS of the bind because it's a function.

I read a bit about Profunctor, how do they differ from a pair of simple functors?

Monads and Search

Assuming functions are opaque, you cannot use structural inspection to find valid inputs. But there are other methods.

For example, the user of the monadic program can control the response to each 'expect' request (i.e. success vs failure) to systematically *search* the entire space of valid inputs. If a parse fails, you just drop the search path leading to it then try another path. An input leading to normal termination of the parse is accepted.
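A minimal sketch of this search, with the simplifying assumption that the black box just accepts or rejects a complete input string (the 'expect'-driven version described above would prune search paths incrementally instead):

```typescript
// Breadth-first search of valid inputs over a known alphabet, treating
// the parser as an opaque accept/reject predicate.
const search = (accepts: (s: string) => boolean, alphabet: string, maxLen: number): string[] => {
    const found: string[] = [];
    let frontier = [''];
    for (let len = 0; len <= maxLen; len++) {
        for (const s of frontier) if (accepts(s)) found.push(s);
        // extend every candidate by one symbol for the next round
        frontier = frontier.flatMap(s => [...alphabet].map(c => s + c));
    }
    return found;
};
```

For example, `search(s => /^a+$/.test(s), 'ab', 2)` finds `['a', 'aa']` without ever inspecting the parser's structure.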

This works with a black box, only requires that the program abstracts its input source.

Searching infinite sets

Yes, that makes sense, although if the second argument of the bind returns a random response it could be tricky. Further, if the parser accepts an infinite set of inputs this search will never terminate, whereas with the static approach we can know the infinite set (because we get the structure, including the fixed points).

For example we could consider a parser that accepts the digits of Pi. We can extract from the static structure that the parser never terminates, but you cannot do that from just searching the valid inputs. This is even true of the lambda calculus parser which already has an infinite set of valid inputs.



Infinite sets

Yes, to understand the infinite set accepted by a monad would require abstract interpretation on its construction.

That's essentially your intent with the applicative model. A difference is that your applicative is mostly transparent while your monad is mostly opaque. This isn't a necessary difference, rather it is due to your chosen encodings (including PL).

Regardless of opacity, applicative is less powerful (limited to context-free grammars) and much easier to analyze. Applicatives cannot, for example, express a grammar for parsing embedded binaries based on a `(count of bytes, raw data bytes)` representation. At least without introducing a new rule just for that.

Bounded Recursion

Applicatives cannot, for example, express a grammar for parsing embedded binaries based on a `(count of bytes, raw data bytes)` representation. At least without introducing a new rule just for that.

It's not quite so bad: you can use the profunctor `lmap` to push the parsed count into the inherited attributes. The remaining limitation is that the recursion provided by `Fix` is unbounded (terminated only by failure), and we know that the map functions cannot directly cause the parser to fail.

One solution to that would be to introduce limited repetition, a combinator like `Repeat` that applies its parser argument `n` times.

context-sensitive parsing

True. We can create combinators for patterns such as `CountedParse(Parser for Header, Header -> Count, Parser for Elem)` returning a pair (Header, Array of Elem). Such a rule would cover size-prefixed embedded binaries without being "just for" size-prefixed embedded binaries.

Monads support arbitrary context-sensitive patterns. If you have a language where you can describe or import a parser for later use (e.g. reader macros), the parser will naturally be monadic. Monads also allow us to express new context-sensitive patterns without requiring new built-in constructors like CountedParse. But the option to add new built-ins does exist, and is a worthy option if you don't need to invoke it too frequently in practice.

context-sensitive with composition

Profunctors can do better in this regard than functors, because profunctors compose.

So we don't need `CountedParse(Parser for Header, Header -> Count, Parser for Elem)`. Instead we can do:

Compose(Parser_for_Header, Repeat(Parser_for_Elem))

`Parser_for_Header` would return an int, `Compose` is the standard profunctor composition, and `Repeat` applies the parser `N` times, where `N` comes from the input. The types would be:

Compose: P a b -> P b c -> P a c
Parser_for_Header: P () Int
Repeat: P () a -> P Int a
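A hedged sketch of those types in a direct function encoding, assuming `P<A,B>` is a function from the input attribute `A` plus parse state to a result `B` (this is not the thread's actual implementation, just an illustration of the counted-parse pattern):

```typescript
type P<A, B> = (a: A, s: string, pos: number) => { b: B, pos: number } | null;

// Compose: P a b -> P b c -> P a c (the first parser's result feeds the second)
const Compose = <A, B, C>(p: P<A, B>, q: P<B, C>): P<A, C> =>
    (a, s, pos) => {
        const r = p(a, s, pos);
        return r ? q(r.b, s, r.pos) : null;
    };

// Hypothetical Parser_for_Header: a single decimal digit giving the count.
const header: P<void, number> = (_a, s, pos) =>
    /[0-9]/.test(s[pos] ?? '') ? { b: Number(s[pos]), pos: pos + 1 } : null;

// Repeat: P () a -> P Int a[], applying the element parser N times.
const Repeat = <X>(p: P<void, X>): P<number, X[]> => (n, s, pos) => {
    const out: X[] = [];
    for (let i = 0; i < n; i++) {
        const r = p(undefined, s, pos);
        if (!r) return null;
        out.push(r.b);
        pos = r.pos;
    }
    return { b: out, pos };
};

const byte: P<void, string> = (_a, s, pos) =>
    pos < s.length ? { b: s[pos], pos: pos + 1 } : null;

const counted = Compose(header, Repeat(byte));
```

Running `counted` on `'3abcxyz'` reads the header `3` and then exactly three elements, a context-sensitive pattern reached without any monadic bind.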

re profunctor composition?

The 'Profunctor' typeclass in Haskell doesn't require a compose method. It only requires dimap. But the 'Category' typeclass does require the compose method you're suggesting (for morphism composition). It does make sense for your profunctor to also be a category. Composition is very useful! But it certainly isn't implied for profunctors to always be categories, nor vice versa.

In addition to sequential composition, you might also want arrow-like composition such that you can operate on just part of the output value.

first : P a c → P (a * b) (c * b)
second : P b c → P (a * b) (a * c)

left : P a c → P (a + b) (c + b)
right : P b c → P (a + b) (a + c)

With first and second, you can easily do Forth-like stack programming in your category, where a stack has the form `(a * (b * (c * stackRem)))`. With left and right, you get conditional programs (bool is `(() + ())`). It's very convenient.
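These signatures can be sketched with plain functions standing in for `P` (an assumption for illustration; a real parser arrow also threads parse state), with sums as a tagged union:

```typescript
type P<A, B> = (a: A) => B;
type Sum<A, B> = { tag: 'L', val: A } | { tag: 'R', val: B };

// first/second operate on one half of a pair, passing the other through
const first = <A, C>(p: P<A, C>) => <B>([a, b]: [A, B]): [C, B] => [p(a), b];
const second = <B, C>(p: P<B, C>) => <A>([a, b]: [A, B]): [A, C] => [a, p(b)];

// left/right operate on one side of a sum, passing the other through
const left = <A, C>(p: P<A, C>) => <B>(s: Sum<A, B>): Sum<C, B> =>
    s.tag === 'L' ? { tag: 'L', val: p(s.val) } : s;
const right = <B, C>(p: P<B, C>) => <A>(s: Sum<A, B>): Sum<A, C> =>
    s.tag === 'R' ? { tag: 'R', val: p(s.val) } : s;
```

With `[A, B]` as the pair and `Sum` as the choice, `first`/`second` give the stack routing and `left`/`right` give the conditionals described above.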


Yes, I realised the same thing, and then came here and read your post. What I have ended up with is a Category...

I had already implemented first and second as they were necessary to route the data. I think you get first and second with a "Strong" Profunctor; it's composition that makes it a Category.

Applicative Profunctor

Okay I get a Profunctor now, `(a1 -> a0) -> (z0 -> z1) -> f a0 z0 -> f a1 z1`, where 'f a b' is our parser with two type parameters.

We could supply a 'Dimap' combinator that applies both the input and the output transform function, but it's probably easier to just apply the two map functions separately.

Maybe we should rename 'ArgMap' as 'ContraMap' to make the Profunctor structure more explicit.

Edit: Haskell uses `lmap` and `rmap` rather than `fmap` and `contramap`; that might be better, and shorter too.
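In the plain-function model (assuming `P<A,B>` is just `a => b`, ignoring parse state), the two maps and `dimap` relate like this:

```typescript
type P<A, B> = (a: A) => B;

// lmap contravariantly maps the input, rmap covariantly maps the output
const lmap = <A1, A0, B>(f: (a: A1) => A0, p: P<A0, B>): P<A1, B> => a => p(f(a));
const rmap = <B0, B1, A>(g: (b: B0) => B1, p: P<A, B0>): P<A, B1> => a => g(p(a));

// dimap is just the two composed
const dimap = <A1, A0, B0, B1>(
    f: (a: A1) => A0, g: (b: B0) => B1, p: P<A0, B0>,
): P<A1, B1> => rmap(g, lmap(f, p));
```

For example, `dimap(x => x + 1, y => y * 2, z => z * 10)` applied to `1` gives `(1 + 1) * 10 * 2 = 40`.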

Edit2: this is not really a Profunctor, because LMap and RMap are not quite right: RMap gets passed an extra argument (the attributes), and LMap is actually a binary operator. I wonder if it's possible to refactor into a proper Profunctor, or if there is necessarily a more complex structure here?

The current structure is more like this:

(c -> b -> a) -> (b -> a -> d) -> f a b -> f c d

L-Attribute Grammar

Continuing with implementing the parser from A tutorial implementation of a dependently typed lambda calculus, just the simply typed lambda calculus for now. You can see the original ML parser, which uses the monadic `Parsec` library, here.

My initial attempt at implementing ArgMap doesn't work, because we have no way to evaluate some of the sibling nodes before this node, which we need so that we can pass the lambda expression argument list to ArgMap before evaluating the RHS term. Because the order of evaluation is fixed by the structure of the parser, this will result in an L-attribute grammar. We can achieve this by refactoring ArgMap to take two arguments, a left and a right. The left argument is evaluated first, and its result is added as an extra argument to the `map` function; the right argument is evaluated after, with the `args` returned from the `map` function. With this change we can successfully implement the simply typed lambda calculus parser from the above paper:

function parseLam(iterm: Parser<Attr,ITerm>): Parser<Attr,CTerm> {
    return Fix(lam => RMap(([xs, t]) => xs.reduce((acc, _x) => Lam(acc), t),
        LMap(({env}, xs) => ({env: [...xs.reverse(), ...env]}),
            second(operatorBackslash, many1(identifier)),
            second(operatorArrow, parseCTerm0(iterm, lam)))));
}

function parseCTerm0(iterm: Parser<Attr,ITerm>, lam: Parser<Attr,CTerm>): Parser<Attr,CTerm> {
    return Either(lam, RMap(Inf, iterm));
}

function parseCTerm3(iterm: Parser<Attr,ITerm>): Parser<Attr,CTerm> {
    return Either(parens(parseLam(iterm)), RMap(Inf, parseITerm3(iterm)));
}

const parseITerm0 = Fix<Attr,ITerm>(iterm => Either(
    seqMap((t, ty) => ty ? Ann(Inf(t), ty) : t,
        seqMap((t, ts) => ts.reduce(App, t), parseITerm3(iterm), many(parseCTerm3(iterm))),
        Either(RMap(maybe, second(operatorColon2, parseType)), Return(maybe()))),
    seqMap((t, ty) => Ann(t,ty), parens(parseLam(iterm)), second(operatorColon2, parseType))));

function parseITerm3(iterm: Parser<Attr,ITerm>): Parser<Attr,ITerm> {
    return Either(
        RMap((x, {env}) => {
            const i = env.indexOf(x);
            return (i >= 0) ? Bound(i) : Free(Global(x));
        }, identifier),

The main changes are the need to refactor the recursion to use the fixed-point operator, and the need to use ArgMap to pass the updated list of variables so that the De Bruijn index can be found. It should be possible to work around the L-attribute grammar restriction by using the look-ahead combinator `Forget` to pass the right siblings, but that is not needed in this case.

Edit: I have renamed ArgMap as LMap, and FMap as RMap to make the Profunctor structure more obvious.

Is it really a functor?

The implementation of the applicative parser defines the type of the parser (after 'compilation') to be something like:

{cs: string, pos: number} => {cs: string, pos: number, result: A}

The output type of the parser `Parser<A>` is just the `result` part of the value returned.

If we look at `product`, `p a -> p b -> p (a,b)`, are we cheating by feeding the RHS of the product with the input `pos` already advanced?

Currently, given a parser like:

Product(token(string('a')), token(string('b')))

if we pass the input:

{cs: 'a b ab aa bb c', pos: 0}

we get the output:

{cs: 'a b ab aa bb c', pos: 4, result: ['a', 'b']}

The question is, is this really a functor, because the RHS of `product` gets a different input from the LHS?

If this were a proper functor, would we not have to do something like:

Compose(FMap(x => [x,x], token(string('a'))), Second(token(string('b'))))

So we apply the `a` parser to the input, its result gets duplicated, we run parser `b` on the output of parser `a` (which includes the input string and updated position), and because it's strong the result from the LHS passes through unaltered. Finally the `Second` on parser `b` makes the result only update the second of the pair from the LHS of `Compose`.

This second version seems to actually be a functor to me; the first seems dubious?

Is the first version really a functor at all?

Monads are functors

Monads are functors.

The question isn't whether you have extra behaviour, only whether your behaviours include fmap.

Okay to have parser behavior

So it's okay to consider only the routing of some inputs and outputs in a Profunctor `P a b` (`a` being the inherited attributes and `b` the results), and to ignore the routing of the input string and the parser's position in that string. I have always assumed that to be the case, but I just wanted to check I wasn't over-reaching here.

Strong ProductProfunctor

So it seems what I want is a strong ProductProfunctor. In this way I can have the correct signatures for LMap and RMap, using three new combinators: `Compose`, which sends the result of the first parser as the attributes to the second; and `First` and `Second`, which route a pair of inputs so that one (the first or second respectively) goes through the parser while the other bypasses it (this is the 'strong' bit). I also modified `Return` so it is passed the current attributes, so you can just return some or all of them as a result. I'm not sure if that's right for pure/return in a Profunctor? LMap and RMap can be used to duplicate the attributes and results into pairs as required for `First` and `Second`.

With these changes the parser is actually a strong product-profunctor, and the two parts of the lambda calculus parser that need these features now look like this:

function parseLam(iterm: Parser<Attr,ITerm>): Parser<Attr,CTerm> {
    return Fix(lam => Compose(
        RMap(([xs, vars]:[string[], string[]]) => tuple(xs, [...xs.reverse(), ...vars]),
            LMap(x => tuple(x,x), First(second(operatorBackslash, many1(identifier))))),
        RMap(([xs, t]:[string[],CTerm]) => xs.reduce((acc, _x) => Lam(acc), t),
            Second(second(operatorArrow, parseCTerm0(iterm, lam))))));
}

function parseITerm3(iterm: Parser<Attr,ITerm>): Parser<Attr,ITerm> {
    return Either(
        RMap(([{env}, x]) => {
            const i = env.indexOf(x);
            return (i >= 0) ? Bound(i) : Free(Global(x));
        }, Product(Return(id), identifier)),

Selective applicative functors

I ran into this a few weeks ago; it might be interesting to you, if you haven't already come up with the idea independently.

Selective applicative functors

Applicative functors and monads have conquered the world of functional programming by providing general and powerful ways of describing effectful computations using pure functions. Applicative functors provide a way to compose independent effects that cannot depend on values produced by earlier computations, and all of which are declared statically. Monads extend the applicative interface by making it possible to compose dependent effects, where the value computed by one effect determines all subsequent effects, dynamically.

This paper introduces an intermediate abstraction called selective applicative functors that requires all effects to be declared statically, but provides a way to select which of the effects to execute dynamically. We demonstrate applications of the new abstraction on several examples, including two industrial case studies.

Profunctor Composition

That does look interesting; however, I don't think we need this with Profunctors, because they compose:

compose: P a b -> P b c -> P a c

Edit: as noted above, adding composition to a Profunctor makes it a Category.


I have implemented the LambdaPi interpreter from this paper (in TypeScript):

using the applicative (categorical?) parser. This repo is here:

The lambdapi parser is in src/lambdapi/parser.ts

I have added a 'Try' combinator to control backtracking, and error generation.

In the end it does seem very 'Forth'-like (referring to dmbarbour's comment above), using `LMap(x => tuple(x,x))`, which is effectively `dup`, and First/Second to select which element is passed directly to the output.

This is obviously not as easy to code with as a Monad, where you get variables you can assign. Thinking about how you could maintain the transparency, I wonder if you could add combinators to get and set variables? You would effectively then have an embedded DSL, but would it be applicative or monadic?

For example rewriting:

    second(keywordForall, parseBindings(iterm, parseLam(iterm), true)),
    First(second(operatorPoint, parseCTerm0(iterm, parseLam(iterm))))

as:

    Set('x', second(keywordForall, parseBindings(iterm, parseLam(iterm), true))),
    Set('y', Get('x', second(operatorPoint, parseCTerm0(iterm, parseLam(iterm)))))

This would still seem to be a category. I am not even sure we need composition here, but would the 'set' and 'get' be side-effects in the target language?

re forth-like

For a Forth-like experience, you must model the data-stack as a tuple. So 'dup' would be `forall a,s. p (a,s) (a,(a,s))`.

Similarly, `data : forall a,s. a -> p s (a,s)`, `swap : forall a,b,s. p (a,(b,s)) (b,(a,s))`, etc. And 'second' is equivalent to the 'dip' combinator.
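These can be sketched with plain functions standing in for `p` (an assumption; a real parser arrow also threads parse state), using nested pairs for the stack:

```typescript
type P<S, T> = (s: S) => T;

// dup : p (a,s) (a,(a,s))
const dup = <A, S>([a, s]: [A, S]): [A, [A, S]] => [a, [a, s]];

// data : a -> p s (a,s)  -- push a constant onto the stack
const data = <A>(a: A) => <S>(s: S): [A, S] => [a, s];

// swap : p (a,(b,s)) (b,(a,s))
const swap = <A, B, S>([a, [b, s]]: [A, [B, S]]): [B, [A, S]] => [b, [a, s]];

// dip (i.e. 'second') : p b c -> p (a,b) (a,c) -- run p under the top element
const dip = <B, C>(p: P<B, C>) => <A>([a, b]: [A, B]): [A, C] => [a, p(b)];
```

For example, `swap(data(3)([7, null]))` pushes 3 onto a stack holding 7, then swaps, yielding `[7, [3, null]]`.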

Your version of dup does it wrong if you want a very casual Forth-like composition. But it's good for other composition styles.

Named Variables

Yes, you are right, and we could model the attributes as a stack, but then why not move directly to name/value pairs stored in a map? This seems the more interesting case, as we can embed a simple language with variables and looping. In that way we can extend the parser to context-sensitive grammars without having to write custom combinators. In fact you would then seem to have expressive power equivalent to a monad, whilst having just a category. Thinking about this, we shouldn't really need more than sum and product (like an algebraic datatype), as using those we can construct the abstract syntax tree for a (any?) language.

re named vars

I'll distinguish between labels and names. A map or dict is labeled data, and your proposed use is very similar in nature to a register machine.

This is convenient for access but ultimately awkward for composition. Every operator ends up requiring a bunch of source/dest register parameters, which greatly hinders concise expression. There is a lot of overhead tracking which labels are in use for which purposes, and which are free. Inevitably, you'll be shoving copies of registers into a 'stack' register just so you can reuse labels or avoid conflicts in multiple subprograms.

That's probably not your intention. What you want is to automate local data plumbing by naming variables, with implicit scopes and shadowing, etc. And this is more a notation issue.

Sum, products, and ASTs

we shouldn't really need more than sum and product (like an algebraic datatype), as using those we can construct the abstract syntax tree for a (any?) language

This is almost true. In the PL I'm developing now, Glas, the underlying representation for all data is binary trees, equivalent to `type T = (1+T)*(1+T)`.

A lot of data is represented by bitstrings, e.g. 00010111 is a tree where every node has only one outbound edge. Bitstrings can encode numbers and also symbols and labeled data, e.g. encoding a null-terminated UTF-8 text into the bitstring prefix, and using a radix tree (prefix sharing) for labeled records.

To compact bitstrings, a more suitable rep is `type T = BitString * (1+(T*T))`. This is equivalent to the prior T.
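The equivalence can be sketched in TypeScript (a hypothetical encoding for illustration; Glas itself is a separate project):

```typescript
// plain form: type T = (1+T)*(1+T)
type Plain = { zero: Plain | null, one: Plain | null };

// compact form: type T = BitString * (1+(T*T))
type Compact = { bits: string, branch: [Compact, Compact] | null };

// Expand a compact node into the plain form: each bit becomes a node with
// a single outbound edge, as in the 00010111 example above.
const expand = (t: Compact): Plain => {
    let acc: Plain = t.branch
        ? { zero: expand(t.branch[0]), one: expand(t.branch[1]) }
        : { zero: null, one: null };
    for (const bit of [...t.bits].reverse())
        acc = bit === '0' ? { zero: acc, one: null } : { zero: null, one: acc };
    return acc;
};
```

So a compact node with bits `'01'` and no branch expands to a two-edge chain: a zero-edge followed by a one-edge, ending at a leaf.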

In any case, starting with products and sums, we can encode any acyclic structure, including extensibility and self documentation via text labels, and certainly including ASTs.

That said, beyond sum and product (and their identities), you'll also need some way to represent and work with recursive structure, e.g. recursion types and loops.

Type Checking

Thinking about this for a while, I can see a problem with my above suggestion. By reducing the 'combinators' to just the sum and product constructors we lose all type checking of the parser syntax. The functor/profunctor/categoric/monadic structure is what enables the host-language type-system to ensure only valid parsers are constructed.

So whilst my suggestion of Get/Set combinators would work for allowing variables to be used with just the structure of a category, there would be no checking that variables are initialised, used only after definition, etc. Of course there could be a syntax/type check in the parser itself, but that doesn't seem as good to me.

It occurs to me that we could look at the lack of transparency of monads in a different way. We could define the functions in the target language, for example extending the parser combinators with a Lisp interpreter. Now the structure of these combinators in the host language would be monadic, but also transparent.

re type checking

You can still typecheck sums and products: it just expands to a very big mess of pair and Either types in the host language. The main issue is that most host languages would make it painfully awkward and inefficient to represent or work with large, sophisticated types in this manner.

Relatedly, you can check that variables are initialized, i.e. if you use a row-polymorphic record type in your host language instead of a dynamic 'map' type. In context of representing everything using sums and products, a row-polymorphic record is easy: it's simply a trie made of pairs and sums with the 0 type blocking one choice (i.e. type `(a + 0)` is equivalent to `a` but adds a typeful prefix to it). Easy, just dreadfully inefficient and awkward in most host languages.
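A minimal sketch of the "typeful prefix" idea in TypeScript (the `Sum`/`HasX`/`getX` names are mine): `(a + 0)` becomes a tagged union whose right branch has type `never`, so the type system proves only the labeled branch is inhabited.

```typescript
// A sum of two types as a tagged union.
type Sum<A, B> = { tag: "left"; value: A } | { tag: "right"; value: B };

// (V + 0): equivalent to V, but the wrapper records a typeful prefix.
// The `never` branch is uninhabited, so no value can take it.
type HasX<V> = Sum<V, never>;

function getX<V>(e: HasX<V>): V {
  if (e.tag === "left") return e.value;
  // Unreachable: the right branch would require a value of type never.
  throw new Error("unreachable");
}

const envX: HasX<number> = { tag: "left", value: 42 };
// const bad: HasX<number> = { tag: "right", value: 0 }; // rejected: 0 is not never
```

Nesting such sums inside pairs gives the trie-shaped record described above; as noted, it works but is awkward at any real scale.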

It occurs to me that we could look at the lack of transparency of monads in a different way. We could define the functions in the target language, for example extending the parser combinators with a Lisp interpreter. Now the structure of these combinators in the host language would be monadic, but also transparent.

Yes, this is essentially the path that GADTs would take.


So this is already using a GADT to type check the parser. But that's not quite what I meant. Let me rephrase in terms of a question:

Can you type-check the abstract syntax of a parser that provides named variables (with shadowing etc) in the host language's type-system without it being a Monad?

For example, even if we use a row-polymorphic record to represent variables, we can only take a variable on the right (output side) to the left (input side) by composition. That means the complete input gets replaced after the composition. The only way to reference earlier input is to use `dup` to duplicate it and then route only one side of the pair through the next parser combinator. Avoiding this routing whilst maintaining a variable in an environment in the host language seems to require a monad?
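The dup-and-route plumbing can be sketched with plain functions standing in for combinators (the `dup`/`first`/`compose` names follow arrow conventions; `step` and `keepInput` are my own illustration). To keep the original input alive past a step, we duplicate it and run the step on only one half of the pair:

```typescript
type Arrow<A, B> = (a: A) => B;

// Duplicate the input into a pair.
const dup = <A>(a: A): [A, A] => [a, a];

// Route f through the first component only; the second passes unchanged.
const first = <A, B, C>(f: Arrow<A, B>) =>
  ([a, c]: [A, C]): [B, C] => [f(a), c];

// Plain left-to-right composition.
const compose = <A, B, C>(f: Arrow<A, B>, g: Arrow<B, C>): Arrow<A, C> =>
  (a) => g(f(a));

const step: Arrow<number, string> = (n) => `len:${n}`;

// dup the input, then send one copy through step, keeping the other.
const keepInput = compose((n: number) => dup(n), first<number, string, number>(step));
// keepInput(3) → ["len:3", 3]: the result on one side, the original input on the other.
```

Every earlier value you want to reference later must be explicitly threaded like this, which is exactly the routing overhead described above.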

monadic parsers

If you want a non-monadic parser within a category, all you really need to do is avoid any construct of the form: `forall s s'. p ((p s s'), s) s'`.

Earlier, I described expressing a data stack within a parser. However, I should clarify that this is a parse-time data stack. It does not represent the data stack of the program being parsed. It's just a data stack that the parser itself is using to temporarily store data for later parsing steps. This can be very convenient for constructing a sophisticated parse. Similarly, if we model a variable context within this type, they would be parse-time variables like an attribute grammar.

You can track the variable context of the program being parsed as a variable in the parser's context. Then, you might refuse to parse variables that aren't defined ahead of time, for example. But the set of program variables would be dynamic from the parser's perspective. There is no easy way to check it except via dynamic processing in the host language.

I'm not convinced that monadic structure would particularly help with this dynamic processing. Monadic structure is more likely to be needed for user-extensible parsers.

First class names

I think I am not expressing the question clearly enough.

Looking at a monadic parser like:

x <- parseLength
y <- parseBinaryOfLength x
return (x, y)

Here `x` and `y` are not dynamic with respect to the parser, they are type checked by the host language the parser is written in.

If we look at that as a categoric parser:

Compose (Set 'x' ParseLength) (Compose (Set 'y' (ParseBinaryOfLength (Get 'x'))) (Return (Get 'x', Get 'y')))

In the first example 'x' and 'y' exist in the static scope of the language we are writing the parser in, and are checked by the host language type system when the parser is compiled.

In the second example 'x' and 'y' exist in the runtime of the parser (maybe as a list of names and values which we search, so we get shadowing), and are not checked when the parser is compiled, but only when the parser is actually used.
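A minimal sketch of that runtime behaviour (the `Set`/`Get` names follow the example above; the flat step list and interpreter are my own simplification): the environment is a dynamic map, and an unbound variable is only detected when the parser actually runs.

```typescript
type Env = Map<string, unknown>;

// The categoric version's variable operations as plain data.
type Step =
  | { kind: "set"; name: string; value: unknown }
  | { kind: "get"; name: string };

// Interpret a sequence of steps against a runtime environment.
function run(steps: Step[]): unknown {
  const env: Env = new Map();
  let last: unknown = undefined;
  for (const s of steps) {
    if (s.kind === "set") {
      env.set(s.name, s.value);
      last = s.value;
    } else {
      // This check happens at run time, not when the parser is compiled.
      if (!env.has(s.name)) throw new Error(`unbound variable ${s.name}`);
      last = env.get(s.name);
    }
  }
  return last;
}
```

The host type system sees only `string` names here, so nothing stops you constructing a parser that reads a variable before it is set.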

My question is, is a monad essential to get the host language to statically check the 'variables' at compile time?

Could you, for example, use first class names in the type system to get the host language type-system to track the allocation, scope and type of the variables in the categoric version?

In which case, why not provide some syntax sugar for the categoric version, and you might end up with something like:

x <- parseLength
y <- parseBinaryOfLength x
return (x, y)

So does this represent a monad or a category? It seems to me that depends on the types assigned.

Would first class names in the host language type system allow you to replace monadic structures with categoric ones?

re first class names

You could easily use types to constrain a statically knowable dict of variables that your parser is passing around. Any of dependent types, refinement types, or row polymorphic record types could do the job.

Monadic structure is not essential. Aside: use of variables is non-monadic if the variables are limited to data. We do have an implicit monadic structure if variables may be 'called' or 'applied' or 'invoked', e.g. as first-class functions. But second-class languages can also support variables, no problem.
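A sketch of the "statically knowable dict" point, using TypeScript's structural record types as a stand-in for row polymorphism (the `set`/`get` helpers are mine, loosely echoing the Set/Get combinators above): each `set` extends the environment's type, and `get` of an absent name is a compile-time error.

```typescript
// set adds a typed field to the environment; the new field appears
// in the result type, so later code can see it statically.
const set = <K extends string, V>(key: K, value: V) =>
  <E extends object>(env: E): E & Record<K, V> =>
    ({ ...env, [key]: value } as E & Record<K, V>);

// get requires the field to be present in the environment's type.
const get = <K extends string>(key: K) =>
  <E extends Record<K, unknown>>(env: E): E[K] => env[key];

const e1 = set("x", 5)({});
const e2 = set("y", "hi")(e1);

const x = get("x")(e2); // number
const y = get("y")(e2); // string
// get("z")(e2); // rejected at compile time: 'z' is not in the environment
```

No monadic structure is involved: the environment is just a value threaded through ordinary composition, with its type growing as variables are added.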

Syntactic sugar is much more relevant.

I have used syntactic sugar for local variables in stack-based languages, such that `swap` could be expressed as `{\X Y -> Y X}`, or `drop` as `{\X ->}`, implicitly compiling clearly scoped local variables to a composition of point-free combinators. You might read `\X ->` as popping the top stack element and storing it into a new X variable which survives to the end of the curly-brace scope, and `\X Y ->` as desugaring to `\Y -> \X ->`. I could also do let-like expressions as `{ Expr \X -> Expr \Y -> Expr }`. It is feasible to support further sugaring to a more conventional syntax, and even to model 'mutable' variables (e.g. for use within loops), or a notation that fully hides the stack and only uses variables.
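The desugaring can be sketched in TypeScript with the stack modeled as nested pairs, top element on the right (the `pop`/`swap`/`drop` names are mine; this is an illustration of the idea, not the actual implementation):

```typescript
// A stack is a pair of (rest of stack, top element).
type Stack<Rest, Top> = [Rest, Top];

// \X -> : pop the top of the stack into a host-language binding
// whose scope is the body.
const pop = <R, T, O>(body: (x: T) => (rest: R) => O) =>
  ([rest, top]: Stack<R, T>): O => body(top)(rest);

// swap = {\X Y -> Y X}: desugars to \Y -> \X -> push X, push Y.
const swap = <R, A, B>(s: Stack<Stack<R, A>, B>): Stack<Stack<R, B>, A> =>
  pop<Stack<R, A>, B, Stack<Stack<R, B>, A>>(
    (y) => pop<R, A, Stack<Stack<R, B>, A>>((x) => (rest) => [[rest, y], x])
  )(s);

// drop = {\X ->}: pop the top and ignore it.
const drop = <R, T>(s: Stack<R, T>): R =>
  pop<R, T, R>((_x) => (rest) => rest)(s);
```

The named variables exist only in the sugar; after desugaring, everything is point-free composition over the stack type, which the host type system checks as usual.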

Sugar is capable of doing nearly 100% of the job you're asking for, assuming you have composition flexible enough to model a data stack. You just need to be clever about it. Use of labeled data to directly model a structured variable environment can certainly be made to work, and might be more convenient for debugging (because you hold onto the names for longer), but it's essentially an implementation detail for desugaring. In most cases, your 'program environment' should be a static type, whether it's a data stack made from tuples or an environment of local and global variables represented by row-polymorphic records.