## A question of separation logic and monads

I am slowly working my way up to an understanding of separation logic, and one thing I have not seen in the separation logic literature is a transformation into functional code with immutable data structures.

With normal imperative code you can just wrap one big fat monad around everything, and then there is a straightforward transformation into functional code with immutable data structures.

The disadvantage of a big fat monad is that you lose the ability to reason about parts of the program that have side effects but are independent of each other. I would therefore expect there to be smaller monads that can be split and combined by the rules developed for separation logic and the separating conjunction.
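To make the "big fat monad" concrete, here is a minimal Haskell sketch (names like `World`, `bumpA`, `bumpB` are illustrative, not from any particular codebase): the entire mutable state of the program is one record threaded through a single State monad, so the types no longer record that the two updates are independent.

```haskell
import Control.Monad.State

-- One "big fat" state: the whole mutable world in a single record.
data World = World { counterA :: Int, counterB :: Int } deriving (Eq, Show)

-- Both actions touch disjoint parts of the state...
bumpA, bumpB :: State World ()
bumpA = modify (\w -> w { counterA = counterA w + 1 })
bumpB = modify (\w -> w { counterB = counterB w + 1 })

-- ...but since they share one monad over the whole World, nothing in
-- the types tells us that bumpA >> bumpB and bumpB >> bumpA agree.
run :: World
run = execState (bumpA >> bumpB) (World 0 0)
```

The independence is real (both orders yield `World 1 1`), but it is invisible to the type system, which is exactly the reasoning that a separation-logic-style split into smaller monads would recover.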

Does anyone know whether anyone has already researched that? Or perhaps I am making an elementary error in my reasoning?

### Illusion of simplicity

The idea that monads give you a simple translation from any imperative code to purely functional code is an illusion. Some well-studied, simple notions of effects have this property, but they do not cover side effects as used in practice.

Take the State monad: it does model mutable state, but it models *one* mutable cell of one statically fixed type that is threaded throughout the computation. Actual code (in ML, for example) dynamically allocates mutable cells in arbitrary numbers and at arbitrary types; this cannot be represented with the State monad alone.
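The contrast can be seen directly in Haskell: `State s` fixes the cell type `s` up front, whereas GHC's `ST` monad lets a computation allocate any number of cells at any types. (Note that `ST` is a semantic model baked into the runtime, not a syntactic state-passing encoding, which is exactly the gap described above.)

```haskell
import Control.Monad.ST
import Data.STRef

-- ST allows dynamic allocation of cells of different types,
-- something State s (for any single fixed s) cannot express.
twoCells :: Int
twoCells = runST $ do
  r1 <- newSTRef (0 :: Int)   -- a freshly allocated Int cell
  r2 <- newSTRef "hello"      -- a second cell, at a different type
  modifySTRef r1 (+ 1)
  s  <- readSTRef r2
  n  <- readSTRef r1
  pure (n + length s)
```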

The only syntactic encoding of general references I know of is the work on Fork ("A typed store-passing translation for general references", François Pottier, 2011), and it is not at all "straightforward". There are semantic methods to build equivalent tools in a possibly less sophisticated way (I'm thinking of a trail of work by Lars Birkedal and colleagues, or Aloïs Brunel's PhD thesis), but they don't really give an intuition of the underlying transformation.

The finer-grained the effects, the more subtle the translation. Given that separation logic is used to reason about subtle aspects of effects (separation/interference), I would expect an equally subtle translation to be required.

### Simple monads capture simple effects

There are, of course, much more sophisticated monads out there. If you're interested in the semantics of local store, Ian Stark's thesis is a starting point, with much work following; the most recent strands I'm familiar with are the algebraic accounts of Gordon Plotkin and John Power (2002), Sam Staton (2009, 2010, 2013), and Paul-André Melliès (2014 --- sorry for the non-free copy, I couldn't immediately find a free one on his home page).

### algebraic effects

Koka, Eff, PureScript, Kitten, and various other functional programming languages have been pursuing algebraic effects (recently mentioned on LtU), which offer a fine-grained means to type effects (e.g. so updating THIS variable is a different effect from updating THAT variable) and to compose systems with different effects.

Other interesting approaches become available if you're willing to model effects as first-class values, i.e. capabilities as inputs and effect-descriptors as outputs of functions. But a lot of those are difficult to protect unless you also have substructural (affine, relevant, linear) types (i.e. such that you can prevent an effect-descriptor from being lost or replayed).
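A tiny Haskell sketch of the capability idea (the `Cap` type and `writeVia` are illustrative inventions, not a real library): access to a cell is mediated by a first-class value, so a function can only write to cells it was handed a capability for. It also shows the weakness mentioned above: without substructural types, nothing stops the capability from being duplicated and the effect replayed.

```haskell
import Control.Monad.ST
import Data.STRef

-- A capability granting write access to one cell.
newtype Cap s a = Cap (STRef s a)

writeVia :: Cap s a -> a -> ST s ()
writeVia (Cap r) = writeSTRef r

demo :: Int
demo = runST $ do
  r <- newSTRef 0
  let cap = Cap r
  -- Without linearity, cap can be used twice (or dropped entirely);
  -- an affine/linear type system would rule this out.
  writeVia cap 1
  writeVia cap 2
  readSTRef r
```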

### This is folklore

Separation logic is a substructural logic, and people applying substructural logic to programming typically use a linear type theory of some kind. Linear types can be seen as a particularly well-behaved subsystem of separation logic, with the separating conjunction P * Q corresponding to the tensor product P ⊗ Q.

The key feature of separation logic that makes it possible to give modular specs of imperative programs is the frame rule. In terms of linear types, this boils down to having (a) a computational monad T(-) for effects, which has (b) a strength. That is, you want a map s : A ⊗ T(B) → T(A ⊗ B) (satisfying some equations I ignore).

In terms of stateful programs, the strength says that if you have a stateful value of type A, and separately a computation producing a value of type B, then you can produce a computation producing a pair of an A and a B, which is only possible if the computation of B cannot interfere with the state owned by A (i.e., is separate).
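In Haskell the strength is definable for every Functor, so it comes for free; in a linear setting it is a genuine extra requirement, because the pure `A` value is captured alongside the effects of `T(B)` and must be guaranteed not to be interfered with. A one-line sketch:

```haskell
-- The strength: pair a pure value with a computation's result
-- without running any additional effects. In Haskell this exists
-- for every Functor; linearly, it encodes the frame condition.
strength :: Functor t => (a, t b) -> t (a, b)
strength (a, tb) = fmap (\b -> (a, b)) tb
```

For example, `strength (1, Just 'x')` is `Just (1, 'x')`: the `1` is framed around the `Maybe` computation untouched.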

I don't know where this was first documented in the literature, though.