Category Theory

Lightweight Monadic Programming in ML

Many useful programming constructions can be expressed as monads. Examples include probabilistic modeling, functional reactive programming, parsing, and information flow tracking, not to mention effectful functionality like state and I/O. In this paper, we present a type-based rewriting algorithm to make programming with arbitrary monads as easy as using ML's built-in support for state and I/O. Developers write programs using monadic values of type M t as if they were of type t, and our algorithm inserts the necessary binds, units, and monad-to-monad morphisms so that the program type checks. Our algorithm, based on Jones' qualified types, produces principal types. But principal types are sometimes problematic: the program's semantics could depend on the choice of instantiation when more than one instantiation is valid. In such situations we are able to simplify the types to remove any ambiguity but without adversely affecting typability; thus we can accept strictly more programs. Moreover, we have proved that this simplification is efficient (linear in the number of constraints) and coherent: while our algorithm induces a particular rewriting, all related rewritings will have the same semantics. We have implemented our approach for a core functional language and applied it successfully to simple examples from the domains listed above, which are used as illustrations throughout the paper.
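To see the flavor of the rewriting, here is a minimal sketch in Haskell rather than the paper's ML core (lookupUser and greeting are illustrative names, not from the paper). The programmer writes greeting as if the monadic result of lookupUser were an ordinary String, and the elaboration inserts the bind and unit needed to make it type check:

    -- An illustrative effectful operation and a pure function.
    lookupUser :: Int -> IO String
    lookupUser uid = return ("user" ++ show uid)

    greet :: String -> String
    greet name = "Hello, " ++ name

    -- Written, morally, as: greeting uid = greet (lookupUser uid),
    -- where an IO String is used as if it were a String. The algorithm
    -- inserts the bind and the unit so it type checks at IO String:
    greeting :: Int -> IO String
    greeting uid = lookupUser uid >>= \name -> return (greet name)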

This is an intriguing paper, with an implementation in about 2,000 lines of OCaml. I'm especially interested in its application to probabilistic computing, yielding a result related to Kiselyov and Shan's Hansei effort, but without requiring delimited continuations (not that there's anything wrong with delimited continuations). On a theoretical level, it's nice to see such a compelling example of what can be done once types are freed from the shackle of "describing how bits are laid out in memory" (another such compelling example, IMHO, is type-directed partial evaluation, but that's literally another story).

Kleisli Arrows of Outrageous Fortune

When we program to interact with a turbulent world, we are to some extent at its mercy. To achieve safety, we must ensure that programs act in accordance with what is known about the state of the world, as determined dynamically. Is there any hope to enforce safety policies for dynamic interaction by static typing? This paper answers with a cautious ‘yes’.

Monads provide a type discipline for effectful programming, mapping value types to computation types. If we index our types by data approximating the ‘state of the world’, we refine our values to witnesses for some condition of the world. Ordinary monads for indexed types give a discipline for effectful programming contingent on state, modelling the whims of fortune in a way that Atkey’s indexed monads for ordinary types do not (Atkey, 2009). Arrows in the corresponding Kleisli category represent computations which reach a given postcondition from a given precondition: their types are just specifications in a Hoare logic!

By way of an elementary introduction to this approach, I present the example of a monad for interacting with a file handle which is either ‘open’ or ‘closed’, constructed from a command interface specified Hoare-style. An attempt to open a file results in a state which is statically unpredictable but dynamically detectable. Well typed programs behave accordingly in either case. Haskell’s dependent type system, as exposed by the Strathclyde Haskell Enhancement preprocessor, provides a suitable basis for this simple experiment.
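For a feel of the construction, here is a minimal sketch using GADTs and DataKinds rather than the paper's SHE preprocessor (the constructor names are illustrative, not McBride's). The key point is that opening a file has a statically unpredictable post-state, so the continuation receives a run-time witness and must be well typed in both branches:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, RankNTypes #-}
    import Data.Kind (Type)

    data HState = Open | Closed

    -- Run-time witness for the dynamically detected handle state.
    data SState :: HState -> Type where
      SOpen   :: SState 'Open
      SClosed :: SState 'Closed

    -- A command tree indexed by the handle state it starts from.
    data FileM :: HState -> Type -> Type where
      Return :: a -> FileM s a
      OpenF  :: FilePath
             -> (forall s'. SState s' -> FileM s' a)  -- handle both outcomes
             -> FileM 'Closed a
      ReadF  :: (Maybe Char -> FileM 'Open a) -> FileM 'Open a
      CloseF :: FileM 'Closed a -> FileM 'Open a

    -- Well typed programs behave accordingly in either case:
    readFirst :: FilePath -> FileM 'Closed (Maybe Char)
    readFirst p = OpenF p $ \w -> case w of
      SClosed -> Return Nothing                   -- the open failed
      SOpen   -> ReadF $ \c -> CloseF (Return c)  -- read one char, close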

I discovered this Googling around in an attempt to find some decent introductory material to Kleisli arrows. This isn't introductory, but it's a good resource. :-) The good introductory material I found was this.

A Lambda Calculus for Real Analysis

Abstract Stone Duality is a revolutionary paradigm for general topology that describes computable continuous functions directly, without using set theory, infinitary lattice theory or a prior theory of discrete computation. Every expression in the calculus denotes both a continuous function and a program, and the reasoning looks remarkably like a sanitised form of that in classical topology. This is an introduction to ASD for the general mathematician, with application to elementary real analysis.

This language is applied to the Intermediate Value Theorem: the solution of equations for continuous functions on the real line. As is well known from both numerical and constructive considerations, the equation cannot be solved if the function "hovers" near 0, whilst tangential solutions will never be found.

In ASD, both of these failures and the general method of finding solutions of the equation when they exist are explained by the new concept of overtness. The zeroes are captured, not as a set, but by higher-type modal operators. Unlike the Brouwer degree, these are defined and (Scott) continuous across singularities of a parametric equation.

Expressing topology in terms of continuous functions rather than sets of points leads to treatments of open and closed concepts that are very closely lattice- (or de Morgan-) dual, without the double negations that are found in intuitionistic approaches. In this, the dual of compactness is overtness. Whereas meets and joins in locale theory are asymmetrically finite and infinite, they have overt and compact indices in ASD.

Overtness replaces metrical properties such as total boundedness, and cardinality conditions such as having a countable dense subset. It is also related to locatedness in constructive analysis and recursive enumerability in recursion theory.
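Regarding the "hovering" point above: what survives constructively without extra hypotheses is only an approximate intermediate value theorem. This is the standard statement from constructive analysis (e.g. Bishop), quoted here for orientation rather than taken from the paper:

    f(a) < 0 < f(b) \;\wedge\; f \text{ continuous on } [a,b]
      \;\Longrightarrow\;
      \forall \varepsilon > 0.\; \exists x \in [a,b].\; |f(x)| < \varepsilon

An exact zero requires a hypothesis ruling out hovering, such as f being locally non-zero; that is the kind of side condition whose role the abstract assigns to overtness.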

Paul Taylor is deadly serious about the intersection of logic, mathematics, and computation. I came across this after beating my head against Probability Theory: The Logic of Science and Axiomatic Theory of Economics over the weekend, realizing that my math just wasn't up to the tasks, and doing a Google search for "constructive real analysis." "Real analysis" because it was obvious that that was what both of the aforementioned texts were relying on; "constructive" because I'd really like to develop proofs in Coq/extract working code from them. This paper was on the second page of results. Paul's name was familiar (and not just because I share it with him); he translated Jean-Yves Girard's regrettably out-of-print Proofs and Types to English and maintains a very popular set of tools for typesetting commutative diagrams using LaTeX.

Simplicial Databases

Simplicial Databases, David I. Spivak.

In this paper, we define a category DB, called the category of simplicial databases, whose objects are databases and whose morphisms are data-preserving maps. Along the way we give a precise formulation of the category of relational databases, and prove that it is a full subcategory of DB. We also prove that limits and colimits always exist in DB and that they correspond to queries such as select, join, union, etc. One feature of our construction is that the schema of a simplicial database has a natural geometric structure: an underlying simplicial set. The geometry of a schema is a way of keeping track of relationships between distinct tables, and can be thought of as a system of foreign keys. The shape of a schema is generally intuitive (e.g. the schema for round-trip flights is a circle consisting of an edge from A to B and an edge from B to A), and as such, may be useful for analyzing data. We give several applications of our approach, as well as possible advantages it has over the relational model. We also indicate some directions for further research.
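As a minimal illustration of the "limits correspond to queries" claim (the representation below is mine, not the paper's): a relational join over a shared key is a pullback, which in Set is computed as the set of pairs agreeing under the two key projections.

    -- Tables as lists of rows; p and q project out the shared key.
    type Table r = [r]

    -- The pullback of p and q: all row pairs whose keys agree.
    joinOn :: Eq k => (r -> k) -> (s -> k) -> Table r -> Table s -> Table (r, s)
    joinOn p q rs ss = [ (r, s) | r <- rs, s <- ss, p r == q s ]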

This is what happens when you try to take the existence of ORDER BY and COUNT in SQL seriously. :-)

If you're puzzled by how a geometric idea like simplices could show up here, remember that the algebraic view of simplicial sets is as presheaves on the category of finite nonempty total orders and order-preserving maps. Every finite sequence gives rise to a total order on its set of positions, and tables have rows and columns as sequences!
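In symbols, that standard definition reads:

    \mathbf{sSet} \;=\; \mathbf{Set}^{\Delta^{\mathrm{op}}},
    \qquad X_n = X([n]), \quad [n] = \{0 < 1 < \dots < n\}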

An Innocent Model of Linear Logic

An Innocent Model of Linear Logic by Paul-André Melliès was referenced by Noam in a serendipitous subthread of the "Claiming Infinities" thread.

Here's the abstract:

Since its early days, deterministic sequential game semantics has been limited to linear or polarized fragments of linear logic. Every attempt to extend the semantics to full propositional linear logic has bumped against the so-called Blass problem, which indicates (misleadingly) that a category of sequential games cannot be self-dual and cartesian at the same time. We circumvent this problem by considering (1) that sequential games are inherently positional; (2) that they admit internal positions as well as external positions. We construct in this way a sequential game model of propositional linear logic, which incorporates two variants of the innocent arena game model: the well-bracketed and the non well-bracketed ones.

The introduction goes on to refer to André Joyal's "Category Y with Conway games as objects, and winning strategies as morphisms, composed by sequential interaction," and points out that "it is a precursor of game semantics for proof theory and programming languages," and is "a self-dual category of sequential games." The foreword mentions that the paper goes on to give "a crash course on asynchronous games" and then "constructs a linear continuation monad equivalent to the identity functor, by allowing internal positions in our games, [which] circumvents the Blass problem and defines a model of linear logic."

Jacques Carette called this paper mind-blowing. My mind-blow warning light already exploded. I'm posting this paper because I know a number of LtUers are interested in these topics, and this way I can buttonhole one of them the next time I see them and ask them to explain it to me. ;)

Lawvere Theories and Monads

Martin Hyland and John Power (2007). The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads. ENTCS 172:437-458.

Both monads and Lawvere theories provide characterisations of algebraic structure, with monads providing the more general characterisation. The authors provide an introduction to Lawvere theories, discuss their relationship to sets, and explain why monads became the more popular treatment.

Then they tackle the application of the theory to the semantics of side effects, where they argue that the generality of monads allows them to characterise computational phenomena, such as partiality and continuations, that have nothing to do with side effects, and that Lawvere theories more cleanly characterise what side effects actually are.
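To make the contrast concrete, here is a minimal sketch of the operations-first view (a Plotkin/Power-style presentation of global state by a lookup and an update operation; the equations are elided and the names are illustrative): the theory is given by a signature of operations, and the familiar state monad is recovered, modulo those equations, as the free monad on the signature.

    {-# LANGUAGE DeriveFunctor #-}

    -- The signature of the theory of global state: two operations.
    data StateSig s k = Get (s -> k) | Put s k
      deriving Functor

    -- The free monad on a signature: terms of the theory.
    data Free f a = Pure a | Op (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a) = Pure (g a)
      fmap g (Op fk)  = Op (fmap (fmap g) fk)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g <*> x = fmap g x
      Op fk  <*> x = Op (fmap (<*> x) fk)

    instance Functor f => Monad (Free f) where
      Pure a >>= g = g a
      Op fk  >>= g = Op (fmap (>>= g) fk)

    -- Modulo the lookup/update equations, this is the state monad:
    run :: Free (StateSig s) a -> s -> (a, s)
    run (Pure a)        s = (a, s)
    run (Op (Get k))    s = run (k s) s
    run (Op (Put s' k)) _ = run k s'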

This paper is a good introduction to an important line of recent research done by Hyland & Power; cf. also the LtU story Combining computational effects.

Parameterized Notions of Computation

Parameterized Notions of Computation, Robert Atkey, JFP 2008.

Moggi's Computational Monads and Power et al's equivalent notion of Freyd category have captured a large range of computational effects present in programming languages. Examples include non-termination, non-determinism, exceptions, continuations, side-effects and input/output. We present generalisations of both computational monads and Freyd categories, which we call parameterised monads and parameterised Freyd categories, that also capture computational effects with parameters.

Examples of such are composable continuations, side-effects where the type of the state varies and input/output where the range of inputs and outputs varies. By also considering structured parameterisation, we extend the range of effects to cover separated side-effects and multiple independent streams of I/O. We also present two typed λ-calculi that soundly and completely model our categorical definitions — with and without symmetric monoidal parameterisation — and act as prototypical languages with parameterised effects.

Once you've programmed with monads for a while, it's pretty common to start defining parameterized families of monads -- e.g., we might define a family of type constructors for IO, in which the program type additionally tracks which files the computation reads from and writes to. This is a very convenient programming pattern, but its theory is honestly a little sketchy: on what basis do we conclude that the indices we define actually track what we intend them to? And furthermore, why can we believe that (say) the monadic equational laws still apply? That's the question to which Atkey lays out a nice solution. He gives a nice categorical semantics for indexed, effectful computations, and then cooks up lambda calculi whose equational theory corresponds to the equations his semantics justifies.
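Here is a minimal Haskell transcription of the basic definition (the varying-state example is the standard illustration; the names are mine): the two indices track the "state of the world" before and after the computation.

    {-# LANGUAGE KindSignatures #-}
    import Data.Kind (Type)

    -- A parameterised monad: return is index-preserving, and bind
    -- composes computations whose indices match up end to end.
    class PMonad (m :: Type -> Type -> Type -> Type) where
      preturn :: a -> m i i a
      pbind   :: m i j a -> (a -> m j k b) -> m i k b

    -- Side effects where the type of the state varies.
    newtype PState i j a = PState { runPState :: i -> (a, j) }

    instance PMonad PState where
      preturn a          = PState (\s -> (a, s))
      pbind (PState f) g = PState (\s -> let (a, s') = f s
                                         in runPState (g a) s')

    get :: PState s s s
    get = PState (\s -> (s, s))

    -- put changes the type of the state from s to t.
    put :: t -> PState s t ()
    put t = PState (\_ -> ((), t))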

The application to delimited continuations is quite nice, and the type theories can also give a little insight into the basics of how stuff like Hoare Type Theory works (which uses parameterized monads, with a very sophisticated language of parameters).

On a slightly tangential note, this also raises in my mind a methodological point. Over the last n years, we've seen many people identify certain type constructors whose usage is pervasive and greatly simplified by some syntactic extensions -- monads, comonads, applicative functors, arrows, and so on. It's hard to believe that we have exhausted the list of interesting types, so together these examples constitute a good argument for some kind of language extension mechanism, such as macros. However, all these examples also raise the bar for when a macro is a good idea, because what makes them compelling is precisely that the right syntax yields an interesting and pretty equational theory in the extended language.

Computation and the Periodic Table

By now there is an extensive network of interlocking analogies between physics, topology, logic and computer science, which can be seen most easily by comparing the roles that symmetric monoidal closed categories play in each subject. However, symmetric monoidal categories are just the n = 1, k = 3 entry of a hypothesized “periodic table” of k-tuply monoidal n-categories. This raises the question of how these analogies extend. We present some thoughts on this question, focusing on how monoidal closed 2-categories might let us understand the lambda calculus more deeply.

Link to the talk

via The n-Category Café

Arrows generalise monads and idioms

Two fresh papers from the Edinburgh theory stable:
  • Lindley, Wadler & Yallop, 2008. The Arrow Calculus (Functional Pearl) (submitted to ICFP).
  • Lindley, Wadler & Yallop, 2008. Idioms are oblivious, arrows are meticulous, monads are promiscuous (submitted to MSFP).
    We revisit the connection between three notions of computation: Moggi’s monads, Hughes’s arrows and McBride and Paterson’s idioms (also called applicative functors). We show that idioms are equivalent to arrows that satisfy the type isomorphism A ∼> B ≅ 1 ∼> (A → B) and that monads are equivalent to arrows that satisfy the type isomorphism A ∼> B ≅ A → (1 ∼> B). Further, idioms embed into arrows and arrows embed into monads.
The first paper introduces a reformulation of the Power/Thielecke/Paterson/McBride axiomatisation of arrows, which the authors argue is more natural, and shows that arrows generalise both monads and idioms. The second paper studies the relationships between the three formalisations in more formal depth; in particular, the results about applicative functors struck me as significant.
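Both embeddings have direct Haskell renderings; the sketch below is mine, though Static is the usual name for the idiom-to-arrow construction, and Kleisli (already in Control.Arrow) is the monad-to-arrow one.

    import Prelude hiding (id, (.))
    import Control.Category (Category (..))
    import Control.Arrow (Arrow (..))

    -- Idioms embed as "static" arrows, witnessing A ~> B ≅ 1 ~> (A -> B):
    newtype Static f a b = Static { runStatic :: f (a -> b) }

    instance Applicative f => Category (Static f) where
      id = Static (pure id)
      Static g . Static f = Static ((.) <$> g <*> f)

    instance Applicative f => Arrow (Static f) where
      arr f = Static (pure f)
      first (Static f) = Static (fmap firstFn f)
        where firstFn g (a, c) = (g a, c)

    -- Monads embed as Kleisli arrows, Control.Arrow's
    --   newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b },
    -- witnessing A ~> B ≅ A -> (1 ~> B).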

Species: making analytic functors practical for functional programming

Species: making analytic functors practical for functional programming, Jacques Carette and Gordon Uszkay. Submitted to MSFP 2008.

Inspired by Joyal's theory of species, we show how to add new type constructors and constructor combinators to the tool set of functional languages. We show that all the important properties of inductive types lift to this new setting. Species are analytic functors, representing a broader range of structures than regular functors. This includes structures such as bags, cycles and graphs. The theory is greatly inspired by combinatorics rather than type theory: this brings interesting new tools to bear, but also requires more work on our part to show that species form a good foundation for a theory of type constructors. The combinatorial tools provide a calculus for these structures which has strong links with classical analysis, via a functorial interpretation of generating series. Furthermore, we show how generic programming idioms also generalise to this richer setting. Once the theory is understood, various methods of implementation are relatively straightforward.

Learning more about the theory of species and working out its implications for programming has been on my to-do list for several years now. It really seems like the natural way to more tightly integrate combinatoric ideas into the functional programming style. (For an example of how fruitful this connection can be, consider how datatype differentiation explains zippers/functional pointers.)

However, there's been this whole "graduation" and "primary research focus" stuff that's kept getting in the way, so I'm happy to see that Carette and Uszkay are figuring it out so I can just crib from them. :)
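The zipper remark above deserves a tiny illustration (standard folklore, not something from the paper): the derivative of the list functor L(a) = 1 + a · L(a) is L(a) · L(a), the one-hole context of a list, i.e. the list zipper.

    -- The elements before the hole (reversed) and after it.
    data ListZipper a = ListZipper [a] [a]

    -- Plugging an element back into the hole recovers the list.
    plug :: a -> ListZipper a -> [a]
    plug x (ListZipper before after) = reverse before ++ (x : after)

    -- Moving the focus left, when possible.
    moveLeft :: ListZipper a -> Maybe (a, ListZipper a)
    moveLeft (ListZipper (l:ls) rs) = Just (l, ListZipper ls rs)
    moveLeft (ListZipper []     _ ) = Nothing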
