Does anyone know of work done on co-data-types?

There has been a lot of work done on co-data and stream programming, but the assumption is usually of a very simple repetitive data stream. Has anyone done any work on highly structured co-data or co-data-types? A data structure passed into a function represents a limit on all of its constituent types; that is, the structure must be fully constructed (AND logic) before the function can be called. But a co-data structure (co-type) passed into a function represents a co-limit and only needs to be partially constructed (OR logic); perhaps the function can even return partial (co-data) results based upon the co-data-type it receives. This seems to me a very relevant topic for advancing stream programming and exploiting multiple layers of data parallelism.

Data Level Parallelism

In terms of work on co-data that isn't streams, I found that the generic approximation lemma was very useful for looking at bisimulation over co-data trees. Graham Hutton and Jeremy Gibbons have a really nice paper on how to use it here.

In Coq you can construct coinductive types and create corecursive functions that themselves return coinductive types. There are lots of examples of Coq code that uses infinite or potentially infinite trees.
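As a sketch of the same idea outside Coq (Haskell's laziness plays the role of `CoInductive`/`cofix` here; the type and function names below are my own illustration):

```haskell
-- An infinite binary tree; in Coq this would be a CoInductive type
-- built with cofix, here Haskell's laziness does the job.
data Tree a = Node a (Tree a) (Tree a)

-- The infinite tree of bit-strings: each node is labelled by the
-- path taken to reach it from the root.
paths :: Tree String
paths = go "" where go p = Node p (go (p ++ "0")) (go (p ++ "1"))

-- Observing a finite part of the infinite tree is perfectly fine.
toDepth :: Int -> Tree a -> [a]
toDepth 0 _            = []
toDepth n (Node x l r) = x : toDepth (n - 1) l ++ toDepth (n - 1) r
```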

Of course in Haskell you can pun data and co-data and people do it regularly, though it does seem as though streams are by far the most common example.
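A minimal sketch of that pun (the `Stream` type and helpers below are my own illustration, not a library):

```haskell
-- In Haskell the same kind of declaration can be read as data or
-- co-data; laziness lets us build values that are never fully
-- constructed.
data Stream a = Cons a (Stream a)

headS :: Stream a -> a
headS (Cons x _) = x

tailS :: Stream a -> Stream a
tailS (Cons _ xs) = xs

-- An infinite stream: meaningless as inductive data, fine as co-data.
nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))

-- A consumer only needs the stream to be partially constructed.
takeS :: Int -> Stream a -> [a]
takeS 0 _ = []
takeS n s = headS s : takeS (n - 1) (tailS s)
```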

I've had the same thought lately that data-level parallelism is where it's at, and that coinductive methods are the simplest way to look at it. For servers and other reactive systems it is useful to think of a program as having potentially many I/O streams which "drive" computation of different parts of the program. This would make it much easier to write user interfaces which must react to user input while also performing other I/O actions. Maybe someone can give some pointers to functional programming systems which exploit this?

I'm not sure exactly what you had in mind in terms of more general codata types and how they would be used for the exploitation of data parallelism. Care to clarify?

data level parallelism

I meant more complicated types, not more general. Co-inductive types sound like what I should have been searching for. I'll read up on it. In terms of exploiting data-parallelism, multiple levels of parallelism can exist over different levels of state. For example, processing the array [13,2,23,4]: I can have a function, say +1, that can be applied in parallel to each member, or a function that needs an adjacent pair and could process [13,2], [2,23], [23,4] simultaneously. Another function might need four numbers to produce a result. Now convert the arrays to streams, the function parameters to co-limits, and the functions to objects capable of holding state, and we're talking about parallel processing over a stream of data with different objects holding state over different parts of the stream. The degree to which multiple instances of these objects can be instantiated and run in parallel determines the maximum amount of data parallelism possible (I think!). I built a system (unfortunately proprietary) that works like this; now I'm trying to understand it, and perhaps build something better.
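The array example can be sketched as follows (ordinary sequential Haskell; each `map` marks an independent level that a parallel implementation could exploit, and the function names are mine):

```haskell
import Data.List (tails)

-- Level 1: each element is independent of the others,
-- so the map could be applied to all members in parallel.
elementwise :: [Int] -> [Int]
elementwise = map (+ 1)

-- Sliding windows of size n over the input.
windows :: Int -> [a] -> [[a]]
windows n xs = [ take n t | t <- tails xs, length t >= n ]

-- Level 2: each adjacent pair can be processed independently.
pairwise :: [Int] -> [Int]
pairwise = map sum . windows 2
```

On `[13,2,23,4]`, `elementwise` gives `[14,3,24,5]` and `windows 2` gives `[[13,2],[2,23],[23,4]]`, the three simultaneous pair-computations described above.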

Charity, CPL

There is the Charity language, which comes with coinductive data types as well as the regular inductive data types. There is Hagino's thesis, Categorical Programming Language, which discusses codatatypes.

The difference between a data type and a codata type is that a data type is given by a set of constructors and comes with a fold operation (or pattern matching) to destruct it, and a codata type is given by a set of destructors and comes with an unfold operation (or the dual of pattern matching - 'record' or 'merge') to construct it.
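In Haskell the duality looks roughly like this (ordinary lists stand in for an arbitrary data type, and my own `Stream` type for a codata type):

```haskell
-- Data: introduced by constructors, eliminated by a fold.
total :: [Int] -> Int
total = foldr (+) 0

-- Codata: eliminated by destructors (shead, stail),
-- introduced by an unfold from a seed.
data Stream a = Stream { shead :: a, stail :: Stream a }

unfoldS :: (s -> a) -> (s -> s) -> s -> Stream a
unfoldS obs next seed = Stream (obs seed) (unfoldS obs next (next seed))

countFrom :: Int -> Stream Int
countFrom = unfoldS id (+ 1)

-- Finite observation of the unfolded stream.
takeS :: Int -> Stream a -> [a]
takeS 0 _ = []
takeS n s = shead s : takeS (n - 1) (stail s)
```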

Hope that helps. The publications of Bart Jacobs might be useful if you haven't seen them, too.

Hagino's thesis is very interesting. I'm surprised I've not seen it before, and it appears never to have been posted to LtU. Does anyone know if there is an implementation of CPL floating around?

Especially interesting is the inference of the adjunction. We can simply write a data type such as the disjoint union as

left object A + B with [,] is
  inl : A -> A + B
  inr : B -> A + B 
end object

And [,] is inferred from the resulting commutative diagram (excuse the ascii art).

A --> A+B <-- B 
|      |      | 
f    [f,g]    g
|      |      |
v      v      v
C ==== C ==== C 

It's interesting also to see his criticism of ADTs and algebraic specification. I'd be interested to know what else has been done subsequently in this area.


I've also wondered if there's an implementation of CPL, or related work building directly on Hagino's... I would like to put a bounty on such a thing, but I have nothing to offer but my admiration.

Maybe I will have one - sort of

So far I haven't found an implementation, and I'm glad someone else is interested. I started my own implementation a couple of weeks ago - sort of.
My goal was to be able to evaluate expressions and to see where the mind goes thinking like that.
So far it is a bunch of Haskell code that allows me to evaluate expressions, with the reduction rules hardcoded and no typing. So, not a real implementation, but I have a simple toplevel that takes a prelude of definitions, and a prompt that parses and evaluates... I can

:trace add.pair(s.0, s.s.0)
:edit let even = pr(in1, case(in2,in1))
:edit let not=case(in2,in1)

Of course the real fun would be to build the reduction rules from the type definitions... I hope I can bootstrap it some day, but for the moment I plan to 'compile by hand' some other types to play with, maybe trees and automata.


Here's an implementation of Hagino's CPL: A Categorical Programming Language

See also the discussion

A few things...

Also broaden your search to include material on coalgebra.

You will almost certainly want to look at Varmo Vene's thesis, Categorical programming with inductive and coinductive types, as well as many other papers of Vene's (often with Uustalu).

I'd also second the recommendation to look at Bart Jacobs' stuff. In particular, he has a book draft (Introduction to Coalgebra: Towards Mathematics of States and Observations) that you might be interested in.

I think this is a very interesting area with many deep connections. Some of these connections are not yet well understood, or at least not well-used by existing languages.

After reading a bit of Jacobs...

Jacobs' book is very well written and just about at my level. I haven't finished it yet and still don't understand a lot, but I have a question.

What do you call the function that maps a co-inductive type back into a POID (Plain Old Inductive Data) type? A -> {A U ?} -> A, the ? being my symbol for exception, non-termination or any untoward result.

An example of this would be reading a file stream. While reading the stream your data is co-data; it remains co-data if there is an exception or the stream never terminates. But if the stream does terminate (eof), then the data could become a POID again. From a practical point of view it seems to me that a function processing a co-inductive type could be very interested in observing this transition.
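A rough sketch of that transition in Haskell (the `Source` type and the fuel-bounded `observe` are my own illustration, not standard terminology):

```haskell
-- A possibly-unbounded source: co-data while being read, but if we
-- ever hit Eof, the prefix we saw becomes plain inductive data again.
data Source a = More a (Source a) | Eof | Err String

-- Try to observe the whole source as a finite list.  The fuel bound
-- keeps the function total even on a non-terminating source; Nothing
-- means "still co-data" (error, or we gave up waiting for eof).
observe :: Int -> Source a -> Maybe [a]
observe _    Eof        = Just []
observe _    (Err _)    = Nothing
observe 0    _          = Nothing            -- gave up: still co-data
observe fuel (More x s) = (x :) <$> observe (fuel - 1) s
```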

Delayed reply...

I've been thinking about similar things lately, from time to time. The formal thing you suggest is perhaps an F-bialgebra, but I'm not sure that really captures the idea you're talking about.

It's worth noting, incidentally, that for any initial F-algebra f: FA -> A, f is an isomorphism, so there is an "inverse" coalgebra g: A -> FA. Thus (A, f, g) is a bialgebra. (That initial algebras are fixed points of F, and that f is iso, is basically Lambek's Lemma.) Of course, likewise for terminal coalgebras, although in many cases they are different fixed points, and the resulting bialgebras are different. But again, I don't think this is really what you're interested in.
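That isomorphism renders in Haskell as the standard `Fix`/`cata` folklore (names below are mine):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Lambek's Lemma in miniature: the initial algebra In has an
-- inverse coalgebra out, so the carrier Fix f is a bialgebra.
newtype Fix f = In { out :: f (Fix f) }

-- The fold that comes with the initial algebra.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- Example functor F X = 1 + Int * X; Fix ListF is lists of Ints.
data ListF x = NilF | ConsF Int x deriving Functor

toList :: Fix ListF -> [Int]
toList = cata step where
  step NilF        = []
  step (ConsF n t) = n : t

-- In . out = id (and out . In = id): the round trip is the identity.
roundTrip :: Fix ListF -> Fix ListF
roundTrip = In . out
```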

I have a strong hunch that there is indeed more to say relating these various perspectives. But I do not yet know what it is, or if anyone has yet said it. I don't think it's as simple as the first-order bialgebra notion. I'm more interested, for example, in the way that we can use a coalgebraic formalism (e.g., objects) to model algebraic data, but I don't have anything interesting to say about it.