Recursive types

In some literature, including Wikipedia, a distinction I don't quite understand is made between iso-recursive and equi-recursive typing. In both cases, however, there is a concept of unrolling a recursive type by replacing the fixpoint variable with the recursive term.

I believe this idea is suboptimal and should be replaced by a better one, which I shall describe below. I am interested in whether this has been done before, and whether my analysis (and the concrete algorithm I have written) is correct.

The problem with iso-recursion is that it induces an equivalence relation on type representations, and thus a partition, which fails to place equi-equivalent types in the same class. Here is an example:

([] -> [([] -> fix-2)])
([] -> [fix-2])

Here [..] denotes an n-ary tuple type, -> is a function type, and fix-2 is a fixpoint whose binder is implicitly 2 levels up in the structure. These encodings represent the same type; the second one is recursive, but its unrolling is not equal to the first type.

I have found a solution to this problem: instead of a full unrolling, we can unroll just one level. This is much better! In particular, my re-rolling algorithm subsumes the standard re-rolling by a full circle, and also produces a unique canonical representative of the class. It should work for monomorphic and first-order polymorphic types, at least if there is only a single fixpoint.

I will not give the algorithm here but I will give a description because it is fun! Consider a piece of string representing your type, with coloured segments for each level. The end of the string is knotted around the string some way up. The knot is called the fixpoint and the place it is knotted to is called the binding point.

The algorithm for re-rolling the string simply rolls the circle up the string one segment and compares the colours. If they're equal, roll up another segment. Continue until you either run out of string, or you get a mismatch, in which case backtrack by one. You can finish by cutting the matching tail off and doing a new knot.

Unrolling is the same. Just unroll ONE segment to shift the fixpoint binder down out of the way.

A standalone demo in OCaml is here:

https://github.com/felix-lang/felix/blob/master/m.ml

Looks a little different

I think your example isn't expressible in Haskell (or OCaml, I think). In Haskell you might try to write it like this, but this isn't legal:

type X = [[] -> X]
type Y = [] -> X

type Z = [] -> [Z] -- Are Y and Z the same type?  

You can only introduce recursive types at named datatypes. Iso-recursion crops up when you treat recursion, sums and products as separate type constructions. Then the question is whether e.g. (mu X. 1 + X) is the same type as 1 + (mu X. 1 + X). With iso-recursion, they are different but isomorphic types, and you have to manually coerce between them.

Unrolling

I am not really sure what this is about. Recursive types occur when you unify a type variable with a term in which that same variable occurs. The decision is whether to allow them or not. Recursive types lead to unsoundness in a Herbrand universe, but are fine with rational trees.

Type identity is easily solved by unification on rational trees.

These rational trees are equi-recursive types. If you traverse the regular tree of a type, and break the cycles, marking the cycle with a new variable name, you can convert equi-recursive types to iso-recursive.

In my opinion, you always want to operate on equi-recursive types and use rational tree unification when operating on them, but you cannot print out an equi-recursive type, so you have to convert into iso-recursive to print/write the types down.
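
To make the cycle-breaking conversion above concrete, here is a minimal OCaml sketch (the rnode graph type, the node ids, and the mu_ty output type are all invented for illustration, not taken from any existing implementation). It walks the graph; a back-edge to a node already on the current path becomes a variable, and the binder is kept only if that variable is actually used:

(* A rational tree as a (possibly cyclic) graph of mutable nodes. *)
type rnode = { id : int; label : string; mutable kids : rnode list }

(* Iso-recursive (mu) terms produced by breaking the cycles. *)
type mu_ty =
  | Var of string
  | Node of string * mu_ty list
  | Mu of string * mu_ty

let rec occurs v = function
  | Var x -> x = v
  | Node (_, ks) -> List.exists (occurs v) ks
  | Mu (x, t) -> x <> v && occurs v t

(* Convert a cyclic graph to a mu term: a back-edge to a node on the
   current path becomes a variable; the binder is kept only if used. *)
let to_iso (root : rnode) : mu_ty =
  let rec go path n =
    match List.assq_opt n path with
    | Some name -> Var name
    | None ->
        let name = "a" ^ string_of_int n.id in
        let body = Node (n.label, List.map (go ((n, name) :: path)) n.kids) in
        if occurs name body then Mu (name, body) else body
  in
  go [] root

(* Example: the cyclic graph for  unit -> itself,  i.e.  mu a. unit -> a. *)
let () =
  let u = { id = 0; label = "unit"; kids = [] } in
  let x = { id = 1; label = "->"; kids = [] } in
  x.kids <- [u; x];
  assert (to_iso x = Mu ("a1", Node ("->", [Node ("unit", []); Var "a1"])))

Note that this duplicates any shared subtrees that are not on the current path; the loss of sibling sharing discussed further down the thread is exactly this.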

iso-recursive, equi-recursive

Your use of iso-recursive is not correct.

Both iso-recursive and equi-recursive can be represented with the same syntax (mu a. t) (or μα.τ if you like Unicode), where (a) is a type variable and (t) is a type expression.

The unfolding of this type is t[(mu a. t)/a]: replacing each occurrence of (a) in (t) by the type (mu a. t) itself.

The difference between iso-recursive and equi-recursive is as follows:

- with equi-recursive types, a mu-type is *equal* to its unfolding (the two types have the exact same elements, there is a conversion rule that can be used at any point in the derivation, you need to check equality of rational trees to decide type equality)

- with iso-recursive types, a mu-type is *isomorphic* to its unfolding, and the isomorphism corresponds to term syntax (fold e), (unfold e), where fold, unfold act as functions from a mu-type to its unfolding and conversely. In particular you never have the problem of "not knowing how deep to unroll", the unrolling is controlled by the term syntax. (In other words, users have to write equivalence witnesses by hand.)
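
As a concrete reference point, here is a minimal OCaml sketch of the substitution-based unfolding defined above, on an illustrative named-variable representation (the ty type, its constructors, and the helper names are made up for the example; this is not the representation from the original post):

(* An illustrative named-variable representation of recursive types. *)
type ty =
  | TVar of string
  | TUnit
  | TFn of ty * ty        (* function type *)
  | TTup of ty list       (* n-ary tuple type *)
  | TMu of string * ty    (* mu a. t *)

(* Capture-naive substitution t[s/x]; assumes binder names are distinct. *)
let rec subst x s = function
  | TVar y -> if y = x then s else TVar y
  | TUnit -> TUnit
  | TFn (a, b) -> TFn (subst x s a, subst x s b)
  | TTup ts -> TTup (List.map (subst x s) ts)
  | TMu (y, t) -> if y = x then TMu (y, t) else TMu (y, subst x s t)

(* The unfolding of mu a. t is t[(mu a. t)/a]. *)
let unfold = function
  | TMu (a, t) as m -> subst a m t
  | t -> t

(* unfold (mu a. unit -> a)  =  unit -> (mu a. unit -> a) *)
let () =
  let m = TMu ("a", TFn (TUnit, TVar "a")) in
  assert (unfold m = TFn (TUnit, m))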

Ok but is the algorithm correct?

I accept this but my issue is with the folding and unfolding.

My contention is that the definition is weak, and I have found a better one. I am no genius so I guess either I am wrong or someone else found it too.

In my code, a small example representation of type terms is defined:

type btyp =
| BTYP_fix of int               (* fixpoint; the (non-positive) int says how many levels up the implicit binder sits *)
| BTYP_function of btyp * btyp  (* function type: domain, codomain *)
| BTYP_tuple of btyp list       (* n-ary tuple type *)
| BTYP_hole                     (* not a real type; used to express a zipper *)

The "hole" is not really a type term but is included to express a zipper. The code includes a type_eq function to calculate if too terms represent the same type. The two example types compare equal.

The conventional wisdom is, as you say, folding and unfolding, but I contend this notion is weaker than necessary and that a new pair of operations is more powerful: wrapping and unwrapping.

The wrap algorithm is given in the code. It converts the first type to the second, which is irreducible and recursive. I haven't written the corresponding unwrap operation yet.

So I would like to know if the algorithm is correct, and my claim is that wrap/unwrap partitions type terms into equivalence classes which express type equality, and selects a canonical representative which is uniquely minimal. In particular it produces the unique recursive type in the class if there is one.

So because wrap/unwrap is stronger than fold/unfold, it should replace it everywhere: in the literature, in your code. And in particular in my code because my compiler has to generate, for each type, a nominal type in the target language, which is C++, such that if two types are equal, the same nominal type is generated (otherwise assignment, passing values to functions, etc, will not typecheck in C++).

That is, I propose changing the understanding of iso-equality to state that two types are isomorphic if wrap(A) = wrap(B); for equi-equality, the usual type equality (also exhibited in my OCaml program) can also be replaced by the term equality wrap(A) = wrap(B).

I'm not sure how to write the unwrap rule in type-theoretic form, but the rule is simply this: take a mu term whose body (the argument of the mu binder) consists of a combinator f applied to an argument term, and replace it by f applied to a mu term over that argument term, in which each occurrence of the type variable of the mu binder has been replaced by f applied to that type variable.

mu a. f b --> f (mu a. b), where a in b is replaced by f a

In my example, f is either a tuple or a function constructor. In other words, push the mu binder down one level, and replace the mu variable in the term with the combinator applied to the variable, to compensate.

If you do this enough times (the number of levels between the mu binder and the variable) you get an unfold. So this operation subsumes unfold.
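
For concreteness, here is a hedged OCaml sketch of this one-level unwrap on an illustrative named-variable representation (the ty type and subst are repeated from the earlier sketch; none of this is the compiler's fix-int encoding). It only handles the simple case where the bound variable occurs in exactly one child of the top constructor, and it is capture-naive, i.e. it assumes binder names are distinct:

type ty =
  | TVar of string
  | TUnit
  | TFn of ty * ty        (* function type *)
  | TTup of ty list       (* n-ary tuple type *)
  | TMu of string * ty    (* mu a. t *)

let rec subst x s = function
  | TVar y -> if y = x then s else TVar y
  | TUnit -> TUnit
  | TFn (a, b) -> TFn (subst x s a, subst x s b)
  | TTup ts -> TTup (List.map (subst x s) ts)
  | TMu (y, t) -> if y = x then TMu (y, t) else TMu (y, subst x s t)

let rec occurs x = function
  | TVar y -> y = x
  | TUnit -> false
  | TFn (a, b) -> occurs x a || occurs x b
  | TTup ts -> List.exists (occurs x) ts
  | TMu (y, t) -> y <> x && occurs x t

(* One-level unwrap:  mu a. f b  -->  f (mu a. b[a := f a]),
   where f is the top constructor with a hole at the unique child in
   which the bound variable occurs.  Returns None in the other cases. *)
let unwrap = function
  | TMu (a, TFn (d, c)) when occurs a c && not (occurs a d) ->
      Some (TFn (d, TMu (a, subst a (TFn (d, TVar a)) c)))
  | TMu (a, TFn (d, c)) when occurs a d && not (occurs a c) ->
      Some (TFn (TMu (a, subst a (TFn (TVar a, c)) d), c))
  | TMu (a, TTup ts) when List.length (List.filter (occurs a) ts) = 1 ->
      (* f a = the tuple with the recursive child replaced by the variable *)
      let f_a = TTup (List.map (fun t -> if occurs a t then TVar a else t) ts) in
      Some (TTup (List.map
        (fun t -> if occurs a t then TMu (a, subst a f_a t) else t) ts))
  | _ -> None

(* The pair from the original post:
   unwrap (mu a. unit -> tuple[a])  =  unit -> (mu a. tuple[unit -> a]) *)
let () =
  let t2 = TMu ("a", TFn (TUnit, TTup [TVar "a"])) in
  let t1 = TFn (TUnit, TMu ("a", TTup [TFn (TUnit, TVar "a")])) in
  assert (unwrap t2 = Some t1)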

It would be nice to see this expressed in type-theoretic terms and a proof of correctness given, or a counterexample; or, if the result is already known (which seems likely), a link to a paper. Some of the papers I have looked at may have theorems implying this, since they're dealing with much richer type systems and claim principal types exist, but they're beyond my understanding.

Clarification needed

(To clarify, my comment above was strictly a correction of Keean Schupke's comment above.)

Unfortunately, your explanation is very hard to understand. The textual explanation in your last reply (the paragraph "The rule is simply to replace [...]") is super-hard to parse correctly in the first place. The more formal rule has some missing words that make it difficult to follow. Finally, I am not sure what you mean by fix-2 above. Wouldn't it be possible to rephrase your examples using the usual mu-type notation, which is very clear about where a variable is bound?

While I was writing what I did not understand in your notation, I think I got what it meant. Lightly adapted, it reads:

mu a. f t --> f (mu b. t[a := f b])

Is that the definition of wrap? It seems to do just one level of unfolding, that is push the mu-binder inside one level of type constructors. But then what is your strategy to iterate this wrapping? (I find the code of your 'fold' function a bit hard to follow; but then it's late in the night in my current timezone.)

To give some examples that may help others: in the type mu a. 1 + (a * int), then f(x) is (1 + x), and b in skaller's notation above is a * int, so we have the (un?)wrapping

mu a . 1 + (a * int)  -->  1 + (mu b . (1 + b) * int)

(When the mu-variable occurs several times in the type, it's possible to do the same thing but it's more cumbersome to give examples.)

Also note that the assumption that the mu-type starts with a type constructor (f ...) corresponds to assuming that the mu-types are productive (not of the form mu a. a which is arguably ill-defined) -- a reasonable assumption for equi-recursive types.

I guess that giving explanations using dot graphs could help in this case. See for example this picture (this is actually rendered using tikz), where the green arrows represent the forall-binders; the same graphical representation would help make your idea of "string with colored segments" more precise -- I think using dot could be a way to make this not-too-painful to draw, you could even implement a renderer from your OCaml type representation to dot.

Clarification

Yes, I'm sorry it's not clear! That's why I provided a link to (apparently) working code in OCaml, which is formal.

I asked this question here, there is a picture (but I used a record instead of a tuple there, otherwise the example is the same):

http://stackoverflow.com/questions/37722699/find-minimal-type

The types are equivalent to:

unit -> (mu fix-2 . tuple[unit -> fix-2])

mu fix-2 . unit -> tuple[fix-2]

I think (if I transcoded them correctly). In the type description, fix i just means go up (minus i) levels to find the implicit mu binder. This notation has the advantage that, being variable-free, comparisons don't need any alpha-conversion or whatever to account for different mu variable names. [Felix uses OCaml's X as U notation for recursion in the unbound type terms, and an implicit binder in the bound terms; perhaps unfortunately, the integer in the fix term in my compiler is always non-positive, where a non-negative value would have been easier to understand.]
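
To make the correspondence concrete, here is a small OCaml sketch of a converter from the fix-int encoding to a named mu/variable form (the mu_ty output type and the to_mu function are hypothetical; only the btyp constructors and the reading of BTYP_fix i as "go up minus i levels" are taken from the post). Every node gets a candidate binder name, and a mu is kept only where the name is actually referenced:

type btyp =
  | BTYP_fix of int
  | BTYP_function of btyp * btyp
  | BTYP_tuple of btyp list
  | BTYP_hole

(* A named-variable mu form, for printing/explaining fix-int types. *)
type mu_ty =
  | MVar of string
  | MFn of mu_ty * mu_ty
  | MTup of mu_ty list
  | MMu of string * mu_ty

let rec free v = function
  | MVar x -> x = v
  | MFn (a, b) -> free v a || free v b
  | MTup ts -> List.exists (free v) ts
  | MMu (x, t) -> x <> v && free v t

(* Every node pushes a candidate binder name onto the environment;
   BTYP_fix i resolves to the name (-i) levels up; the Mu binder is
   kept only where its name is actually used. *)
let to_mu (t : btyp) : mu_ty =
  let count = ref 0 in
  let fresh () = incr count; Printf.sprintf "a%d" !count in
  let rec go env t =
    match t with
    | BTYP_fix i -> MVar (List.nth env (-i - 1))
    | BTYP_hole -> invalid_arg "to_mu: hole"
    | BTYP_function (d, c) ->
        let a = fresh () in
        let body = MFn (go (a :: env) d, go (a :: env) c) in
        if free a body then MMu (a, body) else body
    | BTYP_tuple ts ->
        let a = fresh () in
        let body = MTup (List.map (go (a :: env)) ts) in
        if free a body then MMu (a, body) else body
  in
  go [] t

(* The first example from the post, [] -> [([] -> fix-2)], comes out as
   unit -> (mu a. tuple[unit -> a]), with unit written as the empty tuple. *)
let () =
  let t = BTYP_function (BTYP_tuple [],
            BTYP_tuple [BTYP_function (BTYP_tuple [], BTYP_fix (-2))]) in
  match to_mu t with
  | MFn (MTup [], MMu (a, MTup [MFn (MTup [], MVar a')])) when a = a' -> ()
  | _ -> assert false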

"It seems to do just one level of unfolding, that is push the mu-binder inside one level of type constructors." -- Yes, precisely. This is *unwrap*. The traditional unfold replaces the mu variable with the whole term, which is the same as unwrap repeated N times (at least, I think it is). There's no need to provide a way to do the traditional unfold because it is useless. Unwrap is just better. Before pattern matching, unwrap your type so you can't hit a mu binder.

The wrap function undoes all unwrapping. At least, I think it does! It may not be possible to do this; that's why I'm asking for help. It works on the simple examples I've tried, but as you say, creating more difficult examples is tricky and doesn't prove anything: a proof is probably easier.

I haven't implemented unwrap in the OCaml code, only wrap, because that's the function I actually need in my compiler.

Hm

I replied on StackOverflow (markdown syntax is nicer to write than HTML). I like the "as" notation which I find much clearer, but I still don't know the answer to your question -- I'm far from an expert on recursive types, though.

Rational Trees

Rational trees have the correct equality for equi-recursive types, in other words the unfolded equi-recursive type would be equal to the folded one in the rational tree universe.

This gets a lot more complex with multiple recursion points in the same type, and the folding/unfolding model doesn't work well.

With "mu" types you do not automatically get equi-recursive equality, you get iso-recursive equality unless you perform some kind of search (or convert back into a rational tree).

Edit: so maybe in my original post I should have talked about the natural equality of the representation.

With your explanations on

With your explanations on the recursive type syntax, I understand that your code example checks that

[unit -> tuple(a)] as a
unit -> [tuple(unit -> a)] as a

are equivalent, which is correct. But the implementation seems to fail with

  let t1 = BTYP_function (u,BTYP_fix(-1)) in
  let t2 = BTYP_function (u, BTYP_function(u, BTYP_fix (-2))) in
  ...

which if I understand correctly corresponds to comparing

[unit -> a] as a
[unit -> (unit -> a)] as a

and those two should also be equivalent.

You're right

Yes, my comparator says these are equal but my algorithm fails because both are recursive. In terms of the analogy with strings, you have two equivalent loops of string of different lengths. My brain is having trouble with that!

So I am left with my problem: given a type T, how can I produce a canonical representation?

I have to do that, because each type leads to a type in C++, and I have to pass values of types to functions, assign them, etc, so if two types are equivalent I have to use the same C++ type for them.

Doubts

A natural interpretation of equi-recursive types is as infinite regular trees -- or equivalently, finite directed graphs, which is how compilers (such as OCaml) implement them in finite space.

Checking equivalence of infinite regular trees is equivalent to interpreting the isomorphic graphs as FSMs, and checking their equivalence. This can e.g. be done by minimising both FSMs, and see if that yields the same FSM/graph.

(In a similar vein, some forms of equi-recursive subtyping are related to inclusion of (the languages recognised by) two FSMs.)

With that in mind, I highly doubt that your definition (which I don't fully understand) can be "stronger" than the standard definition with (co-recursive!) rolling/unrolling -- because anything that equates more types would be unsound.

In fact, I believe yours is weaker (and thus incomplete). For example, consider these two types:

t1 = μ α. (t → t → α)
t2 = t → μ α. (t → α)

Under the standard definition, these are equivalent. How would your algorithm equate them?

Doubts

I'm not trying to change the definition of type equivalence, but to find a unique canonical representation. You are correct, your t1 and t2 compute as equivalent, but my algorithm reduces them to the same pair gasche gave; t2 reduces because it isn't recursive.

Thank you for suggesting using minimised FSMs.

My other alternative is ugly but viable: I have a list of types to generate, so I can just run through them all, looking for equivalent types, and arbitrarily pick one. Of course that only works because the list is finite, and the representations won't agree across separate-compilation boundaries (but then, at present, they don't anyhow).

Canonical

If you mean canonical and not principal, then it is easy: keep the smallest representation, and encode type equality using rational-tree equality. Whenever you compare two types, keep the smaller one. Actually, canonical doesn't even require this; you can keep either one.

If you use a union-find algorithm to manage the equivalence classes of types, then you get this automatically.

This is why most efficient implementations of type systems use a unification in a rational-tree universe, and union-find to perform the unification. Interestingly you are now 2/3 of the way to getting a Prolog implementation, and you just need backtracking.
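
For reference, a minimal union-find sketch in OCaml along those lines (the node/find/union names are invented; this is not any particular compiler's implementation): path compression, plus a user-supplied test to keep the smaller of two representatives:

(* Union-find with path compression over values of type 'a.  Each class
   keeps a chosen representative value, e.g. the smallest type term seen. *)
type 'a node = { mutable parent : 'a node option; mutable repr : 'a }

let make x = { parent = None; repr = x }

let rec find n =
  match n.parent with
  | None -> n
  | Some p ->
      let root = find p in
      n.parent <- Some root;   (* path compression *)
      root

(* Merge two classes; keep whichever representative [smaller] prefers. *)
let union smaller a b =
  let ra = find a and rb = find b in
  if ra != rb then begin
    rb.parent <- Some ra;
    if smaller rb.repr ra.repr then ra.repr <- rb.repr
  end

let canon n = (find n).repr

(* Toy example: after unifying two equal types, the shorter text wins. *)
let () =
  let a = make "[unit -> (unit -> a)] as a" in
  let b = make "[unit -> a] as a" in
  union (fun x y -> String.length x < String.length y) a b;
  assert (canon a = "[unit -> a] as a" && canon b = "[unit -> a] as a")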

If you plan on including type-constraints (type-classes/traits) and want to allow overload resolution on constraints, then you need backtracking too.

At which point you really are implementing a Prolog interpreter.

Suggestion for a canonical representation

I will call "folded tree" the finite representations with back-edges, and "unfolded tree" their (potentially) infinite unfoldings.

Let us say that two nodes in a folded tree are "equivalent" if their unfolded sub-trees are equal (as infinite trees; this is decidable). Let us say that a tree is "maximally folded" if, for any node and any of its (transitive) children, they are equivalent if and only if they are the exact same node.

I conjecture that maximally folded trees are canonical.

An algorithm to obtain a maximally folded tree from another folded tree is as follows. For all nodes, check if any sub-node (without following back-edges) is equivalent, and if so replace it by a back-edge to the parent node.

DFA minimisation

The problem is directly isomorphic to FSM/DFA minimisation, for which well-known algorithms exist. Minimal DFAs are indeed unique.

Is it?

Is it, though? I tried to think more about that and it didn't seem clear to me. But then I haven't thought about automata in a long time:

- What is the alphabet of the symbols recognized by these automata? Are the symbols the type connectives (prod, arrow, sum, var alpha), or rather labels of specific paths (prod.left, prod.right, ..., var alpha)?

- What are the final states of this automaton? Or are we considering automata that recognize infinite words? Do the classic minimization algorithms work for automata that recognize infinite words?

One way it might work

Formally, you can have symbols which ask about the example that the FSM is supposed to construct. As long as the answer to your "questions" is yes, you're in an accepting state. As soon as the answer is no, you transition to a permanent fail state (with no transitions out). i.e. you start the FSM and apply symbols: prod.left, prod.right, arrow.left, var a. Then we should be in an accepting state only if the example looks like this: (X * (a -> Y)) * Z for some X,Y,Z.

I'm not sure this is the best answer, though, since you're asking for an FSM that solves a decision problem and the most natural way to think about this isn't as a decision problem. So the above construction has to jump through the hoop of re-casting the FSM as a decision problem. But I think the important point is that equirecursion is decidable because the expressions that generate the type trees only cycle through finitely many possibilities as they unfold.

See...

...François Pottier's 2001 paper, Simplifying subtyping constraints: a theory.

The basic case, for ML, is worked out on exactly the lines Andreas suggests, and then he studies the generalization to the case where you have type schemes with subtype constraints.

Thanks

...for digging up one of the references -- I was too lazy on a late Sunday night. :)

Thanks!

It's amusing that I did not know about this particular work. When I looked for references I saw the more recent work of François Pottier and Nadji Gauthier on second-order recursive types (when there are polymorphic quantifiers deep in the types), but not this one. While the automata connection makes perfect sense, I don't remember hearing about it previously. I would guess that they (François Pottier and Didier Rémy) later moved to more powerful type systems and thus richer representations to replace automata. For example, sharing is also well expressed by full directed graphs, which form the basis for the graphic representation of MLF types introduced by Boris Yakobowski -- with several different kinds of edges to express the scope and nature of polymorphic quantification.

I think your idea would work

I think your proposed algorithm would work, but wouldn't produce representations as small as DFA minimization. As I understand it, your algorithm only allows reference back to direct parents of any given node. So, in the simplest example, if you have mu a. (a -> a) * (a -> a), it would need to duplicate those two (a -> a) branches, requiring three nodes, whereas the minimal FSM would just have two: a where a = b * b where b = a -> a. This difference probably changes the big O, but I'm not sure how much it would matter in practice.

Hm

Indeed my algorithm does not attempt any sharing between siblings. Sharing between siblings is possible if you use the (as) syntax, but not if you use the (mu . ...) syntax to describe recursive types -- "as" binds the variable both inside and outside, mu only binds inside.

Principal types

If an algorithm generates principal types, would you ever need to 'compact' the representation?

Yes, you need to

...because there may be infinitely many different representations for the same principal type. IOW, it's an orthogonal problem.

OCaml

Am I right that OCaml does not do this? I compared the types generated by my own type inference algorithm with OCaml's for large random terms and they were the same, and I am not doing any compacting. I suppose as long as the types unify and I use union-find to keep a canonical representation of each type, there is no need to do this. However, it would be interesting to see if it can speed up processing times for large programs.

Are you inferring recursive

Are you inferring recursive types? Doesn't OCaml disable this behavior by default?

Recursive Inference

Yes, you have to pass a command-line argument (-rectypes) to OCaml to permit recursive types.

I am inferring recursive types as one option; I am not sure if it will definitely be a final feature. I like to try writing algorithms and see how it feels to use them. Alternatives are union types. I played with intersection types for a while, but they weren't quite what I wanted.

OCaml

OCaml already allows equi-recursive structural types even when -rectypes is not passed, but only for types that recurse through the structural types of the language, namely object types and polymorphic variants. Implicit recursion in types with just data constructors is only accepted with -rectypes, for usability reasons -- otherwise it accepts too many erroneous programs.

Note that OCaml internally uses sharing for its representation of types, but it may un-do some of this sharing when printing its types, in particular when you explicitly pass the -principal option (the default parameters sometimes cut corners, but I don't know much about what actually changes here). It may be hard to infer what the type-checker does internally by looking at the printed types.

Probably only turns off the Occurs-check

I expect that -rectypes only turns off the occurs check and doesn't prove equivalences.

Unification

Two equivalent regular trees will unify whether they are minimised or not, so the equivalence-classes of types are preserved, and each type will have a canonical representative (thanks to union-find).

Don't think it's that simple

I believe that there are subtle differences between turning off the occurs-check and proving equivalences.

But honestly, I forgot. You'd need to dive into the code or the typing rules.

Additional info: The way to prove an equivalence is to postulate an equality, then unfold the types a bit further, and then use the postulate to derive that two cyclic types are equivalent; this is essentially a bisimulation proof. Simply turning off the occurs check shouldn't do that, in general. But I admit I am guessing here.
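
A minimal OCaml sketch of that postulate/bisimulation idea for equi-recursive equality, on an illustrative named-variable representation (this is essentially the Amadio-Cardelli-style algorithm, not what OCaml actually implements): every pair of types being compared is postulated equal before unfolding further, so comparisons of cyclic types terminate:

type ty =
  | TVar of string
  | TUnit
  | TFn of ty * ty
  | TMu of string * ty

(* Capture-naive substitution; assumes distinct binder names. *)
let rec subst x s = function
  | TVar y -> if y = x then s else TVar y
  | TUnit -> TUnit
  | TFn (a, b) -> TFn (subst x s a, subst x s b)
  | TMu (y, t) -> if y = x then TMu (y, t) else TMu (y, subst x s t)

(* Equi-recursive equality: postulate the pair, unfold mu's, descend. *)
let equal t1 t2 =
  let rec go seen t1 t2 =
    if List.mem (t1, t2) seen then true    (* already postulated: done *)
    else
      let seen = (t1, t2) :: seen in
      match t1, t2 with
      | TMu (x, b), _ -> go seen (subst x t1 b) t2
      | _, TMu (x, b) -> go seen t1 (subst x t2 b)
      | TUnit, TUnit -> true
      | TVar x, TVar y -> x = y
      | TFn (a1, b1), TFn (a2, b2) -> go seen a1 a2 && go seen b1 b2
      | _ -> false
  in
  go [] t1 t2

(* The two types from further up the thread:
   [unit -> a] as a   and   [unit -> (unit -> a)] as a   are equal. *)
let () =
  let t1 = TMu ("a", TFn (TUnit, TVar "a")) in
  let t2 = TMu ("b", TFn (TUnit, TFn (TUnit, TVar "b"))) in
  assert (equal t1 t2)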

Rational Trees

If the rational trees unify the types are the same. You are guessing wrongly :-)

If you think about unification on rational trees it is a bisimulation proof.

Note: unification on rational trees does not have an occurs check; instead, you check for cycles after unification if you want to exclude recursive types.

Don't expect to be wrong

It's simply unlikely that in all cases unification can give you back the same proofs as bisimulation. Wouldn't make much sense.

(I still assume ocaml simply turns off the occurs check; looks like you're talking about a different algorithm.)

Rational Trees

I can't imagine any real type checker uses the textbook unification of lists. Certainly rational tree unification seems common in Prolog implementations. I am pretty sure OCaml uses rational tree unification, because removing the occurs check in non-rational-tree unification results in non-termination if types are recursive (as used to happen in older Prolog implementations).

Rational tree unification proceeds by descending both trees in bisimulation. If the simulation diverges, unification is terminated (as a failure); if a cycle is entered, we know the unification down that path will succeed (as the nodes have already been unified). As we are doing this, any variables in one type are unified with the other. In other words, the unification of two rational trees is the bisimulation of the descent of both trees, remembering the nodes and unifying variables during the descent.
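
Here is a minimal OCaml sketch of rational-tree unification in that style (a hypothetical node/content graph representation, not OCaml's internals). There is no occurs check; merging the two roots before descending into their children is what "remembers the nodes" and makes the bisimulation-style descent stop when a cycle is re-entered:

(* Types as a graph of mutable nodes; Link is the union-find forwarding. *)
type node = { mutable content : content }
and content =
  | Link of node
  | Var                           (* unbound type variable *)
  | Ctor of string * node list    (* type constructor and children *)

let rec find n =
  match n.content with
  | Link m -> let r = find m in n.content <- Link r; r
  | _ -> n

exception Unify_fail

(* Rational-tree unification: no occurs check; the two roots are merged
   before their children are visited, so a re-entered cycle hits the
   a == b test and the descent stops. *)
let rec unify a b =
  let a = find a and b = find b in
  if a != b then
    match a.content, b.content with
    | Var, _ -> a.content <- Link b
    | _, Var -> b.content <- Link a
    | Ctor (f, xs), Ctor (g, ys)
      when f = g && List.length xs = List.length ys ->
        a.content <- Link b;            (* remember: a and b are now one node *)
        List.iter2 unify xs ys
    | _ -> raise Unify_fail

(* Example: unify  (mu a. unit -> a)  with  unit -> (mu a. unit -> a);
   both denote the same rational tree, so unification succeeds. *)
let () =
  let unit1 = { content = Ctor ("unit", []) } in
  let t1 = { content = Var } in
  t1.content <- Ctor ("->", [unit1; t1]);                 (* cyclic *)
  let unit2 = { content = Ctor ("unit", []) } in
  let inner = { content = Var } in
  inner.content <- Ctor ("->", [unit2; inner]);
  let t2 = { content = Ctor ("->", [unit2; inner]) } in   (* one extra unrolling *)
  unify t1 t2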

We agree on that

Yes, that's what I said and deduced too after some puzzling. There is a difference between turning off the occurs check and constructing bisimulation proofs (over rational trees); i.e., bisimulations over rational trees can prove equivalences where the standard unifier without an occurs check would loop. (The postulate supplies the termination condition.)

So, we actually agree on everything; the only question is whether -rectypes turns off the occurs check, or does bisimulation proofs over rational trees. And I was betting on the former, since I don't expect them to have implemented an additional typing algorithm. But no idea, really.

OCaml type checking is clever

The OCaml type checker is pretty clever, and when I tested it using my random lambda term generator it produced the same types as my rational tree unification code, so it did not loop. This is reasonable evidence that it is using rational trees internally. In fact, it shares two things with my own type-checking code: the use of rational trees and the union-find algorithm (I think Oleg blogged about this). What it lacks is the constructive, environment-less, bottom-up nature of my own type checker; instead it uses a concept of 'levels'. I think my own solution is simpler, but then I would say that :-)

There should be a simple check

Too lazy to check this, but it should be trivial to check by supplying two types which are bisimilar; type in a term, but give it the 'wrong' bisimilar type, or so. The typing algorithm then either deduces that the two types are equivalent, or it loops. Shouldn't be too hard.

(Okay. I have some doubts now. Other unification methods might work too, but I wouldn't know what termination conditions they have; it really depends on the nitty-gritty details of the algorithm.)

Algol'68 Terminology

The term 'postulate' is actually from the Algol'68 type inferer, AFAIK; essentially the manner in which most type checkers for imperative languages (can or should) decide type equivalence for bisimilar record types. Pascal, C, C++, C#, and Java should have type inferers which work this way. A postulate is an assumed equivalence on two types, usually represented by (two) integers referring into the typing table.

Equi and Iso recursive types

I was having problems with the idea one would write fold and unfold in a data type. Never seen that. Doesn't make sense ...

Oh yes it does! I've been doing it for years! I just spelled it differently. In C, a fixpoint is a pointer.

struct X { int a; struct X * next; };

Unfold:

struct X2 { int a; struct X x; };

It's a distinct type. The unfold is nested in it. Ignoring NULL pointers, both types are infinite lists. But they're definitely distinct, because you can get the first two elements of X2 without a dereference.

However this leads to the idea that we should unfold differently: a fixpoint IS a pointer:

struct X2 { int a; struct X *x; };

and THIS X2 is layout-compatible with X, so it is the same type (ignoring the fact that C doesn't have structurally typed records).

equi-recursive in 'C'

How about?

struct X { int a; struct X x; };

But this is slightly missing the point: it is how type equality is defined that determines whether something is iso-recursive or equi-recursive. The mistake I made above was thinking about how you would naturally define equality on certain datatypes, but in the end it doesn't matter; you can have pointers or not, it depends on how you define equality. (Think of equality as an algorithm, and you can see there are many possible algorithms: compare the pointers, follow the links, check for cycles, don't check for cycles.)