## The Two Dualities of Computation: Negative and Fractional Types

Abstract

Every functional programmer knows about sum and product types, a+b and a*b respectively. Negative and fractional types, a-b and a/b respectively, are much less known and their computational interpretation is unfamiliar and often complicated. We show that in a programming model in which information is preserved (such as the model introduced in our recent paper on Information Effects), these types have particularly natural computational interpretations. Intuitively, values of negative types are values that flow "backwards" to satisfy demands and values of fractional types are values that impose constraints on their context. The combination of these negative and fractional types enables greater flexibility in programming by breaking global invariants into local ones that can be autonomously satisfied by a subcomputation. Theoretically, these types give rise to two function spaces and to two notions of continuations, suggesting that the previously observed duality of computation conflated two orthogonal notions: an additive duality that corresponds to backtracking and a multiplicative duality that corresponds to constraint propagation.

### I found this paper very

I found this paper very interesting, and for a few months I pursued use of fractional types in my language design, i.e. as a model for promises/futures.

However, I eventually abandoned this effort because it hinders equational reasoning and refactoring of code. For example, with a pure f we can structurally translate foo (f x) (f x) into let fx = f x in foo fx fx (without knowing the implementation details). But if f x might have fresh fractional types in its output, this transform is no longer valid.
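That broken transform can be simulated outside the type system. Here's a minimal Python sketch (the names foo, f_pure, and f_fresh are mine, not from the paper): a function that allocates a fresh tag per call, standing in for a fresh fractional/logic variable in its output, invalidates the let-floating rewrite even though a pure function validates it.

```python
import itertools

def foo(a, b):
    return (a, b)

def f_pure(x):
    return x + 1

# With a pure f, `foo (f x) (f x)` equals `let fx = f x in foo fx fx`.
assert foo(f_pure(2), f_pure(2)) == (lambda fx: foo(fx, fx))(f_pure(2))

# But if f allocates a fresh identity on each call (analogous to a fresh
# fractional/logic variable), the two sides mean different things:
counter = itertools.count()
def f_fresh(x):
    return (x, next(counter))  # fresh tag per call

lhs = foo(f_fresh(2), f_fresh(2))  # two distinct fresh tags
fx = f_fresh(2)
rhs = foo(fx, fx)                  # one shared tag
assert lhs[0] != lhs[1]
assert rhs[0] == rhs[1]
```

This is only an analogy, of course: the fresh tag plays the role of the constraint variable that a fractional output would introduce.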

Well, that's one reason I abandoned it. I also had difficulty reconciling fractional types with distribution, open systems, and unbounded streaming programs (which are significant use-cases for my language).

### Rationalization

Here's a question I haven't given much thought to, but maybe someone can say something enlightening. What's the relationship between considering 1 / (1 - Int) via its series expansion and considering it as constraints and borrowing?

(xs :: 1 / (1 - (mu a. 1+a)) is how you declare a list of integers in my language, and I'm really hoping I don't have to go back to the drawing board.)
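For what it's worth, the series-expansion reading can be checked by counting: over an alphabet of n symbols, the n^k term of 1/(1 - n) = 1 + n + n^2 + ... counts the lists of length k. A small Python sketch (the function names are mine) comparing direct enumeration against the truncated series:

```python
from itertools import product

def count_lists_up_to(n, max_len):
    # Directly enumerate all lists over an n-symbol alphabet,
    # of every length from 0 up to max_len.
    return sum(len(list(product(range(n), repeat=k)))
               for k in range(max_len + 1))

def truncated_series(n, max_len):
    # 1 + n + n^2 + ... + n^max_len
    return sum(n**k for k in range(max_len + 1))

assert count_lists_up_to(3, 4) == truncated_series(3, 4)  # both are 121
```

So the series expansion is the ordinary generating-function view of lists; the question in the comment is how that relates to the constraint/borrowing semantics, which the sketch above says nothing about.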

### Maybe it works like this

I haven't figured out every detail, but I think your drawing board is pretty safe.

To make the type constructors clear, I'm going to avoid directly using types (a - b) and (a / b). Instead I'll use -x for negated types and coin a similar syntax /x for reciprocals. In this syntax, your type is (/(1 + -Int)).

To construct and deconstruct lists of this type, I think we just need to choose a particular isomorphism from (/(1 + -Int)) to (1 + Int * /(1 + -Int)). The system in the paper admits a derivation like that, which is easy to come up with by equational reasoning on rational numbers. (I worked backwards.)

1.  /(1 + -Int)
2.  1 * /(1 + -Int)
3.  (0 + 1) * /(1 + -Int)
4.  (1 + 0) * /(1 + -Int)
5.  (1 + (-Int + Int)) * /(1 + -Int)
6.  ((1 + -Int) + Int) * /(1 + -Int)
7.  (1 + -Int) * /(1 + -Int) + Int * /(1 + -Int)
8.  1 + Int * /(1 + -Int)


A circuit implementing this isomorphism is (uniti ; (zeroi ; swap+ ; id + eta+ ; assocl+) * id ; distrib ; epsilon* + id).

The operation eta+ (lines 4-5) resumes a computation that's been backtracking, and epsilon* (lines 7-8) does unification.

I was going to try to demonstrate this in a step-by-step way, and the paper does describe nondeterministic rewrite rules for this system, but I couldn't manage to write it out in a readable way. I was trying to write out what happens to values-with-holes-in-them, but I think in order to make it actually work, I would have needed to explicitly track every logic variable and its constraints.

The intuition I ended up with, for what it's worth, is this: Since lines 4-5 involve backtracking, they actually refer to two values of type (/(1 + -Int)). One of them is the original value--the whole list--and one is whatever value is carried back up there during backtracking. The program never actually reverses direction to do the backtracking (i.e. there's no epsilon+), but it does force those two values to unify with the difference of one Int value. This makes it so the resulting (/(1 + -Int)) value has the same structure as the original except for that Int. (Empty lists are accounted for too, of course.)
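One way to make the circuit concrete without committing to an operational semantics is to treat it as plain syntax. The following Python sketch (the tuple encoding is my own; only the combinator names come from the paper) builds the circuit as an AST and flattens it back to its primitives, which at least checks that the composition is written down consistently:

```python
# Constructors for a tiny Pi-circuit AST: sequential composition,
# parallel (*) composition, sum (+) composition, and primitives.
def seq(*cs): return ("seq", cs)
def par(*cs): return ("par", cs)
def plus(*cs): return ("plus", cs)
def prim(name): return ("prim", name)

# The isomorphism from /(1 + -Int) to 1 + Int * /(1 + -Int):
circuit = seq(
    prim("uniti"),
    par(seq(prim("zeroi"), prim("swap+"),
            plus(prim("id"), prim("eta+")), prim("assocl+")),
        prim("id")),
    prim("distrib"),
    plus(prim("epsilon*"), prim("id")),
)

def prims(c):
    # Flatten the AST to the left-to-right list of primitive names.
    tag, body = c
    if tag == "prim":
        return [body]
    return [p for sub in body for p in prims(sub)]

assert prims(circuit) == ["uniti", "zeroi", "swap+", "id", "eta+",
                          "assocl+", "id", "distrib", "epsilon*", "id"]
```

This says nothing about evaluation, but it pins down which primitive acts at each numbered step of the derivation.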

### I think we just need to

> I think we just need to choose a particular isomorphism from (/(1 + -Int)) to (1 + Int * /(1 + -Int)).

Yes, that's a simple explanation. Thanks!

I had never thought about representing lists this way. It's quite interesting, and Ross's expansion helped a lot. Yet, I have no idea how I might infer such a type. Nor why I'd want to.

### 1/(1 - Nat) is still Nat, but surely 1/(1 - a) isn't a!

I thought that the isomorphism Nat ≅ 1 × Nat ≅ (1/(1 - Nat)) × (1 - Nat) × Nat ≅ 1/(1 - Nat) would do it, but this doesn't use anything special about Nat. What did I do wrong?

EDIT: Ah, I see; of course (1 - Nat) + Nat, not (1 - Nat) × Nat, is isomorphic to 1. I was trying to fix it, but it seems that Ross Angle beat me to it with a nice argument anyway.

### Why series expansions?

> xs :: 1 / (1 - (mu a. 1+a)) is how you declare a list of integers in my language

What got you into doing lists this way? I've never seen it before. Does this give you something mu doesn't?

### Very funny Ha ha

It was just a joke.

### Assuming makes anything out of anything

I was fooled I guess. :) In that case, I'd like to underscore that I don't actually know if the (/(1 + -a)) list representation pans out. In fact, I might have an intuitive argument that it doesn't.

I had already been racking my brain against this paper every so often, and I found your list representation pretty inspiring, even if you weren't serious.

---

The authors touch on something similar to the list example later in the paper (section "Computing in the Field of Algebraic Numbers" of the conclusion), where they observe there's an isomorphism between six binary trees and the unit type.

> The question is why such an algebraic manipulation makes sense type theoretically, even though the intermediate step asking for an isomorphism between $$x^6$$ and $$1$$ has no computational context. In the setting of $$\Pi^{\eta\ \epsilon}$$, this isomorphism can be constructed but it diverges on all inputs (in both ways). This suggests that, in the field of algebraic numbers, some algebraic manipulations are somehow "more constructive" than others.

If that isomorphism "diverges on all inputs," the one I gave for the list encoding might have a similar issue.

Given the way I was intuitively rationalizing (ahem) the list encoding, I'm not sure it's unreasonable to store six binary trees' worth of information under the unit type. For all I know, a program that backtracks into that unit over and over again might impose different constraints on its logic variables each time, possibly allowing it to reconstruct all that information.

If it's possible to stuff information into parallel worlds that way, it might ultimately make sense to disguise just about any type as any other.

Come to think of it, the paper seems to ignore division by zero altogether, so the circuit uniti ; (eta* ; swap*) * id ; assocr* ; id * (distrib0 ; factor0) ; assocl* ; (swap* ; epsilon*) * id ; unite could accomplish exactly that disguise:

a
1 * a
(0 * /0) * a
(/0 * 0) * a
/0 * (0 * a)
/0 * 0
/0 * (0 * b)
...
b


Well, that puts the paper in a different light. Information preservation isn't a straightforward thing to claim anymore.

Suppose we simply patch this issue by requiring a proof that b is not isomorphic to 0 whenever we use eta* : (1 -> (/b * b)) or epsilon* : ((/b * b) -> 1).

Then we have to follow the same rule when encoding lists: In order to use (/(1 + -Int)) as a list of Int via the isomorphism I gave, we have to prove Int isn't isomorphic to -1. So whatever axioms we use for deciding apartness, they have to deal explicitly with whatever construct we use to define Int.

The encoding you used was (mu a. 1+a). If I were to use a binary encoding BNat = (mu a. 1 + a + a) instead...

BNat
(-1 + 1) + BNat + (BNat + -BNat)
-1 + (1 + BNat + BNat) + -BNat
-1 + BNat + -BNat
-1


Oops, I guess at least BNat can't be used as an element of a list. :)

Perhaps mu isn't effective at constructing recursive data in this framework. From the point of view of dealing with rational numbers, we'd require (mu a. 1 + a + a) to have a well-defined solution before even using it. In this case the solution is well-defined, but it's -1, which isn't what I would have wanted.
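The "well-defined solution" requirement can be phrased as ordinary algebra: (mu a. 1 + k*a) should solve a = 1 + k*a over the rationals, when a solution exists. A small Python sketch (solve_mu is a name I made up) shows BNat landing on -1 and unary Nat having no rational solution at all:

```python
from fractions import Fraction

def solve_mu(k):
    # (mu a. 1 + k*a) read as the equation a = 1 + k*a,
    # i.e. a*(1 - k) = 1, so a = 1/(1 - k); undefined at k = 1.
    if k == 1:
        return None  # (mu a. 1 + a): no rational solution, an "infinity"
    return Fraction(1, 1 - k)

assert solve_mu(2) == Fraction(-1)  # BNat = (mu a. 1 + a + a) solves to -1
assert solve_mu(1) is None          # unary Nat: the list encoding's base case
```

Read this way, the comment's point is exactly that the equation having a rational solution is necessary but not sufficient for the mu-type to be usable: -1 is "well-defined" but not the answer anyone wanted.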

I suppose the ultimate issue is that lists encode an infinite number of possibilities, so for them to follow along in the theme of information preservation, they'd need to be encoded as some kind of infinity anyway.

### Belated correction, and taming the frays of this system

Someone brought up this comment to me again, so I reread it.

First, I noticed a mistake I made:

> In order to use (/(1 + -Int)) as a list of Int via the isomorphism I gave, we have to prove Int isn't isomorphic to -1.

We would actually have to prove Int isn't isomorphic to 1, not -1. So I suppose (/(1 + -BNat)) is a fine type, isomorphic to (/2). ^_^

Anyway, I want to say I didn't intend my comments here as a rebuttal of the type system. I think people had some inaccurate expectations of what the type system could do, which I wanted to correct, and I found it worthwhile to explore its corner cases, since the operational semantics can sometimes be a source of ideas for improving the type system design.

The operational semantics in the paper doesn't care if we divide by zero or use types of infinite information content (like natural numbers or lists), even though these things don't correspond to the rational number axioms as well as the rest of the system does. While I thought these were details worth exploring, I think people would probably want to avoid them in a practical use of the system. They clash with the rational number and information conservation metaphors, so they can produce a lot of confusing conclusions.

If we simply require, for the isomorphisms 0 ≅ (-a + a) and 1 ≅ (/a * a), that a be a finite-depth structure composed of a few known type constructors, then I think that's enough to tame the behavior of this system in the face of infinities like (mu a. 1 + a), where the rational number laws don't apply.

With that restriction in place, I think these types could even be integrated into another type system. Alien types like (a -> b) would not be considered "finite" enough to use under 0 ≅ (-a + a) and 1 ≅ (/a * a) until there was a good reason to believe they were safe (e.g. a full implementation of values of type (-(a -> b)) and/or (/(a -> b))).

Some type systems can already forbid division by zero, but I think the worst that would happen to other type systems is that we'd lose progress, just like we do in a language with general recursion. My division-by-0 example used the isomorphism 0 ≅ (0 * b), which introduces a logic variable of type b which has no constraints applied to it and never will. Any client waiting for a particular value of type b won't make progress.
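A minimal Python sketch of that loss of progress (the LogicVar class is hypothetical): a variable introduced with no constraints, so any client demanding a concrete value can never proceed, modeled here as an error rather than an actual hang.

```python
class LogicVar:
    """A logic variable that, in this scenario, never receives a constraint."""
    def __init__(self):
        self.bound = False
        self.value = None

    def demand(self):
        # A client asking for a concrete value of type b.
        if not self.bound:
            raise RuntimeError("no progress: variable is unconstrained")
        return self.value

b = LogicVar()  # introduced by 0 ≅ (0 * b); nothing will ever bind it
try:
    b.demand()
    progressed = True
except RuntimeError:
    progressed = False
assert not progressed
```

So the failure mode is benign in the same sense nontermination is: the program is stuck waiting, not producing wrong answers.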

So, I'm very sorry if I made it out that this type system is nonsensical. Sometimes I can be all over the place in my exploration of trivialities, and it took me a while to figure out the benefits of this type system myself too. But I don't think I've seen any other system which has such a strongly typed treatment of backtracking and logic variables, and the isomorphism to rational numbers isn't easy to forget. I hear there's been a bit more work going on since this paper, so I want to check that out when I can.

### Category collapses for another reason

Whoops, I wasn't following the conversation very closely while I was writing that. Jacques Carette showed me there's another known source of inconsistency in this system, so there go my naive ideas about these types playing well with "alien" types. Hmm.