Lambda the Ultimate

Theorems for free!
started 7/31/2003; 4:23:19 AM - last post 8/4/2003; 5:10:47 AM
Ehud Lamm - Theorems for free!
7/31/2003; 4:23:19 AM (reads: 2036, responses: 7)
Theorems for free!
Philip Wadler. Theorems for free! 4'th International Conference on Functional Programming and Computer Architecture, London, September 1989.

From the type of a polymorphic function we can derive a theorem that it satisfies. Every function of the same type satisfies the same theorem. This provides a free source of useful theorems, courtesy of Reynolds' abstraction theorem for the polymorphic lambda calculus.

I see that this useful paper wasn't mentioned here before, so here goes.

The paper formally proves an important meta-theorem for deriving theorems about algebraic properties of functions, based solely on their types!

Example: Suppose r is a polymorphic function with type forall a.[a]->[a]. Then r satisfies the following theorem: for all types A and A', and every total function a:A->A', we have map a . r_A = r_A' . map a.

The intuitive explanation of this result is that r must work on lists of X for any type X. Since r is provided with no operations on values of type X all it can do is rearrange such lists independent of the values contained in them. Thus applying a to each element of a list and then rearranging yields the same result as rearranging and then applying a to each element.
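As a quick sanity check of this theorem, here is a sketch in OCaml using List.rev as one concrete rearranging function (any function of this type would do):

```ocaml
(* List.rev has type 'a list -> 'a list, so the free theorem predicts
   map a . rev = rev . map a  for any total function a.
   A spot check with a sample function and list: *)
let a x = x + 1
let xs = [1; 2; 3]
let () = assert (List.map a (List.rev xs) = List.rev (List.map a xs))
```

The same check goes through for any pure, total function of this type, with any choice of a and xs.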


Posted to theory by Ehud Lamm on 7/31/03; 4:24:52 AM

Frank Atanassow - Re: Theorems for free!
8/1/2003; 8:15:29 AM (reads: 913, responses: 0)
This is a very famous paper, which gives the lie to claims that static typing only guarantees trivial program properties.

BTW, in categorical terms, the theorem about r above says precisely that r is a natural transformation (from and to the list functor []).

Tim Sweeney - Re: Theorems for free!
8/1/2003; 2:23:32 PM (reads: 865, responses: 0)
Keep in mind that this "free theorem" business isn't universal, but depends on the particular type theory used. In particular, it requires that the type theory doesn't admit types as terms and doesn't provide any operations on types besides sum, product, and exponential. Haskell is one of the few practical languages where it holds.

For example, if your type theory exposed a function inhabited :: type -> bool which returned true if the type were inhabited and false if empty, then not every universal quantification would have such a free theorem. For example, forall a.[a]->[bool] would exist but wouldn't be invertible.

In my view, the reorganization of type theory into category theory has been very harmful to the subject. It has reinforced the separation of types and terms, leaving few researchers to ponder systems combining the two or to even consider the possibility that such systems might be valid and useful. Why? Category theory's focus on isomorphism (instead of equality) doesn't scale. John Baez's writings on things such as "the category of all categories" and "the category of all categories of all ..." illustrate the near-hopeless complexity of isomorphisms-of-categories perfectly. As a result, simple notions like "the type of the type of all inhabited subtypes of the type of natural numbers" seem incomprehensibly difficult in category theory and hence in type theory.

Category theory brought useful advances to topology. No surprise here, because topology is precisely the study of isomorphisms between geometric objects. Type theory, as it applies to any sort of actual programming system, just needs to answer the question, "which values are allowed in this context?" and there is no benefit to dragging isomorphism into this picture.

The state of type theory today seems to me like a set theory where most refuse to acknowledge the possibility that sets might themselves contain sets, or the existence of functions from sets to sets, from values to sets, or from sets to values. Sure, the current approach brings some simplifications to reasoning about types, but only at the expense of a wealth of new tools for expressing, refining, and reasoning about types.

This concludes today's rant. :-)

Frank Atanassow - Re: Theorems for free!
8/2/2003; 7:08:23 AM (reads: 803, responses: 0)
In particular, [it requires] that the type theory doesn't admit types as terms and doesn't provide any operations on types besides sum, product, and exponential.

That's not true.

For example, if your type theory exposed a function inhabited :: type -> bool which returned true if the type were inhabited and false if empty, then not every universal quantification would have such a free theorem.

That sounds possible, but I don't see a counterexample off the top of my head. Do you?

The naturality property does not depend on parametricity, BTW; rather it's the other way round, as there are some natural transformations which are not parametric. So being able to distinguish forall'd types is not necessarily enough to break the theorem.

Also, don't you think, conversely, that if some feature of a type system breaks the naturality property, that might reasonably be construed as an argument against adopting such a feature? After all, it charges a cost for a theorem one would otherwise get for free. :)

For example, forall a.[a]->[bool] would exist but wouldn't be invertible.

What does invertibility have to do with it? In the example above r is not assumed to be invertible. Indeed, I can't think of any functions with that type which are invertible.

In my view, the reorganization of type theory into category theory has been very harmful to the subject. It has reinforced the separation of types and terms, leaving few researchers to ponder systems combining the two or to even consider the possibility that such systems might be valid and useful.

Hm, that's a fascinating point of view. It's a bit like saying microscopes have reinforced the separation of man and bacteria.

Did you know that there are perfectly good categorical models of dependently typed languages? Or that the fact that one can easily describe consistent and general categorical models of dependent types is often touted as one of the success stories of category-theoretic model theory?

Why? Category theory's focus on isomorphism (instead of equality) doesn't scale.

Doesn't it? First, the `focus' on isos is not part of the letter, but rather the spirit, of category theory. Second, actually one should expect up-to-iso equivalence to scale better than up-to-equality equivalence, because it demands less, so there are more models which translates back to more implementations. For example, demanding a type iso to a particular stack ADT lets you use anything which behaves like a stack rather than that particular stack implementation. To me, this sounds like exactly the sort of thing you want for `large-scale' programming and specification.

John Baez's writings on things such as "the category of all categories" and "the category of all categories of all ..." illustrate the near-hopeless complexity of isomorphisms-of-categories perfectly.

Unlike set theory where one has an infinitely high tower of infinities?

As a result, simple notions like "the type of the type of all inhabited subtypes of the type of natural numbers" seem incomprehensibly difficult in category theory and hence in type theory.

You call that simple? :) (No, don't explain---I know what it means.)

Category theory brought useful advances to topology. No surprise here, because topology is precisely the study of isomorphisms between geometric objects.

Well, the study of certain equivalences---usually not isos---between geometric objects.

Anyway, maybe the `essence' of type theory and/or computation is also precisely the study of equivalences between computational objects? I'm not saying that it is but, if it can be characterized that way, maybe that fact would only become apparent after one tries to regard it that way.

It's not such a strange idea, after all. Church's thesis says that lambda-calculus, an archetypal equational language, is sort of the essence of computation. And Dana Scott's adaptation of topological notions forms the heart of domain theory.

Type theory, as it applies to any sort of actual programming system, just needs to answer the question, "which values are allowed in this context?" and there is no benefit to dragging isomorphism into this picture.

That is only the very tip of an enormous iceberg.

For most type systems, one can lay out the typing rules which answer your question in only a page or two. But it doesn't stop there. Then you develop the theory, to find out what properties characterize the type system: subject reduction, termination, confluence, etc. The question of which types behave the same way, that is, which types are interderivable, that is, which types are isomorphic, is a natural question to ask; for example, it's a measure of how redundant the type system is.
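For example, currying and uncurrying witness the standard isomorphism between (A * B) -> C and A -> B -> C; a minimal sketch in OCaml (the function names here are just illustrative):

```ocaml
(* curry and uncurry are mutually inverse, witnessing the type
   isomorphism  ('a * 'b) -> 'c  ~  'a -> 'b -> 'c *)
let curry f a b = f (a, b)
let uncurry g (a, b) = g a b

let f (a, b) = a + b
let () = assert (uncurry (curry f) (2, 3) = f (2, 3))
let g a b = a * b
let () = assert (curry (uncurry g) 2 3 = g 2 3)
```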

The state of type theory today seems to me to like a set theory where most refuse to acknowledge the possibility that sets might themselves contain sets, or the existence of functions from sets to sets, from values to sets, or from sets to values.

That's an interesting analogy.

Sure, the current approach brings some simplifications to reasoning about types, but only at the expense of a wealth of new tools for expressing, refining, and reasoning about types.

Blame Russell. He should never have made that remark to Frege. :)

But seriously, it is easy to complain about all the heavy machinery one needs for categorical type theory, but much harder to provide an alternative which solves the same problems. The work of people like Baez is actually an attempt to organize and reduce that machinery, although I guess you don't see that.

Oleg - Re: Theorems for free!
8/3/2003; 4:18:48 PM (reads: 751, responses: 1)
I'd like to caution that the applicability of the free theorems is _far_ more limited than one may think. The reason is simple: there are very few pure total polymorphic functions. For example, the only pure total function of a signature forall a. a->a is the identity function. The rarity of pure total functions is exactly what makes the theorems for free possible.
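(To see why identity is the only inhabitant of forall a. a->a: the free theorem for f : 'a -> 'a says a (f x) = f (a x) for every function a, and choosing a constant function forces f to fix every value. A spot check in OCaml:)

```ocaml
(* The free theorem for f : 'a -> 'a reads  a (f x) = f (a x).
   Taking a = fun _ -> x0 gives x0 = f x0 for any x0,
   so f can only be the identity. *)
let f : 'a -> 'a = fun x -> x
let a _ = 42
let () = assert (a (f 7) = f (a 7))
```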

Furthermore, free theorems do not apply to impure functions, functions with effects, or functions in a language that permits pervasive reflection. The following are a few less obvious examples for which free theorems do not apply.

Example: Suppose a polymorphic function r with type forall a.[a]->[a], then r satisfies the following theorem: for all types A and A', and every total function a:A->A', we have map a . r_A = r_A' . map a.

It's plain that if the function 'a' raises an exception, free theorems do not apply: such a function is not total. However, if a function has an effect, then the free theorems don't apply either -- even if the function is total. Let us consider the following example, in OCaml:

# let a x = print_string "OK"; x;;
val a : 'a -> 'a = <fun>

# let ra x = if List.length x > 0 then [] else x;;
val ra : 'a list -> 'a list = <fun>

# ra (List.map a [1;2;3]);;
OKOKOK- : int list = []
# List.map a (ra [1;2;3]);;
- : int list = []

Here, the function 'a' is plainly a total function A->A, and ra also has the right type: it's a total polymorphic function on lists that takes any list into an empty list. The inferred types agree with our expectations. However, the behavior of r_A' . map a and map a . r_A obviously differs: the first prints OKOKOK, while the second prints nothing.

As we mentioned, reflection can also be used to break the free theorems. Let us consider the following functions, in Scheme:

(define (a x) (not x)) ;; Bool -> Bool

(define (ra lst) ;; forall a. [a] -> [a]
  (map (lambda (el) (if (boolean? el) #f el)) lst))

(ra (map a '(#f #f #f))) ; ==> '(#f #f #f)

(map a (ra '(#f #f #f))) ; ==> '(#t #t #t)

Function ra is a polymorphic function forall a. [a] -> [a]. It works on any list -- and returns a list of the same type. And yet, (ra (map a '(#f #f #f))) and (map a (ra '(#f #f #f))) obviously produce very different results.

The above behavior is actually quite insidious -- and can be shown even in a language like OCaml.

# let foo = [1;2;3];;
val foo : int list = [1; 2; 3]

# let a x = -x;;
val a : int -> int = <fun>

# let ra x = if Hashtbl.hash x == Hashtbl.hash foo then [] else x;;
val ra : 'a list -> 'a list = <fun>

# ra (List.map a [1;2;3]);;
- : int list = [-1; -2; -3]
# List.map a (ra [1;2;3]);;
- : int list = []

Plainly functions 'a' and 'ra' have the right types. Function 'a' is total. Both functions are pure. And yet the free theorem does not work.

Ehud Lamm - Re: Theorems for free!
8/4/2003; 12:33:33 AM (reads: 745, responses: 0)
What if we reflect that functions have side effects using the type system (e.g., monads)?

Frank Atanassow - Re: Theorems for free!
8/4/2003; 4:17:05 AM (reads: 704, responses: 1)
Oleg wrote: I'd like to caution that the applicability of the free theorems is _far_ more limited than one may think. The reason is simple: there are very few pure total polymorphic functions.

There are also `few' integers on the real number line, yet I've heard that most programmers find them rather useful. :)

I agree that in most extant programs one finds more impure, non-polymorphic functions than pure, polymorphic ones, but that is largely an observation on programming style, and on the fact that few people program in purely functional languages. What you see as a defect of the theorem, I see as a strength of pure FP languages.

For example, the only pure total function of a signature forall a. a->a is the identity function. The rarity of pure total functions is exactly what makes the theorems for free possible.

No, that is not at all what makes them possible. What makes them possible is the `intuitive explanation' given at the top of this page: the fact that a value of an abstract(ed) type cannot be observed. It has nothing to do with cardinality or frequency, and I think it is not useful to imply otherwise.

Furthermore, free theorems do not apply for impure functions, functions with effects, and in a language that permits pervasive reflection.

Yes; like I said above, you have to buy into the idea of referential transparency to make it work, and that's a whole different discussion.

OTOH, I think the requirement that the function has to be total is actually bogus: sets and partial functions form a category with finite products, and call-by-name is a decision procedure for equality of ground terms. Also see my response to Ehud below.

# let ra x = if Hashtbl.hash x == Hashtbl.hash foo then [] else x;;

This one doesn't work because ra is not a natural transformation, because hash is not natural: it doesn't respect the abstractness of universally-abstracted types. Maybe you could call hash `pure', I don't know. (This demonstrates how vague the term `pure' is.) This is an interesting example, though; I didn't realize that these sorts of polymorphic primitives weren't natural. I never liked them; now I have a good reason not to like them. :)

Ehud wrote: What if we reflect that functions have side effects using the type system (e.g., monads)?

As long as the language is `referentially transparent', the theorem will hold. In fact, not only does a monadic effects system preserve the property, it depends on it, because the unit and multiplication (join) of a monad are natural transformations. Without naturality, for example, you would not have:

fmap f (return x) == return (f x)
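(In OCaml terms, a minimal check of this law for the option monad, taking Option.map as the analogue of fmap:)

```ocaml
(* Naturality of return for the option monad:
   Option.map f (Some x) = Some (f x),
   the analogue of  fmap f (return x) == return (f x). *)
let return x = Some x
let f x = x * 2
let () = assert (Option.map f (return 21) = return (f 21))
```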

If the language isn't referentially transparent, for example like in computational lambda-calculus or Algol, naturality still plays an important role, though it doesn't show up as a theorem involving fmap, but rather involving substitution. For example, O'Hearn and Tennent used parametricity (which is a kind of naturality) to explain the phenomenon of local state: "a non-local procedure is independent of locally-declared variables in the same way that a parametrically polymorphic function is independent of types to which it is instantiated."

So naturality actually has a lot to do with effects. Maybe something like this is true: if effects are controlled, then naturality is easier to recognize because you can express it in the concrete syntax of the language; if not, then it still remains an important feature at the semantic level (of categorical models).

Ehud Lamm - Re: Theorems for free!
8/4/2003; 5:10:47 AM (reads: 726, responses: 0)
As long as the language is `referentially transparent', the theorem will hold.

Right. And since most algebraic manipulations assume r.t. functions, I don't see this part of Oleg's critique as very damaging.