Observational Equality, Now!

In Observational Equality, Now!, Thorsten Altenkirch, Conor McBride, and Wouter Swierstra have something new and positive to say about propositional equality in programming and proof systems based on the Curry-Howard correspondence between propositions and types. In the authors' words: we have found a way to present a propositional equality type

  • which is substitutive, allowing us to reason by replacing equals for equals in propositions;

  • which reflects the observable behaviour of values rather than their construction: in particular, we have extensionality — functions are equal if they take equal inputs to equal outputs;
  • which retains strong normalisation, decidable typechecking and canonicity — the property that closed normal forms inhabiting datatypes have canonical constructors;
  • which allows inductive data structures to be expressed in terms of a standard characterisation of well-founded trees;
  • which is presented syntactically — you can implement it directly, and we are doing so—this approach stands at the core of Epigram 2;
  • which you can play with now: we have simulated our system by a shallow embedding in Agda 2, shipping as part of the standard examples package for that system [21].

Until now, it has always been necessary to sacrifice some of these aspects. The closest attempt in the literature is Altenkirch's construction of a setoid-model for a system with canonicity and extensionality on top of an intensional type theory with proof-irrelevant propositions [4]. Our new proposal simplifies Altenkirch's construction by adopting McBride's heterogeneous approach to equality.

So quiet ...

There seems to be no one who wants to comment on this. Presumably it is abstract enough that, except to those already expert in this field (I'm not), it seems largely irrelevant and/or wayyyy too confusing.

Personally, I don't really understand the details, but I would love for some expert to explain in simple terms what all those properties like canonicity are really about. What do they buy us (us poor Haskell-plebs, who probably have them but don't realise it)?

Finally, I'm *really* excited about the line of research that Epigram seems to be following, especially the idea that the whole of the program pipeline, from editing to compilation, is up for hooking custom code into. Furthermore, it seems to be following a pattern I have come to greatly appreciate: be explicit first, then figure out heuristics to reduce the verbosity. For instance, type declarations -> type annotations, where we accept that general type inference is not possible, but that we should have it nevertheless for ML-like fragments; similarly (isomorphically?) for proofs. I would very much like to move things like optimisation into this world: first we're explicit about how the conceptually pristine source should be transformed into fast machine-executable code, but the compiler should know a set of common heuristics, which presumably encompass current techniques, and apply them automatically.

The ability to prove

The ability to prove observational equality is critical to a wide range of optimizations, and also to set-based languages that work with codata or first-class functions. For example, you could prove that two infinite lists are equal... without expanding them. The ability to identify functions/queries/predicates as identical can even result in combinatorial time and space optimizations in a multicast network or a large functional reactive program.

Whether two functions are identical - i.e. whether they produce the same outputs for the same inputs - is in general undecidable. But it often can be decided for simple cases. That is, you can know that two functions are equal by comparing them structurally, and you can know two functions are disparate by testing just a few inputs or by abstract analysis.
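For a toy illustration, here is a small sketch in Agda (using standard-library names as I recall them; the function names are made up for the example): two definitions that differ structurally but agree on every input, and a third that a single test input already rules out.

    open import Data.Nat using (ℕ; _+_; _*_)
    open import Relation.Binary.PropositionalEquality using (_≡_; _≢_; refl)

    -- Three structurally different functions on naturals.
    double twice square : ℕ → ℕ
    double n = n + n
    twice  n = 2 * n
    square n = n * n

    -- double and twice agree at 3 (and in fact everywhere) ...
    agree-at-3 : double 3 ≡ twice 3
    agree-at-3 = refl

    -- ... while one test input is enough to refute double = square.
    differ-at-3 : double 3 ≢ square 3   -- 6 is not 9
    differ-at-3 ()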

In a proof system like Epigram, the ability to prove observational equality - that is, without too much sacrifice of other features (like the ability to represent arbitrary trees) - greatly increases the number of theorems that can automatically be applied to new types, allows deeper optimization strategies in meta-programming, and makes it possible to keep sets of functions or codata free of duplicates that are observationally equivalent. It's a huge win, perhaps more so than it would be in a regular functional language.

Haven't looked at the paper yet

Only just seen the story, but I think that Thorsten Altenkirch is constitutionally incapable of writing or coauthoring a research paper with the word "propositional" in it that doesn't substantively advance the sum of human understanding of type theory. He's an expert's expert.

I doubt the paper will actually spell doom for the study of intensional equality: I believe there are meta-theoretic reasons why you can't avoid it in an ambitious constructive type theory.

I'll say more when I have read the paper.

Wow

The math is a bit beyond me, but the implications of being able to prove observational function equality are huge.

Until now, functions as a type have lacked a truly meaningful or useful notion of equality. This result implies that, in the cases considered, functional equality for statically defined functions can be proven statically. This allows all kinds of amazing stuff, starting with automatic coalescing of redundant function definitions when project code bases get merged.

In turn this allows compilers to automatically reduce or eliminate a lot of redundant computation and runtime equality testing and branching. In turn that would allow dead code elimination after branches that are currently not subject to static analysis. ... And it just gets deeper from there.

Think "the best compilers anybody can make just got an order of magnitude better, and the code they produce smaller, faster, and more elegant" and you'll have one of the major implications. Think "equality and possibly even a total ordering are possible on the set of all functions defined in static and dynamic languages" and you'll have another. After that think "Now we can think about how to create code browsers that efficiently and correctly manage projects built out of code written by widely separated people in different languages without keeping undue redundant definitions" and you'll have a glimmer of a third.

Bear

Halting problem remains unsolved

This type system (and I haven't read beyond the introduction yet) isn't going to infer a type for a function in a general Turing-complete language. In order to obtain a well-formed term, you must adhere to their typing discipline, which will probably be somewhat restrictive, requiring what are essentially proofs of correctness wherever a value is cast to a subtype. While this is an interesting advance, I'm more interested in the story of how I won't have to use all of this machinery except where I want it.

Some misunderstanding?

This paper is really cool, but its goal seems to be different from what a few people have understood. I haven't finished reading it, but I think I can give a tutorial on its general goal. I apologize for the length, but I've tried to make this explanation widely accessible.
A shorter explanation from its author is available here: https://lists.chalmers.se/pipermail/agda/2010/002506.html

Basically, this is about making some type theory expressive enough to use extensional equality, without making its use intractable.

What does extensional equality mean? Take two functions that produce the same output given the same input, whatever the input is. Using the principle of extensional equality for functions, we can conclude that those two functions are equal.
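To make this concrete, here is a minimal sketch in plain Agda (not the paper's system; the module and lemma names are from the standard library as I remember them). We can prove two differently written functions equal pointwise, but the statement that the functions themselves are equal is not provable without adding extensionality:

    open import Data.Nat using (ℕ; _+_; _*_)
    open import Data.Nat.Properties using (+-identityʳ)
    open import Relation.Binary.PropositionalEquality using (_≡_; cong; sym)

    double twice : ℕ → ℕ
    double n = n + n
    twice  n = 2 * n

    -- Pointwise equality: every input yields equal outputs.
    pointwise : (n : ℕ) → double n ≡ twice n
    pointwise n = cong (n +_) (sym (+-identityʳ n))

    -- Extensionality would let us go from pointwise to double ≡ twice,
    -- but no term of this type can be written in plain Agda.
    Extensionality : Set₁
    Extensionality = {A B : Set} {f g : A → B} → (∀ x → f x ≡ g x) → f ≡ g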

This concept of extensional equality is well known from a lot of research, but not only is it impossible to decide: as far as I understand, we can't even give a complete axiomatization of the concept, and this paper doesn't seem to change that. Hence I don't think this paper by itself enables more powerful optimizations.
However, many rules for proving programs extensionally equal are known. I like the presentation in Robert Harper's Practical Foundations for Programming Languages, chapters 47 and 48 (note: I started reading the book from those chapters, without too much preparation).

Now, what the paper does instead is to allow using such an equivalence when doing theorem proving with theorem provers like Agda, which are based on dependent types and on Intensional Type Theory (ITT), and which have some advantages. The paper builds a different type theory that retains the advantages of ITT but where the built-in equality can be made extensional.

In dependent typing, types can depend on values. This complicates typechecking, which needs to verify that the type of a term is compatible with its expected type. This can mean checking that values are equal. For example, if Vector n is the type of lists of length n, we would like Vector (2 + 2) to be equivalent to Vector 4. So the typechecker must run 2 + 2 and verify the output is equal to 4. This seems easy, but + is a library function; in other examples, the typechecker might need to run arbitrary programs instead of 2 + 2.
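In Agda notation (with Vec and ℕ from the standard library standing in for Vector), the easy case might look like this sketch:

    open import Data.Nat using (ℕ; _+_)
    open import Data.Vec using (Vec; _∷_; [])

    -- The typechecker normalises 2 + 2 to 4 on its own, so a four-element
    -- vector is accepted at type Vec ℕ (2 + 2) with no proof required.
    v : Vec ℕ (2 + 2)
    v = 0 ∷ 1 ∷ 2 ∷ 3 ∷ []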

In an ITT, the equality used is fairly limited: to verify that two values are the same, the typechecker will only normalize them, but will not use proofs that the terms are equal. Alternatively, one can use a suitable proof that the two terms are equal (via Leibniz equality). However, to write this proof, one cannot use extensional equality, because it's not a theorem.
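Here is a small sketch of the second situation (plain Agda again; +-identityʳ : ∀ n → n + 0 ≡ n is the standard-library lemma as I recall it). The typechecker cannot see that n + 0 and n are the same by normalisation alone, so an explicit proof has to be transported, Leibniz-style, with subst:

    open import Data.Nat using (ℕ; _+_)
    open import Data.Nat.Properties using (+-identityʳ)
    open import Data.Vec using (Vec)
    open import Relation.Binary.PropositionalEquality using (subst)

    -- n + 0 does not normalise to n (addition recurses on its first
    -- argument), so the length must be rewritten using a proof.
    cast : ∀ {n} → Vec ℕ (n + 0) → Vec ℕ n
    cast {n} xs = subst (Vec ℕ) (+-identityʳ n) xs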

We can then try to add extensional equality as an axiom. The problem, however, is that in a proof assistant based on dependent typing, one also relies on the propositions-as-types principle (also known as the Curry-Howard isomorphism). In this context, proofs of theorems are programs, which produce proofs of their conclusions from proofs of their hypotheses. Canonicity says simply that a closed term in normal form (the result of running a program) is a value built from canonical constructors. Axioms, however, are programs without an implementation, so they cannot be run: a program using an axiom would stop executing when it tries to invoke the axiom, leaving a normal form that is not a value, and this violates canonicity.
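As an illustration (plain Agda, not the paper's construction), postulating extensionality typechecks fine, but it produces closed proofs that are stuck normal forms:

    open import Data.Nat using (ℕ; _+_)
    open import Data.Nat.Properties using (+-identityʳ)
    open import Relation.Binary.PropositionalEquality using (_≡_)

    postulate
      funext : {A B : Set} {f g : A → B} → (∀ x → f x ≡ g x) → f ≡ g

    -- This closed proof is in normal form, yet it is not the canonical
    -- constructor refl: evaluation stops at the postulate, which is
    -- exactly the failure of canonicity described above.
    stuck : (λ (n : ℕ) → n + 0) ≡ (λ n → n)
    stuck = funext +-identityʳ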

What the paper does, then, is set up its theory carefully enough that one can add the axiom of extensionality yet never needs to run a proof that might involve it. Thanks to the setup, the paper achieves proof irrelevance, so that two proofs of the same type (proving the same theorem) are indistinguishable. Therefore, trying to compute a proof is pointless, and proofs can be erased at runtime. Since all uses of the problematic extensionality axiom appear in such proofs, these uses will all be erased, and the program will never try to run the extensionality axiom.