Erasure and Polymorphism in Pure Type Systems

Erasure and Polymorphism in Pure Type Systems by Nathan Mishra-Linger and Tim Sheard

Abstract

We introduce Erasure Pure Type Systems, an extension to Pure Type Systems with an erasure semantics centered around a type constructor ∀ indicating parametric polymorphism. The erasure phase is guided by lightweight program annotations. The typing rules guarantee that well-typed programs obey a phase distinction between erasable (compile-time) and non-erasable (run-time) terms.

The erasability of an expression depends only on how its value is used in the rest of the program. Despite this simple observation, most languages treat erasability as an intrinsic property of expressions, leading to code duplication problems. Our approach overcomes this deficiency by treating erasability extrinsically.

Because the execution model of EPTS generalizes the familiar notions of type erasure and parametric polymorphism, we believe functional programmers will find it quite natural to program in such a setting.

This paper is a shorter presentation of some of the ideas of Nathan Mishra-Linger's PhD thesis. The general idea is that, in a dependent type system, erasable terms are not singled out intrinsically (by their type or some other classification), but by how they are used in the program. Functions are classified by two type-level constructs: one meaning that the function argument is used during run-time computation, the other meaning that the argument is computationally irrelevant (it only appears in type annotations, proof terms, etc.). A very nice idea is that parametric polymorphism and erasability (type erasure, proof irrelevance...) may actually be the same thing.
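As a degenerate instance in a non-dependent setting, here is a small Haskell sketch of my own (not from the paper): the forall quantifier already behaves as an erasable classifier for type arguments, and EPTS extends the same extrinsic treatment to any argument, including value-level ones that only appear in types or proofs.

    {-# LANGUAGE ScopedTypeVariables, TypeApplications #-}

    -- The type argument of 'idPoly' is erasable because no computation
    -- ever inspects it: GHC compiles the function to a plain \x -> x.
    idPoly :: forall a. a -> a
    idPoly x = x

    example :: Int
    example = idPoly @Int 42  -- the '@Int' exists only for the type checker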

Being more familiar with non-dependent systems (ML, System F), I'm mostly interested in "type erasure semantics", but find it very nice that insights can be gained by looking at dependent systems.


Background

Pure type systems are a generalization of the Lambda Cube, parametrized by the allowed dependencies between sorts (type polymorphism, type-level computation, dependent types...). They have been mentioned on LtU before.
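To make the parametrization concrete, here is a rough sketch of my own (not from the paper) of a PTS specification as sorts, axioms, and product-formation rules; choosing different rule sets recovers the corners of the Lambda Cube:

    -- A minimal PTS specification: 'Star' and 'Box' are the two sorts of
    -- the lambda cube.
    data Sort = Star | Box deriving (Eq, Show)

    -- Axioms: the sort Star itself has sort Box.
    axioms :: [(Sort, Sort)]
    axioms = [(Star, Box)]

    -- A rule (s1, s2) allows forming the product Pi x:A. B
    -- when A : s1 and B : s2.
    rulesSTLC, rulesF, rulesCC :: [(Sort, Sort)]
    rulesSTLC = [(Star, Star)]                          -- simply typed
    rulesF    = [(Star, Star), (Box, Star)]             -- System F: polymorphism
    rulesCC   = [(Star, Star), (Box, Star),
                 (Star, Box), (Box, Box)]               -- Calculus of Constructions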

While PTSs may also describe non-dependent type systems, the problems tackled in this paper are mostly familiar from dependent type systems, which have been discussed numerous times on LtU.

Other approaches try to capture some of the power of dependent types without adding them in their full complexity. The programming language Ωmega, also by Tim Sheard, has been mentioned recently.

A nice thing about the erasure treatment in Nathan Mishra-Linger's paper is that it avoids the duplication between separate families of "erasable" (or type-level) and "non-erasable" (or value-level) terms, such as can be observed in Coq between Set and Prop, or between the different kind layers of Ωmega. I think this is a promising idea, both for programming-oriented and proof-oriented dependently typed systems.
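To see the kind of duplication being avoided, here is a hypothetical Haskell rendering (my sketch, not from the paper): to use natural numbers both at run time and at the type level, one ends up maintaining parallel copies plus a singleton connecting them.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Value-level naturals; DataKinds also promotes them to a second,
    -- type-level copy of Nat.
    data Nat = Z | S Nat

    -- A singleton GADT re-connects the two copies by hand: 'SNat n' is
    -- the run-time mirror of the type-level 'n'.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)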


Very promising

Skimming through the ATS prelude, we see that much of the code has to be duplicated; moreover, the syntax is also affected: there are parallel notions for "statics" and for "dynamics" (functions vs proof functions, datatypes vs dataprops, etc.). Another separation in the case of ATS is that the static part of the language is simply typed, so static types cannot depend on static values. A workaround is to encode such relations through GADTs, which is sometimes burdensome.
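For instance, here is what that workaround looks like transposed to Haskell (a sketch of mine, not actual ATS code): since the statics can't compute with values, a relation such as addition is encoded as an indexed datatype whose constructors mirror the defining clauses.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- 'Plus m n r' is an explicit proof object witnessing m + n = r;
    -- every use site must build and carry such witnesses around.
    data Plus (m :: Nat) (n :: Nat) (r :: Nat) where
      PlusZ :: Plus 'Z n n                          -- 0 + n = n
      PlusS :: Plus m n r -> Plus ('S m) n ('S r)   -- (1+m) + n = 1+r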

I'm not sure, however: can we support effects for value-level (non-erasable) terms as defined in this paper? If so, it seems that programming languages such as ATS could be simplified to some degree.

[Admin] Front page worthy

Please promote.

Conor McBride

Conor McBride sent a couple of things to me over Twitter (he's @pigworker if you want to follow):

except that section 8.1's proposal to combine erasure-before-conv and proof-irrelevance is (and was then) known to be broken

You surely want to erase proofs (eg equality) at run time, but if you erase them in conv, false hypotheses break the typechecker.

My fault

Actually I think my presentation was a bit hasty: when I wrote "erasability (type erasure, proof irrelevance...)", I was being imprecise. Proof irrelevance is not the same thing as erasability, and the paper doesn't claim it is.
It is a more complicated matter, presented in the paper in a less precise "Future work" section (though it seems to be discussed in more depth in Nathan Mishra-Linger's dissertation). Apparently there are still fundamental difficulties, as Conor McBride highlighted, which means you can't so easily get "proof irrelevance" for free from work on erasability.

When writing my presentation I was a bit uncomfortable with that quick&dirty keyword packing; I only wanted to mention related terms. Of course, once quoted in Twitter format, it reads quite differently.

I've looked in the literature (which I'm not familiar with) for a reference on the difficulties that Conor McBride pointed out. I discovered that proof irrelevance and computational irrelevance have already been discussed on LtU, with On the Strength of Proof-Irrelevant Type Theories by Benjamin Werner. After a quick look, I didn't find this particular problem discussed there, but it led me to The not so simple proof-irrelevant model of CC, by Alexandre Miquel and Benjamin Werner, which seems to discuss the issue in more detail.

Two types of erasure

The reason for the mismatch (and why the EPTS stuff doesn't exactly capture "proof irrelevance") is that there are (at least) two justifications for erasure:

  1. There is at most one value of the given type (as far as any computation can tell).
  2. Computations don't even attempt to look at the value in question.

The former is the usual justification for proof irrelevance; it covers equality proofs and (probably) overlapping inductive proofs that don't eliminate over Set. The latter is more like the justification for erasing type parameters in parametric polymorphism: the type parameters are only used by the static checker to verify coherence conditions.

These two justifications do have some overlap. For instance, you might write code that carries around equality proofs that nobody ever looks at for computational purposes, but that serve to enforce some static invariants. In general, however, neither covers every case. For instance, it isn't unheard of to need to cast between, say, Vec A (n + 0) and Vec A n as subst (Vec A) (n = n + 0) v. The proof may be irrelevant, in that there is at most one such proof, but its value is, in essence, being 'looked at' in order to produce the computational result. The only way to change that would be to make subst parametric in the proof argument, but as mentioned above, that can lead to loops during type checking if you do erasure on open terms.
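Here is that cast transposed to Haskell with a propositional equality (a sketch; the names are mine): the proof must be pattern-matched, which is a computational use, before the coercion goes through.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

    import Data.Type.Equality ((:~:)(..))

    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Matching on Refl is what licenses the cast: the proof is 'looked at'
    -- even though there is at most one value it could be.
    castVec :: n :~: m -> Vec n a -> Vec m a
    castVec Refl v = v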

On the other hand, many cases of parametric quantification are over types/universes with more than one inhabitant. Their erasure cannot be justified by the proof-irrelevance argument.

Thanks for the

Thanks for the clarification.

While looking for more references on the issue, I've come across The Implicit Calculus of Constructions as a Programming Language with Dependent Types, by Bruno Barras and Bruno Bernardo.

To my untrained eyes, the calculus developed there is quite similar to the one in the Mishra-Linger and Sheard paper. They do not handle proof irrelevance directly but, as you noted, their system makes proofs in non-computational positions irrelevant; this is similar to Mishra-Linger and Sheard's treatment. They also have a conversion rule that compares terms up to erasure.
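If I read it correctly, the rule has roughly the following shape (my transcription, with M* denoting the erasure of M; the precise equivalence used in the paper may differ):

    \frac{\Gamma \vdash M : A \qquad \Gamma \vdash B : s \qquad A^{*} \equiv_{\beta} B^{*}}
         {\Gamma \vdash M : B}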

It may be interesting to note that they still keep separate Prop and Set kinds: in Coq, erasability is not the only difference between Prop and Set; they also differ in predicativity. It seems that, in this setting, you have to keep the aforementioned redundancy between data structures of the two worlds.

ICC

As I recall, either one of the papers or Mishra-Linger's thesis notes that the EPTS work essentially extends ICC to arbitrary pure type systems. Actually, I suppose it's more accurate to say that their IPTS is the generalization of ICC, and EPTS is the generalization of ICC*.

The ICC hierarchy actually isn't quite like Coq's; it's more like Luo's ECC. Prop is simply the bottom of the hierarchy: it inhabits Type_0 and is impredicative, but there seems to be no predicative Set inhabiting Type_0. I suppose you might want to keep a distinction for predicativity purposes, but I'd probably root for just not having impredicativity.

I've also entertained the idea that perhaps parametric quantification could be impredicative, while normal quantification would be predicative, but I strongly suspect impredicativity must still be confined to the lowest level(s) to avoid Girard's paradox. It would still let you do some handy things, though. And I have vague memories of hearing that Reynolds talked about parametricity being about when it's safe to be impredicative. Maybe I'm just making that up, though.

See

...Fruchart and Longo's "Carnap's Remarks on Impredicative Definitions and the Genericity Theorem", which offers a philosophical argument (in the style of Martin-Löf, though he would of course disagree with the conclusion) for F-style impredicative definitions.

Another reference

Alexandre Miquel should also be referenced here, as his PhD thesis describes ICC along with its meta-theoretical properties (and a very interesting model, which I have failed to understand). Sadly, it is in French, though there does exist an article in English.