Barendregt's Convention

Hi, I am doing some proofs on a calculus with existential types and I am fairly liberally applying Barendregt's convention (roughly, that bound variables are named distinctly from each other and from any free variables). My assumption is that I have to justify using this convention, and that the justification is along the lines of 'the rules of the calculus abide by the convention'. However, I cannot find any information about a formal way of justifying this. Does anyone have experience with this, or pointers to relevant papers?

Thanks, Nick

Alpha conversion

Not quite sure what it is you want to justify, but it is not difficult to see that each term that does not respect the convention is alpha-equivalent to one that does. Is this what you were after?

Not quite

Not quite. I am more concerned about judgement rules in the calculus that might violate the variable convention, rather than individual terms. In particular, if I have a judgement that takes bound variables out of scope (e.g., opening existential types), then alpha conversion cannot be straightforwardly applied. It seems to me that such a rule would violate Barendregt's convention, and I want to prove that no such rules exist in my calculus, or some equivalent property. I am wondering whether such a property has been investigated previously.
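
To be concrete, the kind of rule I have in mind is roughly the usual open/unpack rule for existentials (this is just a sketch, not the exact rule from my calculus):

  Γ ⊢ t1 : ∃α.τ        Γ, α, x:τ ⊢ t2 : σ
  -----------------------------------------  (α not free in σ)
  Γ ⊢ open t1 as (α, x) in t2 : σ

The side condition stops the bound type variable α escaping into the conclusion; my worry is about rules that are not so well behaved.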

Cheers, Nick

Which rule?

Are you talking about Plotkin's existential quantifier here? I think it would help if you gave the inference rule you were worried about.

Which rule?

The existential types are similar to Mitchell and Plotkin's. I am more worried about the calculus in general than any specific rule. That said, I have been particularly concerned with a subtyping rule, similar to the WS-Env rule for subtyping in the Wild FJ paper.

Barendregt's Convention

If you do a rule induction over inductively defined judgements, the variable convention does indeed need justification. The reason is that there are inductive definitions for which the variable convention is an unsound reasoning principle. Michael Norrish, Stefan Berghofer and I produced a paper about this last year. In short, you have to make sure that every binder in a rule does not appear free in the conclusion of that rule.
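
As a rough, made-up illustration (not the example from the paper): take a relation defined by the single rule

  ------------------
  Lam x. t --> t

The binder x can occur free in the conclusion, namely inside the t on the right-hand side. If you nevertheless invoke the variable convention in a rule induction over this relation, you get to assume x is fresh for the other things in your statement, and by instantiating the statement with t itself you could 'prove' that x never occurs free in t - which is false, for example for Lam x. x --> x. The condition on binders above rules out definitions like this.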

Paper

Thanks! I had come across an earlier version of your paper, but the one you link to seems to have a bit more concrete detail in it. It will take me a while to digest, but I think it has the answers I'm looking for.

"In short you have to make sure that every binder in a rule does not appear free in the conclusion of that rule."

Do you mean that bound variables in the premises of a rule must not appear free in the conclusion?

Barendregt's Convention

"Do you mean that bound variables in the premises of a rule must not appear free in the conclusion?"

Yes. If you have a binder in a rule (either in a premise or in the conclusion) and you want to invoke the variable convention for it, then you have to make sure, roughly speaking, that this binder is not free in the conclusion. The binder can be free in a premise. The typing rule for lambdas in the simply-typed lambda calculus is a good example of this: there you have a variable, say x, bound in the conclusion and free in a premise. The way this rule is defined, however, ensures that x is not free in the conclusion, so it is fine to use the variable convention with the typing rules. In general, however, this is not the case. The paper gives an obvious example, but there are more subtle ones where it is not so clear that the condition on binders is violated.
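
For concreteness, the rule in its usual presentation is

  Γ, x:σ ⊢ t : τ
  ---------------------  (x fresh for Γ)
  Γ ⊢ λx. t : σ → τ

The x is free in the premise, but in the conclusion it is bound by the lambda and, thanks to the freshness condition on Γ, does not occur free anywhere else in the conclusion.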