Mechanizing the Metatheory of LF

Christian Urban, James Cheney, and Stefan Berghofer. Mechanizing the Metatheory of LF. Extended technical report accompanying their LICS 2008 paper.

LF is a dependent type theory in which many other formal systems can be conveniently embedded. However, correct use of LF relies on nontrivial metatheoretic developments such as proofs of correctness of decision procedures for LF's judgments. Although detailed informal proofs of these properties have been published, they have not been formally verified in a theorem prover. We have formalized these properties within Isabelle/HOL using the Nominal Datatype Package, closely following a recent article by Harper and Pfenning. In the process, we identified and resolved a gap in one of the proofs and a small number of minor lacunae in others. Besides its intrinsic interest, our formalization provides a foundation for studying the adequacy of LF encodings, the correctness of Twelf-style metatheoretic reasoning, and the metatheory of extensions to LF.
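
For readers who haven't met LF: its terms are stratified into three levels, objects, type families, and kinds, with dependency flowing from objects up into families and kinds. Below is a minimal, purely illustrative Haskell sketch of that syntax (all names are my own invention; the paper's actual formalization instead uses nominal datatypes in Isabelle/HOL, which handle binding and alpha-equivalence properly):

```haskell
-- Toy sketch of LF's three syntactic levels; string-named variables,
-- no alpha-equivalence or capture-avoidance. Illustration only.
type Var = String

data Kind
  = KType                  -- "type", the kind of all type families
  | KPi Var Family Kind    -- {x:A} K, a kind depending on an object x

data Family
  = FConst String          -- a declared type-family constant
  | FApp Family Object     -- A M, a family applied to an object
  | FPi Var Family Family  -- {x:A} B, dependent function type

data Object
  = OConst String          -- a declared object constant
  | OVar Var
  | OLam Var Family Object -- [x:A] M
  | OApp Object Object     -- M N
```

Because objects appear inside families (via FApp) and hence inside kinds, the judgments for the three levels are mutually dependent, which is part of what makes the metatheory delicate.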

This paper is a substantial exercise in applying the approach of Harper and Licata's Mechanizing Language Definitions, reported here before, and it is an advance towards a worthy objective in the campaign to mechanise metatheory that Paul espouses...


just to clarify a bit

Just to clarify things a little: the Urban/Cheney/Berghofer paper is not an application of the approach that Bob and I were talking about in the journal paper you linked to: they're using Isabelle/HOL, a different proof assistant from Twelf! Their paper is about studying LF itself, which is the framework that you use to represent systems in Twelf. So their paper is "meta-meta": it's about formalizing the metatheory of the language that you use when you're formalizing metatheory in Twelf, and they use an entirely different proof assistant, one not based on LF at all, to do it. Their work is related to our journal paper, in that part of the goal of our paper was to give a (non-mechanized) account of the metatheory of LF, and Urban/Cheney/Berghofer formalize an account of the metatheory of LF. But the account that they formalize is an earlier one, based on a logical relations argument, which is different from Kevin Watkins's syntactic method, which Bob and I summarize in that article.

PS: there's a more recent version of the journal article you linked to in the other thread.

Mea culpa

In my haste to briefly sketch out a connection to some earlier LtU stories, I have been guilty of rather glibly oversimplifying.

I'll add some remarks, by way of making up, and probably dig myself deeper :-)

Not only are the authors using different logical frameworks, they are using frameworks with very different conceptions of what kind of work the user does in establishing results. I'm much less familiar with the Isabelle/HOL approach than I am with the LF/Twelf approach, but I have the sense that, while LF is a lean intuitionistic basis for theory construction that encourages users to work with explicit proof representations (and, in Twelf's case, supports automated structural induction over those proofs), HOL is a framework for specifications based on classical logic that encourages users to treat specifications truth-conditionally, with automation handled by Prolog-style refutation search.
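
To give a concrete (if toy) picture of the "explicit proof representations" style, here is a Haskell sketch, entirely my own and not from the paper, in which a derivation of n + m = p is an ordinary data value that can be pattern-matched and recursed over, roughly as Twelf inducts over LF derivations:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Z | S Nat

-- Each value of type `Plus n m p` is a derivation tree witnessing n + m = p.
data Plus :: Nat -> Nat -> Nat -> Type where
  PlusZ :: Plus 'Z m m                         -- 0 + m = m
  PlusS :: Plus n m p -> Plus ('S n) m ('S p)  -- (1+n) + m = 1+p

-- A concrete derivation object: 2 + 1 = 3.
twoPlusOne :: Plus ('S ('S 'Z)) ('S 'Z) ('S ('S ('S 'Z)))
twoPlusOne = PlusS (PlusS PlusZ)
```

In the HOL style, roughly speaking, one would instead state the addition fact as a proposition and let the automation search out a proof; the derivation itself is not an object you compute with.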

The authors' achievement in making this translation is very much nontrivial, since it is hard to translate intuitionistic inferences cleanly into classical refutation search. It leads to LF being realised as a package within Isabelle/HOL with a Harper-Licata-style mechanised metatheory. The conclusions signal the authors' intention to mechanise another piece of LF-family metatheory in HOL, namely the jewel in Twelf's crown: its ability to automate structural induction.

The interest to LtUers is that this kind of work breaks down the barriers between rival logical-framework communities, making all of them more attractive by allowing technologies to be shared, increasing mutual understanding, and reducing, for newcomers, the difficulty of "choosing the right framework", which has deterred at least some hackers. All of these help build the foundations that allow PL support technologies, such as the work discussed in Theorem proving support in programming language semantics, to be applied.

just to clarify a bit more :)

Hi Charles, just a few more clarifications:

  • There's a difference between (a) studying one representation language using another proof assistant and (b) embedding that representation language in another proof assistant in such a way that you'd want to use the embedded language to do formalizations. In my opinion, this paper only does (a). While it could eventually lead to (b), that would require a lot more work.
  • There's not as much dissonance as you suggest between intuitionistic and classical reasoning here: in this paper, the authors study the metatheory of LF (a lambda-calculus used for representation), not Twelf (a separate language for computing with LF terms, which is used to give intuitionistic proofs of theorems). Studying LF is, at a high level, just like studying any other lambda-calculus (the simply-typed lambda-calculus, System F), though LF is a bit harder because of dependent types; see the sketch after this list.
  • While it's true that Twelf does have some rudimentary ability to automatically prove theorems by structural induction, the vast majority of what we use it for is *checking* explicitly given proofs.
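
To illustrate that last point about why dependent types raise the difficulty: even substitution has to be defined for objects and type families simultaneously, since objects occur inside families. Continuing the toy Haskell datatypes from the sketch near the top of the thread (my own names, and still ignoring variable capture):

```haskell
-- Mutually recursive substitution of object m for variable x.
-- Capture-avoidance is deliberately omitted; illustration only.
substO :: Var -> Object -> Object -> Object
substO x m (OVar y)
  | y == x    = m
  | otherwise = OVar y
substO _ _ (OConst c)  = OConst c
substO x m (OApp n p)  = OApp (substO x m n) (substO x m p)
substO x m (OLam y a n)
  | y == x    = OLam y (substF x m a) n               -- x shadowed in n
  | otherwise = OLam y (substF x m a) (substO x m n)

substF :: Var -> Object -> Family -> Family
substF _ _ (FConst c)  = FConst c
substF x m (FApp a n)  = FApp (substF x m a) (substO x m n)
substF x m (FPi y a b)
  | y == x    = FPi y (substF x m a) b                -- x shadowed in b
  | otherwise = FPi y (substF x m a) (substF x m b)
```

Every lemma about substO tends to need a companion lemma about substF, and the same mutual dependency runs through the typing and equality judgments whose decision procedures the paper verifies.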