The de Bruijn Criterion and the "LCF Approach".

The "LCF approach", as I understand that term, was Milner's insight that allowing proof development solely in terms of a particular type and its operations was a safe way to allow extension of the LCF proof system with ML.

The de Bruijn criterion seems to be the same thing -- allowing proven terms to be constructed using only a safe kernel language.

What is the relationship of these two terms? Are they synonymous?

References?

You might want to include some links and references. That would make it easier for people to answer your question.

Stab

Taking my definition of "LCF approach" from these slides, my immediate reaction is that the "LCF approach" and the "de Bruijn criterion" are probably related, and that the relationship could probably be formalized in terms of Kolmogorov complexity. For the moment, however, it seems to me that the "LCF approach" will not necessarily result in a system that satisfies the de Bruijn criterion: for example, I do not believe HOL4, Isabelle, or MetaPRL do (N.B. I could be completely wrong about this!), but Coq does (see especially Coq 8.2, which now includes a standalone proof checker that can be audited in precisely the de Bruijn-criterion fashion).

Shield

Taking my definition of "LCF approach" from these slides, my immediate reaction is that the "LCF approach" and the "de Bruijn criterion" are probably related, and that the relationship could probably be formalized in terms of Kolmogorov complexity.

Why do you think so? Kolmogorov complexity seems hardly related; it is a measure of the description length of a piece of code, and it is not computable in general.

But I agree with the poster. A small kernel may imply, and usually does, that proofs can be checked by small verifiers. So, LCF should fit the "de Bruijn" criterion. Although I guess he was more thinking of type theory.

(Btw, I am Dutch; "de" is a definite article equivalent to "the" in English. So the family name is "de Bruijn", which, I am guessing here (and relying on some stuff I hope I remember right), would translate to "the Brown". This probably means that during the French occupation of the Netherlands, when all families were listed and people were given family names, one of his ancestors was referred to in a township as "the brown one". A lot of Dutch people didn't take the occupation too seriously, so some strange family names remain in Dutch, like "Suikerbuik", which means "Sugartummy".)

[Btw, I think John still publishes the code which accompanies the mentioned slides.]

Systems following the LCF approach are an instance of the criterion

Systems following the LCF tradition, like HOL, do comply with the de Bruijn criterion. But have you ever seen the authors of other systems that comply with the de Bruijn criterion claim that they follow the LCF approach? (I have Coq in mind.)
My impression is that neither term is defined formally. Nevertheless, the idea behind LCF-inspired systems is to safeguard the thm type via strong typing, so that one can only construct valid statements, thus "ensuring" soundness.
Of the systems that comply with the de Bruijn criterion I only have some knowledge of Coq, so I hope what I say is not a distorted view. The concern that the de Bruijn criterion aims to cater for is one of trust. Why should we trust the prover? The foundation of that trust is the decidability of the type-checking problem: if you provide an inhabitant of a type (a proposition), it can be mechanically checked whether it truly is one. That provides a formal text (a manipulable object) that is a witness for the proposition the type "stands for".
But I think that "LCF approach" systems, even if they are the only known manifestation of systems satisfying the de Bruijn criterion, have not been demonstrated to be logically equivalent to it.

Coq not strictly LCF

John Harrison describes the key ideas behind the LCF approach as follows:

  • Have a special abstract type thm of ‘theorems’
  • Make the constructors of the abstract type the inference rules of the logical system
  • Implement the system in a strongly-typed high-level language
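
As a toy illustration of those three points, here is a minimal sketch in OCaml (an invented implicational logic of my own, not any actual LCF or HOL kernel): thm is an abstract type, and the only way to obtain a value of that type is through the inference rules exported by the module.

    module Kernel : sig
      type form = Atom of string | Imp of form * form
      type thm                                (* abstract: clients cannot forge theorems *)
      val hyps : thm -> form list
      val concl : thm -> form
      val assume : form -> thm                (* A |- A *)
      val imp_intro : form -> thm -> thm      (* from G |- B derive G minus {A} |- A ==> B *)
      val imp_elim : thm -> thm -> thm        (* from G |- A ==> B and D |- A derive G, D |- B *)
    end = struct
      type form = Atom of string | Imp of form * form
      type thm = Thm of form list * form      (* hypotheses |- conclusion *)
      let hyps (Thm (hs, _)) = hs
      let concl (Thm (_, c)) = c
      let assume a = Thm ([a], a)
      let imp_intro a (Thm (hs, b)) =
        Thm (List.filter (fun h -> h <> a) hs, Imp (a, b))
      let imp_elim (Thm (hs1, ab)) (Thm (hs2, a)) =
        match ab with
        | Imp (a', b) when a' = a -> Thm (hs1 @ hs2, b)
        | _ -> failwith "imp_elim: first argument is not a matching implication"
    end

    (* Outside the module thm is opaque, so every thm value was necessarily
       produced by the inference rules above. *)
    let identity_thm =
      let open Kernel in
      imp_intro (Atom "p") (assume (Atom "p"))   (* |- p ==> p *)

Derived rules and tactics can then be written as arbitrary ML functions, since anything they return of type thm must have gone through the kernel.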

The term "de Bruijn criterion" was coined by Henk Barendregt, with the following definition:

A Mathematical Assistant satisfying the possibility of independent checking by a small program is said to satisfy the de Bruijn criterion.

I can conceive systems that follow the LCF approach but don't satisfy the De Bruijn criterion, for example because the logical system is very large. Systems like HOL(-light) do have a small kernel (the implementation of operations on the abstract theorem type), and proofs can (in principle) be independently checked.

Coq does not strictly adhere to the LCF approach. During the development of your proof it is possible to have an inconsistent proof state, for example with respect to guardedness conditions. In the end this will be caught by the Qed command that does a full check.

Dependent type theories and LCF

I think Coq and other proof assistants based on dependent type theory (DTT) that use terms to represent proofs are even further removed from the LCF approach, because the central object is the concrete datatype of terms and types, and it is usually possible to build ill-typed or incomplete terms (containing existential variables, for example). Of course, it is also possible to have an interface on top of that for building only valid proofs, which is done in the LCF fashion with proof tactics in Coq.
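
For contrast with the LCF-style sketch above, here is an equally toy sketch of the proof-term style (again my own OCaml, an invented implicational logic, not Coq's actual term language): proof terms are an ordinary concrete datatype that anyone can construct, possibly ill-formed, and a small separate checker validates a finished term against the proposition it is supposed to prove.

    type form = Atom of string | Imp of form * form

    type proof =                          (* plain data: ill-formed proofs can be built freely *)
      | Hyp of string                     (* appeal to a named hypothesis *)
      | Lam of string * form * proof      (* prove A ==> B by assuming A (named x) and proving B *)
      | App of proof * proof              (* modus ponens *)

    (* The small, auditable checker: compute the formula a proof term establishes,
       given a context of named hypotheses, or None if the term is ill-formed. *)
    let rec infer (ctx : (string * form) list) (p : proof) : form option =
      match p with
      | Hyp x -> List.assoc_opt x ctx
      | Lam (x, a, body) ->
          (match infer ((x, a) :: ctx) body with
           | Some b -> Some (Imp (a, b))
           | None -> None)
      | App (f, arg) ->
          (match infer ctx f, infer ctx arg with
           | Some (Imp (a, b)), Some a' when a = a' -> Some b
           | _ -> None)

    let checks (p : proof) (goal : form) : bool = infer [] p = Some goal

    let ok  = checks (Lam ("x", Atom "p", Hyp "x")) (Imp (Atom "p", Atom "p"))   (* true *)
    let bad = checks (App (Hyp "x", Hyp "y")) (Atom "p")                         (* false *)

Here the checker, not the datatype, is the trusted part, which is why a system in this style can satisfy the de Bruijn criterion even if the machinery that searches for proof terms is large.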

Randy Pollack and supporters of HOL think like you ...

I can conceive systems that follow the LCF approach but don't satisfy the De Bruijn criterion, for example because the logical system is very large.

An interesting discussion of this point can be read in Donald MacKenzie's book "Mechanizing Proof". (This is a very interesting non-technical book written by a sociologist of knowledge!) The example system there is Nuprl, which in the late 90's had 75 inference rules. The system had been criticised on the grounds of its extensive rule set.
I want to say here that this by itself does not mean that Nuprl does not comply with the de Bruijn criterion as I understand it. The criterion refers to the proof object, which must be produced and can be further manipulated. Adam Chlipala puts it nicely in the introduction to his new book:
Proof assistants satisfying the "de Bruijn criterion" may use complicated and extensible procedures to seek out proofs, but in the end they produce proof terms in kernel languages.

and he adds:

ACL2 and PVS do not meet the de Bruijn criterion, employing fancy decision procedures that produce no "evidence trails" justifying their results.

It is by means of this device, what Barendregt calls "petrified proofs", that an independent program can re-check the result.

Thanks for the link to the paper by Barendregt. I do not remember where I first read about the criterion; now I have at least a citable reference. Is this paper you mentioned the original one that christened the term "de Bruijn criterion"?

Earlier reference

Is this paper you mentioned the original one that christened the term "de Bruijn criterion"?

The earliest reference I have is from 1997, in a paper by Henk Barendregt and Erik Barendsen (Autarkic Computations in Formal Proofs). This appeared a few years later as an article in the Journal of Automated Reasoning 28(3):321–336, Apr. 2002. The definition used in this paper is:

the `de Bruijn criterion' of providing statements verifiable by a small program (that can be checked by hand).

LCF vs. De Bruijn criterion


I can conceive systems that follow the LCF approach but don't satisfy the De Bruijn criterion, for example because the logical system is very large.

Metamath would be an example of a non-LCF system that satisfies the De Bruijn criterion (assuming the definitions provided by Ronny Wichers Schreur). The original system was written by Norman Megill in C; since then, independent checkers have been written in Python (300 lines), Haskell (400 lines), Java, Lua, and C#.

Substitution

They seem to avoid Russell's paradox by a substitution restriction scheme. Does anyone here understand how it works?

Separation

I understand that Russell's paradox is dealt with in the way usual for ZF set theory: by restricting comprehension to subsets, i.e. by using the Separation axiom (see also ssex) rather than the Comprehension axiom. There is some discussion of that at http://us.metamath.org/mpegif/ru.html.

Ah...

I got sidetracked by a comment on disjoint variables.