Dead-end theorems

Define dead-end theorems to be theorems that are not used to prove other results. What percentage of proofs in PLT papers are of dead-end theorems?

For example, a type soundness result is not often used to prove more fundamental results. It is just a "housekeeping" theorem.

Does this percentage tell us anything important?

Not obviously important

A theorem might establish an important property about a proposed formalism. The formalism might not be taken up "as is" in future work, due to various shortcomings, but the idea of such a formalism having such a property could still become influential on future work.

In general, a theorem and its proof might be influential without being used in a deductive sense in later work.

...not used to prove other results...

yet

That's like asking

which program invariants established by your typechecker don't really contribute to the correctness of the program. You could ask, but the answer might change.

(It's also a lot like the annoying "unreachable code" warnings that some compilers are wont to emit when the optimizer discovers dead code in the program text. It may be an indication of a problem, or it may simply be an indication that the compiler is doing its job; programmers and especially metaprogrammers frequently have very good reasons to insert unreachable code into program source).

I'm not sure I'd agree that establishing a type system as sound is a dead-end theorem. It establishes that the upper bound on the usefulness of the system is sufficiently high that further work is of interest--there is little practical use of unsound type systems, after all.

type soundness

I would also strongly disagree that type soundness is a dead-end theorem. It's certainly used to prove lots of little theorems, e.g., that particular programs satisfy the safety policy, or that certain type-directed compiler optimizations are correct, or that programs of type ⊥ do not terminate (analogous to cut-elimination being used to prove consistency of a logic). That there aren't yet (m)any(?) deep theorems that rely on type soundness may be an indication that we haven't formulated type soundness at a sufficiently abstract level, rather than that it is a useless notion.

I never said it was a

I never said it was a useless notion!

Sorry, I was confused by

Sorry, I was confused by this paragraph:

Charles & Scott are of course correct that being dead-end does not mean the theorem is useless or unimportant. This is why I did say it does [...]

where I think you meant to write "not" between "did" and "say". You can ignore what I wrote after "sufficiently abstract level".

Thanks for spotting that

Thanks for spotting that typo. I fixed it.

Charles & Scott are of

Charles & Scott are of course correct that being dead-end does not mean the theorem is useless or unimportant. This is why I did not say it does, and asked about a more well-defined property (i.e., being "dead-end" as defined above).

Burton, you may be right. I should have made my definition more exact: "that is not currently used in the proofs of other results". Anyway, in many routine cases it is easy to tell that chances are the theorem, as is, is going to be of little use in future proofs (it may be used once or twice, of course...)

Inferential impact analysis?

It sounds like you are talking about something like citation impact analysis on formal inferential dependencies across a scholarly corpus.

I'm sure that gathering information on who cites which result, in order to use it as a starting point for further analysis, is exceedingly interesting, but I doubt that the actual measure "percentage of dead-end theorems" as you currently have it is of more than very slight independent interest, unless dead-endedness is modified to allow for influences of the form "The proof of theorem X in this article is based on the proof of theorem Y in article T".

Postscript: I've just read Scott's comment below, and I see he is (still) going in the same direction as me. I doubt the usefulness of relevance logic here, though; I rather see both theorems and their proofs as being like plasticine, which can be, and often are, remade through analogical reasoning. That is rather my objection.

"The proof of theorem X in

"The proof of theorem X in in this article is based on the proof of theorem Y in article T".

I guess the book Applied Proof Theory: Proof Interpretations and their Use in Mathematics might be of interest to some of the people in this thread.

Here's another question for you

Suppose I prove Theorem A. Then I prove, independent of my first proof, Theorem B.

Then I write a second proof of A, depending on B; and likewise use A to prove B.

A and B are nowhere else cited.

Are they dead? (Are we using tracing or ref-counting to evaluate the liveness of the theorem?)

:)
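For what it's worth, the tracing-vs-ref-counting distinction can be made concrete. A minimal sketch follows (the citation data, the root set, and the function names are all invented for illustration): reference counting keeps the mutually-citing pair A and B "live" forever, while tracing from a root set of independently interesting results collects the cycle.

```python
# uses[t] = theorems cited in the proof of t (toy data, invented here)
uses = {
    "A": {"B"},                  # the second proof of A depends on B
    "B": {"A"},                  # and the proof of B uses A
    "Soundness": {"Progress", "Preservation"},
    "Progress": set(),
    "Preservation": set(),
}
roots = {"Soundness"}            # results the community actively builds on

def refcount_live(uses):
    """Ref-counting view: a theorem is 'live' if at least one proof cites it."""
    counts = {t: 0 for t in uses}
    for cited in uses.values():
        for c in cited:
            counts[c] += 1
    return {t for t, n in counts.items() if n > 0}

def trace_live(uses, roots):
    """Tracing view: live = transitively reachable from the root set."""
    live, stack = set(), list(roots)
    while stack:
        t = stack.pop()
        if t not in live:
            live.add(t)
            stack.extend(uses[t])
    return live

print(sorted(refcount_live(uses)))     # A and B keep each other "live"
print(sorted(trace_live(uses, roots))) # tracing from the roots drops the A/B cycle
```

Note that under the ref-counting view the root itself looks dead (nothing cites "Soundness"), which is one more way the metaphor leaks.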

Maybe the "publication" metaphor helps us here. An object which is created on the heap, but whose sole reference is immediately discarded, is of course semantic garbage. An object which is created somewhere and then "published" in a directory of some sort, such that other clients who have a reference to the containing repository and who can (re)generate the key necessary to reach the object, is certainly not garbage. Even if nobody else bothers to access it; the fact that it is published means it probably should be left alone. (One could subsequently decide to withdraw it--removing it from the repository--at which point it might then become garbage, assuming no other references).

Likewise, a theorem jotted down on an envelope and then crumpled up and thrown into the trash, is (literally, in this case), garbage. A theorem which finds its way onto the pages of Communications of the ACM has been (literally again) published, and by that act of publication, is live. Even if nobody ever uses it in their research. (What happens if CACM subsequently withdraws the paper is left as an exercise to the reader).

Theorems are, of course, immutable creatures, so copies are as good as aliases--meaning that withdrawing one from the totality of man's knowledge is nigh impossible, once the horse has left the barn.

But still... the liveness of an object is really only an interesting property when the size of the set of objects is bounded, and one needs to find objects to cull. While it's often good sport to complain about "junk research" (and tenure committees and others may have good reason to identify producers of excessive trivial research prior to granting tenure), the global information capacity of academia is not so constrained. Unused theorems simply get ignored, after all.

Switching gears away from garbage collection metaphors, perhaps relevance logic has some relevance...