Interactive Tutorial of the Sequent Calculus

Interactive Tutorial of the Sequent Calculus by Edward Z. Yang.

This interactive tutorial will teach you how to use the sequent calculus, a simple set of rules that you can use to show the truth of statements in first order logic. It is geared towards anyone with some background in writing software for computers, with knowledge of basic boolean logic. ...

Proving theorems is not for the mathematicians anymore: with theorem provers, it's now a job for the hacker. — Martin Rinard ...

A common complaint with formal systems like the sequent calculus is: "I clicked around and managed to prove this, but I'm not really sure what happened!" This is what Martin means by the hacker mentality: it is now possible for people to prove things, even when they don't know what they're doing. The computer will ensure that, in the end, they will have gotten it right.

The tool behind this nice tutorial is Logitext.

Polyglot

It is also relevant to LtU that Logitext is an example of polyglot programming: as the following blog post explains, it is written in Haskell + Coq + Ur/Web.

I know I'm a language geek, because I find that even more impressive than the end result. Chris Double mentioning in the comments that he himself uses Ur/Web + ATS (+Factor?) also has me salivating.

Related: Gamifying algebra

Well...

Rinard is not going to teach a lot of people anything unless he drops the highbrow pedantic comments. The tool is lots of fun, though.

Point of view

To me, his quotes/comments were encouraging.

Ah well...

I used to be a teacher, so his remarks just strike me as 'not too encouraging to the general audience.' In teachtalk, one says people have different aptitudes, which is about as true as it can get.

No biggy. Academia should be blissfully unaware of the real world; good to see it still is.

On another note...

There is no such thing as 'the' sequent calculus. In general, a sequent calculus is a proof theory on sequents. The calculus shown here is one particular first-order sequent calculus.

As a side note, I always found sequent calculi a natural match for expressing type theories, since type obligations usually need to be proven from a context of type assertions.

My pet project uses a sequent prover for discharging type obligations on an input language, but I never got around to fleshing out an exact type theory for that, so enough of that.

Same Doubt

Yes indeed, there is no "the calculus". The implemented calculus is, for example, not the same as Gentzen's calculus LK. For conjunction on the left side, Gentzen had two rules:

G, A |- D
------------- (&L1)
G, A & B |- D 

G, B |- D
------------- (&L2)
G, A & B |- D 

But what has been implemented in Logitext is:

G, A, B |- D
------------- (&L)
G, A & B |- D 

Similar differences are found in the right introduction rules for disjunction. The most striking difference is that the leaves are formed differently, and that LK has explicit structural rules such as weakening (Verdünnung) and permutation (Vertauschung). In LK the axiom is:

A |- A

But the implementation only uses:

G, A |- A, D

What the implementation does is very common for mechanized proofs: get rid of the structural rules, accumulate formulas in the antecedent and consequent of a sequent, and wait until some formula A appears simultaneously in the antecedent and the consequent.
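To make that accumulation trick concrete, here is a minimal sketch in Python. The representation and names are my own, not Logitext's: atoms are strings, a conjunction A & B is the tuple ("&", A, B), and a sequent G |- D is a pair of lists.

```python
def axiom(gamma, delta):
    """Generalized leaf rule G, A |- A, D: the sequent closes as soon
    as some formula occurs on both sides, with no explicit weakening."""
    return any(a in delta for a in gamma)

def conj_left(gamma, delta):
    """The implemented (&L): replace the first A & B on the left by A
    and B together, instead of choosing one of them as Gentzen's
    (&L1)/(&L2) rules do."""
    for i, f in enumerate(gamma):
        if isinstance(f, tuple) and f[0] == "&":
            _, a, b = f
            return [a, b] + gamma[:i] + gamma[i + 1:], delta
    return None

# Proving A & B |- A: one (&L) step, then the generalized axiom closes.
gamma, delta = conj_left([("&", "A", "B")], ["A"])
print(gamma, delta)         # ['A', 'B'] ['A']
print(axiom(gamma, delta))  # True
```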

The implemented calculus can be shown complete since it allows contraction (Zusammenziehung) in (forall L) and (exists R). But besides the aforementioned differences there can be many more differences among sequent calculi:
- They could include equality rules
- They could include higher order/abstraction rules
- They could include cut/mix rules
- They could include induction/recursion schemas
- They can be weaker than classical logic, i.e. intuitionistic, minimal, etc.
- What else?

On the positive side, what has been implemented is really a calculus which consists purely of left/right introduction rules, and not of introduction/elimination rules. Nevertheless, I guess it would be better to find a more specific title for the presented implementation.

Bye

Larry Paulson's course notes

I loved Larry Paulson's course (and notes) on Logic and Proof at Cambridge.

And while we are at it, the course notes for Semantics of Programming Languages (currently taught by Peter Sewell) are also very nicely done.

The Semantics course is

The Semantics course is actually currently taught by Sam Staton.

Woa. This Logitext is really

Woa. This Logitext is really a gem. Thank you for the link, Manuel!

Yah

I should have been more positive. It really is a lot of fun to do the exercises with this tool, and I think it'll make a marvelous teaching aid. Stunning, really.

Is it just me

Or is the last lemma in the tutorial a bit fishy?

The classic fish

If by "fishy" you mean "needs classical logic", then yes. It has a relatively clear interactive computational meaning, however, if you consider a snapshot/backtracking/continuation primitive to model classical logic: I give you some witness, and if you prove me wrong with another (for which P does not hold), I can restart at the beginning and change my mind to the one you chose.

The way the proof works is maybe not terribly apparent in the sequent calculus derivation, at least to the untrained eye. The interactive tutorial is good at letting you get used to the symbolic manipulations of the calculus, but it does not accomplish the impossible wonder of letting you see the dynamic meaning of the proof for free. For this you have to look at the underlying program.

Wonderful

Nice... It just 'felt' wrong, since the rules for the logic implicitly handle unknowns (sometimes called 'holes,' I think) and Skolem constants without explicit annotations for that, something I am not used to.

I tried it in PVS; I don't think it's provable in their *weird* mixture of classical and intuitionistic logic, but I may be wrong. (I forgot most of the commands by now, it's been ten years. I'll try it anyway, for the heck of it.)

Why does it look fishy to

Why does it look fishy to you? For concreteness' sake, let the variables range over the integers, and let P be some predicate on the integers. The statement says: there is an integer x such that if P(x) were true, then P(y) is true for all y. There are two cases to consider:

  1. P(x) is true for all integers x
  2. P(x) is not true for all integers x

In the first case, the statement is trivially true, since the conclusion is true. In the second case we have an x such that P(x) is false. But in that case the implication is true as well, since false implies everything. For example if P were the predicate that determines whether its argument is prime, then we could choose x=4 to prove the statement.
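For what it's worth, this two-case argument can be written down directly. Here is a sketch in Lean 4 (my own formulation, assuming a nonempty domain; Classical.byCases performs the case split):

```lean
open Classical in
theorem drinker {α : Type} [Inhabited α] (P : α → Prop) :
    ∃ x, P x → ∀ y, P y :=
  byCases
    -- Case 1: P holds everywhere, so any witness will do.
    (fun h : ∀ y, P y => ⟨default, fun _ => h⟩)
    -- Case 2: some x falsifies P; false implies everything.
    (fun h : ¬ ∀ y, P y =>
      let ⟨x, hx⟩ := not_forall.mp h
      ⟨x, fun hpx => absurd hpx hx⟩)
```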

The apparent weirdness comes from the implication. Intuitively, when we see a statement P(x) => Q, our brains fool us into assuming that P(x) must be true for some x. I think this is because our brains are wired to seek usefulness: if P(x) is not true for any x, then why would anybody have bothered to write down P(x) => Q??

Quantifying over Thin Air...

The thing is, in the introduction of the unknown x of type T satisfying P, it may not exist: it falls out of thin air. (In the sense that T may be empty.) If it doesn't exist, the proposition is still trivially true, but it bothered me.

(Anyway, it's been a long time since I did anything with PVS, so if someone has the proof, I'd be delighted to see it.)

Technicalities

I tried it myself in PVS. It turned out my hunch was correct; at least, I think so. It isn't a valid proposition unless you assume the underlying type to be non-empty.

(You can see it in both of the other posters' proofs. In both cases, they take an argument 'out of thin air.')

And elaborating a bit: it is, as stated, a difference between an intuitionistic logic and a classical logic. In one, you can rely on the fact that all types are populated; in the other, you cannot.

And to elaborate a bit further (this is getting boring, mind you): it is probably not a lemma which you can prove in PVS. Probably, since my hunch is that you would need to quantify over all PVS types to prove it false, which you probably can't do, since that seems to be impossible in the logic of PVS. In essence, the logic of PVS is so baroque that I have no idea whether the lemma is true, unprovable, or false. Sheesh...

Uhm, to be really boring,

Uhm, to be really boring, but very precise, I guess the correct answer is that according to a classical logic (the one you use for math on paper) the lemma is trivially false.

?

There must be a misunderstanding here. The lemma we are talking about is

forall P. exists x. P x -> (forall y. P y)

It is "trivially true" in classical logic (though I don't think "trivial" is the right word here); see Jules's proof, or the proof that you can construct through the tutorial, which is a sequent calculus formulation of first-order classical logic. I think it is not provable in intuitionistic logic (because it is false under a constructive interpretation of the existential: you can't prove it constructively for P(x) = "if x is prime then it is the sum of two odd numbers" without giving an explicit witness, that is, deciding the Goldbach conjecture). I have no idea what the PVS logic is; you seem to say that it is a logic of intermediate power, so I don't know whether it holds in it or not.

I believe it seems "fishy" because it uses classical logic at its core. Jules believes it is "fishy" because of the way folk logic deals with implication. I find my explanation more poetic and like it better, but I wouldn't know; and we don't know why it seems "fishy" to *you*, because of the confused signals you're sending about it.

Counterexample

You forget the quantifiers. (Mind you, I am not that sure either, but I'll try and give the counterexample.)

The lemma with _all_ quantifiers: for all sets S, and for all P a predicate over S, exists x in S, P(x) implies for all y in S, P(y).

Assume S is empty. Then "for all y in S, P(y)" is trivially true. Then you should be able to provide an x witnessing "exists x in S. true". But there is no x in S. Contradiction.

(In your proof, "the Classic Fish," you relied on being able to take an element out of S when you chose the wrong one. You can't do that if S is empty. Classically, it remained unproven.)

(Another manner to interpret the lemma is to rewrite the implication as a disjunction. You end up with "exists x, (not P(x)) or (forall y, P(y))". So P doesn't hold somewhere, or holds everywhere. In case S is empty, it trivially holds everywhere, and you also end up with "exists x in S, true", which under a classical interpretation, like PVS, encodes the obligation to provide a witness that S is non-empty. I think...)
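The empty-domain reading can be checked mechanically. A small Lean 4 sketch (my own, with the built-in Empty type standing in for an empty S):

```lean
-- Over an empty domain, no witness for the existential can exist,
-- so the statement fails:
example (P : Empty → Prop) : ¬ ∃ x, P x → ∀ y, P y :=
  fun ⟨x, _⟩ => x.elim
```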

Thanks for the

Thanks for the clarification; what you say is true, and I should have paid more attention to the "inhabited" remark you made about PVS.

That said, when considering first-order logic, we usually don't restrict quantifiers to a particular domain; rather, they're understood to range over the universe of discourse (which in general is not empty; indeed, the logic of nothing is not a very interesting topic). What you describe by quantification (P over S) is done by relativization, that is, turning the formula into a different one such as:

∀S. ∀P. ∃x. S(x) ∧ (P(x) ⇒ ∀y. (S(y) ⇒ P(y)))

That is, if you think of a "domain" S, restricted quantifications `∃x∈S. foo` and `∀y∈S. foo` are expressed as `∃x. S(x) ∧ foo` and `∀y. S(y) ⇒ foo` (using the usual trick of turning a set into its membership predicate).

This formula, different from the one we discussed, is indeed false.

Oh cool

Yah. It doesn't really matter for colloquial use since most people will assume a set like S to be non-empty. (Like filled with integers.)

Excuse the editing; I was just posting on the spur of the moment while taking care of the kids ;), so not everything was thought through that much.

Cool, I was right. Interesting. (I 'triggered' on one of the inference rules. It just seems wrong from a classical perspective.)

LOL

I retract the 'proof'. I am not sure of anything regarding this wonderful example. ;-)

Logics are just formal

Logics are just formal systems so objectively there is nothing fishy about any of them. However we choose the axioms of our logics in such a way that we can use the logics to reason about things we care about, i.e. the real world. The way this works is that we translate human understanding into the logic, then we use deduction rules and then we translate the result into human understanding. So if something seems fishy about a logic, that just means that one of the translations went wrong.

In this case the deduction step (in classical logic) is just that forall P. exists x. P x -> (forall y. P y) is equivalent to true. The translation from true to human understanding is unlikely to cause the fishiness, so it must be the translation from forall P. exists x. P x -> (forall y. P y) to human understanding. I suspect that marco's human translation is/was "for each statement P, we can find an example x in the set of things that make P true, such that this example allows us to show that P is in fact true everywhere". This is of course extremely fishy (and false). For example, if P(x) is "x is even", then this reading says "there exists an even number that allows us to prove that all numbers are even".

The problem with this translation is that in the logic, the main reason why the statement is true is that false -> true. If we can find even a single example where P is false, then we're done. So rather than finding the magic even number that allows us to prove that all numbers are even, we merely have to find a single odd number. Anyway, we can indeed only guess at marco's interpretation, and perhaps the problem is that his interpretation technique was for intuitionistic logic.
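Over a finite, non-empty domain this reading can even be executed. A small Python sketch (my own illustration, nothing from the thread): the classical case split becomes a search for a counterexample, and for P = "is even" the witness is just any odd number.

```python
def drinker_witness(domain, pred):
    """Return an x from the domain such that pred(x) implies pred(y)
    for all y. Either some element falsifies pred (pick it: false
    implies anything), or pred holds everywhere (pick anything)."""
    for x in domain:
        if not pred(x):
            return x
    return domain[0]

is_even = lambda n: n % 2 == 0

print(drinker_witness([1, 2, 3, 4], is_even))  # 1 (an odd number)
print(drinker_witness([2, 4, 6], is_even))     # 2 (pred holds everywhere)
```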

No, it was set-theoretics.

In one of the proof steps, an unknown is introduced. Since I worked a lot with PVS, which has a 'classical' interpretation of types, where with 'classical' I mean that the logic 'seems' to assume that types are somewhat akin to sets, I triggered on it, since you cannot, in general, introduce an unknown out of a set (since that set may be empty).

Having said that, it is just a hunch; I am not sure the lemma is provable in PVS either way, though somewhere in the axioms it may be inherently assumed that all types are empty or not empty, in which case it may be provable false. I think.

A lot of assumptions on my side. Sorry, this is all ages ago for me.

(I found it a 'fishy' lemma since the proof looked wrong, and admittedly I wondered whether your interpretation of the lemma would be true in that logic too. As stated, I found the inference rules a bit on the odd side too because of how they mix unknowns with skolem constants. And I wondered whether there was a 'bug' there.)

bug: forall right implementation does not match description

forall right's description, in the tutorial, says:

In the case of the forall right (∀r) rule and the exists left (∃l), the system picks a variable for you. But it doesn't pick any old variable: the variable it picks is required to be distinct from anything pre-existing in the sequent: this is often referred to as the "no free occurrence" rule.

But forall right's implementation currently does not work this way.

For example, search in the tutorial for the phrase "Quantifier rules. The rules for the quantifiers are quite interesting. Which of these four statements is true?" and then click on the "forall" quantifier on the right hand side (in the next line after that phrase) to see this bug.

Currently, the example under

Currently, the example under "Quantifier rules" does not show any problem (because it currently does not have the same variable on the left and right hand side of the implication). So, currently, this illustration is inconclusive. It needs a better test.

I doubt there is a bug...

I was toying with the examples and it neatly generated fresh unknowns everywhere. The thing which bugged me is at most a difference between logics.

Anyway, I tried to prove the odd lemma in PVS with simple equational reasoning; and that failed. I am still interested in someone who could supply a PVS proof script for the offending lemma.