## An intuitionistic logic that proves Markov's principle

"An intuitionistic logic that proves Markov's principle", Hugo Herbelin, LICS 2010.

We design an intuitionistic predicate logic that supports a limited amount of classical reasoning, just enough to prove a variant of Markov's principle suited for predicate logic.

At the computational level, the extraction of an existential witness out of a proof of its double negation is done using a form of statically-bound exception mechanism, which can be seen as a direct-style variant of Friedman's A-translation.
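As a rough illustration (my own sketch, not the paper's formal calculus), the exception-based extraction can be mimicked in Python: a "proof" of the double negation ¬¬∃x. P(x) is modelled as a function expecting a refutation of the existential, and we hand it a refutation that escapes with the witness instead of returning.

```python
# Hypothetical sketch (not the paper's system): extracting a witness
# from a "proof" of a double-negated existential via an exception.
class Found(Exception):
    """Carries the extracted witness out of the proof term."""
    def __init__(self, witness):
        super().__init__()
        self.witness = witness

def extract(double_neg_proof):
    """double_neg_proof takes a refutation k : witness -> absurdity.
    We pass a k that raises instead of returning, so the first witness
    the proof supplies escapes to the handler."""
    def k(x):
        raise Found(x)
    try:
        double_neg_proof(k)
    except Found as f:
        return f.witness
    raise AssertionError("the proof never supplied a witness")

# A toy proof term that eventually calls its refutation on 42:
print(extract(lambda k: k(42)))  # -> 42
```

The exception plays the role of the statically-bound escape: it is installed around the proof term and cannot leak past `extract`.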

Markov's principle is an axiom of "Russian constructivism". It says that if P is a decidable predicate (i.e., the formula ∀x. P(x) ∨ ¬P(x) is constructively provable), and we know that P is not always false (i.e., ¬(∀x. ¬P(x))), then there exists an x such that P holds (i.e., ∃x. P(x)).

One operational way of understanding this axiom is that if we know P is not always false (on, say, the natural numbers), we can find an element for which it is true with a while-loop: check whether P holds of 0, then of 1, then of 2, and so on, stopping when you hit a number for which it holds. This means that Markov's principle is a principle of unbounded search, and as a result it has controversial status in intuitionistic mathematics: should we regard a program with a non-constructive termination proof as a constructively terminating program? (The answer is, as usual, "it depends": in different applications you may want it or not.)
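The while-loop reading can be sketched directly (a toy illustration of mine, assuming P is given as a total boolean function on the naturals):

```python
# Minimal sketch of Markov's principle as unbounded search. Termination
# relies on the outside knowledge that P is not always false; nothing in
# the loop itself bounds the search.
def markov_search(p):
    n = 0
    while not p(n):   # p must be decidable: a total bool-valued test
        n += 1
    return n          # the least witness

# Example: the least natural number whose square exceeds 50.
print(markov_search(lambda n: n * n > 50))  # -> 8
```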

However, just chucking an axiom like ¬(∀x. ¬P(x)) → ∃x. P(x) into your proof rules is not very nice from a proof-theoretic perspective. It mixes up a bunch of different logical connectives, and adding it as an axiom will break things like the cut-elimination theorem for sequent calculus.

In this paper, Herbelin exploits (a) the fact that Markov's principle is a (mildly) classical principle of reasoning, and (b) the fact that classical logic is connected with control operators like continuations, to give a logic which is proof-theoretically well-behaved, but which supports Markov's principle. The trick is to add first-class continuations, but only for value types (ie, you can throw products and sums, but not functions).

What I find surprising is that this same class of restriction also arises in another paper on control operators at LICS -- Noam Zeilberger's paper "Polarity and the Logic of Delimited Continuations" uses it too. (This paper deserves a post here too.)


### Thanks

That was a very good explanation of something I've been mildly curious about (after recent discussions), but never managed to look up.

I'm certain the "supports a limited amount of classical reasoning, just enough to prove a variant of Markov's principle" will not appease the hard-core intuitionists. It seems a lot like gaming the logic.

### "gaming the logic"

As someone who pretty much identifies with the label "hardcore intuitionist", I have to disagree -- it's only through "hardcore intuitionism" that we find reason to seek computational meanings like this for classical reasoning. To me, axiomatic proof systems feel like "gaming the logic" more than any work along these lines.

### Really?

It seems to me that any claim of the form: "I just added enough axioms from this other logic that you guys disagree with until I could prove my theorem" would qualify as 'gaming the logic' - not quite on the order of Nomic, but one is still changing the rules to fit the theorem.

As someone raised on classical logic (and who has only recently begun to appreciate constructive/intuitionist logics), I must ask: Do we have an intuitionist proof of the undecidability of the Halting problem? People who claim to be intuitionists keep bringing the Halting problem up in their arguments, so I'm curious.

### I have been thinking about similar lines

But haven't really had time to work out the argument. Can you write down what the intuitionist argument is?

[I guess it is stated in the original post. Can someone explain what: "It depends" means?]

### The standard proof...

...of the halting theorem is a perfectly good constructive proof. You assume that you have a halting oracle, and then explicitly construct an input to this oracle which leads to absurd results, from which you conclude that no such oracle exists.[*]

This has no connection to the "it depends" in my original post. That is just a reference to the fact that there are many different models of intuitionistic logic, and Markov's principle is inconsistent with some of them. So whether you accept it depends on what you need to do.

[*] If you think this sounds like a proof by contradiction, you should read the following post by Andrej Bauer, which he wrote to explain the distinction between proofs of negative statements and proofs by contradiction to Timothy Gowers(!!!).
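The diagonal construction in that proof can be caricatured in Python (a toy sketch of mine, with "programs" as zero-argument functions and `halts` a claimed total oracle; none of these names come from the thread):

```python
# Hypothetical sketch of the diagonal argument: from any claimed total
# halting oracle, build a program the oracle must misjudge.
def make_diagonal(halts):
    def g():
        if halts(g):          # the oracle says "g halts"...
            while True:       # ...so g loops forever, refuting it
                pass
        return "halted"       # the oracle says "g loops", so g halts at once
    return g

# Whatever the oracle answers on g, it is wrong. For instance, an oracle
# that answers False on everything is refuted because g then halts:
g = make_diagonal(lambda prog: False)
print(g())  # -> halted
```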

### Thanks for the clarification

So, I am a sceptic and am gonna do some nitpicking: Why is a program which loops forever partial?

### Thanks for that insight.

Thanks for that insight, esp. the link to Andrej Bauer's post.

### Yes

I believe that the "usual" proof of the undecidability of the halting problem is formalisable in intuitionistic Heyting Arithmetic. Remember that proof by contradiction is allowed for negative statements in intuitionistic logic; it is only the use of contraposition that is suspect.

As per your other comment, I believe the argument is not so much about hard-core intuitionists versus classical logic proponents (classicists?) but more about what computational content one can extract from proofs in such a logic. It is interesting to note that many proofs in ordinary classical logic (in fact, all the \Pi^0_1 statements) have a strong computational content with, for example, the witness property. This paper proposes a refinement of these kinds of statements. The title is just to attract attention...

### A most artistic package of a jumble of ideas

It seems a lot like gaming the logic.

There's nothing hardcore about my intuitionism —I'm a "semantic intuitionist", which means I think semantics should be done in a rigorously constructive fashion, but I don't have hangups about, say the consistency or use of ZFC+(regular cardinals), and I don't tell mathematicians that what they do is incorrect— but I think this is too much for any but the most paranoid ultra-finitist.

Gödel's Dialectica interpretation shows how all theorems of Peano arithmetic cleanly match theorems of Heyting arithmetic, in such a way that we see that the Pi-0-2 theories of classical and intuitionistic arithmetic are the same. Fernando Ferreira (2008, A most artistic package of a jumble of ideas, Dialectica 62-2) shows how we can view this as a constructive interpretation of Markov's principle.

### But is ¬(∀x. ¬P(x)) → ∃x. P(x) by itself valid?

Wouldn't it need to be ((∀x. P(x) ∨ ¬P(x)) ∧ ¬(∀x. ¬P(x))) → ∃x. P(x)? It looks to my naive understanding that ∀x. P(x) ∨ ¬P(x) introduces a law of the excluded middle limited to the P() predicate, and that lets you apply double negation elimination to it in your de Morgan's law ¬(∀x. ¬P(x)) → ∃x. ¬¬P(x). Or am I missing the point?

### You're exactly right

I had assumed that was P ranging over decidable predicates, but if you want to make that assumption explicit then the way you wrote it is exactly right. You're also correct that Markov's principle allows a limited form of double negation elimination: if you write it as (¬¬∃x. P(x)) → ∃x. P(x) (for decidable P), then it is very clear.

(I wrote it the other way around because I thought it made the idea that it licenses you to use nonconstructive termination proofs a bit clearer. Herbelin gives it in the double-negation-elimination form in his introduction, to make the connections with call/cc clearer. I think it would be nice to study his translation more carefully, to see exactly how these two perspectives connect.)
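For what it's worth, the two premises really are intuitionistically interchangeable; here is a quick check in Lean 4 (the theorem name is mine):

```lean
-- ¬(∀x, ¬P x) and ¬¬(∃x, P x) are equivalent without any classical axiom,
-- so the two statements of Markov's principle differ only in emphasis.
theorem premises_equiv {α : Type} (P : α → Prop) :
    ¬(∀ x, ¬ P x) ↔ ¬¬(∃ x, P x) := by
  constructor
  · intro h hne
    exact h fun x hx => hne ⟨x, hx⟩
  · intro h hall
    exact h fun ⟨x, hx⟩ => hall x hx
```

The classical step in Markov's principle is only the final ¬¬∃ → ∃, which is exactly where the control operator comes in.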

### ∀x. P(x) ∨ ¬P(x) is trivially true

"Wouldn't it need to be ((∀x. P(x) ∨ ¬P(x)) ∧ ¬(∀x. ¬P(x))) → ∃x. P(x)?"

Whoa! Doesn't
1. ∀x. P(x) ∨ ¬P(x) reduce to
2. ∀x. true, which reduces to
3. true?

I can "and" any number of true statements to any statement whatsoever without altering the overall expression's truth value. Isn't that what you've done here? What's gained?

### Not Trivial in Intuitionist logics

Intuitionist logics are effectively classical logic without the law of the excluded middle. You cannot say P(x) just because you can prove ¬¬P(x). That is, you say P(x) because you can directly (constructively) prove P(x).

Thus ∀x. P(x) ∨ ¬P(x) is a non-trivial statement that you can prove P(x) or you can prove ¬P(x) for any given x, which is, essentially, to say that P(x) is decidable. (For example, you couldn't replace P with "does this program halt?", because a "Halts(program)" predicate is not generally decidable.)

I'd assume an extra requirement that 'x' be from a countable set. But I'm not well enough versed in intuitionist logics to know whether that countability is traditionally implied.

The formulation of Markov's principle described here requires finite but unbounded search (we may assert ∃x. P(x) because a search through x = 0, 1, 2, ... is guaranteed to find a witness after finitely many steps). Is this acceptable? Well, as the OP says, "it depends": apparently upon the mathematician and upon the problem.

### Origins and essence of Markov's principle

I'd assume an extra requirement that 'x' be from a countable set. But I'm not well enough versed in intuitionist logics to know whether that countability is traditionally implied.

You can do mathematics with intuitionistic logic and uncountable sets, but it's not constructive anymore. Also, you really need something more than countability: you need to have some kind of structure you can recurse over.

The original statement of Markov's principle of constructive choice is, I understand, from A. A. Markov, 1954, Theory of algorithms, and is formulated with quantification over the natural numbers. Markov's book is a goldmine of thinking about what algorithms are, and what it means for mathematics to be algorithmic.

The usefulness of Markov's principle in constructive analysis is what has led to more liberal versions of Markov's principle.

(For example, you couldn't replace P with "does this program halt?" because a "Halts(program)" predicate is not generally decidable.)

In fact, Markov's principle in essence says that there is a fact of the matter about whether a program halts, something about which purer forms of intuitionism are agnostic.

### Markov's principle in essence

Markov's principle in essence says that there is a fact of the matter about whether a program halts or not, something about which purer forms of intuitionism are agnostic.

Could you explain this? Does it have something to do with step-wise decidability of halting? Could I replace 'P' with 'Halts'?

### Stepwise decidability of halting

Does it have something to do with step-wise decidability of halting?

If you like — actually that's a nice phrase for this kind of thing. For any program e, let E(n) be the predicate that says e halts at the nth step. Then I think it is clear how to get your result from Markov's principle.

Could I replace 'P' with 'Halts'?

No, because Halts(-) is not a decidable predicate. Markov's principle doesn't tell you whether the existential is true or not, only that there is no excluded middle between it being true and it being false.
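The decidable family E(n) can be made concrete (my own toy model, with "programs" as Python generators and one step per yield; nothing here is from the thread):

```python
# Hypothetical sketch: "program e halts within n steps" is decidable,
# because we can simply run e for n steps and observe.
def halts_within(prog, n):
    g = prog()
    for _ in range(n):
        try:
            next(g)           # advance the program by one step
        except StopIteration:
            return True       # halted within the budget
    return False              # still running after n steps

def example():                # a toy program: runs three steps, then halts
    yield
    yield
    yield

print(halts_within(example, 2))  # -> False
print(halts_within(example, 5))  # -> True
```

Each E(n) = `halts_within(e, n)` is decidable even though "e halts for some n" is not; Markov's principle is exactly what would let one pass from ¬¬∃n. E(n) to ∃n. E(n).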

### The connection with Noam Zeilberger's work...

is not so surprising... given that they work together at PPS in Paris :)