Bottom

Just a small thing I've often wondered about. In math, people are often content separating algorithms into two classes: those that terminate, and those that don't. For the latter, the bottom symbol (⊥) is often introduced. But isn't there obviously a third class: those algorithms for which I don't know whether they terminate?

So, I've always wondered: don't you introduce a logical inconsistency by cleanly separating algorithms into these two classes? I.e., since you're stating something you can't know, it must be inconsistent, right?

Looking for views/the best answer/etc. Or, an open invitation to shoot with your best wisdom.

Sic et non

Assuming it's a precisely defined deterministic algorithm (including its input), either it terminates or it doesn't. That's not about what we know; it's in the *knowing* that you get the Halting Problem and such.

In addition to the class of those that halt and those that don't, which is an exhaustive partition of all cases, there is the class of those we can prove halt, and the class of those we can prove don't halt. With any reasonable proof system, if an algorithm halts in a reasonable amount of time, we can prove that it does so just by simulating it; there may be cases we can prove would eventually halt, even though simulating them might take a trillion trillion years or whatever; and presumably there are cases we can prove won't halt.

Assuming our proof system doesn't have any bugs in it, though, there will also be cases that don't halt but that we can't prove don't halt; cases whose proof would require more resources than we have; and cases whose proof we haven't figured out.
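
To make the "prove it halts by simulating it" point concrete, here is a minimal Haskell sketch (the toy program type and names are mine, not from the thread): bounded simulation semi-decides halting, in that a halt within the step budget is itself a proof of halting, while running out of budget tells us nothing.

```haskell
-- Toy illustration: bounded simulation as a semi-decision procedure for halting.

data Prog = Halt | Dec Prog | Loop          -- halt now, take one step then continue, or spin

-- One small-step transition; Nothing means the program has halted.
step :: Prog -> Maybe Prog
step Halt    = Nothing
step (Dec p) = Just p
step Loop    = Just Loop

-- Simulate for at most 'budget' steps. Just k means: halted after k steps.
runFor :: Int -> Prog -> Maybe Int
runFor budget = go 0
  where
    go k p
      | k > budget = Nothing                -- budget exhausted: no verdict either way
      | otherwise  = case step p of
          Nothing -> Just k                 -- halted: the trace itself is the proof
          Just p' -> go (k + 1) p'

main :: IO ()
main = do
  print (runFor 100 (Dec (Dec Halt)))       -- Just 2: provably halts
  print (runFor 100 Loop)                   -- Nothing: we learn nothing about Loop
```

Note that the `Nothing` for `Loop` is not a proof of divergence; simulation alone can never deliver that negative verdict, which is where the asymmetry described above comes from.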

If you have a proof system that partitions all cases into those known to halt and those known not to, then indeed there is an inconsistency; that's one of Gödel's theorems.

Confused

You are confusing denotations and proofs. Bottom is simply the denotational value of non-terminating computations. That we (have to) allow for such a value in a semantics says nothing about our ability to *prove* that a given program *has* this value (for some input) -- no more than the existence of the value 2 implies that we can prove that a given program returns 2.
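
A tiny Haskell sketch of that distinction (names are mine): bottom is just another value of the semantic domain, and a lazy function can even be applied to a bottom-valued argument and still return 2.

```haskell
-- 'diverge' denotes bottom: a perfectly good semantic value, whether or not
-- we can prove that any particular expression equals it.

diverge :: a
diverge = diverge                    -- its denotation is bottom

lazyConst :: Int -> a -> Int
lazyConst n _ = n                    -- never inspects its second argument

main :: IO ()
main = print (lazyConst 2 diverge)   -- prints 2; the bottom argument is never forced
```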

That's not the point

The question is whether the logic is then inconsistent.

What is?

A given computation either terminates or it doesn't. In the latter case we denote *this specific "result"* by bottom. We can further classify algorithms by whether there exists any input for which they may result in bottom, or not. This implies nothing about whether you can *prove* (in finite time) what the result of a computation is going to be, in which class an algorithm falls, or what the viability is of any logic you may construct to try doing so. There is a simple category mistake underlying your question.

Or in other words: A mathematical definition of some set does not require that membership in this set be decidable.
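
The halting set is the standard example here (the notation is mine, not from the thread): a perfectly well-defined set whose membership question is famously undecidable.

```latex
% The halting set: well defined as a set, undecidable as a membership question.
\[
  H \;=\; \{\, (M, x) \mid \text{program } M \text{ halts on input } x \,\}
\]
```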

Definition

I imagine the definition isn't computable, i.e. you cannot write down a decision procedure for it; isn't that just another view on whether the logic would be inconsistent?

Undecidability and inconsistency

A proposition is undecidable (in a given formal logic) if the logic can prove it neither true nor false. The formal logic is inconsistent if there is some proposition it can prove both true and false. Per Gödel, for any sufficiently powerful formal logic, either there are propositions that can be proven neither true nor false, or there are propositions that can be proven both true and false. (There might, of course, be both.)

Yah, that's the point

Yah, that's what triggered me. If you write down a specification for programs, in a type, which roughly states "all programs terminate, or they don't (provably so)", didn't you state something blatantly untrue? I.e., is there an inconsistency proof around the corner?

There's a catch

There's a catch. Type systems are taken to be restrictive, i.e., programs are rejected if the type system can't handle them. So even if a type system classifies all typable programs according to their exact termination behavior, that doesn't account for the untypable programs.

If the type system determines the exact termination behavior of all typable programs, and all programs are typable, then the type system must be wrong. However, if the type system determines the exact termination behavior of all typable programs, it can still be correct, but then either the programming language is decidable, and thus not Turing-powerful, or typing itself is undecidable, and thus might never halt.
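
As a sketch of the first horn, in Haskell with hypothetical names: if the only recursion the typed fragment admits is structural (here, a fold over naturals), then every well-typed program terminates, which is exactly why such a fragment cannot be Turing-powerful.

```haskell
-- A total fragment: the only recursion available is the fold below, which
-- recurses exactly once per 'S', so every program built from it terminates.

data Nat = Z | S Nat

foldNat :: r -> (r -> r) -> Nat -> r
foldNat z _ Z     = z
foldNat z s (S n) = s (foldNat z s n)

add :: Nat -> Nat -> Nat
add m = foldNat m S             -- terminates for every input, by construction

-- Something like 'while True: skip' has no encoding in terms of foldNat alone;
-- nontermination simply cannot be written down in this fragment.
```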

Computers are not Turing machines ...

... because they cannot emulate truly unbounded tapes. A programming language involving data/codata can be decidable without being useless.
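
For the codata side, a minimal Haskell sketch (types and names are mine): a corecursive stream is an "infinite" value, yet every finite observation of it terminates, which is the sense in which a total language with data/codata need not be useless.

```haskell
-- Codata as lazy streams: 'ones' is productive rather than terminating,
-- and any finite observation of it (takeS) does terminate.

data Stream a = Cons a (Stream a)

ones :: Stream Int
ones = Cons 1 ones              -- productive: each step yields the next element

takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs

main :: IO ()
main = print (takeS 5 ones)     -- [1,1,1,1,1]
```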

Not necessarily inconsistent

If all programs in a type provably terminate or provably don't, then there is simply some limit on which functions you can express (in that type).

It's feasible to type all programs in a language, but only if the language is syntactically restrictive. I.e. we can enforce some type system constraints at the parser, if we're clever about our encodings and selective about our type system.

Intuitionistic logic

I think the issue of whether you can prove some proposition (e.g. that an expression diverges) would be left to an external logic. Intuitionistic logics seem relevant if you wish to carefully avoid the law of excluded middle.
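
As a small illustration in Lean (the theorem name is mine) of the kind of distinction such an external, intuitionistic logic tracks: excluded middle itself is not derivable, but its double negation is.

```lean
-- Intuitionistically provable: the double negation of excluded middle.
-- Excluded middle itself (p ∨ ¬p) would require the classical axiom.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```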