Wittgenstein versus Turing on Inconsistency Robustness

Turing differed fundamentally from Wittgenstein on the question of inconsistency when he attended Wittgenstein's seminar on the Foundations of Mathematics [see Historical development of Incompleteness and Inconsistency]:

Wittgenstein: Think of the case of the Liar: It is very queer in a way that this should have puzzled anyone -- much more extraordinary than you might think... Because the thing works like this: if a man says 'I am lying' we say that it follows that he is not lying, from which it follows that he is lying and so on. Well, so what? You can go on like that until you are black in the face. Why not? It doesn't matter. ...it is just a useless language-game, and why should anyone be excited?

Turing: What puzzles one is that one usually uses a contradiction as a criterion for having done something wrong. But in this case one cannot find anything done wrong.

Wittgenstein: Yes -- and more: nothing has been done wrong, ... where will the harm come?

Turing: The real harm will not come in unless there is an application, in which a bridge may fall down or something of that sort.... You cannot be confident about applying your calculus until you know that there are no hidden contradictions in it.

Wittgenstein: There seems to me an enormous mistake there. ... Suppose I convince [someone] of the paradox of the Liar, and he says, 'I lie, therefore I do not lie, therefore I lie and I do not lie, therefore we have a contradiction, therefore 2x2 = 369.' Well, we should not call this 'multiplication,' that is all...

Turing: Although you do not know that the bridge will fall if there are no contradictions, yet it is almost certain that if there are contradictions it will go wrong somewhere.

Wittgenstein: But nothing has ever gone wrong that way yet...

Because contemporary large software systems are pervasively inconsistent, it is not safe to reason about them using classical logic [see Common sense for inconsistency robust information integration].


Sometimes Wittgenstein wasn't so smart. His reply might be OK for a hypothetical person who wasn't interested in formal systems, but since he WAS interested in formal systems he should have seen Turing's point. Although maybe Turing could have put it more persuasively, thus:

* We believe (or at least Wittgenstein believed) that classical logic is applicable to real-world situations.

* Barring solutions to the liar paradox, classical logic seems to entail inconsistencies. Of course there are many ways of rescuing classical logic from inconsistencies, but the whole point of the conversation you quoted is that Wittgenstein didn't see the need for them.

* We don't like inconsistencies in real-world situations.

* Therefore classical logic is not always applicable to real-world situations, no matter how well the parts we've used so far seem to have worked. And we have no theory of when it will be applicable and when it will fail (at least, we didn't in Wittgenstein's time; some might argue that relevance logics give us such a theory now).

* However, there may well be systems of logic which are always applicable to real-world situations.

* Therefore a defender of classical logic at least has a case to answer against alternative logics.

I wonder what Wittgenstein would have said about Curry's paradox (http://en.wikipedia.org/wiki/Curry%27s_paradox), which seems even more obviously applicable to real situations than the liar paradox.

I like your conclusion. I have no idea whether the best solution is to make our software more consistent, or to start reasoning about it using paraconsistent logic. Or both.

Wittgenstein was one of the first to advocate formal inconsistency-robust inference, circa 1930:

Indeed, even at this stage, I predict a time when there will be mathematical investigations of calculi containing contradictions, and people will actually be proud of having emancipated themselves from consistency.

It took over three-quarters of a century for computer scientists to fulfill Wittgenstein's prediction, motivated by pervasive, ineradicable inconsistency in universally used software systems.

Good for him.

Good for him.

There is a proposal for how to corral the Liar's Paradox, Curry's Paradox, etc. in the section "Roundtripping Reification and Abstraction" in
Common sense for inconsistency robust information integration [arXiv:0812.4852].

Cool. There are also many

Cool. There are also many other methods to corral inconsistency, as I'm sure you know.

Self-inferring Incompleteness while corralling the paradoxes

The trick is to be able to self-infer incompleteness while corralling the paradoxes so that they don't produce Inconsistency In Garbage Out Redux (IGOR). Wittgenstein pointed out that self-inferred incompleteness implies inconsistency (see Self-inferable Incompleteness implies Inconsistency). But this is OK for inconsistency robust logic :-)

The point

the whole point of the conversation you quoted is that Wittgenstein didn't see the need for them.

Isn't his point simply that if classical logic should prove to be inconsistent, it's the wrong logic (e.g. if you're doing multiplication and, by implication, building bridges) and we should switch to using a different one, or use it in a different way? I.e., he's not saying we shouldn't rescue classical logic, just that it's no big deal if we do have to.

I wish I could find the Wittgenstein quote elsewhere where he says that if mathematicians find a contradiction it wouldn't be a big deal; they'd just fix mathematics and carry on.

Morris Kline

Not unrelated is Morris Kline's observation (Mathematical Thought from Ancient Through Modern Times), "Progress in mathematics almost demands a complete disregard of logical scruples."

Lovely quote! (And generally

Lovely quote! (And generally a great book.)

Wittgenstein thought very

Wittgenstein thought very different things at different times. I don't read the main quotation above the way you do, but if he did mean that then fine ... except then I don't know what he thought he was arguing against, because Turing wasn't saying anything that disagrees with the position you attribute to Wittgenstein.

Turing versus Wittgenstein

Turing said "You cannot be confident about applying your calculus until you know that there are no hidden contradictions in it,"

whereas Wittgenstein said "Indeed, even at this stage, I predict a time when there will be mathematical investigations of calculi containing contradictions, and people will actually be proud of having emancipated themselves from consistency."

How do you reconcile the above two statements?

re turing v wittgenstein

A simple(?) answer of how to reconcile the two statements:

Turing wants a calculus with no hidden contradictions if he will use it to calculate the life-critical parameters of a bridge. If he believes that "20 inches" is the unique answer to calculating the needed diameter for a cable, he wants confidence that the calculus doesn't also predict 5 inches and 100 inches. That confidence will help him at trial, should the bridge fall. It will help him sleep better.

[edit: think of this in terms of Wittgenstein's "brick game". The language game only actually has utility if when I say "bring one brick" you either bring one brick or aren't playing correctly. We could define a different game where "bring me one brick" could be satisfied by any damn thing you do -- but so what? I'll pick that first game if my goal is to build something using bricks.]

Wittgenstein anticipates that some calculi might be interesting even if, for some provable X, not-X can also be proved. He doesn't seem to be nihilistic (as in, for all X, X and not X), but he regards "not" and "and" as of flexible semantics. Which is a fairly boring insight compared to where the maths guys were. Turing could have just said "Right. So?" to Wittgenstein.

Frankly, Wittgenstein's insights into "language games" are interesting -- but his understanding of the "crisis" in math was poor. He correctly sensed that understanding math as a "language game" meant that, for some definitions of "and", "not", and so forth, a calculus that supports "for some X, X and not X" is not automatically a loss -- but that's about as deep as he got, and it really wasn't anything his contemporary math guys didn't understand.

Wittgenstein doesn't contradict Turing there. He's just kind of confused.

Wittgenstein on Inconsistency Robustness

Wittgenstein said:
• Can we say: 'Contradiction is harmless if it can be sealed off'? But what prevents us from sealing it off?
• Let us imagine having been taught Frege's calculus, contradiction and all. But the contradiction is not presented as a disease. It is, rather, an accepted part of the calculus, and we calculate with it.

Self-inferring incompleteness makes PM inconsistent

Principia Mathematica (PM) was taken to be the foundation of all mathematics. But Wittgenstein pointed out that if incompleteness of PM is proved in PM, then PM is inconsistent!

Classical logicians wanted to have incompleteness proved and to also preserve classical logic. They adopted the strategy of being very vague about which theory was used to prove incompleteness and then badmouthing Wittgenstein claiming that he didn't understand the incompleteness theorem!

Interesting. Thanks. I

Interesting. Thanks. I know that Russell failed to understand the devastating consequences of Gödel's work for PM (at least sometimes, e.g. in his intellectual autobiography, "My Philosophical Development", 1959) ... of course, like Wittgenstein, he thought very different things at different times, but as far as I can find out he never really got to grips with Gödel's theorems anywhere.

BTW, a couple of minor typos at http://knol.google.com/k/incompleteness-theorems#Historical_development_of_Incompleteness_and_Inconsistenc: (1) Something seems to have gone wrong with the grammar in the second sentence. (2) PM was by Whitehead and Russell (in that order BTW), not (as the bibliography claims) by Russell.

Principia Mathematica was Russell's baby

Whitehead played a supporting role in Principia Mathematica. Russell conceived the project and did the initial work. Whitehead joined in to help produce the multi-volume monster. But then Whitehead had a falling out with Russell. So Russell finished off the project with a new publication of PM that he solely authored. In fact, Wittgenstein referred to PM as "Russell's System."

I think it is fair to say that Wittgenstein understood Gödel's work better than Gödel understood Wittgenstein ;-)

PS. I can't quite locate the typo that you found.

Trying to close the rogue

Trying to close the rogue bold tag. OK, that didn't work.

bibliography

No doubt Principia was Russell's baby in some sense. But I'm stumped on your bibliographic claim. I can't find a citation anywhere to "a new publication of PM that he solely authored". Are you sure that there is one?

"software systems are pervasively inconsistent"?

"If a man hacking at a block of wood make therein the image of a cow", said Stephen Dedalus, "is that image a work of art?" Or, to put it another way, does that image assert anything? If not, it cannot be inconsistent. Similarly, software systems (or so it seems to me) don't assert anything.

Large software systems formally inconsistent with specifications

Large software systems are formally inconsistent with their specifications. And their formal specifications are internally inconsistent as well!

These specifications are

These specifications are usually informal, so it doesn't make much sense to describe their relation to the software systems as "formally inconsistent". Also, just because large software systems currently cannot be specified formally and proven correct with respect to their specifications, that doesn't mean it is impossible to construct a large software system according to a formal specification and prove that construction correct.

I think people get confused by words. Why not just call "using inconsistent logic" experimenting/exploring instead? Like I might try to integrate something using a novel integration technique I am not sure yet if it is correct or not. I might be satisfied for a while or maybe even indefinitely to have this integration method without knowing for sure that it works correctly. But that does not mean that it is impossible to prove that technique correct in a logic that is believed to be consistent by most people.

* Only if you insist ...

... that such systems assert consistency with their specifications. I don't think they do, any more than a cake asserts consistency with its recipe. It was made by following the recipe or it wasn't.

Plans are very helpful in debugging, practically speaking. I prefer to make code assert a plan is followed, so it fails faster after departing from the plan. When folks ask me for help in debugging, I ask, "What was the plan? What was it supposed to do?" Usually I get a nervous look. (For comic effect, just once I wish someone would sing, "Plans?! We don't need no stinking plans!")

So I explain: First, decide whether a plan ought to work. If there's no way the plan could work, usually code won't either. Then see whether the plan is being followed. The result is always helpful, positive or negative, since you can either fix deviation from the plan or examine why the plan didn't work. Here debugging amounts to asserting consistency with the plan, or else updating the plan.

In this context I use plan to mean very informal specification. (Systems are usually inconsistent with these too, let alone formal specifications.) It can be as informal as a code comment saying, "I'm trying to save this for use later, after I call foo()."
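The "make code assert a plan is followed" idea above can be sketched minimally in Python (all names here, save and foo, are invented for illustration): the comment-level plan "save this for use later, before calling foo()" becomes an executable check that fails fast as soon as the plan is violated.

```python
# Minimal sketch of a plan made executable (hypothetical names throughout).
saved = None

def save(value):
    """Step 1 of the plan: stash a value for later use."""
    global saved
    saved = value

def foo():
    # Plan: save() must have been called before foo() uses the saved value.
    # The assertion makes a deviation from the plan fail immediately.
    assert saved is not None, "plan violated: foo() called before save()"
    return saved * 2

save(21)       # following the plan...
print(foo())   # ...so foo() succeeds and prints 42
```

Skipping the `save(21)` call would now fail at the assertion with a message naming the plan, rather than somewhere downstream with a confusing `TypeError`.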

I suspect you mean assert in a fairly abstract sense. (I did read a little Wittgenstein at age twenty; okay, I was odd.) So the rest of this post is irrelevant personal anecdote I hope you take as friendly kidding, in case the term is similar to the sense in which an artist says, "I assert this toilet is a work of art."

Many years ago I wrote a paper for my ex, when she was a senior in art history, on the presentation some famous visiting professor gave at a Duchamp conference. I recorded it and found very interesting rhetorical devices inside, such as sentence fragments that didn't actually form propositions, after controversial opening clauses. She got an A+ on the paper, and interesting commentary from her professor, based on how I dissected the amusing rhetoric. (Why did I write the paper? Showing off. I said I could; she said prove it.)

I don't think there's much useful substance to self-referential intentionality in art or in science with information logic focus. Mind games are more distraction than help.

Is it this season again?

I think the biggest obstacle to your deploying inconsistency tolerant logic will be finding inconsistency tolerant users.

People are highly inconsistency robust

People are of necessity highly inconsistency robust. There will be a symposium this summer at Stanford (Inconsistency Robustness 2011) that explores the issues.

Exactly. People may claim

Exactly. People may claim to be intolerant of inconsistency (and maybe with good reason), but despite that they're inconsistent. When their deductions work despite the inconsistencies, their behaviour can presumably be described by some logic which tolerates inconsistencies (i.e. a logic in which you can't derive anything you like from an inconsistency). There are many such logics.
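The behaviour described here, contradictions that do not explode into arbitrary conclusions, can be sketched with a toy forward chainer (a hypothetical illustration, not the machinery of any particular inconsistency-tolerant logic). Facts are literals like "A" or "~A", and a rule fires only when its premises are literally present, so the contradiction {A, ~A} sits inertly in the knowledge base instead of licensing everything (no ex falso quodlibet).

```python
def chain(facts: set[str], rules: list[tuple[tuple[str, ...], str]]) -> set[str]:
    """Forward-chain: add a rule's conclusion once all its premises are present."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

kb = {"A", "~A"}           # an inconsistent knowledge base
rules = [(("A",), "C")]    # A entails C; nothing mentions B at all
result = chain(kb, rules)

assert "C" in result       # ordinary inference still works
assert "B" not in result   # the contradiction does not yield an arbitrary B
assert "~A" in result      # both sides of the contradiction persist
```

In classical logic, {A, ~A} would entail B via disjunctive syllogism; this premise-matching style of inference simply never takes that step, which is the sense in which such deductions "work despite the inconsistencies".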

A standard inconsistency robust math for software engineering!

It's going to be very bad for software engineering if there are many incompatible inconsistency tolerant mathematical foundations.

Common sense for inconsistency-robust information integration

Because contemporary large software systems are pervasively inconsistent, it is not safe to reason about them using classical logic. The goal of Direct Logic is to be a minimal fix to classical mathematical logic that meets the requirements of large-scale Internet applications (including sense making for natural language) by addressing the following issues: inconsistency robustness, the contrapositive inference bug, and direct argumentation.

For example, in classical logic, not-WeekdayAt5PM can be inferred from the premises not-TrafficJam and "WeekdayAt5PM infers TrafficJam". However, Direct Logic does not thereby infer not-WeekdayAt5PM, because this requires additional argumentation. The same issue affects probabilistic (fuzzy) inference. Suppose (as above) the probability of TrafficJam is 0 and the probability of TrafficJam given WeekdayAt5PM is 1. Then the probability of WeekdayAt5PM is 0. Varying the probability of TrafficJam doesn't change the principle involved, because the probability of WeekdayAt5PM will always be less than or equal to the probability of TrafficJam.
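The probability bound claimed here follows from one line of arithmetic: if P(TrafficJam | WeekdayAt5PM) = 1, then P(WeekdayAt5PM) = P(TrafficJam and WeekdayAt5PM) <= P(TrafficJam). A small sanity check (my own illustration, not code from the paper) confirms the bound over random consistent assignments:

```python
import random

# If P(T|W) = 1 then P(W) = P(T and W) <= P(T), so P(T) = 0 forces P(W) = 0.
random.seed(0)
violations = 0
for _ in range(1000):
    p_w = random.random()                            # P(WeekdayAt5PM)
    p_joint = p_w * 1.0                              # P(T and W) = P(T|W) * P(W), with P(T|W) = 1
    p_t = p_joint + random.random() * (1 - p_joint)  # any consistent P(T) >= P(T and W)
    if p_w > p_t + 1e-12:
        violations += 1

print("violations:", violations)  # 0: the bound holds for every consistent assignment
```

So, as the text says, varying P(TrafficJam) never changes the principle: P(WeekdayAt5PM) can never exceed it.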

Also, in the Tarskian framework of classical mathematical logic, expressing argumentation is indirect and awkward. For example a classical theory cannot directly represent its own inference relationship and consequently cannot directly represent its rules of inference.

Gödel and Rosser proved that nontrivial mathematical theories are incomplete. This paper proves a generalization of the Gödel/Rosser incompleteness theorem: theories in Direct Logic are self-inferably incomplete using inconsistency robust reasoning. However, there is a further consequence: since the paradoxical proposition ("This proposition is not inferable.") is self-inferable, theories in Direct Logic are self-inferably inconsistent!

This paper also proves that Logic Programming is not computationally universal in that there are concurrent programs for which there is no equivalent in Direct Logic. Consequently the Logic Programming paradigm is strictly less general than the Procedural Embedding of Knowledge paradigm.

Direct Logic makes the following contributions over previous work:
• Direct Inference (no contrapositive bug for inference)
• Direct Argumentation (inference directly expressed)
• Inconsistency-robust Natural Deduction that doesn't require artifices such as indices (labels) on propositions or restrictions on reiteration
• Boolean Equivalences hold
• Inference by splitting for disjunctive cases
• Incompleteness self-inferred

Inconsistency Robustness

Do we humans have a built-in consciousness of inconsistency? Is it called a sense of humor? Check out a classic: "Who's on First".

Yes indeed!!!

"I don't give a darn": that's our shortstop ;-)