Inconsistency Robustness in Foundations

Inconsistency Robustness requires repairing mathematical foundations in the face of contradictions that have arisen over and over again. In the Inconsistency Robustness paradigm, deriving contradictions is a progressive development and not a “game stopper.” Contradictions can be helpful instead of being something to be “swept under the rug” by denying their existence, which has been repeatedly attempted by the Establishment (beginning with some Pythagoreans). Such denial has delayed mathematical development. This article reports how Inconsistency Robustness has recently influenced the foundations of mathematics for Computer Science since the first International Symposium in 2011, continuing a tradition of developing the sociological basis for foundations.

That mathematics is thought to be consistent by an overwhelming consensus of working professional mathematicians justifies the use of Proof by Contradiction. In addition, Proof by Contradiction can be used to infer the consistency of mathematics by the following simple proof:

The self-proof is a proof by contradiction.
In order to obtain a contradiction,
assume that mathematics is inconsistent.
Then there is some proposition Φ such that ⊢Φ and ⊢¬Φ.
Consequently, both Φ and ¬Φ are theorems that
can be used in the proof to produce an immediate contradiction.
Therefore mathematics is consistent.

Please note the following points: the above argument mathematically proves the theorem that mathematics is consistent; consistency is not a premise of the proof. However, the proof does not intuitively increase our confidence in the consistency of mathematics, because it does not rule out the possibility that other proofs in mathematics might result in contradictions.

The above theorem means that the assumption of consistency is deeply embedded in the architecture of classical mathematics. The above proof of consistency can be formalized in Direct Logic [Hewitt 2010] (a powerful inference system in which computer systems can reason about their own inferences). Having a powerful system like Direct Logic is important in computer science because computers must be able to carry out all inferences (including inferences about their own inference processes) without requiring recourse to human intervention.

Self-proving consistency raises the possibility that mathematics could be inconsistent, because the result contradicts the theorem of Gödel et al. that if mathematics is consistent, then it cannot infer its own consistency. The resolution is to provide a proper grammar of mathematical sentences so that a fixed-point operator cannot be used to construct the kind of “self-referential” proposition used by Gödel in his results, which exploit an untyped grammar of sentences. Self-referential propositions of the kind used by Gödel do not seem to have any important practical applications. (There is a very weak theory called Provability Logic that has been used for self-referential propositions coded as integers, but it is not strong enough for the purposes of computer science.) Direct Logic makes use of a typed grammar that prevents the construction of self-referential sentences using unwanted fixed points. The bottom line is that self-referential propositions are “monsters” (in the sense of Lakatos) that are ruled out by a proper grammar of sentences for the mathematical foundations of computer science. It is important to note that not having self-referential propositions does not place restrictions on recursion in computation, e.g., the Actor Model, untyped lambda calculus, etc.

A mathematical theory is an extension of mathematics whose proofs are computationally enumerable. For example, group theory is obtained by adding the axioms of groups to Direct Logic along with the requirement that proofs are computationally enumerable. If a mathematical theory T is consistent, then it is inferentially undecidable, i.e., there is a proposition Ψ such that ⊬TΨ and ⊬T¬Ψ (which is sometimes called “incompleteness” in classical terminology), because provability in T is computationally undecidable [Church 1936, Turing 1936].

Information Invariance is a fundamental technical goal of logic consisting of the following:
1. Soundness of inference: information is not increased by inference
2. Completeness of inference: all information that necessarily holds can be inferred
Direct Logic aims to achieve information invariance for the mathematical foundations of Computer Science. But that a mathematical theory is inferentially undecidable (“incomplete” in classical terminology) with respect to Ψ (above) does not mean incompleteness with respect to the information that can be inferred because (by construction) it is provable in mathematics that ⊬TΨ and ⊬T¬Ψ.

This article has become highly controversial and the Establishment has censored (without substantive justification) its publication in electronic archives that are intended to record priority dates for scientific results. Consequently, this article is itself another episode in the ongoing saga of inconsistency robustness in foundations.

The article for the above abstract can be found here: Inconsistency Robustness in Foundations


What part do you hope folks will discuss?

What's the programming language angle? (Assuming you think there's a PL aspect to this.) I don't see one in the abstract. Lower on the first page, the following mentions typed grammar:

Direct Logic makes use of a typed grammar that prevents the construction of self-referential sentences using unwanted fixed points.

Can you identify any consequences your paper has for PL design or implementation? Or perhaps for the way someone would write system software or user applications? I'm guessing the relevance is to the underpinnings of computer science, but that has no specific effect on what anyone does practically unless you can cite a practical detail. I hope to get you to say something other than a quote from the first page. LtU is a great spot for you to relate what you're doing to PL research.

Logical foundations are for axioms and Logic Programs

Logical foundations are for axioms and Logic Programs. Logical foundations are also important for security, since inconsistencies in foundations can cause security problems. At the same time, foundations must be strong enough that computers can carry out all logical reasoning, including reasoning about their own reasoning processes :-)

Axioms and Logic Programs will become much more important and practical in future computer applications.

Restrict negation to proofs from a lower universe

As someone learning this stuff (still...) I would like to ask a question; apologies in advance if my terminology is a bit wrong (please feel free to correct me). The contradiction in the above logic appears to come from allowing self-referential statements. It is the equivalent of the halting problem and is effectively an infinite loop in deduction. The simple solution would appear to be to stratify the logic, as is done in type theory, into a stack of infinite universes, where reference to a clause in universe N automatically puts you in universe N+1. This would, however, appear to cause problems for implementing realistic programs.

The other thought is that without negation such proofs would not be possible. If proofs can only be built constructively (again, as in some branches of type theory) then such things would not be possible. Again, this would appear to have limitations for implementing realistic programs.

The approach taken by Kernel-Prolog looks interesting (with a few tweaks): it replaces 'cut' and other builtins with meta-interpreters, which are effectively nested universes. If an interpreter does not implement negation as failure, then such contradictory proofs are not possible inside that 'universe' and we can allow recursion. We can then close that meta-interpreter and implement negation in an outer interpreter (the next universe up).

I can already see some problems, as I have been playing with implementing Model Elimination, which mostly requires extensive use of negated predicates; but "not_p" is not the same as "\+ p", which has me slightly confused at the moment.

Is there a reasonably short proof that is self contradictory that could be used as a test for this sort of thing?

Wittgenstein: self-referential propositions infer inconsistency

According to Wittgenstein:

Let us suppose I prove the unprovability (in Russell’s system) of
[Gödel's self-referential proposition] P; [⊢⊬P where P⇔⊬P]
then by this proof I have proved P [⊢P].
Now if this proof were one in Russell’s system [⊢⊢P]—I should in
this case have proved at once that it belonged [⊢P] and did not
belong [⊢¬P because ¬P⇔⊢P] to Russell’s system.
But there is a contradiction here!

The set of all sets that are not members of themselves?

The Russell paradox can be expressed in Prolog as the Barber paradox:

male(barber).
% Querying shaves(barber, barber) recurses through its own negation
% and does not terminate: the operational face of the paradox.
shaves(barber, X) :- male(X), \+ shaves(X, X).

It would appear Datalog already addressed this problem (in more or less the way I thought above):

1. If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q.

2. If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be strictly greater than the stratification number of Q.
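Under these conditions the barber clause above is rejected: shaves would need a stratification number strictly greater than its own. For contrast, here is a minimal program that does satisfy both conditions (a toy graph-reachability example of my own, not from the paper):

```prolog
% Lower stratum: edge/2, node/1 and reachable/1, which depend only
% positively on each other.
edge(a, b).
edge(b, c).
node(a).  node(b).  node(c).  node(d).

reachable(a).
reachable(Y) :- edge(X, Y), reachable(X).

% Higher stratum: unreachable/1 depends negatively on reachable/1,
% so its stratification number must be strictly greater.
unreachable(X) :- node(X), \+ reachable(X).
```

Here \+ reachable(X) is safe because reachable/1 is fully determined before unreachable/1 is evaluated; no predicate is derived through its own negation.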

So the implementation I am looking at based on kernel prolog that does not have failure-as-negation in the language, but requires you to use a meta-interpreter:

first_solution(X, G, Answer) :-
    answer_source(X, G, Solver),
    get(Solver, R),
    stop(Solver),
    eq(Answer, R).

not(G) :- first_solution(_, G, no).

The meta-interpreter runs on the same program clauses, but is initialised by answer_source(X, G, Solver), where X is the answer pattern, G is the goal, and Solver is the fluent; then get(Solver, R) gets each solution to G until stop is called. The meta-interpreter returns the(X) or no with appropriate substitutions, so not(G) is true when there are no solutions to the goal G.
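For readers without Kernel Prolog's fluents, the behaviour of first_solution/3 and not/1 can be approximated in standard Prolog with findall/3 (the names first_solution_approx and not_approx are mine, purely illustrative; note that findall/3 is eager, whereas a fluent enumerates solutions lazily):

```prolog
% Approximation of first_solution/3: collect the solutions of G and
% keep only the first, returning the(X1) or no.
first_solution_approx(X, G, R) :-
    findall(X, call(G), Xs),
    (   Xs = [X1|_]
    ->  R = the(X1)
    ;   R = no
    ).

% Negation as "the meta-interpreter found no solutions".
not_approx(G) :- first_solution_approx(_, G, no).
```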

So by not including the negation-as-failure builtin, the only way to get a 'not' relation is to use a meta-interpreter. This results in a simply implemented stratification with no special rules: it is just an emergent property of the language semantics.

I like this, and it seems to deal with the above problem.

The other issues can be addressed by using techniques from PTTP (the Prolog Technology Theorem Prover): sound unification (the occurs check; I am using rational-tree unification with a post-cycle check on any repeated variables) and complete inference (achieved for Model Elimination by checking the current goal against the negation of each goal in a history stack). Finally, although not mentioned above, complete search by using depth-first search with iterative deepening.
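The occurs-check half of "sound unification" can be seen directly with the ISO built-in unify_with_occurs_check/2, which fails exactly where plain =/2 would build a cyclic binding:

```prolog
% Sound unification rejects X = f(X), since X occurs inside f(X);
% ordinary first-order cases unify as usual.
occurs_check_demo :-
    \+ unify_with_occurs_check(X, f(X)),  % cyclic case: must fail
    unify_with_occurs_check(g(A), g(b)),  % ordinary case: unifies
    A == b.
```

(With plain =/2, SWI-Prolog would instead succeed on X = f(X) by creating a rational tree, which is why a post-unification cycle check is needed in that setting.)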

Syntactic stratification is unworkable for Computer Science

Syntactic stratification is unworkable for Computer Science; e.g., it makes it impossible to implement interpreters (such as Eval in Lisp).

Implementing Interpreters

Do you mean it is impossible to implement interpreters in Prolog? Prolog itself is obviously an interpreter, so you can just make the existing interpreter available through reflection. This is quite neat, as all the 'mechanics' of a Prolog implementation become available as builtins: 'copy term with fresh variables', 'unification', 'unfolding'. I guess that whilst this is practical for applications, it may cause problems if you are trying to use logic as the fundamental basis of computation.

An interpreter can be implemented as the transitive closure over the unfolding operation:

element_of(I, X) :- get(I, the(A)), select_from(I, A, X).
select_from(_, A, A).
select_from(I, _, X) :- element_of(I, X).

unfold(Clause, Clause).
unfold(Clause, Answer) :-
    unfolder_source(Clause, Unfolder),
    element_of(Unfolder, NewClause),
    unfold(NewClause, Answer).

So the problem must be in defining 'unfolder_source', but that is the operation:

(A0 :- A1, A2,..., An) (+) (B0 :- B1, B2,..., Bm)  =  θ(A0 :- B1,..., Bm, A2,..., An),  where θ = mgu(A1, B0)

I am not sure where the difficulty in implementing that lies: we can implement unification, and the remainder is simply a rearrangement.
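A minimal sketch of that step in standard Prolog, representing a clause as Head :- BodyList (my own representation, not Kernel Prolog's):

```prolog
% unfold_step(+Clause, +ProgramClause, -NewClause)
% Resolves the first body goal A1 of Clause against the renamed-apart
% head of ProgramClause; the mgu is computed by ordinary unification
% when the copied head is matched against A1.
unfold_step((A0 :- [A1|As]), ProgramClause, (A0 :- Body)) :-
    copy_term(ProgramClause, (A1 :- Bs1)),  % rename apart, then mgu(A1, B0)
    append(Bs1, As, Body).                  % B1,...,Bm, A2,...,An
```

For example, unfolding (p(X) :- [q(X), r(X)]) against the unit clause (q(1) :- []) yields (p(1) :- [r(1)]).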

Implementing Eval messages requires type Any

Implementing Eval messages requires type Any as in the interface below for type Expression so that an expression receives an Eval message with an Environment and can return any value:

Expression ≡ Interface Eval[Environment]↦Any

Prolog is untyped

In Prolog the type returned from the meta-interpreter is always a term, and can be normalised to a clause. This is clearly visible in the implementation of the answer_source fluent; in fact all the source fluents only return terms (and the sinks write them). What am I missing?

A programming language without types is hopeless in practice

A programming language without types is hopeless for practical work.

Combine the interpreter with a type-case

You can add types to Prolog quite easily, and there is a solution to typing the interpreter: combine it with a 'case'-style type statement. So imagine a typed Prolog where each functor is classified by its argument types: instead of f/2 it would be f/int/int. You can then pass the interpreter a functor name, and the interpreter will match the functor on return (or fail if there is no match).

a(X::int) :- ...
a(X::string) :- ...
:- first_solution(X, G, Answer), a(Answer).

So although we cannot know the type returned by the interpreter, we can know the type after matching the returned term against known clauses. By passing the output term as an argument to the interpreter we avoid having any untyped terms in the language.

Edit: have fixed the syntax for the interpreter, 'X' is a variable in 'G' the goal that is returned in Answer.

Eval messages with types

Below is a simple PlusExpression:

PlusExpression[left:Expression,right:Expression]:Number ≡
   implements Expression using
      Eval[anEnvironment]→ left.Eval[anEnvironment] + right.Eval[anEnvironment]

where

Expression ≡ Interface Eval[Environment]↦Any

Seems Possible


plusExpression(X1, G1, X2, G2, Z) :-
    first_answer(X1, G1, A1),
    first_answer(X2, G2, A2),
    add(A1, A2, Z).

The value returned from the interpreter would be constrained by type inference to be a type accepted by 'add'. 'X' is the variable in goal 'G' to return from the meta-interpreter. The meta-interpreter runs on the same source clause database as the 'main' interpreter (unlike 'eval'). 'add' is assumed to be a builtin, but all the arithmetic could be built up from Peano arithmetic (s(s(z)) etc.). Type inference seems tricky, but possible.

It occurs to me that as the meta-interpreter is working on the same source clauses as the 'main' interpreter, all the type information from the type-inference pass will be available, so it should be possible to statically type the return type of the meta-interpreter and check it against the required type for 'add'. This can be a polymorphic HM type system and could use the nice compositional algorithm with principal typings I have discussed elsewhere. I have a Prolog(ish) interpreter I have been working on that has meta-interpreters, and I plan on adding the type inference.

Unclear

What is "first-answer" and how does it implement an "Eval" message?

Clarification

Sorry, I used first_answer where earlier I used first_solution; to clarify, I intended these to be the same. This paper explains the principle of the meta-interpreters: Fluents: a Uniform Extension of Kernel Prolog for Reflection and Interoperation with External Objects.