## Gödel and Meta-Circularity

In an old forum topic I proposed a number of ways to solve the OS upgrade problem, but every one of my proposed solutions turned out to be flawed in one way or another. I was pushing around a lump in the carpet.

The OS upgrade problem is how to build machines that allow provably correct self-upgrade. The goal is for the OS to install a new version of itself, which could be exactly the same as the previous version or might have new additions or optimizations. My idea has been to specify the OS and the lowest-level exposed hardware abstraction as a formal standard, and to require the programmer to supply a proof that their new OS implementation conforms to that standard. The standard's axioms should include every property you want all future OSes to satisfy, such as all operations terminating in an acceptable time, security requirements, etc.

The most powerful approach would be to show the soundness of the OS inside the language itself, since then you'd be able to show that an arbitrary upgrade is correct. But this would also require modeling the language implementation as part of the OS, which runs us into Gödel's incompleteness theorems: a consistent system strong enough to do this cannot prove its own consistency.

The practical solution in a logic with universes is to start with 2^64 universe levels and get on with your life. Since each universe models the one below it, you'd be able to model an OS with 2^64-1 levels after the first upgrade, 2^64-2 after the second, etc. Each time you upgrade your machine, you will only be able to prove the correctness of a logic with one fewer level, but in practice you'll never run out. This is kind of inelegant, though, so I wondered if we could do better.
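As a back-of-the-envelope check on "you'll never run out" (my numbers, not from the original post):

```python
# 2^64 levels, burning one per upgrade: even at a thousand upgrades a
# day, the counter outlives any plausible machine by many orders of
# magnitude. Illustrative arithmetic only.
LEVELS = 2**64
upgrades_per_day = 1_000

years_to_exhaust = LEVELS / (upgrades_per_day * 365)
print(f"~{years_to_exhaust:.1e} years to exhaust the counter")  # ~5.1e13 years
```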

I think I now see a way to do it that meets the requirement I imposed above: the OS should be able to reimplement itself without *any* loss (not even a single universe out of 2^64). The idea is this: we axiomatize that a certain machine operation returns a number that is guaranteed to be a lower bound on the number of levels. There is a software operation the OS can use to decrease the number of levels (until it reaches zero). The current version of the OS can prove that if it decreases the level count by one before passing control to the new OS, then the new OS will behave correctly, because the new OS believes there is one fewer level and correctness at that level is provable.

The trick is to give the user a way to increase the number of levels reported by the machine by pressing a button *in hardware* on the outside of the machine somewhere. When the user wants to upgrade the OS, he can only do so if enough levels remain; upon running out, pressing the button resets the universe count to 2^64 or whatever.

Taking the machine out of the loop on the level increase is the key to making this work. Any attempt to automate the process by teaching the OS to press the button itself seems to require additional levels to avoid Gödel.
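A toy model of the whole protocol, with names of my own choosing; note that the entire point of the construction is that the reset button is physical, so simulating it in software, as below, is only an illustration:

```python
class Machine:
    """Sketch of the level-counter machinery described above (hypothetical
    names, not a real system)."""
    MAX_LEVELS = 2**64

    def __init__(self):
        # Hardware register holding a lower bound on the universe levels.
        self._levels = self.MAX_LEVELS

    def read_levels(self):
        # The axiomatized machine operation: returns a guaranteed lower
        # bound on the number of levels available.
        return self._levels

    def decrement_levels(self):
        # Software-visible operation. The current OS calls this before
        # handing control to its successor, so the successor proves its
        # correctness in a logic with one fewer universe.
        if self._levels == 0:
            raise RuntimeError("out of levels; the user must press the button")
        self._levels -= 1

    def press_reset_button(self):
        # The physical button on the outside of the case. Crucially, the
        # OS has no way to invoke this; only the human can.
        self._levels = self.MAX_LEVELS


def upgrade(machine, new_os):
    # The current OS decrements the counter, then transfers control,
    # passing along the level bound the new OS is entitled to assume.
    machine.decrement_levels()
    return new_os(machine.read_levels())
```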

This construction satisfies my goal of building machines that allow provably correct self-upgrade without any loss in consistency strength over time.

## Comments

### Universe Quota

So, my summary for the above is: proofs have a universe quota, and humans are allowed to configure it or increase it interactively during assisted proof.

Any cyclic arguments that are difficult to detect structurally will surface when they hit the quota limits. The human won't know in advance whether increasing the quota will help, but we'd have an opportunity to render, inspect, and profile the proof to see whether it is spinning its wheels, according to sapient judgement.

This seems reasonable to me. We'll almost always have a human in the loop during development-time compilations, and anything else can be cached. A GAI could eventually take over for the human.

Of course, I'm also one of those epistemological finitists who think that even provably terminating computations need a quota. :D
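A minimal sketch of the quota idea, assuming proof terms are just nested tuples; all names here are my own illustration, not any real prover's API:

```python
class QuotaExceeded(Exception):
    pass

def check(term, quota):
    """Walk a 'proof term' (a nested tuple), spending one unit of quota
    per node. A runaway or cyclic-by-construction argument hits the
    limit instead of looping forever. Returns the leftover quota."""
    if quota <= 0:
        raise QuotaExceeded("quota exhausted; a human may raise it and retry")
    quota -= 1  # this node costs one unit
    if isinstance(term, tuple):
        for sub in term:
            quota = check(sub, quota)
    return quota
```

On `QuotaExceeded`, an interactive driver can show the human what was being checked and offer to raise the quota and retry, which is the human-in-the-loop workflow described above.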

### Self-Verifying Theories

Thank you for posting. This topic is germane to my interest in bootstrapping and succession, broadly construed, and the computational embodiments of these concepts, which reveal the necessary internal structure of formal approaches to them.

Regarding the OS upgrade problem: there is a non-trivial literature aimed at determining the exact degree of logical expressivity that incurs Gödelian incompleteness, and at sidestepping, bypassing, or limiting the applicability of the incompleteness effect. For example:

Dan Willard: Self-Justifying Axiom Systems / Self-Verifying Theories [0]. Design a logic in which multiplication is not a total function. The system then cannot build numbers fast enough to encode the proof predicate needed for diagonalization, so it can consistently verify its own consistency.

Sergei Artemov: Provability of Consistency in Peano Arithmetic [1]. Modify the notion of consistency from a single formalizable PA sentence to an infinite schema of PA sentences, and prove that each sentence in the schema is provable in PA. Acknowledges our friend Carl Hewitt.

I am working on building a language that implements one of Willard's logics, to see how ergonomic it is for exactly such computational behaviors as self-verifiable program self-modification, and I would be happy to discuss this topic with you.

[0] https://en.wikipedia.org/wiki/Self-verifying_theories

[1] https://arxiv.org/abs/1902.07404