A Natural Axiomatization of Church's Thesis

A Natural Axiomatization of Church's Thesis. Nachum Dershowitz and Yuri Gurevich. July 2007.

The Abstract State Machine Thesis asserts that every classical algorithm is behaviorally equivalent to an abstract state machine. This thesis has been shown to follow from three natural postulates about algorithmic computation. Here, we prove that augmenting those postulates with an additional requirement regarding basic operations implies Church's Thesis, namely, that the only numeric functions that can be calculated by effective means are the recursive ones (which are the same, extensionally, as the Turing-computable numeric functions). In particular, this gives a natural axiomatization of Church's Thesis, as Gödel and others suggested may be possible.

While this paper does not deal directly with programming languages, I still think it might be of interest, since our field (and our discussions) are often concerned with computability (or effective computation, if you prefer).

The idea that Church's Thesis can be proven is not new, and indeed there seems to be a cottage industry devoted to this issue (for a quick glance, search the paper for references to Gandy and work related to his).

Even if the discussion of ASMs is not your cup of tea, it seems like a good idea to keep in mind the distinctions elaborated in the conclusions section between "Thesis M" (the "Physical C-T Thesis"), the "AI Thesis", and the "standard" meaning of the C-T thesis.

Another quote (from the same section) that may prove useful in future LtU discussions: "We should point out that, nowadays, one deals daily with more flexible notions of algorithm, such as interactive and distributed computations. To capture such non-sequential processes and non-classical algorithms, additional postulates are required."

Gandy's Principles for Mechanisms

Since Church's thesis is about the expressive limits of programming languages, I think this stuff fits pretty firmly within the bounds of LtU, though I admit to belonging to the "Theory First" faction here...

If you want to get to grips with the "cottage industry", I'd recommend getting hold of the late Robin Gandy's Church's Thesis and Principles for Mechanisms, which AFAIK is not available in an unencumbered form, but is not too difficult to get hold of, since it appears in the Barwise & Keisler volume "The Kleene Symposium". Robin Gandy was a student of Turing, and his paper outlines four principles that a physical machine might satisfy: such machines then embody a Turing machine. Jack Copeland and Oron Shagrir have a nice discussion of this paper in Physical Computation: How General are Gandy's Principles for Mechanisms?

I think this stuff fits

I think this stuff fits pretty firmly within the bounds of LtU

It's not you that I was worried about...

Foundations

Another widespread foundational thesis: "Anything knowable is computable". You can prove anything entailed in a set of constraints, axioms, facts, etc. True by definition???

metaphysics

"Anything knowable is computable"

So, there's "physical-C.T." and then "metaphysical-C.T.". :-). Oddly, that's relevant to current events (the "New Atheism" of Dawkins, et al.) and, yes, I think PL theory (particularly its ability to formulate questions about the semantics of source forms) is relevant to that debate but probably (hopefully!) it wanders afield of "Lambda the Ultimate".

-t

Nope.

Another widespread foundational thesis: "Anything knowable is computable". You can prove anything entailed in a set of constraints, axioms, facts, etc. True by definition???

Second-order logic is incomplete: under standard semantics, it has no effective proof system that is both sound and complete.

Proof of what?

I have great respect for this kind of math but I don't think that we can settle the question this way. For instance we need a definition of knowledge. Knowledge is usually assumed to be unambiguous, concise and complete, if I remember correctly. This rules out incompleteness. Incompleteness is a problem of self-reference (?), which is faulty knowledge construction. Anyway, my thesis seems to be useful with a few qualifications perhaps.

Is "knowledge" really relevant to this?

I have great respect for this kind of math but I don't think that we can settle the question this way. For instance we need a definition of knowledge.

Do we, really? You framed your question in terms that did not appeal to "knowledge," but rather to proof and entailment ("You can prove anything entailed in a set of constraints, axioms, facts, etc."). The notion of incompleteness is stated precisely in those terms.

Knowledge is usually assumed to be unambiguous, concise and complete, if I remember correctly. This rules out incompleteness.

I don't know what you mean by "knowledge," but you don't seem to mean "justified true belief" or a variant thereof. Hmm, it's not even clear to me that you're talking about a relationship between a knower and a proposition.

In knowledge representation

In knowledge representation the goal is to avoid problems like incompleteness. This is done by being careful with the representation itself, and/or by using special logics that behave better such as Description Logics.

The thesis as I stated it can be made more clear: anything knowable is computable, and vice versa; for description logic this is certainly true. We also have in mind the Curry-Howard isomorphism, where a proof in description logic is a program. Also, we are talking about a "thesis" and not a theorem. Given proper restrictions, the thesis approaches a theorem. Sorry about any misunderstanding. The thesis applied to second-order logic might fail. We have knowable things that don't compute (i.e., run forever).

edit: We have all heard, from Gödel's theorem, about true statements that can't be proven. There seem to be several ways to look at this. While they may be true, they certainly aren't very useful, and so don't count as part of a knowledge base. Another impression I have gotten is that these statements are actually incorrect because they involve self-reference or are "inconsistent". Correct logic is paraconsistent. If anyone can clear this up I would be grateful.
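To make the Curry-Howard reading concrete, here is a minimal sketch in Haskell (the names are mine and purely illustrative): a proposition corresponds to a type, and a proof is a program inhabiting that type.

    -- Conjunction elimination is projection from a pair: a proof of
    -- (A and B) implies A is a program of the corresponding type.
    andElimLeft :: (a, b) -> a
    andElimLeft (x, _) = x

    -- Modus ponens is function application: from proofs of a -> b
    -- and of a, we obtain a proof of b.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x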

Set theory in sheep's clothing

Are you claiming that the theory of second-order logic is knowable?

What do you mean by "knowable"?

Axiomatizations (i.e., "theories") of second order logic certainly do exist, and it is possible to "know" them. However, I suspect that's not what you mean by your question.

SOL

The logic most commonly known by the name "second-order logic" is not axiomatizable (the other logics are really multi-sorted first-order logic). If this isn't the logic you meant with your comment on Hank's mention of the "anything knowable is computable" thesis, then I'm at a loss as to what you meant by it.

On Topic

I see my statement to be solidly on topic both for this thread and for LtU. It is simply a statement about knowledge representation and/or logic programming, and by extension declarative programming. All represent knowledge and prove statements, and thus provide answers to queries from the user. Also at work here is the well-known Curry-Howard isomorphism: the idea that a proof is a program and vice versa. These systems "work" and are "correct" because the "knowledge" of the problem space is completely specified, thus known and knowable. It is the knowledge content (the types, as well as assumptions about the semantics of the source domain) that guarantees the results of the computation.

Compare this to the vagueness of the thesis:
every classical algorithm is behaviorally equivalent to an abstract state machine
The problem recognized by all is that this needs to be cleaned up; thus more axioms, etc. Basically, we need to define "algorithm" more broadly, as not only math but also the types and target-domain semantics that are implicitly assumed. This is nothing but standard computer science.

Last but not least I reject any connection between logic, knowledge representation, and atheism.

Let's try to concentrate on

Let's try to concentrate on the technical and formal aspects of C-T. General discussions of atheism are certainly not on-topic for LtU, of course.

Huh?

I reject any connection between logic, knowledge representation,

So, logic isn't about knowledge and knowledge doesn't need logic?
I'll be curious to understand what you mean by "logic" and by "knowledge".

I'd say logic is correct

I'd say logic is correct reasoning from axioms, and knowledge is having good axioms.

Misquote

You left out "and atheism", which was an answer to a previous post. The quote should be "I reject any connection between logic, knowledge representation, and atheism".

On purpose

You left out "and atheism",

Yes; like Ehud Lamm, I think this is irrelevant (though I am an atheist).
Does this mean that without the reference to atheism your statement is incomplete (distorted, or even meaningless)?
Weirder and weirder!
If you cannot make sense of the "shortened" quotation, I am afraid we cannot discuss anything.

logic and knowledge

It is a challenge to think of anything I could add to the large volume of literature that is almost as old as computer science itself. But basically I think of the relation between logic and knowledge as "common sense". Common sense may be common, but that doesn't mean it is easy to understand. Like many issues involving cognition, it is too close to home: we may use it, but we don't have, or easily form, theories about what we are doing. John McCarthy first made this connection in "the paper that started it all": "Programs with Common Sense". The link between logic and knowledge is pervasive, and shows up in any use of logic once you see it. Predicate calculus makes this link particularly clear and is, of course, a basic logic for knowledge representation. The same logic is also the basis of logic programming. The Curry-Howard isomorphism, while having nothing to do with knowledge, links proofs and programs. The link is so pervasive it is hard to argue with, but as we all know it is still controversial almost 50 years later.

semantics

Basically, I agree with the common-sense theory, but I think there is another explanation based on language theory. What if we equate "semantics" with "knowledge"? When we expand semantics into a real application domain, we have a little bit of knowledge.

Questions for the Theory folks

I'm not versed in this subject much beyond undergrad Theory of Comp, so forgive my ignorance. I followed the paper well enough, but lack the knowledge to put the ideas in their proper context.

To me, their Postulate IV, "Only basic (computable) operations are available initially", seems to be limiting what an algorithm can do to what we already know we can compute, and then saying that Church's Thesis is true for these cases. Are they proving anything beyond "a finite sequence of recursive functions is recursive"? Is a proof of that statement important in itself?
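To make the closure property concrete, here is a minimal sketch in Haskell (my own illustration, not anything from the paper): a primitive-recursion combinator, plus functions assembled from it, all of which remain ordinary programs and hence computable.

    -- Primitive recursion combinator: given a base case and a step
    -- function, primRec base step is total on the naturals.
    primRec :: a -> (Integer -> a -> a) -> Integer -> a
    primRec base _    0 = base
    primRec base step n = step (n - 1) (primRec base step (n - 1))

    -- Addition and multiplication assembled from the combinator:
    -- composing computable pieces yields another computable function.
    add :: Integer -> Integer -> Integer
    add m = primRec m (\_ r -> r + 1)

    mul :: Integer -> Integer -> Integer
    mul m = primRec 0 (\_ r -> add m r)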

Looking at it another way, the paper argues that we can only reach something "new" in computability by coming up with a new intrinsic function that's not built of existing routines. (I'm only addressing breaking the fourth postulate with oracles or similar.) Is this a more thorough formalization than previous work, or just a unique dressing on existing state machine theory? My intuitive grasp of ASM told me this was true, but that's not the same as proving it.

What is acceptable C-T

This may seem like a crazy question, but here is my problem. In the forties and fifties there was a popular theory for dealing with problems like the one in the above paper. It was called Cybernetics. We would simply think of the algorithm as a "system" consisting of connected input/output subsystems. For instance, an adder would be a box with input and output; likewise a multiplier, and so on. These "boxes" would be arranged to correspond to the steps of the given algorithm. Showing that this system can be implemented on a "machine" is simply a matter of showing an isomorphism between the given system and a possible system on the machine. Ashby's book linked above actually explains all the steps in this process in an informal, introductory way. This result is fundamental to all computing, and many people would take it for granted. But apparently not in computer science. What is so bad about Cybernetics?
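To make the boxes-and-wires picture concrete, here is a minimal sketch in Haskell (the box names are hypothetical): a box is an input/output function, and wiring boxes together is just composition.

    -- A "box" with inputs and an output is an ordinary function.
    adder :: (Double, Double) -> Double
    adder (x, y) = x + y

    multiplier :: (Double, Double) -> Double
    multiplier (x, y) = x * y

    -- Arranging the boxes to match the steps of an algorithm,
    -- here (a + b) * c; implementing the system on a machine means
    -- exhibiting the same wiring there.
    system :: (Double, Double, Double) -> Double
    system (a, b, c) = multiplier (adder (a, b), c)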

Cybernetics

I don't think anything is bad about Cybernetics. It is just that the main insights from Cybernetics became well known and are used in many different fields (I am thinking mainly about system-level thinking and feedback phenomena), and people became discouraged with Cybernetics as a separate and independent field. Many areas of research these days (from complex systems theory to systems biology) are related to Cybernetics and carry the torch. To this list of fields one is tempted to add the work on coalgebras, the various calculi for distributed and concurrent systems, etc.

One thing that did happen is that the term Cybernetics itself went out of fashion, so people don't often refer to themselves as Cyberneticians...

Agreed

So can we think of Church's thesis as proven, based on a systems theory argument?

Lately I have been rereading a little book, An Introduction to Automata Theory by M. W. Shields, and I have noticed some parallels with more familiar computer science, such as the lambda calculus (with and without types). Basically, if we reduce a system (an input/output automaton) to single-state partitions, we get a pure functional system. You can take this with a "grain of salt", since I am not a certified mathematician. Also, most of the results in Shields seem to propagate into category theory and coalgebra.
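One way to make that reduction concrete, as a sketch (the type and names are mine, not Shields's): model an input/output automaton as a Mealy machine in Haskell; a machine with only one effective state collapses to a pure function.

    -- An input/output automaton as a Mealy machine: each input yields
    -- an output together with the successor machine (the new state).
    newtype Mealy i o = Mealy { step :: i -> (o, Mealy i o) }

    -- A machine whose state never matters (one state, in effect) is
    -- just a pure function applied pointwise to the input stream.
    pureMealy :: (i -> o) -> Mealy i o
    pureMealy f = m where m = Mealy (\i -> (f i, m))

    -- A genuinely stateful machine, for contrast: a running sum.
    summer :: Integer -> Mealy Integer Integer
    summer acc = Mealy (\i -> let acc' = acc + i in (acc', summer acc'))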

This isn't a "proof" of C-T.

It looks to me like what they've done is reduce one informal thesis (Church-Turing) to four informal postulates. Then they introduce formal counterparts to the postulates, and use those to build a formal proof of a formal counterpart to C-T.

Interesting, but not at all a "proof of C-T" as some of the comments here seem to try to make it into. It arguably provides some good avenues for research, but let's not pretend that it turns water into wine, i.e., informal theses into formal theorems.

Thesis or theorem?

Perhaps C-T will always be a thesis and not a theorem, because we are interested in practical cases: is this algorithm computable, is this class of algorithms computable, and so on. This is the problem that Church and the people of that era were dealing with. Basically: "what is a computer?" Is an electronic analog system a computer for solving differential equations? This seems to be the significance of the "systems theory" I mentioned above. It provides a way to state and answer these questions individually.

Isn't that what "axiomatization" means?

It looks to me like what they've done is reduce one informal thesis (Church-Turing) to four informal postulates.

That's exactly right, and it's exactly what they set out to do. The title claims the paper will provide an "axiomatization of Church's thesis." What they mean is something like this: Church's thesis is not, by itself, obviously true. We claim to come up with a minimal set of 'obviously true' axioms from which we can prove Church's thesis (and hopefully other non-obvious results). So yes, it is a proof of the thesis, a proof with respect to some axioms. The whole point of axiomatics is that the axioms are simple, independent, and self-evident.

So there are a couple of different ways to judge this work. On the one hand, we can ask whether CT actually follows from their axioms, and on the other hand, we can ask whether their axioms are the right ones. Finally, we can also ask whether an axiomatic theory of computation is useful or even possible.