The Church-Turing Thesis: Breaking the Myth

This paper seeks to explode the myth that Turing Machines (TM) are the universal model for all computation.

Church-Turing Thesis: Whenever there is an effective method (algorithm) for obtaining the values of a mathematical function, the function can be computed by a TM.

[...] The Church-Turing thesis has since been reinterpreted to imply that Turing Machines model all computations, rather than just functions. This claim, which we call the Strong Church-Turing Thesis, is part of the mainstream theory of computation. In particular, it can be found in today's popular undergraduate theory textbooks:

Strong Church-Turing Thesis: A TM can do (compute) anything that a computer can do.

It is a myth that the original Church-Turing thesis is equivalent to this interpretation of it; Turing himself would have denied it. [...] In fact, the Strong Church-Turing Thesis is incorrect - the function-based behaviour of algorithms does not capture all forms of computation. For example, as explained in [Weg97], interactive protocols are not algorithmic.

[...] The reasons for the widespread acceptance of what we call the "Turing Thesis Myth" can be traced to the establishment of computer science as a separate discipline in the 1960's. To understand these reasons better, we can identify three distinct claims that make up the myth, and examine them individually:

  • Claim 1. All computable problems are function-based.
    • Reason: Adoption of mathematical principles for the fundamental notions of computation, identifying computability with the computation of functions, as well as with TMs.
  • Claim 2. All computable problems can be described by an algorithm.
    • Reason: Adoption of algorithms as the central and complete concept of computer science.
  • Claim 3. Algorithms are what computers do.
    • Reason: Broadening the concept of an algorithm to make it more practical.
Each claim is then explained in detail, and refuted, along with a few other corroborating claims. The paper concludes with an extension to TMs to model interactive computation.


Related paper previously posted

A related paper by one of the same authors (Dina Goldin) was previously posted to and discussed on LtU:

http://lambda-the-ultimate.org/node/view/203

Disclaimer: I am no expert

As far as I can understand, the paper disagrees with the opinion that what a modern computer does today can be simulated by a Turing machine, and it tries to introduce something called a Persistent Turing Machine that offers Sequential Interactive Computations, which make it more powerful than a plain old Turing Machine.

The PTM stuff is actually introduced in another paper, which I couldn't track down at the moment.

However, the conclusion is worth talking about. The statement is that TMs do not model all of computation [which PTMs can, I assume]. However, it is not known what such a computation is. If we follow their example, the TM-based robot car cannot navigate through downtown because it needs to know the future behavior of the other objects in the traffic before it starts the engine. However, the claim is that the PTM-based robot car with a video feed can navigate.

But, I just can't fathom how this doesn't convert this PTM-based car into a series of TM's that compute from video frame into video frame ending with a video frame that shows the destination. There can be many computations that terminate with the destination frame (and some of them will include intermediate frames that show accidents). To make a distinction between them is like making a distinction between bubblesort and quicksort. Sorry about the pedestrians, but we just are not interacting with them unless we have modeled their behavior (e.g. They won't pull a gun and shoot the camera), and at the moment we have enough restrictions for a proof, we'll have a TM to drive home safely.

Wow!

This is a great paper!!! It confirms some things I've been musing about myself. Namely, if TMs can't do things like analyze programs for termination, why is it that humans can? The obvious answer is: "Because humans are not TMs." But then I tried to reconcile that conclusion with the bedrock claim of CS that every computing device in the universe is a TM of some sort. And I still believe that. But what I believe is that the TM that a brain is when it starts to analyze a program is not necessarily the same TM that it is when it decides that it doesn't terminate. For some reason, I overlooked the very obvious fact that TMs are not interactive, and I concluded something entirely different: That brains are a type of TM that can modify their own state tables! That is, the reason brains can do such magnificent things is because they turn themselves into different TMs on the fly! I still think that's the fundamental power behind brains, but the interaction also opens up another interesting avenue of consideration; namely, time.

What is it that separates mathematics from computer science? They are both about problems and solving them. I don't think the difference is about the machines which solve them, because brains and computers are not so different (as I suspect we will eventually come to realize via an exemplar). I've decided that the difference between mathematics and computer science is time. In particular, CS has it, and math pretends it doesn't exist. Math is about purity. Timelessness. Changelessness. We talk about "variables", but we don't mean things that change with *time* so much as *space*. Calculus is the closest that math comes to time, and even that is pure. But the fundamental notion of CS is not the function, it is the state. Math has functions. What it does not have is state, because math lives in a world without time. CS is exactly the application of mathematics to time. This is what makes it fundamentally different, and a field in its own right. State is what records change, and the analysis of how state changes under various mappings is what we call CS.

State is exactly what makes computers powerful. We could build machines that just computed pure functions and they would be blazingly fast. But they would not be very useful or interesting. Computers are interesting because they contain state and allow us to manipulate it in all kinds of ways. They allow us to capture time itself, and the information that flows through it. What is an algorithm, other than a device that allows us to convert a computable function from space to time, or vice versa?

It is for this reason that I feel the obsession with ultra-pure functional languages is a misguided path. The value that the functional paradigm brings is a mathematical rigor that is sorely lacking in CS in general. But the pitfall is that this path leads one to deny the very thing that makes computation more powerful than mathematics.

Brains are not pure functions, no matter what the functional behaviorists wildly insist. Brains are the ultimate stateful devices, and they are powerful because of the states they store and manipulate. TMs are everywhere, so I think the correct way to think about them is that they are the "atoms" of computation. And they form all the "elements" of computation. But you can only do so many things with raw atoms. To do really useful stuff, you need to be able to combine them into molecules and do chemistry on the resulting goo. That's where things like the PTMs and choice machines come in. The problem is that once you allow the machine itself to be modified while it is running, problems become much, much more difficult.

Obsessed to control effects

It is for this reason that I feel the obsession with ultra-pure functional languages is a misguided path.

Perhaps I'm misreading your comment, but if you take a look at the papers accepted to ICFP'05, for example, you'll notice that there are several papers that (basically) deal with the integration of imperative (or effectful) and functional programming. The FP research community is exceptionally well aware of the issues.

Actually there is nothing that...

Actually there is nothing that says that a TM can't analyze another TM for termination. Church-Turing only says that there are cases that are undecidable. As far as I know, it might be possible that the undecidable cases are so rare as to not show up in practice (for some sufficiently advanced analysis process).

for all ≠ for each

There is a difference between "a TM can analyze another TM for termination" and "a TM can analyze all other TMs for termination".

I'm not sure we understand each other

I'm not sure we understand each other, so let me rephrase:

For some TM1 there exists some TM2 that can take TM1 as input and output True if TM1 will terminate for all possible inputs, and False if TM1 might suspend.

That's trivial

...even if we upgrade "some" to "each".

As you can pick TM2 based on TM1, TM2 can ignore its input and just output a corresponding boolean. Do you really mean: "There exists TMTrue that (always) outputs True, and there exists TMFalse that (always) outputs False"?
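For concreteness, here is the triviality in a Python sketch (the names are made up): since we may choose TM2 after TM1 is fixed, a correct "decider" can simply ignore its input.

```python
def make_decider(tm1_halts_on_all_inputs: bool):
    """Return a 'TM2' that decides TM1's halting behaviour.

    The quantifier order lets us pick TM2 *after* TM1 is fixed, so the
    decider never needs to look at its input at all."""
    return lambda _tm_description: tm1_halts_on_all_inputs

# For any TM1, one of these two constant machines is a correct TM2;
# the catch is that constructing it requires already knowing the answer.
tm_true = make_decider(True)
tm_false = make_decider(False)
```

This is why the claim as stated carries no force: all the difficulty is hidden in deciding which of the two constant machines to hand over.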

Uhm, now that you mention it...

Uhm, yeah, now that you mention it, of course it is. And yes, that seems to be what I meant, in a way. My point, on the other hand, was more that the halting problem need not be a problem in all cases.

Deciding what someone else meant

Felicia has already said that what you said is what she meant, but I think it might have been appropriate to mean something else: Namely, that there exists a machine T such that, if T halts with input a given machine S, then it correctly returns an answer to whether or not S halts on all inputs.

Of course this is itself trivial—T could just be taken to loop on any input—and any modification I can think of (like requiring it to halt on some input) could still be easily met; but I think it's a bit more in the spirit that one might want.

I know

I was being very sloppy in my wording because I didn't want to spell everything out. I was really making the much more powerful claim that in principle, I believe that brains can analyze the termination of TMs for which no UTM can decide an outcome. How's that for controversy!

Termination of TMs

Show me a TM A and your conclusive analysis of its termination and I'll show you a TM B that decides whether TM A terminates or not.

Indeed

I have no argument with that. However, what might be more difficult for you to do is to show me a TM C that computes TM B given TM A. ;> I argue that C is your brain, and that C is fundamentally different from B and A.

I doubt that my brain...

...can determine whether any given TM terminates or not.

However, it should be easy to implement a TM C that computes such a TM B given an arbitrary TM A and the result of a conclusive analysis of the termination of the TM A. ;-)

Your brain on TMs...

I doubt that my brain can determine whether any given TM terminates or not.

Give yourself some credit! ;> The problem with the TM architecture is that it is a Harvard architecture. It segregates code and data, and forces code to be immutable. In fact, this is why the Halting Problem is undecidable for TMs. Given a TM X that can decide haltability for all TMs of size <= N, it is easy to contrive a TM which X cannot analyze. But that's just because X is too brittle. So we have to create a new TM Y, which can analyze the domain of X plus our new TM.

Now, what if X could somehow morph into Y? What if we invented a new architecture which freely mixes code and data? What if we put the TM's state transition table on the tape itself?! Now we would have a different beast entirely. I claim that such a machine would be strictly more powerful than TMs, because I claim that such a machine would be able to solve the Halting Problem! Furthermore, I claim that your brain is a finite instance of such a device.

Unfortunately, I don't have anything by means of a proof. But perhaps my wild hand-waving will be fairly persuasive. ;>

Have you heard of a thing called Lambda Calculus?

Have you heard of a thing called Lambda Calculus? Devised by Alonzo Church, as in the Church-Turing thesis. In lambda calculus there exists absolutely no distinction between code and data. And it is proven to be exactly equivalent to a Turing machine.

Disagree

At the very least, the lambda operator is always "code". I submit that this corresponds to the core functionality of the Turing machine which is represented by the state transition table. Furthermore, that implies a corollary to my claim that lambda calculus is hamstrung by undecidability because the lambda operator is not powerful enough.

Well, the physical laws of our universe...

Well, the physical laws of our universe might be seen as immutable code in any system. So your brain isn't any better off.

Universal TM

Your brain is relatively hard-wired as well. Your neurons don't really change that much after a certain point. The key is that the TM state machine isn't directly equivalent to "code". You can encode "code" as TM tape data as well. That's what a universal TM does.
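The point that a TM's "code" is just data can be made concrete with a minimal interpreter sketch in Python (the encoding here is a toy, chosen for illustration): the transition table is an ordinary dictionary, exactly as inspectable and copyable as the tape.

```python
def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """Interpret a Turing machine whose 'code' (the transition table)
    is plain data: table maps (state, symbol) -> (state, write, move)."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells.get(i, blank)
                           for i in range(min(cells), max(cells) + 1))
        symbol = cells.get(pos, blank)
        state, cells[pos], move = table[(state, symbol)]
        pos += 1 if move == "R" else -1
    raise RuntimeError("step budget exhausted")

# A machine that flips bits until it reaches a blank cell:
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

A universal TM is, in essence, this interpreter itself written out as a transition table, with `table` moved onto the tape.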

Dynamic TM

I disagree. I claim that the state table *is* directly equivalent to "code", but that you can *also* encode "code" on the tape. Apparently, I am claiming that this ability to put code on the tape is not sufficient; and that changing the state table dynamically (especially by increasing its size) confers a new ability upon TMs that allow them to solve a greater range of problems. Just making a wild guess here, but perhaps the onerous task of "swapping code to tape", so to speak, during the course of solving a "hard" problem like the Halting Problem causes the machine's behavior to diverge more quickly than it needs to, and increasing the size of the state table reins in this runaway growth.

Or perhaps it's fatally flawed

Or perhaps it's fatally flawed by the existence of things like universal Turing machines. Lisp, to give an example, doesn't have the limitations you complain about - yet you can't solve the Halting Problem for either Lisp or TMs in Lisp.

UTMs

The problem with a UTM is twofold: one, you need to provide the definition of a TM as input, and two, you can only emulate a TM with it. And, as I pointed out before, a TM already has a limited computational repertoire, so emulating it with another TM doesn't get you anywhere. The fact that you have to provide a TM definition tells you something about UTMs: they are just shells. Big deal. A TM is already a pretty simple device, so emulating one in one isn't that impressive a feat. I think it is easy to assume that "universal" in "universal Turing machine" means more than it does. What would be more interesting is a TM that emulates the dynamic TM that I described. It seems that it ought to be possible, but I haven't thought about it hard enough to decide either way.

Take a UTM, add a couple of special states

Take a UTM, add a couple of special states intended to extend or modify the state table of the TM it's emulating. No big deal.

Try it...

...and tell me how it works out for you. ;> The devil is in the details, as always.

It'll work, it'll still be running on a TM

It'll work, it'll still be running on a TM, therefore it'll be no more powerful than a TM. It turns out that "copy this to that pre-marked location" and variants thereof are all doable on TMs, so building the modified UTM isn't that hard if you've been playing with them - I haven't for a year or so, and thus I'm not about to provide you with the actual machine, but I know it's possible in much the same way I know I could write a TM emulator in something like Haskell.

There is no segregation

If you execute a VM on a Turing Machine, its instructions are just data which can be modified (as is indeed done with many real VMs).

Undecidability is an artefact of the infinite memory available to a Turing Machine. It can be proven that the halting status of a Turing Machine with bounded memory can always be determined (albeit with a ridiculous amount of calculation, since each combination of memory states may possibly have to be examined).

Since all known Turing Machines have bounded memory, we can, in principle, determine whether each of them will halt or not.
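For toy machines this decision procedure is easy to sketch in Python (reusing a table-as-dictionary encoding, invented here for illustration): enumerate configurations as the machine runs, and a repeated configuration proves it never halts.

```python
def halts_bounded(table, tape, cells=16, state="start", blank="_"):
    """Decide halting for a TM confined to `cells` tape cells.

    Only finitely many (state, head, tape) configurations exist, so the
    run either halts or revisits a configuration, proving divergence.
    table maps (state, symbol) -> (state, write, move)."""
    tape = list(tape.ljust(cells, blank))[:cells]
    pos, seen = 0, set()
    while True:
        config = (state, pos, tuple(tape))
        if config in seen:
            return False        # configuration revisited: loops forever
        seen.add(config)
        if state == "halt" or not 0 <= pos < cells:
            return True         # halted (falling off the tape counts too)
        state, tape[pos], move = table[(state, tape[pos])]
        pos += 1 if move == "R" else -1

ping = {("start", "_"): ("right", "_", "R"),   # bounces between two cells
        ("right", "_"): ("start", "_", "L")}
stop = {("start", "_"): ("halt", "_", "R")}    # halts immediately
```

Note the cost: `seen` may grow to |states| × cells × |symbols|^cells entries, which is exactly the "ridiculous amount of calculation" mentioned above.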

Non-scientific

I believe that brains can analyze the termination of TMs for which no UTM can decide an outcome.

How do you plan to decide whether a particular brain has correctly analyzed a TM? Or is your belief religious in nature?

Proof?

I would ask for a proof, of course. How do you decide if a TM has correctly analyzed another TM?

At least a hint

At the least, we'd expect a hint as to how interactivity could dismantle the halting problem proof. I can't even see how interactivity enters the picture.

Misunderstanding

I never meant to imply that interactivity has anything to do with the Halting Problem. My apologies if I somehow led you to that conclusion. My claim about the Halting Problem stems from self-modifiability, which is quite distinct from interactivity.

Self-modifiability buys you nothing.

Self-modifiability doesn't buy you any computational advantage.

You seem to assert that a "modifiable" turing machine (one whose definition/programming can be changed on the fly) is more powerful than a classical TM. You equate TMs with Harvard architectures, whereas modern computers are Princeton (von Neumann) architectures.

Someone else points out the existence of UTMs, which treat the "programming" of a classical TM as data--essentially they are analogous to von Neumann architectures.

You seem to suggest that although the machine the UTM is emulating may be modified by the TM's action, the UTM itself is fixed. What if someone uses a UTM to emulate another UTM? What if this is carried out ad nauseam?

I don't mean to be rude... but you're making claims that have long been proven false. You might go re-read the relevant literature on the topic.

Literature?

I don't mean to be rude... but you're making claims that have long been proven false. You might go re-read the relevant literature on the topic.

I'd love to see literature which refutes the points I'm making. But I suspect there is a subtlety that is being missed.

Someone else points out the existence of UTMs, which treat the "programming" of a classical TM as data--essentially they are analogous to von Neumann architectures.

No. A UTM doesn't get you any further than a TM. In particular, the UTM does not modify the simulated state table of the simulated TM. If it did, then by definition it wouldn't be simulating a TM.

You seem to suggest that although the machine the UTM is emulating may be modified by the TM's action, the UTM itself is fixed. What if someone uses a UTM to emulate another UTM? What if this is carried out ad nauseam?

That would be great, except you still don't escape the Halting Problem. Why? For a few reasons. One, emulating UTMs is the wrong tack. A UTM isn't any more powerful than a TM. Two, the level of nesting is always finite, which is exactly the reason for the undecidability of the Halting Problem for TMs. "If we could only extend our HP TM to analyze *that* TM!" Three, the deeper the nesting, the larger the input tape. At some point, solving the problem would reduce to specifying the correct enormously large input, in which case the TM itself is but a silly formality.

Now, perhaps what you are trying to say is that a TM could emulate a DTM (as described here). However, I've already conceded that I don't know whether this would be a successful venture, and why I suspect that it would not. That is, I don't see a reason off-hand why a TM could not emulate a DTM, but I entertain the possibility that *even if it could, it might not halt for all inputs that a true DTM would halt on*. But looking at my DTM spec, or trying to design one yourself, I think you will begin to see that it is an interestingly different beast worthy of at least a pause for consideration.

Clarification

I'd like to try to clarify something. In another comment, you state "that such a machine would be able to solve the Halting Problem! Furthermore, I claim that your brain is a finite instance of such a device." Now, we already know that humans can't solve the halting problem (see Can humans solve the halting problem?)

However, the substance of your argument in fact seems to be claiming something quite different: that a key respect in which brains (human ones, at least) differ from TMs is that they are able to generate fresh proofs, in some cases, given arbitrary problems such as particular instances of the halting problem.

So, I'll assume for the moment that your claim about brains being "able to solve the Halting Problem" was not meant in the usual general sense. You are claiming that the human ability to generate original proofs distinguishes them from TMs (by "original", I mean "not generated by a known algorithm").

While this is intuitively appealing, no valid argument has been proposed to support this. The idea that TMs (and their lambda-calculus-based cousins) cannot modify their own state has been pointed out as false. Is there any remaining (scientific) basis for this belief about brains vs. TMs?

If this were settled...

If this were settled, then it would also settle the debates regarding the question of whether an AI would ever "really" think. I of course think that it would, because I strictly believe in the identity of indiscernibles and in not leading LtU off-topic.

Argument?

Well, it *hasn't* been proven that humans cannot solve the halting problem; but of course, it would be foolish to claim that any living ones currently can (talk about covering my bases!). Nor did I claim that humans could (or mean to imply that they could). Rather, I claimed that brains are based on an architecture which has the capability of solving the Halting Problem. Let us call this architecture the DTM (Dynamic Turing Machine).

Now, it is *not* false that TMs cannot modify their own state tables. This is true by definition. A TM could modify the state table of a *simulated* TM, but that is different from saying that a TM == a DTM. Note that a UTM could not do this because UTMs only emulate TMs. So the question is, would a DTM simulated on a TM be as powerful as a real DTM? If the answer is yes, then it follows that DTM == TM, and my claim falls on its face. If, however, a DTM turns out to be able to solve a strictly greater set of problems than a TM, then we will have the curious fact that emulating code on tape is not as powerful as making that code an intrinsic part of the machine.

This probably sounds like a fantastic claim to make, so why do I believe it might be true? If we were to design the DTM to be as similar to the TM as possible, its design might look something like so:

  1. We move from a 1D tape to a 1.5D tape. By that I mean we extrude a horizontal tape in the +y direction only. Since a finite set of tapes does not increase the computational power of a TM, and since I promise to only use a finite but unbounded area of the tape at any given time, I suspect there will be no objection?
  2. We index the tape. Each vertical strip of the tape has an "address", and the read/write head starts at 0.
  3. We introduce the "instruction set", which is a set of 4 symbols corresponding to "left", "right", "up", and "down". We map them to the integers 0-3.
  4. We introduce the "instruction pointer" (whose address we call IP), which is like a read-only version of the read-write head, with a few special properties:
    1. It can move to any address instantaneously.
    2. It can read an unbounded number of symbols vertically instantaneously. Now, to prevent "cheating", we will say that the instruction pointer always reads symbols from the "bottom" of the tape vertically until it reaches the first blank cell after having read at least 1 + log_N 4 cells.
    3. It starts at address 0.
  5. The set of symbols is mapped to the shortest contiguous sequence of the naturals starting at 0. Multi-symbol strings are translated to addresses by the usual base-n notation. The number of symbols is called N.
  6. The state transitions occur as follows:
    1. The symbol under the read/write head is inspected. Let us call its address-value A
    2. The instruction pointer moves to address IP + A.
    3. It reads the vertical symbol-string at that address. This value is the encoded state transition instruction, which is as follows:
      1. The first cell is the write-symbol, which the read/write head writes to the cell it is over.
      2. The next log_N 4 cells are the "opcode", which correspond to "L", "R", "U", "D". If the opcode decodes to any other value, or a "D" instruction is issued when the read/write head is at the base of the tape region, it is considered a halt instruction. The read-write head moves in the direction specified by the opcode.
      3. The rest of the cells are the new state S. If S is empty, the machine halts.
    4. It moves to address S * N.
Now, we notice that the DTM I have specified has some interesting properties. First, the IP must perform both addition and multiplication. It might be possible to design a DTM which avoids these operations, but at any rate, the transition operation requires making an N-way choice.

Second, the machine "knows" about integer operations intrinsically (or at least operations on the naturals). Whether this translates to fundamentally greater computational power, I can't rightly say at this time. But the TM only requires increment/decrement and a fixed-size table lookup, so it at least seems plausible that DTMs might be in a different computational class.

Third, not only can the DTM's IP traverse an arbitrary number of cells in each step, it can read an arbitrarily large state string as well. This, more than anything else, may contribute to the DTM's ability to perform meta-Turing computations (to coin a term, if I may). I would be very interested to see proofs that the DTM is or is not more powerful than the TM, but I'm afraid I won't be able to produce any right away.

random access machine

Some of the seemingly-unemulatable-by-TM properties you've listed are present in the "random access machine" computational model. TMs can emulate random access machines.
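As a sketch of the model (the instruction set here is invented for illustration), a random-access machine has indexed registers and an instruction pointer that can jump anywhere, the very features listed for the DTM above, yet the whole thing is straightforwardly TM-emulatable:

```python
def run_ram(program, registers, max_steps=100_000):
    """Toy random-access machine: registers are addressed by index and
    the instruction pointer can jump to any program address.
    Instructions: ('inc', r), ('dec', r), ('jzero', r, addr),
    ('jump', addr), ('halt',)."""
    ip = 0
    for _ in range(max_steps):
        op = program[ip]
        if op[0] == "halt":
            return registers
        if op[0] == "inc":
            registers[op[1]] += 1; ip += 1
        elif op[0] == "dec":
            registers[op[1]] = max(0, registers[op[1]] - 1); ip += 1
        elif op[0] == "jzero":
            ip = op[2] if registers[op[1]] == 0 else ip + 1
        elif op[0] == "jump":
            ip = op[1]
    raise RuntimeError("step budget exhausted")

# r1 := r1 + r0 (consuming r0), classic counter-machine addition:
add = [("jzero", 0, 4), ("dec", 0), ("inc", 1), ("jump", 0), ("halt",)]
```

Programs like `add` are the standard vehicle in textbook proofs that the RAM model and TMs compute exactly the same functions.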

Reference

I can't find any good references on random access machines. Do you happen to know of any?

Tapes are a distraction

If, however, a DTM turns out to be able to solve a strictly greater set of problems than a TM, then we will have the curious fact that emulating code on tape is not as powerful as making that code an intrinsic part of the machine.

Which seems like enough to refute your argument, at least intuitively. What causes you to suspect that such a situation might be possible?

All your designing of tape-based machines seems rather quaint to me. The lambda calculus has a fundamental philosophical lesson to teach here: everything that ordinary Turing machines do can be achieved simply with abstraction via names, binding names to values, and dereference of names. If you want to claim that your DTM improves fundamentally on what TMs and LC can do, then rather than throwing out a list of v2.0 features for your tape machine to have, it'd be much more convincing to describe an augmented lambda calculus (or similar model which doesn't appeal to arbitrary physical analogies) which exhibits this new computational property that allegedly cannot be successfully simulated.

That would force you to deal with the logical properties of this new computational model, rather than hiding at one remove behind a list of "hardware" features which you're speculating, with no evidence, have a profound impact on the computational properties of the machine.

Rather, I claimed that brains are based on an architecture which has the capability of solving the Halting Problem. Let us call this architecture the DTM (Dynamic Turing Machine).

IOW, your conjecture is that self-modifying TMs are not only equivalent to human brains, they also have the capability of solving the halting problem in general, somehow overcoming the logical contradictions this entails (and incidentally resolving a large number of unsolved problems in mathematics). I think at this point, all I can say is: I bet you're wrong.

Brains == Monads?

That is, the reason brains can do such magnificent things is because they turn themselves into different TMs on the fly!

Interestingly, the reason functional programs can do such magnificent things is because they turn themselves into different functional programs on the fly! That's exactly what happens each time you apply a function like (lambda (x) (lambda (y) (+ x y))) to a value: a new function is created, essentially turning the running program into a new program, which embodies some additional state (a particular value for x). This is just a minor example to illustrate one of the ways in which functional programs interact with state.
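Transcribed into Python for concreteness:

```python
def adder(x):
    # Applying adder creates a brand-new function value carrying the
    # state x with it: the running program has, in effect, turned
    # itself into a different program.
    return lambda y: x + y

add3 = adder(3)    # a fresh function embodying the state x = 3
add10 = adder(10)  # another one, embodying different state
```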

Extrapolating from this, one gets to monads, which package this capability into a standardized, reusable form which augment themselves each time they are invoked. "The reason monads can do such magnificent things..."

The parent post demonstrates a common misconception about state in functional programs, and reaches the wrong conclusions as a result. Mutable state is an optimization in which the time dimension is folded so that subsequent values overwrite previous values. Correctly preserving the history of previous values, as pure functional models do, provides a more faithful model of any system involving a time-like dimension, including the real world.

This is why things like accounting systems and transactional file systems are purely functional at the logical level.

The claim that "Brains are the ultimate stateful devices" is certainly not at odds with a functional model. One could just as accurately say that "functions and/or monads are the ultimate stateful devices". The implication that "functional" and "stateful" are somehow at odds betrays a misunderstanding of the relationship between the functional model and state. I highly recommend working through the at least the first three chapters of SICP as a partial antidote to this.

MONAD THE ULTIMATE

MONAD THE ULTIMATE

No, no, no! It's a lambda...

Yes, thanks for that input, Marcin. ;-)

I don't think the statement was intended to lift monads yet another step on their stairway to heaven. On the contrary, one way of getting 'stateful' computations into a functional program is to pass around (curried) functions, which by all means can be thought of as computations which carry around a state (for example, the argument they were curried with, or some invariants which hold on the function passed around).

The question then becomes: are we still talking about 'state' in the traditional sense (whatever that is), or about something completely different? And, can the traditional notion of stateful computation be modeled by passing around (curried) functions?

Lambda: The Ultimate Imperative

Lambda: The Ultimate Imperative (ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-353.ps). It's top of the list of papers. It would probably be relevant for anyone who hasn't read it and is trying to seriously follow this line of discussion (of which, I must confess, I am scoring 0/2).

Functional state representation 101

And, can the traditional notion of stateful computation be modeled by passing around (curried) functions?

Sure. The use of monads in a pure functional language like Haskell provides an existence proof of this. But to understand the principle, we don't have to go that far. SICP introduces the underlying concept in What Is Meant by Data? (See Exercise 2.4 in particular.)
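For readers without SICP to hand, the construction from Exercise 2.4 (pairs represented purely as procedures) transcribes directly; here is a Python rendering:

```python
# A pair is "whatever answers car and cdr correctly": here, a closure
# that applies its argument to the two stored values.
def cons(x, y):
    return lambda pick: pick(x, y)

def car(pair):
    return pair(lambda x, y: x)

def cdr(pair):
    return pair(lambda x, y: y)
```

No mutable cell is involved anywhere; the "state" of the pair lives entirely in the closure.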

Will you answer "no" to this question?

But what I believe is that the TM that a brain is when it starts to analyze a program is not necessarily the same TM that it is when it decides that it doesn't terminate.

This is well-trodden ground. You should read "Gödel, Escher, Bach" (particularly, the Contracrostipunctus dialogue, if I remember correctly) for why this doesn't help. Basically, in order for a machine to change itself there needs to be some process doing the changing, and unless you can prove that this process is more powerful than a Turing Machine, then your whole system, while elaborate, still has the same problems with incompleteness and the Halting Problem. To put it another way: the machine (program) you pass it to analyse for termination includes not only the beginning machine (start state) but also the mechanism by which the machine is changed, which is after all part of the same machine.

GEB

I did read that a long time ago (but I should probably read it again). Anyway, the process doing the changing would be exactly the environment in which the machine runs. That is, the inputs during (or between?) the computation would change the way the computation proceeds (because it would change the rules of procedure themselves).

Are you sure, human?

Are you so sure, human, that there is no Turing Machine beyond your power to analyze for termination?

No

I'm not sure. But it seems to me that given enough time and resources (which may well be beyond human, solar, galactic, or even universal lifetimes), human minds would be able to recognize the patterns that lead to termination or divergence without executing the entire TM. It's not so much that I believe in the power of any instance of a human brain, but rather that I believe in the plasticity of its computational model. What brains appear to be able to do is add on layers of abstraction at will, reducing complex problems to ever simpler ones. When brains fail, it most often appears to be due to a lack of computational resources, and not a failure in principle.

See www.aemea.org for mathematically disproving Church's thesis

Check out www.aemea.org. I provide a mathematical proof that the Church-Turing thesis is false. I do this by providing a solution to the Turing Immortality Problem for any given Turing machine.

Michael Fiske

Interesting but not a counter-proof

I think this is interesting work. It appears to be a new class of computation, but it doesn't disprove the Church-Turing thesis because the machine cannot be constructed or simulated with a finite amount of labor.

From the paper:

The DRM has an unbounded number of registers labelled R0, R1, R2, . . . each of which contains a natural number.

From the Church-Turing thesis:

... fulfilling the requirement of finite labor at each step and yet was provably capable of carrying out computations that cannot be carried out by any Turing machine.

So moving data among an unbounded number of registers would seem to disqualify the machine from the finite-labor requirement.

Infinite vs. unbounded

A Turing machine has an unbounded tape. I haven't looked at the paper much, but skimming the section on the DRM, it doesn't appear to have any instructions that touch more than finitely many registers in finite time, so this isn't much different than the Turing machine's tape. In fact, at first glance I don't see why this new machine couldn't be simulated by a Turing machine... I suppose I'll have to read more.
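
The point about unbounded-but-finitely-touched storage can be made concrete with a sparse representation -- a hedged sketch in Python (SparseRegisters is a made-up name): only registers that are actually written ever occupy memory, just as a simulated TM tape materializes only the cells it visits.

```python
from collections import defaultdict

class SparseRegisters:
    """Unboundedly many registers R0, R1, ..., all initially 0,
    stored sparsely: only registers actually touched occupy memory."""
    def __init__(self):
        self._regs = defaultdict(int)

    def get(self, i):
        return self._regs[i]  # defaults to 0, like a blank tape cell

    def set(self, i, value):
        self._regs[i] = value

r = SparseRegisters()
r.set(1_000_000, 42)       # touching register R1000000 costs O(1) labor
assert r.get(1_000_000) == 42
assert r.get(7) == 0       # never-written registers read as 0
assert len(r._regs) == 2   # only the two touched cells are materialized
```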

Re: not a counter-proof

For any computation of the dynamic register machine, as long as the program terminates, it only uses a finite number of registers. This is the same assumption that Sturgis and Shepherdson make in their register machine paper. This is analogous to the Turing tape being unbounded while a halting computation visits only a bounded number of tape squares.

Questions

Can you explain what about your dynamic register machine would prevent its implementation in, e.g., newLISP?

Also, this theorem,

THEOREM 4.16 If machine (Q, A, η) has an immortal configuration, then it has an immortal periodic point.

seems to contradict the existence of counterexamples such as the one constructed in this paper. Do you know why it doesn't?

Can you explain what about your dynamic register machine

You have a number of interesting questions so I will address one per reply. First, let's take a step back. The purpose of a dynamic register machine was to capture in the simplest math formalism the notion that a program should be able to change while it is executing. This has advantages when using machine learning methods in the design of a program.

Yes, it can be implemented in newLISP but will only execute up to the memory bounds of the machines that are executing newLISP. If there were a way to implement newLISP so that the length of a list is unbounded, then it would entirely support the formalism. [I will follow up on this with Lutz Mueller. See www.newlisp.org]

In fact, I wrote a newLISP dynamic register machine that executes the 1,590 line program shown in section 11. I have tested the code on small Turing machines, but have been unable to do so on large Turing machines as I do not have adequate computing resources.

So this brings up an interesting practical question. Can an interpreted program whose length can change while it is executing be more powerful than a compiled program whose executable code stays fixed throughout the computation? I think the answer is yes and what I just mentioned along with sections 0 to 11 explains why.

By the way, Jef Raskin, famous Apple engineer -- http://en.wikipedia.org/wiki/Jef_Raskin -- knew intuitively that there was something missing in theoretical computer science that was not capturing practical computing. He has an essay on the web where he mentions this, but I cannot find it. Otherwise, I would cite the essay or commentary in my references.

Hmm

If there were a way to implement newLISP so that the length of a list is unbounded, then it would entirely support the formalism.

But you don't actually need infinite lists at any point, right? You just need arbitrarily large finite lists? I'm asking because it's well known that lisp can be emulated/encoded in lambda calculus and lambda calculus can be implemented by a Turing machine. So using these emulators, wouldn't your 1590 line program provide a way to solve the Turing immortality problem on a Turing machine - a contradiction?
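
One way to see why self-modifying code does not obviously escape that emulation chain: a fixed interpreter can treat the changing program as ordinary data. A toy sketch (the three-instruction language is invented for illustration, not the DRM's actual instruction set):

```python
def run(program, fuel=1000):
    """Fixed interpreter for a toy language whose programs may modify
    themselves. Instructions: ('print', v), ('append', instr), ('halt',).
    The mutable program text is just data on the interpreter's 'tape'."""
    pc, out = 0, []
    while pc < len(program) and fuel > 0:
        op = program[pc]
        if op[0] == 'print':
            out.append(op[1])
        elif op[0] == 'append':          # the program grows itself
            program.append(op[1])
        elif op[0] == 'halt':
            break
        pc += 1
        fuel -= 1
    return out

# A program that adds a new instruction to its own end while executing.
prog = [('print', 'a'), ('append', ('print', 'b'))]
assert run(prog) == ['a', 'b']
```

The interpreter's own code never changes, so whatever the self-modifying program computes, fixed code computes too.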

If machine (Q, A, η) has an immortal configuration, ...

In regard to the paper:

On the presence of periodic configurations in Turing machines and in counter machines by
Vincent Blondel, Julien Cassaigne, and Codrin Nichitiu

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1435

it does not appear to contradict my work. I took a quick look at the first page, and their configuration space is a different topological space than mine.

Different topology

I understand you have a special topology, but your theorem 4.16 seems to me to be saying that there is some extended configuration for every immortal Turing machine that puts it into a loop (up to equivalence). That would seem to contradict the construction in the paper I referenced that demonstrates a Turing machine that continues from any configuration without halting or looping.

Edit: In other words, your statement of theorem 4.16 doesn't seem to involve your topology much (only its proof does).

Can you send me the explicit Turing machine?

My configuration space is a different topological space.

Instead of speaking in generalities, can you send me the explicit Turing machine, what you are calling the counterexample, in terms of a table similar to what I have in examples in Section 3?

I will go through how the methods in sections 7 and 8 work for this particular machine but I probably won't be able to get to this until the weekend.

If the explicit Turing machine is too big to post, please email

If the explicit Turing machine is too big to post, please email to mike@aemea.org

My configuration space is a

My configuration space is a different topological space.

Yes, but points in your space correspond to ordinary (infinite tape) configurations. I don't remember (or see) where you defined 'immortal periodic point' at the moment, but I thought that was supposed to mean a tape configuration that will, after some finite number of machine steps, return to an equivalent configuration. Is that correct?

Regarding your request for an explicit Turing machine, I'll see what I can do. I wouldn't mind understanding the counterexample a little better so I may send you something.

I am glad you are asking these questions

I am glad that you are asking these questions. The work has been submitted for peer review. Also, there are two different perspectives, math and computer science. Actually, the topology does matter because that determines what the points look like. In order to talk about periodicity, you need to have the topology or metric well-defined. Another point to consider is that the Turing machine in Section 2 that I define is "hyperbolic" in the sense that the rewrite and the left / right tape head move occur as a part of the same Turing instruction.

The "hyperbolic" Turing machine can carry out any computation as the machine defined in the paper -- and any halting machine with respect to their formalism maps to a halting machine in section 2 formalism -- you referenced. I can provide details for this. It is
straightforward so I did not include it in the sections. I figured that it was probably already well-known by the theoretical computer science community.

Assuming the paper's math is correct, their machine that contains an immortal configuration but no periodic configurations will produce a periodic point with respect to the Section 2 "hyperbolic" machine. This may be the answer to your question, but I cannot determine this unless I can see a state/symbol table that completely specifies their counterexample.

Also, did you notice that the paper was sloppy in how it defined the configuration space? Q x {T: T: Z ---> A} is their definition, but the points in their analysis are not points in this space, because they also include the tape head. See my definition in section 2, where there is an additional coordinate that accounts for the tape head location.

tape configurations

My configurations are defined in 2.3. They are of the form (q, k, T), where T: Z --> A and A is the alphabet; the Phi map in 2.11 creates an equivalence relation, so that two distinct configurations may represent the same point.

Tape configurations

Right, I think I understand this part. And I think I understand that you want to associate these (pre-equivalence) configurations with a rectangular subset of the 2D plane (though, see my comment [*] below). But where do you define "immortal periodic point"? Does that not mean what I think it does?

Also, this isn't the best forum to use for back and forths such as this, so I'll send subsequent follow-ups to your email address above.

[*] You don't seem to address that the standard map from infinite digit strings to real numbers isn't an injection. For example, the real number 2.5, in decimal, has digit strings 2.5 and 2.4999999... Thus certain points in the plane would seem to correspond to 2 or 4 distinct Turing configurations.
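
The non-injectivity is easy to check with exact arithmetic -- a small sketch (partial is a made-up helper): the truncations of 2.4999... approach 2.5 with a gap of exactly 1/10^(n+1), so the two distinct digit strings name the same real.

```python
from fractions import Fraction

def partial(n):
    """Exact value of the digit string 2.4999...9 truncated after n nines."""
    return Fraction(24, 10) + sum(Fraction(9, 10**k) for k in range(2, n + 2))

# The gap to 2.5 is exactly 1/10^(n+1): it shrinks to 0, so the distinct
# digit strings 2.5000... and 2.4999... denote the same real number.
assert Fraction(5, 2) - partial(1) == Fraction(1, 100)
assert Fraction(5, 2) - partial(6) == Fraction(1, 10**7)
```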

Not again...

This has to be the umpteenth paper on this topic (the interactive computation community seems to emit one such paper every couple of years--a whole list of 'em can be found here). It seems to shoot fish in a barrel (the so-called "strong" Church-Turing thesis, of which there are actually several, including the rather ill-formed proposition which is debunked in the paper), and while the SCTT as described is indeed ill-formed, this is hardly a remarkable or notable result. The limitation of Turing machines and the lambda calculus to mathematically-describable functions is well-known.

Either that, or the paper misrepresents the Strong Church-Turing Thesis (one possible interpretation of which is "any mathematically-describable function computable by a (digital) computer is also computable by a TM"--this interpretation, if taken to exclude I/O and instead refer only to mathematical functions, is almost certainly true).

In either case, the fact that the CTT disclaims IO/interactivity (and Turing's so-called "A" machine was formulated explicitly to avoid it, by requiring inputs to be pre-loaded onto the tape before execution) is well-known. In that sense, this paper contains no profound result that I can see.

A good online reference on the Church Turing Thesis is here.

Where this paper (and others like it) errs is in the suggestion that the debunking of the strong Church-Turing thesis is a profound result which reasonably leads to the conjecture that parallel (multi-agent) models of computation are (or may be) more powerful (in some sense, presumably other than being faster) than sequential (single-agent) models. As far as I'm aware, such a result has not been demonstrated. While the inherent "interactivity" in various multi-agent models is touted (though I dispute that agent-to-agent communication in such a model is "interactive", at least not any more so than one function in the lambda calculus invoking another constitutes interactivity), practical sequential systems are also ultimately interactive--in that they communicate with the real world via transducers of some sort.

It's certainly well-known that n Turing machines (where n >1, but is finite) is exactly as powerful as a single machine. It's also certainly true that multi-agent models are executed on the same hardware as single-agent models.

Unfortunately, the term "interactive" in this sense isn't very well-defined. It seems to include any form of input generated after execution has started, or output produced before it has finished. Turing himself fashioned this limitation to exclude the possibility of what he called O-machines (a Turing machine communicating with a hyper-Turing "oracle"--such an aggregation would certainly be hyper-Turing in power). And while we cannot build oracles, there remains considerable conjecture about the computational capability of the human mind--there are many problems that human minds solve easily, but that we don't know how to solve with digital computers. But again, unless a human (or an oracle) is interposed between two nodes in a multi-agent network, it is hard to discern how the existence of a simple communications channel between two processes increases the computational capability of such an aggregation, compared to a single process.

In particular, I seriously doubt that a network of Persistent Turing Machines (PTMs), interacting among each other but not with the outside world, is any more capable than a single Turing machine. I also seriously doubt that a network of PTMs which is allowed to interact with the outside world (for some value of outside world) is any more powerful than a single PTM doing the same.

Not a CS researcher, but...

and while the SCTT as described is indeed ill-formed, this is hardly a remarkable or notable result. The limitation of Turing machines and the lambda calculus to mathematically-describable functions is well-known.

Well, the paper argues that there are many such misconceptions in CS literature, so it's clearly not as well understood as the authors would like.

But again, unless a human (or an oracle) is interspersed between two nodes in a multi-agent network, it is hard to discern how the existence of a simple communications channel between two processes increases the computational capability of such an aggregation, compared to a single process.

The real world itself strikes me as either a more advanced Turing machine than we can hope to create, or a true Oracle/O-machine. Continual feedback from sensors seems much more feasible than predicting the progress of the real world from a given fixed state, particularly in circumstances when we don't even know how that state evolves, or the computation itself is intractable.

In particular, I seriously doubt that a network of Persistent Turing Machines (PTMs), interacting among each other but not with the outside world, is any more capable than a single Turing machine.

Theoretically, I agree.

I also seriously doubt that a network of PTMs which is allowed to interact with the outside world (for some value of outside world) is any more powerful than a single PTM doing the same.

Theoretically, I also agree with this. Networks of TMs/PTMs are more of an engineering issue than a theoretical CS issue.

I think I/O with the outside world, and not simply with other TMs/PTMs, is the definitive "advantage" of the PTM model. One could probably argue that PTMs are simply the Church-Turing thesis augmented with the possibility of communicating with O-machines. This is clearly strictly more powerful than TMs alone. I'm not sure that this buys us any useful theoretical enhancements though.

"Universal" Turing Machines

It's certainly well-known that n Turing machines (where n >1, but is finite) is exactly as powerful as a single machine. It's also certainly true that multi-agent models are executed on the same hardware as single-agent models.

This makes it sound like you are "proving" that multi-agent models aren't any more powerful than single-agent models. But the whole point is that the hardware which runs either is not a TM, and is strictly more powerful than a TM with a finite tape of comparable size.

In particular, I seriously doubt that a network of Persistent Turing Machines (PTMs), interacting among each other but not with the outside world, is any more capable than a single Turing machine. I also seriously doubt that a network of PTMs which is allowed to interact with the outside world (for some value of outside world) is any more powerful than a single PTM doing the same.

For an example of the first kind, suppose that one of the PTMs is simulating the local environment in which one of the other PTMs would "ordinarily" find itself. That is, do you accept that the laws of physics can be suitably encoded as a TM? Do you believe the universe is *computable*? If so, then we can take any particular subset, simulate it, and provide it as the input to another PTM. In fact, this is probably one of the things that makes the brain such a useful tool...the fact that it can simulate outcomes instead of experiencing them (which is obviously useful when dealing with lions and tigers and bears, oh my!).

The second claim can be addressed in the same way. A network of PTMs can help simulate the outside world, allowing for the evaluation of strategies that would be more difficult within a single PTM. Now here is where we get into some difficulties. If, in fact, the universe is computable, then we can simulate the entire thing within a single, monstrous TM, thereby eliminating the need for interactivity. The only "inputs" are the initial conditions, and they don't change during the computation. So strictly speaking, this is a critical factor in determining whether PTMs are more powerful than TMs. That is, the Strong CTT is only false *if the universe is somehow non-computable*!

On the other hand, this line of reasoning mostly violates the spirit of the original paper in question, as it seeks to understand machines that are not arbitrarily large, and especially machines that have limited information available to them. These restrictions make interactivity relevant again, since it now makes sense to once again talk about the "outside world". However, the real question, which I sense that many people are driving at, is whether TMs are the right level of abstraction at which to tackle these problems of concurrency and interaction; and I agree with the general sentiment that perhaps it is not.

This makes it sound like you

This makes it sound like you are "proving" that multi-agent models aren't any more powerful than single-agent models. But the whole point is that the hardware which runs either is not a TM, and is strictly more powerful than a TM with a finite tape of comparable size.

The claim that "a machine that can do I/O is more powerful than one which cannot" strikes me as a tautology. And it's one which was well known to Turing himself. At any rate, the definition of "power" is being muddled--in the Church-Turing Thesis (and in the literature on computation theory in general), computational power relates to the ability of different machines to solve different mathematical problems. I/O is intentionally excluded. If one redefines "powerful" to include interactivity (itself a term which is ill-defined), then one hasn't invalidated the CTT; one is simply proposing a new thesis with different premises and ground terms. Why this result, known to Turing 60+ years ago, is worthy of a paper in 2005, I've no idea.

For an example of the first kind, suppose that one of the PTMs is simulating the local environment in which one of the other PTMs would "ordinarily" find itself. That is, do you accept that the laws of physics can be suitably encoded as a TM? Do you believe the universe is *computable*?

I have no idea whether the laws of physics can be encoded as a TM, and neither does anybody else. We still don't know what all of the laws of physics are. Nor do we know whether mass/time/distance are quantized or continuous. Nor do we know a bunch of other things, including whether the universe is computable.

If so, then we can take any particular subset, simulate it, and provide it as the input to another PTM.

In other words, "if we assume the universe is computable, then we can reason about it with a computer?" Yet another tautology.

In fact, this is probably one of the things that makes the brain such a useful tool...the fact that it can simulate outcomes instead of experiencing them (which is obviously useful when dealing with lions and tigers and bears, oh my!).

Which has little to do with the computational power of a human brain, or a PTM, or a bunch of PTMs hooked up together. We still don't know for sure how the human brain relates to Turing machines in computational ability. There is no reason to assume a PTM is a reasonable model for the human brain, if that's what you are suggesting.

The second claim can be addressed in the same way. A network of PTMs can help simulate the outside world, allowing for the evaluation of strategies that would be more difficult within a single PTM.

What do you mean by "more difficult"? If you mean "requires more time", or some other performance claim, that's irrelevant to discussions of computability. If you mean that two (or some finite n) PTMs can solve a problem that one cannot, that would be an astounding result. But nobody has demonstrated such. For classical TMs, the opposite has been proven; and other than systems of PTMs which interact with an external, extra-Turing intelligence (an O-machine, essentially), I have a strong hunch that any system of PTMs you can devise can be turned into an equivalent system of TMs. After all, a PTM looks to me like a 3-tape TM, with two of the tapes artificially constrained--also something exactly as powerful as a 1-tape TM.
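
The "3-tape TM with constrained tapes" reading can be sketched as follows (a hedged model, not the formal PTM definition from the literature): a PTM is just an ordinary computable step function folded over an input stream, with the work tape persisting between interactions.

```python
def ptm(step, inputs, work=""):
    """Model a Persistent Turing Machine as a fold: 'step' is an ordinary
    computable function (input, worktape) -> (output, new worktape).
    Only the work tape persists across interactions; each step is a
    plain Turing-computable transformation."""
    outputs = []
    for x in inputs:
        out, work = step(x, work)
        outputs.append(out)
    return outputs

# Toy step function: echo the input, remembering everything seen so far.
def step(x, work):
    work = work + x
    return f"seen:{work}", work

assert ptm(step, ["a", "b", "c"]) == ["seen:a", "seen:ab", "seen:abc"]
```

Everything here is computable by a single TM once the whole input stream is fixed; the only novelty is *when* the inputs arrive.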

Now here is where we get into some difficulties. If, in fact, the universe is computable, then we can simulate the entire thing within a single, monstrous TM, thereby eliminating the need for interactivity. The only "inputs" are the initial conditions, and they don't change during the computation.

Exactamundo!

So strictly speaking, this is a critical factor in determining whether PTMs are more powerful than TMs. That is, the Strong CTT is only false *if the universe is somehow non-computable*!

If the Universe is non-computable, any argument above which assumes that the universe is computable becomes moot. At any rate, if the Universe is "non computable", and there were some ways to harness its properties to enable extra-Turing computation, and that were hooked to a Turing machine--you'd have an O-machine, or something like it.

At any rate, this paper contains nothing new that I could see. Nor does it enlighten us in any significant way about multi-agent systems vs. sequential computation, as you have demonstrated (hopefully to yourself). If I may be a bit bold (and a bit rude), a significant bit of the "interactive computing" literature promulgated by Wegner and his colleagues veers dangerously close to pseudo-science. Not because of deflating the "strong CTT", but because of the underlying suggestion that maybe, just maybe, interactive systems (which seems to include any multi-agent system, on the questionable grounds that the agents "interact" with each other) are more powerful (in some significant way) than "classical" systems, because the interactive system might someday find an oracle to talk to. At best, such claims are mere speculation--interesting, but not really scientific. At worst, such claims border on FUD.

Wegner and company would do better IMHO to study closed systems, an environment in which they can formulate testable hypotheses and thus produce useful results, than engage in pie-in-the-sky speculation about computers that can talk to the proverbial Flying Spaghetti Monster.

If one redefines "powerful" t

If one redefines "powerful" to include interactivity (itself a term which is ill-defined), then one hasn't invalidated the CTT; one is simply proposing a new thesis with different premises and ground terms. Why this result, known to Turing 60+ years ago, is worthy of a paper in 2005, I've no idea.

Nobody claims that the CTT is invalid. The only issue is that the CTT is interpreted to have a broader set of implications than is warranted. And "interactivity" is very well-defined: it is simply the property of allowing state to be modified externally while computation is being performed. What is vague about that? The reason it's relevant now is that most people implicitly accept the Strong CTT, whether you personally do or not. We should be writing papers about the sphericity of the earth if most people believe it is flat, even if scholars knew otherwise for thousands of years. Why is that a mystery?

I have no idea whether the laws of physics can be encoded as a TM, and neither does anybody else.

I have some idea. ;>

We still don't know what all of the laws of physics are.

True, but we're pretty close.

Nor do we know whether mass/time/distance are quantized or continuous. Nor do we know a bunch of other things, including whether the universe is computable.

Yup, but tantalizingly enough, it seems that the laws of physics need nothing more than Peano arithmetic to model them mathematically (and possibly not even all of Peano arithmetic).
After all, a PTM looks to me like a 3-tape TM, with two of the tapes artificially constrained--also something exactly as powerful as a 1-tape TM.

But that's wrong, because, as you stated before, "power X" is defined by the metric: "cardinality of the set of problems that can be solved by X", and since PTMs can solve problems that require "interactivity", as well as all the problems that TMs can solve, then PTMs are strictly more "powerful" than TMs, despite the apparently trivial modification.

But I'm not really a die-hard PTM apologist. I agree that they probably do not offer strong theoretical advantages over TMs. If anything, I think the value in thinking about PTMs is the nature of interactivity (or, as one poster reduced it to: "control"). We have good theoretical results about pure computations, but few if any solid theoretical results about dirty ones.

I don't know what this has to do with LtU's theme, but

Do you believe the universe is *computable*?

Quantum mechanics would appear to provide a pretty solid answer to that question: No.

Now, one could posit a deity with the capability of influencing quantum resolution and thus being able to decide whether Schrödinger ended up with a companion or a corpse. One might expect the number of such interferences to be trivial in comparison with the possibilities, and thus undetectable by low-level statistical analysis, leaving the existence (or not) of such a deity outside the realm of scientific proof.

Furthermore, one could posit some aspect of conscious lifeforms (human beings, say) making them also capable of such influence (perhaps to a lesser extent); under the right circumstances, this would allow "free will" to coexist with a stochastic computational model of the universe. Again, it would be difficult if not impossible to observe such a phenomenon using scientific tools.

I do not believe that LtU is the correct forum for discussing such a concept, so I will not engage here in any debate on the above; indeed, I am pretty well indifferent as to what anyone else might think about it. However, I do highly recommend to those whose immediate reaction is strongly negative that they (re-)read John R. Searle, particularly The Rediscovery of the Mind or The Mystery of Consciousness.

[OT] Feasibility of Quantum Computing?

Quantum mechanics would appear to provide a pretty solid answer to that question: No.

Hmm, last I knew, quantum computing had some annoying problems with irreversible functions and other issues. It gave me the distinct feeling that, at the moment, well, there isn't anything there yet.

Does anyone better informed than me, who understands the complexities better, have an explanation of QC?

Computable Universe?

Quantum mechanics would appear to provide a pretty solid answer to that question: No.

Do you make this statement because of quantum indeterminacy, or for some other reason?

[OT] computability of quantum mechanics

Quantum mechanics would appear to provide a pretty solid answer to that question: No.

I'd certainly like to know what makes this answer that solid. I for one cannot see why the universe's Schrödinger Equation couldn't be calculated to any desired precision. Granted, it's a partial differential equation in 10^150 variables, give or take some, but heck, that's nothing compared to the infinite tape of a TM!

Calculating this way will make the damn cat either dead or alive, and Schrödinger looking into the box won't change anything. There's only a "half dead/half living" cat if the system was absolutely symmetric and undisturbed to start with -- which is impossible. (Plus, he should have used a dog anyway...)

Sure the cat is either alive or dead...

But which is it? That's the halting problem with a vengeance.

In other words, you could in principle compute the Schrödinger equation to any desired precision (though whether you could do that fast enough for a prediction is another question) but at the end of that you would still only end up with a probability.

Actually you can't compute it

Actually you can't compute it at all. The machine to compute it wouldn't fit in this universe--I'm talking complexity-wise. Current estimates in information physics say that the universe is 10^80 bits, so it really can't contain an equation with 10^150 variables. It's basically the same thing as the halting problem and Gödel's incompleteness theorem: a problem has a certain minimum, and it can't be emulated by anything less. That's why you generally can't know what value a function will return with less work than running the function, a special case of which is the halting problem.

Sure you can

Just run it outside of the universe you are emulating. For that, use the meta-object protocol to the meta-universe, and...

Seriously, this thread already became quite metaphysical. I believe there is nothing wrong with this (except probably running off the topic of PLT), as the question of the notion of computation is a foundational one, and as such lies on the border between science and beliefs. For example, the discussion of the design of human brains assumes we believe that all their powers come from the structure contained in this universe. Imagine a naive programmer (programmatically) duplicating descriptors of the (hardware) processors of his computer in the hope of increasing performance. Or Dr. Square duplicating a 2D section of a lovely Sphere, only to discover that the copy does not behave like the original at all. Or Mr. Anderson duplicating... You get the idea.

My point is that if we really want to discuss issues like the "real" nature of computation, we have to get a license from Ehud (a philosophy department?).

What's sure about that?

but at the end of that you would still only end up with a probability.

Actually you don't. You end up with a (quite abstract) function, the amplitude of which has been interpreted by Max Born to be a probability. One of the cats has a _very_ low amplitude there.

To decide whether the cat is alive, you have to know the state of the whole universe, since basically everything depends on everything else to some extent (it's partly a chaotic system). Since you don't know the whole picture, you have to throw some information away, averaging over large swathes of the function. This gives a probability, but is no longer an adequate description of the state of the universe.

To try and get back to the topic: Since solvers for differential equations can be implemented on a TM, and differential equations seem to describe physics accurately, the whole universe is no more powerful than a TM. CTT generalized to an extreme. Currently, there doesn't seem to be any indication to the contrary.

[OTish] Re: DiffeQ solvers on TM

Hm, aren't those only getting approximations (how would they represent things like pi)? The real universe would thus be more complicated than what a TM can represent, no?

PI and the countably infinite

While pi is a transcendental number (and the transcendentals are uncountable), there are many useful countably infinite sets which include pi; and many useful ways of representing pi exactly with a finite string -- it can be represented as a Taylor series, as a continued fraction, etc.

At any rate, many practical solvers of differential equations use numerical rather than symbolic methods for computation -- any term is likely to have some error associated with it. For this reason, exact representation of coefficients, including well-defined constants like pi, is usually not required -- a suitable approximation will do. (In engineering applications, measured quantities are assumed to have an error from the start; it's folly to pretend otherwise and represent an imprecise measurement with an exact figure.)
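To make the numeric-over-symbolic point concrete, here is a minimal fixed-step Euler integrator in Python (an illustrative sketch of my own, not any particular solver mentioned in this thread): every step carries truncation error, so exact symbolic coefficients buy little once you've committed to a discrete method.

```python
import math

def euler(f, y0, t0, t1, steps):
    # Fixed-step Euler method: local truncation error is O(h^2),
    # global error O(h), so halving h roughly halves the error.
    h = (t1 - t0) / steps
    y, t = y0, t0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# dy/dt = y with y(0) = 1 has exact solution y(1) = e.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100_000)
assert abs(approx - math.e) < 1e-4
```

Running with 1,000 steps instead of 100,000 gives a visibly larger error, which is exactly the point: precision is bought with work, not with exact coefficients.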

One of the longstanding follies of the PLT community (some members, anyway) is the belief that we can do a better job of number-crunching (usually using symbolic processing techniques) than mathematicians who are experts in this sort of thing (and who generally reject symbolic approaches in favor of numeric ones). I always chuckle when I read somebody from the PL community ridiculing the use of things like floating-point, on the grounds that their favorite BigNum implementation is more precise.

Just because computer scientists use math (and are frequently domain experts in the various discrete subfields of math, including theory of computation), doesn't make us domain experts in all of mathematics.

The experience that Java developers had with the numerical community ought to be an instructive lesson to all.

You wrote: "While pi is a

You wrote:

"While pi is a transcendental number (and the transcendentals are uncountable), there are many useful countably infinite sets which include pi;"

One cannot say that "the transcendentals are uncountable". A number cannot be countable or uncountable; countability is an attribute of a set, not of a number.

While the set of real numbers is uncountable, pi belongs to the set of computable numbers (a subset of the reals), which is of course countable.

[OTish] Re: DiffeQ solvers on TM

They get discrete approximations, but to any desired precision. This could even be programmed lazily: evaluate more digits whenever the need arises. Granted, that's expensive on a real computer, but who's counting when the tape is infinite? Pi, being a computable number, is of course represented as a lazy promise :)
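The lazy-digits idea can be made concrete. Here is a sketch in Python using Gibbons' unbounded spigot algorithm, which streams the decimal digits of pi one at a time, on demand and without fixing a precision in advance:

```python
from itertools import islice

def pi_digits():
    # Gibbons' unbounded spigot algorithm: yields the decimal
    # digits of pi forever, using exact integer arithmetic only.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l), k + 1, l + 2)

# Demand exactly as many digits as needed -- the "lazy promise".
first_ten = list(islice(pi_digits(), 10))
# first_ten == [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The generator is the lazy promise: nothing is computed until a consumer pulls on it, and it can always be pulled further.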

The real universe would thus more complicated than what a TM can represent, no?

Or not. Around the Planck scale, things become discrete: not only charge, energy, and momentum, but probably even space and time. If so, there's not even a need for diffeqs.

Not breaking out of Church Turing, refining it...

In the limiting case, yes, I agree with you absolutely: it's nonsense to suppose that distributed systems magically have more computational power than individual Turing machines. N Turing machines communicating are no more powerful than one Turing machine.

(I would go a lot further, and say that I even find the claim that "IMs" (to quote Wegner, "interactive machines") are somehow outside of SCCT pretty absurd, not merely 'technically true but irrelevant' as I think I could paraphrase you? It's true that SCCT narrowly, technically, does not apply (you can't "simulate" IO on a machine that doesn't do IO.) But you can view the process of interest as simply a "function extended in time", and simulate that function.* Whereupon all the usual interesting consequences of the SCCT apply, I suspect.)

But there are three caveats:

The first caveat is that once you have parallel composition of processes, you do often need "reactive" models of your processes. Not because the set of computable functions changes, but because parallel composition allows something else to happen in addition to divergence: deadlock. (To speak loosely, we are *narrowing* the set of successfully-terminating processes here, rather than broadening it. This is one way of looking at why we're not really changing SCCT.)

The reactive behaviour of a process is a *very* useful *refinement* of a process's partial-function behaviour. If you don't have it, a process which consumes two apples and then produces two oranges seems the same as a process which eats an apple, produces an orange, rinses and repeats. And the two will deadlock in different (parallel-composed) contexts. Oops.
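The apples-and-oranges point can be sketched with Python generators (the names and the driver are my own illustration): both processes compute the same input/output function over complete runs, yet they compose differently with a context that feeds one apple and immediately demands an orange.

```python
def batch():
    # Consume two apples, then produce two oranges.
    yield            # first apple
    yield            # second apple
    yield "orange"
    yield "orange"

def alternate():
    # Eat an apple, produce an orange, rinse and repeat.
    while True:
        yield        # an apple
        yield "orange"

def demand_orange(proc):
    # Context: feed one apple, then demand one orange right away.
    next(proc)                   # start the process
    return proc.send("apple")    # None means the process is stuck waiting

assert demand_orange(alternate()) == "orange"  # composes fine
assert demand_orange(batch()) is None          # "deadlock": it wants a 2nd apple
```

Same function over full histories, different reactive behaviour -- which is exactly why the refinement matters under parallel composition.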

The second caveat is that parallel composition *really* changes things with non-Turing-powerful models of computation, and a lot of programs (and thinking) live here.

I'll explain the second half of that first: quite a lot of programs carry around with them (if only in their authors' heads) a rough proof of termination.** So at some level, they don't really require the full generality of a TM. Programmers are in the habit of using the limiting results of Turing intuitively to know when they are getting into hot water. If you write something which is suddenly interpreting something that looks like a full-blooded language of directives... you know to watch out for non-termination.

The problem comes with the following (admittedly erroneous) piece of intuition: "I'm composing n boring processes which all, eg, behave like fancy PDAs, I feel safe."

Which brings me to the second point: parallel composition is Alice in Wonderland for some models of computation, notably PDAs. (Arbitrary composition is Alice in Wonderland for DFAs...) Communicating PDAs are Turing-powerful (Ramalingam). Communicating DFAs are PDAs, with the right setup of communication.
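A toy illustration of why communication changes the game (my own sketch, not Ramalingam's construction): a finite control plus an unbounded FIFO channel recognizes the non-regular language a^n b^n, because the channel doubles as unbounded storage.

```python
from collections import deque

def anbn(word):
    # Finite control (two states) plus an unbounded FIFO channel.
    # All unbounded storage lives in the channel, as in
    # communicating-automata models; the control itself is a DFA.
    chan, state = deque(), "A"
    for c in word:
        if state == "A" and c == "a":
            chan.append("token")       # send one token per 'a'
        elif state in ("A", "B") and c == "b" and chan:
            chan.popleft()             # receive one token per 'b'
            state = "B"
        else:
            return False
    return state == "B" and not chan   # accept a^n b^n, n >= 1
```

No DFA alone can do this (pumping lemma), yet the composite needs only a finite control -- the extra power comes entirely from the communication medium.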

This is especially bad when people have written in DSLs which are meant to be non-Turing powerful, and then composed the results -- it is made worse by the fact that distributed computation makes it harder to "see" the algorithm which is getting run. This is, for me, where it all starts to be very LtU-relevant again: languages are notations, and implicit models of computation. So what notations make it easy to reason about what happens when things compose in parallel?

The third caveat is quite boring: some processes are intended to run forever. So you need some way of breaking them down. Most of the time, you can break them down into a "functional-reactive" style model. But even this is actually a (primitive) way of treating so-called interactive processes.

What would I conclude out of all this? Just that Wegner is right that we need to pay attention to interactivity (or as I probably prefer, following I think Nielsen et al, "reactivity") --- but he's wrong about the reasons. It's not "breaking out" of the Church-Turing model of computation, it's refining it.

Someday, we'll all be quite relaxed about Turing, and realise that a new model of computation doesn't have to "break the Turing barrier" to be interesting.

(As an aside, it reminds me a bit of Gödel (surprise, surprise -- I'm surprised no one has really trotted out the Turing-Gödel correspondence yet)... and how people get a bit too worked up about it. In particular, Smullyan has a rather cute statement about why the second incompleteness theorem, whilst wonderful, isn't as bad as it seems: in essence, if a formal system (or salesman...) walked up to you in the street and told you it could assert its own correctness (or honesty...), would you believe him/her?)

Cheers

*(So the input tape is just a temporally ordered list of inputs, and you expect the output tape to be a temporally ordered list of responses. Sure, some input tapes will be invalid following certain prefixes of output tapes... but you can avoid these if the machine is deterministic. If it's non-deterministic, just allow an "error, rest of tape is garbage" symbol on the output tape.)

**The exceptions tend to be things like interactive programs (which arguably are already reasoned about in an interactive model) and interpreters, which people tend to know about -- at least after the Turing-tarpit crowd get on to it and notice you can use your filelist specification language to write a minesweeper clone that describes the board in morse code...
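The "function extended in time" reading from the parenthetical above can be sketched directly (a minimal illustration of my own, with hypothetical names): an interactive process becomes an ordinary function from a temporally ordered list of inputs to a temporally ordered list of outputs, by folding a step function over the input history.

```python
def run_process(step, state, inputs):
    # Simulate an interactive process as a plain function on a
    # finite input history: fold the step function over the inputs,
    # collecting the temporally ordered outputs.
    outputs = []
    for x in inputs:
        state, out = step(state, x)
        outputs.append(out)
    return outputs

# Hypothetical example process: echo a running total of the inputs.
def counter(state, x):
    return state + x, state + x

assert run_process(counter, 0, [1, 2, 3]) == [1, 3, 6]
```

Every finite prefix of the interaction is an ordinary computable function of the input prefix, which is the sense in which nothing escapes SCCT here.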

The subject of this thread is

The subject of this thread is only barely on topic for LtU.

Let's try to keep the discussion at least semi-related to programming languages, if we can. It's ok to give references to resources about AI etc., but let's not debate the Chinese Room here (and/or Lucas's argument).

Andris is our in-house guru on this matter. I wonder what he has to say.

From a PL Perspective...

...it probably means that we need to add I/O and State to our programming languages. One wonders how we got along without these for so long.

Computational power != expressiveness

I think the point of the article is that Turing Machines are frequently presented in a way that makes you believe they can do everything a real machine can do, and this is not true. A real machine can cope with an infinite number of inputs and outputs (to model this, you would need an infinite number of Turing Machines, or a model like the Persistent Turing Machine that is cited in the article).

I think I agree

At first I was highly skeptical of the paper's claims. But after thinking about it for a while, I tend to agree.

I offer functional programming into evidence that Turing Machines don't capture all of computation. Namely, nearly every functional programming language sacrificed purity in the name of doing input and output. The few languages that didn't had highly awkward or non-existent IO. It wasn't until the invention of monads that IO became reasonably natural in a purely functional framework, a problem solved only within the last 15 years.

The pure lambda calculus is exactly equivalent to Turing machines. However, since it was so difficult to integrate the pure lambda calculus with IO, it seems to me that there really is something to the authors' claims.
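A minimal sketch of the monadic idea that made IO natural (in Python for concreteness; the class and function names are mine, not any library's): programs build IO actions as pure values, and effects happen only when an interpreter runs the resulting description.

```python
class IO:
    # An IO action as a pure value: a thunk plus monadic bind.
    # Programs *describe* effects; only run() performs them.
    def __init__(self, thunk):
        self.thunk = thunk

    def run(self):
        return self.thunk()

    def bind(self, f):
        # >>= : sequence this action, then the one f builds from its result.
        return IO(lambda: f(self.run()).run())

def pure(x):
    # return/unit: an action that performs nothing and yields x.
    return IO(lambda: x)

def put_line(s):
    return IO(lambda: print(s))

# A purely constructed program: nothing happens until .run() is called.
prog = pure("hello").bind(lambda s: put_line(s.upper()))
prog.run()  # prints HELLO
```

The point the post is making falls out of the sketch: composing `prog` is pure and referentially transparent; the side effects are confined to the interpreter's single `run()` at the edge.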

It wasn't until the invention

It wasn't until the invention of monads that IO became reasonably natural in a purely functional framework, a problem solved only within the last 15 years.

Even then, monads don't make for a "purely functional execution", if such a thing can even be envisioned -- that is, conceptually, monads still rely on run-time side-effects for their execution.

At least, that's how I see monads, with my admittedly limited experience.