The Origins of the Turing Thesis Myth

In this paper, we identify and analyze the historical reasons for the widespread acceptance of what we call the Turing Thesis myth, which extends the Turing Thesis to imply that Turing Machines model all computers.
The paper discusses how
Turing Thesis: Whenever there is an effective method (algorithm) for obtaining the values of a mathematical function, the function can be computed by a TM.
became
Strong Turing Thesis: A TM can do (compute) anything that a computer can do.
While certainly nothing new for LtU regulars, the paper still has some educational value.

[on edit: Warning, some of the statements in the paper may aggravate Haskell programmers]

Yampa?

Functional Reactive Programming appears to be a way for TMs to do what the authors say TMs cannot do. See Yampa for an example of some FRP ideas at work.

(edit) I am not sure whether a TM proper could do everything that a computer running FRP can do. However, I would suggest that FRP at least upholds the honour of "the mathematical worldview" in the area of computation.
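
As a rough illustration of the signal-function idea behind Yampa, here is a hand-rolled sketch with my own toy types (not Yampa's actual arrow-based API): a signal function transforms a stream of input samples into a stream of outputs, carries state between samples, and is driven by inputs that only become available while the program is running.

```
-- A stream transformer: consume one input sample, produce one output
-- and a continuation that remembers whatever state it needs.
newtype SF a b = SF { step :: a -> (b, SF a b) }

-- Example: a running sum.  The "memory" of past inputs lives inside
-- the signal function itself.
sumSF :: Int -> SF Int Int
sumSF acc = SF $ \x -> let acc' = acc + x in (acc', sumSF acc')

-- Drive the signal function with inputs that only arrive while the
-- program is running (here: integers typed on stdin, one per line).
run :: Show b => SF Int b -> IO ()
run sf = do
  line <- getLine
  let (out, sf') = step sf (read line)
  print out
  run sf'

main :: IO ()
main = run (sumSF 0)
```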

Is a Haskell program equivalent to a TM?

Well, I warned you :)

Do you mean that a Haskell program, compiled and linked with the Haskell runtime support, is still equivalent to a Turing machine? Or is a Haskell interpreter equivalent to a universal Turing machine?

I don't know

Isn't it?

(Another edit) What I'm not sure of is just where the equivalence ends, assuming it does. What I don't want to do is try to jump off at the deep end and then find there's another three feet of diving-board in front of me.

I think the point about interaction is starting to sink in. But at one end of the interaction, what you have is still a Turing machine. If things are plumbed in such a way that the (unpredictable, at least by the machine itself which doesn't have a model of the external world to predict with) consequences of its own outputs are fed back to it as inputs, then you have a control system. But it's a control system based on a Turing machine nevertheless.

Haskell is a "choice machine"

Haskell is equivalent to Turing's "choice machine" (although I haven't proved that ;) by virtue of the presence of input functions like getChar, which provide input to a computation after it has started.
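
For concreteness, here is a minimal sketch of what I mean (my own toy example): the loop below never sees its whole input in advance; each getChar is answered by the environment while the computation is already in progress.

```
import Data.Char (toUpper)
import System.IO (BufferMode (NoBuffering), hSetBuffering, stdin)

-- Echo input in upper case, forever.  There is no "whole input"
-- fixed before the run starts: each getChar delivers a fresh choice
-- from the environment mid-computation.
echoUpper :: IO ()
echoUpper = do
  c <- getChar
  putChar (toUpper c)
  echoUpper

main :: IO ()
main = hSetBuffering stdin NoBuffering >> echoUpper
```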

Of course, this is true of all real computers and programming languages (which support interactive input).

If things are plumbed in such a way that the (unpredictable, at least by the machine itself which doesn't have a model of the external world to predict with) consequences of its own outputs are fed back to it as inputs, then you have a control system.

That arrangement is formally distinct from an ordinary TM, though. An ordinary TM's input and output are both its tape. You're describing a system in which the tape is modified by an outside agent during a TM's computation. That's pretty much the sort of thing that a choice machine is designed to allow.

One of the issues here, as I understand it, is that theoretical work and proofs have mostly focused on ordinary Turing Machines, not choice machines. Regardless of how small the difference between the two may seem informally, that doesn't mean that all the formal results for TMs follow for CMs.

[P.S. In case anyone missed it, choice machines are mentioned on the first page of the paper - the above assumes that this was noticed.]

Author's reply

I had an email exchange with this paper's lead author, Dina Goldin. She tried to register here to post a response, but had some problems (which I'm investigating).

In the meantime, here is her response to the above message:


Dominic Fox wrote:
But at one end of the interaction, what you have is still a Turing machine [...] it's a control system based on a Turing machine nevertheless.
I am the author of the "Origins of the Turing Thesis Myth" paper, as well as other papers on interaction. It is true that there is a Turing machine inside a sequential interaction system (i.e. a Persistent Turing machine, or PTM). However, this does NOT imply that their expressiveness is the same. After all, if one goes back to the definition of the Turing machine, its computation is nothing more than a series of lookups in a transition table, and it could be viewed as a sequence of computations by a finite automaton. However, as we all know from our Theory of Computation courses, the expressiveness of Turing machines is greater than that of finite automata. Similarly, as our I&C paper shows (http://www.cse.uconn.edu/~dqg/papers/its.pdf), the expressiveness of PTMs is greater than that of Turing machines.

"Mathematical Worldview"

They seem to equate functions and mathematics.

From the many courses I've taken and the material I've read, this strikes me as a narrow and under-exposed viewpoint. If they say that it's not mathematical, they are basically saying it's unformalizable.

However, as we all know from

However, as we all know from our Theory of Computation courses...

That would be where I missed out, then.

Does it then follow that even if two "Turing complete" programming languages are equivalent in terms of the functions they can evaluate, they are not necessarily equivalent in terms of the computations they can perform?

Turing-complete is not Turing-limited

Does it then follow that even if two "Turing complete" programming languages are equivalent in terms of the functions they can evaluate, they are not necessarily equivalent in terms of the computations they can perform?

As I understand it, the Church-Turing thesis states that all computable functions can be computed by Turing machines. Being "Turing complete" means being capable of at least that. Most practical PLs are "Turing complete" (they can compute any computable function), but they are not limited to just computing functions (mostly because of their support for interaction).

So...

Are there then any significant differences between "Turing complete" PLs in terms of what computations they can perform (if by "computations" we mean more than just the computing of computable functions)?

If I've understood correctly so far, the fact that Haskell can be used to write interactive programs and (unadorned) XSLT can't means that, even though both languages are "Turing complete", Haskell can perform computations that XSLT can't (although it can't compute any functions that XSLT can't).

Could we say, for instance, that Haskell can do anything a PTM can do (and vice versa), and that all PTM-capable languages are equivalently computationally expressive?

Yes, we could, but...

A short answer is yes - just make all PLs equivalent by definition. Which is probably not what you meant.

A simple idea off the top of my head: introduce a binary relation between PLs, say "can be used to write an interpreter of". It is clearly a preorder (reflexive and transitive). Use it to induce an equivalence, "interpretationally equivalent", and a partial order on the equivalence classes, "interpretationally stronger". This looks consistent, though it may not be very useful in practice - the initial preorder needs to be defined more formally (something along the lines of simulation?). I believe that most general-purpose languages will be "interpretationally equivalent", while being "interpretationally stronger" than languages without I/O or than DSLs.
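
A toy rendering of the construction, with the relation itself simply assumed as data (defining it properly is the hard part); the names and sample entries below are invented purely for illustration.

```
type PL = String

-- l1 `canInterpret` l2: "an interpreter of l2 can be written in l1".
-- The relation must be reflexive and transitive for the construction
-- to go through; the table here is made up.
canInterpret :: PL -> PL -> Bool
canInterpret l1 l2 =
  l1 == l2 || (l1, l2) `elem`
    [ ("Haskell", "Scheme"), ("Scheme", "Haskell")  -- mutually interpretable
    , ("Haskell", "XSLT"),   ("Scheme", "XSLT") ]   -- but not vice versa

-- Induced equivalence: interpretable in both directions.
interpretationallyEquivalent :: PL -> PL -> Bool
interpretationallyEquivalent a b = canInterpret a b && canInterpret b a

-- Induced strict order: strictly "interpretationally stronger".
interpretationallyStronger :: PL -> PL -> Bool
interpretationallyStronger a b = canInterpret a b && not (canInterpret b a)
```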

Also, there are subtle problems with what counts as a PL - what to do with libraries, FFI...

Or categorically

That preorder just establishes the fact that some language is "interpretable" by another. If one is interested in specific interpreters, a category might be a better idea.

Consider a category of PLs, with arrows from L1 to L2 being interpreters of L2 written in L1. Composition is, obviously, running one interpreter on top of another.

On edit:

Or better yet, think of PLs as types, interpreters (or compilers) as functions, and you get a lot of intuitions for free (including false ones). Just imagine a polymorphic compiler, or curried interpreter.
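
To pin down that intuition with toy types (my own example - two invented mini-languages, nothing standard): treat each language as the Haskell type of its programs; a translator from one to the other is then just a function between those types, the interpreters give each type its meaning, and composing translators is ordinary function composition.

```
-- "High-level" toy language: arithmetic expressions.
data Expr = Lit Int | Add Expr Expr deriving Show

-- "Low-level" toy language: programs for a little stack machine.
data Instr = Push Int | AddTop deriving Show
type StackProg = [Instr]

-- An arrow from Expr to StackProg: a compiler, itself written in Haskell.
compile :: Expr -> StackProg
compile (Lit n)   = [Push n]
compile (Add a b) = compile a ++ compile b ++ [AddTop]

-- Interpreters give each language its meaning.
evalExpr :: Expr -> Int
evalExpr (Lit n)   = n
evalExpr (Add a b) = evalExpr a + evalExpr b

runStack :: StackProg -> Int
runStack = head . foldl exec []
  where
    exec st (Push n)         = n : st
    exec (x : y : st) AddTop = (x + y) : st
    exec st AddTop           = st        -- ill-formed program; ignore

main :: IO ()
main = do
  let prog = Add (Lit 1) (Add (Lit 2) (Lit 3))
  print (evalExpr prog)            -- 6
  print (runStack (compile prog))  -- also 6: the translation preserves meaning
```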

Turing completeness

Note that a total language (one in which all programs terminate), like Epigram, is necessarily Turing-incomplete. Is this a problem? I don't think so, if we view programs as solutions to problems, which we have to justify in some way. Epigram offers you the possibility of explaining your solution in a way that can be checked by a compiler. Not every program has to be explained - Epigram should allow for this - but not all solutions are explainable with respect to a fixed formal system. That's a problem for philosophers, not for programmers: Epigram's logic is strong enough to explain any run-of-the-mill program.
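
A Haskell-flavoured illustration of the distinction (Epigram's own notation differs, and Haskell has no totality checker, so take this only as a sketch of the kind of definition such a checker accepts or rejects):

```
-- Structural recursion: each call is on a strictly smaller argument,
-- so a totality checker would accept this definition.
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- General recursion: nothing in the definition guarantees the argument
-- shrinks (this is the Collatz function), so a total language would
-- reject it as written, even if it happens to terminate in practice.
collatz :: Int -> Int
collatz 1 = 0
collatz n
  | even n    = 1 + collatz (n `div` 2)
  | otherwise = 1 + collatz (3 * n + 1)
```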

Is Epigram a bottomless TM?

Epigram's logic is strong enough to explain any run-of-the-mill program.
Are web servers run-of-the-mill programs?

The road I see for Epigram to become more complete is to embed it into some other system, for which Epigram's runs would be just steps - very much like a persistent TM embeds a classical TM. One might argue that since, intuitively, Epigram is almost a TM without non-terminating runs, "persistent Epigram" is almost a PTM (without non-terminating steps). A very naive judgement, but pretty simple.
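
A toy sketch of that embedding, with made-up types (this is neither Epigram nor the formal PTM definition, just the shape of the idea): each step is a total function from persistent state plus one input to one output plus new state, and only the surrounding driver loop is potentially non-terminating.

```
-- Hypothetical persistent state carried between steps.
type State = [String]

-- One step: a total (always terminating) computation, the kind of
-- thing a total language like Epigram could define and check.
step :: State -> String -> (String, State)
step st input = ("inputs so far: " ++ show (length st'), st')
  where st' = input : st

-- The surrounding system supplies persistence and an unbounded
-- interaction stream; only this loop may run forever.
driver :: State -> IO ()
driver st = do
  input <- getLine
  let (output, st') = step st input
  putStrLn output
  driver st'

main :: IO ()
main = driver []
```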

Would we get more than two classes?

There might be a little more scope for differentiation between DSLs, but above the threshold of Turing-completeness-plus-I/O, is any language "interpretationally stronger" than any other?

It depends

is any language "interpretationally stronger" than any other?

I expect the answer to depend on the particular formalization of interpreters. There are plenty of papers about equivalence semantics, though I haven't read any of them, so I can't say whether they are any good :)