The simplest mechanism with Turing-equivalent power to date ...

From Stephen Wolfram's announcement:

But today I am thrilled to be able to announce that after only five months the prize is won--and we have an answer: the Turing machine is in fact universal!

...

We plan an official prize ceremony in a few weeks--fittingly enough, at Bletchley Park, where Alan Turing did his wartime work.

But for now I'm just thrilled to see such a nice piece of science come out.

It's a very satisfying way to spend $25,000.

The universal Turing machine in question has two states and three colors.
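A machine of this size is small enough to simulate directly. Below is a minimal sketch of a two-state, three-color Turing machine simulator; note that the rule table shown is an arbitrary illustrative one of the same size, not Wolfram's actual prize machine.

```python
from collections import defaultdict

def run_tm(rules, steps, init_state=0, blank=0):
    """Run a Turing machine for a fixed number of steps.

    rules maps (state, color) -> (new_state, new_color, move),
    where move is -1 (left) or +1 (right). The tape is a sparse
    dict defaulting to the blank color.
    """
    tape = defaultdict(lambda: blank)
    state, pos = init_state, 0
    for _ in range(steps):
        # Read the current cell, then write, switch state, and move.
        state, tape[pos], move = rules[(state, tape[pos])]
        pos += move
    return tape, state, pos

# Illustrative 2-state, 3-color rule table (NOT the prize machine).
rules = {
    (0, 0): (1, 1, +1), (0, 1): (0, 2, -1), (0, 2): (1, 0, +1),
    (1, 0): (0, 2, -1), (1, 1): (1, 2, +1), (1, 2): (0, 1, -1),
}
tape, state, pos = run_tm(rules, 100)
```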


Another LtU hook

I notice that the author of the proof, Alex Smith, "has a background in mathematics and esoteric programming languages."

Turing tarpits

The 2,3 machine reminds me of other Turing tarpits like BrainFuck and Unlambda.

I'm not sure it is accurate to crown 2,3 as the "simplest universal Turing machine" - unless one explains what is meant by "simple". That is, 2,3 is the simplest UTM only under the constraint of constructing a machine from the smallest number of bits possible.

[Note: content variously edited - stumbling for the right words].

Mapping to LC?

Does anyone know of any work done mapping Wolfram's CA formalism to the LC?

In particular, it would be interesting if there is a naturally isomorphic encoding between a "program" in the CA and a lambda expression, or perhaps more generally a combinator basis of the LC to one of these CA Turing machines.

(Having finally acquired a copy of Barendregt after many failed attempts, I can't comprehend anything right now that doesn't map onto the LC. ;-) )

I commented on this...

...at Reddit.

Yeah, that's me.

Incidentally, Vaughan Pratt disputes Smith's result, apparently because the universal machine he proposes requires a tape with an infinite, though repeating, program. I don't quite understand what this means in LC. Maybe it means an interpreter "program" which is not finite. (Programs are finite by definition.)

It seems to me that if the infinite program (source) is repeating, and it can be computed, then the TM could produce it while it is running. If the source is not computable, then it should not be called universal. But there is some coroutining or concurrency involved here, so I am not sure.
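The "repeating" point can be made concrete: a periodic background is trivially computable on demand, so a machine never needs the whole infinite initialization materialized up front. A sketch (the class name and interface are hypothetical, just for illustration):

```python
class LazyTape:
    """Tape whose cells are generated on demand from a repeating
    pattern, illustrating that a periodic 'infinite' initialization
    can be produced while the machine runs."""
    def __init__(self, pattern):
        self.pattern = pattern
        self.written = {}          # cells the machine has overwritten

    def __getitem__(self, i):
        if i in self.written:
            return self.written[i]
        # Periodic background, defined for every integer index.
        return self.pattern[i % len(self.pattern)]

    def __setitem__(self, i, v):
        self.written[i] = v

tape = LazyTape([0, 1, 2])
assert tape[7] == tape[1] == 1   # periodic background
tape[7] = 9                      # the machine writes over a cell
assert tape[7] == 9 and tape[1] == 1
```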

Just wanna say...

... good to "see you" here, Frank!

Thank you, kindly.

And the same to you, Marc!

Vaughan Pratt's objection

I presume someone will correct me if I've misread Vaughan Pratt.

Vaughan Pratt's description of Smith's construction (which Smith and the Wolfram Inc. folks seem to agree with) has two parts: 1. a clearly non-universal process that generates an infinite, aperiodic encoding of the input problem; 2. the challenge-problem 2,3 machine clearly doing universal computation given such input.

Vaughan Pratt points out that the same construction technique would prove that something roughly like an LBA is "universal". Imagine an LBA-like device that happens to run on an infinite tape. Regions of the tape can be marked off, wherein the machine will operate like an LBA until/unless it can prove that it will not halt with a solution while confined to that region. When such non-termination-in-this-region is found, the machine marches off in one direction (say, always "right" on the tape) until it finds a copy of the same problem, but encoded with (say) twice as much space. The process repeats indefinitely. A non-universal program can copy the input program infinitely many times onto the tape, each copy having twice as much "space" as the previous. The sort-of LBA that marches right when it gets stuck is clearly not universal. The combined system of a march-right LBA with a space-doubling program copier is clearly universal.
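The combined system can be sketched abstractly: a bounded attempt that may fail for lack of space, plus a driver that retries with twice the space. Everything here (the MemoryError signal, the toy computation) is illustrative scaffolding, not Smith's or Pratt's actual construction.

```python
def run_bounded(compute, space):
    """Attempt a computation confined to `space` cells; return the
    result, or None if it cannot finish in that space. (Stands in
    for the LBA-like phase of the construction.)"""
    try:
        return compute(space)
    except MemoryError:      # toy signal for "stuck in this region"
        return None

def march_right_and_retry(compute, space=1):
    """Doubling-retry driver: each failed attempt gets a fresh copy
    of the problem with twice the space, mirroring the march-right
    LBA combined with the space-doubling copier."""
    while True:
        result = run_bounded(compute, space)
        if result is not None:
            return result, space
        space *= 2

# A toy computation that needs at least 10 cells of workspace.
def needs_ten_cells(space):
    if space < 10:
        raise MemoryError
    return sum(range(10))

result, space_used = march_right_and_retry(needs_ten_cells)
# Succeeds on the attempt with space 16 (after 1, 2, 4, 8 fail).
```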

So, says Vaughan Pratt, the 2,3 machine has been shown to be only as "universal" as our move-right-and-retry LBA -- not universal at all, on its own.

Smith and the Wolfram Inc. folks counter that the word "universal" has been expanded over the past 40 years, in part to include cases exactly like this one. In particular, they insist it's reasonable to let any clearly-non-universal process initialize the entire tape. Any machine that can compute, given such input, is usefully said to be "universal".

After all, they might point out: a true LBA would never do the step of "march right and restart" since a true LBA would never try to move outside the space for the fixed-space encoding it is giving up on. So, the "march right and restart" behavior adds computational power.

I'm with Vaughan Pratt on the vocabulary part, at least. The 2,3 machine has not been shown to be universal. Universal machines accept a previously constructed input and do arbitrary computation. Smith's construction (like the retrying LBA) must run concurrently with a second process, one that continuously expands the region of the tape that has been initialized. The dynamic behavior of a 2,3 machine engaged in a universal computation cannot be explained without reference to the concurrent dynamic behavior of an encoder process. The total system is the 2,3 machine plus the concurrent machine that encodes the tape: the computational process is a combination of the two activities.

That said, the 2-3 machine (at least) belongs to a class that is at least as interesting as universal machines: it is (we could dub it) a "general purpose computer".

General purpose computers have exactly the same power as the march-right machine: LBA-like machines that are nonetheless capable of universal computation if they are continuously fed additional initialized input tape at some finite rate. Real-world computers are exactly like this: you could keep running a divergent computation that requires unbounded space on a Pentium PC if, outside the computer, you built (and kept extending) a robotic tape library to give your computation more and more space.

Every universal machine is, using these definitions, a general purpose computer. The converse is false, though. Smith has shown that the 2-3 machine is a general purpose computer, not that it is universal.

We could go on to define the class of "proper general purpose computers" -- those which are not universal. The structure of proper and improper general purpose computers and universal machines, plus how these relate to the classic Chomsky-hierarchy machines, has not been explored very far yet, as far as I can tell. It is apparent, though, that there are useful distinctions to make between general purpose computers and universal machines. And there are further distinctions to draw because, in the case of the 2,3 machine, we don't yet know whether it is a proper or improper general purpose computer.

I'm also with Vaughan Pratt in being surprised that Smith's construction was declared to prove the 2,3 machine "universal" in the first place. The Wolfram Inc. folks have come up with a bibliography of fewer than 20 papers that arguably use the word that way. That seems a slim branch to crawl out on.

Note to Wolfram Inc.: Next contest should say, simply: "$N for any proof we deem to be really cool that pertains to Y." We can then figure out what the proof actually means after the fact, trusting in your apparently good sense of really cool.

-t

AEP

(Acronym Expansion Police): For the record, this usage of LBA appears to mean Linear Bounded Automaton.

Unbounded LBA?

If an LBA is defined as a Turing machine restricted to an amount of tape linear in the size of its input, then what is an LBA without the size restriction but a general Turing machine?

A Turing machine, too, is only universal if you keep "feeding in" tape indefinitely (that is, letting it move onto more tape).

unbounded lba

Given a "classic LBA" L as you understand it, one can build (without general computation, in a finite number of steps) a second LBA, L-halting, that always either computes the same final result as L or halts with a state and tape condition that tells us that L would loop forever.
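The standard reason L-halting exists: an LBA on a fixed input has only finitely many distinct configurations (state x head position x tape contents), so if it runs for more steps than it has configurations, it must have repeated one and therefore loops forever. A sketch of that counting argument, with toy stand-in machines:

```python
def run_with_loop_detection(step, config0, max_configs):
    """Run a machine one `step` at a time. If it takes more steps
    than it has distinct configurations, it repeated one, hence it
    loops forever. `step` returns the next configuration, or None
    when the machine halts."""
    config = config0
    for _ in range(max_configs):
        nxt = step(config)
        if nxt is None:
            return ("halt", config)
        config = nxt
    # Pigeonhole: max_configs + 1 configurations visited among at
    # most max_configs distinct ones -> a repeat -> a loop.
    return ("loops",)

# Toy machine: counts down to zero, then halts.
step = lambda n: None if n == 0 else n - 1
assert run_with_loop_detection(step, 3, 10) == ("halt", 0)

# Toy machine that cycles 0 -> 1 -> 0 -> ... forever.
cycle = lambda n: 1 - n
assert run_with_loop_detection(cycle, 0, 10) == ("loops",)
```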

Suppose (for illustration -- there are other ways to construct this) that L and L-halting both use the colors Red, Blue, and Green on the tape.

L-halting-as-turing is a Turing machine (infinite tape) that is just like L-halting, except there is an additional color -- Purple. When we program L-halting-as-turing, we paint some contiguous finite region of the tape with Red, Blue, and Green -- but the rest of the tape in both directions is Purple. Because L-halting-as-turing is (at heart) just an LBA, if started at the right place in the initialized region, it will never stray into a Purple cell. It will always halt before doing so.

Now we can make one more machine: L-searching-using-L-halting-as-turing.

That last machine runs just like L-halting-as-turing, but if it would ever halt because L would halt, it does something different -- it marches towards the right looking for purple. Finding purple, it continues until (if ever) it hits Red, Blue, or Green and, should it find such a cell, it does a second round of emulation of L-halting-as-turing. (Then it repeats -- if that second emulation doesn't work, it looks for a third chance, etc.)
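One way to sketch the rightward search over initialized regions separated by purple gaps (the bounded emulation itself is stubbed out with a toy predicate; when a region yields no answer, the machine keeps marching):

```python
PURPLE = "P"

def search_regions(tape, emulate):
    """Scan the tape left-to-right for initialized (non-purple)
    regions and run the bounded emulation `emulate` on each,
    stopping at the first region that yields an answer. `tape` is
    a finite prefix here, standing in for an infinite tape."""
    i = 0
    while i < len(tape):
        # March over the purple gap.
        while i < len(tape) and tape[i] == PURPLE:
            i += 1
        # Collect the next initialized region.
        start = i
        while i < len(tape) and tape[i] != PURPLE:
            i += 1
        region = tape[start:i]
        if region:
            verdict = emulate(region)
            if verdict is not None:   # this round of emulation worked
                return verdict
            # otherwise: keep marching right for another chance
    return None

# Toy emulation: "works" once the region is at least 4 cells wide.
emulate = lambda region: len(region) if len(region) >= 4 else None
tape = list("RBG") + [PURPLE] * 2 + list("RBGRBG")   # two regions
assert search_regions(tape, emulate) == 6
```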

So, it's pretty obvious that L-searching-using-L-halting-as-turing is not universal like a universal Turing machine. If you give L-searching* a tape with only a finite non-purple region, it can't compute anything that couldn't be computed by giving an LBA (specifically, L) a similarly initialized region.

But if the tape is not infinitely purple in at least one direction, there's a chance that L-searching* can do more computation.

Can L-searching* be universal if the rest of the tape isn't all purple -- but is some regular language pattern? Yes. As it passes over each purple cell, L-searching* can paint it some other color. When it next encounters a non-purple cell in its rightward-trek, it can enter a "retry" state that regards the painted over purple cells as extra memory in which to retry the same problem.

So here is an interesting question: do there exist LBAs such that the corresponding L-searching* is *not* universal with any regular-language input tape but *is* universal with some aperiodic tape that can be defined by induction over some non-universal computation?

Smith has shown that the 2,3 machine is equivalent to an L-searching* machine that is universal given a non-regular input tape. Is it universal given a regular tape?

Additional questions:

We can transpose the Chomsky hierarchy to input tapes. There are infinite tapes that can only be given by induction over a regular language, infinite tapes that can only be given by induction over a linearly bounded language, over a push-down automaton language, or over a Turing machine.

Corresponding to each class of tapes is a class of machines that, given some tape of that sort, can complete a universal computation.

For example, regular machines, LBAs, and PDAs all (trivially) form a computationally complete system with a tape prepared by induction over a universal Turing machine.

A regular machine does *not* form a complete system when combined with a tape prepared by another regular machine, or by an LBA or PDA.

And so on.

Smith has exhibited two machines: the 2-3 challenger and the induction step of his tape construction. Together they are universal. I'm not sure we know much more than that.

-t

An aside on TMs versus "GPCs"

This is not quite the same distinction as your GPCs here versus traditional TMs, but ...

Yuri Gurevich had a nice result back at the University of Michigan in the late 1980's or early 90s about TMs and their infinite tapes. Specifically, he showed that the computing power of a machine with finite tape size, as that finite size went to infinity, was in the limit not equal to the power of a traditional TM with an infinite tape.

This is a bit surprising, since "finite, tending toward infinity in the limit" sure sounds a lot like "infinite".

I could probably dig up a reference on this if someone wants it.

(Yuri is now at Microsoft Research, doing work on Abstract State Machines, some of this with Andreas Blass (still at Michigan, I think). ASMs were a kind of reaction to Denotational Semantics in their early years, at least as I understood it.)

TMs vs. "GPC" question

Specifically, he showed that the computing power of a machine with finite tape size, as that finite size went to infinity, was in the limit not equal to the power of a traditional TM with an infinite tape.

Taking those words as ordinary English, they sound trivially false, and I don't see a paper on Yuri's list of publications (http://research.microsoft.com/~gurevich/annotated.html) that is obviously what you are talking about. So, I'm sure I have no idea what you refer to.

Can you give some pointers and/or expand on what you mean by "limit", "infinite", and "equal in power"?

Thanks,
-t

Gurevich on limits of TMs (and the genesis of the ASM project)

[Sorry for the delay, Thomas: I couldn't find Yuri's tech report in my files, but eventually got it (or what I suspect I was thinking of) from the web; "Google rocks", etc.]

I think the tech report in question was Reconsidering Turing's thesis: (toward more realistic semantics of programs), a 1984 technical report, #CRL-TR-36-84, with a PDF available at the link cited, via the U. Mich. libraries' "Deep Blue" project. (No chess connection, AFAIK: "blue" is just the color associated with Michigan's sports teams, as in "Go blue".)

I've just skimmed the report, but the basic idea is a critique of the notion in Turing's model of access to unbounded tape. Yuri defines a model based on families of resource-bounded machines instead. This particular tech. report does not formalize everything, but asks whether uniform families of bounded machines correspond to traditional TMs (with unbounded resources), including some discussion about algorithms which do or don't make use of explicit access to the bounds. While I don't think there is a firm technical result here regarding the difference between unbounded TMs and families of bounded machines in the limit, it is at least suggested by an informal construction. And I recall it being pitched that way at the time, although that may have been someone else's characterization (i.e., not Yuri's).

There may be more formal/technical results in Yuri's later work: you'll have to forgive some fuzziness and gaps in my memory, since this is all about 24 years ago. For that reason and others, you should look to Yuri's actual papers (rather than my interpretation) for an accurate sense of his results and position.

LTUers in general may be more interested in some remarks in the introduction to the tech. report about lambda calculus as the basis for formal semantics, e.g.:

It was surprising to find untyped lambda calculus as a basis for denotational semantics. This calculus was never too popular among logicians; the underlying intuition of the calculus is not clear. Note that untyped calculus is used in [St] mostly for foundational purposes; when it comes closer to applications, a much more intuitive typed calculus is used.

The paper represents the beginning of Yuri's (and others') Abstract State Machine project, which is now quite mature and well-documented in the papers listed at his (aforementioned by Thomas) Microsoft website. In essence this is a critical response to denotational semantics in general.

At the time I think I had a basic stylistic clash with Yuri over these perspectives: I found lambda calculus quite intuitive and machine-based formalisms less so. Now I find his critique more interesting. Of course, I was just a beginning grad student in 1984: I have matured a lot since then (I hope ;) ), taught courses on automata and formal languages, etc.

(Ironically, I think that the combinator-graph based implementation of Miranda (tm), which I was using and studying at the time, would probably fit quite well into Yuri's scheme of things ... but I suspect his taste does not especially run toward functional languages. Miranda is of course a trademark of Research Software, Limited.)

computing vs. leaving evidence of computation

So, there's a way (as I describe above) of breaking down universality into two machines: one that prepares the infinite tape by an inductive process; the other that runs on that tape, never catching up to the initializing process.

Really, it's a threesome: there are those two machines plus a third that emits an atomic signal indicating that the solution to a computation is ready (and where it can be found).

I wonder if anyone has cataloged the 64 cases in some systematic way (4 Chomsky categories raised to the (input-tape, computer, output-detector) power).
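The count itself is just 4^3 = 64; a quick enumeration (the level names are placeholders for the four Chomsky categories):

```python
from itertools import product

# One Chomsky-hierarchy level chosen independently for each of the
# three roles: tape-preparing process, computing machine, and
# output-detecting process.
LEVELS = ["regular", "context-free", "context-sensitive", "r.e."]
cases = list(product(LEVELS, repeat=3))
assert len(cases) == 64
# e.g. ("regular", "context-sensitive", "regular") pairs a regular
# tape-preparer with an LBA-class computer and a regular detector.
```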

-t

don't have time to work on it

There's something interesting about related rates, too. I'd guess someone has explored it already but:

Suppose you have one clearly sub-universal machine preparing an infinite tape for a machine that is universal given a suitable tape with infinite initialization.

The tape-initializing machine has to run at least as fast as the then-universal machine eats tape. In general, the rate at which the universal-ish machine consumes space has to be no greater than the rate at which the initializer can create space -- or else the machine isn't obviously realizable.
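One way to make the rate constraint concrete is a pull-based model, where the computing machine can only take a cell the initializer has already produced. A generator sketch (the names and the periodic pattern are hypothetical, just for illustration):

```python
def initializer(pattern):
    """The sub-universal preparing process: produces initialized
    tape cells one at a time, here from a repeating pattern."""
    i = 0
    while True:
        yield pattern[i % len(pattern)]
        i += 1

def consume(tape_source, cells_needed):
    """The computing machine requests fresh cells as it runs; each
    request blocks on the initializer having produced the next cell,
    so consumption can never outpace production."""
    return [next(tape_source) for _ in range(cells_needed)]

cells = consume(initializer([0, 1, 2]), 7)
assert cells == [0, 1, 2, 0, 1, 2, 0]
```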

Not sure if Smith addressed that or not....

-t