The Complexity Zoo

Please do not feed oracles to the complexity classes! These classes require a specially balanced diet to ensure proper relativization.

Way too often, PL designers point to the Turing machine or some other formalism as a benchmark of expressivity. For practical purposes, though, the algorithmic complexity of programs matters just as much. E.g., if a program takes O(n) steps on a Turing machine but O(2^n) on my shiny new rewriting engine, then it's time to doubt the usefulness of my engine in practice.
What are the well-known (classical) results relating complexity classes of programs expressed on different media (calculi/abstract machines/etc.)?
Or am I totally confused, and no such thing could exist?
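
To make the kind of gap I mean concrete, here is a minimal Haskell sketch (the toy Expr type and the two cost functions are entirely made up for illustration): the same nest of "double" operations takes linearly many steps when the argument is shared, but exponentially many when a rewriting strategy blindly duplicates it.

-- Toy expression: n nested applications of "double" over the constant 1.
data Expr = One | Double Expr

-- Steps under a naive rewriting strategy that duplicates arguments:
-- Double e rewrites to e + e, so e ends up being evaluated twice.
costDuplicating :: Expr -> Integer
costDuplicating One        = 1
costDuplicating (Double e) = 1 + 2 * costDuplicating e

-- Steps when the argument is shared (call-by-need / call-by-value):
-- e is evaluated once and its value is reused.
costSharing :: Expr -> Integer
costSharing One        = 1
costSharing (Double e) = 1 + costSharing e

nested :: Int -> Expr
nested n = iterate Double One !! n

main :: IO ()
main = mapM_ report [5, 10, 20]
  where
    report n = putStrLn (show n ++ " doublings: duplicating = "
                 ++ show (costDuplicating (nested n))
                 ++ ", sharing = "
                 ++ show (costSharing (nested n)))

The point is only that the asymptotics here come from the evaluation strategy, not from the algorithm being expressed.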

Poly-time Church-Turing

Well, the polynomial time Church-Turing thesis posits that the class P is the same under all deterministic models of computation - any reasonable attempt to model deterministic computation will end up polynomial-time equivalent to a Turing machine. As far as I know there's no evidence against this, but it's one of those widely-accepted pieces of wisdom that's not stated formally enough to be formally proven.

Of course, once you allow non-determinism, quantum computers and so forth, the situation changes: a non-deterministic machine, for example, can by definition solve NP-complete problems in polynomial time.

Sure, but there's an apprecia

Sure, but there's an appreciable difference between linear algorithms (in P) and quadratic algorithms (also in P). This is the obvious reason why people use random-access machines rather than Turing machines for string processing.

Of course, although most of c

Of course, although most of complexity theory suffers from this problem: it's generally defined in terms of Turing machines, but it aspires to be about some abstract/generalised notion of computation rather than being tied to any one arbitrarily chosen machine, and so it has to work modulo polynomial equivalence. And of course it's only concerned with asymptotic behaviour, rather than behaviour at practical input sizes.

The OP's example was a difference between polynomial and exponential time complexity though, which the theory can help with :)

My serious point is that diff

My serious point is that differences do surface between machine architectures, that in practice those differences really do matter, and that those differences are asymptotic (i.e. the big-O of a faster polynomial algorithm is not the big-O of a slower one). It is simply not enough to say that some algorithm runs in polynomial time and is therefore tractable given a big enough cluster. These differences matter to anyone who does not have an unlimited supply of supercomputers.

I agree! I was merely pointin

I agree! I was merely pointing out for the OP's benefit why complexity theory isn't useful for studying these differences between polynomially-equivalent machines.

Aha

Thanks! Using your post I was able to google quite a lot. Here is a short example of that.

But still, isn't "only a polynomial slowdown" a bit too much in practice? Would business users invest in a single-tape TM if it costs only half as much as a double-tape one (both of them costing an infinite amount :-) ), knowing that the running time is squared for the cheaper one? Don't users of high-level PLs expect only a constant slowdown compared to bare metal? What theory can guarantee that properly built PLs do not incur the full polynomial slowdown?

My education is pretty much absent in this area, so excuse me if I am asking something obvious.

There are many examples...

Basically, every strongly-normalizing (i.e., terminating) lambda calculus represents some sub-Turing complexity class.

For example, System F can express all of the primitive recursive functions (and in fact a great deal more). The well-typed programs of Light Affine Logic encode exactly the polynomial-time functions.
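
For a small taste of the System F side of this, here is a Haskell sketch (using the RankNTypes extension) of Church numerals at their System F type. Everything you can write at this type is total, yet ordinary arithmetic is easy to express; this is only meant as a flavor of programming inside a strongly-normalizing fragment, not a formal claim about its exact complexity class.

{-# LANGUAGE RankNTypes #-}

-- Church numerals at their System F type: forall a. (a -> a) -> a -> a.
type Church = forall a. (a -> a) -> a -> a

czero :: Church
czero _ z = z

csucc :: Church -> Church
csucc n s z = s (n s z)

cadd :: Church -> Church -> Church
cadd m n s z = m s (n s z)

cmul :: Church -> Church -> Church
cmul m n s = m (n s)

-- Read a numeral back as an ordinary Int.
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (cmul two (cadd two three)))   -- 2 * (2 + 3) = 10
  where
    two, three :: Church
    two   = csucc (csucc czero)
    three = csucc two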

"What is an Efficient Implementation of the lambda-calculus?"

I guess I found what I wanted:

What is an Efficient Implementation of the lambda-calculus?

We have defined a notion of complexity with which to measure the efficiency of an implementation of the lambda-calculus. In terms of this, the complexity of a functional program can be given a precise meaning. We show that all implementations must have at least linear complexity. We have devised a technique [...] for proving combinator based implementations inefficient. We [...] show that a conventional machine model can be simulated in the lambda-calculus with only a constant amount of overhead.
But note also this paper: Optimality and Inefficiency: What Isn't a Cost Model of the Lambda Calculus?

Different Models of Computation

The usual way of showing that a model of computation X is equivalent to a TM is to show that a) a TM can 'emulate' X and so execute any program running on X, and b) X can emulate any TM. In the case of X emulating a TM you write a program in X which emulates the TM's tape and control unit. A typical TM instruction is:

If in state A reading a given symbol, write a character on the tape and/or move L/R, then change to state B

The time taken to do this on X will probably be roughly constant, i.e. if one step of the TM takes 1 second, it will take K seconds on X. So any program running on the TM could be run by X, and the program would have the same complexity - though the constants would be different. In that sense the complexity classes don't change.

They could change if, for example, X could read a number n and then in one step do something to n objects. (Note that n would have to be a runtime value, or you'd just have something equivalent to a multi-tape TM.)
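
To make the "constant overhead per step" point concrete, here is a small single-tape TM simulator in Haskell. The "flipper" machine and its transition table are invented purely for illustration; the relevant observation is that each simulated TM step costs a fixed amount of host work (one lookup in a fixed-size transition table plus one tape-zipper update), which plays the role of the constant K above.

import qualified Data.Map as M

data Move = L | R

-- Transition table: (state, scanned symbol) -> (new state, written symbol, move).
type Delta = M.Map (String, Char) (String, Char, Move)

-- Tape as a zipper: cells left of the head (nearest first), the scanned cell,
-- and the cells to the right.
data Tape = Tape [Char] Char [Char]

blank :: Char
blank = '_'

moveHead :: Move -> Tape -> Tape
moveHead L (Tape (l:ls) h rs) = Tape ls l (h:rs)
moveHead L (Tape []     h rs) = Tape [] blank (h:rs)
moveHead R (Tape ls h (r:rs)) = Tape (h:ls) r rs
moveHead R (Tape ls h [])     = Tape (h:ls) blank []

-- Run until no transition applies; return the halting tape and the step count.
run :: Delta -> String -> Tape -> (Tape, Int)
run delta st tape@(Tape ls h rs) =
  case M.lookup (st, h) delta of
    Nothing           -> (tape, 0)
    Just (st', c, mv) -> let (t', n) = run delta st' (moveHead mv (Tape ls c rs))
                         in (t', n + 1)

-- Example machine: in state "flip", invert bits left to right until a blank.
flipper :: Delta
flipper = M.fromList
  [ (("flip", '0'), ("flip", '1', R))
  , (("flip", '1'), ("flip", '0', R)) ]

main :: IO ()
main = do
  let input = "0110100"
      (Tape ls h rs, steps) = run flipper "flip" (Tape [] (head input) (tail input))
  putStrLn ("tape after halt: " ++ reverse ls ++ [h] ++ rs)
  putStrLn ("TM steps taken:  " ++ show steps)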

So, what, we should all progr

So, what, we should all programme in lambda calculus?

Seriously though, we all knew that. Just because two computational models can compute the same set of functions does not mean that they will take the same number of operations, or even that the functions describing the number of operations will have the same big-O. As I've already said, reversing a string takes on the order of n*n operations on a single-tape Turing machine, but n operations on a random-access machine.
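
A back-of-the-envelope Haskell version of that claim (this is only a cost model, not an actual Turing machine: assume a single tape where touching cell j with the head at cell i costs |i - j| moves, and that we reverse by carrying each symbol to its mirror position and walking back):

-- Tape-cost model: carrying the k-th of n symbols to its mirror position
-- (n - 1 - k) and walking back costs roughly twice that distance.
tapeCost :: Integer -> Integer
tapeCost n = sum [ 2 * abs ((n - 1 - k) - k) | k <- [0 .. n - 1] ]

-- RAM-cost model: one constant-time read/write per element.
ramCost :: Integer -> Integer
ramCost n = n

main :: IO ()
main = mapM_ report [10, 100, 1000]
  where
    report n = putStrLn ("n = " ++ show n
                 ++ ": tape moves ~ " ++ show (tapeCost n)
                 ++ ", RAM operations ~ " ++ show (ramCost n))

The exact constant is beside the point; under the tape-distance cost the total grows like n^2, while the random-access count stays linear.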

Physics?

What are the well-known (classical) results relating complexity classes of programs expressed on different media (calculi/abstract machines/etc.)?

Why not add physical constraints to the list? For example, we like to think that we've got O(1) access to the elements of an array, but that hides the fact that RAM is actually implemented with O(log n) decoder trees once you consider the gates that make up the row and column decoders. And even that is overly optimistic, because if your bits take up volume (and the speed of light is constant), then the best you'll get is O(n^(1/3)).
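
Purely to illustrate those growth rates (the units and constants are fictitious; only the shapes matter), a few lines of Haskell comparing the log-depth decoder estimate with the cube-root distance estimate:

-- Idealized estimates, constants ignored: gate depth of an address decoder
-- for n bits, and the distance to the farthest bit if bits occupy volume.
decoderDepth :: Double -> Double
decoderDepth n = logBase 2 n

farthestBit :: Double -> Double
farthestBit n = n ** (1 / 3)

main :: IO ()
main = mapM_ row [2 ^ 20, 2 ^ 30, 2 ^ 40]
  where
    row :: Integer -> IO ()
    row n = putStrLn ("n = " ++ show n ++ " bits: decoder depth ~ "
              ++ show (round (decoderDepth (fromIntegral n)) :: Integer)
              ++ " gate levels, farthest bit ~ "
              ++ show (round (farthestBit (fromIntegral n)) :: Integer)
              ++ " length units away")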

Functional/non-functional and strict/lazy

There's a rather impressive result which I feel belongs in this discussion. Nicholas Pippenger showed that there are certain algorithms that cannot be implemented with the same complexity in a purely functional strict language as in a language with destructive updates. Later Bird, Jones and de Moor showed that these algorithms can in fact be implemented with good complexity in a lazy language. Thus lazy languages are in some cases asymptotically faster than strict pure languages. A good page which discusses these results can be found at Oxford.
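
To be clear, the following is not the construction from those papers, just the standard small Haskell illustration of how lazy evaluation plus sharing can change an algorithm's asymptotics: the self-referential fibs list reuses every previously computed value, so the n-th Fibonacci number costs a linear number of additions, while the naive recursion redoes work roughly exponentially often.

naiveFib :: Int -> Integer
naiveFib n
  | n < 2     = fromIntegral n
  | otherwise = naiveFib (n - 1) + naiveFib (n - 2)

-- Lazily built, shared stream of all Fibonacci numbers.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

lazyFib :: Int -> Integer
lazyFib n = fibs !! n

main :: IO ()
main = do
  print (lazyFib 10000 `mod` 1000003)  -- fast: linearly many additions
  print (naiveFib 30)                  -- already slow: over a million calls

Again, this is memoization via sharing rather than the precise strict-versus-lazy separation in Pippenger's and Bird, Jones and de Moor's papers; see the Oxford page for the real constructions.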