When Are Two Algorithms the Same?

When Are Two Algorithms the Same? Andreas Blass, Nachum Dershowitz, Yuri Gurevich. February 2008

People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.

A bit more philosophical than usual, but the issue is quite relevant to discussions in the field.

It is possible to stipulate any equivalence relation that is considered useful (e.g., equivalence up to local transformations) but the notion of a universally applicable relation is indeed problematic.
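
To make the stipulation concrete, here is a minimal Python sketch of one such stipulated relation (the expression representation and the single rewrite rule are illustrative assumptions of mine): two expression programs are declared equivalent iff they agree after a fixed set of local rewrites is applied.

    # A sketch of a *stipulated* equivalence: two expression trees count as
    # "the same" iff they agree after applying a fixed set of local rewrites
    # bottom-up. The tuple representation and the x + 0 -> x rule are
    # hypothetical, chosen only to illustrate the idea.

    def rewrite(e):
        """Apply local simplifications bottom-up: x + 0 -> x, 0 + x -> x."""
        if isinstance(e, tuple) and e[0] == '+':
            a, b = rewrite(e[1]), rewrite(e[2])
            if a == 0:
                return b
            if b == 0:
                return a
            return ('+', a, b)
        return e

    def locally_equivalent(e1, e2):
        return rewrite(e1) == rewrite(e2)

    # ('+', 'x', 0) and 'x' are identified; ('+', 'x', 'y') and ('+', 'y', 'x')
    # are not, unless commutativity is added to the rule set.
    assert locally_equivalent(('+', 'x', 0), 'x')
    assert not locally_equivalent(('+', 'x', 'y'), ('+', 'y', 'x'))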

What *is* an algorithm, anyway?

Again excluding functions in the abstract (mappings from domain to codomain, possibly partial, which specify no way to compute an element of the codomain from an element of the domain): do algorithms really even exist independently of some computational model?

We often speak of the time and space requirements of a function or algorithm. To speak of such things with regard to a function-in-the-abstract is generally misleading, though it is often shorthand for the lower bound over all known (or, in some cases, all possible) algorithms. We can say that sorting a collection of well-ordered things requires Ω(n log n) comparisons in the worst case, independent of the algorithm, as it has been proven that no algorithm can do better.

Or can we? The proof of that bound assumes a computational model which permits certain operations (comparison, exchange, selection of an arbitrary element) at certain defined costs. If we admitted to our model an oracle which sorts in O(n) time, then the analysis would change and the proof would no longer apply. OTOH, if we were to use a model in which certain operations were no longer O(1), then the lower bounds would also change.

All claims concerning performance of an algorithm assume a particular computational model.
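
For instance, here is a rough Python sketch of that model dependence (the restriction to small non-negative integer keys and the function name are my own assumptions): once the model admits indexing an array by the key itself, a counting sort performs no element-to-element comparisons at all, sidestepping the comparison-sort lower bound.

    # A sketch of the model-dependence point: the Omega(n log n) bound is about
    # comparison-based sorting. In a model that also allows using a (small,
    # non-negative integer) key as an array index, counting sort runs in
    # O(n + k) and never compares two elements to each other.

    def counting_sort(xs, k):
        """Sort non-negative integers < k without comparing elements."""
        buckets = [0] * k
        for x in xs:
            buckets[x] += 1      # uses the key as an index: an operation the
                                 # comparison model does not grant for free
        out = []
        for value, times in enumerate(buckets):
            out.extend([value] * times)
        return out

    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], k=10))
    # [1, 1, 2, 3, 4, 5, 6, 9]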

What of equivalence of computer programs?

Well... programming languages generally come with a semantics of some sort, and such semantics generally define a computational model. Many such models are highly abstract and elide details such as performance (especially the runtime cost of operations); OTOH, real programs are ultimately executed on machines (real or virtual) in which such things can be measured, and which do have deterministic small-step semantics. And a good part of PLT research involves program transformation: taking one program as input and producing another program as output (possibly targeted at a different computational model) which produces the same outputs, but which may not embody the same "algorithm". If I write a naive recursive implementation of a factorial program, and my compiler replaces it with an iterative implementation using TCO, is it the same algorithm?
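
For concreteness, a minimal sketch in Python (the names are mine, and the iterative version stands in for what TCO of an accumulator-passing factorial would yield): both definitions compute the same function, and whether they are the same algorithm is exactly the question.

    # Two factorials that agree on every input: one recurses, one iterates in
    # the shape a compiler might produce after tail-call optimizing an
    # accumulator-passing version. Same function; same algorithm?

    def factorial_recursive(n):
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    def factorial_iterative(n):
        acc = 1
        for i in range(2, n + 1):
            acc *= i
        return acc

    assert all(factorial_recursive(n) == factorial_iterative(n)
               for n in range(20))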

Program equivalence is hard

Program equivalence is a really hard problem, one that seems to evade even a half-decent problem definition. At best you translate it into the problem of equivalence of formal systems, and the difficulty of specification doesn't disappear. Anyone game for a clear problem specification?
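
One reason even the extensional version is hard: behavioural equivalence of programs is undecidable, by the standard reduction from the halting problem. A hedged Python sketch of that reduction follows (the decider `equivalent` referred to in the final comment is hypothetical; that it cannot exist is the point).

    # A standard reduction sketch: deciding extensional program equivalence
    # would decide halting. Given a program p and input i, build two functions
    # that agree on every argument exactly when p fails to halt on i.

    def make_pair(p, i):
        def always_zero(_x):
            return 0

        def zero_unless_p_halts(_x):
            p(i)          # runs forever if p diverges on i ...
            return 1      # ... and gives a different answer if it halts

        return always_zero, zero_unless_p_halts

    # If a total, correct `equivalent(f, g)` existed, then
    # `not equivalent(*make_pair(p, i))` would decide whether p halts on i,
    # contradicting the undecidability of the halting problem.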

I suppose that if formal-system equivalence were a solved problem, we wouldn't be having all this debate on software patents, 'cos telling whether a patent is really new might just be a matter of pushing a button to compare it against the existing patent body.

Standard philosophy trick

The paper does note some interesting problems and difficult ideas, but showing that deep results in computer science are hard to obtain isn't exactly new.

The fundamental problem I see is the use of the standard philosophical cop-out, which is to argue about the definitions of words. Instead of taking one definition of "equivalence" and one of "algorithm", the authors keep wavering over what they could mean. Sure, it might be fun to bicker over those definitions, but that does not lead to interesting work. You have to set your assumptions before you can proceed.

For example, I could set down assumptions that would make it possible to construct equivalence classes of algorithms. In point of fact, we do it all the time in ACL2. Take the properties of a sorting algorithm. The standard formulation is that it returns an ordered permutation of an input list, for some definition of ordering. As long as I can prove a function has that property, I can assert that it is in the class of sorts, since I've defined the term. Now, there may be problems in making the proof itself, but that, again, is nothing new.
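
A rough rendering of that property outside ACL2, in Python (spot-checking rather than proving, and the helper names are mine): a function belongs to the class of sorts exactly when its output is ordered and is a permutation of its input, regardless of how it computes.

    # The "class of sorts" as a property: ordered output that is a permutation
    # of the input. Here the property is only tested on samples, not proved.

    from collections import Counter

    def is_ordered(xs):
        return all(a <= b for a, b in zip(xs, xs[1:]))

    def is_permutation_of(xs, ys):
        return Counter(xs) == Counter(ys)

    def satisfies_sort_spec(sort_fn, inputs):
        return all(is_ordered(sort_fn(xs)) and is_permutation_of(sort_fn(xs), xs)
                   for xs in inputs)

    samples = [[], [1], [3, 1, 2], [2, 2, 1], list(range(10, 0, -1))]
    assert satisfies_sort_spec(sorted, samples)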

I'd be interested if someone else found something more meaty in the paper, but as far as my quick reading could tell, they didn't seem to go out on a limb and actually assert anything.

I think fundamentally there

I think fundamentally there is a difference between functions and algorithms for computing those functions.

For example, one can define when two recursive functions are equivalent, but there may be many different algorithms for computing the same function: mergesort, quicksort, and bubblesort may all compute the same function, yet traditionally they are regarded as different algorithms.
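
A small Python illustration of that distinction (the comparison-counting instrumentation is my own device, not anything standard): bubble sort and merge sort agree on every input tried, yet their comparison counts diverge widely, which is one informal symptom of their being different algorithms for the same function.

    # Two sorts that compute the same function on the inputs tested but perform
    # very different numbers of comparisons. The counter argument is a crude
    # way of observing the intensional difference.

    def bubble_sort(xs, counter):
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                counter[0] += 1
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def merge_sort(xs, counter):
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid], counter), merge_sort(xs[mid:], counter)
        out = []
        while left and right:
            counter[0] += 1
            out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return out + left + right

    data = list(range(100, 0, -1))      # reverse-ordered input
    c1, c2 = [0], [0]
    assert bubble_sort(data, c1) == merge_sort(data, c2)   # same outputs ...
    print(c1[0], c2[0])                 # ... very different comparison counts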

shouldn't be that hard

I wish to postulate some definition of "classical algorithm" that isn't quite the step-wise program category they talk about.

In the classical CS literature, machines are always assumed to be finite but might be hooked up to infinite tapes or token streams. The main action of a classical algorithm is in that finite part: finite memory, finite number of states, and so on.

Classical CS literature tries to be "scale free" most of the time. The algorithms all run on finite machines, but they are described leaving open just how big a machine is being talked about. For example, a stack may be assumed to be "unbounded" in size for the practical purposes of the algorithm, but it's very clear that what is being talked about is a finite stack that is "large enough".
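
A minimal Python sketch of that reading (the class and the capacity figure are illustrative assumptions of mine): the algorithm is written against an abstract stack, but any actual run uses a finite stack whose capacity is simply chosen "large enough" for the inputs at hand.

    # A finite stack standing in for the "unbounded" stack of the abstract
    # description; the capacity is just picked large enough in practice.

    class BoundedStack:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []

        def push(self, x):
            if len(self.items) >= self.capacity:
                raise OverflowError("stack not 'large enough' for this input")
            self.items.append(x)

        def pop(self):
            return self.items.pop()

        def __bool__(self):
            return bool(self.items)

    def reverse_with_stack(xs, capacity=1_000_000):   # "large enough" here
        s = BoundedStack(capacity)
        for x in xs:
            s.push(x)
        out = []
        while s:
            out.append(s.pop())
        return out

    assert reverse_with_stack([1, 2, 3]) == [3, 2, 1]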

Knuth makes that pretty explicit with the deliberately ambiguous architecture of the MIX machine.

In this reading of the classical literature, algorithms are always descriptions of regular languages or perhaps linear bounded automata. Yes, they may be expressed in Turing-complete "pseudo-Algol" or as some stack automaton, but it is always implicit that this is understood to mean "as mapped onto a 'large enough' finite machine".

Maybe questions like "is algorithm A equivalent to B" or "is algorithm A a generalization of algorithm B" are easier to frame precisely (and answer) if we explicitly regard classical algorithms as classes of related regexps and LBAs (such classes containing arbitrarily good approximations of infinite machines for inputs limited to a certain size).

-t

Proof of the Church-Turing(ASM) thesis

The three authors of this paper are well-known ASM scholars. In a paper previously discussed on LtU here, they develop a proof of the Church-Turing thesis and, I think, also of the ASM thesis. It is tempting to wonder what the relationship is between these two papers. The first sentence of the abstract,

    Find, and argue conclusively for, a formal definition of algorithm and the appropriate analog of the Church-Turing thesis.
may be a clue. Are they building a case for their proof of the ASM thesis and, by implication, the Church-Turing thesis? They seem to say that there is no difference between an algorithm and an ASM?

Has anyone read this book:

Has anyone read this book:

Automatic Algorithm Recognition and Replacement: A New Approach to Program Optimization

Seems like it may be relevant....