Optimisation by repeated beta- and eta-reduction

The following post recently showed up on Planet Haskell: Morte: an intermediate language for super-optimising functional programs. From the post:

Now suppose there were a hypothetical language with a stronger guarantee: if two programs are equal then they generate identical executables. Such a language would be immune to abstraction: no matter how many layers of indirection you might add the binary size and runtime performance would be unaffected.

Here I will introduce such an intermediate language named Morte that obeys this stronger guarantee.

The typed lambda calculus possesses a useful property: every term in the lambda calculus has a unique normal form if you beta-reduce everything. If you're new to lambda calculus, normalizing an expression equates to indiscriminately inlining every function call.
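As a small illustration of normalization-as-inlining (a Haskell sketch with made-up names, not Morte's own syntax): two definitions that differ only in a layer of abstraction beta-reduce to the same term.

```haskell
-- `addTwice` goes through an abstraction layer (`twice`);
-- `addTwice'` is what you get after inlining every call by hand.
twice :: (a -> a) -> a -> a
twice f x = f (f x)

addTwice :: Int -> Int -> Int
addTwice n = twice (+ n)

addTwice' :: Int -> Int -> Int
addTwice' n x = (x + n) + n

-- Beta-reducing `addTwice n x`:
--   twice (+ n) x  ~>  (+ n) ((+ n) x)  ~>  (x + n) + n
-- i.e. exactly `addTwice'`: the abstraction layer normalizes away.
```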

I am worried about this because the author explicitly wishes to support both folds and unfolds, and, according to Catamorphisms and anamorphisms = general or primitive recursion, folds and unfolds together have the expressive power of general recursion—so that not every term has a normal form (right?). More generally, it seems to me that being able to offer the strong guarantee that the author offers implies in particular a solution to the halting problem, hence a non-Turing-complete language.

Later, the author says:

You can take any recursive data type and mechanically transform the type into a fold and transform functions on the type into functions on folds.

I have a similar concern here; it seems to me to be saying that folds can express general recursion, but I thought (though I don't have a reference) that they could express only primitive recursion.
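For what it's worth, the "mechanical transformation" in question is Church encoding, and sketching it makes the primitive-recursion flavour visible (Haskell with RankNTypes; the names here are mine, not the post's):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A recursive type...
data Nat = Zero | Succ Nat

-- ...transformed into its fold: a Nat *is* its own eliminator.
type ChurchNat = forall r. r -> (r -> r) -> r

toChurch :: Nat -> ChurchNat
toChurch Zero     z _ = z
toChurch (Succ n) z s = s (toChurch n z s)

-- Functions on the type become functions on folds, e.g. addition:
addC :: ChurchNat -> ChurchNat -> ChurchNat
addC m n z s = m (n z s) s

fromChurch :: ChurchNat -> Int
fromChurch n = n 0 (+ 1)

-- Note the shape: a fold can only consume its argument structurally,
-- one constructor at a time -- primitive recursion, not general recursion.
```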

Have I got something badly conceptually wrong?

if two programs are equal

if two programs are equal then they generate identical executables.

Supposing the author isn't all wet, there does appear to be something odd going on. Supposing the definition of equality is about what one would expect (but perhaps it isn't), and supposing the language is Turing-powerful (which it sounds like it is), equality must be formally undecidable. Therefore, if equal programs generate identical executables, it must be that whether or not two programs generate identical executables is formally undecidable.

So either equality doesn't mean what one would ordinarily expect it to mean, or the compiler might not halt.

Equality is incomplete.

Equality is incomplete. Ultimately, he's programming with Church encodings and primitive recursion, so the language is total in a way that is more restrictive than, say, Agda's. And even in Agda, normalization does not prove that x + Zero = x if you define addition by induction on x.
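To make the x + Zero = x point concrete, here is the same situation sketched in Lean 4, which behaves like Agda on this point (definitions and names are mine):

```lean
-- Addition by recursion on the first argument:
def add : Nat → Nat → Nat
  | Nat.zero,   y => y
  | Nat.succ x, y => Nat.succ (add x y)

-- `add Zero y = y` holds by normalization alone:
example (y : Nat) : add Nat.zero y = y := rfl

-- ...but `add x Zero = x` does not normalize away; it needs induction:
example (x : Nat) : add x Nat.zero = x := by
  induction x with
  | zero => rfl
  | succ x ih => simp [add, ih]
```

So a normalizer identifies Zero + y with y, but not x + Zero with x: equality-by-normal-form is strictly finer than propositional equality.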

Paul Capriotti has pointed this out on Reddit in this subthread:

http://www.reddit.com/r/haskell/comments/2g6wsx/haskell_for_all_morte_an_intermediate_language/ckggs04

Let's just hope something interesting comes out of this in the future.

Hagino on canonicity

Hagino's thesis has a little discussion of this point as well: he points out that his CPL reduces both data and codata to a canonical form, though probably not one as canonical as you might naively hope. (The key insight is that we normally think of "equality" in terms of the mishmash of traditional laws we've become accustomed to applying; but his system is so austere that "fundamental" gadgets such as products or function application, which are baked into traditional systems as ancillary details, are library definitions in his, and so equality winds up being much finer-grained.)

No general recursion here

After a quick skim of that post, I'd be surprised if Morte were a Turing-complete language. The author starts out by saying it's "nothing more than a bare-bones implementation of the calculus of constructions," and later on gives a reference link on the topic of total functional programming.

I am worried about this because the author explicitly wishes to support both folds and unfolds, and, according to Catamorphisms and anamorphisms = general or primitive recursion, folds and unfolds together have the expressive power of general recursion—so that not every term has a normal form (right?).

I haven't tried to understand the details of that argument, but Morte expresses inductive types as folds, and it expresses coinductive types as unfolds. It doesn't mix folds and unfolds on the same type.
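In Haskell terms (a loose analogy, not Morte's encoding): a fold consumes an inductive structure, an unfold produces a coinductive one, and it is only when the two are mixed on the same data that nontermination can arise.

```haskell
import Data.List (unfoldr)

-- A fold consumes a finite (inductive) structure; it terminates because
-- the input runs out of constructors.
sumList :: [Int] -> Int
sumList = foldr (+) 0

-- An unfold produces a (potentially infinite, coinductive) structure;
-- each step is productive even if the whole stream never ends.
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))

-- Mixing them is where general recursion sneaks in: folding over an
-- unfold that never says Nothing would diverge, e.g.
--   sumList (unfoldr (\n -> Just (n, n + 1)) 0)
```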

Non-Turing-complete IL

I haven't tried to understand the details of that argument, but Morte expresses inductive types as folds, and it expresses coinductive types as unfolds. It doesn't mix folds and unfolds on the same type.

Ah, probably that's the key. I am still puzzled by the utility of a non-Turing-complete language as an intermediate language, but that's SEP—at least it gets rid of any mathematical inconsistency!

Not Turing complete

The creator of the language explicitly indicates that, as you guessed, it is not Turing complete: http://www.reddit.com/r/haskell/comments/2g6wsx/haskell_for_all_morte_an_intermediate_language/ckgh9a1.

No room for Turing machines

There are no Turing machines, for the universe is too small to hold even one potentially infinite tape; consequently, all implementations of "Turing-complete" programming languages are in fact not Turing-complete. Nonetheless their utility is substantial.

"Ah, but a man's reach …"

Surely this is an over-literal point of view? Assuming that the universe is bounded, every computer that will ever be implemented physically could be simulated by a finite-state automaton; but I think that no-one would take this to mean that we shouldn't study the various computability hierarchies.

People can study whatever they like

even paper grocery bags, as Tolkien said once. But it's not clear that there is a sufficient excuse for our programming languages to be Turing-complete when the implementations aren't and can't be.

Finite Limits

However, you also don't want the language imposing arbitrary limits on the size. 640K is enough for any program? You can always find a finite example that needs M + 1 memory. Given that we have working programs of size M, and we know we can find a working program of size M + 1, by induction we can see that we need a language that supports all M up to (and, by this argument, including) infinity.

The paradox of the heap, reversed

By that argument, we need infinitely large machines, but we don't have them. For some value of M, the fact that "we need" an M+1-capable computer just means we would like to have it, but since we can't get it, we live without it.

Compare the paradox of the heap, which goes back to classical Greek times: given a heap of sand grains, we can divide it into two heaps. By induction, then, there are heaps of a single sand grain; yet we know perfectly well that one grain doesn't make a heap.

C

C is formally a finite state machine, isn't it?

Likely

I'd hazard a guess that it is, in theory, on the grounds that a pointer is... I think... limited to the same number of bits as an integer.

A pointer can be different than an int

There is no constraint in ISO/IEC 9899 (the 1999 or 2011 version) that adds any such requirement; really, the only constraint is that some particular types of pointers must have the same alignment requirements and representation as each other (6.2.5 Types, clause 26 in 1999 and clause 28 in 2011):

A pointer to void shall have the same representation and alignment requirements as a pointer to a character type.39) Similarly, pointers to qualified or unqualified versions of compatible types shall have the same representation and alignment requirements. All pointers to structure types shall have the same representation and alignment requirements as each other. All pointers to union types shall have the same representation and alignment requirements as each other. Pointers to other types need not have the same representation or alignment requirements.

Older implementations also made use of this latitude; as an example, the Amiga used 16-bit ints and 32-bit pointers.

I can't think of or find any other constraint that would result in ints and pointers ending up with any relative bit requirement. There are some practical considerations (lots of Unix implementation code assumed, and still assumes, that pointers are the same size as ints), but there's no standards-based requirement.

Indeed

And the only standard thing that makes an int-pointer mapping canonical is intptr_t and uintptr_t, which are both optional, AFAIK.

Pointers are different from ints

This was also used in older implementations; as an example, the Amiga used 16 bit ints and 32 bit pointers.

Another example is current x86-64 boxes, where common ABIs make ints have 32 bits and pointers (obviously) 64 bits.

In some sense C defines a proper Turing (equivalent) machine

When run on a "machine" consisting of the process:

1. Run the program on the microprocessor you have, and if it runs out of memory, then

2. Wait for the next generation of microprocessor with double the pointer width and go to step 1.

C pointers and ints

I think the finiteness of memory is implied two ways:

1. Pointer arithmetic

An integer difference between any two pointers can be computed. (Is this actually formally true?)

The resulting value fits in size_t.

size_t is finite.

2. sizeof (any_pointer_type)

Similar argument.

close enough to my understanding

(I hope a C language lawyer doesn't want to engage, since I find lawyerism a bit boring, and don't want to play.)

I always assumed the infinite size of a Turing machine's tape was due to a lack of concern for any specific limit (the limit itself not being very interesting), as opposed to its being an important characteristic. Since no device can be infinite in capacity, any realization of a TM is finite. And finite-sized random-access von Neumann machines are just faster, but not much different in terms of what you can get done. Informally, most practitioners I know (who bandy about the idea of Turing-complete languages) consider only three things to be necessary to make arbitrary machines equivalent to a TM, because you can build FSMs dynamically: 1) the ability to test-and-branch to get varying behavior depending on values, 2) the ability to loop, and 3) the ability to write memory that affects the results in (1) and (2).

C's spec is a bit funny about trying to avoid specifying how big a pointer can be, and therefore how big the address space might be. Nothing would stop you from defining integer and pointer types bigger than a host processor can handle natively. You could use 128-bit pointers on a 64-bit processor; it just wouldn't be as efficient. Anyway, I have trouble seeing why finite memory size becomes a practical problem, because you would run into other practical problems first when trying to use (say) a 128-bit address space. Something unwieldy would happen with costs that are too high.

Regarding your points: yes, the integer difference of any two pointers (of the same type) can be computed, and it fits into ptrdiff_t, which is understood to be signed. (There might be ontological provisos about where the pointers came from, like addresses of objects actually allocated. The etiology of pointers is probably constrained.) While it's typically the same size as intptr_t, it doesn't have to be. Since sizeof(ptrdiff_t) is presumably constant rather than variable, I think it does imply that the address space is finite. Normally pointers are all the same size, but they may not need to be; even so, there would be a biggest pointer type whose size is constant, with a similar implication.

Mostly just verifying your assumptions as correct

There are constraints on the etiology of pointers; they have to point into the same array. Pointers don't need to be the same size, but sizeof for pointers needs to yield a constant (for each pointer type). So does sizeof for ptrdiff_t. So your base argument works.

And yeah, I agree with you that the infinity of Turing machine tape is usually uninteresting; for practical purposes, it's universal computation on the amount of memory we have available that's interesting.

The difference between any two pointers in the same array only

I think the sizeof(any_pointer_type) argument works and is a simple and good one.

Pointer difference is a bit weaker than you imply, though. Pointer subtraction is constrained to two pointers into the same array, and ptrdiff_t isn't even guaranteed to be able to hold the result. 6.5.6 clause 9 starts with:

When two pointers are subtracted, both shall point to elements of the same array object, or one past the last element of the array object; the result is the difference of the subscripts of the two array elements. The size of the result is implementation-defined, and its type (a signed integer type) is ptrdiff_t defined in the <stddef.h> header. If the result is not representable in an object of that type, the behavior is undefined.

Pointer size wouldn't matter

Pointer size wouldn't matter if there were some abstraction that could be addressed incrementally, but had no knowable, fixed size.

For instance, consider fseek(), fread(), fwrite(). If it weren't for ftell() which bounds FILE position, those 3 functions could be your infinite tape. Of course, FILE isn't part of the language, but being part of the standard library should qualify I think.

approaches to chomsky hierarchy

One idea (C's) is to have a language whose programs are formally finite state automata, but whose numerical limits are meant to be large enough that we mostly just think of it as Turing-complete.

Another idea (traditional Pascal, I think; traditional Scheme, I think) is to have a language whose programs can be formally Turing complete although the spec might mention that all implementations are expected to have limits.

Another (regexps) is to have a language whose programs are formally sub-Turing complete (e.g., FSA or LBA) and programmers using those languages must regard them as sub-Turing complete.

Turing's idea (as mentioned obliquely in other earlier comments here) offers another path:

Languages (none come to mind other than toy TM languages) whose programs are sub-Turing complete (e.g., an FSA) but which are defined to run connected to a store of some kind which, through the trick of relative addressing, can offer formally infinite storage. The ensemble (as with FSA + infinite tape) can then be formally Turing complete.

In this last case, programmers must think about their programs as sub-TM AND as defining the behavior of a TM.

That last one seems like a realistic and practical possibility to me. I have an experiment going along those lines but not one ready for publication yet.

A few examples of sub-TM

A few examples of sub-TM languages that compose with storage and time into Turing machines: Dedalus Datalog, Funloft, (maybe) Esterel.

I seriously considered such approaches for my Awelon project. I ultimately decided there isn't much practical difference between "primitive recursive" incomplete languages vs. Turing-complete ones with respect to the risk of resource exhaustion. A naive parallel implementation of the Fibonacci function can consume all available resources just as effectively as Ackermann or a divergent computation.
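The resource-exhaustion point is easy to make concrete (a Haskell sketch): the naive Fibonacci is structurally recursive and total, yet exponential in time, so totality alone buys no practical resource bound.

```haskell
-- Total and (morally) primitive recursive: every call is on a smaller
-- argument, so this provably terminates...
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- ...but it makes an exponential number of calls, so even modest inputs
-- (never mind a naively parallelized version) exhaust time and memory
-- long before any question of nontermination arises.
```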

If we had something like 'big-O' types, we might be able to achieve reasonable limits. It might be an interesting area to explore. I didn't want to break that much new ground with Awelon.