"The Anatomy of a Loop"

What's better: recursion or iteration? This topic has been discussed to death on LtU:

Olin I-did-it-all-by-myself Shivers weighs in [PDF, 194K]:

In call-by-value functional languages such as ML or Scheme, we typically write loops using tail-recursive function calls. This is actually a terrible way to express program iteration, and it's not hard to see why. As was popularised by Steele, a tail call is essentially a "goto that passes arguments." So writing loops with tail calls is just writing them with gotos. Yet, it has long been accepted in the programming-language community that goto is a low-level and obfuscatory control operator, a position stated by Dijkstra's "Goto considered harmful" letter.


Yes

For the recursion vs. iteration debate, I would be most happy to write an "Exclusion of mechanisms considered harmful" paper that lays out the point that in some cases recursion makes sense, and in others iteration makes sense. But really, anyone who has used both techniques in non-trivial ways can see that this is true without me having to write a paper spelling it out.

As others have pointed out, tree structures are very nice to traverse recursively, and in the absence of performance requirements, I typically choose to do so. On the other hand, I know that exactly for performance reasons, most quality C++ STL implementations choose a red-black tree as the backing for std::set and std::map, and select iteration-based algorithms to deal with them, even though the recursive ones are "more natural".

So the question of whether iteration or recursion is better is like asking whether arrays or lists are better. They each have strengths and weaknesses, and like every other tool in a programmer's toolbox, it is the programmer's responsibility to choose the best one for the task at hand.

Tail recursion sacrifices

Yeah I'd agree - recursion is wonderful but you have to sacrifice a lot of its benefits to make loops tail-recursive and perform well. E.g. rather than a high-level declarative definition of the factorial function:

let rec fac n =
    if n == 0 then 1
    else n*fac(n-1)

We end up with something that looks just like a trivial syntactic transformation applied to the iterative algorithm:

let fac n =
    let rec loop n a =
        if n==0 then a
        else loop (n-1) (a*n) in
    loop n 1

Is there more that compilers could do to refactor simple recursive integer-based loops to tail-recursive form automatically? I don't know much about this but am curious.

In any case where we have f x

In any case where we have f x = x*f (x - 1) the compiler would need to know about the algebraic properties of * and -, which could be any functions whatsoever. Also, the non-tail recursive recursion schema could be much more complex than this, but I think that it illustrates the problem.

Worse, if the compiler exploits algebraic properties to reorder operations, we would lose the ability to force a given scheme of evaluation, such as by the use of monads, or in fact by using the recursion schema above.

Hmm... yeah I can see it woul

Hmm... yeah I can see it would be hard/impossible to do in general. I was just thinking about common special cases though - I think this one would work without using any special properties of g, or affecting the order of evaluation:

let rec f n = 
    if n == z then a
    else g n (f (n-1))

transforms into

let f n =
    let rec loop counter acc =
        if counter == n then acc
        else loop (counter+1) (g (counter+1) acc)
    in loop z a

Perhaps other common special cases could be dealt with too, just as a performance optimisation in the compiler?

(Sorry just thinking aloud th

(Sorry just thinking aloud there - perhaps some compilers do this already?)

Automatic conversion to continuation-passing?

A compiler should be able to convert to continuation-passing style automatically. Then the function,

let rec fac n =
    if n == 0 then 1
    else n * fac (n - 1)

becomes

let rec fac n k =
    if n == 0 then k 1
    else fac (n - 1) (\x -> k (n * x))

Of course, this is trading stack usage for heap usage.
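
For concreteness, the same sketch as runnable OCaml; each (fun x -> ...) closure that piles up unevaluated is exactly the heap usage mentioned above:

let rec fac n k =
    if n = 0 then k 1
    else fac (n - 1) (fun x -> k (n * x))

(* started with the identity continuation: fac 5 (fun x -> x) = 120 *)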

Continuation Passing Style

Yes, the CPS transform turns programs into tail recursive programs. This was the basis for Guy Steele's Rabbit compiler, one of the first compilers for Scheme. Essentials of Programming Languages by Friedman, Wand, and Haynes is a good reference.

However, I personally don't understand the full point of the CPS transform yet. After all, if you naively CPS the factorial function, your control context still grows linearly with n. You need to be smart enough to apply the associative property of machine integers in the right place to reduce the control context. (Well, ignoring the fact that the number of bits in (fac n) grows as n * log n, so the control context still grows linearly. It's just that the leading coefficient is smaller)

What CPS is not

You (partially) may not understand the point of the CPS transform because you think it is supposed to be an optimization. It is not. The point is that it makes many things more explicit and easier for the compiler to manipulate and it more directly maps to lower-level forms of code.

Looping in APL

From the article...

APL [7] itself is an interesting model: it essentially replaces "time-like" loops with "space-like" aggregate operations on multi-dimensional arrays. This is a beautiful model; the cost of its elegance and simplicity, however, is the language's complete abandonment of any attempt to guarantee that computations can be performed on-line, without allocating potentially enormous intermediate collections.

Anyone care to explain (or point to an explanation of) an example of how looping in APL works (or at least the "time/space"-like issue)?

APL looping is implicit

APL has no explicit loop construct at all except conditional GOTOs. However, APL is dynamically typed, and a variable can contain a scalar (number or character) or a single- or multi-dimensional array (up to an implementation-defined maximum number of dimensions).

Terminology: the number of dimensions is the rank, and the vector specifying the maximum value of each dimension is the shape.

For example, A + B will simply add numeric scalars, but when applied to arrays of the same shape, it implicitly delivers another array whose elements are the sums of the respective elements of A and B. Similarly, A = B will deliver zero for falsehood or one for truth when applied to scalars, but when applied to identically shaped arrays delivers an array of 1s and 0s for corresponding equal and non-equal elements.
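
To sketch the idea outside APL (in OCaml, with invented operator names), lifting + and = point-wise over whole arrays looks like this; APL does the same implicitly, for arrays of any rank:

let ( +: ) = Array.map2 ( + )                                (* lifted + *)
let ( =: ) = Array.map2 (fun a b -> if a = b then 1 else 0)  (* lifted = *)

let sums = [| 1; 2; 3 |] +: [| 10; 20; 30 |]   (* [| 11; 22; 33 |] *)
let eqs  = [| 1; 2; 3 |] =: [| 1; 0; 3 |]      (* [| 1; 0; 1 |]    *)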

Not all operators work like this: the matrix-divide operator, for example, does not. APL also provides a number of higher-order functions, though they cannot be applied to user-defined functions. (This condition is relaxed in APL's successor J.)

But...

But that sounds like map/fold, which the original author contends isn't good enough. I'm thinking he must be referring to something that's different enough to earn the "elegant" label.

It is similar

There are similarities. The two main differences I can think of are that APL is built around the notion of multi-dimensional data structures (not just lists), and provides many specific operators that handle dimensionality "correctly".

The links and explanation here might be useful.

Aren't all loops constructed with goto?

Olin Shivers writes, "As was popularised by Steele, a tail call is essentially a 'goto that passes arguments.' So writing loops with tail calls is just writing them with gotos."

Sure, but how is that different from a while loop? Eventually, all control constructs are implemented by jump instructions in the machine code. Tail calls may be similar to goto from an implementation standpoint, but they're no different from a function call in terms of the language itself.

Comprehensibility

A "while" loop has two visually distinct parts: a body and a termination condition.

A "for" loop has more parts: initialization, increment, termination condition, body.

OTOH, a tail-recursive function doesn't have a standard, visually distinct way of code organization. Yep, it's functionally equivalent, and crystal clear in simple cases (like factorial or Fibonacci numbers), but not as clear in more general cases.

For example, if we're iterating over a list, a tail-recursive function is fine. But when we need to increment a counter along the way, we need to change the code in a non-obvious way. Rinse, repeat. That's exactly why we need more structured forms of looping.

Proper loops

The point is that a proper loop runs from top to bottom and then back again, and maybe exits at one or more points in the middle. Anything other than that is spaghetti code, and is why GOTO was considered harmful in the first place.

Scheme actually does provide the DO macro that supports this kind of loop, though for whatever reason WHILE and UNTIL were never standardized. However, if you can't contort what you want into what DO provides, you are left with either making your own syntax extension (as Shivers has done) or falling back on raw tail recursion, little different from Minimal BASIC's GOTO and IF...GOTO statements.

It's not how loops are implemented (it's all GOTOs at the machine level, certainly). It's how the structures of the language conceptualize them.

How is that different from tail-call recursion?

The point is that a proper loop runs from top to bottom and then back again, and maybe exits at one or more points in the middle. Anything other than that is spaghetti code, and is why GOTO was considered harmful in the first place.

Isn't the same true of a recursive function with tail calls? The whole point of a tail call is that it's the "last" thing in the function. There are places in the function where such a call is a tail call, and other places where it is not.

It's not how loops are implemented (it's all GOTOs at the machine level, certainly). It's how the structures of the language conceptualize them.

But the only difference between a tail call and a regular function call is at the implementation level. At the conceptual level, they are the same. Do function calls lead to spaghetti code?

Shivers's argument seems to be (1) tail calls are like goto, (2) goto is low-level and confusing, therefore (3) tail calls are low-level and confusing. I don't see how that argument doesn't apply to every control construct.

Difference

I won't speak for Shivers, but the difference I see is that tail calls make iterative code look like recursive code. It's a bit of syntactic dishonesty. It's basically saying: "This looks like a function call, but I completely intend it to iterate." If the language forces you to do that, so be it. But is there really a language that doesn't have any iterative control structures at all? And is the tail call really more expressive than a while loop?

But is there really a languag

But is there really a language that doesn't have any iterative control structures at all? And is the tail call really more expressive than a while loop?

Yes (e.g. Haskell), and it depends on your definition of expressive.

Finite State Machine

An example of tail call that is more expressive than while is programming a Finite State Machine:

InitialState(S) -> StateA(S)
StateA(S) -> do something; StateB(S')
StateB(S) -> do something; StateA(S')
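
A minimal runnable rendering of that shape in OCaml (the payload and the exit test are invented here, since the machine above runs forever):

let rec state_a s =
    if s <= 0 then s           (* hypothetical exit condition *)
    else state_b (s - 1)       (* do something; go to StateB *)
and state_b s =
    state_a (s - 1)            (* do something; go to StateA *)

let initial_state s = state_a s   (* every transition is a tail call *)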

Not buying

Your state machine runs forever, unless one of the "do something" processes causes it to terminate. And if it does, then it is exactly the condition of a while loop, which would seem to me to be just as sensible a way to write that.

Maybe I misused "expressive"

I prefer the tail call version because it can have local parameters and uses fewer comparisons than a while that needs a case to compare a variable "next_state" to all states.

?

I don't understand. The purpose of a while loop in the code you cited would be to terminate the state machine. I assume at some point, you intend it to reach a goal state? Do you have some mechanism that allows it to reach that goal state without making a comparison? If so, it must be a pretty trivial machine. If the machine does something "interesting", then I assume it enters various states conditionally, and the condition by which it enters a goal state would simply be encoded in the while condition. Otherwise, that condition must appear somewhere else. The problem is that you didn't actually describe a *concrete* machine that solves some problem, no matter how contrived or trivial. So it's easy to paper over details like goal states and branching.

I didn't mean eliminate comp

I didn't mean to eliminate comparisons inside the "do_something"s. Just that in code like this:

while (next_state != FINAL) {
    case (next_state) of
        1: do_something; next_state = i;
        2: do_something; next_state = j;
        ...
        N: do_something; next_state = k;
}

after each "do_something", it would need to compare "next_state" with possibly N values (the states ids) while the tail call version would just jump to the next_state.

Apples?

You aren't comparing apples to apples. In the functional example you gave, you only had two states. So N == 2. Also, I wouldn't implement a state machine like that, unless I really needed the performance. I would choose an associative map to get the next state:

while (state != FINAL)
{
    state = state_map[state].do_something();
}

I like my version better because you can actually see it checking for a final state. Also, I don't see how the mutually recursive version scales as nicely.

Ok. Using just 2 states was a

Ok. Using just 2 states was a bad example, but not every FSM needs a final state (some embedded software, for example), and you had to add some indirection using a data structure to obtain the same effect as the tail call. That said, I agree that using an associative map may scale better.

"Expressive"?

And is the tail call really more expressive than a while loop?

Maybe it would be better if we just didn't use the term "expressive" when discussing individual syntactic constructs. The notion of "expressiveness" is vague at best. And the most formal definition of expressiveness that I am aware of (Felleisen's On the Expressive Power of Programming Languages) operates at the level of entire languages, rather than individual syntactic constructs.

So, when you ask your question about tail-calls and while-loops, I must ask "what do you mean by 'more expressive than'?". Do you mean "able to express a greater variety of control semantics"? Do you mean "able to express some operation in a way that is closer to a given domain"? Or do you mean something else completely?

Lifting expressiveness (or elevating it?)

the most formal definition of expressiveness that I am aware of (Felleisen's On the Expressive Power of Programming Languages) operates at the level of entire languages, rather than individual syntactic constructs.

If one can compare the expressiveness of entire languages L1 and L2 using a relation L1 <== L2, then it's possible to define a ternary relation C1 <=L= C2 on individual constructs (where a construct is an endofunctor on languages), where C1 <=L= C2 <=> C1(L) <== C2(L). And this ternary relation (or a language-indexed family of binary relations) can be turned into a binary relation by using, e.g., a universal quantifier, or an existential one.

Can we agree that in such setting it makes sense to say "construct C1 is more expressive than construct C2 for all languages satisfying P"?

Lifting expressiveness

Can we agree that in such setting it makes sense to say "construct C1 is more expressive than construct C2 for all languages satisfying P"?

Sure, that sounds reasonable to me. I just rarely see the context ("for all languages satisfying P") made explicit in discussions of "expressiveness".

Tail calls aren't just for recursion/iteration

Perhaps we're speaking past each other. Here's my understanding:

1. A tail call is the final/outermost function call in a function's definition. So, for the function f(x) = g(h(f(x))), the call to g is the tail call.

2. Tail-call optimization is a compiler technique that implements a function's tail call as a jump instead of a regular function call, thus avoiding some stack manipulation. This is purely an implementation issue; there is no semantic difference between an optimized tail call and any other function call.

3. You can write recursive functions so that the recursive call is in the tail position, thus making the recursion as efficient as iteration when the compiler supports tail-call optimization. (I think such functions are called "tail recursive".) This sometimes results in less understandable code.

In the absence of side-effects, the major difference between a loop and a tail-recursive function is that the latter requires you to be explicit about the loop variables.

Of course

But Shivers was not talking about general tail calls, just recursive tail calls, in the context of looping constructs:

In call-by-value functional languages such as ML or Scheme, we typically write loops using tail-recursive function calls. This is actually a terrible way to express program iteration, and it's not hard to see why. As was popularised by Steele, a tail call is essentially a "goto that passes arguments." So writing loops with tail calls is just writing them with gotos. [emphasis mine]
And my point is that the only reason people write recursive algorithms in tail-call form is for efficiency reasons. In all but the most trivial cases, the tail call form is not the most syntactically economical. By that I mean that for most languages, the shortest algorithm not using tail-call form will have fewer tokens than the shortest algorithm using tail-call form. To put it in simple terms, tail-call form is *ugly*.

Now, I also take exception to the common complaint about loop variables. The only time I use "loop variables" is when I want to perform a counted iteration. And I think it's good modular style to make that count a part of the loop's structure, rather than embedding it in a function call. If I only need to test for a condition, then there is no extraneous loop variable. I would, in particular, like someone to show me a loop where the "explicit loop variable" is more elegantly hidden within a recursive function call.

Finally, I would like to point out that tail-recursive functions may not require you to have explicit *loop variables*, but they *do* require explicit *accumulator arguments*, so I don't see how you have gained a single thing. In no instance can I imagine that it's simpler to convert a naturally non-tail recursive algorithm into a tail-call form than it is to write the equivalent iterative form, but I'm more than willing to be shown an example.

Example

In no instance can I imagine that it's simpler to convert a naturally non-tail recursive algorithm into a tail-call form than it is to write the equivalent iterative form, but I'm more than willing to be shown an example.

That's what operators like fold and unfold are for: they abstract common patterns involving accumulators. Take factorial as an example — citylight gave the usual recursive definition and an accumulator-passing-style version in an earlier post. But once we recognize that the accumulator-passing-style version is just a standard fold, we can abstract it out and implement factorial as follows (using Haskell):

fac n = foldl (*) 1 [1..n]
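
The same one-liner works in, say, OCaml, where the accumulator-passing tail recursion hides inside List.fold_left:

let fac n = List.fold_left ( * ) 1 (List.init n (fun i -> i + 1))

(* fac 5 evaluates to 120; fac 0 to 1 *)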

Of course, foldl is an iterative operator, which can be implemented in various ways, whether using tail calls or for loops (as Neil Madden showed in Tcl recently). However, the point is that comparing tail calls to higher-level constructs is like comparing atoms with molecules.

I think that Shivers's paper is, in a sense, being taken out of context here: to justify his new control construct, he's chosen to "assassinate his predecessor", in this case tail calls, but I think he's assuming that the underlying rationale for tail calls is understood by his audience.

[edit: corrected attribution of Tcl fold example]

Non-example

I don't see a recursive call there, let alone a tail-recursive call. Yes, I appreciate the value of higher-order functions. Even C++ has for_each() (general fold), accumulate() (which would be the appropriate function for the factorial example, as it keeps a running accumulator), count(), count_if(), transform() (map), etc. But I think my point still holds: tail-recursive form is unnatural and awkward. That being said, I do prefer to use higher-order functions even in C++, and I've even been known to write a functor (C++-speak for "function object", not a mathematical or FP functor) or two for just that purpose even when the equivalent iterative loop is shorter.

It's in the fold

I don't see a recursive call there, let alone a tail-recursive call.

It's in the definition of fold:
foldl f a [] = a
foldl f a (x:xs) = foldl f (f a x) xs

Re: Non-example

I don't see a recursive call there, let alone a tail-recursive call.

Right, that's the point. The tail-recursive call has been abstracted into the definition of fold. I'm not sure anyone is suggesting that direct use of tail calls are the answer in every case, which perhaps makes your objection to them moot?

What I'm pointing out is that given function calls which support tail call optimizations, you can use that to implement other control structures, whether recursive or iterative. Given that, the fact that some solutions involving direct tail calls are less simple than they might be is irrelevant: just use the appropriate control construct for the job in a given case. In many functional languages, it'll still be tail calls under the hood of the control construct you use.

But I think my point still holds: tail-recursive form is unnatural and awkward.

Don't you mean that accumulator-passing style tail-recursive form, used to perform iteration, is unnatural and awkward compared to using an iterative construct tailored to the kind of iteration you're doing? But when put like that, where's the surprise?

To restate and summarize, it seems to me that your objection here is based on an assumption that someone is claiming that tail calls should always be used to directly implement iteration. I'm not sure who's claiming that, and the existence of operations like fold and that MLton for loop implementation refute that claim. Can you clarify your point?

Here's an example from a class I'm TAing

It's a backtracking algorithm that searches for knight's tours over an n x m chessboard. It's in tail-recursive form, right now, and I don't think there are any iterative solutions as clean.

(* Compute the legal knight moves from (i,j) on an m x n board *)

let moves (m, n) (i, j) =
  List.filter
    (fun (x, y) -> 1 <= x && x <= m && 1 <= y && y <= n)
    [ (i+1, j+2); (i+1, j-2); (i-1, j+2); (i-1, j-2);
      (i+2, j+1); (i+2, j-1); (i-2, j+1); (i-2, j-1) ]

(* ktour does the backtracking: path holds the squares visited so far
   (most recent first), the next argument is the list of moves still to
   try from the current square, and k is the failure continuation. *)

let rec ktour ((m, n) as size) path candidates k =
  match candidates with
  | [] ->
      if List.length path = m * n then Some (List.rev path) else k ()
  | p :: ps ->
      if List.mem p path then
        ktour size path ps k
      else
        ktour size (p :: path) (moves size p) (fun () -> ktour size path ps k)

(* The tour function just initializes things, and gives 
an initial failure continuation that returns None, 
representing not finding a path. *)

let tour size pos =
  ktour size [pos] (moves size pos) (fun () -> None)

Tail recursion

"It's a backtracking algorithm that searches for knight's tours over an n x m chessboard. It's in tail-recursive form, right now, and I don't think there are any iterative solutions as clean."

The recursions in ktour are all in tail context but as far as I can tell you still have linearly growing control context? (The chain of continuations being constructed.) If we're allowing examples of tail recursive functions with linearly growing control context then there are plenty of simple examples to draw on. For instance, CPS transform the usual recursive solution to the Towers of Hanoi problem.
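
For instance (a sketch in OCaml, counting moves rather than printing them), the CPS-transformed Hanoi is all tail calls, yet still builds a linearly deep chain of continuations:

let rec hanoi n k =
    if n = 0 then k 0
    else hanoi (n - 1) (fun a -> hanoi (n - 1) (fun b -> k (a + b + 1)))

(* hanoi 3 (fun x -> x) = 7; the pending closures play exactly the
   role the stack frames played before the transform *)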

The Unbearable Expressivity of Function Calls

(Hey, if people get to reuse the title "X Considered Harmful" ad infinitum, I don't see why Milan Kundera can't get in on the action from time to time.)

But is there really a language that doesn't have any iterative control structures at all?

Functional languages tend to have no primitive iterative control structures. That doesn't mean they don't have iterative control operators or structures defined in terms of some primitive construct, such as function calls (including tail calls).

And is the tail call really more expressive than a while loop?

Put it this way: you can implement while loops in terms of function calls. You can't implement function calls in terms of while loops (or if you can, I don't think I want to use the resulting language).
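
A minimal OCaml sketch of the first direction (the names are mine; state is threaded explicitly since there's no mutation):

let rec while_ cond body state =
    if cond state then while_ cond body (body state) else state

(* sum 1..10:
   while_ (fun (i, _) -> i <= 10) (fun (i, s) -> (i + 1, s + i)) (1, 0)
   evaluates to (11, 55) *)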

The proper response to the claim that the tail call is a low-level construct boils down to "duh". That's kinda the whole point: it's a primitive feature (actually part of the implementation support for function calls) on top of which a wide variety of mechanisms can be built, including: iterative operators such as left fold and unfold (see A tutorial on the universality and expressiveness of fold); something like for loops in MLton; and Shivers' new looping mechanism.

It's a bit of syntactic dishonesty. It's basically saying: "This looks like a function call, but I completely intend it to iterate."

There's no dishonesty - you only think there is, because you're assuming that function calls can't or shouldn't "normally" be used to iterate. Why do you assume that? From a functional perspective, mainstream languages have an arbitrary restriction on function calls, in that they consume space even when there's no logical need for them to do so.

P.S. Once again, Lambda, the Ultimate GOTO is relevant here.

The Irreducible Cost of Abstraction

That doesn't mean they don't have iterative control operators or structures defined in terms of some primitive construct, such as function calls (including tail calls).

This was, of course, my point. FP coders don't really need to write their algorithms in tail-recursive forms if they don't want to.

Put it this way: you can implement while loops in terms of function calls. You can't implement function calls in terms of while loops (or if you can, I don't think I want to use the resulting language).

Ahh, but that's taking things out of context. Obviously, while loops are not a *general* flow control construct. The rhetorical question was simply regarding *looping*, which was the original topic of this thread (but seems to slip away at every opportunity).

There's no dishonesty - you only think there is, because you're assuming that function calls can't or shouldn't "normally" be used to iterate. Why do you assume that?

Because a function call is an abstraction boundary. When you call a function, you are saying: "I'm now going to go solve a subproblem that is not related to the current task, which is why it exists as a separate piece of code." When you iterate, you say: "I'm now going to solve the next piece of the current task, but it looks just like the last piece, so I am reusing the code." That isn't an argument against recursive calls in general. In general, a recursive call solves a subproblem that is not trivially reducible to iteration (like, say, mergesort). However, a tail-recursive call is explicitly designed to solve a subproblem that is trivially equivalent to iteration, yet comes dressed in the clothing of abstraction. When I see an algorithm that has been contorted from a beautiful recursive form into a tail-call form, I think: "Wow. Those poor guys. Just think what they could do with a little state and a good looping construct." ;>

From a functional perspective, mainstream languages have an arbitrary restriction on function calls, in that they consume space even when there's no logical need for them to do so.

Insofar as they set up function prologs/epilogs without performing tail-call optimization, I agree. But if you mean to say that function calls can consume *zero* space, that's where I must disagree. The only way a function can consume zero space is if it passes no arguments or overwrites existing data. Since the latter is strictly verboten in pure FPs, that implies that function calls will almost always incur some overhead (though I would hope considerably less than the 393 bytes that the PL/I compiler emitted!).

function-call space overhead

But if you mean to say that function calls can consume *zero* space, that's where I must disagree. The only way a function can consume zero space is if it passes no arguments or overwrites existing data. Since the latter is strictly verboten in pure FPs, that implies that function calls will almost always incur some overhead

Functional programming doesn't use mutable variables, yes, but that doesn't mean the -implementation- doesn't!

So zero-overhead function calls are fine.

Pure->Mutable?

I would be curious to see how you would automate a transformation from pure to mutable state to effect zero-overhead function calls. I'm sure it's practical in some trivial circumstances, but I'm more skeptical about general-case functions.

Is mutual recursion also iteration?

However, a tail-recursive call is explicitly designed to solve a subproblem that is trivially equivalent to iteration, yet comes dressed in the clothing of abstraction.

Are mutually recursive functions trivially equivalent to iteration?

My first guess is that tail-call optimized mutually recursive functions are significantly more efficient than their matching iterative implementation.

A quick google search didn't turn up any results one way or the other.

Theorists

The problem with academics is that they often forget to look at concrete instances out of fear that they might overgeneralize them. Show me your favorite mutually recursive functions in tail-call form and we can examine your claim empirically.

[Admin]

Wasn't it possible to raise your objection without trying to be insulting first? Please refrain from this kind of behaviour on LtU.

Duly noted

However, several times I have challenged people to provide concrete instances of their claims, and nobody seems interested in doing so. I find it frustrating to debate generalities, because the devil is so often in the details. I presume we are all programmers here, so it should be relatively easy to provide a short code snippet to illustrate a potentially contentious claim. I guess I was trying to bait people into taking a position and defending it. ;>

Re: Is mutual recursion also iteration?

It doesn't seem to be very difficult to simulate mutual (tail) recursion:

  local
     fun f x = ... g x' ...
     and g x = ... f x' ...
  in
     val h = f
  end

using loops:

  fun h x =
    let
       datatype state = F | G
       val state = ref F
       val x = ref x
    in
       loop
          case !state of
             F => ... state := G ; x := x' ; continue ; ...
           | G => ... state := F ; x := x' ; continue ; ...
       end
    end
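
Since loop/continue above are pseudocode, here is the same simulation as runnable OCaml (placeholder bodies, invented termination); the two mutually recursive functions collapse into one loop over an explicit state tag:

type state = F | G

let h x0 =
    let state = ref F and x = ref x0 in
    while !x > 0 do
        (match !state with
         | F -> state := G; x := !x - 1   (* the "... f x' ..." body *)
         | G -> state := F; x := !x - 1)  (* the "... g x' ..." body *)
    done;
    !x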

implementation != semantics


But if you mean to say that function calls can consume *zero* space, that's where I must disagree. The only way a function can consume zero space is if it passes no arguments or overwrites existing data. Since the latter is strictly verboten in pure FPs, that implies that function calls will almost always incur some overhead

I think you are confusing the implementation of the language with the semantics of the language: you don't change any mutable state at the semantic level, but you reuse the stack space of the calling context (control information, i.e. the return address, and local variable bindings) because they cannot be referenced anymore after the call returns.

This is quite easy to prove as the only thing that remains to be done after the call is to leave the context.

P.S. I'm not sure that "tail-call optimization" is actually a speed optimization, as you have to arrange things on the stack before you can actually execute the call?

Good point

Actually, it is both a space and speed optimization, because, as you say, you can not only reuse the stack space, you can actually perform argument updates in-place. So you save yourself the trouble of setting up parameters, and you consume zero space. I think that's pretty clearly an "optimization" over the general case where you have to initialize arguments explicitly and in their own space. However, this still only covers the tail-call case. The general case still requires argument copying and preservation of the local context (unless you have some surprising new theorems I have not been apprised of!).

When I see an algorithm that

When I see an algorithm that has been converted from a beautiful recursive form into a tail-call form I think: "Wow. Those poor guys. Just think what they could do with a little state and a good looping construct."

I always find this argument fairly nonsensical, though not completely surprising. It has the form: recursive forms (of iterative functions) are pretty, but practically they need to be written in tail-recursive form, but tail-recursive form looks mostly like the imperative iterative form and is much uglier than the recursive form; therefore we should always use an ugly imperative iteration.

Tail recursive form is ugly... because it is rather similar to the imperative iterative solution. I fail to see how being ugly all the time and reducing flexibility is better.

Finally, as has been said, you can simply -implement- that "good looping construct" with tail calls and macros/higher-order functions. The converse is not true as a local transformation, I'm fairly certain.

Been There. Done That.

Is it just me, or does this whole optimized-tail-calls-are-pure-evil discussion sound awfully familiar?

Be declarative

The third alternative is to be declarative. I have to admit to not having read the referenced paper yet, but I suspect that this is what it is getting at. A loop macro enables you to write loops in a declarative style, i.e. what to do rather than how to do it. This is much clearer than iterative constructs like while where you have to mutate state variables by yourself. And by induction also clearer than tail recursion. :-)

I like recursion better.

Recursion can simplify some problems, especially those that need a stack; whereas in loops you have to worry about the iterator or iterators, their initial value, their ending value, etc.

Stupid question

What's the difference between iteration and recursion? Is it the syntactic difference of referring to its own definition, or is it an execution property of having control require additional space? I've also seen the terms coiteration and corecursion in someone's (maybe Pardo's) papers but don't have a handle on those words either. Any pointers towards a good definition?

Delaying binding as long as possible

It feels to me like while loops have one definite disadvantage. If I write:

while (x > 0) {
  accum = accum + f(x) + y;
  y = accum / x;
  print y;
  x = next(x);
}

I have lots of opportunity to do things between updates. I'm changing state whenever I like. If I have to build my iteration on top of function calls, then I delay my state changes as long as possible, and do them all at once:

if (x > 0) {
  print y;
  next(accum+f(x)+y,
       accum/x,
       next(x));
}

Sadly, I can't find Alan Kay's quote on delaying binding and state change as late as possible. But either way, a tail call is not a goto. It is exactly one destructive update for each local variable, no changes to non-local variables, and a goto, all as an atomic operation.

Higher order == good

Except when it's not. If your loop body can be easily expressed by a call to a generic higher-order function, great! But a while loop is useful exactly when it is *not* convenient to use a higher-order function (because you want to treat some values in the sequence specially, for instance). A trivial example involves I/O:

do
    filename = showFileSelectDialog();
while (filename != '');

What would be the most appropriate higher-order function to eliminate this evil use of iteration? It's not a sequence, so fold/map/filter aren't appropriate. And it's not a container, so algebraic data types won't help.

How to eliminate evil iteration

If some particular kind of iteration is really important in your application domain, then I don't see what would be wrong with providing a higher-order function for the purpose. For example, you could provide something like

fun forgetUntil (pr, gen) =
    let
       val r = gen ()
    in
       if pr r then
          r
       else
          forgetUntil (pr, gen)
    end

in a library and then use it like

val filename =
    forgetUntil (fn n => n <> "", showFileSelectDialog)

(Note that the exit condition is the opposite of what you used.)

What if it's not really important?

Then you either have to spell it out the way you did with your custom function, or you have to simulate the iterative construct by writing the function. Neither is particularly bad, but my point is that sometimes you don't want to construct a compressed-air driven hammer to pound a finishing nail. At least not if you have a small hammer already handy.

Levine the Genius Tailor

Well, I don't think that there can be a language that provides just the right special syntax for every possible situation.

One thing (not the only thing) that always annoys me about builtin loop constructs like for, while, and do-while (in C++ and similar constructs in other languages) is that they often do not do exactly what I want. For example, it is not extremely rare to need to circumvent the ordinary exit condition and then use break or continue in the middle of the loop:

  while (true) {
     ...
     if (condition) break;
     ...
  }

It just looks ugly to me. Many of the mainstream languages that have several builtin looping constructs aren't expressive enough to really implement your own looping constructs in a safe and easy-to-use form. I know I've spent an insane amount of time trying to squeeze my round algorithms into the square holes provided by some of those languages. With higher-order functions and tail-call optimization I don't have such problems as I can get exactly what I want.
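
For instance (an OCaml sketch, with invented names), a loop with an exit in the middle is just one more higher-order function away:

type ('s, 'r) step = Continue of 's | Break of 'r

let rec loop_mid body st =
    match body st with
    | Continue st' -> loop_mid body st'
    | Break r -> r

(* first square exceeding 50:
   loop_mid (fun i -> if i * i > 50 then Break i else Continue (i + 1)) 1
   evaluates to 8 *)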

Let me quote a story from the book The Psychology of Computer Programming by Gerald Weinberg (pages 210-211 in the silver anniversary edition):

PROGRAMMING LANGUAGE DESIGN

  It is impossible to begin a discussion of psychological principles of
programming language design without recalling the story of ``Levine the
Genius Tailor.'' It seems that a man had gone to Levine to have a suit
made cheaply, but when the suit was finished and he went to try it on, it
didn't fit him at all. ``Look,'' he said, ``the jacket is much too big in
back.''
  ``No problem,'' replied Levine, showing him how to hunch over his back
to take up the slack in the jacket.
  ``But then what about the right arm? It's three inches too long.''
  ``No problem,'' Levine repeated, demonstrating how, by leaning to one
side and stretching out his right arm, the sleeve could be made to fit.
  ``And what about these pants? The left leg is too short.''
  ``No problem,'' said Levine for the third time, and proceeded to teach
him how to pull up his leg at the hip so that, though he limped badly, the
suit appeared to fit.
  Having no more complaints, the man set off hobbling down the street,
feeling slightly duped by Levine. Before he went two blocks, he was
stopped by a stranger who said, ``I beg your pardon, but is that a new suit
you're wearing?''
  The man was a little pleased that someone had noticed his suit, so he
took no offense. ``Yes it is,'' he replied. ``Why do you ask?''
  ``Well, I'm in the market for a new suit myself. Who's your tailor?''
  ``It's Levine---right down the street.''
  ``Well, thanks very much,'' said the stranger, hurrying off. ``I do
believe I'll go to Levine for my suit. Why, he must be a genius to fit a
cripple like you!''
  Would it be inappropriate to concoct a version of this story called
``Levine the Genius Language Designer''? The first problem in discussing
language design is that we do not know the answer to that question. We do
not know whether the language designers are geniuses, or we ordinary
programmers are cripples. Generally speaking, we only know how bad our
present programming language is when we finally overcome the psychological
barriers and learn a new one. Our standards, in other words, are shifting
ones---a fact that has to be taken into full consideration in programming
language design.

Lisp, the programmable programming language

One of the major elements of the Lisp tradition (including both Scheme and Common Lisp) is the ability to become a language that "provides just the right special syntax for every possible situation". By constructing appropriate macros, you can add whatever syntax you need. Indeed, it is common for Lisp programs to be written almost entirely in macros constructed for the purpose at hand.

(My title is due to John Foderaro, and might more correctly be written "Lisp, the #1=(programmable . #1#) programming language".)

Scope is everything

Well, I don't think that there can be a language that provides just the right special syntax for every possible situation.

Perhaps not but you have to give Olin Shivers credit for trying. His looping vocabulary covers an awful lot of cases, including this one:

while (true) {
   ...
   if (condition) break;
   ...
}

which would be written something like this (translating from Scheme to C):

loop {
  ...
  until condition;
  ...
}

I find that quite pretty, myself. Has anyone in this thread actually read the referenced paper? I found it fascinating, particularly the real focus of the paper, which is a (to me) completely new idea about scoping. ("Scope is everything" -- a remark Dr. Shivers attributes to Dan Friedman.) Maybe I should start a new topic to discuss that...

Breaking in the middle

(In the great tradition of /., I haven't finished reading the paper. Personally I'd judge the difference between the two examples as negligible - the issue for me is that the condition is buried in the middle of possibly lots of other code. Getting it out of there is what is important, even using a gross if test so that you could get back to the loop check... Dr. Shivers - what a terrific name, bested only by Bjorn Lisper - starts off talking about the "software engineering perspective".)

If you need a mid-test, you need one

I personally don't think that "gross if tests", or for that matter termination variables (do {...} while (!done);), actually lead to better maintainability. It is even harder to see where done is assigned to than scanning for loop termination keywords, particularly if you have a reasonable syntax highlighter. As an experiment, I modified Lua to implement the unless/until/where/while statements suggested by "Anatomy of a Loop". (I had previously implemented something like his subloops, which I think are the most interesting part of the paper, but I'll leave that for another topic.) A small sample:

Lua:

function command_shell(stream, handlers, endword)
  for line in stream:lines() do
    if line == endword then break end
    line = line:gsub("^%s*", "")
    if line ~= "" and not line:find("^#") then
      local cmd, rest = line:match("(%S+)%s*(.*)")
      if handlers[cmd] then
        handlers[cmd](cmd, argsplit(rest))
      else
        print("Unrecognized command "..cmd)
      end
    end
  end
end

Modified Lua:

function command_shell(stream, handlers, endword)
  for line in stream:lines() do
    until line == endword
    line = line:gsub("^%s*", "")
    unless line == "" or line:find("^#")
    local cmd, rest = line:match("(%S+)%s*(.*)")
    if handlers[cmd] then
      handlers[cmd](cmd, argsplit(rest))
    else
      print("Unrecognized command "..cmd)
    end
  end
end

Not a huge difference, perhaps, but I find the second one quite a bit clearer; for me, for example, until line == endword is immediately understandable in a way that if line == endword then break end isn't.

Yes, and I think it's a lovel

Yes, and I think it's a lovely paper. Start a new topic, and I'll post why I think it's a good paper, and what I think practical alternatives to his approach are.

Has anyone in this thread act

Has anyone in this thread actually read the referenced paper? I found it fascinating, particularly the real focus of the paper, which is a (to me) completely new idea about scoping.

Yes. I was also in the audience when Olin Shivers gave the presentation at ICFP'05.

you have to give Olin Shivers credit for trying

Certainly, as I said to him, it was a nice presentation. The scoping stuff is also very interesting. However, the design (and implementation) seems overkill to me, given that I don't really see a lot of functional programmers complaining about lack of looping constructs.

Looks like an unfold to me, t

Looks like an unfold to me, though you're only taking the last item from the result. I'm not sure why you're repeating until there's no selection, though.

Many HOFs work fine when some values are supposed to receive special treatment, for a simple reason: the function that's supposed to act on values gets to make the decision, it's none of the HOF's business if it happens to do a case analysis and return most of them as-is.
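
A trivial OCaml sketch of that point: the per-element function singles out the special values itself, and map neither knows nor cares:

let bump_evens = List.map (fun x -> if x mod 2 = 0 then x + 1 else x)

(* bump_evens [1; 2; 3; 4] evaluates to [1; 3; 3; 5] *)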

To get exactly one destructive update for each local variable...

...you could just combine your state into a single record. For example, here is how you could implement fib in either functional
  fun fib n =
      let
         fun loop {i, j, n} =
             if 0 = n then
                i
             else
                loop {i = j, j = i + j, n = n - 1}
      in
         loop {i = 0, j = 1, n = n}
      end
or imperative
  fun fib n =
      let
         val st = ref {n = n, i = 0, j = 1}
      in
         while #n (!st) > 0 do
            st := {n = #n (!st) - 1,
                   i = #j (!st),
                   j = #i (!st) + #j (!st)} ;
         #i (!st)
      end
style in SML. (The above are not meant to be the most concise implementations possible.) Alternatively you could use an assignment operator that allows you to perform multiple assignments (like in Python).

Addendum: Here is one way to implement fairly convenient "simultaneous" assignment in SML using the varargs Fold technique developed by Stephen Weeks:

(* Some general purpose combinators *)
fun pass x f = f x
fun id x = x

(* Generic varargs terminator *)
fun $ (v, f) = f v

(* Varargs fold *)
structure Fold =
   struct
      val fold = pass
      fun step0 h (v, f) = pass (h v, f)
      fun step1 h $ x = step0 (fn y => h (x, y)) $
      fun step2 h $ x y = step0 (fn v => h (x, y, v)) $
   end

(* The actual implementation of simultaneous assignment *)
structure SimAss =
   struct
     fun assign $ =
         Fold.fold (id, pass ()) $

     fun to $ =
         Fold.step2 (fn (cell, value, updates) =>
                        (fn () => cell := value) o updates) $
   end

Here is an implementation of fib using "simultaneous" assignment in SML:
open SimAss

fun fib n =
    let
       val n = ref n
       val i = ref 0
       val j = ref 1
    in
       while !n > 0 do
          assign to n (!n - 1)
                 to i (!j)
                 to j (!i + !j) $ ;
       !i
    end

Parallel to let versus lambda

We naturally prefer

(let ((x 5) (y 7))
  (+ x y))

to

((lambda (x y) (+ x y)) 5 7)

because the let groups the values with the variables they are bound to, while the lambda has them separated by the body of the function. One has to remember their order to follow the code.

Now consider defining the arithmetic-geometric mean:

(do ((x 1 (/ (+ x y) 2))
     (y 2 (sqrt (* x y))))
    ((= x y) x))
=> 1.456791

We can code this recursively:

(defun ag (x y)
  (if (= x y)
      x
    (ag (/ (+ x y) 2)
        (sqrt (* x y)))))

(ag 1 2) => 1.456791
To my eyes the big difference is that the iterative code groups things better. The variable x is grouped with its initial value 1 and its update form (/ (+ x y) 2). Separation of the variable from the initial value is sometimes inevitable. Separating the update form from the variable is a loss. In these kinds of cases iteration is syntactic sugar. It is the same kind of sugar that let provides for lambda and tastes just as sweet.

This is the dumbest argument

I've heard in quite some time. I'm talking specifically about:

So writing loops with tail calls is just writing them with gotos. Yet, it has long been accepted in the programming-language community that goto is a low-level and obfuscatory control operator, a position stated by Dijkstra's "Goto considered harmful" letter.

By this logic, if statements are also evil. After all, they're just goto statements. If I write:

if (some_test) {
    ...
}

this is just the equivalent of writing:

if (not some_test) goto LABEL;
...
LABEL:

So we've got to get rid of if statements as well. Oh, and loops too, because the loop:

while (some_test) {
    ...
}

is just a goto as well - it's just syntactic sugar for the code:

LOOP_TOP:
if (not some_test) goto LOOP_EXIT;
...
goto LOOP_TOP;
LOOP_EXIT:

so it has to go too. And of course for loops have to go as well, as they're just while loops in disguise, which are just gotos in disguise, and gotos are evil.

Choose your monkeys wisely

This is the dumbest argument I've heard in quite some time.

While it is a silly argument (in my mind at least), it's a bit of a shame that it's what everyone here is focusing on, and not the meat of the paper itself, the CFG language behind Shivers' looping constructs. I haven't really read the paper that closely (it was the end of a long day and my brain wanted sleep far more than macros or operational semantics), but it seems to be a nifty way of thinking about scope and control, and using some of the relations to our benefit.

Of course Shivers sort of brought this on himself by choosing to bring up tail calls and gotos as the evil monkey in the closet rather than, say, the existing loop macros he discusses in section 2. I guess he was using a bit of academic trolling to get people to read his paper, and he was successful on that account. Unfortunately, it seems to have backfired, so that that is all we are discussing, and not his solutions... A bit of a shame, really.

Clarification

This is the dumbest argument I've heard in quite some time. I'm talking specifically about:

So writing loops with tail calls is just writing them with gotos. Yet, it has long been accepted in the programming-language community that goto is a low-level and obfuscatory control operator, a position stated by Dijkstra's "Goto considered harmful" letter.

By this logic, if statements are also evil.

Do you mean Dijkstra's or Shivers's logic by "by this logic"?

winnowing wheat from chaff

This is the dumbest argument I've heard in quite some time

The statement you are responding to is not his argument. Rather, it is an intentionally inflammatory lead-in to an argument. Shivers has a history of using the rhetorical device of prefacing solid work with statements designed to capture your initial attention.

The substance of the paper is now being discussed in a separate thread.

True

And the work deserves to be discussed. However, I must say that the introduction does a disservice to the paper, and the "lead-in" statement appears more than once, for no good reason.