No more tail calls in Javascript?

In very disappointing news, proper tail calls are out of ES4. It seems that a justification for tail calls could not be found. For example, here is Adobe's position on tail calls:

A more serious point is that we can't avoid adding tail calls at some point if we cater to the functional programming crowd. However, if we really intend to cater to them then we need to provide data structures that are functional too, like lists and operations on them—unlike ES arrays, which are entirely imperative. Of course users can implement those data structures themselves, if they have tail calls, but right now just adding tail calls “for functional programming” seems like a half solution.

Finally, tail calls preclude the use of straightforward implementation techniques for procedure calls. To be sure they are less limiting than generators, as one-shot continuations or longjumps are sufficient to handle tail calls in a non-tail-calling implementation language, but implementations that want good-performance tail calls must necessarily switch to a code generation technique.

This seems misguided. The user can implement functional data structures but not tail calls (without whole-program transformation), so the latter are much more valuable than the former. Furthermore, as a functional programmer I'm quite happy to use mutable data structures, but I would certainly miss tail calls. Finally, every JS implementation is already shifting to code generation because straightforward implementation techniques are too slow for the existing idioms used in JS code.

The ES4 Wiki still indicates that tail calls are in, so perhaps they'll yet make it. For laughs you might want to look at the Ecmascript progress spreadsheet. Apple sure don't like change.

Is Adobe serious?

To Adobe I also add: all those Scheme interpreters? I guess they don't exist.

Sausage-making

Finally, tail calls preclude the use of straightforward implementation techniques for procedure calls.

It's hard not to interpret that as "we refuse to rewrite our interpreter".

On the other hand...

…this doesn't prevent implementations from optimising calls in tail position, does it? For instance, no C standard requires tail call semantics, yet GCC provides them, at least sometimes. (A few years ago, when debugging an interpreter, I modified it to be recursive instead of iterative, expecting to use the stack depth to trace the instruction count; inserting a dummy statement after the recursive call--so that it was no longer a tail call GCC could optimise away--allowed this.)

Guaranteed tail recursion does indeed complicate implementation. Has anyone investigated adding tail call optimisation to an existing virtual machine? Functional programming styles often require tail semantics, but other styles would likely benefit as well, in my experience.

not an optimization

Optimizing tail calls without a guarantee is useless to programmers. If you can't rely on tail calls being constant-space, then you have to use primitive loop forms.
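
A minimal JavaScript sketch (mine, not from the comment) of the dilemma: the recursive version below makes a tail call, but without a guarantee it may still overflow the stack, so the programmer is pushed back onto the primitive loop form.

// Tail call, but not guaranteed to run in constant space.
function countdownRec(n) {
    if (n === 0) return "done";
    return countdownRec(n - 1);
}

// The while-loop version you are forced to write instead.
function countdownLoop(n) {
    while (n > 0) { n = n - 1; }
    return "done";
}

countdownLoop(1000000);    // always fine
// countdownRec(1000000);  // may overflow the stack in an engine without proper tail calls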

I don't believe guaranteed tail recursion necessarily complicates implementation, but it certainly does if you've already built your implementation as a C-style stack machine. That's part of the problem of adding it to an existing language like JavaScript.

Has anyone investigated adding tail call optimisation to an existing virtual machine?

Sun appears to be seriously considering adding a tailcall instruction to the JVM.

IBM JVM

IBM implemented TCO on their JVM years ago.

Not full TCO

If nothing's changed since 1999, IBM's JVM only eliminates recursive tail calls, cf. Overview of the IBM Java Just-in-Time Compiler, where they call it "tail recursion elimination".

hee hee, I just don't give up

That's sort of old news, Noel-- it's hard to fight the inertia on top of existing implementations of function, and as you noticed, back when that spreadsheet came out everyone started getting conservative about what they were willing to commit to implementing. But this was pre-Harmony, so lots of things have changed since then.

In particular, tail calls may have found a second wind. Check out my latest proposal on adding a cleaner function form to JavaScript (with a name that LtUers might find familiar...) which, among other good things, would mandate proper tail calls.

This is part of several new mini-proposals underway for consideration in ES-Harmony. There's been some good discussion about the new mini-proposals on the es-discuss mailing list lately, too. I'm very excited about the possibilities!

Dave's proposals have been received quite well so far

I've been following the discussion of Dave Herman's "lambda" proposal for ES-Harmony, and it seems everyone in the discussion would like to see the general idea succeed. There seems to be appeal in the idea that more complex forms in the language are syntactic sugar that reduces to lambdas under the covers.

Cultural clash

The primary use of TCO in functional languages is, of course, to support recursive implementation of things which are generally written as for/while/do loops in imperative languages--the imperative version explicitly mutates an induction variable, whereas the recursive version creates a whole new stack frame to hold each instance of the induction variable (and depends on TCO to optimize that memory away).

What about--as a compromise--adding explicit induction variables, both to Javascript and other imperative languages and to some of the FP languages?

They'd have the following semantics:

* Mutable at the head of the loop (i.e. whenever the loop is (re)entered).
* Immutable otherwise--outside the loop (if still in scope), or within the loop body.
* Not subject to any obnoxious variable capture (not a big problem in imperative languages, but a concern in the programming styles found in FPLs). If capture of these is permitted (say, a lambda is created within a loop), the lambda gets the current value of the induction variable, not an alias to the mutable state.

This would be beneficial to imperative programmers, many of whom have learned that things like "const" or "final" are useful; it would eliminate a whole class of "loop variable accidentally clobbered" bugs. And it might benefit the FP community too (perhaps by attracting new programmers for whom all-recursion-all-the-time is too much of a mental barrier)--even though it admittedly doesn't buy you anything that recursion (with TCO) doesn't get you.
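
To make the capture point concrete, here is a minimal JavaScript sketch (mine, not part of the original comment; the makeCounters names are invented) of the "loop variable accidentally clobbered" bug, together with the capture-the-current-value behaviour the proposed induction variables would give:

// Classic bug: every closure aliases the same mutable i,
// so all of them report its final value.
function makeCounters() {
    var fs = [];
    for (var i = 0; i < 3; i++) {
        fs.push(function () { return i; });
    }
    return fs;    // fs[0](), fs[1](), fs[2]() all return 3
}

// With the proposed semantics each closure would capture the current
// value of the induction variable; in today's JS one simulates that by
// copying it through an extra function parameter.
function makeCountersFixed() {
    var fs = [];
    for (var i = 0; i < 3; i++) {
        fs.push((function (j) { return function () { return j; }; })(i));
    }
    return fs;    // fs[0]() is 0, fs[1]() is 1, fs[2]() is 2
}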

Of course, I expect both camps to react with horror at this suggestion. (And likely with good reason...) :)

It's the fault of all you Haskell programmers

By raving so much about pure lazy languages, you've created the impression that only pure languages (with or without laziness) can really count as functional. That means people see functional programming as an all-or-nothing commitment on the part of a language and its implementors, rather than something that can be done even in impure eager languages like Scheme.

Not just an Impression

The Haskell guys got it right. Only 'pure' functional languages (with or without laziness) can really count as functional. 'Impure' functional languages defeat perhaps 60% of the long term benefits of using functional programming in the first place. They benefit from the ability to return functions, and they benefit from lexical closures, and that's the extent of it.

The problems with impure functional languages:

  • behavior or calculation becomes tied tightly to environment, especially since variables referenced from a lexical closure can change in value at a later time. This makes it difficult to suspend, persist, mobilize, or lazily deliver calculations over a network.
  • The idea of 'objects' gets mixed up with the idea of 'mutable closures' in a bad way. The synchronous message send, where one waits for a response, is an awful thing about modern implementations of OO (leading to massive amounts of accidental complexity and headaches in any moderately complex framework due to special efforts to avoid cyclic calls and special requirements for concurrency management). Use of impure functions (which are inherently synchronous, due to the need for a return value) will only prolong this mistake.
  • Value objects tend to be stateful. This prevents most sharing of state (since you never know who might edit it). It also makes less efficient the static storage of data due even to the mere potential for later manipulations on that state
  • awful for metaprogramming or logic programming; for purposes of automated planning and testing of potential execution paths it is much better if one returns a program-to-be-executed rather than executing as one evaluates.
  • partial evaluation optimizations and various other (less extreme) reordering optimizations (parallelization, optimizer assigned laziness vs. eagerness, simple reordering based on anticipated message return times from Memory, etc.) are made difficult by the temporal relevance and repetition of side effects; indeed, one's hand is usually forced: eager evaluation is the only way to go when side-effects are mixed into functions.

There are so many 'hidden' costs that come from this 'impurity' that people never think about before they've tasted the other possibilities.
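
To make the first point in the list above concrete, here is a tiny JavaScript sketch (mine, not the original commenter's; the names are invented):

var rate = 1.1;
function price(x) { return x * rate; }    // closes over the mutable 'rate'

price(100);    // 110
rate = 2.0;    // the environment changes behind the closure's back
price(100);    // 200 -- the same expression now means something else

The calculation can't be captured, shipped, or replayed on its own, because its meaning depends on whatever the environment happens to contain at the moment it runs.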

Not that the entire language needs to be 'pure'. Languages could also have 'pure' functional subsets that interweave with imperative or message based languages, or that return programs to execute. What matters is keeping the calculation in a distinct semantic domain from the communication - i.e. the functions need to be 'pure', and if that means separating functions from procedures then so be it.

Problems with impure languages

behavior or calculation becomes tied tightly to environment, especially since variables referenced from a lexical closure can change in value at a later time. This makes it difficult to suspend, persist, mobilize, or lazily deliver calculations over a network.

That depends on how you implement closures. If closures are simply pointers to the activation record (aliases), then yeah, you get into all sorts of problems. This is one reason, I think, that C displaced Pascal; C fixed a known source of nasty Pascal bugs by simply banning lexical closures completely.

If, on the other hand, lexical closures close over copies of mutable variables (each closure gets its own copy), then this problem goes away.

But then--many imperative languages don't provide closures to begin with, or reify them as objects.

The idea of 'objects' gets mixed up with the idea of 'mutable closures' in a bad way. The synchronous message send, where one waits for a response, is an awful thing about modern implementations of OO (leading to massive amounts of accidental complexity and headaches in any moderately complex framework due to special efforts to avoid cyclic calls and special requirements for concurrency management). Use of impure functions (which are inherently synchronous, due to the need for a return value) will only prolong this mistake.

I don't understand here; how are impure functions synchronous, but pure functions (which are only run for their return value) not? Are you meaning to refer to the need to ensure side effects happen in some prescribed order?

At any rate, the introduction of objects (and particularly of compiler-enforced encapsulation) helps a great deal. (More below.)

Value objects tend to be stateful. This prevents most sharing of state (since you never know who might edit it). It also makes less efficient the static storage of data due even to the mere potential for later manipulations on that state

Value objects are, by definition, stateless. Imperative programming languages do frequently make the mistake of inappropriate aliasing of same, leading to the sorts of bugs you describe; but there exist many examples of "safe" value-object libraries which behave as one expects.

awful for metaprogramming or logic programming; for purposes of automated planning and testing of potential execution paths it is much better if one returns a program-to-be-executed rather than executing as one evaluates.

I have to admit, I don't follow you here. The difference to me between "code analysis" and "code evaluation" is the presence or absence of input to the program; how does that relate to state, other than the general observation that stateful programs are harder to reason about?

partial evaluation optimizations and various other (less extreme) reordering optimizations (parallelization, optimizer assigned laziness vs. eagerness, simple reordering based on anticipated message return times from Memory, etc.) are made difficult by the temporal relevance and repetition of side effects; indeed, one's hand is usually forced: eager evaluation is the only way to go when side-effects are mixed into functions.

Yes, yes. Referential transparency is the key to the kingdom of the semi-mythical sufficiently smart compiler, which dances and sings and runs for office. Or so sayeth the theory. :) In practice, implementations of stateful languages (including the silicon-based implementations of that class of stateful language known as "assembly language"--i.e. modern superscalar CPUs with their multilayer memory hierarchies) are able to optimize just fine. And while ref trans certainly admits certain optimizations, it also imposes its share of penalties. Uncontrolled lazy evaluation (i.e. Haskell) has a nasty tendency to blow out the heap in ways that are notoriously difficult to find and debug; and certain operations on aggregates (e.g. replacing the nth element of an array) have worse asymptotic bounds for stateless implementations than they do for stateful ones. In the case of linear update (where you don't care about the prior contents of said array), this will kill you.

The key to writing robust, high-performance software is not avoiding side effects--in many programs, state change is part of the application domain, not just an implementation choice--but encapsulating them. This includes many things: language choice for separate programs; more capable type systems (which capture immutability better than const does); hiding mutable state inside objects with well-defined interfaces; better firewalls between subsystems, both at the language and at the OS layer; and, as you note, separation of computation from I/O.

You don't need to choose between the red pill and the blue pill, FTMP. And where you do have to choose, the choice can be made at a fine granularity.

The belief in some quarters that to enjoy the benefits of FP you have to go whole hog and ONLY use FP is, in my view, highly unhelpful--if for no other reason than that it erects an unrealistic (and in many instances insurmountable) barrier to FP use.

RE: Problems with impure languages

If, on the other hand, lexical closures close over copies of mutable variables (each closure gets its own copy), then this problem goes away.

I'm not certain how one could reasonably consider that a 'lexical closure' if you're not actually closing over the same variables others are using; it certainly wouldn't be symmetrical to the inherent closure over global variables.

And it still wouldn't solve the inherent problem if aliases of any sort exist.

I don't understand here; how are impure functions synchronous, but pure functions (which are only run for their return value) not?

Sorry, my wording must have been poor because it seems you inferred something I didn't intend to imply. Pure functions and impure functions are both synchronous. But since you cannot execute side-effects with pure functions, there is far less risk of them being abused as process objects. And thus the problem won't (likely) exist - unless your programming community is full of functional extremists that think even processes are functions and pretend they can pass the world itself from one function to another and still have anything resembling a mathematical function.

Language minimalism is achieved by reuse of primitives, and is a desirable language property. Using impure functions as objects or object methods is an obvious reuse possibility, and thus tends to be favored as part of the design. Better solutions for Object Oriented design do exist, especially those related to first-class processes (à la Erlang, CSP, the Actor Model). If one uses pure functions, then falling back to one of these other models is a natural language design choice.

Value objects are, by definition, stateless. Imperative programming languages do frequently make the mistake of inappropriate aliasing of same, leading to the sorts of bugs you describe; but there exist many examples of "safe" value-object libraries which behave as one expects.

With examples like 'Scheme' with its set! operation among the 'impure' functional languages, I'd hardly say this is a problem restricted to imperative programming languages.

And libraries are insufficient solutions for the problems I described. It is the mere potential for aliasing that causes the problem. Libraries don't rid themselves of this potential - not unless they are part of a common 'standard' that receives special processing. Perhaps a language could include a tag to mark a 'value object' as such that could be used when making assumptions at compile-time. Alternatively, a language could follow Clojure's example and provide a common set of value objects. But for most 'impure' functional languages with any equivalent to the 'set!' operation, the implementor is on his own - no help at all from the compiler or interpreter.

I have to admit, I don't follow you here. The difference to me between "code analysis" and "code evaluation" is the presence or absence of input to the program; how does that relate to state, other than the general observation that stateful programs are harder to reason about?

"Code analysis" may occur after evaluating a continuation (where 'evaluating' = reducing to a value in a canonical form) but before executing it (where 'executing' = performing programmed communications activities visible to the world outside a given closed system). Impure functional languages do not make (and are defined by their not making) the distinction between execution and evaluation. For them there is but one phase, and 'execution' takes place at the same time as 'evaluation'.

I named one problem - that of analyzing a continuation before executing it. For a logic language capable of reasoning about a procedure or plan, the ability to create a collection of possible 'continuations' and examine them in projected or simulated worlds before choosing one plan and executing it is of massive benefit. And this usually is best done after one has some input available, in order to make a better decision. Thus the order in this particular instance is evaluate, analyze, decide, then execute. This perhaps ran counter to your intuition of the natural order: analyze, optimize, then evaluate. If you read the recent LtU article on Alan Kay's 'worlds' paper, you might understand this as solving a similar problem... but allowing simulation and analysis of much, much more than just state.

There are actually a great many benefits for programming if one may choose to not 'execute' a function the moment all its parameters are passed to it. E.g. a function of two arguments could return a procedure that, when executed, has some side effects. This procedure could then be inspected, run in separate 'worlds' with simulated resources, signed, saved, passed to a remote user (especially if environment is an implicit but homoiconically formed parameter to the procedure), thunked, delayed, transformed, optimized, etc. These possibilities simply don't exist with impure functional languages and imperative languages (at least not without a great deal of Greenspunning), and so people don't think about them.
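
A rough JavaScript sketch (mine, not from the comment; the names and the channel object are invented) of the evaluate-then-execute split described above:

// Evaluation: applying greet to its arguments returns a procedure (a value)
// rather than performing the output now.
function greet(name, channel) {
    return function () {
        channel.write("Hello, " + name + "\n");    // execution happens only here
    };
}

var plan = greet("world", { write: function (s) { /* e.g. log or send s */ } });
// 'plan' can be inspected, delayed, transformed, or discarded...
plan();    // ...and only this call actually performs the communication.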

As far as "how does that relate to state" goes - it doesn't, at least not directly. The above were complaints about 'impure functional programming', which is only incidentally about stateful programming (since 'side effects' generally aim at state manipulation). In the broader sense, state manipulation is just one form of Input/Output directed to a shared memory service, and the issue of side-effects and 'impure' functional (and side-effects in general) is really about communication - that is, beyond the communication minimally necessary for performing evaluations.

In practice, implementations of stateful languages (including the silicon-based implementations of that class of stateful language known as "assembly language"--i.e. modern superscalar CPUs with their multilayer memory hierarchies) are able to optimize just fine.

The reason 'stateful' languages do well is that humans are able to optimize them... at least until the complexity reaches a certain point, such as supporting state distribution, software transactional memory, transparent persistence, etc.

Anyhow, the goal isn't to avoid state entirely. Clojure-style approaches that distinguish between 'pure functional' and a more procedural domain will still achieve the benefits of the pure functional component. This allows one to encapsulate state manipulation in a manner that still allows the implementation to gain the advantages of referential transparency.

Impure functional programming - wherein evaluation and execution are merged (e.g. where a function can print to the console) - is what exhibits the problems. Pure functional code interwoven with non-functional approaches (e.g. reactive, imperative, etc.), in a manner that allows a clean distinction between the two by the language implementation, won't have these problems.

Tail calls are not an FPL thing

Ironically, tail calls are even more important for OO languages, and still more important for reflective languages. Method invocation is the hook for dynamic dispatch in OO languages, and method invocation or function call is usually the hook for any run-time updates. In most FP languages, if you change from a tail recursive loop to a for-loop (or whatever) you don't really lose anything (but prettiness, usually). Examples:

length xs = length' xs 0
    where length' []     acc = acc
          length' (_:xs) acc = length' xs (1+acc)

-- made up language
length xs = do
   let acc = 0
   while not (null xs)
       acc := acc + 1
       xs := tail xs
   acc

abstract class List<T> {
    int length(int acc);
}

class Nil<T> : List<T> {
    int length(int acc) { return acc; }
}

class Cons<T> : List<T> {
    T head;
    List<T> tail;
    int length(int acc) { return tail.length(acc+1); }
}

// maybe later
class Append<T> : List<T> {
    List<T> front;
    List<T> back;
    int length(int acc) { return back.length(front.length(acc)); }
}

class Memoizing<T> : List<T> {
    List<T> list;
    int length(int acc) {
        int len = list.length(0);
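        // run-time update: replace this method so later calls return the memoized length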
        this.length = acc => len + acc;
        return len + acc;
    }
}

Maybe that example was too "FP", let's try another:

serverLoop channel = do
    msg <- recv channel
    case msg of 
        ... -> ... serverLoop channel ...
        UpdateServer server -> server channel

interface Server {
    void serverLoop(Channel channel);
}

class SimpleServer : Server {
    void serverLoop(Channel channel) {
        Message msg = channel.recv();
        switch(msg.Type) {
            case ...: ...;
            case UpdateServer: msg.getServer().serverLoop(channel);
        }
    }
}

class ServerLoop {
    void serverLoop(Channel channel) {
       Message msg = channel.recv();
       switch(msg.Type) {
           case ...: ...;
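           // run-time update: the tail call after the switch then dispatches to the newly installed server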
           case UpdateServer: this.serverLoop = msg.getServer();
       }
       serverLoop(channel);
    }  
}

The "killer app" for TCO...

...as far as imperative languages are concerned, isn't really shown by your example above.

I'll break tail-calls into three classes:

1) Non-recursive tail calls. TCO is at best a minor optimization here, as the amount of stack space saved is only a small constant. Furthermore, many modern OO languages can inline these anyway, which eliminates the "call" altogether.

2) Simple recursive tail calls--a function calling itself, or two or three helper functions that call each other recursively. Imperative programmers already know how to write these as loops, and many do so instinctively. The avoidance of mutable state for a particular induction doesn't buy you much in a language and environment in which mutable state is all around you. Keep in mind--many programmers accustomed to imperative languages tend to think iteratively, and aren't bothered in the least by the increased semantic distance between an imperative implementation of a function and a pure mathematical description. Important for theorists, but many practicing programmers have yet to be convinced; and it's them who need convincing. It's kinda like telling a little kid that their semantic veggies are good for 'em; they much prefer syntactic sugar. :)

3) Complex gordian knots of tail calls. Here, ladies and gentlemen, is the "killer app". Such things are dealt with in many imperative languages with explicitly-coded trampolines, or (more often than not) non-optimized-let's-pray-we-don't-blow-up-the-stack recursion. In the case of Javascript, I wonder how much CPS-style JS code is out there chewing up collective petabytes of stack space on users' browsers; no wonder Google went and wrote Chrome. (Does the Chrome VM do TCO?) Writing an interpreter for anything generally requires TCO to work properly, or else you get the for-loop-and-switch-statement from Hell.
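
For what it's worth, here is a minimal sketch (mine, not from the comment) of such an explicitly-coded trampoline in plain JavaScript: tail calls return thunks instead of calling directly, and a driver loop bounces between them in constant stack space.

// Keep calling the returned thunk until a non-function (the answer) comes back.
function trampoline(result) {
    while (typeof result === "function") {
        result = result();    // each bounce stands in for a tail call
    }
    return result;
}

function isEven(n) {
    return n === 0 ? true : function () { return isOdd(n - 1); };
}
function isOdd(n) {
    return n === 0 ? false : function () { return isEven(n - 1); };
}

trampoline(isEven(100000));    // true, without growing the stack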

If you want to convince programmers (or PL implementors) of imperative languages that TCO is good, THIS is where you want to focus--not on writing constant-space Fibonacci solvers without loops; to many, that is a solution looking for a problem.

My examples fall into either

My examples fall into either 3 or 4. The point is that the callee is dynamically determined and, further, may be code that did not exist at the original time of writing. TCO in place of for-loops is nice because then we don't have to rely on the control structures the language implementors gave us, but that is not why I think TCO is important, nor is it what my examples illustrate (except the very first pair, where the whole point was that, for typical FPLs, there isn't a big difference between the two).

Emphasis on the dynamic

Emphasis on the dynamic dispatch of the calls should perhaps be made clearer in the examples above. What you're attempting to demonstrate becomes confused when you have that extra 'setup' code hanging around the non-TCO examples (class ServerLoop, etc.). Indeed, at a glance it looks like you're trying to demonstrate that TCO or FP simply offers an easier implementation on a line-for-line basis.

Your number 3 is definitely

Your number 3 is definitely the "killer app". Being able to thread my way through the various "states" in a program gives a completely different style of programming which is much easier to comprehend.

A middle position

The BitC position on tail calls isn't as strong as Scheme's, but seems to capture most of the practical use-cases without creating compiler hassles. BitC requires tail calls when:

  • The call is in tail position.
  • The target of the call can be statically identified at compile time.
  • Caller and callee are bound in the same letrec form. This covers self-call as a degenerate case.

This choice was pragmatic; we knew that we were targeting C. On the other hand, it seems to cover an awful lot of the real-world use cases without creating any compiler hairballs.

As a practical matter, is this good enough? What important use-cases does it miss?

CPS!

I can't tell whether your clauses are in an AND or OR relationship, but I'm pretty sure that they rule out the ability to program in CPS. I think this is a pretty significant limitation.
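
For instance (a rough JavaScript sketch of my own, not from the comment; the names are invented): in CPS the continuation is an ordinary function value supplied at run time, so the callee of the tail call can't be statically identified, let alone bound in the same letrec as the caller.

function fetchConfig(path, k) {
    // ... pretend to read and parse the file at 'path' ...
    return k({ port: 8080 });    // tail call to whatever continuation was passed in
}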

Important Cases

It misses dynamic dispatch cases of all sorts (multiple dispatch, virtual functions, function pointers, etc.) in addition to calls outside of letrec forms, such as continuation passing style or a function that selects a behavior based on some parameters.

But I don't know enough about BitC to assert these are relevant cases in your language. I imagine that your version of tail call would allow programmers to 'hack it' much of the time, albeit only for closed-system computations.

on-the-fly tail optimization

Hello, I am no expert on the present topic, but did a quick experiment:
Javascript tail optimization implemented in Javascript itself.

I think one can already use tail optimization in Javascript, without waiting for Javascript engines to change.
Constructive comments and suggestions are very welcome.