Gilad Bracha: Will Continuations continue?

There are a variety of reasons why we haven’t implemented continuations in the JVM. High on the list: continuations are costly to implement, and they might reek havoc with Java SE security model. These arguments are pragmatic and a tad unsatisfying. If a feature is really important, shouldn’t we just bite the bullet?

Many here will not like the answer.

This issue has been discussed here many times, of course, but I think it is of interest to know what the people at Sun are thinking...

Tim Bray's response is also worth checking out, if only for the sake of this sound bite: "The worst AJAX apps are like bad Nineties VB."

UI design

It's not really PL-related, but I think the most interesting assertion there is Tim Bray's idea that limited, linear he-said/she-said interactions are better for end users than interactive WIMP-style GUIs.

I've seen a lot of attempts to duplicate widget based UI in a web browser but few attempts to do the reverse (unless you count wizards). Maybe that's a bad idea.

On the flip side I don't remember seeing many usability studies of web apps.

A couple features...

I thought that was really interesting too. Personally, I think that sequential user interactions are a terrible thing, and severely constrain the things you can do with the web. Imagine trying to write an IDE, a serious word processor, or a spreadsheet as a non-AJAX webapp (I haven't used Writely, but I believe it's AJAX). However, the web made three features ubiquitous that users have now come to depend on:

  1. Persistent, always-available undo. No matter where you are on the web, the back button always gets you back to where you came from (well, unless an AJAX app just broke it).
  2. The ability to jump directly into an interaction, bypassing intermediate pages. Also the ability to store your place in an interaction (bookmarks, history list, writing down a URL) and pass that information to a friend.
  3. A universal GUI focused on displaying information. Webpages are frequently more information-dense than desktop apps, and the good ones have less extraneous visual clutter.

Interestingly, the first two of those features depend heavily upon continuations. I wonder if we'll see webapps evolve to be more like desktop apps; if so, to keep those features, they'll need continuations anyway.

The web influence

I'm not convinced that "linear" apps are becoming more popular. Let's not forget that web applications are far outnumbered by simple websites that support true random access (via search engines).

I see quite a lot of influence on desktop applications from the web. An increasing number of apps generate their UI using HTML-ish rendering (XUL, Apple's desktop widgets, etc.). Searching is becoming more important, though it has a way to go. Also, there's sometimes navigation via link-following with forward and back buttons (IntelliJ is pretty good at this).

I don't quite get the appeal of continuation frameworks. It seems like they introduce a coupling between what should be a long-lived, stable data representation (bookmarks) and something that a developer should be able to rewrite whenever they feel like it (a method's local variables). Just like Java serialization isn't so hot for a database format, framework-generated continuations seem like a really bad idea for long-lived bookmarks.

appeal of continuations from cost perspective

Last night I wrote (and junked) a long piece on why I wanted native first class continuations. It was so long, I broke it into two long pieces. But I favor a rule of thumb: something I can say well can be said quickly. The conclusion: what I wrote last night sucked. I'm unsure how to write something readably short.

But I really want to say something on this topic, so instead I'll just hint around my focus without really making definitive statements. (I'd rather say something about complex servers, but the number of words just explodes when I tackle that vein of thought.)

A couple of plausible metaphors came to mind: knots in strings and a classic brain teaser. This paragraph is about the brain teaser, which goes as follows. You have three jugs holding 8, 5, and 3 gallons respectively. The 8-gallon jug is full of water and the other two jugs are empty. Divide the water into two equal parts (four gallons each, in two different jugs), only by pouring water from one jug to another. [Hint: if you first get two gallons into the 3g jug, then filling it from a full 5g jug leaves four gallons in the 5g jug.]
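
(For the record, one seven-pour solution along the lines of the hint, writing states as gallons in the 8g/5g/3g jugs:)

    (8,0,0) -> fill 5g from 8g: (3,5,0) -> fill 3g from 5g: (3,2,3)
    -> empty 3g into 8g: (6,2,0) -> pour 5g into 3g: (6,0,2)
    [two gallons now sit in the 3g jug]
    -> fill 5g from 8g: (1,5,2) -> top off 3g from 5g: (1,4,3)
    -> empty 3g into 8g: (4,4,0). Done.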

What's the point of this brain teaser? Sometimes you can simulate the tool you want (a four-gallon jug) to measure out your problem, but the simulation is not always obvious when coding, especially when you're performing many other simulations at the same time -- because none of your other tools are exact fits either. Your toolkit is a bunch of weirdly sized jugs rated to hold different fluids, and your system uses water, oil, gas, liquid nitrogen -- you name it -- none of which you want mixed.

This is where the knots-in-strings metaphor comes into play. Someone gives you a string with a single granny knot in it, and challenges you to remove the knot. Okay, no problem. The simpler the strawman, the easier it is to solve, right? How about something harder? Someone gives you a string with lots of knots in it all interlaced in one snarl -- a true Gordian knot -- and challenges you to remove the knot. Take your time, but the clock is ticking.

I've noticed that folks like to present single-granny-knot problems when they argue first class continuations are unnecessary. (The demonstration of untangling looks so easy.) But it would take forever to present a Gordian-knot problem, so in effect they don't seem to exist in blog venues, which filter for simplicity. But that's not realistic. Your typical professional coder sees Gordian knots in code all the time. You spend your first few days on a new job counting such knots, and listening to the tales the old-timers tell about folks lost at sea, tangled in nets during past storms. (Sorry, I'm having too much fun.)

Suddenly I'm liking this fishing-boat metaphor that's come to mind, so I'll go with it. Okay, your typical programming language vendor is really making nets for fishermen. Net vendors want to sell nets pitched as cheap, strong, and reliable. (You can add weight and other factors to this metaphor, but it doesn't matter.) Really cheap and virtually indestructible would be nice. They want to make nets with simple knots in them -- cheap, easy to produce, and strong within easily measured parameters when looking at a net in isolation.

Fishing net vendors (programming language vendors) offer nets for sale to fishermen -- deck crew and ship captains alike -- based on scenarios that are simplified, and that often don't resemble what happens at sea, with all that gear deployed, following an improvised schedule, and using the new accouterments on the ship's deck. Once the crew starts landing incredible numbers of fish (and why are they doing this in a storm anyway?) and jury-rigging the equipment to solve problems on the spot, weird things start to happen. The nets almost always snarl, and there's no time to fix every tangle under the pressure. Knots that tangle nets at sea are a different order of problem than the simple knots in rope that make the net work as a net.

Umm, I don't think I like this metaphor any more. But at least this last paragraph is a lot shorter than what I wrote last night. Here's my main point: the cost of tools in isolation is different from the cost of tools deployed in the field, especially if the scale and complexity of the field deployment is very different from a numerics benchmark. Sometimes you want to use a different cost function, depending on the context. To lower the cost of a very complex end system, you might prefer to increase the basic cost of the tools in isolation, because the overall cost function looks better when you deploy with more-versatile but more-expensive parts.

Continuations typically require an implementation that uses more heap memory, and appears to churn the GC more than necessary when viewed in isolation. However, sometimes having first class continuations is like having jugs in every size you want to measure with, without awkward simulations. When push comes to shove in complex servers, sometimes developers give up and just start allocating more memory and copying things, because the whole nest of knots has become too much to untangle. When that happens, a little more allocation in stack frames starts to look like small potatoes. A slightly more expensive tool that eliminates some chaos is a gem, if it puts the developer back in control.

Okay, that was too long. But it didn't stink as much as my last effort.

Too verbose/unclear

In French we have a saying, 'Ce que l'on conçoit bien s'énonce clairement et les mots pour le dire arrivent aisément': what is well understood can be stated clearly, and the words to say it come easily.

If you can't manage to explain your point easily, think again about the point you're trying to make.

easy to say but hard to make clear

I think I agree with that most of the time. I might be able to make my point again more clearly.

Compared to other features in a programming language, first class continuations are relatively complex. For example, most of Scheme is nice and simple, except for continuations. :-)

Continuations also incur runtime costs, like heap-based stack frames. So what good are they? I'll avoid "coroutines are really neat", which I think is a valid argument; instead I'll make a point about cost.

A complex and expensive primitive in a language might only pay off when you can factor some large cost out of a complex and expensive high level use case. And the more complex and expensive the use case that gets reduced, the more sense it makes to save the cost this way.

So it makes sense that simple use cases won't make good examples of where an expensive primitive looks cost effective. A really good (terribly complex) use case might be hard to cite for many reasons.

For example, no one wants to admit they actually create (or even just experience) the kinds of messes that can be helped by a complex and expensive low-level primitive. It's Emperor's-new-clothes social territory.

People in technical venues like this one use brevity, simplicity, and clarity as indirect measures of idea quality and/or speaker competence. So if something requires complexity to describe correctly, then by that metric it can't be said and still be graced with easy acceptance.

Agreed about your point,

Agreed about your point; that's also why it's difficult to defend GC, for example.

Pudding

Of course the irony is that once people get their hands on Java or C# (since they apparently did not get their hands on Scheme, Lisp or ML, etc.) or maybe even Perl or Python, they quickly appreciate GC.

Are continuations really needed?

It has been discussed before, but I never saw a convincing example. I never understood why a web application cannot open as many 'states' on the server as the client opens windows.

I am no web programmer, but I went ahead and implemented a small Java project to satisfy my curiosity about whether the above is possible. This is what I did: every client request had an id which mapped to a specific 'program' instance; the program was in reality the same class, instantiated many times depending on client requests. The app's servlet then forwarded each request to the appropriate program instance, based on the request id. It worked, it was very simple, and states were never messed up. Did I miss anything?
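
Roughly, the shape of that experiment (a sketch with hypothetical names, not the poster's actual code):

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // One state object per client interaction, keyed by an id that the
    // client echoes back on every request.
    class InteractionRegistry {
        private final Map<String, OrderFlow> flows = new ConcurrentHashMap<>();

        String begin() {                  // start a new interaction
            String id = UUID.randomUUID().toString();
            flows.put(id, new OrderFlow());
            return id;                    // embed this id in the page
        }

        OrderFlow lookup(String id) {     // route a request by its id
            return flows.get(id);         // null: unknown or expired
        }
    }

    class OrderFlow {                     // a hand-rolled interaction state
        String item;
        int quantity;
    }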

Memory

How do you know when to de-allocate an instance? What if I open a web browser on your app then close the web browser (or my machine crashes) before completing the interaction? Now you leak server resources -> bad.

What if you add a timeout so that state objects are released after a while? You still use unnecessary resources, albeit for a limited time. So I am using your app, get distracted or go get lunch, and when I come back I find my session has expired and the data I input has vanished -> bad.

Ruby on Rails adopts a system whereby it serializes sessions to disk, but I don't know what its policy for freeing dead ones is. It's probably sensible though.

Still have that problem...

You have the same problem with any multi-page web interaction. What if the user has a single browser window open, gets halfway through an order form, then decides they don't really care and shuts down the computer? You have to decide when to free up the resources of that session. And you have no way of knowing whether they just got up for a cup of coffee or have given up entirely on their purchase.

Client-side state

That problem disappears if you track current session state entirely using cookies, hidden form fields, or URL request parameters, thereby keeping the server essentially stateless. It's definitely possible, and common, to write webapps with no server-side resources allocated to track client state.
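
A minimal sketch of that stateless style (hypothetical names): everything the server will need next time is rendered into the page itself.

    import java.io.PrintWriter;

    class StatelessForms {
        // Render the next step of a form, carrying every prior answer
        // along as hidden fields; the server keeps nothing in between.
        // (HTML escaping omitted for brevity.)
        static void renderStep(PrintWriter out, String item, int step) {
            out.println("<form method='post' action='/order'>");
            out.println("  <input type='hidden' name='item' value='" + item + "'/>");
            out.println("  <input type='hidden' name='step' value='" + step + "'/>");
            out.println("  Quantity: <input type='text' name='quantity'/>");
            out.println("  <input type='submit' value='Next'/>");
            out.println("</form>");
        }
    }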

How do you know when to

How do you know when to de-allocate an instance?

Using timeouts.

and when I come back I find my session has expired and the data I input has vanished

So? Is that so bad? If you have a job to do, just do it. I can't wait for you forever ("I" as in the server).

Ruby on Rails adopts a system whereby it serializes sessions to disk, but I don't know what its policy for freeing dead ones is. It's probably sensible though.

1) What does this have to do with continuations?
2) What if I never come back? How does RoR deal with that?
3) What keeps me from implementing serialization of state without having continuations?

Continuations

You're still saving the continuation, you're just doing it manually. Your Java object is essentially a continuation where you have to set the state manually. True continuations save the state automatically based on what local variables are currently on the call stack.

The resource leak Mike mentions occurs because you're saving the continuation on the server and only storing a pointer to it on the client. Many continuation frameworks will serialize the continuation totally and send it back to the client as a hidden field. This preserves REST properties (easy caching, back-button aware, history list, multiple window-friendly) at the expense of bandwidth.
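
A minimal sketch of the "serialize it totally" step, assuming plain Java serialization and hypothetical helper names (a real framework must also sign or encrypt the blob):

    import java.io.*;
    import java.util.Base64;

    class HiddenFieldState {
        // Round-trip the whole interaction state through a hidden form field.
        static String freeze(Serializable state) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(state);
            }
            return Base64.getEncoder().encodeToString(buf.toByteArray());
        }

        static Object thaw(String field) throws IOException, ClassNotFoundException {
            byte[] bytes = Base64.getDecoder().decode(field);
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return in.readObject();
            }
        }
    }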

Cool

That's a good idea, though serializing a whole stack sounds scary. What if the stack contains pointers to memory on the heap, is the whole object tree serialized too? I guess it must be. *boggle*

Not as bad as you'd think

Yes, but in practice the stack of a webapp (excluding the basic HTTP parsing/marshalling functions, which usually aren't captured in the continuation) is usually fairly small. Looking at Java stack traces of webapps I've written, I rarely have more than 3-4 frames of application code on the stack.

In theory, the amount of information that's captured by the continuation is precisely the information that's been entered by the user, as long as the app is smart and performs all the processing only after all steps have been completed. Few users have the time or patience to enter more than a couple K of info, unless the form includes uploaded files, which can't practically be serialized and passed around as a hidden form field anyway.

I do not need to save the stack and all the program's state.

All that needs to be "saved" is the information the user has input so far, which means a different instance of the application object model for each different state the user is in. So the server has one session per user, and each session has many 'sub-sessions', each of which corresponds to a "work flow" of the user. Submitting the same form from two different windows ends up affecting two different objects, and thus there are no complications, and no need for continuations.

Manual CPS transformation

Yep, what "a different instance of the application object model for each different state the user is in" amounts to is manually transforming the UI code to CPS (and then defunctionalizing it). With continuations you can write your UI in direct style, without having to perform the transformation by hand. I recommend reading the article ("Escaping ...") I mentioned earlier.

The UI code has nothing to do with keeping the states.

As I have implemented it (maybe I am missing something here), the UI has nothing to do with keeping the states around. The states are simply data objects. The UI is a separate layer filled from the data. Continuations offer me nothing that I do not already have.

The UI has plenty to do with

The UI has plenty to do with keeping around UI-related state, no?

Have you read the article?

Have you read the article?

Escaping the Event Loop

Well, continuations allow you to "escape the event loop" or, in other words, allow you to write your UI in direct style rather than in defunctionalized CPS. See, for example, Escaping the event loop: an alternative control structure for multi-threaded GUIs (probably mentioned here before).
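
To make the contrast concrete, a small sketch (the ask helper and everything around it are hypothetical): in event-loop style, each step of a dialog becomes a callback; with first class continuations, the same dialog could be written straight down the page.

    class DialogSketch {
        interface Reply { void accept(String input); }

        // Event-loop style: the "rest of the dialog" is spelled out by
        // hand as nested callbacks (defunctionalized CPS).
        void ask(String prompt, Reply next) {
            // display prompt; the event loop later calls next.accept(answer)
        }

        void wizard() {
            ask("Name?", name ->
                ask("Email?", email ->
                    System.out.println(name + " <" + email + ">")));
        }

        // Direct style, as continuations would permit (a hypothetical
        // blocking ask that suspends the caller until the reply arrives):
        //   String name  = ask("Name?");
        //   String email = ask("Email?");
        //   System.out.println(name + " <" + email + ">");
    }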

Right

As regards web programming I think the issue is essentially "Automatically Restructuring Programs For The Web" (search the archives). I don't think this must be related to support for continuation at the VM level.

VM support is helpful, on the other hand, when you want to use it as a platform for languages with first class continuations. You can, of course, do the hard work yourself (instead of relying on the VM), but that's cumbersome for a variety of reasons.

reek not wreak?

I wonder if he meant that?

java reeks?

finally someone had the guts to admit it!

Almost...

Gilad's argument, grossly simplified:

1) the most important use case for continuations is Seaside;

2) Seaside is a dead end anyway.

Both valid points. (People in the comments are pointing out another "major use case" - implementing languages atop the JVM - but it doesn't seem very major from a commercial POV.)

However... I may be imagining stuff, but here is another, equally plausible argument that could have influenced Sun:

1) the most important use case for Java is webapps;

2) the most important competitor to Java in the webapp space is Rails;

3) Rails doesn't use continuations, even though Ruby has them.

QED.

In other news, Don Box points out another interesting use case for continuations: business process management, workflow, and the like. I've also seen Phillip Eby write about the use of continuations in this area some time ago (the link escapes me at the moment). I believe that this idea is actually more valid and valuable than continuation-based webapps, resumable exceptions, or the other poster-child use cases for continuations.

it doesn't seem very major

it doesn't seem very major from a commercial POV

Actually I am not so sure about this. If we are talking about a platform war, instead of a language war, things like this can be important and have butterfly effects.

But LtU isn't for marketing discussions, so I guess we should drop it.

BPMScript

Speaking of Business Process Management and continuations, BPMScript apparently uses Rhino and its continuation support in this area.

Workflow and continuations

Seaside is hardly dead

Seaside is shaky in theory but sound in practice; I've seen a couple of tremendously compelling applications built on it. So I don't think that "Seaside is a dead end" is supported by the evidence.

It's not that Seaside isn't good.

But the alternative, keeping the state on the client, is much better. It's simpler, more RESTful, and requires less computation and storage on the server (i.e. it's more distributed).

Tool power => Language power

Whether a language provides first-class continuations is really a question of how much control the language designers want to give tool developers. Continuations are an effective way of implementing all sorts of control features that have traditionally been the province of language designers (the usual suspects: threads, coroutines, generators, exception mechanisms...) Just because a language happens to have many of those things built-in doesn't mean that there isn't a benefit to being able to implement them directly, in more tailored ways.

The people who benefit directly from this ability are tool developers, who can customize the control features of system-level tools and frameworks in ways that otherwise require messy and inefficient hacks. We've already seen multiple such hacks for continuations in Java, including bytecode hacks and special interpreters implemented on top of the JVM.
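
For a taste of the workaround flavor: without continuations, even a simple generator gets simulated with a full thread, roughly like the sketch below (names are hypothetical, not any particular framework's API).

    import java.util.concurrent.SynchronousQueue;

    // A generator faked with an OS thread: next() blocks until the
    // producer thread hands over one value. Continuations would give
    // the same control transfer without burning a native thread.
    class ThreadBackedGenerator {
        private final SynchronousQueue<Integer> channel = new SynchronousQueue<>();

        ThreadBackedGenerator() {
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; ; i++) channel.put(i);  // 'yield' i
                } catch (InterruptedException e) { /* consumer went away */ }
            });
            producer.setDaemon(true);
            producer.start();
        }

        int next() throws InterruptedException {
            return channel.take();
        }
    }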

Continuations clearly aren't the sort of feature that you'd normally expect to find in Java, though. They're just a bit too anarchistic. I'd settle for tail call optimization...

Re some of the continuation skepticism that's been expressed in this thread: it's premature to judge continuations by the various hacks and simple solutions that have been seen in the wild so far. This even applies to Scheme, which although it's had continuations for a long time, has for example only fairly recently had serializable continuations in some implementations (e.g. SISC and Gambit). Features like this are important for real world use: serializable continuations means you can persist continuations, migrate them between nodes in a server farm or a grid, or from server to client and back.

Compare the craziness of an app distributed via Ajax and HTTP to one in which client/server interaction takes place seamlessly in the same language, with continuations travelling back and forth under the hood, so the programmer doesn't need to worry about them, and the only reason they need to care about the distinction between the client and the server is latency. People immersed in current software infrastructures find it difficult or impossible to believe that this vision is more than a pipedream. Of course, it's true that because of the limitations of the existing infrastructure, starting with the limitations of the web browser programming environment, nirvana isn't going to be reached any time soon. But we have to start somewhere, and greater awareness of and support for continuations will be an excellent catalyst.

It'll become much clearer what they're good for once efficient, serializable, migratable continuations can be relied on in at least a few mainstream environments. There's not a single mainstream example of this that isn't a limited hack right now, just because the platforms won't support anything else (someone tell me if I've missed an example of continuations done right; Smalltalk, Scheme, and SML don't count as mainstream, of course).

Does Sun want to add support for continuations to the JVM? Of course not - it flies in the face of everything that Java has historically been about. Should Sun add support for continuations to the JVM? Hell yeah. It'll stimulate tool development, it'll put more power into the hands of the development community, especially tool developers, and it'll do more than the $100,000 in prizes for grid development. Not to mention that it's the kind of thing that could keep the JVM interesting in the face of all that .NET/Linq competition that Erik keeps posting about.

Can't say I've ever missed them

Now what I would like to see in Java is extremely light-weight threads, with low context-switching costs, so that I could reasonably run >100 threads in a JVM. That would cover essentially all of the use cases I could see needing continuations for.

Straight man?

Continuations make an excellent platform on which to build very lightweight threads, and it's been done very successfully in various Scheme implementations - hundreds of thousands of threads are no problem. If Java had continuations, you'd already have your lightweight threads.

But let's say you were given your lightweight threads as a built-in feature in Java. Can you suspend them, then clone them, serialize them, migrate them to another machine? If you can at least suspend and clone them, so that you can restart a thread at the same point more than once, then you've got continuations. If you can't do that, then you're missing out on quite a few use cases.

Really, first-class continuations are just a consequence of applying proper factoring to the concept of threads. Is there some reason that the design principles we apply to our programs shouldn't apply to our languages?

If Java were a language in which concurrency was implicit in the language, embedded somehow, and Threads weren't objects, then there might be a high-level design argument for saying that first-class continuations aren't appropriate. But as it is, Java's high-level design begs for first-class continuations.

Only a bit of a straight man...

I understand that continuations would be a useful implementation technique for light-weight threads in Java, but I really was serious that I don't see the value of exposing them to the end-user.

Can you suspend them

In Java, yes, you can. However, it's a dangerously bad idea, since suspending a thread leaves it with ownership of any locks and shared resources. Any implementation of continuations will have to deal with a host of issues like that.

clone them, serialize them

On their own, I don't see the real value of cloning or serializing threads. Again, there's the question of what happens to the locks and shared resources.

migrate them to another machine

That would indeed be wicked cool, but could be handled by the runtime without explicit programmer knowledge of continuations.

I guess my takeaway is that what I really want isn't Scheme-in-Java, but rather Erlang-in-Java. Scheme-in-Java seems like it would cause more problems than it fixes. Erlang-in-Java would fix a lot more problems than it caused.

How exactly?

And how can you migrate them without first suspending them, then serializing them, and finally recreating them on the target machine?

Also, Erlang excels at concurrency because there is no shared state and no synchronization on that shared state. Well, if you remove those, continuations have none of the issues you mention.

A matter of encapsulation

And how can you migrate them without first suspending them, then serializing them, and finally recreating them on the target machine?

The implementer needs those sub-operations, but it's perfectly reasonable not to offer them to an end-user. By analogy, implementers have to worry about register mappings and stack layouts, but end-users only need to know about call semantics, and have no access to the implementation level. A distributed JVM could certainly support thread migration without allowing general continuation semantics.

I don't necessarily advocate otherwise

Continuations in Java are secondary to me. I would like to have continuations and (even more so) tail-calls in the JVM, not necessarily in Java. So, to me, this is not about “end-users”, it's precisely about (language) implementers. Must implementers be shielded from all the complexity too?

OTOH, maybe the JVM is bound to be just that - a Java VM, and not the multi-language run-time I wish it could be. [rant] But then maybe Sun & Bracha should stop marketing it as such, and drop feeble attempts like invokedynamic, which isn't of much use anyway. [/rant] Sorry.

You don't hate tool developers, do you?

Subject line to be read in the same tones as "you don't hate children, do you?"

I understand that continuations would be a useful implementation technique for light-weight threads in Java, but I really was serious that I don't see the value of exposing them to the end-user.

Hence my point that this is really about tools and supporting tool developers. How do you expose language features to tool developers without exposing them to end users? Or should tool developers just be forced to live with whatever end users get? The latter is the traditional answer, for traditional languages, but the nature of the relationship between language developers and their communities has been changing, with open source languages being one of the driving factors.

Re the issues with continuations, such as locks on shared resources, those are all well understood, and have been dealt with in various ways in environments that support continuations. One way is just "don't do that", and for some kinds of tools or frameworks, the tool can help make sure that things work properly, e.g. via finalizers, automatic transaction boundaries, etc.

migrate them to another machine

That would indeed be wicked cool, but could be handled by the runtime without explicit programmer knowledge of continuations.

So you're saying that you want to rely on Sun to implement this stuff for you whenever they get around to it, instead of the much larger pool of smart tool developers who could have implemented it already. In fact, those tool developers have implemented migrateable continuations already, but only via interpreters on top of the JVM, afaik.

I guess my takeaway is that what I really want isn't Scheme-in-Java, but rather Erlang-in-Java. Scheme-in-Java seems like it would cause more problems than it fixes. Erlang-in-Java would fix a lot more problems than it caused.

Erlang-in-Java would certainly fix some problems, but would leave many others unaddressed, and would push a particular model of concurrency. Continuations-in-Java (since we're not talking about anything close to full Scheme) would allow a whole range of other problems to be addressed in whatever way is appropriate to the problem.

The idea that continuations-in-Java would cause problems is speculative - what's the real concern? That ordinary programmers will start wildly abusing continuations, resulting in flaky and impenetrable spaghetti code everywhere? That bad tools which overuse continuations in all the wrong ways will proliferate? There's no evidence that these outcomes are particularly likely. Bad coding practice and bad tools die, usually quite quickly.

I think plausible arguments could be constructed against continuations-in-Java from various purely technical, language implementation perspectives, or even the perspective that rejiggering the VM will be too costly. But I'm not seeing any real technical factors underlying the concerns about what continuations might unleash on an unsuspecting programming public.

Interesting Subpoint

This re-raises a very interesting cultural subpoint that I haven't really thought about much lately. I can't recall who first made the observation—probably Alan Kay—but someone once said that the major difference between the Lisp/Smalltalk communities and other language design communities was an almost religious insistence in the Lisp/Smalltalk communities that any capability that was available to the language designer should also be available to the language user. You saw this with InterLisp's structure editor, with which you could trivially edit yourself into a crashing system, and you see it today in Squeak, which will happily spit out its own kernel, which is written in a subset of Smalltalk and then translated to C for compilation.

Java lies at the other end of the spectrum. Yeah, it's got reflection and dynamic bytecode loading, and people have made heroic efforts to take advantage of them with BCEL and CGLIB, and built amazing edifices around them such as Hibernate, but these happened in spite of, rather than because of, Java's support for such efforts. I find the philosophical distinction quite telling.

Before anyone asks, yeah, O'Caml's kinda-sorta closed in a similar way. However, I would point out that it has an excellent syntax extension mechanism, and there's only one module in the entire system—Obj—that is undocumented. I think the existence of Flow Caml, Acute, HashCaml, and MetaOCaml demonstrate reasonably well how far you can go with O'Caml's balance of closed vs. open implementation.

Nothing new

"We can't get rid of GOTO!"

"Automatic garbage collection is expensive & hard to get right & there's only one or two use cases for it!"

"Multiple inheritence is expensive & hard to get right & there's only one or two use cases for it!"

"Multithreading is expensive & hard to get right & there's only one or two use cases for it!"

Some people just can't see potential. They don't see that workarounds are more expensive. They don't see that the things they now take for granted only work because we learned to manage their complexities.

Re: hard to get right & there's only one or two use cases

Amen. From my perspective as a language consumer rather than creator, I think I want the proper theory that gets away from the "hard to get right" and then everybody can relax and get into more use cases.

Shared state concurrency is apparently so bad that either people avoid it, or just ship stuff that has lots of bugs in it. Get away from that and maybe the people who avoided it because it is scary evil would start making cool new things?

parody of massive shared state projects

[Update: I removed the original joke I had here because I'm not comfortable with Carter's reworked version making customers the butt of the C++ joke, instead of me. It's not ethical to work on a project that's bad for customers -- I'd rather not be associated with the joke as it stands. Any chance you can change the name to someone else's?]

No, no, no...

It goes like this...

  Moe: Welcome to our C++ project! Our customers are/will be dying 
       when we get started.
       Okay, let's strap our explosives on them and run them
       through traffic as a warmup, before we install our software

  Rys: Great.

  Moe: Huh uh, that's not nearly enthusiastic enough.  More Cowbell!

Non-native continuations

I found this article this morning; I can't say I was too impressed. My main complaints are twofold:

1. In the case of hotel reservations, I wholeheartedly agree with Tim Bray that a (correctly implemented) web interface is a particularly intuitive and elegant user interface. However, I might disagree with Tim in that I don't think that correctly implemented web UIs are great for all applications. I really thought that Gilad was completely off when he said this UI deserves to be relegated to the ash heap of history.

2. Gilad seems to think that the only way to use continuations in such a framework is by storing them on the server, and associating them with a reference that the client posts back. However, continuations could also be stored on the client, in effect passing them "by value" instead of "by reference", thus satisfying his desire that the client communicate all the relevant information to a stateless server to complete the transaction.

In Gilad's defense, it seems unlikely that a native JVM mechanism would be the preferred option for storing continuations on the client. These would potentially be much larger than necessary; you'd definitely want delimited continuations! Moreover, even delimited continuations could easily pose a security risk unless encrypted. And honestly, using encryption for tracking what hotel somebody is looking at seems a little silly. In that case, Gilad is still a bit off in assuming that native JVM continuations are the only kind of interest to Java programmers.

Jens Axel Søgaard's comment at the site is a bit cryptic, though particularly relevant here. If the JVM supported proper handling of tail calls, which is possible without subverting the security model, then a web framework could provide non-native delimited continuations through CPS conversion that could be stored on the client side.
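
Roughly, the idea (all names below are assumptions, not any particular framework): after CPS conversion and defunctionalization, "the rest of the interaction" is an ordinary serializable object that can ride along to the client between requests.

    import java.io.Serializable;

    // The defunctionalized continuation: a serializable "rest of the flow".
    // Each HTTP request supplies one user input and advances the flow one
    // step; the resulting continuation is frozen back into the page.
    interface Cont extends Serializable {
        Cont step(String userInput);   // null means the flow is finished
    }

    // Example flow: ask for a name, then greet.
    class AskName implements Cont {
        public Cont step(String input) { return new Greet(input); }
    }

    class Greet implements Cont {
        final String name;
        Greet(String name) { this.name = name; }
        public Cont step(String input) {
            System.out.println("Hello, " + name);
            return null;
        }
    }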

However, continuations could

However, continuations could also be stored on the client, in effect passing them "by value" instead of "by reference", thus satisfying his desire that the client communicate all the relevant information to a stateless server to complete the transaction.

Which places additional restrictions on the server, namely: a) the continuations must not be too large to serialize back and forth; b) the state must contain nothing sensitive (hard to ensure); and c) any client-provided state must be fully validated.

There are viable trade-offs here as to an appropriate balance of server/client state, but (b) seems particularly tricky. Given how much state continuations can capture, it just seems safer to leave them server-side.

Besides, as you point out, continuations, and particularly delimited continuations, are so small -- why not just leave them server-side for some state? That way you don't have to mess about with encryption. It makes reasoning about the system's integrity simpler, though it does impose limits on scalability and robustness.
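
(For completeness: if state is kept client-side, tampering -- point (c) above -- is usually addressed by authenticating the blob rather than encrypting it. A minimal sketch using the standard javax.crypto API, with key management elided:)

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.util.Base64;

    class StateAuthenticator {
        // Tamper-evident (not secret) client-held state: ship state plus
        // tag, recompute the tag on return, and reject any mismatch.
        static String tag(byte[] state, byte[] serverKey) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(serverKey, "HmacSHA256"));
            return Base64.getEncoder().encodeToString(mac.doFinal(state));
        }

        static boolean verify(byte[] state, String tag, byte[] serverKey)
                throws Exception {
            // Constant-time comparison avoids timing leaks.
            return java.security.MessageDigest.isEqual(
                tag(state, serverKey).getBytes(), tag.getBytes());
        }
    }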