Async/await vs coroutines?

I have noticed that in the last decade or so various programming languages have adopted an async/await model of concurrency (Javascript, C#, Rust, Zig, etc.). This surprised me as my experience with Lua had me assume that a coroutine model of concurrency was superior and would win. Obviously I have missed something. What are the advantages of async/await? More explicit? Lower level? Easier to implement? More flexible? How else do async/await and coroutines compare?

Coroutines are typically

Coroutines are typically more costly in space since they capture the whole stack, whereas async/await makes it a little easier to capture only the relevant state. The cost of the better storage use is code duplication for functions that need to be usable in both synchronous and asynchronous contexts.
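
A minimal TypeScript sketch of that duplication (the function names and the config-reading use case are just illustrative): both variants contain the same logic, but the async "color" of the second cannot be hidden from its callers.

    import { readFileSync } from "fs";
    import { readFile } from "fs/promises";

    // Synchronous variant: callable from ordinary sequential code.
    function readConfigSync(path: string): unknown {
      return JSON.parse(readFileSync(path, "utf8"));
    }

    // Asynchronous variant: same logic, but every caller must now be
    // async (or deal in promises), even though only the I/O call differs.
    async function readConfig(path: string): Promise<unknown> {
      return JSON.parse(await readFile(path, "utf8"));
    }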

Interesting. So a coroutine

Interesting. So a coroutine needs its entire stack saved because it needs the full context to continue running from anywhere in its call stack, while async/await only needs to save enough state to resume at its remaining yield/return points?

Bad assumption

Coroutines don't need to save their entire stack, but sometimes the easiest implementation is to copy the entire stack, so that is how it is implemented naively.

There are lots of possibilities for how to implement coroutines, but unfortunately most languages have to interop (FFI) with existing languages / libraries / etc., and thus there are certain conventions (calling conventions, stack layout conventions, etc.) that implementations must follow, and that is another reason why "copy the whole stack" is a popular answer.

If you didn't require FFI from within a co-routine, then the set of options you have as the code generator opens up hugely.
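
To make the contrast concrete, here is a rough hand-written sketch (not any particular compiler's output) of how a suspension point can be lowered to a heap-allocated state machine instead of a saved native stack, which is essentially what async/await and "stackless" coroutine implementations do. stepOne is a placeholder for any promise-returning operation.

    // Approximation of what a compiler might generate for
    //   async function f() { const a = await stepOne(); return a + 1; }
    // The "frame" is an explicit heap object holding only the live state,
    // so nothing on the native stack must survive across the suspension.
    type Frame = { state: 0 | 1; a?: number };

    function f(stepOne: () => Promise<number>): Promise<number> {
      const frame: Frame = { state: 0 };
      return new Promise((resolve) => {
        function resume(value?: number): void {
          switch (frame.state) {
            case 0:
              frame.state = 1;
              stepOne().then(resume); // suspend: control returns to the caller
              return;
            case 1:
              frame.a = value!;
              resolve(frame.a + 1); // the function "returns"
              return;
          }
        }
        resume();
      });
    }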

Ah, thank you for clarifying.

Ah, thank you for clarifying.

KISS

I'd speculate that settling for the low-level device is essentially a collective giving up: concurrency is (in practice) widely perceived to be a problem for which, over many decades, no high-level device has been able to gain traction, and therefore no higher-level device is seen as worth investing energy in: not designing it into a language, not implementing it, and not using it.

Interesting. Are there known

Interesting. Are there known lowest-level concurrency devices that all others can be built on?

To be clear

I make no claim at all, one way or another, on the merits of these perceptions; I'm merely suggesting that they are widely held.

Something else

There is something else at work, I think, and it's about usability. Co-routines require thinking in a different mental space, and they require that you organize your code differently. In a lot of ways, async/await does not. Or at least it does not appear to until you're too deep into it to back out. :-)

Async/await is deceptively compatible with existing sequential programming practice. With a bit of tooling, it's possible to take existing sequential code and convert it to async/await with very little effort and substantial benefit. Having done so, you soon find that you've been so successful at introducing parallelism that you get rudely surprised when something "simple" like map in TypeScript proceeds to do "wide parallel issue" of async calls. Thankfully, there aren't all that many functions like map in sequential code; few enough that you can learn a manageably small number of things to avoid and call it good. Though it does make me contemplate whether asynchrony shouldn't be part of the static type system.
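
To illustrate the difference (a sketch; fetchOne stands in for any promise-returning call, and is not from the original post): map fires off every call before the first one completes, while the loop preserves the pacing the sequential code had.

    // Stand-in for any async operation against a remote service.
    const fetchOne = async (id: string): Promise<string> => `result-${id}`;

    // "Wide parallel issue": map starts every call immediately; all of
    // the promises are in flight before the first await resolves.
    async function loadAllAtOnce(ids: string[]): Promise<string[]> {
      return Promise.all(ids.map((id) => fetchOne(id)));
    }

    // Sequential issue: one call in flight at a time.
    async function loadOneByOne(ids: string[]): Promise<string[]> {
      const results: string[] = [];
      for (const id of ids) {
        results.push(await fetchOne(id));
      }
      return results;
    }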

At least in my experience, the place where async/await comes back to bite you in a larger way is in the interaction between out-of-order execution and external time constraints (either timeouts or human perception). Since the execution order of "threadlets" isn't defined, you can easily end up in situations where it is impossible to know whether you can meet deadlines. I've made the mistake (with the help of TypeScript's map) of issuing so many threadlets that a database connection with a 10-minute activity timeout quit on me. If you sufficiently overload the host machine for testing purposes, you'll discover all sorts of places in the Node HTTP stack that are not as robust as perhaps they should be.
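
One mitigation (a sketch under my own assumptions, not the fix described above) is to cap how many threadlets are in flight at once; the helper name and the limit parameter are invented for illustration.

    // Run the tasks, but keep at most `limit` of them in flight at a time.
    async function mapLimited<T, R>(
      items: T[],
      limit: number,
      task: (item: T) => Promise<R>
    ): Promise<R[]> {
      const results: R[] = new Array(items.length);
      let next = 0;

      async function worker(): Promise<void> {
        while (next < items.length) {
          const i = next++; // claimed synchronously, so no race on the index
          results[i] = await task(items[i]);
        }
      }

      // Start `limit` workers; each pulls the next item as it finishes one.
      await Promise.all(
        Array.from({ length: Math.min(limit, items.length) }, worker)
      );
      return results;
    }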

But two things here:

  1. The wide-issue cases are manageably rare and not all that hard to chase down when you run into them.
  2. It's still true that async/await is the closest model to conventional sequential code and the model that appears most compatible with existing programming languages.

Actually, the one thing I've found that's a real nuisance is async debugging. "Run to next statement" has the wrong semantics. It's a solved problem elsewhere, but not in the tools that we are currently using.

I think it's telling that one of the most common complaints in the Node world is that you can't await from top level unless you are using the node-preferred module system (which presents problems for startup-time initialization that has to use async library code). I think it's also interesting that, in combination with a time source, you can implement things like scheduled "tasks" in Node using async/await (which I did recently for a production system).
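
For what it's worth, the scheduled-task pattern can be as small as the following sketch (the interval handling and error logging are my own assumptions, not the production code mentioned above): async/await plus a timer source gives you a readable scheduling loop with no extra machinery.

    // A promise-returning sleep built on the event loop's timer source.
    const sleep = (ms: number) =>
      new Promise<void>((resolve) => setTimeout(resolve, ms));

    // Toy periodic runner: run `job`, then sleep out the rest of the interval.
    async function runEvery(intervalMs: number, job: () => Promise<void>): Promise<void> {
      for (;;) {
        const started = Date.now();
        try {
          await job();
        } catch (err) {
          console.error("scheduled job failed:", err);
        }
        await sleep(Math.max(0, intervalMs - (Date.now() - started)));
      }
    }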

I suppose the other thing to say is that async/await seems to be a good impedance match for systems that do a lot of I/O. While compute-intensive workloads exist in the world, the vast majority of real-world applications are I/O bound. This is why, in spite of being single-threaded[*], Node outperforms multi-threaded Apache by a factor of 25 in some important cases. It's also why the performance challenges inherent in JS don't hurt very much: all of that code is running in the 10% part of the 90/10 rule.

    [*] Node cheats, in the sense that it uses libuv to do multithreading and asynchronous I/O at the JS perimeter.

Fast forward, and Node has adopted first-class multithreading as of v10.5 using V8 isolates. Truly concurrent threads that share physical heaps but not logical heaps. Which is really only useful for coroutine-like patterns. As with any major change in core baseline assumptions about how a system works, it will be interesting to see how this bites developers in the butt. Given the lack of shared memory, the answer may turn out to be "not much."

Personally, I think it's an interesting compromise, not least because the portions of the state that are observably immutable - and more importantly, their encachings by the V8 engine and its JIT compiler - can be reused across worker threads.
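
For the curious, the worker API in question looks roughly like this (a toy sketch using Node's worker_threads module; the workload and message contents are invented). Since the isolates share no logical heap, the only channel between them is message passing.

    import { Worker, isMainThread, parentPort, workerData } from "worker_threads";

    if (isMainThread) {
      // Spawn a worker running this same file in a separate V8 isolate.
      const worker = new Worker(__filename, { workerData: { n: 40 } });
      worker.on("message", (result) => console.log("fib(40) =", result));
    } else {
      // The worker has its own heap; results come back as messages.
      const fib = (n: number): number => (n < 2 ? n : fib(n - 1) + fib(n - 2));
      parentPort!.postMessage(fib(workerData.n));
    }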

Stepping back to 30,000 feet, I also think it's interesting how the last decade or two has seen so much effort devoted to developing different choice points in execution models: VMs, Compartments, Containers, concurrent GC, VM isolates, and so forth. It suggests that there is both a real need and a real problem. It prompts me to note that languages with the ability to generate complex immutable data and/or demonstrably deep-frozen data, and/or languages that have no mutable global state, may find a niche here.

It also prompts me to scratch my head a little and wonder whether the various fantastic bits of work on concurrent collection may turn out to be evolutionary dead ends. I think we were on a potentially interesting track in BitC in attempting to lift frozen-ness into the type system, and I may go back and think about that some more. But I also think that the use-cases for shared-memory concurrency may be rapidly diminishing. When I think back on some of the asynchronous IPC issues we were trying to address in Coyotos (and analogous efforts in L4), I find myself wondering now if the Coyotos kernel may, in retrospect, have been the only place in the entire system where shared-memory concurrency was actually motivated.

If that's actually the case, it suggests that async/await may have been one of the brighter recent ideas in the computational world, and that we are only beginning to scratch the surface of its ultimate impact.

From a certain perspective, async/await can be seen as merging promises into programming language surface syntax (and exception handling models). As usual, Barbara and Liuba have given us much to think about. :-)
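
To make that reading concrete, one way to view an async function is as sugar over promise combinators. The example below is a rough equivalence of my own (it glosses over details of scheduling and exception propagation); getUser is a stand-in for any promise-returning call.

    // Stand-in for any promise-returning call.
    const getUser = async (id: number) => ({ id, name: `user-${id}` });

    // Written with the surface syntax:
    async function greet(id: number): Promise<string> {
      try {
        const user = await getUser(id);
        return `hello, ${user.name}`;
      } catch {
        return "hello, stranger";
      }
    }

    // Roughly the same program expressed directly against promises,
    // with the exception handling folded into the rejection path:
    function greetDesugared(id: number): Promise<string> {
      return getUser(id)
        .then((user) => `hello, ${user.name}`)
        .catch(() => "hello, stranger");
    }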