Scheme language conundrum regarding delay and force.

The following is a question originally posed by Alan Manuel Gloria to the Scheme Standardization list. I think that it is a particularly good question, so I thought I'd share it.

(define x 0)
(define p
  (delay
    (if (= x 5)
      x
      (begin
        (set! x (+ x 1))
        (force p)
        (set! x (+ x 1))
        x))))

(force p) => ??
x => ??

Delay and force have their usual (for PL theory discussions) meanings; the questions are about promises that force themselves in the course of their own forcing (i.e., forms for which 'force' causes a recursive invocation of 'force' on the same promise). The question becomes interesting when, as in the case above, the recursion is non-tail.

Some current implementations return 5 for p and 10 for x, and some throw an exception reporting a reentrant promise. Some implementors claim that 5 for p and 5 for x ought to be allowed, since the delayed form returns at the point of forcing and no further computation ought to be done on it; others say that x is visible outside the scope of the delayed computation, and therefore the computation of the delayed form must be completed for its side effect on x, even after the force has taken place. FWIW, I agree with the latter point of view.

The questions, roughly, are these:
  1. Ought the above be legal with specified results, legal with undefined results, or is it an error?
  2. If it is an error, must an implementation detect and report the error via the exception mechanism, or may the dread curse of nasal demons be invoked?
  3. If it is legal, with specified meaning, then what specific meaning ought it to have?
  4. If it is legal, then may force return while the remainder of the computation caused by the force completes in another thread (making x subject to a possible race condition)?
  5. If it is made legal, is there any legitimate use for it? (i.e., is there any useful purpose the language would fail to serve if it were banned?)
It complicates the discussion somewhat that this language "feature" may interact with reified winding continuations. So, bearing in mind that the committee is under no obligation to listen to or agree with this forum should a consensus be reached here, what do you think this code should do?


Run no more than once

IMHO, it's important to ensure that a promise runs no more than once. Two separate threads shouldn't be able to force a promise in such quick succession that it starts running twice before it can finish once. This synchronization burden is one reason I would look for language-provided promises in the first place, rather than rolling my own memoized thunks with set! and lambda. (Actually it's the only reason I can think of. Other use cases might change my mind.)

So I say a (delay ...) form should set up its code to run no more than once, starting as soon as a corresponding force call is made. The force call should block until the delayed code has returned at least once, and it should return the same value(s) as that code returned the first time. This implies that a promise that calls force on itself will be blocked on its own completion, usually resulting in deadlock unless reentrant continuations are involved. (A captured continuation could finish the promise later from some other thread, or this call to force could be happening after the first time the promise returned.) Whatever Scheme does for deadlock, it should do for this.

Unfortunately, non-threaded Schemes do nothing for deadlock

... because they don't expect to have any. By the way, this example has been around since R4RS.

My wishful thinking

non-threaded Schemes do nothing for deadlock

What I mean is, I think what Scheme does for this case and what it does for deadlock should be basically the same. Whatever that policy is, even if it's just that the program hangs, this case can trigger it in a non-threaded Scheme.

I don't expect this to be very backwards-compatible or production-system-friendly. I'm going by what strikes me as intuitive to specify and naively useful.

If it's pragmatically necessary to recover from this particular kind of deadlock (e.g. by raising an exception), then I'd try to specify that recovery mechanism in a way that's consistent with any other specified deadlock recovery cases. Not that there even have to be any others...

By the way, this example has been around since R4RS.

I intentionally didn't look before. :)

It's too bad R4RS (and likewise R5RS) affirms that "a promise may refer to its own value" and demonstrates with an example. I'd like to believe it's only a cautionary tale, showing that certain implementations may run a promise more than once if they prefer not to synchronize, but that's awfully wishful thinking.

I like the original R3RS language, "The following examples [...] illustrate the property that the value of a promise is computed at most once." For R4RS's modified example, this vague claim was clarified in a way that's technically compatible but seems superficially weaker to me: "The following examples [...] illustrate the property that only one value is computed for a promise, no matter how many times it is forced."

Looks like R6RS dropped the note that "a promise may refer to its own value," and the current draft of R7RS-small no longer has the example implementation of delay whose newfound complexity in R4RS had prompted that note. They still feature the R4RS example though, so they show this feature even if they don't tell.

Does the R4RS example actually run as-is in current implementations, or do the ones that "throw an exception reporting a reentrant promise" do so here too?

Thunk conversion?

I'm not much of a Scheme buff, so this is probably a stupid question... but why aren't the semantics for delay/force defined by conversion to lambdas? I would expect p = 10, x = 10.

Promise conversion?

You can also define it in terms of promises... then you'd expect "error: promise is single assignment -- can't set its value more than once" or what Ross said. p=10, x=10 is okay-ish, though you would not ever want to rely on that behavior. I don't see how anybody could defend p=5.

Or make the behavior undefined, but I'm not sure if that's acceptable for Scheme.

Okay

I see where p=5 comes from... it's probably the most efficient implementation in a multi-threaded system. Force checks to see if a value is present and if not starts the delayed computation of p. Upon completion, the value of p is written only if another value isn't already present (in case another thread has already written that value).

The behavior Ross wants seems preferable to me, but again I'm not a Scheme guy. I really just wanted to figure out what I was missing and now that I think I have (thanks guys), I'm going to bow out of this conversation.

Because Scheme is impure

A delayed value is supposed to become a value when forced; no matter how many times you force it, you are supposed to get the same thing (in the sense of "eqv?"), even if the underlying expression is impure.

best answer is to dissolve the committee

R4RS offers a useful and straightforward definition of delay/force in terms of lambda, if, and set!. It implies the answer p=5, x=10. It even gives a little rationale, right there in the text.

You would think that settled the question right there.

A minor variation of the R4RS reduction would give p=10, x=10, with the rule that "force" always returns the value most recently returned by the delayed form rather than always returning the first value returned by the delayed form. I think that could also be a useful construct, and if so it deserves recognition as something different from the traditional (and useful) delay/force.

A variation in which recursive invocation is a detected error is easy to construct, if you want it.

Variations in which the delayed form is invoked only once but recursive forces are permitted can be hairy, though only about as hairy as implementing dynamic-wind. Nobody has experience to show any of these variations is "The Right Thing", and the level of hair alone suggests that none are (though some may be useful).

Variations in which the delayed form may be invoked only once but all invocations return "immediately" whenever the first one does are difficult (at best) to coherently define in even a simple R4RS environment.

So we started the day with a perfectly good R4RS definition and a language in which the most obvious variation was trivial to define. In Scheme tradition, it was just a little syntactic sugar over lambda, if, and set!.

And now the committee is doing what, again?

We're doing nothing

But that is not the fault of the Working Group or its members. The Steering Committee, which is so structureless it lacks even a convener, has not been able to meet in order to receive the WG's output. I shall be nagging the SC members further.

The way I see it...

The way I see it, this code builds a call stack containing several invocations of 'force.'

That means the evaluations of 'force' are nested in time; the first (command line invocation, outermost in the nesting) is called before, but returns after, all others. The last (base case in recursion, innermost in nesting) is called after, but returns before, all others.

The first time force returns (meaning, the base case in the recursion, corresponding to the *last* time it got called) it returns 5. As I see it, that means all evaluations of (force p) everywhere, no matter when they were called, are from that point onward bound by contract to return 5.

At this point the system no longer needs to do *anything* in order to determine the return value of every call on the stack. That return value is predetermined regardless of what value a completed evaluation would have returned. But if the recursion 'tails' are not performed, x will be left with a value inconsistent with having completed the evaluations specified by the code. So the 'force's on the stack must continue evaluation, even though the values they compute are irrelevant to their own return values. The first 'force' invoked (the command-line invocation) tries to return x when x is 10, but because it's a 'force' it has to check for a previously computed value, finds one, and returns 5 instead.

And x is left with a value of 10, consistent with the execution of the code completing the specified evaluation -- even though the results of that particular evaluation are no longer relevant after the innermost call returns.

So if (force p) gets called *again* after the first time it returns, the corresponding code is not executed. But evaluations of (force p) *in progress* at the moment the first return of a value happens must continue if there are side effects in their recursive tails, even though their return values are at that time predetermined.

That's the way I see it. That's the way it's been since before R3. But there is apparently room for interpretation....

Ray

I like that summary

That's a really good description of any version of force defined to allow reentrance as long as no earlier evaluation has yet returned. Locally in this discussion only, call this definition force-rec, meaning recursive entry is allowed up to when the first evaluation returns.

Then we might call an alternate definition force-once, where evaluation can be entered once only (so recursion does not evaluate again, even if the first evaluation entered has not yet returned). In single-threaded code, force-once cannot always return the same value, since a recursive call has no idea what value will eventually be returned by the first evaluation, which has not yet returned itself. If forcing a promise must always return the same value, then force-once doesn't work, but force-rec does, as you defined above.

Must the Scheme standard have one and only one version of force? As opposed to offering either force-rec or force-once as an alternative? While that seems like a trivial thing to decide, I imagine it requires endless discussion anyway, so I withdraw interest in the answer now after asking. :-)

Instead I want to pop up one meta level, because I'm interested in definitions per se, including concerns like: what makes a good definition? When possible I like short and clear definitions ... so among alternatives, I pick choices easier to cover thoroughly in few words without twisty concepts. The definition of force-rec is on the twisty side, while force-once can be defined more simply, which is friendly to new users.

When I tried to get peers interested in Scheme about twenty years ago, I said evaluation rules were easy to understand — atoms evaluate to themselves, while pairs (i.e. lists) follow a two-part rule: 1) if the car is a symbol from a certain list, then it's a special form defining evaluation in some way unique to each symbol; otherwise 2) each list member is evaluated before dispatching to an executable object in first position. Macros complicate things, but I'm going to totally ignore that here.

In the context of explaining evaluation rules this way, one wants to summarize when each special form evaluates an argument, so you can briefly summarize what each special form does differently. For lazy evaluation, delay returns a promise which avoids evaluating its expression until force is called, which then evaluates the expression once the way it would have in the original environment context (modulo changes in current values of variable bindings). That description sounds like force-once. A new user might hear the description of force-rec and say, "Nuh-uh, I'm not using your nutball language," because it sounds complex.

It might come down to whether you want definitions simpler for new users or for implementers who want to follow simple rules and make users responsible for outcomes in odd edge cases.

There are three principled versions of 'force' to consider.

There are actually three versions of 'force' that, IMO, are principled enough to consider for inclusion.

The first, you dub 'force-rec' because it allows recursive evaluation of force. To implement force-rec you have to check for a completed promise twice in each evaluation; once before you enter the evaluation of 'force' and once when you're about to return. In this semantics of 'force' you have p = 5 and x = 10.

The second also allows recursive forcing, but checks for a completed promise only when it's starting evaluation; in this version (which some have been arguing for) the first force *called* sets the value that the promise will return forever after, and any recursive invocations that complete *before* the first force called completes may return other values. In this version p = 10 and x = 10.

The third version does not allow recursive forcing, which means it can be used as a synchronization primitive or guard form on code that you want to be certain executes at most once ever. It's a bit trickier to implement, though. In this version, no 'force' of the same promise may execute code except the first one called. Any calls to force the same promise that occur *while* the first one is being executed wait, returning the value that the first one returns.

This means that any call to 'force' whose return is required for the evaluation of the same 'force' to continue, logically results in a deadlock of the calling thread.

What's less obvious about this is that due to reified continuations, you can achieve this dubious state even in the single-threaded case without a directly recursive call to force, or avoid the deadlock even in the single-threaded case when there *is* a directly recursive call to force. So the simple rule "no recursive forcing" does not suffice to describe its legitimate use, and straightforward static analysis of the sort that can be done by a code walker cannot absolutely prove or disprove the error.

When I'm looking for a "good" definition, I definitely am looking for simple, consistent rules I can explain to people about how to use something, and for the existence of definite implementation strategies that will reliably give correct results. Finally, it helps if there's a clear strategy by which non-error usage can be statically proven, though I'm not such a fanatic about this as many people are.

IMO, the third definition fails these tests. It's principled enough to consider, because I can unambiguously explain what it does and it's clear how to implement it correctly. But in the presence of reified continuations, neither the rules I can explain to people about how to use it nor a strategy for statically determining whether a usage is non-erroneous easily present themselves, and for those reasons I don't like it as a standard form nor as a building block for anything.

Ray


What are they smoking?

There are only two sane choices for a semantics of lazy suspensions, as far as I can see:

1. If the language is concurrent, then accessing a suspension that is already triggered should just block until it has been resolved. In such a setting, lazy suspensions are merely a special case of futures.

2. If the language is not concurrent, then blocking always is a deadlock, because there is no other thread that could make progress. In such a language, it may be preferable to simply throw an exception when force encounters a suspension that has already been triggered.

Starting to evaluate a thunk multiple times -- that's just insane. I don't see why continuations should make a difference either.

Here is the straightforward canonical ML implementation of sequential suspensions:

type 'a lazy_state = Delay of (unit -> 'a) | Running | Value of 'a | Exn of exn
type 'a susp = 'a lazy_state ref    (* "lazy" is a reserved word in OCaml, hence "susp" *)

exception Circular

let delay f = ref (Delay f)

let rec force susp =
  match !susp with
  | Delay f ->
      susp := Running;
      susp := (try Value (f ()) with e -> Exn e);
      force susp
  | Running -> raise Circular
  | Value x -> x
  | Exn e -> raise e

Continuations make it very

Continuations make it very difficult for a static analysis of the code to PROVE, before running the program, that the program will never force the promise while it's already running. You can start running the lazy suspension, jump out of it via a continuation to do a lot of unrelated stuff, and then jump back into it via another continuation.

I don't like "the system throws an exception if this happens" situations in which it isn't possible to prove whether the exception will or won't be thrown, or to check before you do the thing that may cause the exception. If you can't check or prove, then at least there should be a semantics that simply doesn't do the thing in the exceptional case, and returns to tell you it didn't.

IOW, for the version that protects run-once code, I would want "trigger if not already running and tell me if you succeeded in triggering it" semantics as opposed to "trigger at any time and throw an exception if it's already running" semantics.

How is not signalling better?

I don't understand your argument. How is deadlocking or silently doing other crazy things any better? It certainly doesn't make static analysis easier. And if you can't detect an error situation statically, then surely the next best thing is to get a runtime error as early as possible?

What remains is just an instance of the general question of how to best signal a runtime error: via an exception or via an error result? Both are equally expressive, but encourage different styles. The established rule of thumb is to use exceptions only for exceptional situations -- but this surely is one. Unless you are opposed to exceptions on principle.

exceptions considered harmful.

Deadlocking is not better, particularly in a single-threaded program where the deadlock cannot be broken. In fact, I would consider that to be the same as a program crashing out with a type error: an indication that I did something both avoidable and stupid. If it *isn't* avoidable, then it is the language/runtime that's stupid.

I dislike exception mechanisms to the extent that they muddy clarity about control flow and obscure the actual point of origin of the exception. With exception mechanisms, IMO, it is too easy to get distracted by looking at the "golden path" in the code and forget to handle all the things I need to handle.

"Oh, that throws an exception, I can do it later/think about it separately," most of the times I have done it, has led to sloppy code, bad handling of anything off the "golden path", resource mismanagement, flow-of-control puzzles, loss of needed information from the point where the flow-of-control jump originated, and bugs that do not easily admit of being tracked down to a definite specific reason or a definite specific situation, and are thus more difficult to fix.

Also, I find that needing to *avoid* trouble rather than trying to figure out how to *recover* from trouble usually provokes the kind of careful thinking that leads me to produce results that are in most ways superior. I do way better at releasing resources correctly, for example, when releasing them in the same routine that allocates them.

In fact, I think the *idea* of exceptions, as usually implemented, is flawed. These situations are not exceptional, they are ordinary and need to be handled just as well and just as precisely as anything else. Mechanisms that separate their handling from some particular selected path have primarily facilitated my *failures* to handle them as well and as precisely as anything else.

Finally, I notice that languages which promote immediate and local handling of all possible results as opposed to separating some of them as "errors" or "exceptions" are by and large the main languages that rock-solid reliable software is built in. Software written in exception-using languages may reach prototype quicker, but as far as I've seen and experienced, it definitely does not achieve high levels of reliability and quality quicker.

For all of these reasons, I find that I produce better, clearer, more reliable code when I do not use exception mechanisms. Whenever I start treating an exception as something to "recover from" at some remote point after a flow-of-control jump, rather than something to be avoided, handled immediately at the point where it happens, or evidence that the code producing it has a bad design and I need to do better, I start writing bad code.

Inconsistent state

If a sequential program ever reaches a point where it is about to deadlock, then clearly, that program is buggy, and has reached an internally inconsistent state. There is not much that can sanely be done at that point -- the code cannot know what's wrong and how to fix it, and even if by some magic it did, it can hardly do so locally. So bailing out is the safest option.

Two data points regarding exceptions in general:

Google. Our C++ style guide forbids the use of exceptions (because they are broken in C++). By now there are hundreds of millions of lines of code written in that style -- and every colleague I have ever talked to thinks it is a horrible regime. It creates vast masses of distracting boilerplate, because it forces you to manually monadify all your code, which quickly gets out of hand and makes it far easier to introduce additional mistakes or accidentally mask errors (and I have seen enough bugs of exactly this kind).

Erlang. Their approach to dealing with failure is based on the observation that locally handling unforeseen errors simply does not work in practice. Instead, their philosophy is the opposite: let it crash. I.e., simply throw, and recover and retry at a higher level. And for some reason, they seem to be more successful at writing reliable, fault-tolerant software than most of the competition.

Remember, complexity is the enemy of correctness. Trying to handle all errors at every level significantly increases complexity. Your state space explodes. Granted, there is a danger of overusing exceptions. But not using them at all tends to be worse for all but formally verified software.

Google. Our C++ style guide

Google. Our C++ style guide forbids the use of exceptions (because they are broken in C++).

Is this a sufficiently comprehensive summary of the reasons C++ exceptions are too much of a hazard?

Instead, their philosophy is the opposite: let it crash. I.e., simply throw, and recover and retry at a higher level. And for some reason, they seem to be more successful at writing reliable, fault-tolerant software than most of the competition.

Hard to say why with any certainty, but to my mind, Erlang's philosophy would seem to encourage more coarse-grained failure boundaries, so it's perhaps easier to keep straight in your head and manage. Tracking lots of fine-grained exception paths seems problematic if you don't have automated help.

Given your comments, would you say that something like checked exceptions are your preference, with the caveat that exception signatures need not be listed explicitly, but be propagated by effect inference?

Both

I know you weren't asking me, but IMO you want both: non-local control flow via effects/resumable signals for recoverable errors, and also compartmentalized failure at coarse boundaries. The effects mechanism is another instance where I think we're better off inferring signals without code annotations but giving explicit feedback for auditing via the IDE. On the other hand, there are a number of cases where resumable signals aren't appropriate. Out of memory and assertion errors come to mind.

Exceptions

Is this a sufficiently comprehensive summary of the reasons C++ exceptions are too much of a hazard?

The problematic interplay with manual memory management, as mentioned there, is one issue. Another that is not mentioned is the specific problem that, even when you use smart pointers and RAII everywhere (which hardly any code does), then you still need to ensure that destructors never throw, because that leads to a "double exception" condition during stack unwinding and immediate program termination. But for non-trivial destructors it is very difficult to be sure.

Given your comments, would you say that something like checked exceptions are your preference, with the caveat that exception signatures need not be listed explicitly, but be propagated by effect inference?

On a general note, I don't believe in static checks that rely on whole-program style inference that does not work across compilation boundaries. For proper modularity, the programmer must be able to write down any necessary information eventually.

I'd love to see an effect system that really works well in practice (exception annotations are just an ad-hoc instance of that), but so far I haven't. The explicit exception annotations in mainstream languages certainly suck big time, because you cannot even abstract over the annotations -- i.e., the mechanism is inherently first-order and incompatible with abstraction.

evidence erasure

I like exceptions only provided no information is erased about what happened, if you look for it. Among other things, what I really want is a count of how many exceptions were thrown for each unique backtrace, where zero is the ideal count of course, represented trivially as absence of a backtrace from a map.

Andreas Rossberg: So bailing out is the safest option.

I was going to say how much I liked Ray's take on exceptions, and that I agreed with nearly every point (which might lead one to conclude using exceptions was a bad idea). Especially good is the idea of thinking about ways to avoid trouble by design -- it's the thinking that really helps. But you argue the other way with similar high quality points, encouraging use of exceptions to control complexity. Is it a contradiction to agree with you too?

Bailing out is good, but losing evidence about what happened is very bad. Info from the thrower (who sees what is wrong) is much better than info from the catcher (who just sees a lack of results). For example, a main problem with C++ exceptions is that by default they lack critical info: a (representation of the) throwing backtrace is not standardized or provided, even within third-party libraries that mysteriously exit. Blame attribution and increases in quality are hard to get without actionable data.

The Erlang approach of crashing is fairly good, especially if information about this is not erased. Server code in C (etc.) can simulate this by "crashing" a failed request without killing others. I sometimes use a mixture of callbacks and finite state machines so an FSM can effect a non-local exit, bailing out from callbacks -- but it always aims to crash just the async request. (If I were using green threads with continuations, this would be far more straightforward.) However, every place this can happen must be counted and made visible in some way, or else diagnosing problems is really hard.

Fault-tolerant software can start hiding partial failure early in the development process, before you find all the kinks. When there are hundreds of possible reasons why a task can fail to complete, the fact that one task fails doesn't provide any useful lead to study. It's easy to get a server that doesn't crash, but just says no to every request, very politely. Really irritating is a 95% no rate when you expected only a 0.1% no rate. Since no is an acceptable outcome, how can you complain when it happens in each case? What if all of them have a plausible excuse? Hmm.

It's been a couple of years since C++ was my main implementation language. But I hated the use of exceptions when I found it, because it's hard to debug when you can't easily infer whether having made it to point A and point C must mean you also made it to point B in between. Maybe not, if an exception was thrown. I added a thread-safe exception-backtrace hashmap to the last couple of projects I found using exceptions, just so I could ensure each one left behind minimal evidence. But that also required changing each throw location, if I could not add backtrace interning to an exception's constructor.

I'm not sure I ever met an engineer who said, "Great, these exceptions will help me figure out what is going wrong!" Instead they are often used as an excuse not to think, and as a convenient means of hiding from evidence of problems, especially if no one looks closely at actual incidence rates. I know far too many coders eager to erase signals of a symptom in preference to finding the root cause. As bad as that tactic is for a team, it can be done quietly and is thus safe from ready criticism.

Language vs implementation

I agree that you want to get a stack trace when you encounter an uncaught exception. But that is not an issue with the language mechanism as such but with the quality of its implementation in some compilers.

char-ready? for promises

If you can't check or prove, then at least there should be a semantics that simply doesn't do something in the exceptional case, and returns to tell you it didn't.

Ah, I understand that. Taking this further, for any operation that could have a side effect, it would be nice to know, before we invoke that operation, whether the side effect will happen. If a call to force can sometimes block the thread, it would be nice to detect whether that block will occur, perhaps using procedures in the style of char-ready?.

The unfortunate case in a non-threaded program happens when we force a promise that hasn't yet been fulfilled but has already triggered. If we force this time, we're not causing any progress in the program; we're just dooming ourselves to deadlock. This is probably the condition (or combination of conditions) that would be the most useful to detect.

Edited to add: If procedures existed to detect this, would you still have reasons to prefer the other two of the "three principled versions" you mentioned above?

promise-executing?

Okay, postulate the existence of a boolean query named 'promise-executing?' that tells you whether a promise has been triggered (is executing) already.

In the single-threaded case (i.e., consistent with Scheme's basic design), that's sufficient. You can check 'promise-executing?' before any call to force a promise and be sure that you won't deadlock.

In the multi-threaded case (i.e., consistent with a not-too-uncommon extension to Scheme's basic design and one possible future direction for the language) it's not completely sufficient, because another thread may start execution of the promise between the time you check and the time you call it. So you might assume you're not going to block because 'promise-executing?' returned false, and then call the promise and find that you're blocking anyway.

In the multi-threaded case, however, it is *practically* sufficient, because it would be willfully perverse to have the completion of a promise depend on the success of a call completing the same promise in a different thread. So, you call promise-executing? to check, it says false, some other thread starts executing the promise, then you call the promise and block although you thought you wouldn't -- but the other thread executing it will not be blocked just because your thread blocks, so deadlock will not (normally) result.

At that point I believe I wouldn't really have a reason to prefer either of the other two versions over the third.

Par for the course

So you might assume you're not going to block because 'promise-executing?' returned false, and then call the promise and find that you're blocking anyway.

That's a good point. Fortunately (or unfortunately!), this drawback also applies to char-ready?, since a return value of #t can be misleading if another thread is about to read from the same stream. In this sense, it would at least be consistent with the R7RS small language, even if these building blocks aren't as expressive as they could be in the context of preemptive threading.

Oh, it looks like Racket already has procedures like this: promise-running? and promise-forced?.

Yeah but why would you do that?

From the char-ready? case I think I'm comfortable concluding that having different threads trying to destructively read characters from the same stream is a design error.

The only case in which the threads get ungarbled messages is when the stream transmits complete messages exactly one character long according to a stateless protocol. And that happens ... rarely. I imagine a simple position sensor or something might do something like that.

Even then, why multiple threads destructively reading the same channel? What's reading the position that's so important it can pre-empt messages the other threads would like to have read, but not important enough to prevent itself from being pre-empted by the other threads reading messages it would like to have read?

I actually wouldn't do that

From the char-ready? case I think I'm comfortable concluding that having different threads trying to destructively read characters from the same stream is a design error.

I agree. I've been afraid of this particular issue a few times when I wanted to write REPLs that ran within REPLs, but so far the outer REPLs have been accommodating, always waiting to do further reads on stdin until the currently executing command is complete.

Sounds like you're making the point that there's no similarly cut-and-dry design guideline about sharing a promise among multiple threads. If two threads block on one promise at practically the same time due to the gap in (if (not (promise-running? p)) (force p)), and that somehow breaks the program--even though just one thread blocking would be fine--what have we done wrong?

Well, perhaps the guideline can be the idea that lazy computation and time-sensitive side effects are a hazardous combination. Promises may accommodate non-time-sensitive side effects such as loading static resources, doing expensive pure calculations, or forcing other promises. (Otherwise, what would promises be good for?) However, it's not appropriate to have a promise whose completion is contingent on the progress of a thread that might be blocked on that promise. To generalize this defensively, if a promise needs some thread to be responsive, that thread should never block. When the thread needs to force promises, perhaps it should do so on worker threads.

I'm not familiar with much of the practice of concurrency, let alone the theory, so I won't be too surprised if there are serious problems with this guideline.