Is Go-style defer-recover exception handling better than try-catch style?

The main advantage of defer-recover exception handling over try-catch is that it doesn't complicate the program's control flow. This not only makes programs using defer-recover more readable, but also makes the feature much simpler to implement in a language than try-catch exception handling.

On the other hand, the main disadvantage (in Go) is that it does not allow handling exceptions close to where they may arise; normal execution has to resume in the caller of the function that handles the exception.

Recently I added Go-style panic/exception handling via defer-recover to my hobby language Dao (it was mentioned here before, but there have been many changes since then). I solved the above problem in Dao by supporting a new type of code block (tentatively called a framed block for convenience). A framed block allows exceptions to be handled near the place they arise, with normal execution resuming right after the framed block (that is, in the same function where the exception is handled). This is achieved by executing framed blocks in new stack frames, which reuse/share the stack data of their surrounding functions and trigger execution of deferred blocks at exit as usual. You can find more details in a blog post I just wrote.

Are there many people here who find Go-style exception handling interesting? I wonder whether my improvements could make exception handling by defer-recover more interesting, and better than try-catch.


No

The main advantage of defer-recover exception handling over try-catch is that it doesn't complicate the program's control flow. This not only makes programs using defer-recover more readable, but also makes the feature much simpler to implement in a language than try-catch exception handling.

I don't see how any of these statements is true. AFAICT, it is straightforward to locally transform uses of one mechanism into the other, i.e., they are equivalent. Hence, Go's mechanism is just try-catch in (ugly) disguise.

local transform

What exactly is this local transform you are speaking of? Is it theoretical or practical? As in many cases, theoretical equivalence does not guarantee practical equivalence; otherwise there would be no point in having different languages.

Also, when you say "locally", do you mean without adding extra layers of function calls? I don't see how that is practically possible.

The source of the control-flow complication in try-catch is that exception handling is scoped by try{}: when an exception happens, it is necessary not only to unwind the call stack, but also to check the scopes formed by (potentially nested) try{}s and to search through and execute the "catch" statements.

Please tell me how the control flow in the following try-catch is not more complicated than defer-recover. Or please write a piece of defer-recover code that has control flow as complicated as this try-catch example.

try {
    try {
        ...
        raise ...;
    } catch (...) {
        ....
        raise ...;
    } catch (...) {
        ...
    }
    ...
} catch (...) {
   retry ...;
}

Transform try/catch

Transform try/catch to:

func(){
  defer func(){ 
    if r := recover(); r != nil {
      CATCH BLOCK HERE
    }
  }()
  TRY BLOCK HERE
}()

and transform raise to panic.

This is based on a two-minute reading about Go's defer/recover, so YMMV.

What are you trying to prove?

Of course, transforming from one to the other is easy; I don't need you to prove that. But what I need is a transformation that:

    • does not involve adding extra layers of function/closure calls (a local transform);

    • demonstrates that the defer-recover version has the same or more complexity in its control flow.

    It is strange that you apply

    It is strange that you apply the arbitrary restriction that you can't use function expressions. The transform I gave is perfectly local. Local doesn't mean that you can't use function expressions, it means that the transformation substitutes something in place of the original without changing the rest of the program. For example it's impossible to give a local transformation for call/cc, no matter how many lambdas you introduce. For defer/recover you can do an intricate transformation involving gotos to dispatch to the correct catch block. That way you avoid introducing closures, but THAT would no longer be a local transformation because it would need to mess with the surrounding expressions.

    The insistence on no function expressions is especially strange given that it is Go that wants to entangle exceptions with function scope. Of course you're going to need extra functions for a local transformation then. The lesson here: don't entangle your constructs; it's a well-known mistake (cf. variables always being in function scope in old versions of C, or iterators in Python, or the lack of multiline lambdas in Python, or indeed languages that lack lambdas completely but do have named closures, like older versions of C# with delegates).

    More complicated?

    First of all, clearly, the following:

    do something;
    try {
      do something that might fail;
    } finally {
      clean up;
    }

    makes the inherent control structure far more apparent than

    do something;
    defer (func() {
      clean up;
    })();
    do something that might fail;

    So the former is more readable, not less. Unless by "readable" you mean superficially simple but obscuring the actual complexity and order of execution. Actual complexity is the same, anyway.

    "Locally transformable" is the criterion for relative expressiveness, as devised by Felleisen: if you can locally rewrite (think: macro-expand) one language construct into another, i.e., without a whole-program transformation, then it is no more expressive. Whether you introduce auxiliary local functions is irrelevant.

    As for stack unwinding, you always have to do that. All Go does is couple exception handlers with function frames. But such coupling is not a simplification in any interesting dimension; it's merely a conflation of concerns.

    As for your example, what is your point? You can write the equivalent in Go, but you need to introduce a function wrapper for the inner 'try' (I assume you can imagine how). How does that possibly make anything any simpler? When you actually need such a nested handling structure, Go's conflation of concerns makes it unnecessarily complicated to express. When you don't need it, the complaint is moot. Where's the win?

    Theoretical vs practical

    The example you give looks less readable in Go because you are writing it in Go; in my language, it can look something like:

    do something;
    defer {
      clean up;
    }
    do something that might fail;
    

    And if you want the program to resume in the same function where it fails:

    do something;
    @@ {
        defer {
              clean up;
        }
        do something that might fail;
    }
    

    At the syntax level, it may appear that try-catch-finally is simpler, but as I mentioned in my previous reply, it adds extra work to stack unwinding, because the runtime has to look for the right places to start exception handling. By coupling exception handlers (deferred code) to stack frames, you do not need to search for handlers; you simply call the deferred expressions (blocks in Dao). If you have implemented both, you will see the difference at the implementation level.


    Whether you introduce auxiliary local functions is irrelevant.

    Theoretically speaking, of course. But there are significant differences at the practical level (for example, in implementation). I do not disagree with you on the theoretical equivalence between them; what I was trying to say concerns their implications at the practical level.

    As for my example, my point was to show the complication of control flow in try-catch, especially when it is overused. No matter how you write defer-recover, there is no way to end up with control flow that complicated; after all, the deferred code is handled per stack frame. Of course, defer-recover cannot easily express something equivalent to that example, which ensures users will not attempt it and avoids the kind of code that reduces readability.

    Just to avoid confusion: when I say complication/complexity in control flow, I do not mean syntactically; I mean the complication at runtime in handling it.

    Lost

    it adds extra work to stack unwinding, because the runtime has to look for the right places to start exception handling. By coupling exception handlers (deferred code) to stack frames, you do not need to search for handlers; you simply call the deferred expressions (blocks in Dao).

    Sorry, I really don't know what you are talking about. The difference is purely syntactic. You can use the exact same implementation strategies for both syntaxes. A try-block can just as well be represented by a stack frame if you choose so, and indeed, many implementations do that. How's that different, and how is it anything else but a secondary implementation detail?

    Also, if I can nest blocks in your language, can't you directly transcode your previous example?

    A try-block can just as


    A try-block can just as well be represented by a stack frame if you choose so, and indeed, many implementations do that.

    OK, I was not aware of this. But I am still not convinced it can be made as simple as implementing deferred blocks. Considering that there are "catch" statements and possibly "retry" statements, just representing a try block as a stack frame is probably not sufficient; I suspect the catch and finally blocks may also have to be represented by stack frames. Also, catch blocks are conditional, and if a language supports retry, the retries will need to jump to the proper stack frames or create new ones. Putting all this together, I cannot imagine how this could possibly be as simple as deferred blocks. Probably you are a bit too optimistic about the implementation details.

    No difference

    Rest assured that I know the implementation details very well. ;) The catch or finally blocks are equivalent to your defer block, no difference. Catch is only conditional in some languages, and in those you can simply run the test inside the handler and rethrow if it fails, no extra magic required. (Of course you can try to be more clever with optimisations, but there is no need.)

    Really, there is nothing to see here, it's all just superficial differences.

    A 'retry' statement is a different story, it's practically equivalent to call/cc, so you need more powerful mechanisms to implement it. But virtually no language supports try-catch with retry, so I don't see how it's relevant to the discussion.

    retry


    But virtually no language supports try-catch with retry, so I don't see how it's relevant to the discussion.

    Eiffel supports do-rescue with retry. My language supported try-catch with retry too, before switching to defer-recover. Probably this is one of the reasons I saw more complexity in it.

    Well, I am assured you know the implementation details, but I doubt you have really realized how simple it is to support defer-recover;) .

    To tell the truth, I was initially quite skeptical of Go's defer-recover panic handling, like most of you, and never considered supporting it in my language. But after I added something like Go's defer, supporting defer-recover was as simple as adding the panic() and recover() built-in methods: no parsing work, no runtime modification!

    I have been trying to keep the implementation of Dao small; the temptation to have exception handling with virtually no implementation cost was too great to resist! So I went ahead and replaced the original try-catch with defer-recover.

    Anyway, try-catch and defer-recover both have their own problems, the choice may depend entirely on the principles of the language and personal preferences of the developers.

    Given that I have provided a

    Given that I have provided a trivial syntactic transformation from try-catch to panic/defer, it can't be the case that try-catch is significantly harder to implement. Of course a language that cares about performance would never implement exceptions in this way, because it's slow.

    I initially thought it is

    I initially thought it was significantly harder because I had often considered "retry" a part of it (just to mention, that transformation does not work for try-catch with retry). Without "retry", the implementation of try-catch is not as hard as I thought, but in any case it involves more implementation work to support try-catch than to support defer-recover (if defers and closures are already supported).

    it involves more

    it involves more implementation work to support try-catch than to support defer-recover (if defers and closures are already supported)

    Yes. But no more to add catch when you already have try-finally. Which would be the appropriate comparison.

    Defers and closures have

    Defers and closures have uses other than exception handling, but try-finally without catch? Maybe you are just joking.

    How?

    Defer without recover is try-finally, so what other uses can it have?

    Closure clearly has other

    Closures clearly have other uses. As for defer, it depends. In general use, yes, they are the same. But in more specific situations, defer does have uses that try-finally does not, as defer has different semantics and is a lot more flexible than try-finally. For example, in Go you can do:

    for i := 0; i < 4; i++ {
        defer fmt.Print( i )
    }
    

    I don't see how you can do it naturally using try-finally.

    actions = []try { for


    actions = []

    try {
      for i := 0; i < 4; i++ {
        actions.add(func(){ fmt.Print(i) })
      }
    } finally {
      for f in actions { f() }
    }
    

    Of course it's ugly, but that's a good thing. Bad code should be ugly.

    In the code, something like

    In the code, something like pushfront() could be used instead of add() to make it clear that the actions are called in the reverse order of being added to the list.

    This ugliness is clearly due to the semantic differences.

    Defer alone does not mix well with abstraction

    You could decide that it is possible to prefix a statement with `finally` inside a `try { ... }` block, with the semantics that it gets added after the corresponding `finally { ... }`. That new feature (or syntactic sugar, depending on how you see it) would bridge the difference you're pointing out.

    Then comes the question of whether it actually makes a difference in expressivity. As Andreas noted, the common way to ask this question is to think in terms of the locality of the transformation. Is this translation of `finally`-prefixed statements into the normal "finally" block a local transformation? Well, it depends; it is certainly local to the "try { ... } finally { ... }" expression, but inside it performs non-local code displacement.

    Notice that with this presentation, the "scope" of the defer/finally semantics is explicitly decided by the smallest enclosing "try { ... }" block. This is a stark difference from the feature as present in Go, where this boundary is implicit and rather arbitrarily (from a user's, not an implementation's, point of view) set to the enclosing function. When we say "the smallest enclosing try {...}", are we speaking of the static extent (the one statically enclosing it in the program syntax), or the dynamic extent (the first "try" frame in the call stack)?

    - If we are using the static extent, then this feature does not mix well with abstraction: factoring redundant code out into a function may break its semantics if the code is inside a "try {...}" block and uses finally/defer. This is also a property of Go's "defer" semantics: it is brittle with respect to abstraction (factoring out code).

    - If we are using the dynamic extent, then this feature becomes more powerful and its meaning is preserved by refined function boundaries, but it also has a higher runtime cost (I haven't given much thought to it, but I suppose in the general case you would register function pointers dynamically, as Jules' code does, instead of doing a static source-to-source transform).

    Note that your change (framed blocks, essentially one implementation of try..with) plays along this boundary as well. As you describe it, it mostly allows setting the finally/defer extent to one "smaller" than the smallest enclosing function, but you could allow a dynamic extent larger than the function by walking the stack for a framed block to which to attach the finally/defer-red behavior.

    It would be easy to provide

    It would be easy to provide that behavior by making a global actions_stack variable. When entering a try, push an empty collection onto actions_stack. When exiting the try (in the finally), pop the topmost collection and run all the closures in it. Or use a dynamically scoped variable for the actions.

    BTW, a question for Limin Fu, does this code:

    for i := 0; i < 4; i++ {
        defer fmt.Print( i )
    }

    print "4444"? or does it print "3210"?

    It prints "3210". The values

    It prints "3210". The values of "i" are captured when the argument of "fmt.Print" is evaluated, each time.

    So what happens if I

    So what happens if I do:

    defer fmt.Print(f(i))

    When will the f(i) be evaluated?

    Standard practice is that

    Standard practice is that "f" and "i" are captured, but the application result of "f(i)" is not. I'm not sure if Go does that, but I'm betting they do.

    C# differs from this standard behavior by re-evaluating f and i to their latest values in lambdas. Annoying, especially when "foreach" was broken until recently.

    Are you sure?

    Maybe I misunderstand what you mean, but I don't actually see this being standard practice -- most imperative languages like to treat for-loops as assignment to a single mutable loop variable, instead of binding to a per-iteration variable. The proliferation of C-style for-loops certainly hasn't helped.

    There never has been a good reason for this, and once a language grows up and acquires closures, it's just downright wrong. A major pitfall, and one that isn't simple to work around.

    (I looked it up, Go does the same, and the only reason it does not affect 'defer' is a hack in the language spec regarding its evaluation.)

    I think I said this

    I think I said this completely wrong, let me try to re-explain it the right way.

    One decision that was made in Java (I think by Guy Steele) was the restriction that mutable local variables could not be accessed in anonymous inner classes (which are close to closures) by having these variables marked as final. So they avoided the issue by disallowing it, which I think was the right decision.

    C# went with the idea that mutable variables were just like fields, which could be accessed and assigned in both delegates and anonymous inner classes. I guess C# was trying to be more consistent in this way, but I always found the behavior annoying, especially when it came to foreach loops; it wasn't obvious to me that the iteration variable was mutable: if we considered a hypothetical IEnumerable[T].ForEach(Action[T]) method, the iteration variable would be recreated on each call ForEach made to the action. I believe this was the reasoning they used to re-spec iteration variables as immutable in the latest version of C#.

    So my previous post was wrong: the proper behavior in C# and Java was always to capture local variables as references, not values; it just happened that Java avoided the issue completely by disallowing the capture of mutable references.

    I think both your points are

    I think both your points are reasonable: loop variables should not be mutable variables, each new iteration should have a fresh binding in scope (as you would implement with a higher-order function), and on the other hand capture of mutable variable should be made explicit because of the surprising aliasing behavior.

    Forbidding them is maybe a bit restrictive. In ML, there is no user-visible notion of a "mutable variable" (and I think that's the right choice for a high-level language), but "references" that have an explicitly different type and usage, so capturing everything by value (which means "by reference" in the case of references) works well and should not surprise people. I think the C++ choice of forcing people to qualify the reference-ness of captured variables in closures is reasonable for a low-level language; practitioners seem to hate it because having to list captured variables is painful, but maybe forcing people to explicitly list (and qualify with & or not) only the variables that are used in a mutable way would be a reasonable compromise.

    Andreas > I was quite amused by your remark on the semantics of Go's defer statement. Sometimes I find that you're a bit too aggressive/harsh here on LtU, but this one was a good shot. The informal prose of the "specification" is quite vague, but I think the described behavior is actually quite interesting: they require the expression to be syntactically a function/method call, `Foo(<bar, ...>)`, and promise that both the function lookup and the arguments will be evaluated, and only the *call* itself is delayed¹.

    In an optimistic sense, that is actually a reasonable semantics for a "lazy application" (in the sense that a language with no type-level distinction between strict and lazy values could have two explicit syntaxes for function application in its abstract syntax: (strict <fun> <arg>) and (lazy <fun> <arg>)). The "clean" thing to do would be to have a primitive construct that returns this as a lazy thunk, instead of atomically moving it to another block etc., but it's still nice that their ad-hoc solution actually turns out to be an already-understood language feature.

    ¹: note in particular that with the fmt.Print(f(i)) example, this would suggest that `f(i)` is evaluated fully and not deferred.

    Nitpicking

    So my previous post was wrong: the proper behavior in C# and Java was always to capture local variables as references, not values; it just happened that Java avoided the issue completely by disallowing the capture of mutable references.

    As far as I understand, the spirit of this is right, but the details are backwards, at least for Java. My understanding is that Java always captures local variables by value, and the requirement that they be declared 'final' exists only to prevent surprising semantics. In other words, I believe Java traditionally desugars:

    Foo f() {
       final int i = 0;
       return new Foo() { public int getI() { return i; } };
    }
    

    into something like:

    class Foo$0 implements Foo {
       final int i_;
       Foo$0(int i) { i_ = i; }
       public int getI() { return i_; }
    }
    Foo f() {
       final int i = 0;
       return new Foo$0(i);
    }
    

    This has the great advantage of requiring no new fundamental semantics (i.e., no "real" closures over local variables), but has the serious disadvantage of being extremely surprising if i is mutable. Therefore, force the user to declare i final and she will never be the wiser.

    And of course, since non-primitive types are always heap-allocated and passed by "pointer-value," the distinction is often irrelevant and is trivial to circumvent when needed.

    Interesting

    Interesting! And scary. Given this example I agree that there is a notable difference between 'defer' and 'try'. However, I wonder how you reconcile this example with your original claim that 'defer' is simpler and has simpler control flow. It seems to prove the opposite, since its semantics is inherently tied to implicit mutable state. ;)

    Go 'error handling' is just plain awful

    When I start thinking about error handling in Go, it makes me swoon. Where to begin?...

    I've often seen the equivalent of the first line of this post in arguments in favor of Go's panic/defer; that is, this line...

    The main advantage of defer-recover exception handling over try-catch is that it doesn't complicate the program's control flow.

    First of all, I don't see how panic/recover has any advantage over try/catch with regard to separating error handling from normal control flow; panic/recover is not an advantage, it's merely different.

    Second, Go's designers decided that error handling by returning error codes from functions is the idiomatic Go way of handling errors. Any error that is meant to be handled is returned as a value. So error handling in Go really means checking the return value of every method and function call for an error value and then handling the error, inline with your normal code. Real Go error handling isn't about separating error handling from normal control flow; idiomatic Go error handling requires you to pollute your normal control flow with all sorts of error-handling logic.

    Lastly, the biggest problem with Go's panic mechanism is that the only errors meant to be handled by it are errors that are not meant to be recoverable in the first place. This makes the entire panic/recover mechanism in Go theoretically useless, and mostly useless in practice.

    To summarize, Go fans will argue that defer/panic is a better error handling style than try/catch because it separates error handling from normal code and thus leads to cleaner code. Then they turn around and argue that, instead of separating error code from normal code, it's best to handle errors inline with your normal code! Best of all, they reserve the panic mechanism for handling errors from which there is no recovery anyway! Awesome! I find this lack of logical consistency to be pervasive throughout Go, which is why I really, really, really dislike it.

    most points you are criticizing are NOT my points

    I didn't say plain Go defer-recover is better; "Go-style defer-recover" is not "Go defer-recover". You probably didn't finish reading my post, in which I specifically asked "if my improvements could make exception handling by defer-recover more interesting, and better than try-catch".

    Clearly you also didn't follow the link to my blog, in which I criticized Go's error handling by returning error codes as well. I actually agree with you on this point about Go.

    Also, I never said anything about separating error-handling code from normal code. In fact, I believe it is important to be able to handle exceptions close to where they arise, and this is the reason I added some improvements in my implementation of defer-recover style handling.

    When I compared the advantages/disadvantages of defer-recover and try-catch, I was merely speaking about their effects on control flow, which is a totally different thing from code separation/inlining. To see why I say try-catch may have more complicated control flow, please read the last two paragraphs of my reply to Andreas.

    PS. I am not a Go fan to argue for Go, just in case you got the wrong impression.

    True - Dao is not Go

    You are correct, your reference to Go made me start frothing at the mouth and I went on a rant that was not relevant to Dao, apologies. But now that I've taken a look at Dao I agree with some other posters here that there doesn't seem to be much difference between defer/recover and try/catch.

    With regard to your assertion that defer/recover is cleaner than this try/catch block...

    try {
        try {
            ...
            raise ...;
        } catch (...) {
            ....
            raise ...;
        } catch (...) {
            ...
        }
        ...
    } catch (...) {
       retry ...;
    }
    

    It seems to me that with defer/recover you can only write the equivalent of a try/catch block that has this form...

    try {
         ...
    } catch (...) {
            ....
    } catch (...) {
            ...
    }
    

    So the complicated try/catch that you present is not even implementable in Dao. It doesn't seem valid to say that defer/recover is somehow cleaner just because it's not possible to implement a complicated error handling routine.

    What would be valuable to me as a programmer would be to combine pattern matching and try/catch or pattern matching and defer/recover ala Scala.

    One other question occurred to me while looking at the Dao docs... are method calls only polymorphic when called through an interface (which is yet another problem I have with Go)?

    One last suggestion - don't use bright yellow for highlighting text in your documentation, it's unreadable.

    Probably I didn't express

    Probably I didn't express myself clearly :(. I gave that example not to show that defer-recover can be cleaner than try-catch in that particular case, but to show what kind of control flow one (as a language developer) has to deal with; there is no way defer-recover can end up with something like that.

    Syntactically, try-catch looks a little more elegant, but if overused it can become quite complicated. On the other hand, with defer-recover, defers are always handled per stack frame, which makes understanding and analyzing the program simpler, and makes implementing the runtime simpler. The lost elegance is a reasonable price for keeping a language simple, IMHO.


    What would be valuable to me as a programmer would be to combine pattern matching and try/catch or pattern matching and defer/recover ala Scala.

    I actually considered combining something like pattern matching with defer-recover, but in the end I preferred to keep the language and implementation simple, because the current implementation of defer-recover is very simple, and I kind of like it :)

    In Dao, method calls are polymorphic for other types as well. Thank you for the suggestion regarding text highlighting.

    What would be valuable to

    What would be valuable to me as a programmer would be to combine pattern matching and try/catch or pattern matching and defer/recover ala Scala.

    I don't want to derail this thread, but in case you're interested, my language does exactly that. A catch clause can be any pattern, which include type tests, equality tests, and destructuring tuples/records.

    So does

    So does every ML since the mid 80s. Just saying. ;)

    Condition systems

    FWIW, my favorite solutions to error handling are the condition systems of Common Lisp and Dylan.

    They separate the notions of (a) finding a handler and letting it handle an exception from (b) unwinding the stack.

    I find any error handling system that doesn't do that tedious at best, and flawed at worst.

    Dylan

    I actually looked into Dylan's error handling; its "block()...end" was partially the reason I added "framed blocks" to my language, which effectively allow defining a handler (a deferred block in a framed block) anywhere in the code.

    block and handler

    In Dylan, there are two ways to define handlers, either with block .. exception - or with let handler (DRM: Signalers, Conditions, and Handlers).

    For me it sounds like these are the two different approaches mentioned here (try-catch vs defer-recover) - or am I mistaken?

    In my own experience, I find the try-catch (or block-exception) mechanism much more readable.

    I concur

    I heartily concur; separation of concerns and simplicity in language design pay dividends in clear, optimizable programs.

    Three error cases that are frequently conflated in error-handling systems are:
    1. bad stack state
    2. bad heap state
    3. bad external state (e.g. OS file handles)

    Heap and external problems shouldn't implicitly involve unwinding a stack, but stack unwinding needs to chain into the latter to keep them clean. And that's just the classification of errors; as you note, there is an orthogonal dispatch layer for matching the error state with a handler.

    I would add one twist from a language design perspective on how these are realized. I embody it in this rule:
    1. applicative code can use try-catch
    2. side-effectful systems may not (modulo stack panics)

    If I am looking at a block of statements (that is, side-effectful imperatives), I absolutely do not want to mentally keep track of the side-state for every single possible jump. I don't want to treat every method call as a runtime-dependent block boundary. I know the kinds of programmers who can keep this kind of phenomenal state system in their heads; I am not one of them.

    On the other hand, if the code has no side effects, early return introduces no complications in procedural reasoning. It's essentially a runtime optimization of a global "bottom -> bottom" function mapping: once a function transforms the value to a bottom/error value, it propagates out until some handler catches it and transforms the error out of bottom. Some care may be needed to guarantee that bottom/error values are eventually caught, but that speaks more to the language's design intent than to its clarity.

    I carve out an exception for true stack panics because there are certain kinds of program executions that just "stop making sense" or otherwise need to destroy program state, e.g. for transactional rollback. Again, this is a simple issue to reason about locally, as these systems should generally have transparent tear-down semantics that I don't have to think through in the middle of a function body.