Exceptional syntax

A nice paper apropos of tail calls and exceptions:
Nick Benton and Andrew Kennedy. 2001. Exceptional syntax. Journal of Functional Programming 11(4): 395-410.

From the points of view of programming pragmatics, rewriting and operational semantics, the syntactic construct used for exception handling in ML-like programming languages, and in much theoretical work on exceptions, has subtly undesirable features. We propose and discuss a more well-behaved construct.

What am I missing?

try-in-unless can obviously be implemented using let and handle:


(*) try x <= M in P unless H  =  let x <= (M handle H) in P

This equivalence is easily provable with the big-step semantics given in the paper. Moreover, Benton and Kennedy give this exact expression on page 12.

The authors state three reasons to prefer the try construct over handle:
1. Programmability: the try construct is more elegant in some situations.
2. Commuting conversions (optimization rewrite transformations): there are situations in which a rewrite is desirable but impossible with handle.
3. Small-step operational semantics: there is no obvious small-step operational semantics for handle, but try has one.

I fail to see how, in light of (*), these reasons are valid. However, since I lack the required knowledge in commuting conversions, I don't trust my treatment of point 2.

1. Programmability: As try can be viewed as syntactic sugar for let+handle, the programmability issue is moot: implement try using the more primitive constructs.

2. Conversions: I lack technical knowledge here and would be grateful for some more knowledgeable opinion.

Since conversions use pattern-matching anyway, can't we just replace try with its let+handle encoding in the conversions (fig. 2, page 8) and obtain more complex conversions that perform the same reductions?

3. Semantics: The handle construct can also be given a small-step semantics (the transitions for the other constructs are as usual):

V handle H          ->  V
(raise e) handle H  ->  H(e)
E[raise e]          ->  raise e

where the evaluation contexts E[*] are:

E[*] ::= (* M)  |  ((\x.M) *)  |  (let x <= * in P)
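
To make the claim concrete, here is a rough, runnable sketch of these rules in OCaml (the encoding is mine, not the paper's; in particular, H is represented as a function from exception names to expressions):

type exp =
  | Var of string
  | Lam of string * exp
  | App of exp * exp
  | Raise of string                  (* raise e, with e an exception name *)
  | Handle of exp * (string -> exp)  (* M handle H *)

let is_value = function Lam _ -> true | _ -> false

(* Naive substitution; adequate for closed programs in this sketch. *)
let rec subst x v = function
  | Var y -> if x = y then v else Var y
  | Lam (y, b) -> if x = y then Lam (y, b) else Lam (y, subst x v b)
  | App (f, a) -> App (subst x v f, subst x v a)
  | Raise e -> Raise e
  | Handle (m, h) -> Handle (subst x v m, fun e -> subst x v (h e))

let rec step = function
  | Handle (v, _) when is_value v -> v                  (* V handle H -> V *)
  | Handle (Raise e, h) -> h e                          (* (raise e) handle H -> H(e) *)
  | Handle (m, h) -> Handle (step m, h)
  | App (Raise e, _) -> Raise e                         (* E[raise e] -> raise e *)
  | App (Lam _, Raise e) -> Raise e
  | App (Lam (x, b), v) when is_value v -> subst x v b  (* beta *)
  | App ((Lam _ as f), a) -> App (f, step a)
  | App (f, a) -> App (step f, a)
  | Var _ | Lam _ | Raise _ -> failwith "value or stuck term"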

Obviously I'm missing something here, and I feel it's not just my incomplete knowledge of rewrite transformations.

Incorrect premise

Hopefully you've figured it out by now, one way or another, but your "obvious" implementation (*) is obviously incorrect. The right side always executes P (unless there is an unhandled exception) whereas the left side only executes P if M throws no exceptions.
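
To make the difference concrete, here is a small OCaml illustration (mine, not from the paper), using "match ... with exception", which behaves like try-in-unless, for the left side:

(* Left side: try x <= M in P unless H. P is skipped when M raises. *)
let lhs () =
  match raise Exit with
  | x -> print_endline "P runs"; x      (* P: never reached, since M raised *)
  | exception Exit -> 0                 (* H: its result is the whole result *)

(* Right side: let x <= (M handle H) in P. H's result feeds x; P always runs. *)
let rhs () =
  let x = try raise Exit with Exit -> 0 in
  print_endline "P runs";
  x

Here lhs () returns 0 without printing, while rhs () prints "P runs" before returning 0: with let+handle, the handler's result is bound to x and the continuation still executes.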

A syntactic improvement

Readers may be interested in learning that the latest version of the OCaml language (4.02, released last August) adopted an Exceptional Syntax that is superior to the syntax proposed in the 2001 paper -- while sharing much of the semantic goodness.

let read_lines inc =
  let rec loop acc =
    match input_line inc with
    | l -> loop (l :: acc)                   (* ordinary value case *)
    | exception End_of_file -> List.rev acc  (* exceptional case, same match *)
  in loop []

More formally, the syntax of pattern-matching ("match <expr> with <clauses>", where each <clause> is of the form "<pattern> -> <expr>") is extended with a class of "computation patterns" <comppat>, which are either the usual ("value") patterns <pattern> or "exception <pattern>"; the syntax of clauses is now "<comppat> -> <expr>". Such computation patterns are only allowed there, at the toplevel of clauses (though there is talk of also allowing "lazy <comppat>" as a value pattern in the future).

The reason why this syntax is superior to the proposed "try E in P with H" of Kennedy and Benton's paper, or the minor variant "let try E with H in P" that has been derived from it, is that it allows reordering the usual value-return case and the exceptional case, putting first whichever is shorter or logically comes first. You can write either

  match e with
  | x -> e1
  | exception y -> e2


or

  match e with
  | exception y -> e2
  | x -> e1

The "try in with" or "let try with in" syntaxes force a choice between those two orders (and trying to allow both is a good way to introduce mysterious ambiguities in the grammar). This issue was recognized in Kennedy and Benton's paper, "4: Remarks on Concrete Syntax", page 13 in the version linked above: "The main problem is choosing whether the handlers or the continuation expression should come first [...]. Neither of these is entierely satisfactory".

I had personally considered implementing their syntax for OCaml in the past, but was put off by having to make a choice between those. It's surprising how often *syntactic* issues can block the introduction of a good idea in a programming language. I first understood that overloading "match .. with" rather than "let" was the better idea after seeing Ohad Kammar and Sam Lindley's talk on effect handlers at ICFP, in September 2012. "match with exception" is exactly a restricted case of the effect-handling construction for non-resumable exceptions.

The OCaml feature comes from a patch proposed by Jeremy Yallop in February 2014. It originally proposed overloading both "match with" with additional "exception" cases and "try with" with additional "val" cases, but only "match with exception" was retained. Jeremy had a blog post with many references to similar existing constructions.

The most surprising to me was a 1999 mail by Christophe Raffalli, sent to the caml-list, suggesting almost the same construction (inside a discussion started by Norman Ramsey, who was surprised by the usual issue that "try .. with" breaks tail calls, the very problem Kennedy and Benton wanted to solve). This means the better syntax "existed" (had been suggested by someone on some mailing list) at the time the "Exceptional Syntax" work was being done.

This syntax for exceptions

This syntax for exceptions is very nice.

Though I have a low opinion of exceptions and think we'd be better off without them.

Either/or

I have a low opinion of exceptions and think we'd be better off without them.

I have a low opinion of both exceptions and doing without them. How to cleanly express handling of exceptional situations seems to me to be one of the most basic and intractable problems in programming; it drove, as best I can figure, the whole concept of Aspect-Orientation, which imho failed as miserably as every other effort I've seen to solve the problem. I remember struggling with it [clean exception-handling expression] when contemplating how to cleanly express merge sort, in an undergrad file structures class thirty years ago.

the partial failure problem

For situations like `end of file` there is no need for exceptions; efficient support for sum types or collection-oriented programming would do the job. A great many uses of exceptions fall into this category, where a better model of the problem would fulfill the role. Exceptions are an enabler of leaky abstractions.
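
For instance, end-of-file fits an option type naturally. As a sketch, input_line_opt is a hypothetical wrapper (defined here over the stdlib's exception-based input_line) that keeps the exception out of the interface:

let input_line_opt inc =
  try Some (input_line inc) with End_of_file -> None

let read_lines inc =
  let rec loop acc =
    match input_line_opt inc with
    | Some l -> loop (l :: acc)
    | None -> List.rev acc     (* EOF is just another value *)
  in loop []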

But I agree there is a real, fundamental problem to consider. I've been thinking about it over the last ten years, and I know some have thought about it even longer than me.

I call it the partial failure problem, because partiality is what makes it such a challenge. If the failure were total, the solution would be simple: reconsider the model, then start over. Partial failure implies partial success, which introduces performance pressure to leverage the successful portion of our computation. Further, the potential set of observable partial failures (a combination of states internal to a computation) is combinatorially more complicated than observing holistic failure (a binary condition). Finally, the cause of a partial failure is often far removed from where it can be observed, which greatly complicates any effort at recovery.

Note that I would oppose describing the partial failure problem as "handling of exceptional conditions". I think describing it in those terms will bias people towards thinking about exception handlers as a viable solution.

At this time, I feel the very best way to deal with the partial failure problem is to avoid it. When you have a partial failure, don't try to handle it. Instead, escalate the partial failure into a whole failure, at which point it becomes easier to deal with.

We can translate whole failures back to partial failures at carefully chosen boundaries, where we can use high-level strategies like fallback or failover, reset, etc. We can also aim to automate a lot of reuse of the partial success in a transparent way via memoization, dynamic programming techniques, and weak reuse of non-deterministic configurations (e.g. constraint solvers, probabilistic samples, or heuristic searches - I've discussed this particular tactic under the name 'stateless stability'). Blackboard-metaphor communications (e.g. tuple spaces, shared databases, Mnesia) are useful, too, in allowing us to leverage knowledge from our failure or investigate it externally. This suggests using stable names and avoiding local state for anything we might want to study after a failure.

Atomic transactions offer a fine example of escalating partial failure into whole failure. But there are other techniques.

Back in 2008-9, my language designs were oriented around the actors model, and a useful technique I developed and leveraged was cascading destruction: when you created a configuration of actors, you could add a constraint that any given actor depends on some other actors. The concept is simple: if an actor you depend on is unavailable or disabled or destroyed, you also become unavailable or disabled or destroyed. This was combined with process-control actors that will commit suicide or disable themselves on demand.

This both provided easy process control of actor systems (a feature that is valuable for distributed programming or mobile agents) and an easy means to escalate partial failure so we can deal with it at a much higher level than individual messages.

In 2010, this approach to cascading destruction in actor systems contributed to my design of reactive demand programming (RDP), where it is more cleanly embodied in a very simple duration-coupling property. RDP owes much of its existence to my attempts to address the partial failure problem. My 'aha' moment that led to the development of RDP came during my efforts to support transparent failover or handoff between actor-based services, useful both as a high-level tactic for partial failure and to support runtime upgrades or live programming. Programming in RDP is like programming with overlay networks as the main abstraction, with easy process control and transparent failover due to duration coupling, spatial idempotence, causal commutativity, and reactivity.

I understand that Erlang/OTP makes heavy use of high-level strategies for error recovery, but I'm not sure how often Erlang developers use error-escalation tactics, e.g. restarting a whole subsystem due to an error in one part of it.

Anyhow, my "low opinion of exceptions" is such that I would have us do without them. But that doesn't imply I think we should ignore partial failure. Rather, partial failure is exactly the sort of fundamental, cross-cutting problem that deserves much attention in the design of languages and paradigms.

we already do that :-)

escalate the partial failure into a whole failure

try {
    ...
} catch (Exception e) {
    throw new RuntimeException(e);
}

is sorta how a lot of code is written

ignoring exceptions

Your example code is the closest you can get to a 'right way' to use exceptions, IMO, and it's very little different from not having exceptions and instead using `exitFailure()`. The few differences from exitFailure are mostly mistakes, too - exception stack traces are data-leaking security holes, exceptions interact awkwardly with threading (e.g. threads created within the same `try` block aren't killed, and exceptions within those threads aren't caught), etc.
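
A minimal OCaml sketch of that last threading point (assuming the threads library is linked): an exception raised in the spawned thread only kills that thread and never reaches the enclosing handler.

let () =
  try
    let t = Thread.create (fun () -> raise Exit) () in
    Thread.join t;
    print_endline "join returned normally"
  with Exit ->
    print_endline "never reached: the child's exception is not propagated"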

Is that the problem?

Or is it that the language does not include the correct handling of threads in the exception model?

A big advantage over 'exit' is that you can in turn catch the re-thrown exception. Even if we assume an exception should never be thrown, a diagnostic error with the right information makes debugging so much easier.

If you get used to only ever debugging with a log file, so that you have to put useful exceptions in your code (rather than relying on a debugger), it becomes much easier to debug production code from log traces later.

Poor integration of

Poor integration of exceptions and threading in many languages is certainly "a" problem, but is by no means "the" problem with exceptions. Exceptions do a poor job of addressing the fundamental problem they were designed to address, that being partial failure.

A big advantage over 'exit' is that you can in turn catch the re-thrown exception.

That isn't an advantage. You can also 'catch' exit, e.g. via the sort of watchdog patterns you might see in Erlang/OTP. You can recover from exit cleanly via 'crash-only software' design. That said, 'exit' certainly has its own weaknesses.

get used to only ever debugging with a log file, so that you have to put useful exceptions in your code

When I think about debugging with a log file, it's more in terms of orthogonal persistence and replayable transactions and corresponding potential for time-traveling debuggers. And, honestly, this sounds a lot nicer to me than trying to work with the sort of ad-hoc exception logs that you might generate from a Java program.

Time travellers

I like some of these ideas. I disagree that catching an exception is a bad idea, and I think you understand the weaknesses of exit and watchdog processes (at least you imply you do).

Time travelling debuggers sound great; can you tell me where I can find something production-ready that I can use on client systems that the client needs to carry on using live? (The developers are using Python.)

Time travelling debuggers

Time travelling debuggers sound great, can you tell me where I can find something production-ready?

I haven't searched for time travelling debuggers. I hear of them for non-conventional languages now and again (e.g. Elm and Glitch, most recently), and I'm currently developing a wiki-based IDE and software platform that will have similar features (via logarithmic history, I can feasibly support retroactive testing and debugging on the scale of multiple years). I've glimpsed one time-travelling debugger for C++, but the screenshots were ugly, and I never looked further into it.

I disagree that catching an exception is a bad idea

Rather than "catching an exception is a bad idea", I'd say "using exceptions in any situation where you might feel the need to catch them is a bad idea, and in the cases where you feel no need to catch them you might as well use exit()."

If you avoid use of exceptions, you'll find other ways to model errors in your problem domain, many of which have nicer properties than exceptions for generic programming (parametricity), extensibility, composability, reasoning about security or concurrency or data model consistency, etc.
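
As one illustration in OCaml (using the stdlib result type and a 4.08-style binding operator; the function names are mine), errors become ordinary values that the types force callers to consider:

(* Division that cannot blow up: the error is part of the return type. *)
let safe_div x y =
  if y = 0 then Error "division by zero" else Ok (x / y)

(* A binding operator to chain result-returning computations. *)
let ( let* ) r f = match r with Ok v -> f v | Error _ as e -> e

let compute a b c =
  let* q1 = safe_div a b in
  let* q2 = safe_div q1 c in
  Ok (q1 + q2)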

Of course, if you're stuck in a language with exceptions, you'll need to program defensively around them - e.g. bracket patterns, RAII, try-catch (which frequently lacks easy abstraction). And you'll need to handle errors unnecessarily modeled as exceptions by third-party libraries, or tunnel exceptions from clients of whatever framework you're developing.

Syntax complaint

I also like this extension a lot. It is the final step in unifying pattern matching and exception handling (which has come a long way, if you read the early 80's proposals for ML exceptions). And it kind of clarifies how computation in ML can be viewed as really returning a binary sum type, with all other constructs just being implicitly lifted into the corresponding error monad.
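
To spell out that reading, a small OCaml sketch (the names are mine): a computation either produces a value or an exception, and sequencing is implicitly lifted through the error case.

(* The binary sum a computation "really" returns. *)
type 'a comp = Val of 'a | Exn of exn

let return x = Val x

(* Ordinary sequencing, lifted through the error case. *)
let bind m f = match m with Val x -> f x | Exn e -> Exn e

(* "match ... with | p -> ... | exception q -> ..." pattern-matches on
   both sides of this sum at once, instead of only on the Val side. *)
let reify (m : unit -> 'a) : 'a comp =
  match m () with v -> Val v | exception e -> Exn e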

My one complaint is bike-shedding about the choice of concrete syntax in OCaml: the keyword 'exception' just seems like a category error. This being a pattern, it clearly should have been 'raise'. Unfortunately, that's not a keyword in OCaml, but maybe this would have been the ideal opportunity to correct that mistake... :)

Indeed

You're right. Sometimes I'd like to have a 'ref' pattern as well.

And a 'lazy' pattern.

And a 'lazy' pattern.

Already

This one we have already.