What is the Meaning of These Constant Interruptions?

Graham Hutton and Joel Wright discuss the semantics of interrupts.

Interrupts are important for writing robust, modular programs, but are traditionally viewed as being difficult from a semantic perspective. In this article we present a simple, formally justified, semantics for interrupts.
I didn't get into the details of the article (though they will probably interest the more astute LtU readers); what got me interested was the correlation drawn between exceptions and interrupts:
An important concern in modern programming is exceptions, events that cause computations to terminate in non-standard ways. There are two basic kinds of exceptions: those that arise from inside a computation itself, such as a division by zero, and those that arise from outside a computation, such as a timeout. The former are termed synchronous exceptions, because they can only arise at specific points; for example, division by zero can only occur when performing a division. Dually, the latter are termed asynchronous exceptions, because they can potentially arise at any point; for example, a timeout can normally be received at any time. For simplicity, however, we follow the common practice of referring to synchronous exceptions as exceptions, and to asynchronous exceptions as interrupts.
I can't help but think this is related to the resumable exceptions that were discussed in the LtU discussion of Common Lisp Exception Handling and Oleg's subsequent implementation in OCaml. That is, aren't interrupts basically a form of asynchronous exceptions that require a resumption mechanism?


side comment on exceptions

I don't get it. I see it repeated everywhere, but it makes no sense:

"An important concern in modern programming is exceptions, events that cause computations to terminate in non-standard ways."

What is so non-standard and unusual about exceptions? In C++, .NET and Java, they work the same way: unwind all nested blocks and stack frames until a handler is reached. What's so non-standard about that? What is unusual? It's a completely natural extension of goto across function boundaries, featuring an argument.

Exception, Throw, and Catch

Not sure it makes any difference, but perhaps splitting up how exceptions operate provides the clue. The mechanism of throwing and catching exceptions is fairly standard across these languages. The non-standard part is the event that triggers the exception in the first place. An explicit throw would be the standard part, and it corresponds to the mechanism you cite. However, exceptions can be caused by any number of things that are not explicit at the point in the code where the exception occurs. That is, control passes through a non-standard flow of control - i.e. a different channel of processing.

[Edit Note] I guess the other thing to consider is that you are describing the processing of exceptions in terms of GOTO, which is, almost by definition, a non-standard form of flow control in many, if not most, modern high-level programming languages.

Compared to function calls alone

It's non-standard compared to a model of function calls without exceptions, in which control is passed to a function and returned to the call site when the function completes. This function call model on its own is very easy to analyse and reason about, either informally or formally (cf. lambda calculus).

Introducing exceptions complicates matters significantly — functions no longer necessarily return directly to their caller, which seriously undermines the simplicity of a function call model without exceptions.

This has real consequences for programs. For example, it's easy in C++ to get memory leaks due to exceptions, because of programmer failures to reason correctly about the possible flow of control -- see e.g. this page.

In languages with automatic resource management, like Java, exceptions may not cause that particular class of bug, but they can still much more easily lead to control flows that the programmer doesn't correctly anticipate. Here's a page of Exception Handling Antipatterns which covers many such cases.

These various traps and special cases are all consequences of introducing "non-standard control flow", via exceptions, into a language.

strangeness depends on habit and experience

"For example, it's easy in C++ to get memory leaks due to exceptions, because of programmer failures to reason correctly about the possible flow of control."

Easy it may be, but only for programmers who haven't learned to manage resources with destructors. For someone used to doing this, resource management is much simpler than without destructors and leaks are nonexistent.

The point I'm trying to make is that exceptions are as non-standard as airplanes. Depending on where you're from, you might not be used to seeing them, but that's subjective and has no bearing on others. Exceptions have been around for decades, they are standardized, and millions of programmers use them. You simply cannot walk around with a straight face saying they are "non-standard". Exceptions and their effects are very ordinary and everyday to any decent C++ programmer.

That's all I'm trying to say, no more. I'm just saying it here because I see this "exceptions as aliens" attitude repeated any place that mentions them. As if they're not one of the most ordinary and fundamental ingredients in today's programming.


You're confusing two unrelated contexts. The phrase has nothing to do with whether one is "used to seeing" exceptions. Use of terminology is relative: what's standard in one context may not be standard in another. The context here is program control flow in PL theory. Use of the term "standard" in this context doesn't imply that exception mechanisms aren't commonly found in ordinary programming languages; the two things have absolutely nothing to do with each other, except for the fact that they use the same word in two different ways.

Here's a really simple take on it: in many programs, it's possible for execution to proceed until the program completes, without ever triggering any exceptions. Let's pick a word to refer to a program's control flow in this situation. How about "normal", or, you guessed it, "standard"? But sometimes, during the execution of a program, this "normal" or "standard" control flow is interrupted, by an exception. In that case, a qualitatively different kind of control flow occurs, which we can naturally characterize as "exceptional" or "non-standard" control flow for that program (it can also be called non-local, but that's looking at things from yet another slightly different perspective).

Note that "exceptional" and "non-standard" are not far from being synonyms. Why do we call them "exceptions"? What, precisely, is exceptional about them? One of the exceptional characteristics of exceptions is that they result in control flow that doesn't follow the usual pattern that function calls follow: exceptional control flow. If you'll accept the latter phrase, then it's hard to see how you could object to the phrase "non-standard control flow", once you recognize the context.

That's all I'm trying to say, no more. I'm just saying it here because I see this "exceptions as aliens" attitude repeated any place that mentions them.

Note that no-one has said that exceptions are strange, or alien [in general]. You're the one making that leap from the word "standard", but that's only because you're applying the wrong context.

Decent C++ programmers

Decent C++ programmers frequently get it wrong. Exception safety is surprisingly error-prone in C++. Witness this snippet of code from the Boost shared_ptr documentation:

    void f(shared_ptr<int>, int);
    int g();

    void ok()
    {
        shared_ptr<int> p(new int(2));
        f(p, g());
    }

    void bad()
    {
        f(shared_ptr<int>(new int(2)), g());
    }

What's wrong with bad()? Go to the linked page to find out.

Now this isn't a problem with exceptions per se, but more of a combination of unspecified argument evaluation order and the non-standard control flow of exceptions.

I just thought I'd point out that things aren't nearly as rosy as you say.

Valid point

I do consider myself a decent C++ programmer and I think I would get that one wrong...

Does anyone know if this really causes memory leaks with common C++ compilers (like gcc, msvc)? Sure, the C++ specification doesn't cover this, so you shouldn't rely on it, but every time I stepped through code like this in a debugger the evaluation order seemed to be as expected, IIRC.

Exactly! C++ contains the solutions to its own problems

C++ contains the solutions to its own problems.
You feel good when you do it right. You feel clever.
But in the end, this is all mental masturbation.
You want to develop applications, not circumvent C++'s problems.

I am trying to hire C++ programmers to enhance server software (running 24/7), and it's very hard to find people who understand these issues. The team is not growing fast enough.
Once someone is hired, close supervision is necessary.

My next project won't be in C++.

they are perhaps not the same

unless you stretch down to some very fundamental level

interrupts are part of the foundation of virtual machines. the interrupted program isn't supposed to notice that anything happened, unless it was hard spinning on the cycle clock.

exceptions, as people have noted here, are non local exits that are part of the programmer visible system.

if you are trying to say that all control structures are continuations, i'm with you brother.

As concurrency gets higher in priority...

...then the issues related to interrupts start to be a language issue - at least that's how I read the article. That is, it's no longer something that just happens automatically in the VM, unbeknownst to the running program.

But, yeah, my observation probably has mostly to do with continuations. :-)

I'm Missing Something ...

The PDF is inaccessible, so I'm looking at the PowerPoint slides. Hutton says
    Seq (Catch x (Seq y Throw)) y
has the correct behaviour; but I don't see it. Certainly y follows x, but there's no rule covering

    Catch x y    where    x ⇓ Throw   and   y ⇓ Throw

In Hutton's solution, the second clause of the Catch yields a Throw. Thinking eagerly, if x succeeds, then y follows. But if x fails, then we examine (Seq y Throw); so y also follows, but then Throw 'happens' in the second clause of Catch, which is undefined (AFAICS).
There seems to be a shorter version that meets the specification:
    Seq (Catch x _) y
where _ is a no-op -- pure, guaranteed to succeed. If x works, then y follows; if x fails, the no-op evals to a non-Throw value, and y follows.

Both Throw and Val n are

Both Throw and Val n are considered "values". The paper says this, and also claims that the evaluation relation is meant to be total. On the slides, you can see that if "v" did not include Throw, then the second statement in a Seq raising an exception would also be undefined, not just the handler in a Catch raising an exception.

If you are talking about finally, I suppose it's meant to raise an exception when x raises an exception, although that doesn't seem to be explicitly stated. The second example confuses that even more, rejecting Seq (Catch x y) y on the grounds that y may be evaluated twice, without saying anything about an exception from x being suppressed.