Termination Checking with Types

Termination Checking with Types, Andreas Abel. 2004.

The paradigm of type-based termination is explored for functional programming with recursive data types. The article introduces lambda+mu, a lambda-calculus with recursion, inductive types, subtyping and bounded quantification. Decorated type variables representing approximations of inductive types are used to track the size of function arguments and return values. The system is shown to be type safe and strongly normalizing. The main novelty is a bidirectional type checking algorithm whose soundness is established formally.

Recently on LtU, there has been a good deal of discussion about total functional programming, so I thought I'd link to an article that shows how to use typing to enforce termination. The main advantage of type-based approaches (as compared to analyzing the program text directly) is that the termination check is not as sensitive to the precise shape of the program; for example, you can beta-reduce a term and not alter whether the system thinks the program will terminate or not.

This paper is also nice because it provides a relatively simple example of bidirectional type checking, which is an important idea both practically (it reduces spurious type annotations and helps produce better error messages) and theoretically (it is intimately related to cut elimination).
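To give a flavor of that second point, here is a minimal sketch of bidirectional type checking (in Haskell, for the simply typed lambda calculus; this illustrates the technique only and is not the algorithm from the paper). Synthesis ("infer") computes a type from a term, checking pushes an expected type inward, and only terms whose type cannot be synthesized (lambdas, here) ever need an annotation:

data Ty = TInt | TArr Ty Ty   deriving (Eq, Show)
data Tm = Var String | Lam String Tm | App Tm Tm | Lit Int | Ann Tm Ty

type Ctx = [(String, Ty)]

-- Synthesis: compute a type from the term alone.
infer :: Ctx -> Tm -> Maybe Ty
infer ctx (Var x)    = lookup x ctx
infer _   (Lit _)    = Just TInt
infer ctx (Ann t ty) = check ctx t ty >> Just ty
infer ctx (App f a)  = do
  TArr dom cod <- infer ctx f          -- the function position must synthesize an arrow type
  check ctx a dom
  Just cod
infer _   (Lam _ _)  = Nothing         -- lambdas are checked, never inferred

-- Checking: verify a term against an expected type.
check :: Ctx -> Tm -> Ty -> Maybe ()
check ctx (Lam x body) (TArr dom cod) = check ((x, dom) : ctx) body cod
check ctx t ty = do
  ty' <- infer ctx t
  if ty == ty' then Just () else Nothing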


Question about importance of proving termination.

Why is it so important to prove that a program will terminate? Isn't it more important to prove that a program does not have logical contradictions in it?

Please forgive me for this naive question. I have read various papers presented on this site, but I cannot really understand why proving that a program terminates is so important.

Curry-Howard and Termination

When code is used in specifications, then nontermination in that code is equivalent to logical inconsistency. For example, you might define with GADTs or dependent types a type representing ordering between natural numbers.

data Z                -- type-level zero
data S n              -- type-level successor

data Less a b where
  LessZ :: Less Z (S n)
  LessS :: Less a b -> Less (S a) (S b)

You can only construct a value (normal form) of type Less a b if a is really less than b. You might write a function taking a value of type Less a b to enforce a precondition, maybe as part of a type of sorted lists, or to avoid overdrawing bank accounts, and write your proof that the function is used correctly as the code that produces the value you need. You might also think that once you have written down that proof there is no need to actually evaluate it at runtime, but that is only true if your language is terminating. If your language allows unrestricted nontermination you have problems like

absurd :: Less (S Z) Z
absurd = absurd

So, you either need a terminating sublanguage, or your programs will do a lot of useless evaluation and pattern matching to make sure claimed proofs actually terminate.
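To make the "precondition" point concrete, here is a sketch of a bounds-checked lookup (the Nat and Vec GADTs are extra types introduced purely for illustration, alongside the Less type above): the caller must supply a Less proof that the index is in bounds, and in a terminating language that proof cannot be forged by a circular definition like absurd.

data Nat n where
  NZ :: Nat Z
  NS :: Nat n -> Nat (S n)

data Vec n a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

-- The Less i n argument is the caller's proof that the index is in bounds,
-- so the lookup needs no runtime bounds check and cannot fail.
index :: Less i n -> Nat i -> Vec n a -> a
index LessZ     NZ     (Cons x _)  = x
index (LessS p) (NS i) (Cons _ xs) = index p i xs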

But what about more conventional type systems?

Your explanation (thank you, by the way) is valid in the context of 'values as types'. But what about more conventional type systems? Do we need to prove termination in, let's say, a Java program?

.

A month ago I had to debug a non-terminating java program that was deployed to our customer. It was really tricky to narrow down the issue, because the code was like this (java.util.zip.Deflater):

do {
    zip.deflate(bytes, off, len);
    // doStuff();
} while (!zip.finished());

The tricky part was that for an empty byte array deflate returned 0, and the documentation says: "A return value of 0 indicates that needsInput() should be called in order to determine if more input data is required". Of course needsInput() wasn't called, but this bug never manifested because the input was read from a properties file that was shipped together with the library. Once we changed the appserver version the bug manifested, and it took us a couple of days to narrow down the issue (the error logs were wrong and we couldn't reproduce the issue on our server initially). If in Java there were a way to statically prove that a certain piece of code is either terminating (algebraic) or productive (coalgebraic), the loop would never pass the check. Also, when the language forces you to write down the operations' contracts you get better libraries, most of the time: the programmer would have to read the docs, as the operation would probably be written as a fold to satisfy the compiler.

A couple of months earlier I upgraded a library I maintain at work to Java 5 (I know, but our customers still work mostly with 1.3.1 and 1.4.2). It has a function int countOf(String piece, String text) that counts how many times piece occurs in text (it was written for 1.3.1 originally) and it relied on String.indexOf(String, int). One of my unit tests caught that the invariant for-all s . s.length() == countOf("", s) was broken, because between 1.3.1 and 1.4.2 the contract for String.indexOf(String, int) changed. Again, if there were static verification of termination, my program wouldn't have been written the way it was. Lucky for me there was a unit test checking this condition, instead of finding it in an angry bug report at night.

In large pieces of software two things are true: one, there are lots of hidden assumptions, and two, upgrades aren't always backwards compatible. There are lots of situations where time is essential and non-terminating programs cause deaths, waste millions, etc. Imagine if you're being followed in a dark alley and your iPhone can't dial 911 because it entered an endless loop? Imagine how happy the CEO of a company is when it can't process its payments because the batch job failed to terminate and had to be debugged, patched and restarted a couple of days later.

Interesting.

Imagine if you're being followed in a dark alley and your iPhone can't dial 911 because it entered an endless loop?

Thanks for the reply.

But the problem you mention is not a problem of termination but a problem of hidden assumptions (as you said). Hidden assumptions can create problems even if not in a loop.

What really puzzles me is the use of the word 'termination'. Most programs have problems not because termination is not proven, but because assumptions are broken.

For example, the following piece of code terminates, but the output is not good if we expected the input to be an integer from 0 to 15 (for example); the end result could be anything:

x = input
x = x + 1

Perhaps the word 'termination' is used as in 'a program finishes with the proper output' rather than 'it finishes'.

On Termination

Your question is certainly not naive.

Brandon's answer gives you a bit of the story. I'll try to fill in with a bit of a different perspective.

First of all it should be said that not all programs should terminate. An operating system or a server of any kind should be able to run indefinitely. So termination doesn't apply to all programs. (There is a dual property to termination which these programs should satisfy but I won't go into this any further.)

As for all other programs, one part of showing their correctness is to show that they terminate. Proving that a program doesn't have "logical contradictions", as you call it, or simply showing that the program follows its specification, is sometimes referred to as partial correctness. Why partial? Because we only know that if the program terminates then it will return the correct answer. But since we're not sure that the program will terminate, the program is only partially correct. It might never return an answer for all we know.
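For a concrete illustration (a Haskell sketch of my own, not from the original comment): the program below is partially correct with respect to the specification "returns an odd perfect number". If it ever returns, the answer meets the specification; whether it terminates at all is an open question.

-- Searches for the first odd perfect number, i.e. an odd n equal to the sum of
-- its proper divisors. Any result is correct; termination is unknown.
firstOddPerfect :: Integer
firstOddPerfect = head [ n | n <- [1,3..], properDivisorSum n == n ]
  where properDivisorSum n = sum [ d | d <- [1 .. n - 1], n `mod` d == 0 ]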

You ask "isn't it more important to prove that a program does not have logical contradictions in it?". Well, I would say that termination and partial correctness are two different aspects of software correctness. You really want both. Which one is most important? I don't know. But I do know that I don't want programs that loop forever. I like terminating programs.

Everything terminates (I think)

First of all it should be said that not all programs should terminate. An operating system or a server of any kind should be able to run indefinitely.

I've heard this said, but I don't see it; to me it seems like everything terminates at some point. These types of long-running programs don't terminate of their own accord, but on some external signal. A server process could receive a special "terminate" message to signal end of processing and operating systems receive "halt" messages, or at least "suspend" in the case of persistent OSes like EROS.

I remain convinced that everything must terminate at some point, and it's important that this point be well-defined.

Can vs. Must

A terminating program must terminate no matter what. A server, for example, may terminate but (reality aside) it does not have to. In other words, you can't write such a server in a strongly normalizing language, as there are (potential) execution traces that never end.

Can't write servers?

Can vs. Must

That is perhaps my stumbling block.

In other words, you can't write such a server in a strongly normalizing language as there are (potential) execution traces that never end.

I admit I don't understand the theory behind this. Can such strongly normalizing languages perform any sort of explicit I/O? (via monads or some other means)

If so, then writing a server in such a language seems possible. The point of a computation is to produce a result, and programs can't compute their results without all their inputs. It just so happens that this function can't terminate without receiving a "terminate" message via I/O. Serving other requests is just a side-effect of waiting for that final input.

To give you a better idea of what I was thinking: consider a server which is implemented as a pure function that tail-calls itself with the new server state after each response. Upgrade of such a process is accomplished by sending an "upgrade" message containing an updated server function, which the current server function then tail-calls. I imagine termination to be accomplished similarly. This is Erlang's approach to servers, I believe.
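A sketch of that shape (in Haskell rather than Erlang, with a made-up Request type): the server is an ordinary function that tail-calls itself with its new state until the terminate message arrives.

data Request = Work String | Terminate

-- The server consumes requests one at a time, tail-calling itself with the
-- updated state; it returns only when Terminate arrives or the input ends.
server :: (String -> st -> st) -> st -> [Request] -> st
server _      st []                 = st
server _      st (Terminate : _)    = st
server handle st (Work msg  : rest) = server handle (handle msg st) rest

-- e.g. counting the requests served before termination:
--   server (\_ n -> n + 1) 0 [Work "a", Work "b", Terminate, Work "c"]  ==  2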

Is this not feasible in a strongly normalizing language? I can only see it being impossible if somehow the type system knows that there is no guarantee that the appropriate input ("terminate") will be received via I/O.

No it isn't

IO is irrelevant.

In a strongly normalizing language you would not even be able to express while(input != "terminate") ...;

A type system is not necessary, nevertheless...

if somehow the type system knows that there is no guarantee that the appropriate input ("terminate") will be received via I/O.

Um... there is no guarantee. You have to prove that there is if you want to prove that the program will terminate. In this approach, the type system has to ensure that all programs it type checks terminate; it does not have to (nor could it in general) prove that a program that fails to type check does not terminate, let alone never terminates.

No recursion?

In a strongly normalizing language you would not even be able to express while(input != "terminate") ...;

I see. So there is no recursion of any kind in such a language? In any case, strongly normalizing languages were not the main topic of this thread (see below).

A type system is not necessary, nevertheless... [...] Um... there is no guarantee [of termination via I/O]

This paper is about termination checking with types, so that's my main concern. My initial objection was to a statement that OSs and servers don't terminate. They clearly do, as either "terminate" is received or the I/O channel is invalidated by other means, i.e. end of file, socket closed, etc. The question is only whether the type system can be informed of this "real world" information.

So I/O is relevant, as the type of the I/O channel must express that it is "finite" in some way, so that the type checker can verify termination. Tricky problem though: should reading a byte, seeking back a byte, then looping be caught as an error? Depends on the type of "finite" constraints placed on the channel I suppose.

I originally said "everything terminates", when I really meant to say "every real/useful system terminates". Clearly there are algorithms that don't terminate (infinite loops/regresses), but I contend that most of them are implicitly terminating in some way (aside from end of the universe of course ;-).

There are (usually)

There are (usually) controlled forms of recursion, but yes, general recursion is out.

If we assume the input is finite then we can easily write the code in such a way that termination is guaranteed, but in reactive settings (e.g. web servers) that assumption doesn't hold meaningfully. A web server is only ever going to receive a finite number of requests, but may well never receive a "terminate" message. The web server thus never knows when to end; it must be written in a way that would handle the physically impossible case of infinite input.

Thanks for the educational

Thanks for the educational discussion. Just one last question. :-)

A web server is only ever going to receive a finite number of requests, but may well never receive a "terminate" message. The web server thus never knows when to end; it must be written in a way that would handle the physically impossible case of infinite input.

I realize that encoding these "real world" finiteness constraints into the I/O channel type itself would be quite challenging, but I'll just suggest one possible, simple solution to termination of servers and OSs that I can foresee: an x-bit integer count (x >= 64) which counts the number of connections made over the listening socket. Once the count reaches 2^x, the server terminates.

2^64 requests is far more than any web server will *ever* see; it would take even longer than humanity will exist to overflow 2^64. Having the count implicit in the I/O channel's type would be nice, but it need not be.

OSs implicitly utilize a counter of sorts for tracking time/ticks, and this too is a finite counter; when the timer is about to overflow, the OS terminates. In fact, unless the OS uses an arbitrary precision counter (unlikely), it would be incorrect not to terminate.

Would this approach solve the termination problem?

Logic and reality

Yes, this would solve termination, albeit in an arbitrary and meaningless way. In fact, this is already the case: our computers are not Turing machines; they are even worse off than finite state machines. An example of this scenario is C preprocessor macro programming. C's macro language is not Turing-complete, but that does not really matter, because although I can't write a lambda calculus interpreter in it, I can apply an arbitrarily large (but fixed) number of reductions. Choosing that limit high enough will be physically, extensionally indistinguishable from an actual implementation of the LC.

However, truncating evaluation in this way offers no theoretical value and almost no practical value.

¬FSM?

Our computers are not Turing machines; they are even worse off than finite state machines.

In what way is a CPU not a finite state machine?

An FSM can match (01)*, your

An FSM can match (01)*; your computer, since it will eventually fail one way or another, can only match a finite prefix.

Of course, this distinction, and for the most part the one between real computers and TMs, is pointless.

It is practical in a sense...

I agree that it's fairly meaningless; it's a hack because we can't express the true properties of the underlying object in our current language.

I disagree that it provides no practical value, as it makes expressing a solution possible where there was none before; we thus gain the advantages of a strong language with termination checking simply by using a slightly impure hack which asserts a property that the type system cannot verify on its own. It would be admittedly cleaner to simply state the function's termination as an axiom, and then manually audit such functions for correctness.

I agree the solution provides little theoretical value, except for informing us where our theory currently falls short.

Crash-only software

The only way to halt crash-only software is by extreme violence against either the hardware or the software processes.

Dual property

There is a dual property to termination which these programs should satisfy but I won't go into this any further

But this sounds really interesting, so please do go into this further! Or at least give a link.

It's productivity of co-recursion,

rather than termination of recursion.

So if you have a type operator F, then the recursive type it defines can be interpreted either as a least or greatest fixed point (assuming it satisfies some conditions I will ignore).

If you interpret it as a least fixed point, mu(F), then it will have a fold operation with type fold : (F(A) -> A) -> mu(F) -> A. This fold function encapsulates the pattern of primitive recursion[1], and it is guaranteed that it will always terminate.

However, if you interpret it as a greatest fixed point, nu(F), then you get an unfold operation of type unfold : (A -> F(A)) -> A -> nu(F). This unfold encapsulates the pattern of primitive co-recursion[2], and it has the property that it will always produce a well-defined object of type nu(F). This means that no matter how far you unroll the object, you will never run into an undefined element -- this matters because greatest fixed points can be infinite.

Now, you can visualize a server process as taking an infinite stream of inputs and producing an infinite stream of outputs, server : Stream(Input) -> Stream(Output). If the stream of outputs is productive in the sense above, then you know that your server won't hang -- you will eventually get a response to any input.

[1] technically, primitive iteration, but whatever.
[2] technically, primitive co-iteration, but whatever.
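Here is a rough Haskell rendering of the two constructions (names are mine; in Haskell, laziness makes the least and greatest fixed points coincide, but the shapes of fold and unfold are the same):

{-# LANGUAGE DeriveFunctor #-}

newtype Mu f = In  (f (Mu f))   -- least fixed point: finite data
newtype Nu f = Out (f (Nu f))   -- greatest fixed point: possibly infinite codata

-- fold consumes a finite structure layer by layer, so it terminates.
fold :: Functor f => (f a -> a) -> Mu f -> a
fold alg (In t) = alg (fmap (fold alg) t)

-- unfold produces a structure layer by layer; it is productive: observing any
-- finite part of the result takes only a finite amount of work.
unfold :: Functor f => (a -> f a) -> a -> Nu f
unfold coalg a = Out (fmap (unfold coalg) (coalg a))

-- Example: streams as the greatest fixed point of a pairing functor.
data StreamF x s = ConsF x s deriving Functor

nats :: Nu (StreamF Integer)
nats = unfold (\n -> ConsF n (n + 1)) 0   -- infinite, yet every element is defined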

But...

You said that:

Because we only know that if the program terminates then it will return the correct answer

I am sorry but I do not understand that! I have seen many programs that terminate but the answer was anything but correct.

For example, one of the bugs that I had in the Training System is that the processing of the target interception message (real example posted; I hope no managers of mine see this :-)) was erroneous because the X and Y fields of the missile's position were exchanged at some point. The simulation terminated in all cases, but the end result was wrong.

Definition of Partial Correctness

You snipped some relevant context. "The program" in the sentence is a partially correct program. By definition this program either returns the desired result or doesn't terminate.

Why can't a program terminate with an erroneous result?

I do not understand why a program cannot terminate with an erroneous result. I have many examples of terminating programs with erroneous output.

Does someone wish to explain why a program cannot terminate with erroneous output?

The definition of correctness, and a question.

After reading up the definition of correctness, I understand what the fuss is all about.

Since it cannot be proven that a program always terminates, we are happy if it terminates with some value that passes our definition of correctness, i.e. partial correctness is when a good answer is returned.

My question is: how much value does the above definition of correctness have? Would it make more sense if we considered as correct an algorithm that conforms to our specification, even if the algorithm does not return? Perhaps we are not able to solve the halting problem, but an algorithm that may or may not return can still be the correct one for the task at hand.

What would happen if the definition of correctness were: "an algorithm is correct as long as all interactions between its parts cannot produce values not in the specifications"? With this definition of correctness, we are not interested in termination, but in building algorithms in a manner that works as we expect.

I apologize for not playing along with the established science, and perhaps my ideas are goofy, but what sort of "scientist" would I be if I did not explore the unexplored? :-)

Latent type error

Tangentially, and related to the issue of detecting logical bugs raised in another thread, using separate x-coordinate and y-coordinate types would allow you to detect such a bug statically. Doing so would encode a previously unexpressed constraint into the program, which makes it amenable to automated proof, in this case by the type checker.
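For instance, a minimal Haskell sketch of the idea (my own names): with distinct coordinate types, swapping X and Y stops being a latent runtime bug and becomes a compile-time type error.

newtype XCoord = XCoord Double
newtype YCoord = YCoord Double

data Position = Position { posX :: XCoord, posY :: YCoord }

-- Building a Position with the coordinates swapped no longer type checks:
-- Position (posY p) (posX p) is rejected, because a YCoord is not an XCoord.
translate :: Double -> Double -> Position -> Position
translate dx dy (Position (XCoord x) (YCoord y)) =
  Position (XCoord (x + dx)) (YCoord (y + dy))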

Yeap, and it's doable in C++ as well...

or any other language in which new types can be defined.

But my question is not how to fix the problem, only why everyone says that a program either terminates with correct results or it does not terminate.

The answer was within

My response was not just about how to fix the problem, but demonstrating how an automated correctness proof could be applied to that particular aspect of the program. This results in a program which could not possibly have the bug in question, and therefore would terminate [edit: if it terminates] with correct results with respect to that particular issue, i.e. X and Y are guaranteed not to be swapped.

You can extrapolate from that example to get to partial correctness — although for real programs, you'd need proofs beyond those provided by a standard static type system.

Just to Expand a Bit

The theoretical observation is that, thanks to the Curry-Howard Isomorphism, a proof in a constructive logic such as the Calculus of Constructions can, in theory, give rise in an automated way ("extraction") to an actual, executable algorithm in some programming language, where the type of the algorithm is isomorphic to the theorem proved and the implementation is isomorphic to the proof. If you haven't spent any time with such theorem provers, yes, this sounds like magic.

But even if you have, and/or you believe in magic, it's also true that the devil is in the details—representing partial functions, and extracting software based on them, in a theorem prover such as Coq is far from simple. See "Formalizing Constructive Real Analysis in Type Theory" and "Program Extraction from Large Proof Developments (I)," both at this page. But the bottom line is that, if you've proven a function partial, then you've proven that it either returns the correct result or doesn't return at all—because that's the definition of a partial function. If you do this in Coq, then you can extract O'Caml, Haskell, or Scheme code for the function. Extracting C++ is left as an exercise for the reader. :-)

Formalizing OS in a strongly normalizing language

In other words, you can't write such a server in a strongly normalizing language as there are (potential) execution traces that never end.

First of all it should be said that not all programs should terminate. An operating system or a server of any kind should be able to run indefinitely. So termination doesn't apply to all programs.

In my experience, whenever strongly normalizing languages or type systems are mentioned, the subject of a server or an operating system invariably comes up. How do we model these potentially indefinite computations? Perhaps the theory is deficient? It is easy to see, however, that the problem is illusory. An operating system or any other server can be characterized as follows:

	forever (\state -> receive_req_and_handle state) state0

The computation above as a whole does not terminate. However, the function receive_req_and_handle must handle a request in finite time. Moreover, that function must be total: no matter what a request is, the function, starting from a valid state, must handle that request and produce a valid state -- in finite time. Thus receive_req_and_handle can be expressed in a strongly normalizing language.

And it has been: the paper "Running the Manual" (Haskell Workshop, 2006) describes prototyping a secure version of the L4 microkernel in Haskell and then formalizing it in Isabelle/HOL. The latter is a strongly normalizing language: only total functions are admitted. The authors report an immediate benefit of formalization: at one point Isabelle/HOL complained; they investigated the code (dealing with the IPC send operation) and found that indeed the algorithm, as written, could get into an infinite loop. That was of course a mistake: the kernel must satisfy each request in finite time. Please see Sections 5.2 and 6.2 of the paper for more detail.

ad infinitum

The computation above as a whole does not terminate. However, the function receive_req_and_handle must handle a request in finite time.

Is there a search-engine friendly way to say "the head of the list of endofunctors is nice, and the head of the tail is also nice, and so on..."? Or does the notion of "productivity" (supra) suffice?

relation between termination and productivity

The computation above as a whole does not terminate. However, the function receive_req_and_handle must handle a request in finite time. Moreover, that function must be total: no matter what a request is, the function, starting from a valid state, must handle that request and produce a valid state -- in finite time.

I have wondered about this before. As Neel says in an earlier comment:

However, if you interpret it as a greatest fixed point, nu(F), then you get an unfold operation of type unfold : (A -> F(A)) -> A -> nu(F).

For this corecursive unfold to be productive, we need the first argument (of type A -> F(A)) to be terminating and total. So we can convert a productivity constraint to one of termination. Is this a general property of productivity constraints? Is there a dual property for termination, i.e., is there a sense in which we can convert termination constraints to productivity constraints?

Given a fold (fold : (F(A) -> A) -> mu(F) -> A), is there a notion of productivity buried somewhere?

Forgive me if this question makes no sense...