What good is Strong Normalization in Programming Languages?

There's a neat thread about strong normalization happening on The Types Forum.

If you've ever wondered "Why is it useful to have {type systems, reductions, operations, ...} that always terminate?", this thread may illuminate you.
Here are some snippets to give you a feel for the discussion:

I think it is clearer to split Gérard's original question in two:

  1. Is strong normalization useful for programming languages?
  2. Is the proof of strong normalization via type systems applicable for programming languages?

-- François-Régis Sinot

Termination is good:

If a program is a solution to a problem then knowing that it terminates
for any input is an important aspect of it being a solution. Often the
best way to see that it is terminating is expressing it in a language
where all programs terminate. The programming language Epigram is an
example of an experimental language which is intended to be terminating
(even though not all conditions are enforced in the current version), see
http://www.e-pig.org/ for more information.

-- Thorsten Altenkirch
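To make the idea concrete, here is a minimal sketch (in Python, purely for illustration; Epigram itself looks nothing like this) of a function written by structural recursion, the discipline that terminating languages enforce:

```python
# A total function defined by structural recursion: the recursive call
# receives a strictly smaller input, so the function terminates on every
# finite list. Terminating languages like Epigram enforce this shape in
# the type system; in Python it is merely a convention we follow.
def length(xs):
    if not xs:
        return 0
    return 1 + length(xs[1:])  # xs[1:] is strictly shorter than xs

print(length([3, 1, 4, 1, 5]))  # 5
```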

Termination is good!

I think the moral sense of strong normalization is that a program
in a strictly-typed language can only diverge as a result of some
programming construct, which _explicitly_ permits looping, like
iteration, recursion etc. My favourite example here is that the
"big Omega" can be written in Algol 60, because procedure types
in this language are not fully specified.

-- Pawel Urzyczyn
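The point can be made concrete with the classical looping term Omega = (λx. x x)(λx. x x), which reduces to itself forever. Self-application has no simple type (x would need a type T with T = T → S), which is why strongly-normalizing typed calculi reject it; an untyped language like Python accepts it directly (a sketch, with the finite call stack turning the endless reduction into a runtime error):

```python
# The divergent term Omega = (\x. x x)(\x. x x). Python, being untyped,
# lets us write self-application; the finite call stack turns the
# infinite reduction into an error rather than a silent loop.
omega = lambda x: x(x)
try:
    omega(omega)
except RecursionError:
    print("Omega diverges: no normal form")
```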

Termination is good, and with fixpoints is Turing-complete?

Another way to put this is that data structures should be definable in a
strongly-normalising language so that data access, etc. is guaranteed to
terminate. Then add fixpoints or loops to describe arbitrary computations.

-- Barry Jay
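A minimal sketch of this recipe, assuming nothing beyond plain Python: keep the data operations total, and reintroduce arbitrary computation through a single explicit construct, here a fixpoint combinator:

```python
# Terminating data access plus one explicit construct for general
# recursion: everything routed through `fix` may loop; everything else
# remains ordinary, obviously-terminating code.
def fix(f):
    # Eta-expanded fixpoint combinator: fix(f) behaves as f(fix(f)),
    # delayed so evaluation works in a strict language like Python.
    return lambda *args: f(fix(f))(*args)

# General recursion appears only via `fix`:
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```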

Terminating reductions allow exhaustive application of optimizations:

In a compiler, if a set of reductions is strongly normalizing, then the compiler can apply them exhaustively to an intermediate-language term without fear of looping. (We rely on this in the MLj and SML.NET compilers' "simplify" compilation phases, which apply simple reductions and directed commuting conversions until a normal form is reached. Though it has to be said that it's not the classical SN results that are relevant here, but something more specific to our intermediate language and the simplifying reductions that we employ).

-- Andrew Kennedy
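As a toy illustration (the term representation and rule set here are invented for this sketch, not taken from MLj/SML.NET's actual intermediate language), a set of strictly size-decreasing rewrite rules can be applied exhaustively without fear of looping:

```python
# A strongly-normalizing "simplify" pass in miniature: every rule strictly
# shrinks the term, so exhaustive application must reach a normal form.
# Terms are nested tuples like ('+', 'x', 0).
def simplify(t):
    if not isinstance(t, tuple):
        return t                                     # variable or constant
    op, a, b = t[0], simplify(t[1]), simplify(t[2])  # normalize subterms
    if op == '+' and b == 0: return a                # x + 0 -> x
    if op == '+' and a == 0: return b                # 0 + x -> x
    if op == '*' and b == 1: return a                # x * 1 -> x
    if op == '*' and a == 1: return b                # 1 * x -> x
    if op == '*' and 0 in (a, b): return 0           # x * 0 -> 0
    return (op, a, b)                                # already in normal form

print(simplify(('+', ('*', 'x', 1), ('*', 'y', 0))))  # 'x'
```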

Rene Vestergaard also gave a link to a 2004 discussion of strong normalization on the rewriting list.


Hey, I was planning to post this!

Just doing my editorial job :-)

Did I miss any good links about strong normalization?

Didn't Frank have a good message about this once upon a time?

"One to One" matching

If we want to use a rule style of expression as a programming language, we are probably worried about termination and speed. A way to get a little of both is to use “one to one” matching instead of unification. As far as I can tell, programs written this way are at least as expressive as any other programs, but it must be admitted that there are questions. What do we give up by using a “one to one” match? Does it eliminate the termination problem? (i.e., is it normalizing?) I wonder if anyone has seen a theoretical answer to these questions?
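A sketch of what one-way ("one to one") matching might look like (the tuple representation and '?'-prefixed variables are invented here for illustration): variables occur only in the pattern, never in the subject term, so matching is a single structural walk that always terminates, with none of full unification's bidirectional substitution:

```python
# One-way pattern matching: returns a binding environment, or None on
# failure. Variables ('?'-prefixed strings) appear only in the pattern,
# so each step either binds a variable or compares ground structure;
# the walk is bounded by the pattern's size and always terminates.
def match(pattern, term, env=None):
    env = dict(env or {})
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in env:                            # variable seen before
            return env if env[pattern] == term else None
        env[pattern] = term                           # fresh binding
        return env
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):               # match componentwise
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None           # ground comparison

print(match(('f', '?x', '?x'), ('f', 1, 1)))  # {'?x': 1}
print(match(('f', '?x', '?x'), ('f', 1, 2)))  # None
```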

It occurs to me that the diff

It occurs to me that the difference between the finite and the infinite is of little comfort if, for example, your multiuser OS can be tricked into performing 4 billion iterations of a complicated loop with preemption (and a few other interrupts) disabled.

So now I'm idly wondering what formal methods could do to help catch that kind of error.

Good point..

Jed, I agree this is an issue, and I have not seen formal methods encompass OS/application interactions yet. It seems there is another level of formal requirements that needs to be defined: interrupts, average and maximum processor yield times per process, pointer validation and argument copies in OS routines... lots of interesting security primitives to ensure that an arbitrary program cannot bring the OS to its knees.

Then comes the question of how to prove that a blob of compiled binary code meets those requirements.

See "A Principled Approach to Operating System Construction"

Check out A Principled Approach to Operating System Construction from the House website, and look at the partial (Isabelle?) proofs in the cvs repository.


I can't help but bring up Xok yet again. Dawson Engler's thesis is probably the best read/"introduction". The ideas are ingenious. You'd probably also find the other OS projects it refers to interesting as well. And there is the whole PCC (Proof Carrying Code) and certifying compilers set of things.

Maybe linear constraints?

It's often easy to see that an operation's running time is O(f(input_size)) and I imagine that constraints imposed to obtain termination guarantees will generally make code easier to analyze. Time and space bounds are probably easier to derive if you already have a termination proof to work from.

And on the other hand...

...any network server is not supposed to terminate unless it's explicitly stopped or the power goes out.

The moral of the story is that non-termination might be desirable in the lambda calculus, but it's often not desirable in the pi calculus. :-)

My pi-machine hung up!

While it is desirable that a server accepts an unbounded number of requests, it is also desirable that it responds to them, as well.

For that we need a guarantee that is traditionally called "termination of a computation". Some would even require a specific bound on the time of this termination. In the setting of pi (or CPSed lambda for that matter - hi, Derek) I would say we are talking about constraints between occurrences of certain events. E.g., whenever we call a function A with continuation B, eventually B is called. It looks like, for merely requiring termination, modal logic is enough. I am not so sure about bounds on termination time (like: whenever we call a function A with a list of length N and continuation B, B is called no later than F(N) ticks afterwards).

The reason I like pi more than lambda is that in a pi program (as in a CPSed lambda program) these constraints can be uniformly applied to both calls and returns.

As soon as we agree that we can use constraints in pi to express termination, it's tempting to express constraints as types.

I still wonder whether adding delimited continuations to the equation would make it more fun...
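The constraint "whenever we call a function A with continuation B, eventually B is called" can be seen in a small CPS sketch (the function names and the example are ours, for illustration only):

```python
# Continuation-passing style turns "returns" into calls: each function
# finishes by invoking its continuation k, so both calls and returns are
# ordinary call events that a constraint can uniformly range over.
def square_cps(x, k):
    k(x * x)              # "return x*x" becomes a call to k

def add_cps(x, y, k):
    k(x + y)

def sum_of_squares_cps(a, b, k):
    # a*a + b*b, every intermediate step threaded through a continuation
    square_cps(a, lambda a2:
        square_cps(b, lambda b2:
            add_cps(a2, b2, k)))

sum_of_squares_cps(3, 4, print)  # 25
```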

Sized Region Types

In Proving the Correctness of Reactive Systems Using Sized Types, Hughes, Pareto and Sabry describe a type system which ensures liveness of reactive systems.

As Andris notes, we still want a guarantee that a message will receive a response in some finite time. This is a property that's mathematically dual to termination, and is called productivity.

If you think of a sequence of responses as a possibly-infinite stream of values, productivity means that for any N, you can compute the Nth value in finite time. So if you pull the elements of the stream off one at a time, you will be able to reach any N in some amount of time -- there's no point at which you'll block forever waiting for the next value.
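A sketch of productivity using a Python generator as the possibly-infinite stream: the stream as a whole never terminates, yet any prefix, and hence any Nth element, is computable in finite time:

```python
from itertools import islice

# A productive, non-terminating stream: naturals() never returns, but
# each next() call delivers the next element after finitely many steps,
# so we never block forever waiting for a value.
def naturals():
    n = 0
    while True:   # the stream is infinite
        yield n   # but each element arrives in finite time
        n += 1

print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
```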

Loved the e-pig site!

And immensely enjoyed the "nod and two winks" article! Very nice.