## Total Functional Programming

Here's an interesting paper recently mentioned in another thread: Total Functional Programming...
Abstract: The driving idea of functional programming is to make programming more closely related to mathematics. A program in a functional language such as Haskell or Miranda consists of equations which are both computation rules and a basis for simple algebraic reasoning about the functions and data structures they define. The existing model of functional programming, although elegant and powerful, is compromised to a greater extent than is commonly recognized by the presence of partial functions. We consider a simple discipline of total functional programming designed to exclude the possibility of non-termination. Among other things this requires a type distinction between data, which is finite, and codata, which is potentially infinite.
I presume that the bogus definition of "fib" is a subtle reminder of the importance of eliminating bottom.

### Surprised that they don't separate non-terminating functions.

In Haskell, stateful operations are monadic. You can see whether a function is dependent on external stuff, time, etc., by looking at its type (typically IO Int, etc.).

If this paper did the same kind of thing with provably-terminating functions (that only operate on data, not codata), i.e., add termination to the type of the function, then you could still write Turing complete code, as an outer layer, with 'total functional' code inside. The closer you can get to having a zero-sized outer layer, the more 'totally functional' your code is.

Interesting paper.

### I had this thought just the other day...

My idea was to allow recursion only through a modified y-combinator y :: (a -> a) -> M a where M is the 'recursion' monad. Any use of this y would automatically result in values tainted by the monad. I think that by Curry-Howard it corresponds to a modal logic where you're allowed to use circular arguments in a way that means that you're still protected from being able to prove _|_.
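A minimal Haskell sketch of that idea, under my own assumptions (the names `M`, `y`, and `runFuel` are mine, not from any library, and I give the fixed-point operator a function-level type rather than the comment's `(a -> a) -> M a` so the sketch actually runs). Each unfolding of `y` inserts one layer of the monad, so anything built by general recursion is visibly tainted:

```haskell
-- A hypothetical 'recursion' monad (essentially the Delay monad):
-- values built via general recursion carry possible non-termination.
data M a = Done a | Step (M a)

instance Functor M where
  fmap f (Done a) = Done (f a)
  fmap f (Step m) = Step (fmap f m)

instance Applicative M where
  pure = Done
  Done f <*> m = fmap f m
  Step f <*> m = Step (f <*> m)

instance Monad M where
  Done a >>= k = k a
  Step m >>= k = Step (m >>= k)

-- The modified y-combinator: each unfolding adds a Step, so the
-- result type records that general recursion was used.
y :: ((a -> M b) -> a -> M b) -> a -> M b
y f a = Step (f (y f) a)

-- Observe with bounded fuel; Nothing means "didn't finish yet".
runFuel :: Int -> M a -> Maybe a
runFuel _ (Done a) = Just a
runFuel 0 _        = Nothing
runFuel n (Step m) = runFuel (n - 1) m
```

A caller can only extract a value by supplying fuel (or by writing a total unwinding proof elsewhere), which is the sense in which the taint protects you from proving _|_.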

### Partiality is an effect

Tarmo Uustalu (with Thorsten Altenkirch and Venanzio Capretta) has done some work on capturing non-termination in the monad

codata Sometime a = Now a | Later (Sometime a)

This talk is about an approach to partiality from non-termination as a monadic effect using coinductive types. The intuitive idea is very simple: termination/non-termination is about waiting, this waiting can be made official. Looping is supported directly by the monad. In order to obtain general recursion, we use that the homsets of the Kleisli category are domains. This gives us a possibility to translate a language with general recursion into a total language following the general paradigm of monadic translation. We also discuss a combination of partiality with other effects based on a monad transformer.

### Resumptions

codata Sometime a = Now a | Later (Sometime a)

This looks very similar to resumptions; funny they didn't mention that. Also, the intuitive idea of slicing a (potentially) non-terminating computation into provably terminating chunks and then scheduling them in some fashion is quite popular, from operating systems to speculative computing. The main differences lie in how the termination of chunks is proved (OSes use hardware interrupts) and how open and controllable the scheduling is (microkernels?).
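The slicing-and-scheduling intuition can be sketched directly on the `Sometime` type quoted above (the scheduler below and its names `race`, `after`, `runFor` are my own illustration, not from the talk):

```haskell
-- Each Later constructor is one provably terminating "chunk";
-- a scheduler decides how chunks from different computations interleave.
data Sometime a = Now a | Later (Sometime a)

-- Round-robin scheduling: advance each computation one step in turn,
-- returning whichever finishes first.
race :: Sometime a -> Sometime a -> Sometime a
race (Now a)   _         = Now a
race (Later _) (Now b)   = Now b
race (Later x) (Later y) = Later (race y x)  -- swap arguments for fairness

-- A computation that needs n slices before producing its answer.
after :: Int -> a -> Sometime a
after n x
  | n <= 0    = Now x
  | otherwise = Later (after (n - 1) x)

-- Observe with bounded patience: Nothing means "not done yet".
runFor :: Int -> Sometime a -> Maybe a
runFor _ (Now a)   = Just a
runFor 0 _         = Nothing
runFor n (Later m) = runFor (n - 1) m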

### This idea was often

This idea was often discussed on comp.lang.functional a couple years ago (and probably several times since then).

### The original paper by Moggi...

...included partiality as an example of a monadic side-effect.

From a proof-theoretic point of view, I think that nontermination is best treated as a side-effect. However, from a semantic point of view it has a very special status: the possibility of nontermination is at the heart of domain theory, and is basically why you can give a nontrivial semantics to the untyped lambda calculus (ie, solve a domain equation of the form D == D -> D).

### There has been some

There has been some discussion of trying to use Turing-incomplete/decidable languages on LtU (e.g. here), but I've seen no mention of these techniques.

Another excerpt:

There already exists a powerful theory of total functional programming which has been extensively studied. This is the constructive type theory of Per Martin-Löf (of which there are several different versions). This includes:
– Dependent types (types indexed over values)
– Second order types
– An isomorphism between types and propositions, that enables programs to express proof information.
The theory was developed as a foundational language for constructive mathematics. It is certainly possible to program in it, see for example [Nordstrom et al. 1990], but it would hardly be suitable as an introductory language to teach programming.
I am interested in finding something simpler.

[..]
What I propose is something much more modest than constructive type theory, namely an elementary discipline of total functional programming. Elementary here means
1) Type structure no more complicated than Hindley/Milner, or one of its simple variants. So we will have types like Nat → Nat, and polymorphic types like α → α, but nothing worse.
2) Programs and proofs will be kept separate, as in conventional programming. What we are looking for is essentially a strongly terminating subset of Miranda or Haskell (or for that matter SML, since the difference between strict and lazy goes away in a strong functional language)

Here is a description of the research program that seems to be associated with the paper.

All the research associated with the paper seems to be pre-1998 (there is only one 200x citation), under the name "Elementary Strong Functional Programming" (ESFP).
In search of further work on this I've found (so far) "Catch - Case Totality Checker for Haskell".

This reminds me a bit of the "hair shirt" of no side-effects in Haskell: what would we find if we weren't allowed to use a particular recursion scheme before designing a termination-prover? It seems one might at least get more insight into such algorithms' character.
They claim to have found, e.g. interesting new results on corecursive algorithms for arbitrary-precision integers.

### Along these lines

Along these lines (termination, data and codata) is Charity. Lots of interesting literature is available from the website.

### Induction on naturals?

Anyone else get a little queasy when faced with an inductive proof on something like the naturals, which can't be fully represented on our finite hardware? Especially in a paper on total functional programming?

### not if they're defined as peano numbers

'cuz then their limit is still succ(...(zero)...).

### Natural queasiness

I share your concern. My belief is that to realize its full potential as a bullet-proof approach, total programming must be resource-aware (which also probably means encoding memory, CPU, and I/O requirements in types - ouch).

### 2/3's the way there?

Well, currently Haskell has its I/O requirements in types. And if you are working in a terminating language, it seems like you should almost already know the time complexity of your algorithms (Maybe requiring people to do a complexity analysis instead of a termination proof would be an easier sell?). So now we just need to get space usage factored into the equation. I wonder if a language like Hume has anything to offer in this regard.

### Termination easier than complexity

And if you are working in a terminating language, it seems like you should almost already know the time complexity of your algorithms (Maybe requiring people to do a complexity analysis instead of a termination proof would be an easier sell?)

I doubt that. Can you give an example of an algorithm whose time complexity is easier to analyse than its termination behaviour?

### *Almost*

Key word that I guess I should have emphasized: almost. The idea here is that by putting a little more effort into your termination proof, you might also be able to prove the complexity of your algorithm, and subsequently reap those additional rewards. (Certainly termination is a precondition on complexity. If you know the complexity of something, you also already know it terminates).

### How to automate?

Do you know of any way to automatically compute a reasonably tight complexity bound for code? Termination is the most general form of a complexity bound, and there are plenty of languages that include only terminating programs. Getting more specific, there are programming languages characterizing complexity classes, like polynomial time and logarithmic space. Sized types keep track of some sorts of constant factors, but I don't recall that paper bounding the resource requirements of functions, only establishing that they eventually terminate.

### They define 0/0 as 0

This means that the behavior on some errors (when the program tries to divide by 0) is changed from failing to proceeding with nonsense values, and thus the errors are harder to find.

A similar thing can be said about Java's static exception specifications. They have tempted many programmers to catch "impossible" exceptions and do nothing in the handler, which makes things worse than they would have been without having to declare exceptions statically.

### Yes, but immediately after

Yes, but immediately after this they point out: [sic]

Runciman [Runciman 1989] gives a useful and interesting discussion of how to
make natural arithmetic closed. He argues rather presuasively that the basic
arithmetic type in a functional language should be Nat rather than Int. In a
total language it is certainly desirable to have Nat as a staticly recognised type,
even if Int is also provided, since there are functions that have no sensible value
on negative integers – factorial for example.

..and I tend to agree: I'd rather have this handled by types (or explicit tests) than by exceptions. But this has been discussed here before..

Since they are trying to remove "undefined" results, including errors, I don't think it's too much to ask.

### Runciman is even worse

His natural number arithmetic makes a − b equal to 0 if a < b. Making operations total by giving them artificial results outside their natural (no pun intended) domain is counter-productive. It leads to less robust programs, the converse of the stated goal.

### Combinatorial subtraction

max(0,a-b) is called 'combinatorial subtraction.' I vaguely remember an interesting paper on the subject by Steve Johnson (author of the original Portable C Compiler) from around 1974, but I'll be damned if I can find the reference.

### I hope you caught that BBC story...

...here.

But seriously...if you want total programming then it seems clear to me that

(/) :: Int -> Int -> 1+Int

where you can call the single element of 1 whatever you like: e.g. Exception, DivisionByZero, Infinity, Sigfpe, and you need a convenient way (eg. a monad) to lift computations on Int to computations on 1+Int. I can't see that anything more than this is needed. Certainly no need to make 0/0=0 or ditch the negative numbers.
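In Haskell the type 1+Int is literally Maybe Int, and the Maybe monad is the convenient lifting device the comment asks for. A minimal sketch (the names safeDiv and average are illustrative, not standard):

```haskell
-- "1 + Int" as Maybe Int: Nothing is the single extra element
-- (call it DivisionByZero, Sigfpe, whatever you like).
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- Computations on Int lift to computations on Maybe Int by ordinary
-- monadic plumbing; a division by zero anywhere propagates as Nothing.
average :: [Int] -> Maybe Int
average xs = sum xs `safeDiv` length xs
```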

### This only shifts the burden to someone else

What should the caller of the / function do with the potential singularity in the result, if he is quite sure that the divisor will never be zero? The contract of the caller probably leaves no space for reporting the problem. It's a total language after all, the caller must return a valid answer to its caller, it can't even loop forever!

One non-solution is to extend the interface of every potentially failing function with a singularity, even if the failure is never intended to happen, and propagate it. This leaks implementation details to interfaces (a sorting function might have a superfluous singularity in its result just because it uses division somewhere), and many practical non-leaf functions are total only in theory. Propagating singularities calls for language support, and if a large function returns a singular result it might be a challenge to find out which of its components has triggered it and why.

Another non-solution is to put bogus values of correct types in these impossible branches. This is a horrible consequence of having to program in a total language.

I don't believe in total programming at all. Pretending that functions don't fail has no value if the way the functions are actually written contains various impossible failures in their code paths. Ruling failures out statically is impractical with present type systems.

Current languages generally offer one of two choices:

1. A computation may abort the whole program, possibly with a message (Haskell's error function). This can happen at any time.
2. Any computation may throw an exception, which can be caught.

The second choice is a superset of the first choice, and thus it's almost universal. Even Glasgow Haskell reinterprets Haskell 98's non-catchable failures as catchable exceptions. I've heard of no paradigm which would offer something better here.

### The contract of the caller

The contract of the caller probably leaves no space for reporting the problem.

This kind of 'reporting' can be conveniently wrapped up in a monad or idiom, or some such device. It doesn't have to be that much of a burden.

Propagating singularities calls for language support,

Or some higher order functions to plumb everything together.

if a large function returns a singular result it might be a challenge to find out which of its components has triggered it and why.

On the contrary. Though I doubt that total functional programming is practical (we agree on that, I just consider this to be an academic exercise), if there's one thing going for it, it's that it won't be as much of a challenge to find why a singular result was created because developers will be encouraged to explicitly handle singular cases. (My nickname came about as a result of spending so much time trying to track down mysterious SIGFPEs because in partial languages developers are not encouraged to implement code to handle singular values.)

### The third solution

What should the caller of the / function do with the potential singularity in the result, if he is quite sure that the divisor will never be zero?

A solution I'm working on basically requires a proof that the divisor will never be zero. If the programmer is quite sure, a proof should be written down and passed to the compiler. Using qualified types to denote propositions and proofs of those propositions as evidence (as in type classes' instance dictionaries):

(/) :: (Nonzero i) => int -> i -> int
foo a b c = if a == 0 then bar c else baz b / a


In the example a proof is created by the compiler that in the else branch "a" isn't zero. If no proof is provided the proposition would be propagated to foo's caller. If we allow intersection/union types we could extend division's type with | (Zero z, Nonzero i) => z -> i -> z and get a nice optional optimization.
The idea isn't infeasible, though it would probably make compilation times longer if the compiler is expected to infer properties. OTOH if the programmer is supposed to provide the proofs, the compiler just needs to check them, which is quite practical. A way to make this process simpler is to provide a sophisticated language for writing proofs and an interactive tool to help nail them down; both would be similar to current theorem provers, and the written proofs would work like proof-carrying code.
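A rough Haskell approximation of the idea, under my own assumptions (the qualified-type machinery above is hypothetical; the names `NonZero`, `nonZero`, `divNZ`, and this two-argument `foo` are mine). The evidence that a divisor is nonzero is captured once, by a smart constructor, and division then needs no runtime check:

```haskell
-- Invariant: the wrapped Int is never 0. In a real module the
-- constructor would be kept private so the invariant can't be forged.
newtype NonZero = NonZero Int

-- The only way to obtain a NonZero is to test for zero first.
nonZero :: Int -> Maybe NonZero
nonZero 0 = Nothing
nonZero n = Just (NonZero n)

-- Total by construction: the divisor carries its own evidence.
divNZ :: Int -> NonZero -> Int
divNZ a (NonZero b) = a `div` b

-- Mirrors the comment's foo: the branch on a == 0 itself produces
-- the evidence used in the else branch.
foo :: Int -> Int -> Int
foo a b = case nonZero a of
            Nothing -> 0              -- the "bar" branch: a really is zero
            Just nz -> b `divNZ` nz   -- here a is proved nonzero
```

Unlike the proposed compiler-inferred propositions, this puts the proof obligation in the value level, but it shows the same shape: impossible divisions simply cannot be expressed.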

### Note that, mathematically,

Note that, mathematically, division by zero often occurs by rewriting an equation, where said rewriting is actually invalid when division by zero occurs...
Or, in other words, there is often a perfectly sensible result corresponding to this case.

Off the top of my head, I think of

• averages: if there are zero terms to average, the average is undefined.
• roots of second-order polynomials, e.g. a x^2 + b x + c = 0: using the standard formula (-b ± sqrt(b^2 - 4ac))/2a is invalid when a == 0, but a == 0 means the actual equation is b x + c = 0, and so the solution actually is x = -c/b, NOT "undefined".
• someone mentioned factorial: if we work with the gamma function, n! = gamma(n+1), and 0! = gamma(1) = 1 – but whether this is applicable depends on the origin of the factorial term.

So, generally, a "singularity" should be treated with care mathematically, not just in terms of "division errors".

### But integers aren't closed under division anyway...

But seriously...if you want total programming then it seems clear to me that
(/) :: Int -> Int -> 1+Int

I don't understand why 0/0 = 0 is any more wrong than 5/3 = 1.

If you want total programming with totally meaningful results, don't you need something like

data Rational = Rational Int Int

(/) :: Int -> Int -> Rational

where the denominator is allowed to be zero?

### Two issues

(1) There are several kinds of division with different applications. Sometimes you want -5/3=-1, sometimes you want -5/3=minus five thirds, sometimes you want -1.66667 and sometimes you want -2. I've met applications where all of these are useful (and most popular programming languages I know offer more than one of these operations). But in my experience, almost every time I've had code attempt to divide by zero the cause has been a bug in the code. I think that's the primary reason to handle division by zero differently.

(2) Division is just an example. I think we're really talking about mathematical exceptions in general. What should the return type of sqrt or acos be? We need to deal with these properly in total functional programming.

### And don't forget currency

What's one-third of five dollars? (Possible answers: $2, $1, $1.66, $1.67.) (Now if I can just install a program to transfer those fractional cents to my bank account.)

The most common place I encountered division by zero was in embedded systems, where a value will theoretically approach zero, but due to inaccuracies in measurement may result in a division by zero that has to be caught.

### no kidding

I've been wondering why (high-level) programming languages don't have a slew of different classes like this to represent the type of where your calculation is going. To my mind (and I'm not an academic, so I've ignorance a'plenty) this is a much more serious concern than whether or not the program halts.

### What's wrong with Coq

Coq is actually not too bad for demonstrating total functions. The Fixpoint keyword lets you define ML-like functions that are guaranteed to be total. You just have to make them structurally recursive. The syntax is almost exactly the same as OCaml, so it's not hard to learn. No proofs or esoteric stuff needed.

### Nothing's wrong with Coq (except for its popularity...)

It took me quite a while to realise that Coq was a really good system for total functional programming. I think it isn't generally advertised as such, and the tutorials available don't really give the impression that it is a good functional programming environment.

I hope that more examples and tutorials on doing software verification and total functional programming in Coq end up on the web. I'd write one myself but I'm not yet competent enough.

### Programs in Coq

Adam Chlipala has done a lot of work using Coq as a programming language.

My thesis research centers on building programming language tools with proofs of total correctness, using the Coq proof assistant. What distinguishes my approach is that I use dependent types in the implementations of the tools, not just their correctness proofs, following a kind of "correctness by construction" discipline.

See specifically his paper Thoughts on Programming with Proof Assistants - "My purpose in this position paper is to make the general claim that Coq is already quite useful today for non-trivial certified programming tasks..."

### Non-turing completeness

One of the points this paper makes is to dispense with Turing Completeness in order to buy security and optimizability. As a practitioner I would readily give up Turing completeness, and have a separate syntactic space for the Turing complete part. HTML + Javascript is a classic example. SQL + PHP is another.

The market is embracing this rapidly, because the advantages in practice are compelling. Today, more and more of the GUI is written in HTML/CSS, with some Javascript thrown in for imparting "dynamism". Over time, more of the things you need scripts for is migrating to the underlying HTML/CSS system. On the server side, database systems do the query processing, and you use Java or C# or PHP for business logic.

Sridhar

### Partial functions are types

...or they give rise to types.

If T is some type, f is some partial function over T, and T' is the set of all values of T on which f is defined, it's obvious that T' is a Liskov subtype of T.

Of course, determining whether a given value of T is in T' is often undecidable--otherwise we'd have no need for array bounds-checking, or for ALU/FPU hardware which causes a processor exception if you try to divide by zero.

### Not sure about the fib definition

Hmm, I don't understand whether the bogus definition of fib is intentional or not. Can someone please enlighten me on why the paper defines the last line of fib to be

> fib (n+2) = fib (n+1) + fib (n+2)

instead of what I believe to be the right definition,

> fib (n+2) = fib n + fib (n+1)

Thanks a lot!
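For reference, here is the recurrence the commenter expects, written as a terminating Haskell definition. The paper's last line, fib (n+2) = fib (n+1) + fib (n+2), defines fib (n+2) in terms of itself and so denotes bottom; presumably it is a typo for this:

```haskell
-- The standard Fibonacci recurrence: every recursive call is on a
-- strictly smaller argument, so the definition is structurally total.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 2) + fib (n - 1)
```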