Interval Computations

Interval Computations came up in a recent LtU discussion.

Are they a worthy language feature?

Definitely worthy

Interval computations are not trivial. Creating efficient means of calculating things like Sin, Cos and many other analytic functions with interval arithmetic takes much more time than your average hacker is likely to spend. If you can put it into the language as a library, then perhaps that is the right way. Why should we reinvent the wheel?

One thing is for sure though: physicists should be using this a whole lot more often than they do. I've seen quite a few computations that are probably meaningless due to the fact that they were calculated using floats. It's better to require a decrease in your certainty about a computation in order to cause your program to terminate than it is to end up with some computation that may be completely worthless.
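
For readers who haven't seen it, the easy core of such a library is tiny; it is the transcendental functions and the outward rounding that take all the time. A toy Python sketch of just the easy part (my own naming, not any particular library):

class Interval:
    """A closed interval [lo, hi]; outward rounding deliberately omitted."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

print(Interval(1, 2) * Interval(-3, 4))    # [-6, 8]

A correct interval cosine, by contrast, has to notice whether the argument straddles a maximum or minimum of cos, and a production library also rounds the lower endpoint down and the upper endpoint up at every step - which is where the real work mentioned above comes in.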

A related method is...

...automatic differentiation, which you can view as a limiting case where you allow the intervals to become infinitely small. So instead of storing the interval from a to b as the pair (a,b), you store the interval (a,a+be) as (a,b), where e is an 'infinitesimal' with e^2=0. It's fairly straightforward to generalise the usual operations. For example, multiplication: (a,b)*(c,d) = (a*c,a*d+b*c). This has many applications, such as allowing you to differentiate functions 'automatically' without using symbolic differentiation or finite differences, or automatically modifying computational geometry algorithms to avoid degenerate cases. This paper describes it in a functional setting. It also plays nicely with interval arithmetic for writing optimisation code where you might need to bound derivatives. There's also a half-way house called affine arithmetic which deals with some of the shortcomings of interval arithmetic.

As basic automatic differentiation is easier to implement than complex numbers (in fact, complex arithmetic is a crude approximation to automatic differentiation) and has as many applications, it really ought to be part of every standard programming language's mathematics library.

It's also interesting to think about automatic differentiation abstractly where there are nice connections between the "imperative but in reverse order" monad and a variant of automatic differentiation called adjoint differentiation.
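
For the curious, the whole trick fits in a few lines of code. A toy Python sketch of forward-mode automatic differentiation using exactly the (a,b)*(c,d) = (a*c,a*d+b*c) rule above (class and function names are mine, not from any particular library):

import math

class Dual:
    """a + b*e with e*e == 0; the b component carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def _lift(self, x):
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __sub__(self, other):
        other = self._lift(other)
        return Dual(self.a - other.a, self.b - other.b)

    def __mul__(self, other):
        other = self._lift(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def sin(x):                      # one primitive function, via the chain rule
    return Dual(math.sin(x.a), math.cos(x.a) * x.b)

f = lambda x: x * x * sin(x)     # f(x) = x^2 sin x
print(f(Dual(3.0, 1.0)).b)       # f'(3) = 6 sin 3 + 9 cos 3, with no symbolic algebra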

It is similar...

I believe that automatic differentiation can be implemented as a degenerate case of interval arithmetic, but I've never implemented automatic differentiation, so I may be missing subtleties. I think that automatic differentiation is a bit weaker in a lot of ways, though. In effect, it only knows about the behavior of a function at a single point, and assumes linearity around that point, while interval arithmetic takes into account the true behavior of a function over an entire enclosing range.

For example, if you just look at the tangent of cos[x] at x=0 and/or x=2 pi and extrapolate that behavior, you may get an odd idea of how the function behaves, and your extrapolation errors can be quite bad. In comparison, interval arithmetic knows that the function varies between -1 and 1 over that range.

Also, since you're working so close to the machine epsilon of your hardware, doesn't this suffer from significant rounding error?

Different applications

Automatic differentiation (AD) is essentially about local behaviour at a point, so yes, it is in some sense an approximation. But it's a good approximation with lots of applications, e.g. solving equations, optimisation, and all the stuff calculus is good for. Automatic differentiation deals gracefully with 'correlations' between variables, which intervals can't handle. And you can adapt automatic differentiation in an efficient way to partial derivatives, so you can simultaneously ask "what happens if we allow x to vary and hold the other variables constant" for all of the variables of interest (sketched below).

But I mention AD not as a competitor to interval arithmetic, but as a neat twist on interval arithmetic that has a formal similarity and a similar implementation, but is useful for a slightly different, though related, class of problem. And interval arithmetic can be combined with AD to give optimisation algorithms that are both efficient and guaranteed not to miss optima, because you can guarantee, say, that in a certain volume the derivative of your objective function never reaches zero.

And you're not working with the machine epsilon. I think I may have said something that misled you. You define a new kind of number a+eb with e^2=0 but a and b are 'ordinary sized' numbers and 'e' is a formal trick (like i^2=-1) so you don't actually deal with small numbers. In fact, this is the nice thing about automatic differentiation - you don't have to deal with the machine epsilon like the way you do with finite differences.

And note there is also 'affine arithmetic' which is a kind of combination of both approaches - but I've never used that.
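
To make the partial-derivative remark above concrete: carry one derivative slot per input variable, and every arithmetic operation updates all of them at once. A hedged Python sketch of that idea (a toy, sometimes called vector-mode forward AD; names are mine):

class MultiDual:
    """A value plus a tuple of partial derivatives, one slot per input variable."""
    def __init__(self, value, partials):
        self.value, self.partials = value, tuple(partials)

    def __add__(self, other):
        return MultiDual(self.value + other.value,
                         [p + q for p, q in zip(self.partials, other.partials)])

    def __mul__(self, other):
        return MultiDual(self.value * other.value,
                         [self.value * q + other.value * p
                          for p, q in zip(self.partials, other.partials)])

def variables(*values):
    """Seed each input with derivative 1 in its own slot and 0 elsewhere."""
    n = len(values)
    return [MultiDual(v, [1.0 if i == j else 0.0 for j in range(n)])
            for i, v in enumerate(values)]

x, y = variables(2.0, 5.0)
g = x * x * y + y                   # g(x,y) = x^2*y + y
print(g.value, g.partials)          # 25.0 (20.0, 5.0), i.e. g and (dg/dx, dg/dy)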

Synthetic Differential Geometry

Having a d such that d^2 = 0 is exactly how Synthetic Differential Geometry works (to the first order), incidentally. For those who might be interested, Synthetic differential calculus, and nilpotent real numbers (a PDF) is a good introduction, and Anders Kock's homepage is a good source for more (especially the book available online called Synthetic Differential Geometry).

Yes, it's much the same thing

What Kock describes is straightforwardly and efficiently implementable. You end up having to deal with many of the same issues as with interval arithmetic. Can we just take any old piece of code that computes over (some approximation to) the real numbers and expect it to work when we replace those reals with "infinitesimals"? In general the answer is no but in practice the tweaks required to make it work are often quite small.

A tangent (no pun intended)

Interestingly enough, one of the guys in Brazil (?) who does all the interval/affine arithmetic stuff is also one of the guys who did the Lua scripting language.

As another aside, I've been working on a Mandelbrot/Julia set generator using hierarchical complex interval arithmetic. The advantage is that you can avoid, or at least defer, the sampling problems, and actually construct trustworthy images. The Brazil/Lua guys have done this already for Julia sets, but the fun bit I'm adding is memoization of regions in the escape/captive sets, which is an interesting performance optimization.

-W

Ooh, complex intervals

I'd be very interested in this. Can you share more details? Intervals of complex numbers appear to be significantly more... uh... complex than intervals of real numbers. I've only implemented intervals of real numbers, but I'd like to know what resources you've found useful, and how your representation behaves (e.g. in the complex plane, are the intervals represented as a rectangle, or a disc, or...?)

And do you have any comment on the most expensive book ever written? (This is on my Amazon.com wishlist if anyone wants to buy it for me or has a copy sitting around.) Are there better resources for complex interval arithmetic?

Not so epic

It's probably not as cool as you may be thinking: for simplicity's sake, all I'm doing is using conservative rectangles (to make the tableau simpler, and so you can deal with Re and Im separately), and all I did was sit down and do some high-school brute force a la pen and paper to turn z^2 + c into interval arithmetic on Re and Im. Basically, this is the naive thing to do.
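
For anyone wanting to reproduce the naive rectangle version: with z = x + i*y, Re(z^2) = x^2 - y^2 and Im(z^2) = 2*x*y, and each of those is evaluated with ordinary real-interval rules on the Re and Im ranges. A toy Python sketch (my own naming; the actual code surely differs in its details):

def square_rect(x_lo, x_hi, y_lo, y_hi):
    """Conservative rectangle enclosing z^2 when Re(z) is in [x_lo, x_hi] and Im(z) is in [y_lo, y_hi]."""
    def isqr(lo, hi):                            # interval square, tight even when 0 is inside
        hi2 = max(lo * lo, hi * hi)
        return (0.0, hi2) if lo <= 0.0 <= hi else (min(lo * lo, hi * hi), hi2)
    def imul(a_lo, a_hi, b_lo, b_hi):            # interval product
        ps = (a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi)
        return min(ps), max(ps)
    xx_lo, xx_hi = isqr(x_lo, x_hi)              # x^2
    yy_lo, yy_hi = isqr(y_lo, y_hi)              # y^2
    xy_lo, xy_hi = imul(x_lo, x_hi, y_lo, y_hi)  # x*y
    return (xx_lo - yy_hi, xx_hi - yy_lo,        # Re(z^2) = x^2 - y^2
            2.0 * xy_lo, 2.0 * xy_hi)            # Im(z^2) = 2*x*y

def step(rect, c_re, c_im):                      # one z -> z^2 + c iteration on a rectangle
    re_lo, re_hi, im_lo, im_hi = square_rect(*rect)
    return (re_lo + c_re, re_hi + c_re, im_lo + c_im, im_hi + c_im)

print(step((-0.1, 0.1, -0.1, 0.1), -0.75, 0.1))  # a small box near the origin, c = -0.75 + 0.1i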

If you want to talk more about it (the parts that I think are interesting aren't the interval arithmetic parts) then perhaps you should e-mail me (my gmail is wonchun) so we can take it offline.

-W

Not For Language

Interval arithmetic should be a library, not an actual language feature in my view. Rather, the language should be designed to make this sort of feature implementable in a clean way.

Framing the questions

Interval arithmetic should be a library, not an actual language feature in my view. Rather, the language should be designed to make this sort of feature implementable in a clean way.

Obviously it is better if this can be done via a library. So the real question is: Which language features are needed in order to make such a library possible and useful?


For example, it is argued below that you would want to be able to use interval computations in any computational code, whether originally designed for interval math or not. How would you make this goal easy to achieve?

Polymorphism

Polymorphic type systems should aim to support exactly this kind of substitution pattern.

One problem with many languages though is that the built-in math operators behave differently than user defined operators - they are overloaded for a fixed set of primitive data types.
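
As a small illustration of the substitution pattern being asked for: in a language where user-defined types can participate in the ordinary arithmetic operators, "third-party" code written for plain floats accepts an interval unchanged. A toy Python sketch (the Interval type is the same sort of toy sketched earlier in the thread, with reflected operators so plain numbers mix in):

class Interval:
    """Toy interval, with reflected operators so floats and ints mix in."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def _lift(self, x):
        return x if isinstance(x, Interval) else Interval(x, x)
    def __add__(self, other):
        other = self._lift(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)
    __radd__ = __add__
    def __mul__(self, other):
        other = self._lift(other)
        ps = (self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))
    __rmul__ = __mul__
    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

def kinetic_energy(m, v):          # "third-party" code, written with plain floats in mind
    return 0.5 * m * v * v

print(kinetic_energy(80.0, 9.5))                    # 3610.0
print(kinetic_energy(80.0, Interval(9.0, 10.0)))    # [3240, 4000], with no change to the function

A language whose built-in operators cannot be taught about new operand types forces the library route through named methods instead, which is exactly the ugliness being pointed at here.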

Obvious?

Ehud, could you explain why it's obviously better? There are some trade-offs, but I've implemented it as a first-class language feature, and it now seems to me that I'd certainly do it that way again for the reasons cited here, and for more unstated reasons. Everything's a trade-off, though, so I'd like to calibrate our obviousness-meters.

I think it would be interesting to hear from Sun's team about why intervals were implemented as a native type in their Fortran implementation, and as an external library in C++, and what they think of both approaches. I'm sure that language features have a lot to do with it. Their implementations are excellent.

Clarification

Statements of this sort include an implied "other things being equal" clause. In general we prefer compact languages with big libraries, rather than the other way around. This makes it easier to implement the language correctly, learn it, etc.

What I meant to say is that if you can provide the same level of support for intervals from a library, that's better. It would mean that (a) the language is more compact and (b) the language has good support for building libraries.

plus/minus?

you store the interval (a,a+be) as (a,b) where e is an 'infinitesimal'

Sorry I haven't had a chance to read the cited papers yet: do you mean to say that (a-be, a+be) is stored as (a,b)?

wrong reply

Sorry, that was meant as a reply to sigfpe's note.

Same thing

(a-be,a+be) is basically the same interval as (a,a+be). By choosing to let e^2=0 we're working to first order and discarding second order effects - which is exactly what calculus is about. The difference between the intervals (a-be,a+be) and (a,a+be) is essentially a second order effect.

You can get an idea of how automatic differentiation arises in the limit of interval arithmetic by considering the product of two intervals (assuming a,b,c,d are >=0 and e is so small that e^2=0):

[a,a+be]*[c,c+de] = [a*c,(a+be)*(c+de)] ~ [a*c,a*c+(b*c+a*d)e]

So if [a,a+be] is represented by (a,b) we get
(a,b)*(c,d) = (a*c,b*c+a*d)

Calculus basically tells us how infinitesimal intervals scale when we apply functions to them. And we can implement automatic differentiation by analogy with complex arithmetic, except we define e^2=0 instead of i^2=-1.
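
As a concrete check of that product rule, take a=2, b=3, c=5, d=4: [2,2+3e]*[5,5+4e] = [2*5, 2*5+(3*5+2*4)e] = [10,10+23e], which is just the familiar d(xy) = y dx + x dy with 5*3 + 2*4 = 23.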

Benefits of being a language feature

There are some benefits to intervals being a first-class language feature. If it's part of the language, almost any code or library, even one written by a third party, can automatically use intervals in computations without recompilation or modification, especially if you have a unified "do the right thing" numerical type that can contain integers, real numbers, rationals, complex numbers, and intervals. Compare this to the effort needed to convert a program to use intervals using, say, something like Sun's Fortran compiler (which has INTERVAL as a native type), or Sun's interval libraries for C++, or Boost.

Any equation can instantaneously benefit from simple interval-based error analysis, simply by passing in intervals instead of real numbers. I've found this very useful in my Frink language which has intervals (of real numbers) as a native numerical type. (See relevant documentation.) For example, I have some rather complicated calculations for astronomy that can instantly work with intervals, giving bounds on uncertainties. I found this almost magical in its power. In Frink, you can pass in intervals almost anywhere you can use any numerical type.

Case in point: I was working on a geocaching puzzle that required triangulation of GPS coordinates from several locations. Solving the systems of ellipsoidal equations simultaneously would have been really, really hard. With interval arithmetic, I could find a solution and even tell how bad of an error the "to the nearest degree" bearings might produce in the result. With literally no changes in the code other than passing in an interval instead of a real number. It literally took me seconds to provide a "good enough" error analysis of the problem, one that would have been very, very difficult otherwise.

In the other thread, I compared it to the feeling of power I first got when I wrote my first program containing regular expressions for text parsing. I knew that I could use a language that didn't have this as a feature, but I'd be working a whole lot harder than I needed to, and I'd really miss it. And when one looks at, say, implementations of regular expressions as a library, one can see how much absolutely uglier it can be (I'm looking at you, Java, and your forest of backslashes.)

Interval arithmetic also gives you some incredibly powerful tools for finding solutions to systems of equations, as I mentioned in the other thread.

In addition, it allows you to add mathematical rigor to many statements. Mathematica has interval arithmetic, and this allows it to return well-bounded intervals in places where you'd otherwise have to throw exceptions, or bomb out:


Limit[ Sin[1/x], x -> 0 ]
Interval[{-1,1}]

Properly implemented interval arithmetic also helps you ensure that your values come out right. A proper implementation will bound floating-point errors and round calculations outward, giving you an interval in which the true result is guaranteed to lie. The controlled rounding direction of IEEE 754 math was designed primarily to support interval arithmetic, so your hardware may already largely support it, but you may need to work at a very low level to control the hardware rounding direction and do so atomically.
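
If you can't (or don't want to) touch the hardware rounding mode, a common cheap substitute is to widen each computed endpoint outward by one unit in the last place. A Python sketch (needs Python 3.9+ for math.nextafter; a real library would use IEEE 754 directed rounding instead and get tighter bounds):

import math

def add_outward(a_lo, a_hi, b_lo, b_hi):
    """Interval addition, each endpoint widened one ulp outward so the exact sum is enclosed."""
    lo = math.nextafter(a_lo + b_lo, -math.inf)   # nudge the low endpoint down
    hi = math.nextafter(a_hi + b_hi, math.inf)    # nudge the high endpoint up
    return lo, hi

print(add_outward(0.1, 0.1, 0.2, 0.2))   # a hair wider than [0.3, 0.3], but guaranteed to enclose the exact sum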

The less-obvious benefits of interval arithmetic come in with automatic parallelization of algorithms. Bill Walster of Sun is one of the leading researchers in the field, and his team has built one of the strongest implementations of interval arithmetic. (He's a really nice guy, too.) He says in this fascinating article:

"Computing with intervals is the only approach that lets you reliably scale up some parallel computations to 100,000 processors and beyond," he says. "You can't get there with floating-point arithmetic alone."

Why not?

"With floating-point, you sometimes get different answers on parallel machines, because the order in which floating-point operations are performed matters. When this happens, you don't know whether the differences in successive runs of the same problem are due to rounding and its consequences, or the result of a software error or a hardware flaw," Walster says.

"With intervals, I am guaranteed that the value I'm computing is contained in the interval result. It's a mathematical guarantee, a numerical proof."

Read that article. It's eye-opening. Any language designed for parallelism should include interval arithmetic. Walster has called interval algorithms "embarrassingly parallelizable." It may well be the best way to automatically parallelize algorithms.

I found it easiest to implement intervals at the lowest level of my numerical libraries, and as part of the language itself. (Interval operations often just delegate calculations to your other numeric routines at each endpoint of the interval, passing them an appropriate rounding direction; if a function or operator is non-monotonic, they have to do more. Since they're so tightly bound, it makes sense from both a simplicity and a performance standpoint to have these calculations in the numeric library itself.) It did require quite significant rewrites of my numeric libraries, but these were for the best. For example, you may need to pass a rounding direction and precision into each mathematical operation, but doing so gave me the ability to control working precision for various numeric algorithms.

In short, if parallelization of algorithms or guaranteed numeric results are important to you, you really should consider intervals as a language feature. If your language cares enough about numerics to, say, have complex numbers as a language feature, you might consider implementing intervals. Interval arithmetic is, quite simply, one of the easiest ways to let your program automatically analyze its own algorithms, and provide truly rigorous results.

Doing the right thing with numerics and physical quantities is one of Frink's design goals, but I'd certainly use intervals in another language if it were as easy to apply. If I have to rewrite my program to use intervals, or if I couldn't use interval analysis in third-party code, it loses much of its potential power, and I might start looking for another tool.

Bill and Sun's patents

There was a big uproar in the interval community when it became public that Sun had filed patents on interval methods. Bill was named as one of the inventors.

Several researchers were quite upset because they believed prior art was covered by the Sun patents.

Bill acted as if he had been obliged to file the patents, but I am not sure about his role.

Another remark:
The first view of interval arithmetic is as a kind of improved numerical method.
But lately it has seemed to me that one might rather see the technique as a form of automated proof - e.g. you can prove whether or not a given function has an optimum.
It is called reliable computing, if I remember correctly.
I would like to find the time to become more familiar with it.

Regards,
Marc

The type of 3.14

I think languages should use exact arithmetic (rationals and decimals) in more places. But inexact arithmetic is still a fact of life. (There are some cool exact real number libraries, supporting transcendentals and exact trig and everything; but if I recall correctly, they don't support exact == or >.)

Now if you're going to do inexact arithmetic, it just sounds like good common sense to carry around an error term. Surely something like this should be the default.

By "the default", I mean, a language can provide as many numeric types as it likes in the standard library, but what is the type of the literal 3.14? In C++/Java/Python, it's a double (in Python they call it a float). That is probably a bad default.

Here are the questions I see:

1. Is there a standard semantics for intervals that most users will appreciate and use? A language doesn't need features that fill 30% of the use cases.

2. Do intervals better help users notice and cope with the problems of inexact arithmetic? I'm interested in mundane cases where neither hardware acceleration nor parallelization is important.

If yes, it might be a good idea to relegate float and double to some "highly optimized math" corner of the library and use intervals instead as the default type.

My understanding is that

My understanding is that interval arithmetic is about as tricky to use correctly as floating-point arithmetic. With the latter you can quickly lose all precision. With the former, you can get an over-pessimistic interval which in fact gives you about as much information as the FP result. The major advantage of interval arithmetic is that you can't easily get overconfident in your results, but getting an overly wide interval too often does no good, and will probably leave the impression that the technique is in fact useless.

That leaves us with the question: "is there a model of real computation which behaves sanely even when put in the hands of the uninitiated?" My impression is that FP and interval arithmetic behave about as sanely, just in opposite directions of sanity.

There has been some mention of taking some FP code and converting it for interval arithmetic. Something to note is that good floating-point code will behave especially badly under this conversion, because it purposely uses error cancellation and converging iterations, both techniques which often widen the interval when carried over to interval arithmetic.

Why it's a language feature

This brings up a very good issue on why intervals are better implemented as a low-level language feature--to deal with the uncertainties in finite representations of real numbers.

It may be that you want to automatically turn floating-point literals into intervals directly, which denote some of the uncertainty and truncation of any finite representation. For example, the literal 3.14 might be turned into the interval [3.135, 3.145] indicating the uncertainty in the last digit.

(Frink actually allows 3-valued intervals, with a "main/middle" value which represents our best estimate at any given time, so this might become [3.135, 3.14, 3.145].)

Some implementations allow this, but a library can't get at the original program text early enough, so you're forced to construct intervals from strings:


pi = new interval["3.14"]

Which is rather unfortunate, and ugly, and requires a lot of rewriting of code to work. The parser simply got its hands on the value too early, and may have jammed it into an IEEE 754 double or something, losing information, and even turning 0.1 into 0.099999999999999 or some such.

A language that can handle intervals from start to finish may have a compiler pragma that automatically turns floating-point literals like 3.14 into an interval equivalent, obviating the need for ugly hacks like passing them in as strings. You usually can't do this at the library level; the parser will have parsed the literal before you ever see the raw characters. This is another subtle reason why intervals are good to implement as a first-class language feature, if you want all literals to be treated as appropriately uncertain intervals, automatically.
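
A sketch of what such a pragma, or the string-based constructor, has to do with the raw digits - here in Python, using exact rationals and the "half a unit in the last decimal place" convention from above (the function name is mine, not Frink's):

from fractions import Fraction

def interval_from_literal(text):
    """'3.14' -> the exact rationals 3.135 and 3.145, half a unit in the last place either way."""
    value = Fraction(text)                                  # exact; no double is ever involved
    digits = len(text.split(".")[1]) if "." in text else 0
    half_ulp = Fraction(1, 2) / 10 ** digits
    return value - half_ulp, value + half_ulp

print(interval_from_literal("3.14"))     # (Fraction(627, 200), Fraction(629, 200))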

Is there a standard semantics for intervals that most users will appreciate and use? A language doesn't need features that fill 30% of the use cases.

There have been lots of efforts to unify the notational styles used in interval analysis. Here's one proposal. Some of the notation is very common, but the edge cases are less so. There is a lot of discussion about what comparison operators like > mean when applied to overlapping intervals, so interval-aware applications use special operators to compare intervals. There is also some room for interpretation of functions like the two-argument version of arctan.

Do intervals better help users notice and cope with the problems of inexact arithmetic? I'm interested in mundane cases where neither hardware acceleration nor parallelization is important.

Yes, they can, but users have to care. Do most users really think about what they're doing when they use a double? Do they even know that 0.1 can't be represented exactly?

Affordances

Do intervals better help users notice and cope with the problems of inexact arithmetic?

Yes, they can, but users have to care.

Right, but a user gets a pretty significant hint that there's something there to care about when the answer comes back 0.7 +/- 2000. Whereas if it came back 0.7, bingo, done, publish it.

As a user, I would rather not be bothered if the system's assertion that the error range is huge is unreliable.

popular article from sun -

popular article from sun - http://research.sun.com/minds/2004-0527/

An alternative to interval arithmetic

There was a fleeting mention of exact real number computation and I thought it deserved better coverage.

The problem with all inexact arithmetic (including interval arith) is that the precision is fixed a priori. Interval arithmetic gives you an estimate of the error and widens the error bound if it loses precision. You don't get to dictate the tightness of the error bounds of the result.

Exact real number computational approaches have "infinite precision", in that numbers are represented as, for example, lazy streams of digits (one possible representation). While computationally slower, these approaches let you ask for n digits of precision and be guaranteed to get the *result* to that level of accuracy; the library takes care of figuring out the appropriate level of precision for all intermediate calculations. This is what differentiates it from arbitrary-precision arithmetic.

  1. Nice introduction: A Calculator for Exact Real Number Computation. The bonus for the LtU crowd is that it is written in Haskell.
  2. "Many Digits" Friendly Competition
  3. MPFR library

    Maybe it goes without saying, but GIGO

    The problem with infinite precision is that it assumes you know the inputs to infinite precision. In the real world you usually do not have that precision.

    Railroads estimate that they have a mile less track in January (cold nights) than in August (hot days). (The difference is taken up in all the gaps between rail sections.) In theory you can measure their tracks down to the ten-millionth of an inch, if that is what your infinite-precision calculator requires to give the guarantee you want. In practice, not only does the expansion/contraction add unnoticed error, but the atmosphere (any tool that can measure that accurately is likely based on a laser) will make your accurate measurements worthless.

    There are constants in physics that are only known to about 8 significant digits. (Perhaps some with less? Or maybe they have learned more since I last studied physics?) My physics professors made a big deal about significant digits; they took off a lot of points from people who wrote down the answer straight from their calculator, because the calculator was happy to provide more precise answers than the inputs we were given justified. Physicists want infinite-precision answers, but their limits are on the input side. It is trivial to get an answer to the maximum precision your inputs allow - compared to the effort involved in getting those precise inputs in the first place.

    Interval arithmetic is useless

    The problem with interval arithmetic is that it's pessimistic - oftentimes wildly pessimistic. For example, if you subtract two numbers, you introduce an error, and when you divide two numbers you introduce an error, right? So at the end of dividing and subtracting, you should have more error than you started with, right? Well, except when you're doing Newton's method, after which you have significantly less error than you started with. Take Newton's method, implement it in interval arithmetic, give it the function f(x) = x*x - 2 and the derivative df(x) = 2*x, give it an initial guess of 1.4, and run it long enough, and you'll get the answer that the square root of 2 is somewhere between 0 and +infinity. Correct, but not really helpful.
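
    To see the widening being described here, a naive transliteration in Python (toy interval operations written inline; note that the proper interval Newton operator intersects each step with the previous enclosure and handles a derivative interval containing zero by splitting, which is what keeps it from exploding):

    def imul(a, b):                              # interval product
        ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(ps), max(ps))

    def isub(a, b):                              # interval difference
        return (a[0] - b[1], a[1] - b[0])

    def idiv(a, b):                              # interval quotient (denominator must exclude 0)
        ps = (a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1])
        return (min(ps), max(ps))

    x = (1.3, 1.5)                               # an enclosure of sqrt(2)
    for i in range(6):
        fx = isub(imul(x, x), (2.0, 2.0))        # f(x)  = x*x - 2
        dfx = imul((2.0, 2.0), x)                # f'(x) = 2*x
        if dfx[0] <= 0.0 <= dfx[1]:
            print("step", i, ": 2*x now straddles 0; the naive quotient is unbounded")
            break
        x = isub(x, idiv(fx, dfx))               # naive Newton step
        print("step", i, ":", x)                 # the enclosure gets wider every step

    As the replies below discuss, this failure belongs to the naive transliteration and the dependency problem, not to interval methods as such.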

    Not useless

    Well, it is sensitive to the discontinuities in such an iterative algorithm. But the fact that it is not directly applicable to this form of approximation doesn't mean that it is completely useless in all domains!

    FYI, there are ways of doing Newton using intervals, but your straw man is right about something: it isn't always suitable as a drop-in replacement.

    More here:
    http://www.ic.unicamp.br/~stolfi/EXPORT/projects/affine-arith/

    -W

    You've shown why interval arithmetic...

    ...is sometimes useless. But this isn't an argument for why interval arithmetic is (always) useless.

    Incidentally, your statement isn't actually true. Here is a Mathematica session:

    In[1]:= f[x_]:=x/2+1/x
    In[2]:= iterate[f_,n_,x_]:=If[n==0,x,iterate[f,n-1,f[x]]]
    In[3]:= iterate[f,20,Interval[{1.3,1.5}]]
    Out[3]= Interval[{1.31656,1.51911}]

    (Same result for any iteration count that is large enough.)

    There's an honest mathematical reason for expecting this. If repeated iteration of a function f converges to a fixed point x, it means that |f'(x)|<1 there (at least generically). This means that f shrinks small enough intervals around the fixed point. So the interval isn't necessarily doomed to grow to [0,+infinity].

    Useless

    The problem with interval arithmetic is that it's pessimistic- often times wildly pessimistic.

    Yeah, even simple cases give interval libraries trouble. For example, suppose x is in the interval [4, 6]. Then x - 0.9 * x is obviously in the interval [0.4, 0.6]. But if you plug this expression into a typical interval library, you'll get [-1.4, 2.4]. This is because the library doesn't know that the two x's in the formula are the same.

    A library can't do this kind of reasoning. A customized compiler could, maybe, but it's probably not worth it.

    One parting thought... Instead of true intervals, why not carry around "number of significant figures"? It's not pessimistic or even necessarily accurate--still not suitable for Newton's method, for example--but if it's good enough for high school chemistry... maybe mathematical rigor is secondary to the need to warn people when they've definitely thrown away all precision.

    A solution

    "Those who do not understand affine arithmetic are condemned to reinvent it, poorly," to paraphrase widely known aphorism.

    Affine arithmetic could solve exactly this kind of question.
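
    For the x - 0.9*x example above: an affine form tracks each source of uncertainty as a named noise symbol ranging over [-1,1], so the two occurrences of x stay correlated and the subtraction cancels. A toy Python sketch of just the affine operations (multiplying two affine forms needs an extra approximation term, which this omits; names are mine):

    class Affine:
        """center + sum(coeff * e_sym), where every noise symbol e_sym ranges over [-1, 1]."""
        def __init__(self, center, noise):       # noise: dict mapping symbol -> coefficient
            self.center, self.noise = center, dict(noise)

        def __sub__(self, other):
            keys = set(self.noise) | set(other.noise)
            return Affine(self.center - other.center,
                          {k: self.noise.get(k, 0.0) - other.noise.get(k, 0.0) for k in keys})

        def scale(self, c):                      # multiplication by an exact constant
            return Affine(c * self.center, {k: c * v for k, v in self.noise.items()})

        def to_interval(self):
            radius = sum(abs(v) for v in self.noise.values())
            return (self.center - radius, self.center + radius)

    x = Affine(5.0, {"e1": 1.0})                 # x in [4, 6]
    print((x - x.scale(0.9)).to_interval())      # about (0.4, 0.6), not (-1.4, 2.4)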

    A much better solution

    is to understand what a stable numerical algorithm is, and only use stable numerical algorithms. The best that even affine arithmetic can do is to tell us that the answers from our numerically unstable algorithm are garbage. While I suppose that's better than giving us confidence in wrong answers, it's still not as good as getting the right answer. With numerically stable algorithms you don't need interval arithmetic or affine arithmetic, which means you get the advantage of speed as well as the advantage of having the right answer.

    Numerical stability

    Simplify your expressions...

    As noted above, it's essential to symbolically simplify your equations to make your bounds as tight as possible. That particular expression is easy to simplify, but I agree that it gets much harder as you have trickier expressions. This is why programmers who think are still necessary.

    Instead of true intervals, why not carry around "number of significant figures"?

    Mathematica does that. For every numerical value, it carries around a floating-point number indicating how many digits are "significant." If you've used Mathematica much, though, you'll know that it sometimes uses and displays more digits than this, sometimes fewer, and it's not really all that well done.

    Let me see the cards

    Alan, I have used MMA for some 15 years. This marks your second critique in this regard, yet it still lacks support. Supply examples. They must be software-specific, not merely mathematically intractable - that is, other software should not exhibit the same misbehavior in solving the same problems.

    Intractability plagues all software. A valid critique shows problems which are tractable but mistreated, and recognizes tuning knobs like AccuracyGoal, PrecisionGoal, WorkingPrecision, and $MaxExtraPrecision.

    MMA obeys a proper model/view distinction. Consequently the user interface shows what view settings dictate, not absolutely everything in all cases. If your critique is about user display, then adjust your view settings, perhaps?

    By way of contrast, Fortress lacks a model/view distinction. That's why its design bogs down in precedence gotchas and warped perspectives of character sets as "limited resources." In this day and age of high-end graphics and 3D video games, shoe-horning a math language into an ASCII terminal approach is absolutely goofy; and neglecting model/view is unforgivable. I have a long speech about that for some later date. I'll not go into it here.

    While I do not worship MMA and it can frustrate, it sports best-of-breed accuracy/precision capability. The syntax is ugly, but let's not be trivial. MMA also does interval arithmetic out of the box, as you note.

    A general comment on PLs and interval arithmetic

    I think interval arithmetic (and affine arithmetic and automatic differentiation) makes a strong case for the importance of operator overloading, and also for the importance of getting your numerics classes right. Bizarrely, I find C++ more friendly towards defining "replacement" number types than some languages that are otherwise better suited to mathematical programming, such as OCaml and Haskell (well, Haskell isn't the problem, but Num is).

    There are several math libraries out there that are generic, in the sense that they allow users to select the underlying datatype of their numbers - typically single- or double-precision arithmetic. But these libraries often break when they are stretched to radically different number types like intervals. When writing a numerical library, I think it's well worth the effort to consider the possibility that another user might wish to pass unanticipated numeric types through it.
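
    To make that last point concrete: the usual way such a generic library breaks is by calling a primitive-only routine somewhere in the middle. A small Python sketch of the failure and one possible fix, namely parameterizing over the primitive operations (a toy, not any particular library):

    import math

    def norm_broken(xs):
        # math.sqrt insists on a float, so an interval or dual number can't pass through
        return math.sqrt(sum(x * x for x in xs))

    def norm_generic(xs, sqrt=math.sqrt):
        # callers with exotic number types supply their own sqrt (and since sum starts
        # from the integer 0, the number type also needs to know how to add itself to 0)
        return sqrt(sum(x * x for x in xs))

    In a statically typed setting the same idea shows up as writing the library against a type class, module signature, or template parameter for the numeric operations rather than directly against double.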