From Mike Davidson's blog: a web designer gets an education in floating-point arithmetic.

Something happened today which shook the very foundations of what I've always believed about computers. See, maybe this was just a crazy notion, but I was always under the impression that if there was ONE thing computers did well, it was math. Simple math, algebra, geometry, calculus… it didn't matter. Computers have always been equation solving machines. Or so I thought. — Apple Flunks First Grade Math

Not really about programming languages, but I thought it was interesting to see a non-programmer's reaction to a basic programming issue. I say "non-programmer", and yet Davidson is apparently knowledgeable enough to write Flash code and JavaScript DOM applications.

As programming skills become more and more necessary in non-programming fields, I expect we will see more and more people writing code with less and less understanding of what they are doing, and a piecemeal education will become (even more) the norm. How can/should we, as programming professionals and researchers, deal with that?


Shouldn't be exposed

I don't see that this particular issue should be relevant to end users - Apple has a bug in their calculator, is all. The "basic programming issue" in question shouldn't be exposed to end users, or even end programmers, under such straightforward conditions. The way to deal with non-professionals working with systems is to first make sure that those systems get the basics right. This is at least partially a language issue, since so many languages provide such a thin wrapper over hardware floating point.
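The "thin wrapper over hardware floating point" problem is easy to reproduce in any language that exposes IEEE-754 doubles directly; a minimal Python sketch:

```python
# IEEE-754 binary doubles cannot represent most decimal fractions
# exactly, so "obvious" decimal arithmetic goes subtly wrong.
a = 0.1 + 0.2
print(a)             # 0.30000000000000004, not 0.3
print(a == 0.3)      # False

# A calculator that naively formats the raw double exposes the
# artifact to end users; rounding for display hides it.
print(round(a, 10) == 0.3)   # True
```

The bug users see is usually not the arithmetic itself but the decision to show the raw binary result without decimal rounding.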

Thin wrapper over the hardware

It's a good point. Languages meant for general use should probably support rational bignums as the default; when you need the performance of hardware floating point, you should have to request it explicitly.

Come to think of it, even rational bignums might not be good enough for some cases; you might want symbolic arithmetic (2*sqrt(2)-sqrt(2)==sqrt(2), not 1.414...). Don't do any conversion until you have to display a result to the user.
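Python's standard library already shows what rational-bignum arithmetic looks like when it's available on demand (though not as the default the comment above asks for):

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding, ever.
x = Fraction(1, 10) + Fraction(2, 10)
print(x)                       # 3/10
print(x == Fraction(3, 10))    # True

# The same computation in hardware floats drifts:
print(0.1 + 0.2 == 0.3)        # False

# Rationals still can't express sqrt(2) exactly; that needs
# symbolic arithmetic, as noted above.
```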

I'm Reminded...

... of a footnote in one of SICP's early chapters in which they acknowledge that a claim in the body text that they use "simple arithmetic" is a bald-faced lie. :-)

But think of the converse

How many computer people act as if they know so much about other fields. Just look at Slashdot comments. So. Much. Crap.

There's also the Multidimensional Arrays thread where Mark Evans didn't see the connection between functions and arrays. Now, having taken real analysis, the connection is crystal clear to me. Then there are misunderstandings about what mathematics is about and what functions are. I didn't know until I took a course in abstract algebra, and it cleared up so many things. It made the distinction between a polynomial and a function absolutely clear.

Back to the non-programmers not knowing what's going on: programming can require excessive knowledge about how computers work. That's what expressiveness is trying to get at. Imagine having to build a car every time you wanted to drive somewhere. That's why people use "slow" languages, or "slow" libraries. Metaprogramming is rarely exercised, compilers are treated as glaciers, and programmers complain when other people don't understand what's going on.

rational bignums as the default

Couldn't agree more! Floating-point math is a useful but very quirky engineering tool and ought not be in such plain view of ordinary users. I'd definitely like to see, in future languages, data types for integers, natural numbers, bounded natural numbers, and rational numbers that are in the obvious subtype relationship: 0..0 <: 0..1 <: 0..n <: nat <: int <: rat -- and that obey all of the laws of arithmetic. You know, those laws the ancient Greeks invented a few millennia ago, not the fixed-width twos-complement approximation that CPU's use internally.

My experience is that this can be done without significant performance compromises. For example, in Unreal, about 95% of the occurrences of integer math are used for indexing into arrays; the bounded natural number types can always be safely used there and fit into traditional operations. The key realization here is that any number that can safely index into an array that exists in memory can be stuffed losslessly into the machine word size without sacrificing performance in all reasonable cases where real bignums aren't required. This is a decision a compiler ought to make rather than forcing programmers to decide, as C# does, "Which one of the language's 12(!) built-in numeric data types do I want to use in THIS situation?"

Bounded natural numbers?

What do you mean by "bounded natural numbers"? Do you mean twos-complement 32-bit ints, or the groups of 0..n-1 with arithmetic mod n? I have long wished for the latter.

Sid Meier's trick was to divide 2^32 by n and use that as ONE, and so on, letting the twos-complementedness mod everything for him. (Only works for powers of 2, though, and don't index your array with ONE!) A cute trick.

Modulo mod

Do you mean twos-complement 32-bit ints, or the groups of 0..n-1 with arithmetic mod n?
I hope Tim is accustomed to other people interpreting his posts. My understanding was that he meant dependent types. And probably without mod: if x:0..n and y:0..m then (x+y):(0..n+m).

I think the whole point of dependent types is being greedy about information, not letting it perish, or be diluted.

Could you elaborate about ONE? I am not sure I understood the idea.

"Could you elaborate about ONE? I am not sure I understood the idea."

Speaking of interpreting other people's posts... I'm assuming he means something like this:

Suppose you want to represent arithmetic modulo n. We would like to do this by mapping to two's complement 32-bit integers. So let's use the following mapping: the modulo-n integer k (0 <= k < n) gets mapped onto the two's complement integer k - 2^31.

It's probably easier to see why it works if we choose a smaller word size, e.g. 3 bits. In this case we have

-4  100
-3  101
-2  110
-1  111
 0  000
 1  001
 2  010
 3  011
Consider the simplest case of modulo-2^3 arithmetic where we map 0 onto -4 = 100, 1 onto -3 = 101, and so on. To verify that the isomorphism works out in this case, we simply have to verify that two's complement arithmetic "wraps around" in the natural way. I think this is proved in pretty much all books that discuss two's complement arithmetic; this is essentially the property that makes "temporary overflow" not screw everything up in two's complement arithmetic.

Note that for addition and subtraction it is not a problem that our modulo-2^3 zero does not get mapped onto the two's complement zero. We can just account for this by offsetting appropriately when converting to and from the representation.

You can see that it works in the general case of modulo-n arithmetic by appealing to the simple case above.

I'd be interested if the original poster could elaborate on how this works out for multiplication!

"Note that for addition and s

"Note that for addition and subtraction it is not a problem that our modulo-2^3 zero does not get mapped onto the two's complement zero. We can just account for this by offsetting appropriately when converting to and from the representation."

This is wrong, and in fact the mapping I described does not work as advertised. Instead you want to associate a modulo-n integer k with k * (2^3 / n) calculated in UNSIGNED arithmetic, then treat the resulting bit pattern as a two's complement SIGNED integer and do all further calculations with two's complement arithmetic. Converting back is just a matter of inverting that formula.
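At the full 32-bit width, the corrected mapping is k * (2^32 / n). A Python sketch (with explicit masking to emulate the hardware wraparound, and assuming n is a power of two as noted elsewhere in the thread):

```python
BITS = 32
MASK = (1 << BITS) - 1   # 0xFFFFFFFF: emulate 32-bit wraparound

def encode(k, n):
    """Map the modulo-n integer k onto k * (2^32 / n),
    computed in unsigned 32-bit arithmetic."""
    return (k * ((1 << BITS) // n)) & MASK

def decode(v, n):
    """Invert the formula: recover k from the bit pattern."""
    return v // ((1 << BITS) // n)

n = 4
a, b = encode(3, n), encode(2, n)    # 3 and 2 (mod 4)
s = (a + b) & MASK                   # the 32-bit add wraps for free
print(decode(s, n))                  # 1, i.e. (3 + 2) mod 4
```

In C the masking is unnecessary: unsigned 32-bit addition wraps by definition, which is the whole point of the trick.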

Could you elaborate about ONE? I am not sure I understood the idea.
Sure. The idea is to have something you want to be intentionally cyclic, like compass directions:

NORTH      = 0x00000000
EAST       = 0x40000000
SOUTH      = 0x80000000
WEST       = 0xC0000000
TURN_RIGHT = EAST   // This is like 1.
TURN_LEFT  = WEST   // This is like -1.
NO_TURN    = NORTH  // This is like 0.

Now if you want your character to turn left three times, just add 3*TURN_LEFT.

You could generalize this to any power-of-2-sized set of "numbers", and call them ONE, TWO, etc. And ZERO is always just regular integer 0.

I hope this also answers the question of how you multiply... one of the numbers needs to be a normal integer. Sounds a little messy, but works pretty well in practice, if you're stuck in C. But how nice it would be to have a language which would do this for me (or at least let me define such a thing nicely), and not just for powers of 2.
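The compass example above can be sketched in Python, with an explicit mask standing in for the 32-bit wraparound that C's unsigned arithmetic provides for free:

```python
MASK = 0xFFFFFFFF  # emulate 32-bit wraparound; C gets this for free

NORTH, EAST, SOUTH, WEST = 0x00000000, 0x40000000, 0x80000000, 0xC0000000
TURN_RIGHT, TURN_LEFT, NO_TURN = EAST, WEST, NORTH  # like 1, -1, 0

facing = NORTH
facing = (facing + 3 * TURN_LEFT) & MASK   # turn left three times
print(facing == EAST)                      # True: three lefts = one right
```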

Incidentally, I do *not* see this as a subtype of the integers, because it's a different form of arithmetic: arithmetic mod some n. (But in the same light, I don't see how the naturals would be a subtype, since it's not closed under subtraction.)

The dependent type thing is interesting, though it seems less about "type" and more about "compiler optimization".

You can do modular arithmetic in Haskell

There's a paper by Oleg Kiselyov and Chung-chieh Shan about expressing configuration parameters via the type system. One of their examples is modular arithmetic. They use the type of a number to encode its modulus and then make every such type an instance of Num.
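A runtime analogue of that idea can be sketched in Python: here the modulus travels with the value rather than living in the type, as it does in the Haskell encoding, so mismatches are caught at run time instead of compile time.

```python
class Mod:
    """Runtime analogue of a modular-arithmetic number type:
    the modulus is carried in the value, not the type."""
    def __init__(self, value, modulus):
        self.modulus = modulus
        self.value = value % modulus

    def __add__(self, other):
        assert self.modulus == other.modulus, "mismatched moduli"
        return Mod(self.value + other.value, self.modulus)

    def __mul__(self, other):
        assert self.modulus == other.modulus, "mismatched moduli"
        return Mod(self.value * other.value, self.modulus)

    def __eq__(self, other):
        return (self.modulus, self.value) == (other.modulus, other.value)

    def __repr__(self):
        return f"{self.value} (mod {self.modulus})"

print(Mod(5, 7) + Mod(4, 7))   # 2 (mod 7)
print(Mod(3, 7) * Mod(5, 7))   # 1 (mod 7)
```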

Level of Abstraction

Tim Sweeney: You know, those laws the ancient Greeks invented a few millennia ago, not the fixed-width twos-complement approximation that CPU's use internally.

Heresy! Next you'll be saying that we should use high-level languages instead of quasi-portable assemblers!

Besides, the universe is a two's-complement machine. :-)