Patterns of Integer Usage

Many embedded systems programmers use C and C++ to develop very large and complex pieces of software, yet they are often never introduced to the higher-level language constructs that could make that software more reliable. I recently wrote an article that introduces some ideas from other languages and shows their benefits. The article is available as:

Patterns of Integer Usage

The article was written for those who rarely look past C/C++/C# as implementation languages (certainly not this crowd). However, I would appreciate any criticisms or critiques this community may offer.


Good article

It reminds me that we can't have world peace until we at least get the integers right :) I have often pined for a language which can prove all my ints won't overflow -- or take the few cases where they could and dynamically convert to a larger type.

Another area in numerics I don't often see in PLs is unit annotations (km*km/s = km^2/s). A couple of dialects of Scheme have this, as do a couple of obscure scripting languages, but it would have saved me a few gray-ish hairs.

On bigints and units

Actually, when I was originally contemplating writing the article, I was thinking of a language where all ints were bigints. The compiler could then statically decide whether it could narrow those down to natively sized ints when it can prove that is their only use. (With the restriction that container objects take native-sized ints only.) That was until I read Mr. Sweeney's presentation and learned about type algebras.

I have often contemplated having units in the programming language. I even went so far as to write a whole library in C++ that asserted physical relationships. In "Structure and Interpretation of Classical Mechanics" Scheme's normal algebraic operators are all overloaded to include units.


I'll have to check out the presentation -- I've always been impressed with Tim's insight into the sprawling code base that results from game development.

Certainly one of the more fun things I've done with integers in C++ is a template that modified the endianness of ints via implicit casting :) Funnily enough, it worked, and didn't hurt performance as much as you'd expect!

Type vs representation

I have often pined for a language which can prove all my ints won't overflow -- or take the few cases where they could and dynamically convert to a larger type.

The latter exists, and is in fact not an uncommon practice in typed languages that provide bigints, or that even make them their default integer type - except that you seem to be mistaking type for representation.

If something has type bigint that does not mean that it is necessarily represented by a complex data structure. An efficient bigint type will use a simple unboxed scalar representation wherever possible, and switch to a fat one only when necessary. But this is an implementation detail and hasn't much to do with typing.

As regards auto-conversion to bigints

Both Ruby and Python dynamically convert ints that would overflow into bigints.


As far as I know, Python and Ruby catch the overflow signal and perform the operation again, this time with big ints. This is a little different from statically inferring that the operation would overflow and preventing that from ever occurring.

Is it?

What observable difference are you referring to?

If it can be proven

If it can be proven statically that something will not overflow you do not need to put in code for runtime checks. This saves both time and space.

The possibility that things have to be computed twice (i.e., doing the check at runtime) makes it hard to give a tight worst-case execution time, something which might not be that important for the languages mentioned in this thread.

Interesting. A while back I

Interesting. A while back I blogged on something related to your comments on ints as indexes.