Numbers and how to represent them.

I'm implementing basic mathematics for a language that tries to do the Right Thing as much as possible, rather than get approximate answers Fast. Here are some design choices I'd like feedback on.

1. "Ordinary" numbers have a limited-size representation that will NOT consume all memory when, say, repeatedly multiplying by a fraction. Ordinary numbers have values in the set (A/B) * 2^C * 10^D, where A, C, and D are integers and B is a natural number. Additionally, one bit (the top bit of the bytes otherwise devoted to B) keeps track of whether a number is exact or approximate.
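A minimal sketch of that layout, in Python for illustration only (the field names and the unpacked form are my invention; the real representation would pack A, B, C, D, and the exactness bit into fixed-size fields):

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Ordinary:
    # value = (A/B) * 2**C * 10**D, plus an exactness flag (the bit
    # stolen from B's top byte in the packed representation).
    a: int        # signed numerator A
    b: int        # natural-number denominator B (> 0)
    c: int        # binary exponent C
    d: int        # decimal exponent D
    exact: bool   # True = exact, False = approximate

    def value(self) -> Fraction:
        # Exact rational value of the representation.
        return Fraction(self.a, self.b) * Fraction(2) ** self.c * Fraction(10) ** self.d

# 0.1 is exactly A=1, B=1, C=0, D=-1:
assert Ordinary(1, 1, 0, -1, True).value() == Fraction(1, 10)
```

Note that negating A, or swapping A and B, stays inside the format, which is the closure-under-inverses property mentioned below.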

2. Because the limited-size representation cannot represent all numbers exactly, inexact results (roundoffs) are permitted for any operation on ordinary numbers whose result cannot be exactly determined or represented in that representation. In that case the result is marked as approximate, and contagion rules ensure that further calculations depending on its value are also marked as approximate.
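The roundoff-plus-contagion rule can be sketched for one operation. This is a hypothetical stand-in: `LIMIT` fakes the fixed-size capacity (real code would check A, B, C, D against their field widths), and the roundoff itself is deliberately crude:

```python
from fractions import Fraction

LIMIT = 10**6  # stand-in for the fixed-size representation's capacity

def mul(x, y):
    # x and y are (Fraction value, exact flag) pairs.
    (xv, xe), (yv, ye) = x, y
    v = xv * yv
    exact = xe and ye  # contagion: any approximate input taints the result
    if abs(v.numerator) > LIMIT or v.denominator > LIMIT:
        # Result does not fit: round off and mark approximate.
        v = Fraction(round(v * LIMIT), LIMIT)
        exact = False
    return (v, exact)

# Exact * exact that fits stays exact:
assert mul((Fraction(1, 3), True), (Fraction(3), True)) == (Fraction(1), True)
# Anything depending on an approximate input is approximate:
assert mul((Fraction(2, 3), False), (Fraction(3), True))[1] is False
```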

3. The representation of ordinary numbers is intended to permit exact representation of all numbers people are likely to write in source code (scientific notation, decimal fractions, and rational numbers of reasonable size), of IEEE reals that may be imported from binary sources, and of the results of many calculations on such numbers. The form described above exactly represents a "reasonable slice" of all of those kinds of numbers in a uniform format, and it has further good properties, such as the additive and multiplicative inverses of all representable values being representable themselves.
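The IEEE claim is easy to check: every finite IEEE double is m * 2^e for integers m and e, so it fits the (A/B) * 2^C * 10^D form with B = 1 and D = 0. A small demonstration (the function name is mine):

```python
from fractions import Fraction

def ieee_to_abcd(x: float):
    # float.as_integer_ratio gives the exact rational value of x;
    # for a finite double the denominator is always a power of two.
    num, den = x.as_integer_ratio()
    c = -(den.bit_length() - 1)  # den == 2**(-c)
    return (num, 1, c, 0)        # (A, B, C, D)

# 0.1 as an IEEE double is exactly 3602879701896397 * 2**-55:
a, b, c, d = ieee_to_abcd(0.1)
assert (a, b, c, d) == (3602879701896397, 1, -55, 0)
assert Fraction(a, b) * Fraction(2) ** c == Fraction(0.1)
```

Meanwhile a decimal literal like 0.1 itself is simply A=1, D=-1, so both the number the programmer wrote and the double a binary source delivers are exactly representable, even though they differ in value.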

4. All numbers found in source code are considered exact if they are exactly representable and not specifically marked as approximate. If a number encountered in source code is neither exactly representable nor syntactically marked as approximate, a syntax warning is issued and a nearby approximate value is used instead.

5. "Bignums" are exact values syntactically distinguished from ordinary numbers; they are always exact. Any operation on bignums must give the correct, exact answer as a bignum if sufficient memory is available to calculate and represent that answer, or else fail with an insufficient-memory error. Operations on bignums whose exact results cannot be determined yield approximate ordinary numbers instead.
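A sketch of rule 5 for one operation, using Python's unbounded ints as stand-in bignums: square root stays an exact bignum when the answer is exactly determinable (a perfect square), and falls back to an approximate ordinary number otherwise:

```python
import math

def bignum_sqrt(n: int):
    # Returns (value, exact flag).
    r = math.isqrt(n)
    if r * r == n:
        return (r, True)           # exact bignum result
    return (math.sqrt(n), False)   # approximate ordinary-number fallback

assert bignum_sqrt(144) == (12, True)
assert bignum_sqrt(2)[1] is False
```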

6. Converting between exact and inexact never changes a number's value; exact and inexact ordinary numbers have the same range and precision. Converting from an exact ordinary number to a bignum never changes the value either, and the set of values representable as exact ordinary numbers is a proper subset of the set of values representable as bignums. Conversion from bignum to ordinary number yields an exact number if a matching value exists among the ordinary numbers; otherwise it yields an approximate ordinary number if the bignum is within the range of ordinary numbers, or a range error or infinity if it is not.
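The bignum-to-ordinary direction can be sketched like so. Everything here is a stand-in: `MAX_A` fakes the largest magnitude the fixed-size fields can hold, a Python float fakes the approximate ordinary number, and I chose the range-error branch over returning an infinity:

```python
MAX_A = 2**63 - 1  # stand-in for the ordinary representation's capacity

class RangeError(Exception):
    pass

def bignum_to_ordinary(n: int):
    # Returns (value, exact flag).
    if abs(n) <= MAX_A:
        return (n, True)           # exact: the value round-trips unchanged
    try:
        approx = float(n)          # stand-in for an approximate ordinary
    except OverflowError:
        raise RangeError("bignum outside ordinary-number range")
    return (approx, False)         # value preserved only approximately

assert bignum_to_ordinary(42) == (42, True)
assert bignum_to_ordinary(10**30)[1] is False
```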

Is this a reasonable baseline for building numbers that behave a bit better, or at least a bit more like most people expect, more of the time than most computer representations of numbers do?