# Computational Precision in Advanced Revelation

| Published By | Date | Version | Knowledge Level | Keywords |
|---|---|---|---|---|
| Revelation Technologies | 19 SEP 1991 | 2.1X | EXPERT | PRECISION, FLOATING, POINT |

The system of real numbers you use for pencil-and-paper calculations is conceptually infinite and continuous. There are no upper or lower limits to the magnitude of the numbers you can employ in a calculation, or to the precision (number of significant digits) you can represent. For any real number, there is always an infinity of numbers both larger and smaller. There is also an infinity of numbers between (i.e., with more significant digits than) any two real numbers. For example, between 2.5 and 2.6 are 2.51, 2.501, 2.5001, 2.50001, and 2.59, 2.599, 2.5999, 2.59999, etc.

Ideally, a computer would be able to operate on the entire real number system. In practice this is not possible. Computers, no matter how large, ultimately have fixed-size registers and memories that limit the system of numbers that can be accommodated. These limitations restrict both the range and the precision of numbers. The result is a set of numbers that is very large but finite and discrete, rather than infinite and continuous. This computerized set is a subset of the real numbers, designed to form a useful approximation of the real number system.
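
The finite, discrete character of machine numbers is easy to observe directly. The sketch below uses Python, whose floats are IEEE 754 64-bit doubles (a different width than the 80-bit format discussed later in this bulletin, but the same principle):

```python
import math
import sys

# Python floats are 64-bit IEEE doubles: a finite, discrete subset of
# the reals with a fixed range and precision.
print(sys.float_info.max)      # largest representable magnitude
print(sys.float_info.epsilon)  # gap between 1.0 and the next float up

# Between two adjacent floats there is nothing: the set is discrete.
x = 2.5
next_up = math.nextafter(x, 3.0)   # the very next representable number
print(next_up - x)                 # a small but nonzero gap
```

Unlike the real numbers, there is no representable value between `x` and `next_up`.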

### A Simple Example

This approximation is readily demonstrated in our everyday life using the following simple examples of decimal arithmetic with the values two-thirds and one-third.

| | Truncation | Rounding |
|---|---|---|
| Addition | 0.6666 + 0.3333 = 0.9999 | 0.6667 + 0.3333 = 1.0000 |
| Subtraction | 0.6666 - 0.3333 = 0.3333 | 0.6667 - 0.3333 = 0.3334 |

Notice that the accuracy of the results depends on two factors: the presence or absence of rounding when approximating a value (two-thirds in our example), and the operation being performed (addition, subtraction, etc.).
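
The decimal example above can be reproduced with Python's `decimal` module, whose `quantize` method lets us choose truncation (`ROUND_DOWN`) or rounding (`ROUND_HALF_UP`) explicitly:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

# Approximate two-thirds to four decimal places, by truncation and by rounding.
two_thirds = Decimal(2) / Decimal(3)   # 0.666666... to the context precision
third = Decimal("0.3333")              # one-third is 0.3333 either way

trunc = two_thirds.quantize(Decimal("0.0001"), rounding=ROUND_DOWN)      # 0.6666
round_ = two_thirds.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)  # 0.6667

print(trunc + third)   # 0.9999 -- addition is off under truncation
print(round_ + third)  # 1.0000 -- addition is exact under rounding
print(trunc - third)   # 0.3333 -- subtraction is exact under truncation
print(round_ - third)  # 0.3334 -- subtraction is off under rounding
```

As in the table, neither choice is accurate for both operations.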

### The Binary Situation

The same situation occurs in the binary arithmetic used in computers. As with the example above, there are certain numbers which cannot be precisely represented and which require a choice as to how they will be approximated. Of critical interest is the fact that a fraction which requires no approximation in decimal generally does require approximation in binary. For example, the decimal value 0.1 converts to the non-terminating binary value 0.00011001100110011 …. This requires that a rounded or truncated approximation be used when storing it in memory.
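
Python can expose this directly: constructing a `Decimal` from a float reveals the exact binary approximation actually stored for 0.1.

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so the stored double is a
# rounded approximation. Decimal(float) shows the exact stored value.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

print(0.1 + 0.2 == 0.3)   # False: the accumulated approximations differ
```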

### Computer Implementations

Whenever approximations are required in a computer, a special form of data representation and calculation is used. This special form is referred to as Floating Point.

Floating Point allows a great many, though not all, real numbers across a large range to be represented, and is analogous to the scientific notation you may have seen on calculators and in physics or chemistry textbooks.

Rather than exactly representing a number as its complete string of digits, however many, Floating Point breaks a number into two parts: an exponent (sometimes called a characteristic) and its most significant digits (sometimes called either a mantissa or a significand). Floating Point differs from scientific notation primarily in its use of binary digits (rather than decimal ones) and in its need to comply with each computer's fixed-size registers and memories, which restricts the number of digits that can be used.
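
The exponent/significand decomposition can be demonstrated with Python's `math.frexp`, which splits a float into exactly those two parts:

```python
import math

# Decompose a float into its significand and binary exponent:
# x == m * 2**e, with 0.5 <= |m| < 1 for nonzero x.
m, e = math.frexp(39.95)
print(m, e)                # significand and exponent
print(m * 2**e == 39.95)   # True: the decomposition loses nothing
```

Note that `m` here is the stored binary approximation of 39.95's significand; the decomposition itself is exact.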

### IBM Personal Computers

In the case of IBM PCs, the maximum precision form of Floating Point uses a total of 80 binary digits (bits) to represent numbers. One of these is used for the sign, fifteen for the exponent, and 64 for the significand. This provides PC users with a range of representable magnitudes from about ±3.4 x 10^-4932 to about ±1.2 x 10^4932, with a precision of about nineteen decimal digits.
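
The same sign/exponent/significand layout can be inspected with Python's `struct` module. The sketch below uses the 64-bit double format (1 sign bit, 11 exponent bits, 52 fraction bits), since that is what Python floats use; the 80-bit extended format described above has the same field structure with wider exponent and significand fields:

```python
import struct

# Reinterpret a double's 8 bytes as a 64-bit unsigned integer.
bits = struct.unpack(">Q", struct.pack(">d", -39.95))[0]

sign     = bits >> 63               # 1 sign bit
exponent = (bits >> 52) & 0x7FF     # 11 exponent bits, biased by 1023
fraction = bits & ((1 << 52) - 1)   # 52 fraction bits (implicit leading 1)

print(sign)              # 1: the number is negative
print(exponent - 1023)   # 5: -39.95 is -1.xxx * 2**5
```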

### Advanced Revelation's Use of Floating Point

All PC versions of Advanced Revelation use and store 80 bit Floating Point numbers whenever any of the following conditions are satisfied:

- A Numeric Processor Extension (i.e., a mathematics co-processor such as an 8087, 80x87) is available in the PC hardware.
- An integer constant with magnitude greater than 32,767 is compiled by R/BASIC.
- A number with a fractional value (e.g., 39.95) is used.
- Addition, subtraction or multiplication uses or generates a value outside the 32-bit integer range (less than about -2 x 10^9 or greater than about +2 x 10^9).
- Whenever division is performed.
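
Python draws a comparable line for the last of these conditions: true division always produces a floating point result, even when both operands are integers and the quotient is mathematically exact.

```python
# Like Advanced Revelation, Python performs true division in floating
# point regardless of the operands.
q = 6 / 3
print(q, type(q))   # 2.0 <class 'float'>

# Integer (floor) division stays in integer arithmetic.
print(6 // 3, type(6 // 3))   # 2 <class 'int'>
```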

### Advanced Revelation's Accommodations to Floating Point

Since its inception, Advanced Revelation has, under certain circumstances, implicitly adjusted final arithmetic results in an attempt to compensate for any Floating Point approximation-induced discrepancies. The types and magnitudes of these adjustments have varied with different versions:

### Revelation and Advanced Revelation 1.X

- Number-to-string conversions rounded, and displayed in Fixed Point format as a maximum of 14 whole-number digits plus a maximum of 4 fractional digits.
- Equal and Not-Equal comparison tests performed to a maximum of 4 rounded fractional digits (i.e., to the nearest one ten-thousandth).
- Other comparison tests performed on the entire number.

### Advanced Revelation 2.0

- Number-to-string conversions used the value as stored in memory (unrounded), and displayed it in either Fixed Point or Floating Point format with any number of whole and fractional digits to a total of 18.
- Equal and Not-Equal comparison tests performed to a maximum of 4 rounded fractional digits (i.e., to the nearest one ten-thousandth).
- Other comparison tests performed on the entire number.

### Advanced Revelation 2.1

- Number-to-string conversions rounded, and displayed in either Fixed Point or Floating Point format with any number of whole and fractional digits to a total of 15.
- Equal and Not-Equal comparison tests performed to a maximum of 4 rounded fractional digits (i.e., to the nearest one ten-thousandth).
- Other comparison tests performed on the entire number.
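
The four-fractional-digit Equal test common to all three versions can be sketched as follows. The helper `ar_equal` is hypothetical, and Python's built-in `round` uses round-half-even, which may differ from Advanced Revelation's rounding mode at exact half-way cases; the sketch only illustrates the comparison-to-the-nearest-ten-thousandth idea:

```python
def ar_equal(a: float, b: float) -> bool:
    """Hypothetical sketch of an Equal test performed to a maximum of
    four rounded fractional digits (the nearest one ten-thousandth)."""
    return round(a, 4) == round(b, 4)

print(ar_equal(0.12341, 0.12344))   # True: differences past 4 digits ignored
print(ar_equal(0.1234, 0.1236))     # False: a real difference within 4 digits
```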

Unfortunately, depending on the actual values used and the operations performed, even these implicit adjustments can leave results which are not precisely accurate. The worst cases most frequently occur during subtraction of numbers close in value (e.g., 30,000.01 - 30,000.00). As mathematicians observe, no matter how hard you try, there will always be some numbers that misbehave for any given set of values, operations, and rounding choices.
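
This worst case is easy to reproduce. The sketch below uses Python's 64-bit doubles rather than the 80-bit format, so the exact digits differ from those shown later in this bulletin, but the effect is identical: the stored approximation of 30000.01 is faithfully exposed by the subtraction.

```python
# Subtracting nearby values exposes the approximation error already
# present in the operands: 30000.01 is not exactly representable.
diff = 30000.01 - 30000.00
print(diff)          # close to, but not exactly, 0.01
print(diff == 0.01)  # False
```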

See again the Simple Example of decimal arithmetic earlier in this bulletin. Notice that no matter whether truncation or rounding is chosen, either addition or subtraction, but not both, will give a result requiring adjustment. Even if an appropriate compensating algorithm for calculations based on thirds is defined, you can always change the example to be based on sixths, ninths, fifteenths, ninety-ninths, etc. in such a way that the thirds-based algorithm will fail. The general principle that there is an infinity of numbers between any two real numbers also means that there is an infinity of possible compensating algorithms. Since Advanced Revelation has no knowledge of the dynamic operation sequences or values being processed by any application program, it is impossible to implement a single implicit compensating algorithm that will work for every situation.

### Explicit Control of Floating Point Using Advanced Revelation

Advanced Revelation does provide an explicit method for application programs to compensate for approximation-induced discrepancies: the ICONV and OCONV Masked Decimal conversions. These functions can be invoked to, among other things, convert results to a specified number of fractional digits, with or without rounding.

For example, consider the R/BASIC program

```
VAL_A = 30000.01
VAL_B = 30000.00
RESULT_IMP = VAL_A - VAL_B
REV_INTERNAL = ICONV(RESULT_IMP, "MD2")
RESULT_EXP = OCONV(REV_INTERNAL, "MD2")
PRINT RESULT_IMP
PRINT RESULT_EXP
```

whose implicitly compensated and explicitly compensated results under Advanced Revelation 2.1 are

```
9.99999999999979E-3
.01
```

For reference, the uncompensated result is 0.00999999999999978684, which is what would have been displayed using the non-adjusting Advanced Revelation 2.0. The more restrictive Revelation and Advanced Revelation 1.X would have displayed .01, but would have suffered a loss of precision had the result required more than four fractional digits.
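
A rough Python analogue of the ICONV/OCONV "MD2" conversion pair can be sketched with the `decimal` module: round the implicit result to two fractional digits before using or displaying it. (This is an illustrative analogue, not Revelation's implementation, and it uses 64-bit doubles rather than the 80-bit format.)

```python
from decimal import Decimal, ROUND_HALF_UP

result_imp = 30000.01 - 30000.00   # implicit result: not exactly 0.01

# Explicit compensation: convert to two fractional digits with rounding,
# analogous in spirit to ICONV/OCONV with an "MD2" mask.
result_exp = Decimal(repr(result_imp)).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP)

print(result_imp)   # the tiny discrepancy is visible
print(result_exp)   # 0.01
```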

### Future Directions in Advanced Revelation

Revelation Technologies is analyzing several areas for potential changes to Advanced Revelation's present computational processes. Among these are:

- The criteria under which Floating Point is selected for calculations rather than integer.
- User-specifiable implicit rounding or truncation precision for number-to-string conversions.
- The number of fractional digits tested by Equal and Not-Equal comparisons, and whether they should first be rounded or not.
- Use of strength reductions in special situations (e.g., performing multiplications for exponentiation to small integer powers rather than using logarithms).
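
The strength-reduction idea in the last item can be illustrated in Python: raising to a small integer power by repeated multiplication avoids the rounding introduced by going through logarithms.

```python
import math

x = 2.0
via_mult = x * x * x                   # repeated multiplication: exact here
via_logs = math.exp(3 * math.log(x))   # log/exp route: two transcendental
                                       # approximations are involved

print(via_mult)   # 8.0 exactly
print(via_logs)   # very close to 8, but may differ in the last bits
```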

Intertwined issues being considered when analyzing these areas include:

- Backward compatibility for existing 2.X applications (i.e., any changes should not impact the functioning of current programs).
- The possible impact of developer control of the implicit compensating algorithm, particularly in the cases of different algorithms being used on different machines in a network environment, and for Environmental Bonding™.
- The accuracy and potential overflow/underflow of computations where Floating Point may no longer be employed.
- The definitions of Equal and Not-Equal in an approximating environment.
- Program execution speeds of various processor configurations with and without Numeric Processor Extensions.