### Accuracy of double representation for decimals.

Overview
There is some debate as to whether BigDecimal or double is best for monetary values. The simplest answer is: if in doubt, choose BigDecimal, as it is widely seen as the safer option.

When doubles are used, the common approach is to round the result before displaying it. However, how large can a value be before rounding no longer hides the error?
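For example, rounding to two decimal places before display is typically done like this (a minimal sketch; `roundToCents` is a name chosen here for illustration):

```java
public class RoundDemo {
    // Round a double to two decimal places, the usual "round before display" step.
    static double roundToCents(double value) {
        return Math.round(value * 100) / 100.0;
    }

    public static void main(String[] args) {
        double price = 0.1 + 0.2;
        System.out.println(price);               // prints 0.30000000000000004
        System.out.println(roundToCents(price)); // prints 0.3
    }
}
```

For small values this recovers the exact decimal; the question is at what magnitude it stops working.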

Approach
I have written a program to find the largest error for a number with two decimal places just above and just below each power of two. Powers of two are used because each larger power consumes one more bit of the mantissa, leaving fewer bits for the fractional part and so increasing the error.

A rounding error is too large once it causes the value to round incorrectly. With two decimal places, a representation error of 0.005 or more can produce the wrong result even after rounding.
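As a sanity check, the representation error of a typical small value is tiny compared with the 0.005 threshold (a minimal sketch; `new BigDecimal(double)` exposes the exact binary value the double stores):

```java
import java.math.BigDecimal;

public class ErrorThreshold {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact value the double actually stores.
        BigDecimal stored = new BigDecimal(0.1);
        BigDecimal exact = new BigDecimal("0.1");
        BigDecimal err = stored.subtract(exact).abs();
        // The error is around 5.5e-18, far below 0.005, so rounding hides it.
        System.out.println(err.compareTo(new BigDecimal("0.005")) < 0); // prints true
    }
}
```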

Searching for the largest rounding error
In this program, I have constructed each number with two decimal places both from a String and from a double, using BigDecimal to measure the difference exactly.

```java
import java.math.BigDecimal;

public class DecimalError {
    private static final long TRILLION = 1000L * 1000 * 1000 * 1000;

    public static void main(String... args) {
        printDecimalError(0L);
        // Test the values just below and just above each power of two.
        for (long i = 1; i < 200 * TRILLION; i <<= 1) {
            printDecimalError(i - 1);
            printDecimalError(i);
        }
    }

    private static void printDecimalError(long base) {
        BigDecimal biggestErrValue = null;
        BigDecimal biggestErr = BigDecimal.ZERO;
        // Try every two-decimal fraction from base.00 to base.99.
        for (int d = 0; d < 100; d++) {
            BigDecimal bd1 = fromString(base, d);
            BigDecimal bd2 = fromDouble(base, d);
            BigDecimal err = bd1.subtract(bd2).abs();
            if (err.compareTo(biggestErr) > 0) {
                biggestErr = err;
                biggestErrValue = bd1;
            }
        }
        System.out.println("The largest error was " + biggestErr + " for " + biggestErrValue);
    }

    // The exact decimal value, e.g. base.07 for d = 7.
    private static BigDecimal fromString(long base, int decimal) {
        return new BigDecimal(base + "." + ("" + (500 + decimal)).substring(1));
    }

    // The same value as the nearest double represents it.
    private static BigDecimal fromDouble(long base, int decimal) {
        return new BigDecimal(base + (double) decimal / 100);
    }
}
```

When is the rounding error too large for rounding to fix?
The end of the program's output reads:

```
The largest error was 0.001875 for 35184372088831.08
The largest error was 0.00375 for 35184372088832.09
The largest error was 0.00375 for 70368744177663.09
The largest error was 0.0075 for 70368744177664.07
The largest error was 0.0075 for 140737488355327.07
The largest error was 0.015 for 140737488355328.11
```

In this example, the value 70368744177663.09 would round correctly; however, 70368744177664.07 would not, resulting in an error of 0.01. This gets worse for values like 140737488355328.11, where the error after rounding will be 0.02.
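This can be confirmed directly: rounding the double nearest to 70368744177664.07 to two decimal places lands on the wrong cent (a minimal sketch using `BigDecimal.setScale`):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingFailure {
    public static void main(String[] args) {
        // Near 2^46 a double's precision is 1/64 = 0.015625, so the nearest
        // double to 70368744177664.07 is actually 70368744177664.0625.
        double d = 70368744177664.07;
        BigDecimal rounded = new BigDecimal(d).setScale(2, RoundingMode.HALF_UP);
        System.out.println(rounded); // prints 70368744177664.06, off by 0.01
    }
}
```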

Is that all we need to worry about?
Unfortunately not. There is an error in the double representation, but there can also be an error in certain calculations, which can be much larger.

There are two things which make these calculation errors less of an issue than they could be.

• Most complex operations, like sin and exp, are not used in precise monetary calculations. They are used in estimating future values of securities, where 8 digits of accuracy is usually more than enough. (In reality, even two digits of accuracy is often difficult to achieve.)
• Many computers use Intel/AMD FPUs which use 80-bit registers and round the result to a 64-bit representation. This minimises the rounding error of calculations, but relying on this may not be portable across platforms.

In Summary
For values with two decimal places, numbers over 70 trillion may have an error even after rounding. For such large numbers, doubles are only suitable for estimates, as they will not be cent-accurate.

However, for numbers significantly smaller than 70 trillion, using a double with appropriate rounding will produce the correct value.
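For instance, even after a long chain of additions at this scale, rounding recovers the exact decimal (a minimal sketch; the loop and values are illustrative):

```java
public class SafeSum {
    public static void main(String[] args) {
        // Add one cent a thousand times; the raw double drifts slightly from 10.0.
        double total = 0;
        for (int i = 0; i < 1000; i++)
            total += 0.01;
        System.out.println(total == 10.0);                   // prints false
        System.out.println(Math.round(total * 100) / 100.0); // prints 10.0
    }
}
```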