Why double Still Outperforms BigDecimal: A Decade-Long Performance Comparison

Overview

Many developers consider BigDecimal the go-to solution for handling money in Java. They often claim that replacing double with BigDecimal has fixed one or more bugs in their applications. However, I find this reasoning unconvincing. The issue may lie not with double but rather with how it was handled. Additionally, BigDecimal introduces significant overhead that may not justify its use.

When asked to improve the performance of a financial application, I know that if BigDecimal is involved, it will eventually need to be removed. While it may not be the largest performance bottleneck initially, as we optimise the system, BigDecimal often becomes one of the main culprits.

BigDecimal is Not an Improvement

BigDecimal comes with several drawbacks. Here are some of its key issues:

  • It has an unnatural syntax. The API is verbose and can be cumbersome to use.

  • It uses more memory. BigDecimal objects consume more memory compared to primitive types.

  • It creates more garbage, leading to more frequent garbage-collection work and pauses.

  • It is significantly slower for most operations, although there are exceptions.

The following JMH benchmark demonstrates two of the most prominent issues with BigDecimal: clarity and performance.

Code Comparison

The core task is to take an average of two values. Here’s how it looks when using double:

mp[i] = round6((ap[i] + bp[i]) / 2);

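The round6 helper is not defined in the snippet above. A minimal sketch, assuming it rounds half-up to six decimal places to match the BigDecimal version below, might look like this:

// A possible round6 helper: scale to six decimal places, round to the nearest
// whole number, then scale back. For positive prices this matches HALF_UP;
// note Math.round rounds halves towards positive infinity, so negative values differ slightly.
static double round6(double d) {
    return Math.round(d * 1e6) / 1e6;
}
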
Notice the need for rounding. Now, the same operation using BigDecimal requires much more verbose code:

mp2[i] = ap2[i].add(bp2[i])
     .divide(BigDecimal.valueOf(2), 6, RoundingMode.HALF_UP);

Does this give you different results? For the most part, no: double provides roughly 15 significant decimal digits of precision, which is more than enough for typical monetary values. If these prices needed 17 significant digits, BigDecimal might be more appropriate. However, the complexity it adds is unnecessary for most practical use cases, and it is a poor trade-off for the developer who has to maintain and comprehend the code.
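
As a quick, self-contained check (not part of the original benchmark, and using hypothetical prices), both approaches produce the same rounded mid-price for a typical six-decimal-place quote:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class MidPriceCheck {
    public static void main(String[] args) {
        double a = 1.234567, b = 1.234569;   // hypothetical bid/ask prices
        // double path: average, then round half-up to six decimal places
        double viaDouble = Math.round((a + b) / 2 * 1e6) / 1e6;
        // BigDecimal path: add, then divide with explicit scale and rounding
        BigDecimal viaBigDecimal = BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))
                .divide(BigDecimal.valueOf(2), 6, RoundingMode.HALF_UP);
        // Both are expected to print 1.234568
        System.out.println(viaDouble + " vs " + viaBigDecimal);
    }
}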

Performance

If you are going to accept extra coding overhead, it is usually in exchange for better performance. BigDecimal offers the overhead without the payoff: for simple arithmetic like this, it is slower as well as more verbose.

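To make the comparison concrete, here is a sketch of how such a JMH benchmark could be structured. The class name, array size and test data are illustrative; this is not the published MyBenchmark source.

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import java.math.BigDecimal;
import java.math.RoundingMode;

@State(Scope.Thread)
public class MidPriceBenchmark {
    static final int SIZE = 1024;   // illustrative array size
    double[] ap = new double[SIZE], bp = new double[SIZE], mp = new double[SIZE];
    BigDecimal[] ap2 = new BigDecimal[SIZE], bp2 = new BigDecimal[SIZE], mp2 = new BigDecimal[SIZE];

    @Setup
    public void setup() {
        for (int i = 0; i < SIZE; i++) {
            ap[i] = 1.234567 + i * 1e-6;    // hypothetical ask prices
            bp[i] = 1.234467 + i * 1e-6;    // hypothetical bid prices
            ap2[i] = BigDecimal.valueOf(ap[i]);
            bp2[i] = BigDecimal.valueOf(bp[i]);
        }
    }

    @Benchmark
    public double[] doubleMidPrice() {
        for (int i = 0; i < SIZE; i++)
            mp[i] = round6((ap[i] + bp[i]) / 2);
        return mp;
    }

    @Benchmark
    public BigDecimal[] bigDecimalMidPrice() {
        for (int i = 0; i < SIZE; i++)
            mp2[i] = ap2[i].add(bp2[i])
                    .divide(BigDecimal.valueOf(2), 6, RoundingMode.HALF_UP);
        return mp2;
    }

    static double round6(double d) {
        // Half-up rounding to six decimal places; see the helper sketch above.
        return Math.round(d * 1e6) / 1e6;
    }
}
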
The following JMH benchmark results show a significant performance difference between BigDecimal and double:

Running on a Ryzen 5950X on Linux

Benchmark                                          Mode  Cnt        Score        Error  Units
MyBenchmark.bigDecimalMidPriceDivide              thrpt   25    83467.627 ±    529.667  ops/s
MyBenchmark.bigDecimalMidPriceMultiply            thrpt   25    90053.410 ±    785.010  ops/s
MyBenchmark.bigDecimalMidPriceMultiplyWORounding  thrpt   25   114612.951 ±    963.940  ops/s
MyBenchmark.deltixDecimal64MidPrice               thrpt   25    63605.847 ±    434.017  ops/s
MyBenchmark.doubleMidPrice                        thrpt   25   855706.255 ±   3239.675  ops/s
MyBenchmark.doubleMidPriceWORounding              thrpt   25  9751458.388 ± 782845.714  ops/s

Running on an i7-1360P and Java 21

Benchmark                       Mode   Cnt       Score       Error  Units
MyBenchmark.bigDecimalMidPrice  thrpt    5   63179.538 ±  6211.832  ops/s
MyBenchmark.doubleMidPrice      thrpt    5  866728.730 ± 28798.456  ops/s

For comparison, this is a benchmark I ran ten years ago on an older machine:

Benchmark                        Mode  Samples       Score  Score Error  Units
MyBenchmark.bigDecimalMidPrice  thrpt       20   23638.568      590.094  ops/s
MyBenchmark.doubleMidPrice      thrpt       20  123208.083     2109.738  ops/s

As you can see, the double implementation outperforms the BigDecimal implementation by roughly a factor of ten on current hardware, and by more than five even a decade ago. Note: switching to double made a bigger difference than ten years of processor and JVM improvements.

NOTE: Rounding makes a big difference (roughly a factor of ten) for double, as the rounding involves a division.
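
The "WORounding" variants presumably just skip that final rounding step, e.g. for the double case:

mp[i] = (ap[i] + bp[i]) / 2;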

Conclusion

If you’re unsure about how to properly handle rounding with double, or if your project mandates the use of BigDecimal, then by all means, use BigDecimal. However, if you have a choice, don’t just assume that BigDecimal is the right way to go. The additional complexity and performance overhead may not be worth it in many cases.

The benchmark is available for you to run here: MyBenchmark.

Key Points

  • BigDecimal introduces significant overhead in terms of memory and performance.

  • For most monetary calculations, double offers sufficient precision and better performance.

  • Code using BigDecimal tends to be more verbose and harder to maintain.

  • Performance benchmarks consistently show double outperforming BigDecimal by a substantial margin.

  • Consider the trade-offs between precision, performance, and code complexity when choosing between double and BigDecimal.

AI Generated Simulation

Embedded Simulation: Calculations per Second

This simulation illustrates the difference in throughput when using double versus BigDecimal, based on pre-measured benchmark results. Adjust the parameters, start the timer, and watch the counts of completed operations update over time.


About the Author

As the CEO of Chronicle Software, Peter Lawrey leads the development of cutting-edge, low-latency solutions trusted by 8 out of the top 11 global investment banks. With decades of experience in the financial technology sector, he specialises in delivering ultra-efficient enabling technology that empowers businesses to handle massive volumes of data with unparalleled speed and reliability. Peter’s deep technical expertise and passion for sharing knowledge have established him as a thought leader and mentor in the Java and FinTech communities. Follow Peter on BlueSky or Mastodon.

References

  1. Oracle. Java Platform, Standard Edition API Documentation: BigDecimal Class. https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html

  2. Oracle. Java Platform, Standard Edition API Documentation: Double Class. https://docs.oracle.com/javase/8/docs/api/java/lang/Double.html

Comments

  1. Have you tried `Decimal64` from https://github.com/epam/DFP yet?

    Replies
    1. I haven't; it would be worth doing a comparison.

    2. I have extended the benchmark and added it to GitHub so you can try it. In my first attempt, DFP was slower.

  2. Do you have any comments or advice on the safe handling of double, maintaining a defined level of numerical accuracy (e.g. to a given number of decimal places, with a given rounding strategy)?

    Replies
    1. In this example, the rounding is to 6 decimal places, half-up, and that is most of the cost. It is much faster if there is no rounding.
