Posts

A Java Conversion Puzzler: Understanding Implicit Casting and Overflow

This article explores a subtle Java conversion puzzle that challenges assumptions about how arithmetic operations, implicit casting, and floating-point conversions interact. Inspired by complexities often encountered in low-latency and high-performance environments, it demonstrates why a keen understanding of Java’s type system is essential for building reliable and efficient applications.

Introduction

The following example demonstrates a scenario where an innocuous-looking arithmetic operation leads to a surprising result. While such questions are rare and arguably impractical, they highlight subtle behaviours that can affect correctness and performance, especially in critical systems such as high-frequency trading platforms or complex data-processing pipelines.

The Problem: A Surprising Print Statement

Consider the following code:

    int i = Integer.MAX_VALUE;
    i += 0.0f;
    int j = i;
    System.out.println(j == Integer.MAX_VALUE); // true

At first glance, one might assume that adding...
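To make the surprise easier to see, here is a minimal sketch (my own illustration, not code from the article) that performs the widening and narrowing conversions hidden inside i += 0.0f explicitly:

    // Sketch: the compound assignment i += 0.0f compiles to i = (int)(i + 0.0f),
    // i.e. an int-to-float widening followed by a float-to-int narrowing.
    public class ConversionPuzzle {
        public static void main(String[] args) {
            int i = Integer.MAX_VALUE;

            float widened = i;            // 2147483647 is not representable as a float; rounds up to 2.14748365E9f
            float sum = widened + 0.0f;   // unchanged
            int narrowed = (int) sum;     // narrowing a float larger than Integer.MAX_VALUE clamps to Integer.MAX_VALUE

            System.out.println(widened);                        // 2.14748365E9
            System.out.println(narrowed == Integer.MAX_VALUE);  // true
        }
    }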

Why Does Math.round(0.49999999999999994) Round to 1?

1. Defining the Problem

In many numerical computations, one would reasonably expect that rounding 0.499999999999999917 should yield 0, since it appears to be slightly less than 0.5. Yet, in Java 6, calling Math.round() on this value returns 1, a result that may initially seem baffling. This seemingly minor discrepancy stems from the interplay of binary floating-point representation, rounding modes, and the particular internal implementation details of Math.round() in earlier Java releases. For professionals in performance-sensitive environments, such as those working in financial technology or high-precision scientific applications, understanding these subtleties is more than just an academic exercise. Even tiny rounding differences can influence trading algorithms, pricing models, or simulations. Moreover, developers and enthusiasts who appreciate the low-level mechanics behind Java’s numeric types will find valuable insights into how these internal workings affect everyday pro...
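For concreteness, the effect can be reproduced with the floor-based formula that older JDKs used for Math.round(double). The sketch below is my own illustration of that mechanism (using the value from the post title), not code from the article:

    import java.math.BigDecimal;

    // Java 6 implemented Math.round(double) roughly as (long) Math.floor(a + 0.5d).
    public class RoundingSurprise {
        public static void main(String[] args) {
            double d = 0.49999999999999994;

            // The exact value actually stored in 'd' is just below 0.5:
            System.out.println(new BigDecimal(d));
            // 0.499999999999999944488848768742172978818416595458984375

            // The exact sum d + 0.5 falls halfway between the two nearest doubles,
            // and round-half-to-even picks 1.0 ...
            System.out.println(d + 0.5 == 1.0); // true

            // ... so the old floor-based formula rounds up to 1.
            System.out.println((long) Math.floor(d + 0.5)); // 1
        }
    }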

TLDR: Designing Hyper-Deterministic, High-Frequency Trading Systems

Peter Lawrey is the CEO of Chronicle Software, which counts multiple Tier 1 banks among its clients. He is a Java Champion who has provided the highest number of Java and JVM-related answers on stackoverflow.com, and he architected low-latency Java trading libraries that were downloaded 13 million times in October 2024. In this video, Peter examines how trading systems are designed to support microsecond-latency microservices and how these can be combined to construct complex trading solutions such as Order Management Systems (OMS), pricers, and hedging tools. This presentation was recorded at QCon Shanghai 2019. You can watch the video by following this link or read a summary below.

Introduction

Building a hyper-deterministic high-frequency trading (HFT) platform requires careful attention to detail. Every microservice, data structure, and line of code must be optimised for both performance and predictability. This article explores practical approaches and techniques, drawn f...

Performance Tip: Specify Collection Capacity When Size is Known

When working with Java collections, their ability to grow dynamically is often valuable. Yet, if you already know the required size, specifying the initial capacity can be more efficient. Doing so may reduce CPU overhead and memory churn, resulting in smoother performance. In this article, we will explore why specifying capacity is beneficial, present practical examples, and highlight when you might consider alternatives such as immutable or fixed-size lists.

Efficient Use of ArrayList

Many developers rely on collections like ArrayList to handle dynamic workloads. However, frequent resizing can be costly. Each resizing operation may involve allocating a new underlying array and copying existing elements, which consumes CPU cycles and memory bandwidth. If you know how many elements you need, why not avoid these unnecessary steps? When the final size of the list is known at the outset, setting the initial capacity can signal intent to future maintainers.

A Practical Example: Opt...
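As a quick illustration (my own sketch, not the article's benchmark), pre-sizing an ArrayList avoids the intermediate array allocations and copies that growth would otherwise trigger:

    import java.util.ArrayList;
    import java.util.List;

    public class PresizedList {
        public static void main(String[] args) {
            int n = 1_000_000;

            // May reallocate and copy its backing array several times as it grows.
            List<Integer> grown = new ArrayList<>();
            for (int i = 0; i < n; i++) grown.add(i);

            // Allocates the backing array once, since the final size is known up front.
            List<Integer> presized = new ArrayList<>(n);
            for (int i = 0; i < n; i++) presized.add(i);

            System.out.println(grown.size() == presized.size()); // true
        }
    }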

Performance Tip: Rethinking Collection.toArray(new Type[0])

Introduction

Have you ever considered the performance implications of converting collections to arrays in Java? It's a common task; your chosen method can impact your application's efficiency. In this article, I will explore different approaches to toArray(), benchmark their performance, and determine which method is optimal for various scenarios.

The Challenge

Converting a Collection to an array seems straightforward, but the standard practice of using collection.toArray(new Type[0]) might not be the most efficient. Understanding the nuances of this method can help you write more performant code.

Exploring the Approaches

Let's delve into four primary methods and a combination for converting collections to arrays:

1. Using toArray() Without Arguments

    Object[] array = { "Hello", "world" };
    String[] strings = (String[]) array; // Throws ClassCastException at runtime

While this approach avoids additional array creation and can be fast, it ...
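For reference, here is a short sketch (my own, not the article's benchmark harness) of the common toArray() variants the excerpt alludes to:

    import java.util.Arrays;
    import java.util.List;

    public class ToArrayApproaches {
        public static void main(String[] args) {
            List<String> words = Arrays.asList("Hello", "world");

            // 1. No argument: returns Object[], so no typed String[] is produced.
            Object[] objects = words.toArray();

            // 2. Zero-length typed array: the collection allocates a right-sized String[].
            String[] viaEmpty = words.toArray(new String[0]);

            // 3. Pre-sized typed array: the array passed in is reused when large enough.
            String[] viaSized = words.toArray(new String[words.size()]);

            System.out.println(objects.length + " " + viaEmpty.length + " " + viaSized.length); // 2 2 2
        }
    }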

Storing 1 TB in Virtual Memory on a 64 GB Machine with Chronicle Queue

As Java developers, we often face the challenge of handling very large datasets within the constraints of the Java Virtual Machine (JVM). When the heap size grows significantly, often beyond 32 GB, garbage collection (GC) pause times can escalate, leading to performance degradation. This article explores how Chronicle Queue enables the storage and efficient access of a 1 TB dataset on a machine with only 64 GB of RAM.

The Challenge of Large Heap Sizes

Using standard JVMs like Oracle HotSpot or OpenJDK, increasing the heap size to accommodate large datasets can result in longer GC pauses. These pauses occur because the garbage collector requires more time to manage the larger heap, which can negatively impact application responsiveness. One solution is to use a concurrent garbage collector, such as the one provided by Azul Zing, designed to handle larger heap sizes while reducing GC pause times. However, this approach may only scale well when the dataset is within the available main ...
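To give a flavour of how off-heap, memory-mapped storage looks in practice, here is a minimal Chronicle Queue sketch. It is my own illustration; the path, message count, and payload are assumptions, and the article's actual setup may differ:

    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.ExcerptAppender;
    import net.openhft.chronicle.queue.ExcerptTailer;

    // Entries are persisted in memory-mapped files rather than on the Java heap,
    // so the stored data can far exceed the heap size without inflating GC pauses.
    public class OffHeapQueueDemo {
        public static void main(String[] args) {
            try (ChronicleQueue queue = ChronicleQueue.singleBuilder("queue-data").build()) {
                ExcerptAppender appender = queue.acquireAppender();
                for (int i = 0; i < 1_000_000; i++) {
                    appender.writeText("event-" + i); // written off-heap
                }

                ExcerptTailer tailer = queue.createTailer();
                System.out.println(tailer.readText()); // event-0
            }
        }
    }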

Unveiling Floating-Point Modulus Surprises in Java

When working with double in Java, floating-point representation errors can accumulate, leading to unexpected behaviour, especially when using the modulus operator. In this article, we'll explore how these errors manifest and why they can cause loops to terminate earlier than anticipated.

The Unexpected Loop Termination

Consider the following loop:

    Set<Double> set = new HashSet<>();
    for (int i = 0; set.size() < 1000; i++) {
        double d = i / 10.0;
        double mod = d % 0.1;
        if (set.add(mod)) {
            System.out.printf("i: %,d / 10.0 = %s, with %% 0.1 = %s%n",
                    i, new BigDecimal(d), new BigDecimal(mod));
        }
    }

At first glance, this loop should run indefinitely. After all, the modulus of d % 0.1 for multiples of 0.1 should always be zero, right? Surprisingly, this loop completes after 2,243 iterations, having collected 1,000 unique modulus values. How is this possible? The full code is available on GitHub.

Understanding Flo...
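The root cause is that neither 0.1 nor most values of i / 10.0 are exact in binary. A small sketch (my own, separate from the article's loop) that prints the exact stored values:

    import java.math.BigDecimal;

    // new BigDecimal(double) reveals the exact binary value stored in a double,
    // which is why d % 0.1 is rarely exactly zero even for "multiples of 0.1".
    public class ExactRepresentation {
        public static void main(String[] args) {
            System.out.println(new BigDecimal(0.1));
            // 0.1000000000000000055511151231257827021181583404541015625

            double d = 3 / 10.0;
            System.out.println(new BigDecimal(d));
            // 0.299999999999999988897769753748434595763683319091796875

            // Not 0: d is slightly below three times the stored 0.1,
            // so the remainder comes out just under 0.1 instead of zero.
            System.out.println(new BigDecimal(d % 0.1));
        }
    }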