Posts

Showing posts from 2022

Java is Very Fast, If You Don’t Create Many Objects

You still have to watch how many objects you create. This article looks at a benchmark passing events over TCP/IP at 4 billion events per minute using the net.openhft.chronicle.wire.channel package in Chronicle Wire, and explains why we still avoid object allocations.

One of the key optimisations is creating almost no garbage. Allocation is a very cheap operation, and collection of very short-lived objects is also very cheap. Does this really make a difference? What difference does one small object per event (44 bytes) make to performance in a throughput test where GC pauses are amortised? While allocation is as efficient as possible, it doesn't avoid the memory pressure on the L1/L2 caches of your CPUs, and when many cores are busy, they contend for memory in the shared L3 cache.

Results

Benchmark on a Ryzen 5950X with Ubuntu 22.10.

JVM Vendor, Version | No objects: Throughput, Average Latency* | One object per event: Throughput, Average Latency*
Azul Zulu 1.8.0_322 | 60.6 M event/s, 528
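The allocation trade-off the excerpt describes can be sketched in a few lines. This is an illustrative contrast, not the article's actual benchmark: one handler allocates a small short-lived object per event, the other reuses a single mutable instance (the garbage-free style Chronicle code favours). The class and field names are assumptions for illustration.

```java
public class AllocationDemo {
    // A small per-event object, roughly in the spirit of the ~44-byte
    // object the article measures (exact layout is illustrative).
    static final class Event {
        long timestamp;
        int symbolId;
        double price;

        void set(long ts, int sym, double px) {
            timestamp = ts;
            symbolId = sym;
            price = px;
        }
    }

    // Allocating style: a new short-lived Event per call. Cheap to
    // allocate and collect, but adds memory pressure on CPU caches.
    static double allocating(int events) {
        double sum = 0;
        for (int i = 0; i < events; i++) {
            Event e = new Event();
            e.set(i, i & 0xFF, i * 0.5);
            sum += e.price;
        }
        return sum;
    }

    // Zero-allocation style: one reused instance on the hot path,
    // so steady-state processing creates no garbage at all.
    static double reusing(int events) {
        Event e = new Event();
        double sum = 0;
        for (int i = 0; i < events; i++) {
            e.set(i, i & 0xFF, i * 0.5);
            sum += e.price;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        // Same result either way; only the allocation profile differs.
        System.out.println(allocating(n) == reusing(n));
    }
}
```

Both methods compute the same answer; the difference only shows up under sustained load, when per-event garbage competes for L1/L2/L3 cache space with the working data.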

Comparing Approaches to Durability in Low Latency Messaging Queues

A significant feature of Chronicle Queue Enterprise is support for TCP replication across multiple servers to ensure the high availability of application infrastructure. I have generally believed that replicating data to a secondary system is faster than syncing to disk, provided the network round-trip delay is kept low by quality networks and co-located redundant servers. This is the first time I have benchmarked it with a realistic example.

Little's Law and Why Latency Matters

In many cases, the assumption is that latency won't be a problem as long as throughput is high enough. However, latency is often a key factor in why the throughput isn't high enough. Little's law states: "the long-term average number L of customers in a stationary system is equal to the long-term average effective arrival rate λ multiplied by the average time W that a customer spends in the system". In computer terminology, the level of concurrency or parallelism a system has to support must be
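Little's Law (L = λ × W) can be turned into a quick back-of-the-envelope calculation. The numbers below are illustrative, not from the article's benchmark: they show how a target throughput and an average time-in-system imply a required level of concurrency.

```java
public class LittlesLaw {
    public static void main(String[] args) {
        // Little's Law: L = lambda * W
        // lambda: target arrival rate (messages/second) - illustrative value
        double lambda = 100_000;
        // W: average time each message spends in the system, in seconds
        // (e.g. 1 ms waiting on a disk sync or a replication round trip)
        double w = 0.001;
        // L: the average number of messages that must be in flight
        // concurrently to sustain that throughput at that latency
        double l = lambda * w;
        System.out.println((long) l);
    }
}
```

At 100,000 messages/second with 1 ms spent per message, the system must hold 100 messages in flight on average; halve the latency and the required concurrency halves too, which is why latency caps achievable throughput.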

Event-Driven Order Processing Program

Following the Hello World example of a simple, independently deployable, real-time event-driven microservice, this article looks at a more realistic example of an Order Processor with a New Order Single in and an Execution Report out.

A New Order Single is a standard message type for an order of one asset in the FIX protocol, which is used widely by financial institutions such as banks. The reply is typically one or more Execution Reports updating the status of that order.

Some Background on Fintech

In fintech, when one organisation wishes to purchase an asset or commodity from another, it sends an order. The other organisation sends a message to notify whether the order was successful; this message is called an execution report. You could think of it a bit like a trade receipt. These orders and execution reports are transmitted electronically, using a data format standardised by Financial Information eXchange (FIX). There are many different orders, but one of the most popular Orders of
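The in/out contract described above (New Order Single in, Execution Report out) can be sketched as a minimal event-driven handler. This is a hypothetical sketch: the record and interface names are assumptions for illustration, not Chronicle's or FIX's actual Java API.

```java
public class OrderProcessorDemo {
    // Inbound event: a simplified New Order Single (FIX message type 35=D).
    record NewOrderSingle(String clOrdId, String symbol, double qty, double price) {}

    // Outbound event: a simplified Execution Report (FIX message type 35=8).
    record ExecutionReport(String clOrdId, String status) {}

    // The "out" side of the microservice: whoever listens for reports.
    interface OrderListener {
        void onExecutionReport(ExecutionReport er);
    }

    // The order processor reacts to each inbound order by emitting a
    // report, mirroring the FIX request/response flow described above.
    static void onNewOrderSingle(NewOrderSingle nos, OrderListener out) {
        out.onExecutionReport(new ExecutionReport(nos.clOrdId(), "NEW"));
    }

    public static void main(String[] args) {
        NewOrderSingle nos = new NewOrderSingle("ord-1", "EURUSD", 1_000, 1.07);
        onNewOrderSingle(nos, er -> System.out.println(er.clOrdId() + " " + er.status()));
    }
}
```

The design point is that the processor depends only on event types and a listener interface, so the same handler can be driven by a test harness or by a real transport without change.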