How do low latency applications differ from regular applications?


A common question we get is: what makes a low latency application different? What is the code like to read?

There are a number of considerations in designing a low latency application which distinguish it from other applications.

Simplicity is key

The best way to make something go faster is to get the application to do less work.  This means: create less garbage, transform data fewer times, and make the data the application needs readily available.
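As a minimal sketch of the "create less garbage" point (the class and method names here are hypothetical, for illustration only), compare summing boxed values, which creates or unboxes a `Long` object per element, with summing a primitive array, which allocates nothing on the hot path:

```java
import java.util.List;

public class GarbageDemo {
    // Boxed: every element is a Long object; iteration unboxes each one.
    static long sumBoxed(List<Long> values) {
        long total = 0;
        for (Long v : values)
            total += v; // auto-unboxing on every element
        return total;
    }

    // Primitive: no per-element objects, no garbage created on this path.
    static long sumPrimitive(long[] values) {
        long total = 0;
        for (long v : values)
            total += v;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumBoxed(List.of(1L, 2L, 3L, 4L, 5L))); // 15
        System.out.println(sumPrimitive(new long[]{1, 2, 3, 4, 5})); // 15, with no allocation
    }
}
```

Both produce the same answer; the difference is the work the computer does, and the garbage it leaves for the collector.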

Note: I say you need the option to do this, but this doesn't mean you have to.  You should use a profiler to guide you as to what sections need optimizing.

Simplicity is especially important if you want consistent performance. If you are looking at your 99th percentile (worst 1 in 100) or 99.9th percentile (worst 1 in 1,000), you are looking at the times when most things are going wrong at once.  The more complex the system, the more there is to go wrong. If throughput is also important to you, you find that one slow outcome can have a significant knock-on effect, and you have to worry about your 99.99th percentile or higher, because even a very rare delay can impact many requests/events before your system is running normally again, i.e. hundreds or many thousands is not unusual.
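To make the percentile terminology concrete, here is a minimal sketch (the class name and sample values are hypothetical) of reading percentiles off a set of latency samples. Note how a couple of rare outliers barely move the median but dominate the higher percentiles:

```java
import java.util.Arrays;

public class Percentiles {
    // Returns the sample value at the given percentile (0..100).
    static long percentile(long[] latencies, double pct) {
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);
        // Index of the smallest sample covering pct percent of all samples.
        int index = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, index)];
    }

    public static void main(String[] args) {
        // Latencies in microseconds: mostly fast, with two rare slow outliers.
        long[] micros = {10, 12, 11, 10, 250, 11, 12, 10, 11, 900};
        System.out.println(percentile(micros, 50));  // 11: the typical latency
        System.out.println(percentile(micros, 90));  // 250: the worst 1 in 10
        System.out.println(percentile(micros, 100)); // 900: the worst seen
    }
}
```

In a real system you would use far more samples and a histogram rather than sorting, but the lesson is the same: the average tells you little about the outliers your users actually notice.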

Don't think easy to add, think easy to remove.

You need to think less about how easy it is to add code and more about how easy it is to take away things the application doesn't need to be doing.  Many frameworks are designed to make it easy to get started and add functionality, but what if you need the framework to do less for you, especially work it doesn't need to be doing?

The best way to solve this is to use libraries and frameworks which are very thin (in terms of call stack and transformations of data) and do very little, with options to do even less if you don't need it.

You want to start with a clear simple solution, but you also want the option to replace or remove any functionality you don't need.

Transform data less times

I am often amazed how convoluted code can be. It is often implemented like kneading dough or kneading data.  The same data is transformed one way, then another, sometimes back again, into yet another form and finally with some customization is put into a data structure which is used to build another data structure.  All this accidental complexity slows down the application, and obscures what the application is actually doing.

You need to design your application so you can see what is happening to your data from end to end, so you can remove any steps you don't actually need (and which are a possible source of error).

I have seen examples of a date as a 64-bit long integer turned into a Date object, into a String in MM/dd/yyyy, turned into a String as yyyy/MM/dd, turned back into a Date object, and finally used as a 64-bit long in milliseconds.  I have seen a 64-bit double turned into a Double object, turned into a String, parsed back to a Double, had some incorrect/complex rounding applied, and finally turned back into the 64-bit double it started as, because it was already rounded.
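The date round-trip above can be sketched in Java (a hypothetical illustration using `java.time`; the formatter patterns follow the text). The convoluted path and the direct path produce the same value, but one allocates several objects and parses two strings while the other does nothing at all:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateRoundTrip {
    static final DateTimeFormatter US = DateTimeFormatter.ofPattern("MM/dd/yyyy");
    static final DateTimeFormatter ISO = DateTimeFormatter.ofPattern("yyyy/MM/dd");

    // The convoluted path: long -> LocalDate -> String -> LocalDate -> String -> LocalDate -> long.
    static long roundTrip(long epochDay) {
        LocalDate date = LocalDate.ofEpochDay(epochDay);   // object allocation
        String us = date.format(US);                       // format to MM/dd/yyyy
        String iso = LocalDate.parse(us, US).format(ISO);  // parse, reformat to yyyy/MM/dd
        return LocalDate.parse(iso, ISO).toEpochDay();     // parse again, back to a long
    }

    // The direct path: the data was already in the form we needed.
    static long direct(long epochDay) {
        return epochDay;
    }

    public static void main(String[] args) {
        long day = 19000; // an arbitrary epoch day
        System.out.println(roundTrip(day) == direct(day)); // true, at many times the cost
    }
}
```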

This sounds silly, but this is what frameworks often do.  Different parts are written by different people and at each level in each library, data is transformed between the model each library uses.

To misquote David Wheeler:
    All problems in computer science can be solved by another level of ~~indirection~~ abstraction.

Readability of code

A lot of code is written with the objective of describing the application at a high level in a natural language style, so that you can understand what it is doing even if you don't know how to program.  This can work like magic.  The problem arises when what you have written at a high level is not doing what you think it is doing.  While functional issues can be detected with good testing, how do you detect that a piece of code works correctly but performs badly, when your design obscures the details of what the computer really has to do?  Even harder to detect is code which performs reasonably well most of the time but occasionally performs badly.  Even profilers won't help you much here.

The priority with low latency code is to be able to easily read what the computer needs to do to implement what you have asked it to do.  Is this operation O(1), O(log N) or O(N)?  Is this operation thread safe, and if I don't need it to be thread safe, how easy is it to make it not so?
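Both questions can be made visible in the code itself. In this minimal sketch (hypothetical names), the same lookup has very different costs depending on the data structure, and the thread-safety decision is an explicit, removable choice rather than something buried in a framework:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CostVisibility {
    // O(N): contains() scans the list element by element.
    static boolean listContains(List<Integer> list, int key) {
        return list.contains(key);
    }

    // O(1) expected: contains() is a single hash lookup.
    static boolean setContains(Set<Integer> set, int key) {
        return set.contains(key);
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < 100_000; i++) {
            list.add(i);
            set.add(i);
        }
        // Same answer, very different amounts of work per call:
        System.out.println(listContains(list, 99_999)); // true, after scanning 100,000 elements
        System.out.println(setContains(set, 99_999));   // true, after one hash lookup

        // Thread safety should be equally visible, and removable when not needed:
        StringBuilder single = new StringBuilder("a"); // unsynchronized: no lock cost
        StringBuffer shared = new StringBuffer("a");   // synchronized on every method call
        System.out.println(single.append("b").toString()
                .equals(shared.append("b").toString())); // true: same result, different cost
    }
}
```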

This low level code is useful to a point, but you still need to be able to understand what is going on from a high level.  There should always be one section of code which says what your component does.  This high level component calls down so you can see how it does it.  This in turn might call down to the low level details of exactly how it is done.
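This layering can be sketched in a few lines (a hypothetical component, for illustration): the top method states what the component does, the methods it calls say how, and the lowest level shows exactly what work the computer performs:

```java
public class TradeProcessor {
    private long notional; // low level state: the exact data the computer keeps

    // High level: says what the component does.
    public void onTrade(long price, long qty) {
        validate(price, qty);
        record(price * qty);
    }

    // Mid level: how it does it.
    private void validate(long price, long qty) {
        if (price <= 0 || qty <= 0)
            throw new IllegalArgumentException("price and qty must be positive");
    }

    // Low level: exactly the work performed, an O(1) add with no allocation.
    private void record(long value) {
        notional += value;
    }

    public long notional() {
        return notional;
    }

    public static void main(String[] args) {
        TradeProcessor tp = new TradeProcessor();
        tp.onTrade(100, 5);
        System.out.println(tp.notional()); // 500
    }
}
```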

Separation of business and infrastructure code

Your business code is the essential complexity of your application.  These are the things you have to do to meet your requirements.  How this is actually done should be a separate and replaceable concern.

Your infrastructure code is enabling code which should be easily replaceable without changing what your application does.

If your application doesn't do what it should, you should be able to change your business logic to fix it. If the application doesn't work the way it should, you should be able to change your infrastructure with some confidence that your application will still do what it is required to do.
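One common way to achieve this separation (a minimal sketch with hypothetical names) is to have the business component depend only on an interface, so the infrastructure behind it can be swapped, whether an in-memory stub for testing or a transport such as Aeron or a Chronicle queue in production:

```java
import java.util.ArrayList;
import java.util.List;

// Infrastructure contract: how messages move is a replaceable concern.
interface OrderPublisher {
    void publish(String order);
}

// One possible implementation; a production system might use Aeron or Chronicle here.
class InMemoryPublisher implements OrderPublisher {
    final List<String> published = new ArrayList<>();
    public void publish(String order) {
        published.add(order);
    }
}

// Business logic: the essential complexity, independent of the transport.
class OrderService {
    private final OrderPublisher publisher;
    OrderService(OrderPublisher publisher) {
        this.publisher = publisher;
    }

    void placeOrder(String symbol, long qty) {
        if (qty <= 0)
            throw new IllegalArgumentException("qty must be positive");
        publisher.publish(symbol + "," + qty);
    }
}

public class Separation {
    public static void main(String[] args) {
        InMemoryPublisher publisher = new InMemoryPublisher();
        new OrderService(publisher).placeOrder("AAPL", 100);
        System.out.println(publisher.published); // [AAPL,100]
    }
}
```

Changing the business rules means editing `OrderService`; changing how orders are delivered means supplying a different `OrderPublisher`, with confidence the business behaviour is unchanged.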

Where can I see examples of low latency coding?

A place to start is with open source libraries designed for low latency.  These include:

- Aeron - IPC Messaging
- Chronicle - Low latency persistence, distributed access and data transformation libraries.

How much difference can it make?

Using low latency techniques can improve the typical performance of an application dramatically, while also increasing its throughput.  In terms of consistency, the 99th percentile latency can be reduced by a factor of 10, 100 or even 1,000 by taking complexity out of the application.

Design for as fast as possible?

This is not the key priority for the projects we work on.  It's just not productive.  The main priority is to develop the simplest solution that solves the problem at the performance needed, with the option to make it faster with a little effort, but only if it has been identified that this is required.

Simplicity also improves maintainability

The rule of thumb we use is that a successful system will cost around three times as much to maintain as it cost to build.  Simplicity and transparency about what your system is doing help minimize your costs in the longer term, as well as helping deliver a working application with less risk, in a shorter time frame.

If simplicity is so great why doesn't everybody do it?

Making a big, complex system is easy; you just keep adding bits until it works.  Making a system simple is hard and takes experience.  You often need years of experience developing the same type of solution to know when you can avoid adding anything more than you really need.

Next article

In my next article I will be looking at how Chronicle libraries can be used to develop a low latency application. 

