
Showing posts from March, 2013

Lies, statistics and vendors

Overview Reading performance results supplied by vendors is a skill in itself. It can be difficult to compare numbers from different vendors on a fair basis, and even more difficult to estimate how a product will behave in your system. Lies and statistics One of the few quotes from university I remember goes roughly like this: Peak Performance - A manufacturer's guarantee not to exceed a given rating -- Computer Architecture, A Quantitative Approach (1st edition). At first this appears rather cynical, but over the years I have come to the conclusion that it is unavoidable, and once you accept this you can trust the numbers you get if you see them in a new light. Why is it so hard to give a trustworthy performance number? There are many challenges in giving good performance numbers. Most vendors try hard to give trustworthy numbers, but it is not as easy as it looks. Latencies and throughputs don't follow a normal distributi...
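To see why a single headline number can mislead when latencies have a long tail, here is a minimal, hypothetical sketch comparing the mean of a latency sample against its percentiles. The distribution and the millisecond-scale stall are invented purely for illustration and are not taken from any vendor's figures:

    import java.util.Arrays;
    import java.util.Random;

    // Hypothetical illustration only: latencies with a long tail make the average
    // misleading, so percentiles say far more about how a system will really behave.
    public class LatencyPercentiles {
        public static void main(String[] args) {
            Random rand = new Random(1);
            long[] latenciesNs = new long[100_000];
            for (int i = 0; i < latenciesNs.length; i++) {
                // mostly ~1.25 microseconds, but about 1 in 1,000 samples hits a ~1 ms stall
                latenciesNs[i] = rand.nextInt(1_000) == 0 ? 1_000_000 : 1_000 + rand.nextInt(500);
            }
            Arrays.sort(latenciesNs);
            long sum = 0;
            for (long l : latenciesNs) sum += l;
            System.out.printf("mean    %,d ns%n", sum / latenciesNs.length);
            System.out.printf("median  %,d ns%n", latenciesNs[latenciesNs.length / 2]);
            System.out.printf("99.9%%   %,d ns%n", latenciesNs[(int) (latenciesNs.length * 0.999)]);
            System.out.printf("worst   %,d ns%n", latenciesNs[latenciesNs.length - 1]);
        }
    }

The mean sits well above the median because the rare stalls drag it up, while the 99.9% and worst-case figures show the tail that an "average latency" claim hides.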

Simplifying low latency services

Overview Java Chronicle is a persisted, inter-process messaging system which is very fast when used in a low-level way. However, if you don't need this extreme speed, there are a couple of simpler ways to use this open source library. One of these is to use Chronicle's distributed collections. This is very simple to use, but rather slower. This post explores an intermediate solution. It is fast (sub-10 microseconds 99.9% of the time), ultra-low GC, and performs well even if you have bursts of data larger than the main memory size. This post continues from Low latency services, and the demo is an implementation of the gateways and processing engine in the diagram. Service by Contract A way to model the service is to have an interface for the methods/requests/events you want to support and another interface for events out of the processing engine. A demo has been added to demonstrate this approach. Processing E...
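As an illustration of the "service by contract" idea above, a minimal sketch might look like the following. The method names and event types are hypothetical, not the actual interfaces from the demo; the point is simply one interface for events into the engine and another for events out of it, so both sides can be driven from, and replayed from, the persisted log:

    // Hypothetical sketch: events into the processing engine (called by gateways).
    interface EngineIn {
        void newOrder(long timeNs, String client, String symbol, double price, int quantity);
        void cancelOrder(long timeNs, String client, long orderId);
    }

    // Hypothetical sketch: events out of the processing engine (consumed by gateways).
    interface EngineOut {
        void orderAccepted(long timeNs, long orderId);
        void orderRejected(long timeNs, long orderId, String reason);
    }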

Low latency services

Overview Low latency services are designed to be as simple as possible. All the same, it is good to have a picture of the interaction between a low latency processing engine and the rest of the world. Why are we doing all this? Using the following model you can create a processing engine which is deterministic both in behaviour and performance, reproducible for testing, replication and restart of the application, and keeps a record of all actions for issue analysis, including micro-second timings. High level From a high level, a processing engine needs inputs from gateway processes, threads or components which normalise incoming data or requests. These requests are consumed by the processing engine and a log is produced. From this log the gateway processes can respond to requests or send outbound messages to key systems. Also reading the processing engine's log are database persisters needed to support reporting an...
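A minimal sketch of that flow, using hypothetical names and a plain in-memory queue in place of a persisted Chronicle log: gateways normalise requests onto an input journal, the engine consumes them and appends its results to an output journal, and the gateways and persisters read that output journal back:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical sketch of the gateway -> engine -> log -> gateway/persister flow.
    public class EngineFlowSketch {
        interface Journal { void append(String event); String poll(); }

        static class InMemoryJournal implements Journal {
            private final Queue<String> events = new ArrayDeque<>();
            public void append(String event) { events.add(event); }
            public String poll() { return events.poll(); }
        }

        public static void main(String[] args) {
            Journal in = new InMemoryJournal(), out = new InMemoryJournal();

            in.append("gateway-1: new order EUR/USD 1.3050 x 1M");  // gateway normalises a request

            for (String request; (request = in.poll()) != null; )   // processing engine consumes input
                out.append("accepted: " + request);                  // every action lands in the log

            for (String result; (result = out.poll()) != null; )    // gateway / persister reads the log
                System.out.println(result);
        }
    }

Because the engine only reads one journal and writes another, its behaviour is determined entirely by the input log, which is what makes replay for testing, replication and restart possible.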