Low latency Microservices
In this series of articles I look at how we can use microservices and yet retain ease of testing and low-latency performance. In the last of these I look at restarting a service after a failure.
Microservices for Performance
Microservices is a buzzword at the moment. Is it really something original, or is it based on established best practices? There are some disadvantages to the way microservices have been implemented, but can these be solved?
Microservices in the Chronicle World - Part 1
At a high level, different microservices strategies have a lot in common. They subscribe to the same ideals. When it comes to the details of how they are actually implemented, they can vary considerably.
Microservices in the Chronicle World - Part 2
In this part we look at turning a component into a service.
Microservices in the Chronicle World - Part 3
One of the problems with using microservices is performance. Latencies can be higher due to the cost of serialization, messaging, and deserialization, and this reduces throughput. Poor throughput is a particular problem, because the reason we design a scalable system in the first place is to increase throughput.
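To make the serialization cost concrete, here is a minimal, self-contained sketch (not Chronicle's API; the `Order` message type and the use of standard Java serialization are illustrative assumptions) that times the serialize-plus-deserialize round trip every message pays when it crosses a service boundary:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCost {
    // Hypothetical message type, used only for this illustration.
    static class Order implements Serializable {
        long id = 42;
        String symbol = "ABC";
        double price = 101.5;
    }

    // One full message hop: serialize to bytes, then deserialize back.
    static Order roundTrip(Order o) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (Order) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Order order = new Order();
        // Warm up the JIT before timing anything.
        for (int i = 0; i < 10_000; i++)
            roundTrip(order);
        int runs = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++)
            roundTrip(order);
        long avgNanos = (System.nanoTime() - start) / runs;
        System.out.println("avg round-trip: " + avgNanos + " ns");
    }
}
```

Even for this tiny object, standard Java serialization typically costs on the order of microseconds per round trip; lower-latency designs replace it with a faster wire format, but the round trip itself never disappears.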
Microservices in the Chronicle World - Part 4
A common question we cover in our workshops is how to restart a queue reader after a failure.
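The usual pattern is to record the index of the last message processed so a restarted reader can resume from the next entry rather than replaying the whole queue. A generic sketch of that idea (class and method names are illustrative, not Chronicle's API; the `List` stands in for the persisted queue):

```java
import java.util.List;

// A reader that tracks the index of its last processed message. After a
// crash, a new instance is constructed with the saved index and resumes
// from savedIndex + 1 instead of re-reading from the start.
public class RestartableReader {
    private long lastReadIndex; // in a real system, persisted after each message

    public RestartableReader(long savedIndex) {
        this.lastReadIndex = savedIndex;
    }

    // Process every message after the saved index; returns the count processed.
    public int drain(List<String> queue, List<String> out) {
        int processed = 0;
        for (long i = lastReadIndex + 1; i < queue.size(); i++) {
            out.add(queue.get((int) i));
            lastReadIndex = i; // checkpoint the position as we go
            processed++;
        }
        return processed;
    }

    public long lastReadIndex() {
        return lastReadIndex;
    }
}
```

For example, a reader that had processed up to index 2 before failing restarts with `savedIndex = 2` and sees only the messages appended after that point.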
Microservices in the Chronicle World - Part 5
How can we evaluate the performance of a series of services in a test harness? We introduce JLBH (Java Latency Benchmark Harness) to test these services.