Plans for Chronicle 2.0
Plans for Chronicle
I have a number of changes planned for Chronicle.
- Wider interest domain using openhft.net https://github.com/OpenHFT/
- More modular design, extracting serialization and parsing from Chronicle.
- Take advantage of performance improvements using Java 7
- Support for FIX directly, to demonstrate its support for text writing and parsing (as an additional module)
Latest performance results
I recently bought a PCIe SSD, and this made it clear to me that there is room for performance improvements in Chronicle. Java 7 may also provide improved performance for thread-safe writes (in particular via the Unsafe.putOrderedXxx methods).
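As a rough illustration (not Chronicle's own code), the sketch below shows the kind of ordered write putOrderedXxx provides: the store is guaranteed not to be reordered with earlier stores, but it avoids the full StoreLoad fence a volatile write would incur. The class and field names are hypothetical.

```java
import sun.misc.Unsafe;
import java.lang.reflect.Field;

public class OrderedWriteDemo {
    static final Unsafe UNSAFE;
    static final long VALUE_OFFSET;

    static {
        try {
            // Obtain the Unsafe instance reflectively (not a public API).
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
            VALUE_OFFSET = UNSAFE.objectFieldOffset(
                    OrderedWriteDemo.class.getDeclaredField("value"));
        } catch (Exception e) {
            throw new AssertionError(e);
        }
    }

    volatile long value;

    void publish(long v) {
        // Ordered ("lazy") write: cheaper than a volatile store because it
        // does not force a StoreLoad barrier, yet still publishes in order.
        UNSAFE.putOrderedLong(this, VALUE_OFFSET, v);
    }

    public static void main(String[] args) {
        OrderedWriteDemo d = new OrderedWriteDemo();
        d.publish(42L);
        System.out.println(d.value);
    }
}
```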
| Number of messages (13 bytes each) | Time to write and read as individual messages | Time to write and read in batches of 10 |
|---|---|---|
| 5 billion | 5 mins, 6 sec | 2 mins, 6 sec |
| 10 billion | 10 mins, 13 sec | 4 mins, 17 sec |
| 15 billion | 15 mins, 23 sec | 6 mins, 28 sec |
| 20 billion | 20 mins, 47 sec | 8 mins, 45 sec |
The test ExampleSimpleWriteReadMain from Chronicle 1.7 uses small messages to demonstrate the overhead on a per-message basis. For larger messages, the time is dominated by the size of the message itself rather than the per-message overhead.
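For readers who haven't seen the test, here is a minimal sketch of the same idea: each 13-byte message is simply a byte, an int and a long (1 + 4 + 8 bytes). It assumes the Chronicle 1.x API (IndexedChronicle, createExcerpt, startExcerpt/finish) as I recall it, so the package and method names may differ slightly from the actual ExampleSimpleWriteReadMain.

```java
import com.higherfrequencytrading.chronicle.Excerpt;
import com.higherfrequencytrading.chronicle.impl.IndexedChronicle;

public class SmallMessageDemo {
    public static void main(String[] args) throws Exception {
        IndexedChronicle chronicle = new IndexedChronicle("/tmp/small-messages");
        Excerpt excerpt = chronicle.createExcerpt();

        // Write: one 13-byte message per excerpt.
        for (int i = 0; i < 1000000; i++) {
            excerpt.startExcerpt(13);
            excerpt.writeByte('M');
            excerpt.writeInt(i);
            excerpt.writeLong(System.nanoTime());
            excerpt.finish();
        }

        // Read the messages back by index.
        for (int i = 0; i < 1000000; i++) {
            if (!excerpt.index(i)) break;
            byte type = excerpt.readByte();
            int seq = excerpt.readInt();
            long timestamp = excerpt.readLong();
            excerpt.finish();
        }
        chronicle.close();
    }
}
```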
This performance test suggests to me the library has very good scalability and consistent performance. Note: the data set is over one thousand times the heap size and almost twenty times the main memory size.
The performance of the batched messages demonstrates what might be possible if the per-message overhead were lower (i.e. 10% of what it is now). This suggests there is room for improvement, which will be examined in Chronicle 2.0.
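For comparison, the batched variant of the write loop in the sketch above puts ten messages into a single excerpt, amortising the startExcerpt()/finish() bookkeeping across the batch. Again, this is only an illustration of the idea, not the actual benchmark code, and reuses the Excerpt type from the previous sketch.

```java
// Batched write: 10 messages per excerpt, so the per-excerpt
// start/finish cost is paid once per 10 messages.
static void writeBatched(Excerpt excerpt, int messages) {
    for (int i = 0; i < messages; i += 10) {
        excerpt.startExcerpt(13 * 10);
        for (int j = 0; j < 10; j++) {
            excerpt.writeByte('M');
            excerpt.writeInt(i + j);
            excerpt.writeLong(System.nanoTime());
        }
        excerpt.finish();
    }
}
```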
The aim for the FIX parser and generator is to saturate a one-gigabit network link, i.e. to handle anything you could send or receive over such a connection. This should be between 500K and 1M FIX messages per second.
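To make the target concrete, here is a deliberately naive, allocation-heavy illustration of the FIX tag=value wire format. This is not the planned module; to reach 500K-1M messages per second, the parser and generator would need to work directly against Chronicle's off-heap bytes without producing the String and Map garbage seen below.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FixParseDemo {
    private static final char SOH = '\u0001'; // FIX field delimiter

    // Parse a raw FIX message into tag -> value pairs.
    static Map<Integer, String> parse(String msg) {
        Map<Integer, String> fields = new LinkedHashMap<Integer, String>();
        for (String field : msg.split(String.valueOf(SOH))) {
            if (field.isEmpty()) continue;
            int eq = field.indexOf('=');
            fields.put(Integer.parseInt(field.substring(0, eq)),
                       field.substring(eq + 1));
        }
        return fields;
    }

    public static void main(String[] args) {
        String newOrder = "8=FIX.4.2" + SOH + "35=D" + SOH + "55=VOD.L" + SOH
                + "54=1" + SOH + "38=100" + SOH + "44=123.45" + SOH;
        // Prints {8=FIX.4.2, 35=D, 55=VOD.L, 54=1, 38=100, 44=123.45}
        System.out.println(parse(newOrder));
    }
}
```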
Time frame: I am looking to release Chronicle 2.0 by August 2013.