Using JSON in a low latency environment

Overview

In this post I look at the performance of our JSON parser. Given a choice I would suggest YAML for a host of reasons; however, if you have to use JSON, how does the performance compare?

The message

The message is simple and contains a number of data types.

"price":1234,"longInt":1234567890,"smallInt":123,"flag":true,
"text":"Hello World!","side":"Sell"

The actual message doesn't contain a new line; it is wrapped here for readability.
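The message above can be built with plain JDK classes. This is only a sketch to show the fields and their types; the benchmark itself uses Chronicle Wire, Jackson etc. rather than hand-built strings:

```java
// Sketch of the test message built with plain JDK classes.
// The real benchmark uses Chronicle Wire / Jackson; this just shows the fields and types.
public class MessageSketch {
    public static String buildMessage() {
        StringBuilder sb = new StringBuilder(100);        // reusable in a real benchmark
        sb.append("\"price\":").append(1234)              // numeric field
          .append(",\"longInt\":").append(1234567890L)    // long field
          .append(",\"smallInt\":").append(123)           // small int
          .append(",\"flag\":").append(true)              // boolean
          .append(",\"text\":\"Hello World!\"")           // string
          .append(",\"side\":\"Sell\"");                  // enum-like string
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildMessage());
    }
}
```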

The performance

This is the time to write and then read the message.

These timings are in microseconds (0.001 milliseconds).
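To read the percentile columns: 99.9% of write-then-read round trips completed within the stated time, and "worst" is the single slowest sample. A simple nearest-rank sketch of how such a percentile can be taken from sampled latencies (the actual benchmark harness may compute it differently):

```java
import java.util.Arrays;

public class Percentile {
    // Nearest-rank percentile: smallest sample such that at least p% of samples are <= it.
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length); // 1-based rank
        return sorted[rank - 1];
    }

    public static void main(String[] args) {
        long[] samples = {1, 2, 3, 4, 5, 6, 7, 8, 9, 1000}; // one outlier
        System.out.println(percentile(samples, 90));  // 9
        System.out.println(percentile(samples, 100)); // 1000, i.e. the worst
    }
}
```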


Wire Format        Bytes  99.9 %tile  99.99 %tile  99.999 %tile   worst
JSONWire            100*        3.11         5.56          10.6    36.9
Jackson             100         4.95          8.3         1,400   1,500
Jackson + C-Bytes   100*        2.87         10.1         1,300   1,400
BSON                 96         19.8        1,430         1,400   1,600
BSON + C-Bytes       96*        7.47         15.1         1,400  11,600
BOON Json           100         20.7         32.5        11,000  69,000

"C-Bytes" means using Chronicle Bytes to provide a recycled buffer.

"*" means this data was written to/read from direct memory wouldnt' need additional copying to use with NIO.

The code is the same as in the previous two posts; the only difference is the use of JSONWire.

Conclusion

JSONWire may be a very good choice for performance, especially where consistent low latency, achieved through ultra-low garbage production, is required.

Related Links

What does the code for Chronicle Wire to generate JSON and YAML look like?
Why use YAML instead of JSON over the network?



Comments

  1. Looks interesting :). I might want to try to use C-Wire as an additional codec for fst serialization. Currently I can use binary, unsafe-binary and json (using jackson for raw parsing). As in this use case databinding is provided by fst serialization, it would require a low level parser API (not jsonobject).

    Peeking at the jackson benchmark it might not be fair, as far as I can see the bench allocates a ByteArrayOutputStream and then allocates+copies the byte array again using .toByteArray() for each benchmark run. In practice one would reuse an existing ByteArrayOutputStream, also subclass it in order to provide public access to the underlying byte array without the need to alloc+copy again ^^.

    Replies
    1. Using a recycled buffer really helped the BSON parser/generator but produced mixed results for Jackson.

    2. It should improve the latency profile at least. AFAIR BSON requires "backwriting" of the object length (the length field comes first but can only be computed after the object has been written), so it's quite a cache-missy format...

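The "backwriting" the commenter describes can be sketched with a ByteBuffer: reserve space for the length up front, write the payload, then patch the length once it is known. This only illustrates the patch-back step, not BSON's actual wire format:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustration of length "backwriting": the length field comes first but is
// only known after the payload has been written, so it is patched in afterwards,
// which touches an earlier (possibly evicted) cache line.
public class BackWrite {
    static int writeFramed(ByteBuffer buf, String payload) {
        int lengthPos = buf.position();
        buf.putInt(0);                              // placeholder for the length
        byte[] body = payload.getBytes(StandardCharsets.UTF_8);
        buf.put(body);
        int total = buf.position() - lengthPos;     // length including the int itself
        buf.putInt(lengthPos, total);               // patch the placeholder in place
        return total;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        System.out.println(writeFramed(buf, "hello")); // 9 = 4-byte length + 5-byte payload
        System.out.println(buf.getInt(0));             // 9
    }
}
```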
  2. Regarding Jackson: I don't know if the tests use databind or streaming; but if databind, there is a method `writeValueAsBytes()` which should be quite efficient. The only thing it cannot reuse is the exactly sized output buffer, but other internal things are reused.

    Replies
    1. In the "Jackson with C-Bytes" test I used a recycled buffer. The assumption is that you need to transfer data to/from native memory to interact with TCP or Files and this test does so.

    2. One other thing that would be interesting to know is whether tp99.999 was mostly/only due to GC (except for JSONWire, which remained unaffected), or something else. From the numbers I would venture a guess it was simply due to young gen collections, but it'd be nice to know for sure.

  3. Hey Peter. Thanks for this. I can't find JSONWire using a Google search. Is that a Chronicle package? Can you link to all of the packages used? Also I'm a bit confused about the 99.9... numbers. Does that mean that, for example, JSONWire's worst 99.9% is an average of 3.11 microseconds? A bit more description of the values and maybe a sentence more in the conclusion would improve the piece. For example, Jackson's JSON seems to be better at 3 9s. Certainly the worst is a lot worse but I'm not sure otherwise of the ramifications. Thanks.


