How fast are Java sockets?


How long a request/response takes, and the rate at which requests can be performed in a Java application, depend on a number of factors: the network, the network adapter, the Java Socket and TCP layers, and what your application does.

Usually the last factor is the limitation. But if you want to measure the overhead Java and TCP contribute on their own, here is a way to test it.

Latency and throughput

The latency, in this test, is the round trip time (sometimes written as RTT): the time between sending a request and receiving the response. It includes the delay on the client side, the transport, and the delay on the server side.

The throughput is a measure of how many request/responses can be performed in a given amount of time. How long each individual request/response takes is not measured, and has little impact unless it is very large.

The Results

These results are for a fast PC doing nothing but passing data back and forth over loopback. They represent a best case for Java and TCP on your system: a server running a real application on a real network will not be faster than this.

You should test your own system, as the hardware used can make a big difference. (See The Code below.)

Socket latency 1/50/99%tile: 5.6 / 5.8 / 7.0 us
Socket throughput: 170 K/s
Threaded socket latency 1/50/99%tile: 6.0 / 8.5 / 10.7 us
Threaded socket throughput: 234 K/s

The first pair of results tests just the Socket. It is single-threaded, and as you would expect, the throughput is the inverse of the latency, i.e. 170K * 5.8e-6 = 0.986 (about one thread busy). In the threaded test, both the latency and the throughput are higher, i.e. 234K * 8.5e-6 = 1.989 (about two threads busy). Put another way, the throughput was double the inverse of the latency.
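The arithmetic above can be checked directly. A tiny sketch (`busy` is an illustrative name; the figures are the results reported above):

```java
// Sanity check: throughput * latency ~= number of busy threads
// (a Little's-law style identity for a closed request/response loop).
public class BusyThreads {
    static double busy(double throughputPerSec, double latencySec) {
        return throughputPerSec * latencySec;
    }

    public static void main(String[] args) {
        System.out.println(busy(170_000, 5.8e-6)); // ~0.99, about one busy thread
        System.out.println(busy(234_000, 8.5e-6)); // ~1.99, about two busy threads
    }
}
```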

Can throughput be increased

Throughput can be increased further by batching and by using multiple connections. This worsens latency but can give a significant increase in throughput: between 2x and 10x can be expected with one server. Additional servers have the potential to increase throughput to the limits of your budget. ;)
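As a rough sketch of the batching idea (illustrative names, not the article's code): write a whole batch of requests back-to-back over one connection before reading any responses, so one round-trip delay is amortized across the batch.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch of batching over one loopback connection: queue a
// batch of requests, then drain the batch of responses, paying the
// round-trip delay once per batch instead of once per request.
public class BatchedEcho {
    static final int SIZE = 1024; // 1 KB payload per request

    // Sends batch*rounds requests over loopback; returns elapsed nanoseconds.
    static long runBatched(int batch, int rounds) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.setTcpNoDelay(true);
                    byte[] buf = new byte[SIZE];
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    int n;
                    while ((n = in.read(buf)) >= 0)
                        out.write(buf, 0, n); // echo whatever arrives
                } catch (IOException ignored) { /* client closed */ }
            });
            echo.setDaemon(true);
            echo.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.setTcpNoDelay(true);
                OutputStream out = client.getOutputStream();
                DataInputStream in = new DataInputStream(client.getInputStream());
                byte[] req = new byte[SIZE], resp = new byte[SIZE];
                long start = System.nanoTime();
                for (int r = 0; r < rounds; r++) {
                    for (int i = 0; i < batch; i++)
                        out.write(req);     // queue the whole batch first
                    for (int i = 0; i < batch; i++)
                        in.readFully(resp); // then drain the responses
                }
                return System.nanoTime() - start;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        long time = runBatched(10, 1_000);
        System.out.printf("10,000 batched requests in %.1f ms%n", time / 1e6);
    }
}
```

Increasing the batch size raises throughput at the cost of per-request latency, which is the trade-off described above.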

However, once you have a real application on a real network, you will be lucky to achieve these throughput numbers on one server, even with batching and multiple connections.

Follow on Article

Send XML over a Socket fast

The Code
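The article's original listing is not reproduced here, but a loopback RTT benchmark along these lines can be sketched as follows (illustrative names, assuming the 1 KB payload mentioned in the comments below):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;

// A minimal sketch of a loopback RTT benchmark: an echo thread returns each
// 1 KB request, and the client times each round trip and reports percentiles.
public class SocketRttBench {
    static final int SIZE = 1024; // 1 KB payload

    static void readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) throw new EOFException();
            off += n;
        }
    }

    // Performs `runs` round trips over loopback; returns sorted RTTs in nanos.
    static long[] measure(int runs) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.setTcpNoDelay(true);
                    byte[] buf = new byte[SIZE];
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    while (true) {
                        readFully(in, buf);
                        out.write(buf); // echo the request back
                    }
                } catch (IOException ignored) { /* client closed */ }
            });
            echo.setDaemon(true);
            echo.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.setTcpNoDelay(true); // stop Nagle delaying small writes
                InputStream in = client.getInputStream();
                OutputStream out = client.getOutputStream();
                byte[] buf = new byte[SIZE];
                long[] times = new long[runs];
                for (int i = 0; i < runs; i++) {
                    long start = System.nanoTime();
                    out.write(buf);
                    readFully(in, buf);
                    times[i] = System.nanoTime() - start;
                }
                Arrays.sort(times);
                return times;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        long[] t = measure(20_000);
        System.out.printf("RTT 1/50/99%%tile: %.1f / %.1f / %.1f us%n",
                t[t.length / 100] / 1e3,
                t[t.length / 2] / 1e3,
                t[t.length - 1 - t.length / 100] / 1e3);
    }
}
```

Run it several times and ignore the first results, as the JIT compiler and TCP slow start both distort early measurements.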

Related articles

Send XML over a Socket fast

How fast are Java Datagrams?


  1. I always thought that there was no TCP stack involved for localhost... that it goes straight to a domain socket, i.e. it gets a file descriptor and writes the stream into it as it would into a file.

    Did you mean that "domain Linux sockets" (not the NET ones) are "one of the limiting factors to using Java and TCP on your system"?

  2. Also, I find 170K requests/s quite a lot :-) if you actually mean 170K "requests"/s. I suppose the request is a byte stream that represents the most basic request possible, right?

  3. It's a ByteBuffer with 1024 \0 bytes. There is no encoding/decoding going on, but it is a non-trivial size.

  4. It is highly likely that loopback TCP takes all sorts of shortcuts; however, these are the sort of numbers I have seen with network adapters using kernel bypass, so they are not unrealistic.

    This benchmark includes the application layer as well: 14 us latency messaging with Solarflare.
    With RTT tests, Solarflare to Solarflare, I have seen less than half this.

  5. Recently we improved our Java sockets performance by replacing ObjectOutputStream with BufferedOutputStream. Sorry if I'm not on the actual topic.

  6. @Kumar, can you link the article where you discuss this, including how you measured the improvement you saw, rather than a link to your blog, which could be considered spam? ;)

  7. That was in my last project, nearly one year back. Right now I don't have the artifacts that you asked for. But I remember serialization of new objects was the performance hit, and we implemented our own custom serialization to address this. And about my link: it's just my signature, though it might be stealing your publicity :)

  8. @Kumar, in that case, it's an annoying signature. If you want to improve publicity I suggest posting your articles to Hacker News. ;) You could still write an article on the topic you mentioned. Then I would be happy to have a link here.

  9. Cool. Will post the article and update the link then. :) You can delete my comments now. Thanks


  10. Make sure you're at JavaOne this year .. there might be something (low latency networking?) that will blow your socks off coming to a Java near you!


    Cameron Purdy | Oracle

  11. @Cameron, nice teaser, but it's a long way from London. ;) Hopefully some of my readers will get a chance to go.

  12. @Lisak: According to this paper on the performance of Java sockets, Java always uses TCP, even for communication with localhost.
    Unfortunately, I can no longer find any code on the site the paper points to.

  13. @Skender. Java always uses TCP for Sockets, including localhost, as you say. I am not sure Linux implements the whole TCP stack, especially as there is no real transport.

    Obviously, Java also supports UDP via DatagramSockets.

    If you want something faster than Sockets you can use shared memory. The inter process latency can be less than 100 ns.
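    As a hedged aside, one way to get shared memory in Java is a memory-mapped file; a minimal sketch (illustrative names, not the article's code):

    ```java
    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Illustrative sketch: two mappings of the same file behave like shared
    // memory, so a value written through one is visible through the other
    // with no socket and no system call on the data path.
    public class SharedMemoryPing {
        static long exchange(long value) throws IOException {
            File f = File.createTempFile("shm", ".dat");
            f.deleteOnExit();
            try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                FileChannel ch = raf.getChannel();
                // In real IPC these mappings would live in different processes.
                MappedByteBuffer writer = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                MappedByteBuffer reader = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                writer.putLong(0, value); // "send": a plain memory write
                return reader.getLong(0); // "receive": a plain memory read
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println("read back: " + exchange(42L));
        }
    }
    ```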

  14. This is a year later, but I thought I'd drop a note to say I extracted/mutated the code from your sample into a utility to help measure baseline RTT latency. Blog post is here: and as per your licence I owe you a pint (or 2).

