Java NIO is faster than Java IO for Sockets

Most benchmarks comparing Java IO Sockets and Java NIO SocketChannel compare these libraries using different threading models. Typically this is Java IO with dedicated thread(s) for each Socket, compared with a dispatcher model for Java NIO, where non-blocking SocketChannels share threads.

However, this comparison uses IO and NIO with the same threading model: one dedicated thread per Socket/SocketChannel.

The point of this comparison is that NIO doesn't have to be slow; it's just that the threading model commonly used with NIO is perhaps more trouble than it's worth.
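To make the setup concrete, here is a minimal sketch of a blocking SocketChannel used with one dedicated thread per connection, i.e. the classic IO threading model applied to NIO. The class name, port choice, and echo protocol are my own illustration, not taken from the benchmark source.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class BlockingNioEcho {

    // One request/response over blocking SocketChannels; returns the echoed reply.
    static String roundtrip() throws Exception {
        // Server: blocking ServerSocketChannel with one dedicated thread per
        // connection -- the same threading model as classic java.io Sockets.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("localhost", 0)); // pick a free port
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Thread acceptor = new Thread(() -> {
            try (ServerSocketChannel s = server) {
                SocketChannel conn = s.accept();      // blocks, like ServerSocket.accept()
                new Thread(() -> echo(conn)).start(); // dedicated thread per channel
            } catch (IOException ignored) {
            }
        });
        acceptor.start();

        // Client: a blocking SocketChannel used exactly like a plain Socket.
        try (SocketChannel client = SocketChannel.open(new InetSocketAddress("localhost", port))) {
            ByteBuffer out = ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8));
            while (out.hasRemaining()) client.write(out);
            client.shutdownOutput();                  // tell the server we are done

            ByteBuffer in = ByteBuffer.allocate(64);
            while (client.read(in) >= 0) { }          // read until the server closes
            in.flip();
            return StandardCharsets.UTF_8.decode(in).toString();
        }
    }

    // Echo everything received until end-of-stream, then close the channel.
    static void echo(SocketChannel conn) {
        try (SocketChannel c = conn) {
            ByteBuffer buf = ByteBuffer.allocate(64);
            while (c.read(buf) >= 0) {
                buf.flip();
                while (buf.hasRemaining()) c.write(buf);
                buf.clear();
            }
        } catch (IOException ignored) {
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundtrip()); // prints: ping
    }
}
```

Note there is no Selector anywhere: channels are left in their default blocking mode, so the code reads like plain IO while still going through the NIO stack.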

For more details, including the source, see my wiki page on this topic.


  1. Hi, about the wiki page:

    "Site deleted

    The site you are trying to reach has been deleted by the Wikidot Team.
    If you believe that this is a mistake, you can appeal at
    or contact"

    And about the considerations on the threading model... I fully agree with you: indeed the New (old? :) ) IO gives you more flexibility to choose the right threading model for the application you want to design. Only one question: have you benchmarked and compared all (?) the well-known proactor/reactor/multi-reactor/plain single-threaded threading models using NIO in different contexts (low-latency needs / long packets / short packets...)?

  2. Perhaps you can shed some light on the following:

    When using NIO with a limited group of machines talking to each other in some request/reply system, would you prefer to send the reply on the channel of the request? Or would you consider creating a dedicated channel to send replies on?

    The advantage of the first is of course simplicity, and you don't need to deal with waking up the selector of the writer. But the problem is that while returning the response, other requests are piling up, so the flow is being disrupted.

    With a dedicated reply channel, you can have one thread that does nothing but read and process requests, and one thread doing nothing but return responses. So in this case the flow is not disrupted.

    Scaling can be done quite easily in our case: just open more ports so you can have multiple NIO threads in parallel. The idea is that each NIO thread processes a group of partitions, so on the client side you calculate the partition id, then you calculate the port, e.g. port = base + partitionId % cpuCount, and then you shoot the request directly at the CPU that is able to process that request.

    This is just a toy application but it has been lingering in my mind for a long time.

    1. In general, I prefer to keep things as simple as possible and only add something when needed. In the case of Sockets, they often perform better/more consistently if you send data in both directions. This could be because this is the use case they are optimised for.

      I find the flow is more disrupted if you don't reply on the same stream. I used to think this had something to do with Nagle, but I see this even without Nagle.

      The major source of delay is usually not the send itself but the bandwidth of the connection. With optimised hardware you can get the send down to a few microseconds.

      I suggest you try it and see if it really helps or not; either way, I would be interested to see this in a blog post.
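The partition-to-port scheme sketched in the comment above (port = base + partitionId % cpuCount, one NIO thread per port) can be written out as a small helper. The names basePort, partitionId, and cpuCount are the commenter's, and the concrete values below are hypothetical.

```java
public class PartitionPorts {

    // Map a partition to the port of the NIO thread that owns it: each NIO
    // thread listens on its own port and processes a group of partitions.
    static int portFor(int partitionId, int basePort, int cpuCount) {
        // floorMod keeps the result non-negative even for negative ids
        return basePort + Math.floorMod(partitionId, cpuCount);
    }

    public static void main(String[] args) {
        // With a hypothetical basePort of 9000 and 4 NIO threads,
        // partitions 0..7 wrap around the 4 ports.
        for (int p = 0; p < 8; p++) {
            System.out.println("partition " + p + " -> port " + portFor(p, 9000, 4));
        }
    }
}
```

The client computes the port locally from the partition id, so no lookup service is needed; adding capacity means raising cpuCount on both sides.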

