Java: What is the limit to the number of threads you can create?
Overview
I have seen a number of tests where a JVM has 10K threads. However, what happens if you go beyond this?
My recommendation is to consider having more servers once your total reaches 10K. You can get a decent server for $2K and a powerful one for $10K.
Creating threads gets slower
The time it takes to create a thread increases as you create more threads. For the 32-bit JVM, the stack size appears to limit the number of threads you can create; this may be due to the limited address space. In any case, the memory used by each thread's stack adds up: with a 128 KB stack and 20K threads, the stacks alone use about 2.5 GB of virtual memory.
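The following is a minimal sketch of the kind of test that produces figures like those in the table below. The class name MaxThreadsTest and the reporting interval are illustrative, not the original benchmark; run it with different -Xss settings (for example, java -Xss128k MaxThreadsTest) to vary the stack size.

import java.util.concurrent.locks.LockSupport;

public class MaxThreadsTest {
    public static void main(String... args) {
        long start = System.nanoTime();
        int count = 0;
        try {
            while (true) {
                // Each thread just parks forever, so only its stack is consumed.
                Thread t = new Thread(LockSupport::park);
                t.setDaemon(true);
                t.start();
                count++;
                if (count % 5_000 == 0) {
                    long avgMicros = (System.nanoTime() - start) / count / 1_000;
                    System.out.println(count + " threads, avg " + avgMicros + " us per thread");
                }
            }
        } catch (OutOfMemoryError e) {
            // Typically "unable to create new native thread" once the limit is hit.
            System.out.println("Failed after " + count + " threads: " + e);
        }
    }
}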
Bitness   Stack Size   Max threads
32-bit    64 KB        32,073
32-bit    128 KB       20,549
32-bit    256 KB       11,216
64-bit    64 KB        stack too small
64-bit    128 KB       32,072
64-bit    512 KB       32,072
Note: in the last case, the thread stacks alone total about 16 GB of virtual memory (512 KB x 32,072 threads).
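If you want to experiment with different stack sizes for individual threads rather than changing -Xss for the whole JVM, the Thread constructor that accepts a stackSize argument can be used. The sketch below is illustrative; the 64 KB value is an assumption, and the JVM treats the requested size only as a hint, so some platforms round it up or ignore it entirely.

import java.util.concurrent.locks.LockSupport;

public class SmallStackThreads {
    public static void main(String[] args) {
        // Request a 64 KB stack for this thread; the value is only a hint.
        Thread t = new Thread(null, LockSupport::park, "small-stack", 64 * 1024);
        t.setDaemon(true);
        t.start();
        System.out.println("Started " + t.getName());
    }
}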