You still have to watch how many objects you create. This article looks at a benchmark passing events over TCP/IP at 4 billion events per minute using the net.openhft.chronicle.wire.channel package in Chronicle Wire, and why we still avoid object allocations.

One of the key optimisations is creating almost no garbage. Allocation is a very cheap operation, and collection of very short-lived objects is also very cheap. Does this really make a difference? What difference does one small object per event (44 bytes) make to the performance in a throughput test where GC pauses are amortised? While allocation is as efficient as possible, it doesn’t avoid the memory pressure on the L1/L2 caches of your CPUs, and when many cores are busy, they are contending for memory in the shared L3 cache.

Results

Benchmark on a Ryzen 5950X with Ubuntu 22.10.

JVM Vendor, Version | No objects: Throughput, Average Latency* | One object per event: Throughput, Average Latency*
Azul Zulu 1.8.0_322 | 60.6 ...
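To make the question concrete, here is a minimal, self-contained sketch, not the article's Chronicle Wire benchmark: the Event class and its fields are invented for illustration, and it simply contrasts allocating one short-lived object per event with reusing a single mutable instance. A proper harness such as JMH would give more trustworthy numbers, and the JIT's escape analysis may eliminate the per-event allocation in a tight loop like this, which is exactly why the article measures the effect in a realistic pipeline.

```java
// Sketch only: contrasts one short-lived object per event with a reused instance.
public class PerEventAllocationSketch {

    // Hypothetical small event, roughly in the spirit of the 44-byte object discussed.
    static final class Event {
        long timestamp;
        int symbolId;
        double price;

        void set(long timestamp, int symbolId, double price) {
            this.timestamp = timestamp;
            this.symbolId = symbolId;
            this.price = price;
        }
    }

    static long allocatePerEvent(int events) {
        long checksum = 0;
        for (int i = 0; i < events; i++) {
            Event e = new Event();          // one short-lived object per event
            e.set(i, i & 1023, i * 0.5);
            checksum += e.symbolId;
        }
        return checksum;
    }

    static long reuseOneEvent(int events) {
        long checksum = 0;
        Event e = new Event();              // single instance reused for every event
        for (int i = 0; i < events; i++) {
            e.set(i, i & 1023, i * 0.5);
            checksum += e.symbolId;
        }
        return checksum;
    }

    public static void main(String[] args) {
        int events = 100_000_000;
        for (int run = 0; run < 5; run++) { // repeat so the JIT warms up
            long t0 = System.nanoTime();
            long c1 = allocatePerEvent(events);
            long t1 = System.nanoTime();
            long c2 = reuseOneEvent(events);
            long t2 = System.nanoTime();
            // elapsed ns / 1e3 = microseconds, so events/us = millions of events per second
            System.out.printf("run %d: allocate %.1f M events/s, reuse %.1f M events/s (checksums %d, %d)%n",
                    run, events / ((t1 - t0) / 1e3), events / ((t2 - t1) / 1e3), c1, c2);
        }
    }
}
```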
Table of Contents

Introduction
Superhuman Intelligence Is Already Here
ATMs Didn’t Replace Bank Tellers
About Me
Multidimensional Growth
Areas of Career Development
Scope of Consideration
Roles Where All Areas Are Important
The Range of a Founder’s Role
How Will AI Change Development?
How You Ask the Question Changes the Result
Some key terms in understanding how Generative AI works
Estimating the Value of AI-Generated Documentation
AI and the Reverse Baltimore Phenomenon
The Baltimore Phenomenon
The Reverse Baltimore Phenomenon
Filling a void
Brainstorming Ideas
Sample Project 2048
Using Prompts as Meta-Programming
When AI is useful
What Generative AI Can’t Yet Do
Human in the Loop
Conclusion

This article is background material for this talk.

Lessons learnt from founding my own company, and over 30 years hands-on coding

Introduction

Unlike most deterministic development tools, Generative AI is a productivity...
Introduction

Measuring an object’s size in Java is not straightforward. The platform encourages you to consider references and abstractions rather than raw memory usage. Still, understanding how objects fit into memory can yield significant benefits, especially for high-performance, low-latency systems. Over time, the JVM has introduced optimisations like Compressed Ordinary Object Pointers (Compressed Oops) and, more recently, Compact Object Headers. Each of these can influence how large or small your objects appear. Understanding these factors helps you reason about memory usage more concretely.

Measuring Object Sizes

In principle, you can estimate an object’s size by creating instances and observing changes in the JVM’s free memory. However, you must neutralise certain factors to get consistent results. For example, turning off TLAB allocation (-XX:-UseTLAB) makes memory usage more directly observable. Repeated measurements and median calculations can reduce the im...
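As a rough illustration of the free-memory approach described in that excerpt, here is a minimal sketch; the Sample class, counts, and heap size are placeholders rather than the article's code. Run it with -XX:-UseTLAB (and ideally a fixed heap) so allocations show up directly in the Runtime free-memory readings, and take the median over several rounds to smooth out noise.

```java
import java.util.Arrays;

// Sketch: estimate the per-instance size of a class by watching how much used
// heap grows while a batch of instances is allocated. Run with -XX:-UseTLAB
// (e.g. java -XX:-UseTLAB -Xmx256m ObjectSizeEstimator) for more stable readings.
public class ObjectSizeEstimator {

    // Placeholder class to measure; swap in whatever object you are interested in.
    static final class Sample {
        int a;
        long b;
    }

    static double estimateSize(int count) {
        Object[] keep = new Object[count];         // allocated before the baseline so the array is excluded
        Runtime rt = Runtime.getRuntime();
        System.gc();                               // best-effort attempt to start from a clean heap
        long before = rt.totalMemory() - rt.freeMemory();
        for (int i = 0; i < count; i++)
            keep[i] = new Sample();
        long after = rt.totalMemory() - rt.freeMemory();
        if (keep[count - 1] == null)               // keep the array reachable until after the reading
            throw new AssertionError();
        return (after - before) / (double) count;
    }

    public static void main(String[] args) {
        int rounds = 9, count = 100_000;
        double[] estimates = new double[rounds];
        for (int r = 0; r < rounds; r++)
            estimates[r] = estimateSize(count);
        Arrays.sort(estimates);                    // median smooths out GC and JIT noise
        System.out.printf("~%.1f bytes per instance%n", estimates[rounds / 2]);
    }
}
```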
Happy New Year to you too.
It has been a pleasure to read your blog during this year. I've learnt a lot.
Please, keep on writing ;)
Hi Peter,
Thanks for a great year full of interesting posts!
Keep up the good work!
Thanks,
Markus
Keep it up!!!
Thanks Peter. Happy New Year to you also, and all the best for a prosperous year of blogging ahead.
Happy New Year! Keep up the good work!
Also, if you need a topic... PermGen profiling :)
Probably not an issue you run into much with high speed trading, but when you are running multiple vmware/virtualbox instances and each is running a glassfish app ... PermGen taming becomes a serious concern.