Should server applications limit themselves to 4 GB?
Overview
A very interesting talk by Gil Tene, CTO of Azul Systems, raises many timely issues about GC performance and server memory sizes.
A typical new server should be 100 GB
One point he brings up is that, with memory so cheap, a typical server should have around 100 GB (at around $7K); installing less memory is wasting data centre space. A typical JVM uses only 2-4 GB, as GC pauses hurt responsiveness beyond that point. However, if you are developing an application which uses only 2-4 GB, you are writing an application which will soon fit on a mobile phone. So how can we write server applications in Java which can use the whole server?
Java without the GC Pauses: Keeping Up with Moore’s Law and Living in a Virtualized World
He raises questions about how best to use that memory and, not surprisingly, Azul have a product which scales to that much memory efficiently.
For those who are not planning to buy such a system, you could be thinking about how you can use more memory efficiently.
My view
When you increase the Eden size of an application which creates only temporary garbage, the pause times don't increase. It's the retained objects which increase the pause time.
The simplest thing to try is using a very large Eden size and seeing whether the pause times increase or perhaps decrease. They can decrease because a larger Eden gives medium-lived objects more time to die, so fewer objects leave the Eden/Survivor spaces and make it into the tenured space, which requires long pauses to clean up.
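To see the distinction in code, here is a minimal sketch (the class name and allocation sizes are illustrative, not from the talk). Run it with a large young generation, e.g. `-Xmn4g` on HotSpot, and the short-lived garbage is collected cheaply in Eden, while only the retained objects can be promoted to the tenured space:

```java
import java.util.ArrayList;
import java.util.List;

public class EdenDemo {
    // Short-lived garbage: with a large Eden these objects die before any
    // collection, and dead objects in Eden cost almost nothing to clean up.
    static void churn(int count) {
        for (int i = 0; i < count; i++) {
            byte[] tmp = new byte[1024]; // immediately unreachable
        }
    }

    // Retained objects survive collections and are promoted to the tenured
    // space; these, not the temporary garbage, drive the long pauses.
    static List<byte[]> retain(int count) {
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < count; i++)
            retained.add(new byte[1024]);
        return retained;
    }

    public static void main(String[] args) {
        churn(1_000_000);                  // ~1 GB of temporary garbage
        List<byte[]> live = retain(1_000); // ~1 MB of retained data
        System.out.println("retained " + live.size() + " blocks");
    }
}
```

Running with `-verbose:gc` and varying `-Xmn` is a simple way to compare how pause times respond to the two kinds of allocation.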
What can you buy today
The prices Gil quotes are actually six months old now.
I only quote Dell here because I find their web site easy to get a quote from.
I found that a Dell T610 with 96 GB costs $5.8K and a T710 with 192 GB costs $12.2K on their web site.
I recently bought a PC with 24 GB for £1K. ;)