JVM memory tuning with Docker

I have several applications that were previously running with 450GB of memory. That seemed like a valid consumption figure after I manually calculated the memory footprint, summing up the sizes of all the primitive types and objects.

However, the same applications can now run with ~50GB inside a Docker container.

It looks like the gap came from underestimating the JVM's own intelligence.

A little background: it has become common practice to set the `-Xmx` and `-Xms` parameters manually at application startup. But it seems the JVM itself can do a better job than a developer's estimate of the heap size.
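One way to see what the JVM would choose on its own: recent JVMs (8u191+, and all of 9+) are container-aware and derive the default max heap from the container's memory limit rather than the host's RAM. A quick check, assuming Docker and the `eclipse-temurin` image (any JDK image works):

```shell
# Run a throwaway container with a 4g memory limit and no -Xmx,
# then print the heap size the JVM picked for itself.
docker run --rm --memory=4g eclipse-temurin:17 \
  java -XX:+PrintFlagsFinal -version | grep -i MaxHeapSize
```

By default the JVM caps the heap at a fraction of the container limit (25% via `-XX:MaxRAMPercentage` unless overridden), which you can raise if the heap is the dominant consumer.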

I had been tuning some of these applications continuously for quite some time, moving `-Xmx` up and down from 300GB to 400GB to 500-600GB, before finally settling at 450GB.

Now, with string deduplication enabled, which matters here because there are a lot of repeated strings like currency, security, and PM names, plus a Docker setup where I set a memory limit and dropped the `-Xmx` parameter entirely, the JVM really shines: it probably kicks off GC more intelligently and aggressively, and it derives the maximum heap size more accurately than my hand-tuned `-Xmx` ever did.
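Putting those pieces together, a launch along these lines is what I mean (the image name and jar are placeholders, not the actual applications):

```shell
# Sketch: cap memory at the container level, drop -Xmx, and let the
# JVM size its heap from the limit. String deduplication requires G1,
# which is already the default collector on modern JVMs.
docker run --rm --memory=64g my-app-image \
  java -XX:MaxRAMPercentage=75.0 \
       -XX:+UseG1GC \
       -XX:+UseStringDeduplication \
       -jar app.jar
```

`-XX:MaxRAMPercentage=75.0` leaves headroom for off-heap usage (metaspace, thread stacks, direct buffers) so the container isn't OOM-killed; the exact percentage is workload-dependent.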
