As a result of the CFMX/Commonspot upgrade I blogged about a few days ago, I've been troubleshooting some CFMX crashing problems along with "java.lang.OutOfMemoryError" errors and have been tuning the JVM settings for the servers. Along the way I ended up doing a good bit of research on JVM settings and the differences between the Heap Size and the MaxPermSize.
It's fairly well documented on various blogs and discussion lists that both the JVM Heap Size and MaxPermSize are important when dealing with "out of memory" errors in CFMX, but I'd never completely understood what these settings or their values mean.
First off, what is the JVM Heap? The heap is memory that's been allocated to store Java classes and objects. The JVM heap is split into several areas, and newer/older objects are shuffled between the sections based on their age and use. The shuffling of objects occurs during Garbage Collection, a process which also discards objects that are no longer used. Depending on system settings, garbage collection runs periodically, cleaning up objects and shuffling them around. More great information on garbage collection in CFMX can be found on Brandon Purcell's blog and in Sun's tuning article on garbage collection (via Tom Link).
About heap size - it's generally a good idea to set the JVM's max heap size to at least 512MB, and no matter what the max is, the initial heap size should be set to the same value. Making these settings equal prevents the JVM from frequently re-sizing the heap as it grows, something that can trigger frequent garbage collections and degrade performance.
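In CFMX these settings live on the java.args line of jvm.config, where -Xms is the initial heap size and -Xmx is the max. Here's a rough sketch of what that looks like (the values and the other flags are just illustrations, not a recommendation for your server):

```
# Illustrative fragment of a CFMX jvm.config java.args line (values are examples)
# -Xms sets the initial heap size, -Xmx the maximum; keeping them equal
# avoids repeated heap re-sizing and the extra garbage collections it triggers.
java.args=-server -Xms512m -Xmx512m
```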
So when does it make sense to increase the JVM heap to a value larger than 512MB? Generally, no one cares about this value until they start to get java.lang.OutOfMemoryError messages, and perhaps that's why you are reading this. If the JVM heap size is set too small, the server's memory slowly fills up until eventually an object is created and there's no more room in memory for it. That object becomes the proverbial straw that breaks the camel's back and will cause your server to crash. Conversely, if you set the value too high, a lot of junk objects build up in the heap and garbage collection can take a long time cleaning them up. This can cause a serious degradation in performance.
Again, I'll point at Brandon Purcell's blog for more help in finding the right size for the heap, though it also works to just adjust the heap size and watch the server for errors or performance problems.
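If you want to watch the heap from inside the JVM itself, the standard java.lang.Runtime methods will report the current numbers. A quick sketch of my own (the class name and output format are mine, not from any of the articles above):

```java
// HeapStats.java - a quick sketch of my own for eyeballing heap usage
public class HeapStats {

    // Convert bytes to whole megabytes for readable output.
    static long mb(long bytes) {
        return bytes / (1024L * 1024L);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max   = rt.maxMemory();    // ceiling set by -Xmx
        long total = rt.totalMemory();  // heap currently claimed from the OS
        long free  = rt.freeMemory();   // unused portion of the claimed heap
        System.out.println("max heap:  " + mb(max) + " MB");
        System.out.println("used heap: " + mb(total - free) + " MB");
    }
}
```

Note that Runtime only sees the heap - it tells you nothing about the permanent generation, which is the other half of this story.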
That's a description of the heap, but what about this other setting, -XX:MaxPermSize? It turns out this sets the size of something called the "Permanent Generation". A good definition of the Permanent Generation is found in the Sun article Frequently Asked Questions about Garbage Collection in the Hotspot™ Java™ Virtual Machine:
The permanent generation is used to hold reflective data of the VM itself such as class objects and method objects. These reflective objects are allocated directly into the permanent generation, and it is sized independently from the other generations. Generally, sizing of this generation can be ignored because the default size is adequate. However, programs that load many classes may need a larger permanent generation.
So the permanent generation contains information about the objects in the heap. Ah-ha! Now we can start to understand how these two numbers are related to each other. The heap stores the objects, and the permanent generation keeps track of information about the objects. Consequently, the more objects there are in the heap, the more information there is to be tracked about them, and the larger the permanent generation needs to be. Another quote from Sun's documentation comes in handy here:
For most applications the permanent generation is not relevant to garbage collector performance. However, some applications dynamically generate and load many classes. For instance, some implementations of JSP™ pages do this. If necessary, the maximum permanent generation size can be increased with MaxPermSize.
This also applies to compiled ColdFusion class files. If there are a lot of class files in /coldfusionmx/wwwroot/WEB-INF/cfclasses/, then you'll likely need to increase the MaxPermSize setting. Following this logic, if you have an application server with a large number of small class files, you are likely to need a larger MaxPermSize than if you have a server with a small number of large class files.
Note: Instructions for increasing the MaxPermSize can be found in ColdFusion MX: Tips for performance and scalability.
As it turns out, in my recent Commonspot/CFMX upgrade the new codebase must have generated more classes, or at least more class information. Even though the heap size was still large enough after the upgrade, I had to change the MaxPermSize from 128MB to 256MB to make the java.lang.OutOfMemoryError plaguing the site go away.
Hopefully this explanation will come in handy for others tuning their memory settings!
Added 4:14PM 5.28.04
Note: An important tidbit I forgot to include in this post initially - the MaxPermSize is in addition to the Heap, so if you add the two numbers together you'll get the total memory JRun/CF should consume before blowing up.
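To make that concrete with illustrative numbers (these values are examples, not a recommendation), a server running with a 512MB heap and a 256MB permanent generation tops out around 768MB across those two pools:

```
# Illustrative jvm.config fragment (values are examples):
#   -Xmx512m              -> up to 512 MB of heap
#   -XX:MaxPermSize=256m  -> up to 256 MB of permanent generation
# Combined ceiling for these two pools: 512 + 256 = 768 MB, and the
# process itself will sit somewhat higher due to the JVM's own overhead.
java.args=-server -Xms512m -Xmx512m -XX:MaxPermSize=256m
```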