Mule - Analyze and respond to an out of memory heap dump on Linux


Get the PID

If a heap dump is unexpectedly or automatically created, ensure that the JVM associated with the heap dump is still running properly. The name of a heap dump file contains the PID of the JVM that produced the dump, and the PID can then be used to identify that JVM. Heap dump files are named in this format:

java_pid<pid>.hprof
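
For example, the PID can be pulled out of the file name. This is just a sketch; /path/to/dumps is a placeholder, so adjust the path to wherever your heap dumps land.

~]# ls /path/to/dumps/java_pid*.hprof
/path/to/dumps/java_pid12345.hprof

~]# ls /path/to/dumps/java_pid*.hprof | sed 's/.*java_pid\([0-9]*\)\.hprof/\1/'
12345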

 

Is the PID in use

Let's say the PID is 12345. The ps command can be used to determine if the PID is still associated with a running JVM. In this example, the output of the ps command only displays the grep command itself, which means the PID is no longer in use. In other words, the JVM that was associated with the PID is no longer running, or was automatically restarted as part of the heap dump. If the ps command instead returns a java process, the JVM is still running, but it is probably in a bad way. For example, the JVM may be out of memory.

~]# ps -ef | grep 12345
root  12345  1  0  21:54  pts/0  00:00:00  grep 12345
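
As an alternative to piping ps into grep, ps -p checks a single PID directly and avoids false matches on other processes that happen to contain 12345. If the PID is not in use, ps -p prints only the header line.

~]# ps -p 12345 -o pid,etime,cmd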

 

Kill the PID if in use

When the JVM is in a bad way, you will first want to kill the PID and then start the JVM.

~]# kill -9 12345
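
After killing the PID, start the JVM again. As a sketch, assuming a standalone Mule runtime installed at $MULE_HOME, the bundled control script can be used to start Mule.

~]# $MULE_HOME/bin/mule start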

 

Determine the JVM associated with the PID

You can search the logs for the PID to determine which JVM was associated with it. This command will usually produce quite a bit of output, as it searches every file at and below the specified directory for the string (java_pid12345 in this example). This may help you find the JVM that had the PID associated with the heap dump.

~]# grep -Rie 'java_pid12345' /path/to/logs/directory
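
If the search produces too much output, the timestamp on the heap dump file can narrow things down. As a sketch, assuming GNU coreutils, stat shows when the dump was written, and you can then focus on log entries around that time.

~]# stat -c '%y' java_pid12345.hprof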

 

Ensure the JVM is running

Once you know which JVM had the PID associated with the heap dump, determine if the JVM was restarted. You can check the mule_ee.log file to determine when the JVM was last restarted.

~]# cat mule_ee.log
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Mule is up and kicking (every 5000ms)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

**********************************************************
* default                                    * DEPLOYED  *
* name-373                                   * DEPLOYED  *
**********************************************************
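
Rather than paging through the entire log, you can grep for the startup banner shown above and take the last match. This is a quick sketch for finding the most recent restart, assuming each startup logs the "Mule is up and kicking" banner.

~]# grep -i 'up and kicking' mule_ee.log | tail -1
+ Mule is up and kicking (every 5000ms)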

 

Check for out of memory

Use the following command to determine if any of the Mule logs or application logs contain any out of memory events.

~]# grep -Rie 'OutOfMemory' /path/to/logs/directory

 

If OutOfMemoryError appears in any of the logs, this means that the JVM produced the heap dump because it ran out of memory. The error will usually be one of the following:

java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: Metaspace
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: GC overhead limit exceeded
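
To see at a glance which of these errors occurred, and how often, the matches can be extracted and counted. This is a sketch that assumes GNU grep (for the -R, -h, and -o options).

~]# grep -Rhoe 'java\.lang\.OutOfMemoryError: [A-Za-z ]*' /path/to/logs/directory | sort | uniq -c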


Each of these errors points to a different root cause and resolution:

  - PermGen / Metaspace
  - Java heap space
  - GC overhead limit exceeded
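
A common first response to "Java heap space" or "GC overhead limit exceeded" is to give the JVM more heap. As a sketch, assuming a standalone Mule runtime, the initial and maximum heap sizes (in MB) are set by the wrapper.java.initmemory and wrapper.java.maxmemory properties in $MULE_HOME/conf/wrapper.conf; the values shown here are just examples.

~]# grep 'wrapper.java.*memory' $MULE_HOME/conf/wrapper.conf
wrapper.java.initmemory=1024
wrapper.java.maxmemory=1024

After raising these values, restart Mule for the change to take effect.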



