Get the PID
If a heap dump is unexpectedly or automatically created, first verify that the JVM that produced it is still running properly. The heap dump file name contains the PID of the process that produced the dump, and that PID can then be used to identify the JVM. Heap dump files are named in this format:
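As a sketch, assuming the default IBM J9 naming convention `heapdump.<yyyymmdd>.<hhmmss>.<pid>.<sequence>.phd` (HotSpot JVMs name dumps differently), the PID can be pulled out of the file name like this:

```shell
# Hypothetical heap dump file name; the field layout assumes the
# default IBM J9 naming convention.
DUMP="heapdump.20180201.042732.12345.0001.phd"

# Fields are dot-separated: heapdump . date . time . pid . sequence . phd,
# so the PID is the fourth field.
PID=$(echo "$DUMP" | cut -d. -f4)
echo "$PID"
```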
Is the PID in use
Let's say the PID is 12345. The ps command can be used to determine whether the PID is still associated with the JVM. In this example, the output of the ps command only displays the grep command itself, which means the PID is no longer in use: the JVM that was associated with the PID is either no longer running or was automatically restarted as part of the heap dump. If ps instead shows a Java process, the JVM is still running, but it is probably in an unhealthy state. For example, the JVM may be out of memory.
~]# ps -ef | grep 12345
root 12345 1 0 21:54 pts/0 00:00:00 grep 12345
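Because grep can match its own process in the ps output, a more direct liveness check is `kill -0`, which sends no signal and only tests whether the process exists. A minimal sketch:

```shell
PID=12345  # PID taken from the heap dump file name

# kill -0 delivers no signal; it only checks whether the process
# exists and we have permission to signal it.
if kill -0 "$PID" 2>/dev/null; then
    echo "PID $PID is still in use"
else
    echo "PID $PID is no longer in use"
fi
```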
Kill the PID if in use
When the JVM is in an unhealthy state, first kill the PID, and then restart the JVM.
~]# kill -9 12345
Determine the JVM associated with the PID
You can search the logs for the PID to determine which JVM was associated with it. This command searches every file at and below the specified directory for the string (process id 12345 in this example), so it will usually produce quite a bit of output. This may help you find the JVM whose PID is associated with the heap dump.
~]# grep -R /path/to/logs/directory -ie 'process id 12345'
The JVM's SystemOut.log can be used to determine which PID the JVM is using (12345 in this example).
********** Start Display Current Environment **********
WebSphere Platform x.x.x.x running with process named cell\node\jvm and process id 12345
Start the application server
You can now start the application server.
Ensure the JVM is running
Once you know which JVM had the PID associated with the heap dump, determine whether the JVM was restarted. You can check the SystemOut.log file for the event "open for e-business" to determine when the JVM was last started.
~]# grep 'e-business' SystemOut.log
[2/1/18 4:27:32:991 CST] 0000001 WsServerImpl A WSVR0002I: Server server1 open for e-business; process id is 12345.
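Since SystemOut.log is append-only, the last matching line is the most recent restart. A minimal sketch that prints only that line (the log path is assumed):

```shell
# Print the most recent "open for e-business" event; tail -1 keeps
# only the last match, which is the latest restart in an
# append-only log.
grep 'open for e-business' SystemOut.log 2>/dev/null | tail -1
```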
Check for out of memory
An OutOfMemory event may appear in SystemErr.log, native_stderr.log, or an FFDC log before the heap dump event. Since the event may be in any of these logs, I like to use this command to search all of them.
grep -R /path/to/logs/directory -ie 'OutOfMemory'
If there are out of memory events, the heap dump was produced because the JVM ran out of memory.
java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: Metaspace
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: GC overhead limit exceeded
- PermGen / Metaspace - Look into the PermGen or Metaspace event
- Java heap space - Analyze the heap dump using IBM's Heap Analyzer or Eclipse Memory Analyzer (MAT) and check for a memory leak in Introscope
- GC overhead limit exceeded - Analyze the garbage collection log (native_stderr.log or native_stdout.log) and check for garbage collection problems in Introscope
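The checks above can be sketched as a small loop that counts each OutOfMemoryError flavor, so you know which follow-up applies (the log directory path is a placeholder to replace with your own):

```shell
LOGDIR=/path/to/logs/directory  # assumed log directory

# Count each OutOfMemoryError flavor across all logs so you know
# which follow-up step applies.
for type in 'PermGen space' 'Metaspace' 'Java heap space' 'GC overhead limit exceeded'; do
    count=$(grep -R "$LOGDIR" -e "java.lang.OutOfMemoryError: $type" 2>/dev/null | wc -l)
    echo "$type: $count"
done
```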
Check for hung threads
This command searches the logs for hung thread warnings (WSVR0605W). If there are hung threads, refer to this article.
grep -R /path/to/logs/directory -ie 'WSVR0605W'