
Get the PID
If a heap dump is unexpectedly or automatically created, first ensure that the JVM that produced it is still running properly. The heap dump file name contains the PID of the process that produced the dump, and that PID can then be used to identify the JVM. Heap dump files are named in this format:
heapdump.<date>.<time>.<pid>.phd
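For example, a dump written by process 12345 would look something like the following (the date and time values here are illustrative):
~]# ls heapdump.*.phd
heapdump.20180201.042732.12345.phd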
Is the PID in use
Let's say the PID is 12345. The ps command can be used to determine whether the PID is still in use. In this example, the output of ps only shows the grep command itself, which means the PID is no longer in use: the JVM that was associated with it is either no longer running or was automatically restarted as part of the heap dump. If ps shows a process for the PID, the JVM is still running, but it is probably in a bad state; for example, it may be out of memory.
~]# ps -ef | grep 12345
root 12345 1 0 21:54 pts/0 00:00:00 grep 12345
Kill the PID if in use
When the JVM is in a bad state, first kill the PID and then restart the JVM.
~]# kill -9 12345
Determine the JVM associated with the PID
You can search the logs for the PID to determine which JVM was associated with it. The command below searches every file at and below the specified directory for the string 'process id 12345' (using our example PID), so it will usually produce quite a bit of output, but it may help you find the JVM that had the PID associated with the heap dump.
~]# grep -R /path/to/logs/directory -ie 'process id 12345'
Each JVM's SystemOut.log records the PID the JVM is using (12345 in this example):
********** Start Display Current Environment **********
WebSphere Platform x.x.x.x running with process named cell\node\jvm and process id 12345
Start the application server
You can now start the application server.
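For example, assuming a standalone profile at /path/to/profile and a server named server1 (adjust the profile path and server name for your environment):
~]# /path/to/profile/bin/startServer.sh server1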
Ensure the JVM is running
Once you know which JVM had the PID associated with the heap dump, determine whether the JVM was restarted. You can check the SystemOut.log file for the string "open for e-business" to determine when the JVM was last started.
~]# grep e-business SystemOut.log
[2/1/18 4:27:32:991 CST] 0000001 WsServerImpl A WSVR0002I: Server server1 open for e-business; process id is 12345.
Check for out of memory
If basic logging is being used, an OutOfMemory event may appear in SystemErr.log, native_stderr.log, or an FFDC log before the heap dump event. Since the event may be in any of several logs, I like to use this command to search all of them.
grep -R /path/to/logs/directory -ie 'OutOfMemory'
If HPEL logging is being used, the logViewer command can be used.
logViewer.sh | grep -i 'OutOfMemory'
If the out of memory event was captured, the type of memory that was exhausted can be identified.
java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: Metaspace
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: GC overhead limit exceeded
PermGen / Metaspace
Java heap space
- If you have similar application servers in different environments (development, production), check whether the heap size is the same across environments (see the example command after this list)
- Analyze the heap dump using IBM's Heap Analyzer or the Eclipse Memory Analyzer (MAT)
- Determine what caused the heap dump using Introscope
- Check for a memory leak in the Tivoli Performance Viewer or Introscope
- Check to see if sessions are allowed to overflow
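One way to compare the configured heap sizes across environments, assuming the default configuration layout under each profile (the path below is a placeholder), is to search the configuration for the heap size settings in server.xml (the initialHeapSize and maximumHeapSize attributes of the jvmEntries element):
~]# grep -R /path/to/profile/config -ie 'heapSize'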
GC overhead limit exceeded
- Analyze the garbage collection log (native_stderr.log or native_stdout.log; see the command after this list for locating it) using IBM's Garbage Collection and Memory Visualizer, the Pattern Modeling and Analysis Tool (PMAT), or GCEasy
- Check for garbage collection problems in Introscope
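Which of the two native log files actually contains the verbose GC output depends on the JVM and how verbose GC was enabled. A quick way to check, assuming the default log location (the patterns below match the IBM J9 verbosegc XML header and typical HotSpot GC lines, and may need adjusting for your JVM):
grep -l -ie 'verbosegc' -e 'Full GC' /path/to/logs/directory/native_std*.log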
Check for hung threads
Use this command to search the logs for the WSVR0605W hung thread warning. If there are hung threads, refer to this article.
grep -R /path/to/logs/directory -ie 'WSVR0605W'
Check for CWRLS0030W
If CWRLS0030W is found in the HPEL or basic logs, refer to this article.
CWRLS0030W: Waiting for HAManager to activate recovery processing for local WebSphere server
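The basic logs can be searched for this message in the same way as the other checks above:
grep -R /path/to/logs/directory -ie 'CWRLS0030W'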