A Java stack trace displays a snapshot of the current state of all threads in a JVM (Java Virtual Machine) process. This trace provides a quick and precise way to identify bottlenecks, hung threads, and resource contention in your application. In this section, we discuss how to read, understand, and analyze Java stack traces.
Understanding Thread States
The snapshot produced by a Java stack trace will display threads in various states. Not all Java stack traces use the same naming conventions, but typically each thread will be in one of the following states: runnable, waiting on a condition variable, or waiting on a monitor lock.
Threads in the runnable state represent threads that are either currently running on a processor or are ready to run when a processor is available. At any given time, there can be only one thread actually executing on each processor in the machine; the rest of the runnable threads will be ready to run but waiting on a processor. You can identify threads in a runnable state by the runnable keyword in the stack trace, as shown here:
"DynamicListenThread[Default]" daemon prio=10 tid=0x2dcb0800 nid=0x4ac runnable [0x3116f000..0x3116fc94]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        ...
Threads waiting on a condition variable are sleeping, waiting to be notified by their manager that work is ready for processing. The stack trace indicates this with the in Object.wait() message:
"[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=6 tid=0x2ce82000 nid=0x1748 in Object.wait() [0x2e0cf000..0x2e0cfd14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x0c89aaa8> (a weblogic.work.ExecuteThread)
        at java.lang.Object.wait(Object.java:485)
        at weblogic.work.ExecuteThread.waitForRequest(ExecuteThread.java:157)
        - locked <0x0c89aaa8> (a weblogic.work.ExecuteThread)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
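The wait/notify idiom that produces this state can be sketched as follows. The WorkQueue class here is a hypothetical stand-in for WebLogic's internal ExecuteThread logic, not actual product code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the wait/notify idiom behind the
// "in Object.wait()" state shown above; not actual WebLogic code.
public class WorkQueue {
    private final Deque<Runnable> requests = new ArrayDeque<>();

    // A worker blocks here until a request arrives; a thread dump
    // taken at that moment shows it WAITING (on object monitor).
    public synchronized Runnable waitForRequest() throws InterruptedException {
        while (requests.isEmpty()) {
            wait();                 // releases the monitor and sleeps
        }
        return requests.poll();
    }

    public synchronized void submit(Runnable request) {
        requests.add(request);
        notify();                   // wakes one waiting worker
    }

    public static void main(String[] args) throws InterruptedException {
        WorkQueue queue = new WorkQueue();
        Thread worker = new Thread(() -> {
            try {
                queue.waitForRequest().run();
            } catch (InterruptedException ignored) { }
        }, "worker");
        worker.start();
        Thread.sleep(200);          // worker should now be parked in wait()
        System.out.println("worker state: " + worker.getState());
        queue.submit(() -> System.out.println("request processed"));
        worker.join();
    }
}
```

A thread dump taken while the worker is parked inside waitForRequest() shows it in Object.wait(), just like the ExecuteThread above.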
Applications use monitor locks, or mutexes, to synchronize access to critical sections of code that require single-threaded access. When you see a thread that has waiting for monitor entry in its stack trace, the thread is waiting to access synchronized code, such as the thread shown here:
"[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=6 tid=0x2cff8400 nid=0x1630 waiting for monitor entry [0x2e44f000..0x2e44fd14]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at professional.weblogic.test.MutexServlet.doGet(MutexServlet.java:20)
        - waiting to lock <0x289aefb0> (a professional.weblogic.test.MutexServlet)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
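This contention is easy to reproduce in a small, self-contained program. The MutexDemo class below is a hypothetical sketch, not the MutexServlet from the trace above:

```java
// Hypothetical sketch of monitor contention: while one thread holds the
// lock inside the synchronized method, a thread dump shows the other
// thread BLOCKED (on object monitor), "waiting to lock" this instance.
public class MutexDemo {
    public synchronized void criticalSection(long holdMillis) {
        try {
            Thread.sleep(holdMillis);   // holds this object's monitor
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MutexDemo shared = new MutexDemo();
        Thread holder  = new Thread(() -> shared.criticalSection(2000), "holder");
        Thread blocked = new Thread(() -> shared.criticalSection(0), "blocked");
        holder.start();
        Thread.sleep(200);              // let the holder acquire the lock first
        blocked.start();
        Thread.sleep(200);              // give the second thread time to block
        System.out.println("blocked thread state: " + blocked.getState());
        holder.join();
        blocked.join();
    }
}
```

While the holder thread sleeps inside the synchronized method, a thread dump shows the second thread waiting for monitor entry on the shared instance.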
Two different types of thread dumps can be produced in a typical environment: system-level process dumps, otherwise known as core dumps, and Java thread dumps.
Generating System-Level Process Dumps
System-level process dumps are generated by the JVM itself in response to a system error condition; typically, this happens because some native code is trying to access an illegal memory address. The content of this dump depends on whether the JVM can call the signal handler before the process itself core dumps. If the JVM can call the signal handler, it will typically produce a text file in the current directory containing information about the process and the thread in which the low-level error occurred. If the JVM is unable to call the signal handler, a core dump file will be produced containing information about the JVM’s native operating system process rather than the Java classes themselves. This type of dump is much less valuable and should be used only if no Java stack trace is available.
Generating Java Thread Dumps
Sending a special signal to the JVM generates a Java stack trace. On most Unix platforms, the command kill -QUIT <PID>, where <PID> is the process identifier of the JVM process, sends the SIGQUIT signal and produces a Java thread dump showing the call stack of every user-level thread in the JVM. On a Windows platform, you generate a thread dump by pressing the Ctrl-Break key sequence in the console window in which the Java program is executing. In addition, you can generate a stack trace programmatically, either by invoking the static method Thread.dumpStack() or by invoking the printStackTrace() method on an instance of the Throwable class.
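The two programmatic options can be sketched as follows; both write the current thread's call stack to System.err:

```java
// Sketch of the two in-code options: Thread.dumpStack() and
// Throwable.printStackTrace(). Both print the current thread's
// call stack to System.err.
public class TraceDemo {
    static void inner() {
        Thread.dumpStack();                                // static helper
        new Throwable("stack snapshot").printStackTrace(); // instance method
    }

    public static void main(String[] args) {
        inner();   // both traces will show main() calling inner()
    }
}
```

Note that these methods capture only the current thread's stack; a signal-triggered thread dump covers every thread in the JVM.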
WebLogic Server also provides several tools that allow you to generate a thread dump. First, the WebLogic Console provides a Dump Thread Stacks button on the server’s Threads Monitoring tab. Second, the deprecated weblogic.Admin tool we briefly discussed in Chapter 12 supports a THREAD_DUMP command. Finally, you can use WLST to get the ThreadStackDump attribute on the JVMRuntime MBean, as shown here.
connect('weblogic', 'weblogic1', 't3://127.0.0.1:7001')
cd('serverRuntime:/JVMRuntime/AdminServer')
cmo.getThreadStackDump()
When analyzing or troubleshooting an application, it is important to generate multiple thread dumps over a period of time to identify hung threads properly and better understand the application state. Start by generating three to five separate thread dumps approximately 15 to 30 seconds apart. If your servers communicate with each other using RMI, it may be necessary to perform this operation on all servers in the cluster simultaneously to get a full picture. Depending on the number of servers in the cluster and the number of threads in the execute queue, this process may generate a large amount of output, but the output is invaluable in diagnosing thread-related problems. Later in this section we discuss how to read and interpret these thread dumps.
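When you control the JVM in question, the same multi-snapshot procedure can also be scripted in-process with Thread.getAllStackTraces(). The DumpCollector class below is a hypothetical sketch, not a replacement for kill -QUIT on a production server:

```java
import java.util.Date;
import java.util.Map;

// Hypothetical sketch: take several thread-dump snapshots spaced apart,
// using Thread.getAllStackTraces(). For a remote or production JVM you
// would instead send kill -QUIT at the same intervals.
public class DumpCollector {
    public static void collect(int count, long intervalMillis)
            throws InterruptedException {
        for (int i = 1; i <= count; i++) {
            System.out.println("=== Thread dump " + i + " at " + new Date() + " ===");
            for (Map.Entry<Thread, StackTraceElement[]> entry
                    : Thread.getAllStackTraces().entrySet()) {
                Thread t = entry.getKey();
                System.out.println("\"" + t.getName() + "\" " + t.getState());
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("\tat " + frame);
                }
            }
            if (i < count) Thread.sleep(intervalMillis);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        collect(3, 1_000);   // in practice, space dumps 15 to 30 seconds apart
    }
}
```

Comparing consecutive snapshots shows which threads are making progress and which are stuck at the same frame dump after dump.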
Reading Core Dumps
Sometimes it will be necessary to examine the core file to determine what has caused the JVM to core dump. When you are examining this core file, remember that Java itself uses a safe memory model and that any segmentation fault must have occurred in either the native code of the application or the native code of the JVM itself. On Unix systems, a core file will be produced when a JVM fails. On Windows systems, a drwtsn32.log file will be produced when a segmentation fault occurs in a JVM. You have several ways to examine these core files, usually through debuggers such as gdb or dbx; on Solaris, you can also use the pstack command.
When using dbx to examine the JVM core file, first move to the directory where the core file resides, then execute the dbx command with the binary executable as a parameter. Remember that the java command is usually a shell script and that you must specify the actual java binary in the command. Once you have started the debugger you can use the dbx where command to show the function call stack at the time of the failure, indicating the location of the segmentation fault.
dbx /usr/java/native/java ./core
(dbx) where
Segmentation fault in glink.JNU_ReleaseStringPlatformChars at 0xd074e66c
0xd074e66c (JNU_ReleaseStringPlatformChars+0x5b564) 80830004 lwz r4,0x4(r3)
From this information you can often determine the location of the error and take the appropriate action.
For example, if the segmentation fault is the result of a just-in-time (JIT) compiler problem and you are using the HotSpot compiler, you can modify the behavior of the JIT to eliminate the problem. Create a file called .hotspot_compiler in the directory used to start the application, and list in this file the methods to exclude from JIT compilation, using entries similar to the following.
exclude java/lang/String indexOf
The specified methods will be excluded from the JIT compilation process, eliminating the core dump.
Reading Java Stack Traces
Java stack traces can be very useful during the problem resolution process to identify the root cause of an application that seems to be hanging, deadlocked, frozen, extremely busy, or corrupting data. If your data is being corrupted, you are probably experiencing a race condition. Race conditions occur when more than one thread reads and writes to the same memory without proper synchronization. Race conditions are very hard to find by looking at stack traces because you would have to capture your snapshot at the precise instant that multiple threads are accessing the same unsynchronized code. Thread starvation or thread exhaustion can occur when threads are not making progress because they are waiting for an external resource that is either responding slowly or not at all. One particular case of this happens when WebLogic Server A makes an RMI call to WebLogic Server B and blocks waiting for a response. WebLogic Server B then calls via RMI back into WebLogic Server A before the original call returns from WebLogic Server B.
If enough threads on both servers are awaiting responses from the other server, it is possible for all threads in both servers’ execute queues to be exhausted. This exhaustion behavior will show itself initially as no idle threads available in the WebLogic Server execute queue when viewed in the WebLogic Console. You can confirm this problem by generating a stack trace and looking for threads blocked waiting for data in the weblogic.rjvm.ResponseImpl.waitForData() method. Look for entries like this:
"[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=6 tid=0x2cff7c00 nid=0x1388 waiting for monitor entry [0x2e44f000..0x2e44fd14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x02a03f38> (a weblogic.rjvm.ResponseImpl)
        at weblogic.rjvm.ResponseImpl.waitForData(ResponseImpl.java:87)
If a large number of threads are in this state you need to make appropriate design changes to eliminate RMI traffic between the servers or better throttle the number of threads allowed to call out and block in this way.
Deadlock occurs when individual threads are blocked waiting for the action of other threads. A deadly embrace deadlock occurs when one thread locks resource A and then tries to lock resource B, while a different thread locks resource B and then tries to lock resource A. Stack traces will show blocked threads within synchronized application code or within code that accesses objects using exclusive locking in one form or another. Remember that it is also possible for the application to be deadlocked across multiple JVMs with one server’s threads in a deadly embrace with another server’s threads.
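A deadly embrace is easy to reproduce in a few lines. In this hypothetical sketch, thread-1 locks lockA and then waits for lockB while thread-2 does the opposite; a thread dump of the resulting process shows both threads BLOCKED, each "waiting to lock" the monitor the other holds:

```java
import java.lang.management.ManagementFactory;

// Hypothetical sketch of a deadly embrace: each thread takes one lock
// and then blocks forever trying to take the other.
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                pause(200);                 // let thread-2 take lockB first
                synchronized (lockB) { }    // never acquired
            }
        }, "thread-1");
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                pause(200);
                synchronized (lockA) { }    // never acquired
            }
        }, "thread-2");
        t1.setDaemon(true);                 // daemon so the JVM can still exit
        t2.setDaemon(true);
        t1.start();
        t2.start();

        Thread.sleep(800);                  // let the embrace form
        long[] ids = ManagementFactory.getThreadMXBean().findDeadlockedThreads();
        System.out.println("deadlocked threads: " + (ids == null ? 0 : ids.length));
    }

    private static void pause(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) { }
    }
}
```

The ThreadMXBean check at the end shows that the JVM itself can detect monitor deadlocks at runtime, which is the same analysis a thread dump performs for you.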
A system that is inactive and exhibiting poor application performance may, in fact, be performing normally; the problem may instead be an upstream bottleneck, as described earlier in this chapter. A Java stack trace for a system in this state will show a high percentage of threads in the default execute queue blocked waiting to receive work, with a stack trace similar to the following.
"[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=6 tid=0x2cf9f000 nid=0x7f0 in Object.wait() [0x2e0cf000..0x2e0cfb14]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x0c433020> (a weblogic.work.ExecuteThread)
        at java.lang.Object.wait(Object.java:485)
        at weblogic.work.ExecuteThread.waitForRequest(ExecuteThread.java:157)
        - locked <0x0c433020> (a weblogic.work.ExecuteThread)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
The stack trace indicates that this thread is idle, or waiting for a request, rather than busy in application code or waiting on an external resource.
Understanding WebLogic Server Stack Traces
Stack traces of a WebLogic Server instance will also show a number of common elements based on the internal design of the WebLogic Server product.
All client requests enter WebLogic Server through a special thread called the listen thread. There will usually be two listen threads visible in a stack trace, one for SSL and the other for nonsecure transport. Here is an example of the WebLogic Server listen thread waiting for a connection to arrive:
"DynamicListenThread[Default]" daemon prio=10 tid=0x2d384c00 nid=0x1254 runnable [0x310cf000..0x310cfb94]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        - locked <0x0cb9ee20> (a java.net.SocksSocketImpl)
        at java.net.ServerSocket.implAccept(ServerSocket.java:453)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
Socket connections received by WebLogic Server are registered and maintained by the socket muxer. The socket muxer reads data from the socket and dispatches the request to the appropriate subsystem. The socket muxer has its own execute thread pool that it uses to read the requests off the socket by calling the processSockets() method, as shown here for the Windows version of the native socket muxer.
"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" daemon prio=6 tid=0x2d03f400 nid=0x180 runnable [0x2ebff000..0x2ebffc14]
   java.lang.Thread.State: RUNNABLE
        at weblogic.socket.NTSocketMuxer.getIoCompletionResult(Native Method)
        at weblogic.socket.NTSocketMuxer.processSockets(NTSocketMuxer.java:81)
        at weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
        at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)