An unexpected error has been detected by HotSpot Virtual Machine


If you have hit a JVM crash on a ZFS file system, as is common when running zones on Solaris SPARC machines, the notes below explain a likely root cause and how to resolve or avoid this kind of crash.

Crash Error in Logs:

# An unexpected error has been detected by HotSpot Virtual Machine:
#
# SIGSEGV (0xb) at pc=0xfffffd7fff2d720a, pid=14388, tid=282
#
# Java VM: Java HotSpot(TM)
#
# Problematic frame:
# C  [libc.so.1+0xd720a]  _thr_slot_offset+0x25a
#
siginfo: si_signo=11, si_errno=0, si_code=1, si_addr=0xfffffd7fbf9ffff8

Stack: [0xfffffd7379200000,0xfffffd7379400000), sp=0xfffffd73793fec30, free space=2043k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [libc.so.1+0xd720a]  _thr_slot_offset+0x25a
C  [libc.so.1+0xd36b5]  _pthread_cond_destroy+0x16b5
C  [libc.so.1+0xd3a0f]  thr_create+0x3f
V [libjvm.so+0x4ee1b7]
V [libjvm.so+0x4f5d64]
V [libjvm.so+0x4fbb06]
j java.lang.Thread.start0()V+0

This stack trace reveals that the crash occurred because the system was either short of or out of swap space. In this particular case, the JVM was trying to create a new Java thread and there was not enough memory to create the stack for that thread.
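Before tuning anything, it is worth confirming the swap and free-memory situation on the affected system. A quick check with the standard Solaris tools might look like this (machine1 is just the example host used throughout this post):

machine1# swap -s        # summary of allocated, reserved and available swap
machine1# swap -l        # swap devices and how many blocks of each are still free
machine1# vmstat 5 3     # watch the free and sr columns for signs of memory pressure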

ZFS has a different file system caching strategy and mechanism from UFS. If a system is running multiple Solaris zones on ZFS file systems, and those zones share the underlying resources of the machine they run on, then how much memory is available to any one zone depends on the memory demands of the others.

If additional JVMs are run, for example additional Application Servers, then that zone may use more memory thereby reducing the amount available to other zones on the system. This could lead to this sort of JVM crash on a zone that was hitherto considered perfectly stable.
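To see which zone is consuming the memory, prstat with the -Z option prints a per-zone SWAP/RSS/MEMORY summary; for example (the -n and interval/count values here are just one way of taking a single snapshot):

machine1# prstat -Z -n 10 1 1    # top processes plus a per-zone memory summary
machine1# zoneadm list -cv       # confirm which zones are configured and running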
Memory usage of the ZFS file system, specifically its Adaptive Replacement Cache (ARC), is usually not restricted.

This allows the ARC to grow to close to half of the available memory on the machine.
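The ARC's current size and its ceiling can be read straight from the ZFS kernel statistics (values are reported in bytes):

machine1# kstat -p zfs:0:arcstats:size     # memory the ARC is using right now
machine1# kstat -p zfs:0:arcstats:c_max    # the maximum the ARC is allowed to grow to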

ZFS uses a significantly different caching model than page-based file systems like ufs and vxfs. This is done for both performance and architectural reasons.

ZFS frees up its cache in a way that does not cause a memory shortage; the system can operate fine with lower freemem without actually suffering from it. Unlike UFS and VxFS, ZFS does not throttle writers in the same way. UFS throttles writes once the number of dirty pages per vnode reaches 16 MB, the objective being to preserve free memory; the downside is slow application write performance that may be unnecessary when plenty of free memory is available. ZFS does not throttle an individual application the way UFS and VxFS do; it only throttles applications when the data load overflows the I/O subsystem's capacity for 5 to 10 seconds.

This may impact existing application software (such as Oracle, which likes to consume large amounts of memory). Work is being done to improve the ZFS interface to the VM subsystem so that memory can be reclaimed from the ZFS cache when necessary.

This reduces the memory available to the other zones running on the system, which can lead to a JVM crash in one of those zones.

The Solaris Modular Debugger (mdb) can be used to find out how much memory ZFS is using:

machine1# echo "::memstat" | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1018529              3978    3%
ZFS File Data            15093689             59959   45%   <-------*** Almost 1/2 of total memory
Anon                       336221              1313    1%
Exec and libs               10534                41    0%
Page cache                  57536               224    0%
Free (cachelist)            30668               119    0%
Free (freelist)          17004719             66424   51%

Total                    33551896            141062
Physical                 32626709            127448

There is an /etc/system parameter that can be used to reduce or limit ZFS's memory usage.

Setting the value below in /etc/system and rebooting the system caps the ARC, so that the JVM does not run out of memory the next time:

set zfs:zfs_arc_max=179869184

Then reboot the system for the setting to take effect.

The zfs_arc_max setting is a Solaris kernel tuning parameter, and it should be set to a value that is suitable for the system on which it is being set.
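As a rough sketch of the whole workflow, assuming the 179869184-byte (about 171 MB) cap from this example is appropriate for your workload:

machine1# echo "set zfs:zfs_arc_max=179869184" >> /etc/system    # append the ARC cap, value in bytes
machine1# init 6                                                 # reboot so the kernel re-reads /etc/system
machine1# kstat -p zfs:0:arcstats:c_max                          # after the reboot, confirm the new ARC ceiling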

 

 
