The Plugins2 system in JIRA 4 appears to leak heap and PermGen memory when it is restarted.

      Restarting Plugins2 while there are active Plugins2 plugins (in JIRA this happens when doing a full system import) does not make all previously referenced memory available for GC. In particular, a number of ClassLoaders are left live with no obvious GC root, and their classes are themselves roots for a whole graph of circularly referenced garbage. The plugins do not have to have been used for the leak to occur. There is not a one-to-one correspondence between restarts and leaks, however: we generally see about 6 dead plugin systems even though 30 or more restarts may have occurred.

      In the process of diagnosing this we tracked down and removed every reference we could find to a large section of memory (a plugin framework). We started with all strong references, then removed soft and weak references - even things like clearing the java.lang.reflect.Proxy cache - and finally Finalizer references, until YourKit, Eclipse MAT, JProfiler and jhat all reported that the memory in question is dead and should be collectable. Inexplicably, the JVM still holds on to it. There are no JNI global references either, yet this memory remains uncollectable.
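
      As a minimal sketch of the kind of collectability check involved (the pluginFramework object below is a stand-in, not actual JIRA code, and System.gc() is only a hint to the collector):

        import java.lang.ref.WeakReference;

        public class CollectabilityCheck {
            public static void main(String[] args) throws InterruptedException {
                Object pluginFramework = new Object(); // stand-in for the old Plugins2 system
                WeakReference<Object> ref = new WeakReference<Object>(pluginFramework);

                pluginFramework = null; // drop the last strong reference we know about

                // System.gc() is only a hint; poll a few times before drawing conclusions
                for (int i = 0; i < 10 && ref.get() != null; i++) {
                    System.gc();
                    Thread.sleep(100);
                }

                System.out.println(ref.get() == null
                        ? "old plugin framework was collected"
                        : "old plugin framework is still reachable");
            }
        }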

      This happens on every JVM we have tried so far: the 1.5 and 1.6 JVMs on Linux and Mac OS X, and the IBM 1.6 JDK on Linux.

      In practice this doesn't hurt a standard JIRA customer, as a full system import is generally a one-off operation. It does, however, seriously affect runs of our functional test suite.

      This is a JVM GC bug (4957990). No memory analysis tool we know of can find the heap root (i.e. according to "the rules" there is no heap root). Are there any known GC memory leaks caused, for instance, by ClassLoaders being dropped?

      The application creates and disposes of a lot of ClassLoaders via OSGi (Apache Felix) with Spring OSGi, and it creates a lot of java.lang.reflect.Proxy instances.
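
      A rough illustration of why the Proxy usage matters (PluginService is a hypothetical interface, not a JIRA, Felix or Spring OSGi class): the generated proxy class is defined in the ClassLoader passed to newProxyInstance, so anything that keeps a proxy reachable also keeps that loader alive.

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        public class ProxyPinning {
            // hypothetical interface standing in for whatever a plugin exports
            public interface PluginService {
                String name();
            }

            public static void main(String[] args) {
                // in the real system this would be the plugin bundle's ClassLoader
                ClassLoader pluginLoader = PluginService.class.getClassLoader();

                PluginService proxy = (PluginService) Proxy.newProxyInstance(
                        pluginLoader,
                        new Class<?>[] { PluginService.class },
                        new InvocationHandler() {
                            public Object invoke(Object o, Method m, Object[] a) {
                                return "proxied";
                            }
                        });

                // the proxy class is defined in pluginLoader, so caches, listeners or
                // ThreadLocals holding this proxy keep the plugin's ClassLoader - and
                // every class it defined - alive
                System.out.println(proxy.name() + " loaded by " + proxy.getClass().getClassLoader());
            }
        }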

      Strictly speaking this is not actually a memory leak as the additional memory use is bounded - memory use does not go up forever.

      Attachments:

        1. java_pid10050.hprof.gz (126.64 MB)
        2. java_pid10050-gc-debug-log.gz (147 kB)
        3. java_pid2881.hprof.gz (39.23 MB)
        4. tomcat-sun6-macos-cms.hprof.gz (45.46 MB)

            [JRASERVER-16932] Memory Leak in Sun JVM tripped by Plugins2

            Sylvain Laurent added a comment - It seems to be fixed in the upcoming 6u21 release: http://download.java.net/jdk6/6u21/promoted/b05/changes/JDK6u21.list.html

            Also, to work around it on Mac OS X 10.6 (64-bit), you can use "-d32 -client" to run the client VM.

            Jed Wesley-Smith (Inactive) added a comment - Marking as Won't Fix as there is nothing we can do about it. Also, it does not affect customers in the general case, as they will not be regularly re-importing their data.

            Jed Wesley-Smith (Inactive) added a comment - Edited to confirm that this is indeed a Sun JVM bug. Note that a workaround is to use a different JVM.

            Jed Wesley-Smith (Inactive) added a comment - Attaching heap dump and GC debug logs generated using: -XX:+PrintGCDetails -XX:+TraceClassLoading -XX:+TraceClassUnloading

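            For anyone wanting to regenerate this kind of dump and log, the flags can be passed to the JIRA/Tomcat JVM roughly as follows (CATALINA_OPTS is just the standard Tomcat mechanism, and -Xmx128m / -XX:+HeapDumpOnOutOfMemoryError come from the heap-dump comment further down; adjust for your own setup):

              export CATALINA_OPTS="-Xmx128m -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+TraceClassLoading -XX:+TraceClassUnloading"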

            Jed Wesley-Smith (Inactive) added a comment - Actually, specifying -client on a 64-bit Mac OS X JVM does nothing. The JVM bug linked above, or some variant of it, is almost certainly the cause.

            Jed Wesley-Smith (Inactive) added a comment - edited - Some conjecture that this JDK bug is the culprit.

            Edit: removed the earlier suggestion of specifying -client on a Mac OS X 64-bit JVM, which doesn't work.

            Jed Wesley-Smith (Inactive) added a comment - The attached heap dump was generated via a HeapDumpOnOutOfMemoryError handler with -Xmx128m. It contains 45 MB of referenced memory and 80 MB of dead but uncollectable memory. For instance, search for instances of JiraPluginManager.

            Jed Wesley-Smith (Inactive) added a comment - How do we search for what is referencing this uncollectable memory? Are there any other tools that can help find why this memory is not collected? Can we query the VM directly somehow?

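            A couple of concrete ways to interrogate the dumps, for what it's worth (the package for JiraPluginManager in the OQL line is a guess, and the commands assume a Sun JDK 6 install):

              # dump a running VM and browse it with jhat (serves http://localhost:7000)
              jmap -dump:format=b,file=heap.hprof <pid>
              jhat -J-Xmx1g heap.hprof

              # in Eclipse MAT, the OQL view can list the suspect instances; on each result,
              # "Path To GC Roots" (excluding weak/soft references) shows what still refers to it
              SELECT * FROM INSTANCEOF com.atlassian.jira.plugin.JiraPluginManager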

              Assignee: Unassigned
              Reporter: Jed Wesley-Smith (Inactive)
              Affected customers: 0
              Watchers: 5
