Type: Bug
Resolution: Fixed
Priority: High
Affects Version/s: 6.4.3, 6.9.0
Component/s: Server - Platform
Severity: 2 - Major
Summary
Confluence ships with a version of the Spring Framework that contains a deadlock bug.
Analysis
The application hangs. When this happens, threads in Confluence are blocked behind a thread whose stack looks like this:
"ajp-nio-48009-exec-289"
   java.lang.Thread.State: RUNNABLE
        at java.util.Arrays.copyOf(Arrays.java:3181)
        at java.util.ArrayList.grow(ArrayList.java:265)
        at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:239)
        at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:231)
        at java.util.ArrayList.add(ArrayList.java:462)
        ...
        at com.atlassian.plugin.osgi.spring.DefaultSpringContainerAccessor.createBean(DefaultSpringContainerAccessor.java:97)
        at com.atlassian.confluence.plugin.ConfluencePluginObjectFactory.buildAction(ConfluencePluginObjectFactory.java:45)
        at com.opensymphony.xwork.DefaultActionInvocation.createAction(DefaultActionInvocation.java:199)
        at com.opensymphony.xwork.DefaultActionInvocation.init(DefaultActionInvocation.java:272)
        at com.opensymphony.xwork.DefaultActionInvocation.<init>(DefaultActionInvocation.java:65)
        at com.opensymphony.xwork.DefaultActionInvocation.<init>(DefaultActionInvocation.java:58)
        at com.opensymphony.xwork.DefaultActionProxyFactory.createActionInvocation(DefaultActionProxyFactory.java:32)
        at com.opensymphony.xwork.DefaultActionProxy.prepare(DefaultActionProxy.java:124)
        at com.opensymphony.xwork.DefaultActionProxy.<init>(DefaultActionProxy.java:75)
        at com.servicerocket.confluence.randombits.conveyor.xwork.ConveyorActionProxy.<init>(ConveyorActionProxy.java:13)
        at com.servicerocket.confluence.randombits.conveyor.xwork.ConveyorActionProxyFactory.createActionProxy(ConveyorActionProxyFactory.java:25)
        ...
        at com.atlassian.confluence.impl.vcache.VCacheRequestContextFilter$$Lambda$816/1971323094.perform(Unknown Source)
        at com.atlassian.confluence.impl.vcache.VCacheRequestContextManager.doInRequestContextInternal(VCacheRequestContextManager.java:87)
        at com.atlassian.confluence.impl.vcache.VCacheRequestContextManager.doInRequestContext(VCacheRequestContextManager.java:71)
        ...
        at org.apache.coyote.ajp.AbstractAjpProcessor.process(AbstractAjpProcessor.java:877)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1539)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1495)
        - locked <0x00000006d607a218> (a org.apache.tomcat.util.net.NioChannel)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:748)
For example, we've seen this thread block ~300 other threads from completing. From a fastthread.io report:
ajp-nio-48009-exec-289 thread obtained java.util.concurrent.ConcurrentHashMap's lock & did not release it. Due to that 319 threads are BLOCKED as shown in the below graph.
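The blocked-thread count reported above can also be derived directly from a raw jstack dump, without a third-party tool. A minimal sketch (the class name and the two-thread sample dump are made up for illustration; a real dump would come from `jstack <pid>`):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BlockedThreadCounter {
    private static final Pattern BLOCKED =
        Pattern.compile("java\\.lang\\.Thread\\.State: BLOCKED");

    /** Counts threads reported as BLOCKED in jstack-style thread-dump text. */
    public static long countBlocked(String dump) {
        Matcher m = BLOCKED.matcher(dump);
        long n = 0;
        while (m.find()) n++;
        return n;
    }

    public static void main(String[] args) {
        // Made-up two-thread sample; one thread blocked, one runnable.
        String sample =
            "\"exec-1\"\n   java.lang.Thread.State: BLOCKED (on object monitor)\n" +
            "\"exec-2\"\n   java.lang.Thread.State: RUNNABLE\n";
        System.out.println("blocked threads: " + countBlocked(sample));
    }
}
```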
Notes
We believe this Spring bug is the cause: SPR-14388, "Deadlock while creating a new thread on bean initialization with transactional code invocation" (Spring JIRA).
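The SPR-14388 report describes a classic self-deadlock shape: the thread initializing a bean holds the container's singleton-creation lock while waiting on a thread it spawned, and that spawned thread needs the same lock to proceed. A minimal sketch of that shape (not Spring's actual code; a plain Object stands in for the internal registry lock, and the join uses a timeout so the demo terminates instead of hanging):

```java
public class SingletonInitDeadlockDemo {
    // Stands in for Spring's singleton-registry monitor; hypothetical, not a real Spring field.
    static final Object singletonLock = new Object();

    /** Holds the lock, spawns a worker that needs it, waits briefly, and reports the worker's state. */
    public static Thread.State stateOfWorkerWhileLockHeld() {
        Thread worker = new Thread(() -> {
            synchronized (singletonLock) {
                // In the real bug, the spawned thread would try to create another bean here.
            }
        });
        Thread.State observed;
        try {
            synchronized (singletonLock) {    // "bean initialization" holds the registry lock...
                worker.start();
                worker.join(500);             // ...and waits on the spawned thread: the deadlock shape
                observed = worker.getState(); // the worker is stuck waiting for the same lock
            }
            worker.join();                    // lock released above, so the worker can now finish
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return observed;
    }

    public static void main(String[] args) {
        System.out.println("worker state while lock held: " + stateOfWorkerWhileLockHeld());
    }
}
```

With a real (untimed) wait, neither thread ever proceeds, which matches the hang and the hundreds of BLOCKED threads observed above.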
is blocked by: PSR-106