- Type: Bug
- Resolution: Fixed
- Priority: Medium
- Affects Version/s: 7.0.1, 7.4.18, 7.13.9, 7.19.1
- Component/s: Editor - Synchrony
- Severity: 2 - Major
The fix for this bug will be released to our Long Term Support release.
The fix for this bug has been approved for backport and will be available in an upcoming 8.5 release of Confluence. Check the fix-version field for details.
This fix will not be available in Confluence 7.19, so an upgrade to Confluence 8.5 will be required.
Issue Summary
After collaborative editing is turned off, the Synchrony proxy still tries to connect to Synchrony until the handshake timeout is reached, so a number of HTTP threads can pile up while waiting. In catalina.out, a number of warnings appear stating that the synchrony-proxy web application failed to stop its HTTP exec threads.
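The thread dumps further below show the HTTP exec threads parked inside CompletableFuture.get with a timeout. The blocking pattern can be sketched in a minimal, self-contained Java example (the class and method names here are hypothetical, not Atlassian code): a future that never completes, as with a handshake that can no longer succeed, parks the calling thread until the timeout fires.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class HandshakeTimeoutDemo {

    /**
     * Waits up to timeoutSec for the handshake future to complete.
     * Returns true if it completed in time, false on timeout or failure.
     * While waiting, the calling thread is parked in TIMED_WAITING,
     * matching the state seen in the thread dumps.
     */
    static boolean awaitHandshake(CompletableFuture<String> handshake, long timeoutSec) {
        try {
            handshake.get(timeoutSec, TimeUnit.SECONDS);
            return true;
        } catch (TimeoutException e) {
            return false; // handshake did not finish before the timeout
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A future that is never completed, standing in for a handshake
        // that cannot succeed once collaborative editing is off.
        CompletableFuture<String> neverCompletes = new CompletableFuture<>();
        System.out.println("handshake succeeded: " + awaitHandshake(neverCompletes, 1));
    }
}
```

With a long timeout and many incoming connections, each connection ties up one exec thread in this parked state, which is what exhausts the pool.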
Environment
- Confluence Server 7.0.1
- Synchrony-proxy running in front of Synchrony
Steps to Reproduce
- Log in as an administrator
- Go to General Configuration -> Collaborative editing
- Turn off collaborative editing
- Check catalina.out (this can be done by creating and downloading a support zip)
Expected Results
The Synchrony proxy stops trying to talk to Synchrony, and no HTTP threads are left waiting on a connection to Synchrony until the timeout.
Actual Results
The Synchrony proxy keeps trying to connect to Synchrony until the handshake timeout is reached.
The following warning is logged in catalina.out:
29-Sep-2019 21:57:13.728 WARNING [Catalina-utility-1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [synchrony-proxy] appears to have started a thread named [http-nio-8090-exec-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1695)
 java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
 java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1775)
 java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
 com.atlassian.synchrony.proxy.websocket.WebSocketUpstreamHandler.getSynchronySession(WebSocketUpstreamHandler.java:64)
 com.atlassian.synchrony.proxy.websocket.WebSocketProxy.afterConnectionEstablished(WebSocketProxy.java:47)
 org.springframework.web.socket.handler.PerConnectionWebSocketHandler.afterConnectionEstablished(PerConnectionWebSocketHandler.java:81)
 org.springframework.web.socket.handler.WebSocketHandlerDecorator.afterConnectionEstablished(WebSocketHandlerDecorator.java:70)
 org.springframework.web.socket.handler.LoggingWebSocketHandlerDecorator.afterConnectionEstablished(LoggingWebSocketHandlerDecorator.java:48)
 org.springframework.web.socket.handler.ExceptionWebSocketHandlerDecorator.afterConnectionEstablished(ExceptionWebSocketHandlerDecorator.java:48)
 org.springframework.web.socket.adapter.standard.StandardWebSocketHandlerAdapter.onOpen(StandardWebSocketHandlerAdapter.java:103)
 org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.init(WsHttpUpgradeHandler.java:133)
 org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:899)
 org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587)
 org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
 java.lang.Thread.run(Thread.java:748)
Notes
Confluence might become unresponsive while the internal synchrony-proxy is still trying to reach Synchrony. You can confirm whether this is the case by taking thread dumps; all HTTP exec threads will be in the following state:
"http-nio-8090-exec-48" #320 daemon prio=5 os_prio=0 tid=0x00007f9c8c0e8000 nid=0x5d8e waiting on condition [0x00007f9c78a0a000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x0000000768ad32d0> (a java.util.concurrent.CompletableFuture$Signaller)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1695)
        at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
        at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1775)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
        at com.atlassian.synchrony.proxy.websocket.WebSocketUpstreamHandler.getSynchronySession(WebSocketUpstreamHandler.java:64)
Workaround
Option 1
Set the following system property to configure a shorter timeout for the Synchrony proxy handshake:
-Datlassian.synchrony.proxy.handshake.timeout.sec=1
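One way to apply this property (assuming a standard Linux installation; the exact path varies by install) is to add it to CATALINA_OPTS in the setenv.sh file of your Confluence installation directory, then restart Confluence:

```shell
# <confluence-install>/bin/setenv.sh
# Shorten the Synchrony proxy handshake timeout to 1 second so HTTP exec
# threads are released quickly instead of parking for the full default timeout.
CATALINA_OPTS="-Datlassian.synchrony.proxy.handshake.timeout.sec=1 ${CATALINA_OPTS}"
export CATALINA_OPTS
```

A restart of Confluence is required for the property to take effect.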
Option 2
Point your load balancer or reverse proxy directly to Synchrony and avoid using the internal proxy. More details here.