- Type: Bug
- Resolution: Fixed
- Priority: Highest
- None
- 8.5.2
- 8.05
- 77
- Severity 2 - Major
- 76
Issue Summary
The Microsoft Teams for Jira integration repeatedly disconnects and leaks threads. It is unclear whether this is isolated to webhooks usage.
Steps to Reproduce
- Connect MS Teams to Jira.
Expected Results
The connection stays active and does not spawn new threads.
Actual Results
Teams shows the integration as disconnected.
The following errors are logged in the atlassian-jira.log file:
2020-07-23 15:08:14,700+0000 Timer-32 ERROR ServiceRunner [c.microsoft.signalr.HubConnection] HubConnection disconnected with an error: Server timeout elapsed without receiving a message from the server..
2020-07-23 15:08:14,700+0000 Timer-12 ERROR [c.microsoft.signalr.HubConnection] HubConnection disconnected with an error: Server timeout elapsed without receiving a message from the server..
You may also see:
ServiceRunner [c.m.teams.service.SignalRService] startConnectionIfDisconnected() error (java.lang.RuntimeException): Unexpected status code returned from negotiate: 503 Service Unavailable.
A "Client error - 400 when posting webhook" may also be logged.
Thread dumps contain thousands of threads with the same stack trace:
pool-1551-thread-1 priority:5 - threadId:0x0000000000001a41 - nativeId:0 - nativeId (decimal):0 - state:WAITING
stackTrace:
java.lang.Thread.State: WAITING (parking)
at java.base@11.0.6/jdk.internal.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000002a1d3451> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.base@11.0.6/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at java.base@11.0.6/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at java.base@11.0.6/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1170)
at java.base@11.0.6/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
at java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11.0.6/java.lang.Thread.run(Thread.java:834)
Locked ownable synchronizers:
- None
OkHttp WebSocket https://msteams-jira-server.service.signalr.net/... priority:5 - threadId:0x0000000000001784 - nativeId:0 - nativeId (decimal):0 - state:WAITING
stackTrace:
java.lang.Thread.State: WAITING (parking)
at java.base@11.0.6/jdk.internal.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000031ea767a> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.base@11.0.6/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at java.base@11.0.6/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at java.base@11.0.6/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1170)
at java.base@11.0.6/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)
at java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at java.base@11.0.6/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11.0.6/java.lang.Thread.run(Thread.java:834)
Locked ownable synchronizers:
- None
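To confirm whether an instance is affected, the parked "pool-N-thread-M" workers can be counted in-process. This is a diagnostic sketch using the standard ThreadMXBean API; the thread-name pattern is taken from the dumps above, and the class and method names are illustrative, not part of the Teams app.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;

public class PoolThreadCount {
    // Count live threads whose names match the default-thread-factory
    // "pool-N-thread-M" pattern seen in the thread dumps above. An affected
    // instance accumulates thousands of these over time.
    static long leakedPoolThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        return Arrays.stream(mx.dumpAllThreads(false, false))
                .map(ThreadInfo::getThreadName)
                .filter(n -> n.matches("pool-\\d+-thread-\\d+"))
                .count();
    }

    public static void main(String[] args) {
        System.out.println("pool-* threads: " + leakedPoolThreads());
    }
}
```

The same count can be obtained from a thread dump file by searching for lines starting with "pool-"; a steadily growing number indicates the leak described in this issue.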
You may also see thousands of open files.
This may also be related to Out Of Memory errors, as described in "Jira server throws OutOfMemoryError: unable to create new native thread".
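One plausible way such dumps arise (an assumption from the dump pattern, not confirmed from the app's source) is that each reconnect attempt creates a fresh scheduler without shutting down the previous one, so every failed reconnect strands an idle worker parked in DelayedWorkQueue.take(). The names ReconnectLeak, reconnectLeaky, and reconnectFixed below are hypothetical; the sketch only illustrates the leak pattern and its fix.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReconnectLeak {
    // Replaced on every reconnect attempt (package-private for illustration).
    ScheduledExecutorService timer;

    // Leaky pattern: a new scheduler per reconnect, the old one never shut
    // down. Its worker thread parks forever in DelayedWorkQueue.take() -
    // exactly the WAITING state shown in the thread dumps above.
    void reconnectLeaky() {
        timer = Executors.newSingleThreadScheduledExecutor();
        timer.schedule(() -> { /* keep-alive ping */ }, 30, TimeUnit.SECONDS);
    }

    // Fixed pattern: release the previous scheduler before creating a new
    // one, so the number of worker threads stays bounded across reconnects.
    void reconnectFixed() {
        if (timer != null) {
            timer.shutdownNow();  // frees the parked worker thread
        }
        timer = Executors.newSingleThreadScheduledExecutor();
        timer.schedule(() -> { /* keep-alive ping */ }, 30, TimeUnit.SECONDS);
    }
}
```

Because the SignalR connection in this issue fails and retries repeatedly (server timeouts, 503 from negotiate), a leak of this shape would compound until the JVM hits its thread or file-descriptor limits, matching the OutOfMemoryError behavior above.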
Workaround
There is currently no workaround that keeps the integration running; one will be added here if it becomes available.
As a short-term solution, the only remedy is to completely remove the Teams integration. Note that simply disabling the app will not resolve this.
- relates to
- RAID-2530