Details
- Bug
- Resolution: Fixed
- Low
- 7.6.0, 7.6.5, 7.11.2
- None
- 1
- Severity 2 - Major
Description
Issue Summary
When searching in Bitbucket, the thread handling the search request can enter a deadlock. Each stuck thread remains blocked permanently, so the number of concurrently executing threads grows until the JVM is restarted.
Steps to Reproduce
- The exact conditions under which this issue occurs are not currently known. It is currently suspected to be related to the configured Elasticsearch instance being unreachable for an extended period following a restart of the Bitbucket instance.
Expected Results
One of the following happens:
- Bitbucket successfully completes the search request
- Bitbucket receives an error, logs it, and returns it to the entity making the request
- Bitbucket times out waiting for a response, logs the timeout, and reports it to the entity making the request
Actual Results
The thread becomes stuck in a deadlock. On taking a series of thread dumps, the following type of WAITING threads can be seen:
"https-jsse-nio-8443-exec-178" #125950 daemon prio=5 os_prio=0 tid=0x00007f51b013d800 nid=0x16f01 waiting on condition [0x00007f517fe0c000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00000000ad69a1a8> (a java.util.concurrent.CountDownLatch$Sync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231) at com.atlassian.bitbucket.internal.search.indexing.util.Observables.consume(Observables.java:75) at com.atlassian.bitbucket.internal.search.indexing.util.Observables.consumeSingle(Observables.java:92) at com.atlassian.bitbucket.internal.search.search.rest.SearchResource.processRequest(SearchResource.java:340) at com.atlassian.bitbucket.internal.search.search.rest.SearchResource.lambda$search$5(SearchResource.java:112) at com.atlassian.bitbucket.internal.search.search.rest.SearchResource$$Lambda$3322/193574928.apply(Unknown Source) ...
Workaround
Restart the Bitbucket instance to clear the stuck threads, then confirm that the configured Elasticsearch instance is both reachable and able to be queried successfully.
Issue Links
- causes
  - PS-82422
  - PS-84546
  - PS-89013
  - PS-94542
  - PSSRV-20998