- Bug
- Resolution: Fixed
- Medium
- 6.3.13.0, 6.3.13.1, 6.4.0.3, 6.7.12, 6.7.14, 7.0.10
- None
- 6.03
- 21
- Severity 2 - Major
- 51
Summary
If a large number of issues are created without a rank value (say from a project import), the subsequent full reindex will take a significant amount of time.
Steps to Reproduce
- Project import a large number of issues into a JIRA Agile enabled project.
- Perform a full reindex.
This can also be reproduced by any of the following:
- Adding a new rank field to an existing instance and reindexing.
- Very high issue creation load.
Expected Results
The reindex operation completes within a reasonable, expected time.
Actual Results
The full reindex takes a significant amount of time.
Verification
- Generate a Thread Dump during reindexing operations.
- Verify whether a large number of threads are waiting to lock a java.lang.Object instance. This can be done using TDA, or on Linux the command below will help:
grep -A1 "com.atlassian.greenhopper.service.lexorank.LexoRankOperation.rankInitially" thread* | grep "waiting to lock" | awk '{print $1}' | sort | uniq -c
46 thread_dump_01.txt-
47 thread_dump_02.txt-
18 thread_dump_03.txt-
46 thread_dump_04.txt-
46 thread_dump_05.txt-
47 thread_dump_06.txt-
47 thread_dump_07.txt-
46 thread_dump_08.txt-
46 thread_dump_09.txt-
46 thread_dump_10.txt-
In the above results there are a large number of threads waiting for that lock. Due to the way the code is currently written, every single indexing thread must wait for a slow operation to complete.
In the attached thread dumps:
- Customer has increased the max reindex threads to 50 in order to attempt to increase the indexing speed.
- This has caused 46 threads to block waiting for com.atlassian.greenhopper.service.lexorank.LexoRankOperation.rankInitially.
- As a result, indexing operations take a significant amount of time.
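If raw thread dump files are not to hand, a similar count can be taken from inside the JVM using the standard java.lang.management API. The class below is an illustrative sketch only (the class and variable names are ours, not part of JIRA); it groups BLOCKED threads by the monitor they are waiting on, which is roughly what the grep above does against dump files.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Map;
import java.util.TreeMap;

// Counts threads currently BLOCKED waiting to enter a monitor, grouped by lock.
// Must be run inside the JVM being inspected (e.g. from a scripting console).
public class BlockedThreadCount {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<String, Integer> byLock = new TreeMap<>();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info != null && info.getThreadState() == Thread.State.BLOCKED) {
                // getLockName() looks like "java.lang.Object@1a2b3c" for a plain monitor
                byLock.merge(info.getLockName(), 1, Integer::sum);
            }
        }
        byLock.forEach((lock, count) ->
                System.out.printf("%d thread(s) blocked on %s%n", count, lock));
    }
}

A consistently high count against a single java.lang.Object monitor during a reindex is the same symptom shown by the grep output above.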
Notes
It appears this is because rank values are created during indexing for issues that do not yet have them, and com.atlassian.greenhopper.service.lexorank.LexoRankOperation.rankInitially causes all indexing threads to block while each rank is created.
On an instance with 1,500 issues, the indexing time on the first run was over 60 seconds; on subsequent runs it was around 30 seconds.
Increasing the maximum number of indexing threads is not advised, as it can cause significant performance problems such as the one seen in this issue, and it also increases memory usage, which can lead to OutOfMemoryErrors. An OutOfMemoryError will leave the indexing process in an unexpected state and require another full reindex to guarantee consistency.
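For illustration only, the following sketch shows the general shape of the problem described above: a single shared monitor serializes initial rank creation across all indexing threads. The class, method, and lock names are hypothetical and greatly simplified; this is not the actual GreenHopper/LexoRank code.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified illustration of one global lock serializing rank creation.
public class RankInitContention {
    private static final Object RANK_LOCK = new Object();                 // single shared monitor
    private static final Map<Long, String> RANKS = new ConcurrentHashMap<>();

    // Called from every indexing thread (hypothetical name).
    static String rankForIndexing(long issueId) {
        String rank = RANKS.get(issueId);
        if (rank != null) {
            return rank;                                                   // already ranked: no contention
        }
        synchronized (RANK_LOCK) {                                         // every thread that hits an unranked issue queues here
            return RANKS.computeIfAbsent(issueId, RankInitContention::createInitialRank);
        }
    }

    private static String createInitialRank(long issueId) {
        try {
            Thread.sleep(50);                                              // stand-in for the slow, database-backed rank operation
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "0|i" + issueId;                                            // placeholder rank value
    }
}

Because the slow work happens while the monitor is held, adding more indexing threads only lengthens the queue behind the lock, which matches the behaviour seen in the attached thread dumps.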
Workaround
Wait for the indexing operation to complete, slowly. It can be sped up by increasing the available resources on the server, reducing the load, and verifying that the Lucene directory has an appropriate read/write speed, which can be tested as per our Test the Disk Speed documentation. Also note that NFS mounts are not supported for the Lucene directory, as per JIRA Supported Platforms.
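As a rough complement to the Test the Disk Speed documentation, the sketch below times sequential writes into a directory (for example the Lucene index directory). The path, block size, and total size are placeholder values we chose for illustration; treat the result only as a coarse sanity check, not a replacement for the documented tests.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Coarse sequential-write throughput check for a directory.
public class DiskWriteCheck {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");            // pass the Lucene directory path as an argument
        Path file = Files.createTempFile(dir, "disk-check", ".tmp");
        ByteBuffer block = ByteBuffer.allocate(1024 * 1024);              // 1 MiB per write
        int blocks = 256;                                                  // ~256 MiB total
        long start = System.nanoTime();
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            for (int i = 0; i < blocks; i++) {
                block.rewind();
                ch.write(block);
            }
            ch.force(true);                                                // flush to disk so the timing is not just page cache
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("Wrote %d MiB in %.2f s (%.1f MiB/s)%n",
                blocks, seconds, blocks / seconds);
        Files.delete(file);
    }
}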
- relates to
- JSWSERVER-11491 LexoRank Balance takes a very long time on larger instances (Closed)
- JDEV-33154
- SW-1181
- SW-1185
- was cloned as
- JDEV-33155
- JSB-130