- Bug
- Resolution: Fixed
- Low
- 6.4, 6.4.14, 7.1.10, 7.2.5
- 6.04
- 4
- Severity 2 - Major
- 11
Summary
Some gadgets and reports may cause a heavy increase in JIRA's memory footprint due to inefficient OneDimensionalObjectHitCollector caching.
This may cause the application to crash after entering a Full GC loop, or to throw OutOfMemoryErrors.
Affected gadgets / reports:
Created vs Resolved:
The Created vs Resolved gadget loads all created and resolved fields into an array.
For a big instance with a lot of issues, this requires creating arrays that can hold almost all the issues in the system.
Those arrays are treated by G1GC as humongous objects, which leads to GC inefficiency and memory fragmentation, and makes this gadget very slow.
The memory footprint does not depend on the query or the reporting period; it is only affected by the size of the biggest segment in the Lucene index.
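To make the humongous-object claim concrete: G1 treats any single allocation larger than half a heap region as humongous. A minimal arithmetic sketch (not JIRA code — the `long[]` sizing is an illustrative assumption, e.g. one timestamp per issue, using the 1MB regions from the reproduction steps below):

```java
// Sketch: when does G1 classify an allocation as humongous?
public class HumongousCheck {
    // G1 marks an allocation humongous when it exceeds half the region size.
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes > regionBytes / 2;
    }

    // Rough size of a long[] on a 64-bit JVM: ~16-byte header + 8 bytes per element.
    static long longArrayBytes(long length) {
        return 16 + 8 * length;
    }

    public static void main(String[] args) {
        long regionSize = 1024 * 1024; // -XX:G1HeapRegionSize=1m
        // An array holding one value per issue for ~130k issues, as in the reproduction:
        long arrayBytes = longArrayBytes(130_000);
        System.out.println(arrayBytes);                      // prints 1040016
        System.out.println(isHumongous(arrayBytes, regionSize)); // prints true
    }
}
```

With default (larger) regions the threshold is higher, but arrays sized to "almost all issues in the system" cross it on any big instance.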
Steps to reproduce
- Generate an instance with 130k issues and 1.5GB of memory.
- Configure G1GC to use 1MB regions with the following JVM flags: -XX:+UseG1GC -XX:G1HeapRegionSize=1m. This allows triggering humongous allocations with a limited data set. Also set the following flags to enable GC logging: -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy
- Perform a lock & re-index operation.
- Create a dashboard with a Created vs Resolved gadget on it.
- Capture the URL used for generating the gadget's content (it contains createdVsResolved); copy it as cURL from the Chrome dev tools.
- Create a loop that hits this URL constantly.
- Observe in GC log:
2016-11-24T12:57:31.463+0000: 963.481: [GC pause (G1 Humongous Allocation) (young) (initial-mark) 963.481: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 653, predicted base time: 21.10 ms, remaining time: 178.90 ms, target pause time: 200.00 ms]
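To spot these events without reading the whole log, the GC log produced by the flags above can be filtered for the "G1 Humongous Allocation" marker. A small sketch (the log file path passed as the first argument is an assumption):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: print only the humongous-allocation pauses from a G1 GC log.
public class HumongousPauses {
    // G1 tags the pause cause in parentheses; this marker identifies
    // pauses triggered by a humongous allocation.
    static boolean isHumongousPause(String logLine) {
        return logLine.contains("G1 Humongous Allocation");
    }

    public static void main(String[] args) throws IOException {
        // args[0]: path to the GC log (assumed), e.g. logs/gc.log
        Files.lines(Paths.get(args[0]))
             .filter(HumongousPauses::isHumongousPause)
             .forEach(System.out::println);
    }
}
```

Under the load loop, these lines should appear repeatedly, matching the sample entry above.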
Single Level Group By Report
Steps to reproduce
- Generate an instance with 100k issues and memory of 1GB.
- When running this report, select a filter containing all issues.
- Inspecting the Garbage Collection logs, several Full GCs should be visible, even when using G1GC.
- Taking a thread dump during this period, while the report is loading, should reveal the stuck thread below:
"http-bio-8349-exec-24" #569 daemon prio=5 os_prio=0 tid=0x00007f2fb0002000 nid=0x38f6 runnable [0x00007f2f473f9000]
   java.lang.Thread.State: RUNNABLE
        at org.apache.lucene.store.DataInput.readString(DataInput.java:182)
        at org.apache.lucene.index.FieldsReader.addField(FieldsReader.java:431)
        at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:261)
        at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:471)
        at org.apache.lucene.index.DirectoryReader.document(DirectoryReader.java:564)
        at org.apache.lucene.index.IndexReader.document(IndexReader.java:844)
        at com.atlassian.jira.issue.statistics.util.OneDimensionalDocIssueHitCollector.getDocument(OneDimensionalDocIssueHitCollector.java:61)
        at com.atlassian.jira.issue.statistics.util.OneDimensionalDocIssueHitCollector.collectWithTerms(OneDimensionalDocIssueHitCollector.java:50)
        at com.atlassian.jira.issue.statistics.util.AbstractOneDimensionalHitCollector.collect(AbstractOneDimensionalHitCollector.java:190)
        at com.atlassian.jira.issue.search.providers.LuceneSearchProvider.searchAndSort(LuceneSearchProvider.java:482)
        at com.atlassian.jira.issue.search.providers.LuceneSearchProvider.searchAndSort(LuceneSearchProvider.java:184)
        at com.atlassian.jira.plugin.report.impl.SingleLevelGroupByReport.searchMapIssueKeys(SingleLevelGroupByReport.java:96)
        at com.atlassian.jira.plugin.report.impl.SingleLevelGroupByReport.getOptions(SingleLevelGroupByReport.java:77)
        at com.atlassian.jira.plugin.report.impl.SingleLevelGroupByReport.generateReportHtml(SingleLevelGroupByReport.java:125)
        at com.atlassian.jira.web.action.browser.ConfigureReport.doExecute(ConfigureReport.java:156)
        at webwork.action.ActionSupport.execute(ActionSupport.java:165)
        at com.atlassian.jira.action.JiraActionSupport.execute(JiraActionSupport.java:88)
        (...)
- Capturing a heap dump should reveal a very large object referenced by such thread.
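The stack trace shows the collector calling IndexReader.document() for every hit, materializing a stored document per matching issue. A minimal sketch of why that pattern is costly compared to a count-only aggregation (the class and method names here are illustrative, not JIRA's actual code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: memory behavior of two ways to group search hits by a field value.
public class GroupBySketch {
    // Materializing collector (the pattern in the stack trace): retains one
    // record per matching issue, so live memory grows with the result set.
    static List<String> materialize(List<String> groupFieldValues) {
        return new ArrayList<>(groupFieldValues); // O(hits) live objects
    }

    // Count-only aggregation: live memory is O(distinct groups), not O(hits).
    static Map<String, Integer> countByGroup(List<String> groupFieldValues) {
        Map<String, Integer> counts = new HashMap<>();
        for (String value : groupFieldValues) {
            counts.merge(value, 1, Integer::sum);
        }
        return counts;
    }
}
```

With a filter matching all issues, the materializing form holds roughly one object per issue on the instance for the duration of the search, which is the large object the heap dump reveals.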
Expected results
No excessive number of arrays containing all issues on the instance is created, and GC operations are not impacted.
Actual results
JIRA allocates several arrays that might contain all issues on the instance. The increased memory pressure can trigger Full GC operations, reducing instance responsiveness.
Workaround
Single Level Group By Report: Reduce the number of issues in the filter configured in the report.
- is related to
  - JRASERVER-63801 Caching of reports data is causing high memory pressure and performance degradation - Closed
- relates to
  - JRASERVER-63665 Lucene Search with unlimited PageFilter causes unnecessary memory allocation - Closed
  - JRASERVER-65531 SingleLevelGroupByReport with Empty Filter Causes OOME - Closed