JRASERVER-65930: Favourite filters REST Resource consumes large amount of memory when user has large number of filters



    Description

      Summary

      If a user has many favourite filters, the REST API call /rest/api/2/filter/favourite triggered by that user will use a large amount of memory (200 MB+), causing high JVM memory pressure. With many simultaneous requests this can lead to an OOM and a JVM/JIRA crash.
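
      For illustration only, a minimal sketch of the request that triggers the allocation, using java.net.http and assuming a base URL of http://localhost:8080/jira with basic admin:admin credentials (both placeholders, not part of this report):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Base64;

        public class FavouriteFiltersCall {
            public static void main(String[] args) throws Exception {
                // Placeholder base URL and credentials; adjust for your instance.
                String baseUrl = "http://localhost:8080/jira";
                String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(baseUrl + "/rest/api/2/filter/favourite"))
                        .header("Authorization", "Basic " + auth)
                        .header("Accept", "application/json")
                        .GET()
                        .build();

                long start = System.nanoTime();
                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString());
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                // The heavy FilterBean allocation happens server-side; client-side this
                // shows up as a slow response with a large body.
                System.out.println("HTTP " + response.statusCode() + ", "
                        + response.body().length() + " chars, " + elapsedMs + " ms");
            }
        }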

      Environment

      • A user with a large number of favourite filters (100+)

      Steps to Reproduce

      1. Create many filters
      2. Register the filters as your favourites (a scripted version of steps 1-2 is sketched below)
      3. Open the Issue Navigator (check the left sidebar)
        • or call /rest/api/2/filter/favourite via REST directly
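
      A minimal sketch of steps 1-2, assuming the standard POST /rest/api/2/filter endpoint accepting name, jql and favourite, plus a placeholder project key TEST, base URL and basic credentials (all placeholders):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Base64;

        public class CreateFavouriteFilters {
            public static void main(String[] args) throws Exception {
                // Placeholder base URL, credentials and project key; adjust for your instance.
                String baseUrl = "http://localhost:8080/jira";
                String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
                HttpClient client = HttpClient.newHttpClient();

                for (int i = 0; i < 100; i++) {
                    // "favourite": true registers the new filter as a favourite of the creator,
                    // covering steps 1 and 2 in one call.
                    String body = "{\"name\":\"heavy-filter-" + i + "\","
                            + "\"jql\":\"project = TEST ORDER BY created DESC\","
                            + "\"favourite\":true}";
                    HttpRequest request = HttpRequest.newBuilder()
                            .uri(URI.create(baseUrl + "/rest/api/2/filter"))
                            .header("Authorization", "Basic " + auth)
                            .header("Content-Type", "application/json")
                            .POST(HttpRequest.BodyPublishers.ofString(body))
                            .build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("Filter " + i + ": HTTP " + response.statusCode());
                }
            }
        }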

      Expected Results

      No performance problems occur.

      Actual Results

      JVM heap memory pressure (causing Full GC). A single thread can allocate 200 MB+ of memory.

      Notes

      • A thread dump shows the following stack trace for the thread serving /rest/api/2/filter/favourite; the relevant part concerns com.atlassian.jira.rest.v2.search.FilterBean:
        ...
        	at com.atlassian.jira.rest.v2.search.UserListResolver.getShareCount(UserListResolver.java:137)
        	at com.atlassian.jira.rest.v2.search.UserBeanListWrapper.<init>(UserBeanListWrapper.java:40)
        	at com.atlassian.jira.rest.v2.search.FilterBeanBuilder.build(FilterBeanBuilder.java:173)
        	at com.atlassian.jira.rest.v2.search.FilterResource$SearchRequestToFilterBean.apply(FilterResource.java:650)
        	at com.atlassian.jira.rest.v2.search.FilterResource$SearchRequestToFilterBean.apply(FilterResource.java:622)
        	at com.google.common.collect.Iterators$8.next(Iterators.java:812)
        	at com.google.common.collect.Lists.newArrayList(Lists.java:139)
        	at com.google.common.collect.Lists.newArrayList(Lists.java:119)
        	at com.atlassian.jira.rest.v2.search.FilterResource.getFavouriteFilters(FilterResource.java:422)
        ..
        
      • Each FilterBean object (one per favourite filter) is around 2.8 MB, so 100 of them consume roughly 280 MB of memory.
      • A heap dump shows large amounts of memory retained by org.apache.tomcat.util.threads.TaskThread
        • Snippet from dominator tree with 350 favourite filters:
          Object / Stack Frame Name | Shallow Heap | Retained Heap
          org.apache.tomcat.util.threads.TaskThread @ 0x235d499d8 http-nio-8080-exec-1061 url:/jira/rest/api/2/filter/f... | 128 | 961,473,680
          org.apache.tomcat.util.threads.TaskThread @ 0x235d49b98 http-nio-8080-exec-1059 url:/jira/rest/api/2/filter/f... | 128 | 863,698,080

      Workaround

      • Reduce the number of favourite filters
      • Set the dark feature flag com.atlassian.jira.rest.v2.search.UserListResolver.getShareCount.disabled; with it set, the endpoint always returns sharedUsers.size: 0 in the JSON response (see the verification sketch after this list), e.g.:
        "sharedUsers": {
          "size": 0,
          "items": [],
          "max-results": 1000,
          "start-index": 0,
          "end-index": 0
        },
        
        • This is an API break: with the flag set, /rest/api/2/filter/favourite no longer returns the correct share count, so there is a risk of affecting other functionality that relies on it.
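
      A minimal sketch for verifying the flag's effect, assuming basic authentication, a placeholder base URL, and crude regex matching on the response (relying on size appearing first inside sharedUsers, as in the sample above); a proper JSON parser would be preferable:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Base64;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class CheckShareCounts {
            public static void main(String[] args) throws Exception {
                // Placeholder base URL and credentials; adjust for your instance.
                String baseUrl = "http://localhost:8080/jira";
                String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());

                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(baseUrl + "/rest/api/2/filter/favourite"))
                        .header("Authorization", "Basic " + auth)
                        .header("Accept", "application/json")
                        .GET()
                        .build();
                String body = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString())
                        .body();

                // With the dark feature flag set, every sharedUsers block should report size 0,
                // no matter how widely the filter is actually shared.
                Matcher m = Pattern
                        .compile("\"sharedUsers\"\\s*:\\s*\\{\\s*\"size\"\\s*:\\s*(\\d+)")
                        .matcher(body);
                while (m.find()) {
                    System.out.println("sharedUsers size = " + m.group(1));
                }
            }
        }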

      Note on fix

      • The issue was worked around in 7.2.12, 7.6.1 and 7.7 by introducing an extra query parameter, enableSharedUsers, on /rest/api/2/filter/favourite and /rest/api/2/filter/id; setting it to false skips the shared-user calculation (see the sketch after this list).
        • By default the parameter is true, so the endpoints remain fully backwards compatible, which also means the problem can still occur for callers that do not pass it.
        • We've started using this parameter in JIRA itself (issue-nav uses enableSharedUsers=false), so the problem should be fixed for JIRA.
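
      A minimal sketch comparing response times with and without the parameter, again assuming a placeholder base URL and basic credentials:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.Base64;

        public class CompareEnableSharedUsers {
            // Fetch the given URL with basic auth and return the elapsed time in milliseconds.
            static long timeCall(HttpClient client, String url, String auth) throws Exception {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(url))
                        .header("Authorization", "Basic " + auth)
                        .header("Accept", "application/json")
                        .GET()
                        .build();
                long start = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.ofString());
                return (System.nanoTime() - start) / 1_000_000;
            }

            public static void main(String[] args) throws Exception {
                // Placeholder base URL and credentials; adjust for your instance.
                String baseUrl = "http://localhost:8080/jira";
                String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
                HttpClient client = HttpClient.newHttpClient();

                // Default behaviour (parameter omitted / true): share counts are calculated.
                long withShared = timeCall(client, baseUrl + "/rest/api/2/filter/favourite", auth);
                // enableSharedUsers=false skips the shared-user calculation introduced by the fix.
                long withoutShared = timeCall(client,
                        baseUrl + "/rest/api/2/filter/favourite?enableSharedUsers=false", auth);

                System.out.println("default: " + withShared
                        + " ms, enableSharedUsers=false: " + withoutShared + " ms");
            }
        }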
