Jira Data Center / JRASERVER-74338

Filter subscriptions for too many recipients may result in performance degradation or OutOfMemoryError in Jira


      Jira 9.4 (JSM 5.4) LTS backport notice

      We know how impactful this is to enterprise-level customers that rely on LTS releases. To ensure the best quality and that this significant change does not bring any side effects, we are currently monitoring it over the soaking period. We are strongly considering backporting it, but we cannot provide exact dates at this stage. We'll keep updating this issue as we have more details on release dates or possible mitigations.

       

       
      Related links of interest:
      • Jira Data Center announcements
      • Test-drive Jira 9.7 EAP 01!
      • Jira Software release notes

      (note last updated on April 6, 2023)


      Issue Summary

      When a Filter subscription has thousands of recipients, the com.atlassian.jira.mail.SubscriptionSingleRecepientMailQueueItem objects retain too much Heap memory, causing the JVM to go into Full GC cycles or eventually into an OutOfMemory state.

      The symptoms are exactly the same as in JRASERVER-31588 "Large filter subscriptions can crash a JIRA instance with an OutOfMemoryError", but the objects seen accumulating in the Heap are different.
      That issue was fixed in 7.1.6, while this one affects at least 8.20 (and possibly other Jira 8.x versions).
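
      If GC logging is enabled (it is switched on by default in recent Jira versions), the Full GC churn can be confirmed from the GC logs before taking a heap dump. A minimal sketch, assuming the default GC log location under the Jira installation directory; adjust the path and the pattern to your JVM version:

      $ # Java 8 GC logs print "Full GC"; Java 11+ unified logs print "Pause Full"
      $ grep -cE "Full GC|Pause Full" <jira-install>/logs/atlassian-jira-gc-*.log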
       

      Steps to Reproduce

      1. Create an instance with 50 projects, 20 Issues in each, and 100,000 users that all belong to a single group (a scripted sketch for bulk-creating the users follows this list)
      2. Create a Filter matching all Issues (~1,000) and create an email subscription for it to the group containing all 100k users
      3. Wait for the subscription to run and take a Heap dump
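
      For step 1, the test users can be bulk-created and added to a single group through the Jira REST API (the "create user" and "add user to group" resources). This is only a rough sketch; the base URL, admin credentials and the perf-test-users group name are placeholders, and creating 100,000 users this way takes a while:

      # placeholders: BASE_URL, admin:admin credentials, perf-test-users group
      BASE_URL=http://localhost:8080
      curl -s -u admin:admin -H "Content-Type: application/json" \
           -X POST "$BASE_URL/rest/api/2/group" -d '{"name":"perf-test-users"}'
      for i in $(seq 1 100000); do
        # create the user...
        curl -s -u admin:admin -H "Content-Type: application/json" \
             -X POST "$BASE_URL/rest/api/2/user" \
             -d "{\"name\":\"loaduser$i\",\"password\":\"changeme\",\"emailAddress\":\"loaduser$i@example.com\",\"displayName\":\"Load User $i\"}" > /dev/null
        # ...and add it to the group used by the filter subscription
        curl -s -u admin:admin -H "Content-Type: application/json" \
             -X POST "$BASE_URL/rest/api/2/group/user?groupname=perf-test-users" \
             -d "{\"name\":\"loaduser$i\"}" > /dev/null
      done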
         

      Expected Results

      1. Jira would not deplete the Heap memory
         

      Actual Results

      1. The java.lang.Thread named "Sending mailitem com.atlassian.jira.mail.SubscriptionSingleRecepientMailQueueItem" will grow and retain a large portion of the Heap. Full GC cycles or an OutOfMemoryError may occur, depending on the Heap size.

      Real data

      A customer had a Subscription to 130k recipients:

      2022-09-09 00:00:01,945-0700 Sending mailitem com.atlassian.jira.mail.SubscriptionMailQueueItem id: '11111' owner: 'xxxxx(JIRAUSER111111)' INFO anonymous Mail Queue Service [c.a.jira.mail.SubscriptionMailQueueItem] Sending subscription '11111' of filter '222222' to 130948 recipients.
      

      And the SubscriptionSingleRecepientMailQueueItem objects grew to retain up to 32 GB of the Heap.
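
      On a live node showing these symptoms, a JVM class histogram is a lighter-weight way to confirm the accumulation before a full heap dump is taken. A minimal sketch, assuming standard JDK tooling (jcmd or jmap) is available on the node and is run as the Jira user:

      $ # count instances and shallow size of the suspect class in the running JVM
      $ jcmd <jira-pid> GC.class_histogram | grep SubscriptionSingleRecepientMailQueueItem
      $ # or, with older JDK tooling:
      $ jmap -histo <jira-pid> | grep SubscriptionSingleRecepientMailQueueItem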

        

      Workaround

      There is no workaround for this. Subscriptions with too many recipients should instead be split into duplicates aimed at smaller groups (and preferably scheduled at different times).
      The grep below can be used to spot subscriptions with 1,000 or more recipients (the [0-9]{4} matches a recipient count of at least four digits):

      $ grep -E "Sending subscription '[0-9]+' of filter '[0-9]+' to [0-9]{4}" <jira-home>/log/atlassian-jira.log*
      

      On Windows you may search the logs for similar keywords: "Sending mailitem com.atlassian.jira.mail.SubscriptionMailQueueItem" or "Sending subscription"; a findstr equivalent is sketched below.
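
      A findstr equivalent of the Linux grep above; note that, unlike the grep, this simple form lists subscriptions of any size, not only those with 1,000+ recipients:

      > findstr /C:"Sending subscription" <jira-home>\log\atlassian-jira.log*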

      This other Linux command pipeline prints a table of the key data, also for subscriptions with 1,000+ recipients (a variant sorted by recipient count follows the sample output below):

      $ grep -E "Sending subscription '[0-9]+' of filter '[0-9]+' to [0-9]{4}" <jira-home>/log/atlassian-jira.log* | awk 'BEGIN {printf "%s %s %s %s %s\n", "Time", "Owner", "Subscription_Id", "Filter_Id", "Recipients"}; {print $1"_"$2, $9, $18, $21, $23}' | sed 's/'\''//g' | column -tx;
      

      Sample output (with usernames and user keys redacted):

      Time                          Owner              Subscription_Id  Filter_Id  Recipients
      2022-12-11_17:05:00,121+0000  username(userkey)  14305            31001      1962
      2022-12-11_17:15:18,022+0000  username(userkey)  14304            31000      1962
      2022-12-12_09:27:18,762+0000  username(userkey)  13300            19063      2026
      2022-12-12_09:27:18,793+0000  username(userkey)  13901            18712      1962
      2022-12-12_09:27:18,805+0000  username(userkey)  13902            18710      1962
      2022-12-12_09:27:18,819+0000  username(userkey)  13903            18711      1962
      2022-12-12_09:27:18,832+0000  username(userkey)  13904            18716      1962
      2022-12-12_09:27:18,845+0000  username(userkey)  13905            18717      1962
      2022-12-12_09:27:18,858+0000  username(userkey)  13906            18718      1962
      2022-12-12_09:27:18,870+0000  username(userkey)  13907            18713      1962
      2022-12-12_09:27:18,882+0000  username(userkey)  13908            18714      1962
      2022-12-12_09:27:18,895+0000  username(userkey)  13909            18715      1962
      2022-12-12_09:27:18,908+0000  username(userkey)  13910            18719      1962
      2022-12-12_09:27:18,921+0000  username(userkey)  13911            18720      1962
      2022-12-12_09:27:18,934+0000  username(userkey)  13912            18721      1962
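
      The same pipeline can also be sorted to list the largest subscriptions first; this is a minor variant of the command above that sorts numerically on the Recipients column and keeps the top 20 lines:

      $ grep -E "Sending subscription '[0-9]+' of filter '[0-9]+' to [0-9]{4}" <jira-home>/log/atlassian-jira.log* | awk '{print $1"_"$2, $9, $18, $21, $23}' | sed 's/'\''//g' | sort -k5 -nr | head -20 | column -tx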
      

      Mitigation

      This bug can be mitigated by limiting who can create a group subscription: the Jira global permission "Manage Group Filter Subscriptions" can be used to restrict group filter subscriptions to admins or specific groups of users.
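
      To review which groups currently hold that permission, the grants can also be inspected in the database. This is a rough sketch only: it assumes a PostgreSQL backend, and the globalpermissionentry table and the MANAGE_GROUP_FILTER_SUBSCRIPTIONS permission key are assumptions that may differ between Jira versions, so verify them before relying on the output:

      $ # list groups granted the global permission (table and key names are assumptions - verify for your version)
      $ psql -d jiradb -c "SELECT permission, group_id FROM globalpermissionentry WHERE permission = 'MANAGE_GROUP_FILTER_SUBSCRIPTIONS';"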
