Jira Server and Data Center / JRASERVER-70934

Requests to `/jira/rest/internal/2/user/mention/search` where the parameters do NOT include a query are very slow


    Details

    • Type: Bug
    • Status: Gathering Impact
    • Priority: Low
    • Resolution: Unresolved
    • Affects Version/s: 7.13.13
    • Fix Version/s: None
    • Component/s: Issue - Create Issue
    • Labels: None

      Description

      Issue Summary

      We had an outage on a Jira instance running JSD 3.16.11 today which appeared to be caused by requests to the `/jira/rest/internal/2/user/mention/search` endpoint.

      Comparing request times, requests that do NOT contain a query string are significantly slower.

      Requests with a query string:

      Requests WITHOUT a query string:

      Note the scale change: the slowest request with a query string took 13s, while the fastest request without one took 28s and the slowest over 60s.

      Examples:

      1. With a query string, took 0.14s: /jira/rest/internal/2/user/mention/search?issueKey=KEY-5678&maxResults=10&query=il&_=1587494420052 (referrer is /jira/browse/KEY-5678)
      2. Without a query string, took 62s: /jira/rest/internal/2/user/mention/search?issueKey=KEY-1234&maxResults=10&_=1587407226218 (referrer is /jira/browse/KEY-1234?filter=-1)
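The two request shapes above can be compared directly with curl. This is a sketch only: `BASE_URL`, the credentials, and the issue key are placeholders, not values from this report, and the timings will of course depend on the instance.

```shell
# Placeholders - substitute your own instance, credentials, and issue key.
BASE_URL="https://jira.example.com"
ISSUE_KEY="KEY-1234"
ENDPOINT="$BASE_URL/jira/rest/internal/2/user/mention/search"

# Fast case: mention search WITH a query string (query=il)
curl -s -u admin:admin --max-time 120 -o /dev/null \
  -w 'with query:    %{time_total}s\n' \
  "$ENDPOINT?issueKey=$ISSUE_KEY&maxResults=10&query=il" || true

# Slow case: the same endpoint WITHOUT the query parameter
curl -s -u admin:admin --max-time 120 -o /dev/null \
  -w 'without query: %{time_total}s\n' \
  "$ENDPOINT?issueKey=$ISSUE_KEY&maxResults=10" || true
```

curl's `-w '%{time_total}'` prints the wall-clock time of each request, which should make the gap between the two cases visible on an affected instance.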

      Steps to Reproduce

      I am not sure what triggers this endpoint - I visited the referrer pages (for many of the requests this was filter=-1) and clicked around, but was unable to hit the endpoint with anything I did.

      Expected Results

      Searches return promptly with no performance impact.

      Actual Results

      These slow requests caused an outage for us: a spike in CPU that appears connected to GC activity.

      CPU spike:

      Unhealthy hosts:

      Increased request latency:

      New Relic showing that this endpoint is the slowest:

      CPU activity by thread, showing that GC is the main consumer of CPU. NB: we captured the thread dumps long after the request returned or timed out, so the request was likely still being processed in the back end at that point:

      Workaround

      Currently there is no known workaround for this behavior. A workaround will be added here when one becomes available.

      People

      Assignee: Unassigned
      Reporter: dunterwurzacher (Denise Unterwurzacher [Atlassian])
      Votes: 1
      Watchers: 7
