Jira Data Center / JRASERVER-69432

Associating a field configuration scheme with a new project is very slow in large instances if the scheme is reused


      Issue Summary

      As an administrator, I want to use a single fieldConfigurationScheme for new projects. However, when a large number of projects already use that fieldConfigurationScheme, associating a new project with it takes a very long time.

      fieldConfigSchemeManager.updateFieldConfigScheme(...) takes the full list of associated projects (i.e. all of them) as a parameter and updates the database for each project one by one (com.atlassian.jira.issue.context.persistence.FieldConfigContextPersisterWorker.store(...)), flushing the fieldConfigScheme cache on each iteration.
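      The per-project update path can be sketched as follows. This is a minimal simulation of the reported behaviour, not Atlassian's actual implementation; the method names mirror the report, but the bodies and counter are hypothetical:

      ```java
      import java.util.ArrayList;
      import java.util.List;

      // Simulates the reported hot path: associating one new project causes a
      // store() call -- and a cache flush -- for every already-associated project.
      public class FieldConfigSchemeFlushDemo {

          static int cacheFlushCount = 0;

          // Stands in for FieldConfigContextPersisterWorker.store(...):
          // persists one project/scheme association, then flushes the
          // configContextsBySchemeId cache.
          static void store(String project, long schemeId) {
              // (database write for this one association would happen here)
              cacheFlushCount++; // cache flushed on every call
          }

          // Stands in for fieldConfigSchemeManager.updateFieldConfigScheme(...):
          // it receives ALL associated projects and re-stores each of them,
          // even though only one association is actually new.
          static void updateFieldConfigScheme(List<String> allProjects, long schemeId) {
              for (String project : allProjects) {
                  store(project, schemeId);
              }
          }

          public static void main(String[] args) {
              List<String> projects = new ArrayList<>();
              for (int i = 0; i < 2000; i++) {
                  projects.add("PROJ-" + i); // projects already on the scheme
              }
              projects.add("NEW-PROJECT"); // the one association being added

              updateFieldConfigScheme(projects, 10100L);
              // One logical change, but the cache is flushed once per project,
              // matching the ~2000 flushes observed in the DEBUG logs.
              System.out.println("cache flushes: " + cacheFlushCount);
          }
      }
      ```

      Because each flush invalidates the whole configContextsBySchemeId cache, the cost grows linearly with the number of projects already on the scheme.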

      Environment

      Large Jira instance with >2000 projects using a single fieldConfigurationScheme

      Steps to Reproduce

      1. Create >2000 projects.
      2. Create a fieldConfigurationScheme.
      3. Associate all projects with the fieldConfigurationScheme.
      4. Set DEBUG logging for the com.atlassian.cache.event package.
      5. Create a new project.
      6. Associate the new project with the fieldConfigurationScheme and observe in the logs that the com.atlassian.jira.issue.context.persistence.FieldConfigContextPersisterWorker.configContextsBySchemeId cache is flushed 2000 times.

      Expected Results

      Persistence is handled efficiently: only the association for the new project is added instead of all existing associations being rewritten, resulting in a single cache flush.
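      A sketch of the expected behaviour (hypothetical, not Atlassian code): only the new project's association is persisted, so the configContextsBySchemeId cache is flushed exactly once regardless of how many projects already use the scheme.

      ```java
      // Hypothetical fix sketch: persist only the delta, flush once.
      public class SingleFlushDemo {

          static int cacheFlushCount = 0;

          // Persists one project/scheme association and flushes the cache once.
          static void store(String project, long schemeId) {
              // (database write for one association would happen here)
              cacheFlushCount++; // one flush per store() call
          }

          // Adds only the new association instead of re-storing every
          // project already linked to the scheme.
          static void addProjectToScheme(String newProject, long schemeId) {
              store(newProject, schemeId);
          }

          public static void main(String[] args) {
              // 2000 projects are already associated; none of them are touched.
              addProjectToScheme("NEW-PROJECT", 10100L);
              System.out.println("cache flushes: " + cacheFlushCount);
          }
      }
      ```

      The cost of adding a project then stays constant instead of scaling with the number of projects on the scheme.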

      Actual Results

      com.atlassian.jira.issue.context.persistence.FieldConfigContextPersisterWorker.store(...) is called once for each project that is already associated with the same fieldConfigurationScheme.

      Workaround

      None

      Assignee: Unassigned
      Reporter: Kurtcebe Eroglu (keroglu)
      Votes: 1
      Watchers: 4