
Adding a new worklog entry is slow due to a large change history

    • Severity 2 - Major

      Atlassian Update – 31 May 2023

      Dear Customers,

      Thank you for taking the time to file and comment on this issue. We realize it still occurs and impacts your organization. We are now working on multiple customer requests and on new features, so we have to postpone our resolution of this issue. We’ve decided to move this issue to our long-term backlog.

      The workaround for this bug is as follows:
      Clone the problematic issues with long history and use the cloned issues to log work

      Please continue watching this ticket for future updates and changes in the timeline that impact your work.

      Best regards

      Jakub Reczycki

      Jira DC Developer


      NOTE: This suggestion is for JIRA Server. Using JIRA Cloud? See the corresponding suggestion.

      Performing some testing with JIRA 6.4.5, I've noticed that there is a huge difference between logging work on an issue with no history and logging work on an issue with a long history.

      I enabled Profiling on JIRA to check the difference:

      Example 1: Issue with 858 entries in its history:

      2015-10-21 15:18:53,596 http-bio-8080-exec-21 DEBUG sysadmin 918x163x1 1zsbpp 0:0:0:0:0:0:0:1 /secure/CreateWorklog.jspa [atlassian.util.profiling.UtilTimerStack] [6104ms] - /secure/CreateWorklog.jspa
      [0ms] - PermissionManager.hasPermission()
      [6100ms] - CreateWorklog.execute()
      [1ms] - PermissionManager.hasPermission()
      [1ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [44ms] - IssueIndexManager.reIndexIssueObjects()
      [1ms] - PermissionManager.hasPermission()
      [11ms] - IssueIndexManager.reIndexWorklogs()
      

      Example 2: Issue with 4 entries in its history:

      2015-10-21 15:21:44,662 http-bio-8080-exec-24 DEBUG sysadmin 921x169x1 1zsbpp 0:0:0:0:0:0:0:1 /secure/CreateWorklog.jspa [atlassian.util.profiling.UtilTimerStack] [189ms] - /secure/CreateWorklog.jspa
      [0ms] - PermissionManager.hasPermission()
      [186ms] - CreateWorklog.execute()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [0ms] - PermissionManager.hasPermission()
      [37ms] - IssueIndexManager.reIndexIssueObjects()
      [0ms] - PermissionManager.hasPermission()
      [9ms] - IssueIndexManager.reIndexWorklogs()
      [1ms] - PermissionManager.getProjects()
      

      Notes

      This further degrades the overall performance of the instance, and issues with many history items also take longer to load; a similar query for the change history table itself is included at the end of the list below.

      • SQL to identify issues with large worklog history:
        select issueid, count(id) from worklog group by issueid having count(id) > 100 order by (count(id)) desc;
        
        • With join
          select concat(p.pkey,'-',i.issuenum) as issue, count(a.id) from worklog a, jiraissue i, project p where i.project = p.id and i.id = a.issueid group by p.pkey,i.issuenum having count(a.id) > 100 order by count (a.id) DESC;
          
      • SQL to find issues with a large number of jiraaction entries (comments, etc.):
        select a.issueid, count(a.id) from jiraaction a group by a.issueid order by count (a.id) DESC;
        
        • With join
          select concat(p.pkey,'-',i.issuenum) as issue, count(a.id) from jiraaction a, jiraissue i, project p where i.project = p.id and i.id = a.issueid group by p.pkey,i.issuenum order by count (a.id) DESC;
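
      Since this bug is specifically about change history, a similar query against the changegroup table can help identify the issues that trigger the slowdown. This is a sketch based on the standard Jira schema (changegroup holds one row per change history entry, keyed by issueid); it is not taken from the report above, so verify the column names against your own database:

        select concat(p.pkey,'-',i.issuenum) as issue, count(cg.id) from changegroup cg, jiraissue i, project p where i.project = p.id and i.id = cg.issueid group by p.pkey, i.issuenum having count(cg.id) > 100 order by count(cg.id) desc;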
          

      Workaround

      Clone the problematic issues with long history and use the cloned issues to log work

        Attachments:
          1. 10.1 add worklog.jpg (710 kB)
          2. 27.jpg (2.40 MB)
          3. 9.1 add worklog.jpg (728 kB)


            [JRASERVER-45903] Adding a new worklog entry is slow due to a large change history

            Lefteris Pitselis added a comment -

            Hello,

            Is there any update or plan to improve this poorly designed query?

            Thank you

            Gabriel Udvar added a comment - edited

            Due to this issue, we periodically check for issues that have a large number of worklogs using the DB query below:

            WITH RecordsPerIssuesTable AS
            (
                SELECT issueid, COUNT(*) AS RecordsPerIssues
                FROM worklog
                GROUP BY issueid
                HAVING COUNT(*) > 1000
            )
            SELECT project.pkey,
                   issuenum, jiraissue.ID,
                   PROJECT, SUMMARY,
                   project.pname,
                   RecordsPerIssues
            FROM jiraissue
            INNER JOIN project ON jiraissue.PROJECT = project.ID
            INNER JOIN RecordsPerIssuesTable ON RecordsPerIssuesTable.issueid = jiraissue.ID
            ORDER BY RecordsPerIssuesTable.RecordsPerIssues DESC;

            Now, depending on your Jira instance, you might want to change the COUNT threshold to a number of your liking and close those issues.

            You might want to isolate them as well in a Project with very limited access, so that users don't accidentally open them.

            In combination with Tempo Timesheets, this bug proves to be very damaging.

            It seems that Tempo Timesheets performs a lot of checks or locks the index files when reading/writing on those issues, causing a backlog of permission checks that ultimately causes Jira to crash.



            Jhonata Nascimento added a comment -

            Was this solved in https://jira.atlassian.com/browse/JRASERVER-66251, or is it a separate issue?

            James Webster added a comment -

            Anyone with similar issues: have you checked your Postgres memory settings?

            I've just found that work_mem, a really key performance setting for returning and processing worklog data, defaults to a measly 4 MB if left commented out in the postgresql.conf file.

            Recommendations on another ticket around gathering deleted worklogs via an API call caused the DB to lock and max out the CPU. After increasing the work_mem setting to 500 MB, it's working brilliantly now, and other pages (Tempo Planning, etc.) are also much faster.

            (https://jira.atlassian.com/browse/JRASERVER-70716?focusedCommentId=2373506&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-2373506)

            In that ticket, they also recommended raising shared_buffers to 200 MB, but ours was already at 1 GB.
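
            For reference, here is a minimal sketch of checking and raising the settings mentioned above on PostgreSQL (assuming PostgreSQL 9.4 or later and superuser access; the 500 MB figure comes from the comment above, not from an Atlassian recommendation, so size it for your own instance):

            -- Check the current values
            SHOW work_mem;
            SHOW shared_buffers;

            -- Raise work_mem; a configuration reload applies it to new sessions
            ALTER SYSTEM SET work_mem = '500MB';
            SELECT pg_reload_conf();

            -- shared_buffers can be changed the same way, but only takes effect after a full PostgreSQL restart
            -- ALTER SYSTEM SET shared_buffers = '1GB';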

            Dusty added a comment -

            Same issue here


            Russell Quiring added a comment -

            This affects us too after a recent Jira upgrade from v8.5 to v8.20.

            Sam Hudson added a comment -

            This is currently affecting over 800 issues in our instance.

            Sébastien Lalau added a comment -

            Hello,

            We have the same problem in our Jira instance. Could you find a fix quickly?

            Thanks for your answer.

            Regards

            Tim added a comment -

            I'm having the same issue too; the page gets stuck loading *.psdsn.com, which Tempo is reliant on but which looks like a third-party script. Also, https://app.tempo.io/rest/hobson/log-time-form-configuration/ is super slow!

              Sergii Sinelnychenko
              Rodrigo Rosa
              Affected customers: 173
              Watchers: 160
