Confluence Data Center / CONFSERVER-100756

Default value of 2000 for jobs.limit.per.purge is too small to maintain size of scheduler_run_details table


    • Type: Suggestion
    • Resolution: Unresolved
    • Component: Server - Performance
    • We collect Confluence feedback from various sources, and we evaluate what we've collected when planning our product roadmap. To understand how this piece of feedback will be reviewed, see our Implementation of New Features Policy.

      The scheduled job "Purge Old Job Run Details" runs once per day at 23:00 local time (by default). On each run it removes at most 2000 rows from the scheduler_run_details table, a limit controlled by the system property 'jobs.limit.per.purge'.

      In Confluence 9.2, without any 3rd-party plugins, a minimum of 97 rows per minute are added to this table. At that rate, the daily cleanup job removes less than 21 minutes' worth of entries from scheduler_run_details per day (2000 / 97 ≈ 20.6 minutes).

      Even excluding 3rd-party plugins (which add further job entries), Confluence needs to remove approximately 140,000 rows per day (97 rows/minute × 1,440 minutes/day ≈ 139,680) just to keep the table from growing.
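      The shortfall is easy to reproduce. The following Python sketch uses only the figures reported above (the 97 rows/minute insert rate, the 2000-row purge limit, and the once-daily schedule):

{code:python}
# Back-of-the-envelope check of the purge shortfall described in this report.
ROWS_PER_MINUTE = 97    # minimum insert rate observed in Confluence 9.2, no 3rd-party plugins
PURGE_LIMIT = 2000      # default value of jobs.limit.per.purge
MINUTES_PER_DAY = 24 * 60

rows_added_per_day = ROWS_PER_MINUTE * MINUTES_PER_DAY
minutes_covered_per_run = PURGE_LIMIT / ROWS_PER_MINUTE
net_growth_per_day = rows_added_per_day - PURGE_LIMIT

print(f"Rows added per day:       {rows_added_per_day:,}")          # 139,680 (~140,000)
print(f"Minutes purged per run:   {minutes_covered_per_run:.1f}")   # ~20.6 (< 21 minutes)
print(f"Net table growth per day: {net_growth_per_day:,} rows")     # 137,680
{code}

      In other words, with the default settings the table grows by well over 100,000 rows every day, and the backlog grows without bound.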

      The only workaround currently is to stop Confluence and truncate this table to curb its volume. Running the above job more frequently (i.e. hourly) with a higher 'jobs.limit.per.purge' value, as described in this article, is not a solution for some databases, where 'DELETE FROM' performs poorly on large tables such as this one.
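      For illustration only, the "run more often, delete more rows" approach amounts to a batched cleanup roughly like the sketch below. This is hypothetical, not a supported Atlassian procedure: it assumes a PostgreSQL backend, and the column names (entity_id, start_time) and the 30-day retention window are assumptions about the scheduler_run_details schema.

{code:python}
import psycopg2  # assumes a PostgreSQL-backed Confluence; other databases need a different driver

# Hypothetical batched purge of scheduler_run_details. The column names
# (entity_id, start_time) and the 30-day retention window are assumptions
# made for illustration; check the actual schema before running anything.
BATCH_SIZE = 2000  # mirrors the default jobs.limit.per.purge

conn = psycopg2.connect("dbname=confluence user=confluence")  # hypothetical DSN
try:
    with conn.cursor() as cur:
        while True:
            # Delete one bounded batch, then commit, so each statement stays
            # small even when the table already holds millions of rows.
            cur.execute(
                """
                DELETE FROM scheduler_run_details
                WHERE entity_id IN (
                    SELECT entity_id FROM scheduler_run_details
                    WHERE start_time < now() - interval '30 days'
                    LIMIT %s
                )
                """,
                (BATCH_SIZE,),
            )
            deleted = cur.rowcount
            conn.commit()
            if deleted < BATCH_SIZE:
                break  # backlog cleared for now
finally:
    conn.close()
{code}

      Even with batching, such a loop still has to delete on the order of 140,000 rows per day just to break even, which is why databases with slow 'DELETE FROM' performance on large tables struggle with this approach.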

              Assignee: Unassigned
              Reporter: Malcolm Ninnes (mninnes@atlassian.com)
              Votes: 0
              Watchers: 1
