Project: Jira Data Center
Issue: JRASERVER-69114

Provide cluster lock mechanism that can be used with one-off lock names






      Jira uses the Beehive library to manage locks in Data Center. Locks are held in memory within each node's JVM and are also backed by the clusterlockstatus database table, which is shared between nodes.
      Locks are acquired and released through lock() / unlock() methods, which follow the expected semantics of a standard lock API.
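      A minimal sketch of that usage pattern, using a plain java.util.concurrent.locks.Lock as a self-contained stand-in for a cluster lock (in Jira DC the lock would instead come from the cluster lock service by name; the stand-in is an assumption for illustration only):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockUsageSketch {
    public static void main(String[] args) {
        // In Jira DC this handle would be obtained by name from the cluster
        // lock service; a plain ReentrantLock stands in so the sketch runs as-is.
        Lock lock = new ReentrantLock();

        lock.lock(); // blocks until the lock is acquired
        try {
            System.out.println("critical section");
        } finally {
            lock.unlock(); // always release in finally, even on exceptions
        }
    }
}
```

      The try/finally shape matters: a code path that throws between lock() and unlock() would otherwise leave the lock held.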

      Current implementation

      The current cluster lock mechanism was designed for statically-named locks, and the implementation stores each lock in the database permanently. Using dynamically generated lock names, such as "lock_for_task_" + taskId, therefore causes rows to pile up in large numbers in the clusterlockstatus table, and there is no mechanism to prune them.
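      The effect can be illustrated with a self-contained sketch in plain Java, modeling the clusterlockstatus table as an in-memory map whose entries are created on first use and never deleted (the map, names, and count are illustrative, not Jira's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LockRowGrowth {
    // Stand-in for the clusterlockstatus table: a row is inserted the first
    // time a lock name is used and is never removed, mirroring the current
    // behaviour described above.
    static final Map<String, Long> clusterLockStatus = new ConcurrentHashMap<>();

    static void lockAndUnlock(String lockName) {
        clusterLockStatus.putIfAbsent(lockName, System.currentTimeMillis());
        // ... lock() / unlock() would happen here; the row stays behind.
    }

    public static void main(String[] args) {
        for (int taskId = 0; taskId < 10_000; taskId++) {
            lockAndUnlock("lock_for_task_" + taskId); // one-off name per task
        }
        // Every one-off name left a permanent row: the table grows linearly
        // with the number of distinct lock names ever used.
        System.out.println(clusterLockStatus.size()); // prints 10000
    }
}
```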

      Suggested implementation

      There should be a mechanism that can provide one-off cluster locks that are not kept in the DB forever.
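      One possible shape for such a mechanism, sketched as a self-contained single-JVM analogue (not Jira's actual API): a registry that creates a lock entry on demand and discards it once it is no longer held or contended, so one-off names never accumulate.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OneOffLockRegistry {
    // An entry exists only while some thread holds or waits for the lock.
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void lock(String name) {
        // Create the entry on demand...
        locks.computeIfAbsent(name, n -> new ReentrantLock()).lock();
    }

    public void unlock(String name) {
        ReentrantLock lock = locks.get(name);
        if (lock == null) {
            return;
        }
        lock.unlock();
        // ...and drop it once it is neither held nor contended. (A production
        // version would need to close the race between this liveness check
        // and the removal; this sketch only shows the intended lifecycle.)
        if (!lock.isLocked() && !lock.hasQueuedThreads()) {
            locks.remove(name, lock);
        }
    }

    public int entryCount() {
        return locks.size();
    }
}
```

      With this lifecycle, the lock/unlock sequence from the reproduction steps below leaves the registry empty instead of leaving a permanent row per name.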

      Steps to Reproduce

      • Run Jira DC
      • Create a unique lock
      • Lock the lock
      • Unlock the lock

      Expected Results

      Once the operation is done, no entry should be left over in the database (or entries should be cleaned up after some time).

      Actual Results

      lock_name entries like "com.atlassian.jira.workflow.DefaultWorkflowSchemeManager$WorkflowAction.DELETE_SCHEME_10102" pile up in the clusterlockstatus table.

        id   |                                              lock_name                                               | locked_by_node |  update_time  
       10505 | com.atlassian.jira.workflow.DefaultWorkflowSchemeManager$WorkflowAction.DELETE_SCHEME_10102          |                | 1542810654292

      See JRASERVER-69113, JRASERVER-68477 for some examples.


      Always have a backup of your database before making any changes to it.

      In general we don't anticipate performance problems, even with a million old locks. If for some reason you do need to purge the clusterlockstatus table, you can use one of the workarounds below.

      Workaround #1

      1. Shut down the whole cluster (all nodes).
      2. Remove all the locks: 
        delete from clusterlockstatus;
        Note there's no where clause in the query above. Cluster locks do not survive cluster shutdown, so all rows can be safely removed when the cluster is down.
      3. Start nodes one by one (as usual).

      Workaround #2

      You can prune the clusterlockstatus table without downtime, too.

      1. Remove only the unlocked locks:
        delete from clusterlockstatus where locked_by_node is NULL;
      2. At this point the pruned locks are unacquirable, so you then need to:
      3. Do a rolling restart of all nodes.


        Assignee: Unassigned
        Reporter: Andriy Yakovlev [Atlassian] (ayakovlev@atlassian.com)