Jira Data Center uses the Beehive library to manage cluster locks. Locks are held in each node's JVM and are also backed by the database table clusterlockstatus, which is shared between nodes.
Locks are acquired and released through the lock() / unlock() methods, following the expected semantics of the standard java.util.concurrent.locks.Lock API.
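The usage pattern is the familiar lock/try/finally idiom. The sketch below is self-contained, so a plain ReentrantLock stands in for a cluster lock (in Jira DC the lock would instead come from the cluster lock service):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockUsage {
    public static void main(String[] args) {
        // Stand-in for a cluster lock; Beehive locks expose the same
        // lock()/unlock() contract via the java.util.concurrent.locks.Lock API.
        Lock lock = new ReentrantLock();
        lock.lock();
        try {
            // critical section: only one holder runs this at a time
            System.out.println("holding lock");
        } finally {
            lock.unlock(); // always release, even if the critical section throws
        }
    }
}
```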
The current cluster lock mechanism was designed to work with statically-named locks. The implementation stores each lock in the database permanently. Therefore using dynamically generated lock names, such as "lock_for_task_" + taskId, causes rows to pile up in large numbers in the clusterlockstatus table, and there is no mechanism to prune them.
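The effect can be illustrated with a toy in-memory registry that mirrors the by-name lookup: entries are created on first use and never removed. This is an illustration of the accumulation problem, not Beehive's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Toy stand-in for a by-name lock registry: each unique name creates an
// entry that is never pruned, mirroring rows piling up in clusterlockstatus.
public class LockRegistry {
    private final Map<String, Lock> locks = new ConcurrentHashMap<>();

    public Lock getLockForName(String name) {
        return locks.computeIfAbsent(name, n -> new ReentrantLock());
    }

    public int size() {
        return locks.size();
    }

    public static void main(String[] args) {
        LockRegistry registry = new LockRegistry();
        for (int taskId = 0; taskId < 10_000; taskId++) {
            Lock lock = registry.getLockForName("lock_for_task_" + taskId);
            lock.lock();
            try {
                // one-off work for this task
            } finally {
                lock.unlock();
            }
        }
        // Every unique name left an entry behind, even though
        // all locks were properly released.
        System.out.println(registry.size()); // prints 10000
    }
}
```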
There should be a mechanism that can provide one-off cluster locks that are not kept in the DB forever.
- Run Jira DC
- Create a unique lock
- Lock the lock
- Unlock the lock
Once the operation is done, no entry should be left over in the database (or should be cleaned after some time).
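The steps above amount to the following desired semantics, sketched here with a hypothetical in-memory registry whose entry is dropped when the lock is released. The names and structure are illustrative, not Jira's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a one-off lock: the registry entry is removed on unlock, so
// nothing is left behind once the operation is done. A real implementation
// must also handle the race where another thread obtains the lock object
// just before it is removed from the map.
public class OneOffLockRegistry {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void lock(String name) {
        locks.computeIfAbsent(name, n -> new ReentrantLock()).lock();
    }

    public void unlock(String name) {
        ReentrantLock lock = locks.get(name);
        if (lock != null) {
            lock.unlock();
            // Drop the entry only if nobody holds it or is waiting for it.
            locks.computeIfPresent(name, (n, l) ->
                    (l.isLocked() || l.hasQueuedThreads()) ? l : null);
        }
    }

    public int size() {
        return locks.size();
    }
}
```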
lock_name entries like "com.atlassian.jira.workflow.DefaultWorkflowSchemeManager$WorkflowAction.DELETE_SCHEME_10102" pile up in the clusterlockstatus table.
Always take a backup of your database before making any changes to it.
In general we don't anticipate performance problems, even with a million stale lock rows. If for some reason you do need to purge the clusterlockstatus table, you can:
- Shut down the whole cluster (all nodes).
- Remove all the locks:
delete from clusterlockstatus;
Note there's no where clause in the query above. Cluster locks do not survive cluster shutdown, so all rows can be safely removed when the cluster is down.
- Start nodes one by one (as usual).
You can prune the clusterlockstatus table without downtime, too.
- Remove only the unlocked locks:
delete from clusterlockstatus where locked_by_node is NULL;
- At this point the pruned locks are unacquirable on nodes that still hold references to them in the JVM. Therefore you need to...
- Do a rolling restart of all nodes.