Type: Bug
Resolution: Fixed
Priority: Highest
Fix versions: 3.6.4, 3.7.0
Severity 1 - Critical
54
Problem
As part of the fix for JSDSERVER-5299 to prevent race conditions, SLA locks were added. This has led to some unexpected behaviours, such as:
- Locks may become congested under high load, causing I/O failures. As a result, locks may not be released, which leaves the related tickets unresponsive.
- Re-indexing acquires an update lock for each SLA on each ticket, causing high contention and drastically slowing down indexing (see the sketch after this list).
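
To make the second behaviour concrete, here is a minimal Java sketch of the locking pattern involved. It is not Jira Service Desk source code: the class and method names (SlaLockSketch, getSlaValueForIndexing, computeSlaValue) are hypothetical, and an in-memory ReentrantLock stands in for the database-backed DatabaseClusterLock visible in the thread dumps below. The lock-name format follows the sla_issue_update_<issueId> pattern quoted in the Diagnosis section.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Toy model of the locking shape described above: every time the indexer
 * reads an SLA custom field value, it first takes a cluster-wide lock named
 * after the issue. In a real Data Center cluster the lock is database-backed,
 * so each acquisition costs at least one SQL round trip, and a failed unlock
 * (e.g. after an I/O error) leaves the lock row held cluster-wide.
 */
public class SlaLockSketch {

    /** Stand-in for the cluster lock service; real locks live in the database. */
    private final ConcurrentMap<String, Lock> locks = new ConcurrentHashMap<>();

    private Lock getLockForName(String name) {
        return locks.computeIfAbsent(name, n -> new ReentrantLock());
    }

    /** Called once per SLA field per issue while building an index document. */
    public Object getSlaValueForIndexing(long issueId, String slaFieldId) {
        Lock lock = getLockForName("sla_issue_update_" + issueId);
        lock.lock();   // database-backed in a cluster: a SQL statement per attempt
        try {
            return computeSlaValue(issueId, slaFieldId);
        } finally {
            lock.unlock();   // in the real lock this is a DB write; if it fails,
        }                    // the lock stays held and the issue hangs
    }

    private Object computeSlaValue(long issueId, String slaFieldId) {
        return null; // placeholder for the actual SLA calculation
    }
}
```

With this shape, a full re-index of N issues, each with M SLA fields, performs roughly N × M cluster-lock acquisitions, each backed by at least one SQL statement.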
Diagnosis
Issues may be unresponsive, and:
- The cluster health check may show messages like this:
"Node 'i-0fb543049bcd09a2e' has been holding cluster lock, 'sla_issue_update_232939', for 20,550 seconds." appears
- Thread dumps will show:
http-nio-8080-exec-42" #1390 daemon prio=5 os_prio=0 tid=0x00007fa92c888000 nid=0x1b9a waiting on condition [0x00007fa77613b000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at com.atlassian.beehive.db.DatabaseClusterLock.sleep(DatabaseClusterLock.java:530) at com.atlassian.beehive.db.DatabaseClusterLock.uninterruptibleWait(DatabaseClusterLock.java:102) at com.atlassian.beehive.db.DatabaseClusterLock.lock(DatabaseClusterLock.java:82) at com.atlassian.servicedesk.internal.sla.customfield.SlaFieldUpdateLockManagerImpl.lockSlaUpdate(SlaFieldUpdateLockManagerImpl.java:21) at com.atlassian.servicedesk.internal.sla.customfield.SLACFType.getValueFromIssue(SLACFType.java:434) at com.atlassian.servicedesk.internal.sla.customfield.SLACFType.getValueFromIssue(SLACFType.java:88) at com.atlassian.jira.issue.customfields.impl.AbstractSingleFieldType.getValueFromIssue(AbstractSingleFieldType.java:71) at com.atlassian.servicedesk.internal.sla.customfield.SLAFieldManagerImpl.getOptionalFieldValue(SLAFieldManagerImpl.java:191) at com.atlassian.servicedesk.internal.sla.searcher.SLACustomFieldIndexer.addDocumentFields(SLACustomFieldIndexer.java:160) at com.atlassian.servicedesk.internal.sla.searcher.SLACustomFieldIndexer.lambda$addDocumentFieldsSearchable$0(SLACustomFieldIndexer.java:150) at com.atlassian.servicedesk.internal.sla.searcher.SLACustomFieldIndexer$$Lambda$1101/585067346.run(Unknown Source) at com.atlassian.servicedesk.internal.util.SafeRunner.lambda$run$0(SafeRunner.java:40) at com.atlassian.servicedesk.internal.util.SafeRunner$$Lambda$1102/840344782.call(Unknown Source) at com.atlassian.servicedesk.bootstrap.lifecycle.LifecycleLock.lambda$withPluginLifecycleSafety$0(LifecycleLock.java:51) at com.atlassian.servicedesk.bootstrap.lifecycle.LifecycleLock$$Lambda$1103/2107628499.call(Unknown Source) at com.atlassian.ozymandias.SafePluginPointAccess.call(SafePluginPointAccess.java:263) at com.atlassian.servicedesk.internal.util.SafeRunner.run(SafeRunner.java:44) at com.atlassian.servicedesk.internal.sla.searcher.SLACustomFieldIndexer.addDocumentFieldsSearchable(SLACustomFieldIndexer.java:150) at com.atlassian.jira.issue.index.indexers.impl.AbstractCustomFieldIndexer.addIndex(AbstractCustomFieldIndexer.java:40) at com.atlassian.jira.issue.index.DefaultIssueDocumentFactory$Builder.add(DefaultIssueDocumentFactory.java:84) at com.atlassian.jira.issue.index.DefaultIssueDocumentFactory$Builder.addAll(DefaultIssueDocumentFactory.java:75) at com.atlassian.jira.issue.index.DefaultIssueDocumentFactory.apply(DefaultIssueDocumentFactory.java:50) at com.atlassian.jira.issue.index.DefaultIssueDocumentFactory.apply(DefaultIssueDocumentFactory.java:30) at com.atlassian.jira.issue.index.DefaultIssueIndexer$DefaultDocumentCreationStrategy.get(DefaultIssueIndexer.java:556) at com.atlassian.jira.issue.index.DefaultIssueIndexer.lambda$reindexIssues$1(DefaultIssueIndexer.java:166) at com.atlassian.jira.issue.index.DefaultIssueIndexer$$Lambda$1082/646135629.perform(Unknown Source) at com.atlassian.jira.issue.index.DefaultIssueIndexer.lambda$null$2(DefaultIssueIndexer.java:308) at com.atlassian.jira.issue.index.DefaultIssueIndexer$$Lambda$1084/999436899.get(Unknown Source) at com.atlassian.jira.index.SimpleIndexingStrategy.get(SimpleIndexingStrategy.java:7) at com.atlassian.jira.index.SimpleIndexingStrategy.get(SimpleIndexingStrategy.java:5) at com.atlassian.jira.issue.index.DefaultIssueIndexer.lambda$perform$3(DefaultIssueIndexer.java:306) at 
com.atlassian.jira.issue.index.DefaultIssueIndexer$$Lambda$1083/620300249.consume(Unknown Source) at com.atlassian.jira.util.collect.CollectionUtil.foreach(CollectionUtil.java:39) at com.atlassian.jira.util.collect.CollectionUtil.foreach(CollectionUtil.java:52) at com.atlassian.jira.issue.util.IssueObjectIssuesIterable.foreach(IssueObjectIssuesIterable.java:24) at com.atlassian.jira.issue.index.DefaultIssueIndexer.perform(DefaultIssueIndexer.java:282) at com.atlassian.jira.issue.index.DefaultIssueIndexer.reindexIssues(DefaultIssueIndexer.java:162) at com.atlassian.jira.issue.index.DefaultIndexManager.reIndexIssues(DefaultIndexManager.java:571) at com.atlassian.jira.issue.index.DefaultIndexManager.reIndexIssues(DefaultIndexManager.java:547) at com.atlassian.jira.issue.index.DefaultIndexManager.reIndexIssues(DefaultIndexManager.java:530) at com.atlassian.jira.issue.index.DefaultIndexManager.release(DefaultIndexManager.java:519) at sun.reflect.GeneratedMethodAccessor2224.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.atlassian.jira.config.component.SwitchingInvocationHandler.invoke(SwitchingInvocationHandler.java:22) at com.sun.proxy.$Proxy34.release(Unknown Source) at com.atlassian.jira.workflow.OSWorkflowManager.enableIndexingForThisThread(OSWorkflowManager.java:914) at com.atlassian.jira.workflow.OSWorkflowManager.doWorkflowAction(OSWorkflowManager.java:790) at com.atlassian.jira.bc.issue.DefaultIssueService.transition(DefaultIssueService.java:492) at com.atlassian.jira.web.action.workflow.SimpleWorkflowAction.doExecute(SimpleWorkflowAction.java:28) at webwork.action.ActionSupport.execute(ActionSupport.java:165)
Performing a re-index will be noticeably slower, and thread dumps will show that the indexing threads are constantly acquiring cluster locks (see the toy simulation after the thread dump below):
"IssueIndexer:thread-7" #353 prio=5 os_prio=0 tid=0x00007fcb200f6000 nid=0x1362a runnable [0x00007fcc643f9000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at java.net.SocketInputStream.read(SocketInputStream.java:171) at java.net.SocketInputStream.read(SocketInputStream.java:141) at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) at sun.security.ssl.InputRecord.read(InputRecord.java:503) at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973) - locked <0x00000004078afec0> (a java.lang.Object) at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930) at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) - locked <0x00000004078b2778> (a sun.security.ssl.AppInputStream) at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3008) ... at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:98) at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:98) at org.ofbiz.core.entity.jdbc.SQLProcessor.executeUpdate(SQLProcessor.java:562) at com.atlassian.jira.ofbiz.DefaultOfBizDelegator.bulkUpdateByAnd(DefaultOfBizDelegator.java:565) ... at com.atlassian.jira.cluster.lock.JiraClusterLockDao.tryUpdateAcquireLock(JiraClusterLockDao.java:49) at com.atlassian.beehive.db.DatabaseClusterLock.tryLockUsingDatabase(DatabaseClusterLock.java:281) at com.atlassian.beehive.db.DatabaseClusterLock.tryLock(DatabaseClusterLock.java:171) at com.atlassian.beehive.db.DatabaseClusterLock.lock(DatabaseClusterLock.java:77) at com.atlassian.servicedesk.internal.sla.customfield.SlaFieldUpdateLockManagerImpl.lockSlaUpdate(SlaFieldUpdateLockManagerImpl.java:21) at com.atlassian.servicedesk.internal.sla.customfield.SLACFType.getValueFromIssue(SLACFType.java:434) at com.atlassian.servicedesk.internal.sla.customfield.SLACFType.getValueFromIssue(SLACFType.java:88) at com.atlassian.jira.issue.customfields.impl.AbstractSingleFieldType.getValueFromIssue(AbstractSingleFieldType.java:71) at com.atlassian.servicedesk.internal.sla.customfield.SLAFieldManagerImpl.getOptionalFieldValue(SLAFieldManagerImpl.java:191) ... at com.atlassian.jira.issue.index.DefaultIssueIndexer$$Lambda$1207/1229406744.get(Unknown Source) at com.atlassian.jira.index.SimpleIndexingStrategy.get(SimpleIndexingStrategy.java:7) at com.atlassian.jira.index.SimpleIndexingStrategy.get(SimpleIndexingStrategy.java:5) at com.atlassian.jira.index.MultiThreadedIndexingStrategy$1.call(MultiThreadedIndexingStrategy.java:33) at com.atlassian.jira.index.MultiThreadedIndexingStrategy$1.call(MultiThreadedIndexingStrategy.java:31) ...
Workarounds
None
Issue links
- causes: JSDSERVER-5468 Service Desk causes memory pressure during indexing in Data Center (Closed)
- is a regression of: JSDSERVER-5299 SLA custom field should use last updated value in the event of a race condition (Closed)
- relates to: JSDSERVER-5681 Non-optimal computation of SLA values in addDocumentFields() method (Closed)
- relates to: JSDSERVER-5685 While loading values for SLA CustomField getValueFromIssue method flushes EagerLoadingOfBizCustomFieldPersister cache (Closed)
- relates to: JSMDC-1287