- Bug
- Resolution: Fixed
- High
- 4.4.3, 4.5.3, 4.6.1
- 1
- Severity 2 - Major
- 32
XML Restore gets stuck at 90% and does not progress further in DC 8.5.0-m0002 / JSD 4.5.0-m0002
How to replicate
- Create a Jira instance in DC mode using atlassian-jira-software-8.5.0-m0002 and jira-servicedesk-application-4.5.0-m0002.obr (tested with instenv; details in the "environment" section)
- Create a sample Jira project
- Create a JSD project and add some dummy issues
- Create an XML backup
- Jira Admin > Restore System > select the XML backup
You can use the pre-exported dataset [^basic_backup.zip]; a quick sanity check of the archive layout is sketched below.
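For a quick pre-restore sanity check, the sketch below (an illustration, not part of the original test; it assumes the usual Jira XML backup layout with entities.xml and activeobjects.xml at the root of the zip) lists the archive contents and flags anything missing:

{code:python}
# Illustrative pre-restore check for a Jira XML backup archive (e.g. basic_backup.zip).
# Assumes the standard backup layout: entities.xml and activeobjects.xml in the zip root.
import sys
import zipfile

def check_backup(path: str) -> None:
    expected = {"entities.xml", "activeobjects.xml"}
    with zipfile.ZipFile(path) as zf:
        names = set(zf.namelist())
        for name in sorted(names):
            print(f"{name}: {zf.getinfo(name).file_size} bytes")
        missing = expected - names
        if missing:
            print("WARNING: missing expected entries:", ", ".join(sorted(missing)))

if __name__ == "__main__":
    # usage: python check_backup.py basic_backup.zip
    check_backup(sys.argv[1])
{code}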
Observed:
- Restore reaches 90% within minutes, but hours pass with no progress past 90%
- Last entries in logs:
{noformat}
2019-09-26 21:28:26,595 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.bc.dataimport.DefaultDataImportService] Importing data is 88% complete...
2019-09-26 21:28:26,596 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.bc.dataimport.DefaultDataImportService] Importing data is 89% complete...
2019-09-26 21:28:26,597 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.bc.dataimport.DefaultDataImportService] Importing data is 90% complete...
2019-09-26 21:28:26,655 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.bc.dataimport.DefaultDataImportService] Finished storing Generic Values.
2019-09-26 21:28:26,655 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.bc.dataimport.DefaultDataImportService] Finished storing Generic Values.
2019-09-26 21:28:26,689 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:26,695 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:26,696 localq-reader-0 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:26,829 JiraImportTaskExecutionThread-1 DEBUG admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.activeobjects.osgi.ActiveObjectsServiceFactory] startCleaning
2019-09-26 21:28:26,833 JiraImportTaskExecutionThread-1 DEBUG admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.activeobjects.osgi.ActiveObjectsServiceFactory] stopCleaning
2019-09-26 21:28:27,028 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_0_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_0_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,029 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_0_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_0_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,029 localq-reader-1 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_0_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_0_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,030 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_1_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_1_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,030 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_1_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_1_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,030 localq-reader-2 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_1_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_1_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,032 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_3_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_3_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,032 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_3_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_3_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,032 localq-reader-3 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_3_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_3_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,033 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_4_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_4_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,033 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_4_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_4_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,033 localq-reader-4 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_4_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_4_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,035 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_5_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_5_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,035 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_5_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_5_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,035 localq-reader-5 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_5_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_5_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,036 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_6_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_6_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,036 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_6_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_6_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,036 localq-reader-6 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_6_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_6_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,037 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_7_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_7_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,038 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_7_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_7_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,038 localq-reader-7 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_7_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_7_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,039 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_8_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_8_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,039 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_8_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_8_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,039 localq-reader-8 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_8_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_8_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:27,041 cluster-watchdog-0 INFO [c.a.j.c.d.localq.tape.TapeLocalQCacheOpQueue] Created persistent cache replication queue for node: mycluster2 with id: queue_mycluster2_9_be4ee67b71729bbfa51fef35601226c5 in : /Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_9_be4ee67b71729bbfa51fef35601226c5
2019-09-26 21:28:27,041 cluster-watchdog-0 INFO [c.a.j.c.distribution.localq.LocalQCacheManager] Created cache replication queue: [queueId=queue_mycluster2_9_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_9_be4ee67b71729bbfa51fef35601226c5] with queue reader running: true
2019-09-26 21:28:27,041 localq-reader-9 INFO [c.a.j.c.distribution.localq.LocalQCacheOpReader] Started listening for cache replication queue: [queueId=queue_mycluster2_9_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_9_be4ee67b71729bbfa51fef35601226c5]
2019-09-26 21:28:28,737 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=2, periodSec=341, flushIntervalMillis=170828, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/comments, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@3174eaca, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/comments lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,740 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=4, periodSec=0, flushIntervalMillis=1, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/comments, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@3174eaca, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/comments lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,747 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=2, periodSec=341, flushIntervalMillis=170648, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/issues, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@552d82fe, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/issues lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,750 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=4, periodSec=0, flushIntervalMillis=1, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/issues, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@552d82fe, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/issues lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,757 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=2, periodSec=341, flushIntervalMillis=170609, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/changes, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@73dc8b73, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/changes lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,760 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=4, periodSec=0, flushIntervalMillis=1, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/changes, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@73dc8b73, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/changes lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,765 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=2, periodSec=341, flushIntervalMillis=170592, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/worklogs, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@65e32b49, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/worklogs lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,766 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=2, totalCount=4, periodSec=0, flushIntervalMillis=0, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/worklogs, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@65e32b49, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/worklogs lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,769 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.index.ha.DefaultNodeReindexService] Pausing node re-index service
2019-09-26 21:28:28,771 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.cluster.lock.HeartbeatScheduledExecutorFactory] Heartbeat scheduler shutdown
2019-09-26 21:28:28,771 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.cluster.ClusterWatchdogService] ClusterWatchdogJob shutting down
2019-09-26 21:28:28,780 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.plugin.manager.DefaultPluginManager] Preparing to shut down the plugin system
2019-09-26 21:28:28,793 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.i.m.processor.bootstrap.MailPluginLifeCycleAware] JIRA Email Processor is stopping...
2019-09-26 21:28:28,795 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.i.m.processor.bootstrap.MailPluginLifeCycleAware] JIRA Email Processor is stopped.
2019-09-26 21:28:28,800 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.jira.index.MonitoringIndexWriter] [lucene-stats] flush stats: snapshotCount=4, totalCount=4, periodSec=88, flushIntervalMillis=22018, indexDirectory=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/plugins/servicedeskcannedresponses, indexWriterId=com.atlassian.jira.index.MonitoringIndexWriter@43ebbc5f, indexDirectoryId=MMapDirectory@/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/caches/indexesV1/plugins/servicedeskcannedresponses lockFactory=org.apache.lucene.store.NativeFSLockFactory@7c67d6fb
2019-09-26 21:28:28,808 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.p.internal.bootstrap.Launcher] PSMQ is stopping...
2019-09-26 21:28:28,812 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.p.internal.bootstrap.Launcher] PSMQ is stopped.
2019-09-26 21:28:28,814 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.p.w.i.bootstrap.lifecycle.WorkingHoursPluginLauncher] JIRA (SD) Working Hours Plugin stopping...
2019-09-26 21:28:28,814 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.p.w.i.bootstrap.lifecycle.WorkingHoursPluginLauncher] JIRA (SD) Working Hours Plugin stopped
2019-09-26 21:28:28,828 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [atlassian.servicedesk.lifecycle] stopping...
2019-09-26 21:28:28,829 JiraImportTaskExecutionThread-1 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [atlassian.servicedesk.lifecycle] Server Plugin LifeCycle - Stopping Service Desk
2019-09-26 21:28:31,716 localq-reader-0 INFO admin 1288x1035x1 1o9fs2d 0:0:0:0:0:0:0:1 /secure/admin/XmlRestore.jspa [c.a.j.c.distribution.localq.LocalQCacheOpReader] Checked exception: RecoverableFailure occurred when processing: LocalQCacheOp{cacheName='com.atlassian.jira.propertyset.CachingOfBizPropertyEntryStore.cache', action=REMOVE_ALL, key=null, value=null, creationTimeInMillis=1569497306656} from cache replication queue: [queueId=queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5, queuePath=/Users/allewellyn/jira-home/atlassian-jira-software-8.3.3/localq/queue_mycluster2_2_be4ee67b71729bbfa51fef35601226c5], failuresCount: 1. Will retry indefinitely.
com.atlassian.jira.cluster.distribution.localq.LocalQCacheOpSender$RecoverableFailure: java.rmi.ConnectIOException: Exception creating connection to: 10.217.3.111; nested exception is: java.net.SocketTimeoutException: connect timed out
{noformat}
- No active sessions found in the DB.
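The last observation above can be double-checked directly against the database, and listing the registered cluster nodes is useful here because the final log entry shows the localq reader retrying an RMI connection to 10.217.3.111 indefinitely. A minimal sketch, assuming a local PostgreSQL instance like the one described in the notes further down; the connection parameters and exact queries are illustrative and were not part of the original investigation:

{code:python}
# Illustrative check run while the restore sits at 90%: list non-idle DB sessions
# and the nodes Jira has registered in the cluster. Connection parameters are
# placeholders for the local test instance, not values from this ticket.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="jiradb", user="jira", password="jira")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT pid, state, query FROM pg_stat_activity "
        "WHERE datname = current_database() AND state <> 'idle'"
    )
    print("Active DB sessions:")
    for row in cur.fetchall():
        print(" ", row)

    # Jira Data Center keeps cluster membership in the clusternode table.
    cur.execute("SELECT * FROM clusternode")
    print("Registered cluster nodes:")
    for row in cur.fetchall():
        print(" ", row)
conn.close()
{code}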
Expected:
Restore completes and the user is taken to the Jira login screen (verified OK on 7.13.5)
Notes:
Also replicated locally via MSB using a 1-node cluster on 8.5.0-m0002 / JSD 4.5.0-m0002 with postgres96 and restoring basic_backup.zip
("PostgreSQL 9.6.11 on x86_64-apple-darwin18.2.0, compiled by Apple LLVM version 10.0.0 (clang-1000.11.45.5), 64-bit")
Instenv test details:
Instenv - https://allewellyn-ienv-blz.instenv.internal.atlassian.com/
Node 10-217-3-111 active
(more details in the "environment" section of this ticket; a quick connectivity probe against the peer node's cache replication port is sketched below)
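Since the last log entry shows the localq reader timing out while connecting to 10.217.3.111 and retrying indefinitely, a simple connectivity probe from the importing node toward the peer's cache replication port can help confirm whether the failure is network-level. This is only a sketch: the IP comes from the log above, and 40001 is the commonly used default EhCache listener port (typically set via ehcache.listener.port in cluster.properties), not a value verified in this ticket.

{code:python}
# Illustrative connectivity probe matching the SocketTimeoutException in the log:
# can the node running the restore reach the peer node's cache replication port?
import socket

PEER_IP = "10.217.3.111"   # taken from the ConnectIOException in the log above
PEER_PORT = 40001          # assumed default EhCache listener port; check cluster.properties on the peer

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"connect to {host}:{port} failed: {exc}")
        return False

print("reachable" if can_connect(PEER_IP, PEER_PORT) else "not reachable")
{code}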
Workaround
Currently there is no known workaround for this behavior. A workaround will be added here when available.
- duplicates
  - JRASERVER-70147 XML Restore stucks at 90% (Closed)
- is related to
  - JRASERVER-60114 CachingOfBizPropertyEntryStore cache failed to start after Restore System (Closed)
  - JRASERVER-66597 JIRA DC might lose Cluster lock due database connectivity problems (Closed)
- is duplicated by
  - JSEV-2701
  - JSMDC-5151