Saving large XML backups in Amazon S3 fails with OutOfMemoryError


    • 9.16
    • 3
    • Severity 3 - Minor

      Issue Summary

      While using S3 to store XML backups generated by Jira (see Storing backups in Amazon S3), the created backups don't appear in the S3 bucket, as if the backup process never completes. For the same instance, switching the XML backup destination back to local storage allows the backup to be generated without any issues.

      There is no specific error reported in the application logs, such as atlassian-jira.log. From the application's perspective, the XML backup process runs successfully: it saves a temporary .zip on the local drive and doesn't report any errors:

      grep -hi "XmlBackup" atlassian-jira*
      2025-04-28 01:33:57,684-0700 http-nio-8090-exec-16 url: /secure/admin/XmlBackup.jspa; user: jira_user WARN jira_user 93x220916x1 1l6m9gg ...ip adresses... /secure/admin/XmlBackup.jspa [c.a.j.w.action.util.XmlBackup] The filename that will be used for exporting is: '20250428083357_test-april-28.zip'
      2025-04-28 01:33:57,688-0700 http-nio-8090-exec-16 url: /secure/admin/XmlBackup.jspa; user: jira_user INFO jira_user 93x220916x1 1l6m9gg ...ip adresses... /secure/admin/XmlBackup.jspa [c.a.j.bc.dataimport.DefaultExportService] Creating backup zip file: /opt/atlassian/jira/atlassian-jira-software-10.3.4-standalone/temp/xmlbackup_9c7ed479-5a72-428d-ae02-5dd9e99cffe5_20250428083357_test-april-28.zip
      2025-04-28 02:10:38,601-0700 http-nio-8090-exec-16 url: /secure/admin/XmlBackup.jspa; user: jira_user INFO jira_user 93x220916x1 1l6m9gg ...ip adresses... /secure/admin/XmlBackup.jspa [c.a.j.bc.dataimport.DefaultExportService] Data export completed in 2200913ms. Wrote 132684083 entities to export in memory.
      2025-04-28 02:10:38,602-0700 http-nio-8090-exec-16 url: /secure/admin/XmlBackup.jspa; user: jira_user INFO jira_user 93x220916x1 1l6m9gg ...ip adresses... /secure/admin/XmlBackup.jspa [c.a.j.bc.dataimport.DefaultExportService] Attempting to save the Active Objects Backup
      2025-04-28 02:30:48,807-0700 http-nio-8090-exec-16 url: /secure/admin/XmlBackup.jspa; user: jira_user INFO jira_user 93x220916x1 1l6m9gg ...ip adresses... /secure/admin/XmlBackup.jspa [c.a.j.bc.dataimport.DefaultExportService] Finished saving the Active Objects Backup
      2025-04-28 02:30:53,530-0700 http-nio-8090-exec-16 url: /secure/admin/XmlBackup.jspa; user: jira_user INFO jira_user 93x220916x1 1l6m9gg ...ip adresses... /secure/admin/XmlBackup.jspa [c.a.j.bc.dataimport.DefaultExportService] Backup file has been deleted from temp directory: /opt/atlassian/jira/atlassian-jira-software-10.3.4-standalone/temp/xmlbackup_9c7ed479-5a72-428d-ae02-5dd9e99cffe5_20250428083357_test-april-28.zip
      

      However, reviewing Tomcat's catalina.out reveals an OutOfMemoryError that terminated the backup upload:

      28-Apr-2025 02:30:53.533 SEVERE [http-nio-8090-exec-16 url: /rest/issueNav/1/issueTable/stable; user:  jira_user] org.apache.coyote.AbstractProtocol$ConnectionHandler.process Failed to complete processing of a request
      	java.lang.OutOfMemoryError: Required array length 2147483639 + 9 is too large
      		at java.base/jdk.internal.util.ArraysSupport.hugeLength(ArraysSupport.java:649)
      		at java.base/jdk.internal.util.ArraysSupport.newLength(ArraysSupport.java:642)
      		at java.base/java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:100)
      		at java.base/java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:130)
      		at java.base/java.io.InputStream.transferTo(InputStream.java:783)
      		at java.base/java.nio.file.Files.copy(Files.java:3213)
      		at com.atlassian.dc.filestore.impl.filesystem.FilesystemPathImpl.lambda$copyFile$0(FilesystemPathImpl.java:163)
      		at com.atlassian.dc.filestore.api.FileStore$Writer.write(FileStore.java:350)
      

      Steps to Reproduce

      The problem affects large instances that generate big backup files. The exact backup size that triggers the issue is unclear; it was first reported for an instance with a 26 GB max heap producing a 4.6 GB compressed XML backup (roughly 23 GB to 46 GB uncompressed, assuming a 5:1 to 10:1 compression ratio).

      1. Generate an XML backup (either manually or on a schedule) and verify that it completes successfully.
      2. Configure the instance to store XML backups in S3, following the steps from the Storing backups in Amazon S3 guide.
      3. Generate the backup again.

      Expected Results

      The backup is saved to S3 successfully, the same way as it is to local storage.

      Actual Results

      Generated backups do not appear in S3, and an OutOfMemoryError is logged in catalina.out.

      Workaround

      If you need to store XML backups in S3 and are affected by this issue, temporarily re-configure the backup storage back to local, then upload the generated backups to S3 outside of Jira with the AWS CLI, either manually or as part of an automated script (see Move objects):

      aws s3 mv backup.zip s3://my-jira-backups/2025-04-29/
      

            Assignee:
            Unassigned
            Reporter:
            Alexander Artemenko (Inactive)
            Votes:
            0
            Watchers:
            6
