Confluence Data Center / CONFSERVER-9930

Gzip filter (used for HTTP compression between client and server) creates very large temporary objects in memory



    Description

      From a heap dump we captured on an out-of-memory error, it is quite clear that the GzipFilter is creating very large GzipResponseStream objects. Most of each GzipResponseStream's size is its "baos" output stream field. One might expect these streams to be bounded so that they could never grow very large, but the dump contained 4 streams over 50MB in size, one of which was 100MB.
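
      To make the failure mode concrete, here is a minimal sketch of the buffering pattern the heap dump suggests. It is not Confluence's actual source; only the class and field names (GzipResponseStream, "baos") come from the dump, and the rest is assumed for illustration:

          import java.io.ByteArrayOutputStream;
          import java.io.IOException;
          import java.io.OutputStream;
          import java.util.zip.GZIPOutputStream;

          // Sketch only: a response stream that buffers the entire body
          // in memory before compressing it on close.
          class GzipResponseStream extends OutputStream {

              // The "baos" field seen in the heap dump: every byte of the
              // response body accumulates here, so heap usage grows
              // linearly with response size (e.g. a 100MB attachment).
              private final ByteArrayOutputStream baos = new ByteArrayOutputStream();
              private final OutputStream target; // the real response stream

              GzipResponseStream(OutputStream target) {
                  this.target = target;
              }

              @Override
              public void write(int b) {
                  baos.write(b); // unbounded; nothing is sent until close()
              }

              @Override
              public void close() throws IOException {
                  // Only now is the buffered body compressed and written out.
                  try (GZIPOutputStream gzip = new GZIPOutputStream(target)) {
                      baos.writeTo(gzip);
                  }
              }
          }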

      The heap dump did not include URLs for most of them, but the largest request was a download of a large, already-compressed attachment from Confluence. If we cannot put a smaller bound on the size of the GzipResponseStream, we should at least be able to stop using it when serving already-compressed attachments.
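
      One way to do that, sketched below, is to check the response's content type before wrapping the stream. The helper name (shouldSkipCompression) and the type list are illustrative assumptions, not Confluence's API:

          import java.util.Set;

          // Illustrative helper: skip the gzip wrapper entirely when the
          // payload is already compressed.
          final class CompressionPolicy {

              private static final Set<String> ALREADY_COMPRESSED = Set.of(
                      "application/zip", "application/gzip",
                      "image/jpeg", "image/png", "video/mp4");

              static boolean shouldSkipCompression(String contentType) {
                  // Re-compressing these types wastes CPU and, with the
                  // buffering stream above, heap proportional to the
                  // attachment size.
                  return contentType != null
                          && ALREADY_COMPRESSED.contains(contentType);
              }
          }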

      A possible immediate workaround for anybody finding this problem is to remove or modify the gzip filter mapping in the web.xml:

          <filter-mapping>
              <filter-name>gzipFilter</filter-name>
              <url-pattern>/*</url-pattern>
          </filter-mapping>
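
      Since servlet url-patterns cannot express exclusions, "modify" in practice means replacing the catch-all /* with explicit mappings for the paths that should stay compressed, so attachment downloads are no longer matched. The patterns below are illustrative only and should be verified against your own instance:

          <!-- Illustrative: gzip page views but not attachment downloads -->
          <filter-mapping>
              <filter-name>gzipFilter</filter-name>
              <url-pattern>/display/*</url-pattern>
          </filter-mapping>
          <filter-mapping>
              <filter-name>gzipFilter</filter-name>
              <url-pattern>/pages/*</url-pattern>
          </filter-mapping>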
      

      Such a workaround would likely have other negative performance side effects, but those may be preferable to running out of memory.


People

    Assignee: Don Willis (don.willis@atlassian.com)
    Reporter: Don Willis (don.willis@atlassian.com)
    Votes: 1
    Watchers: 5
