Confluence Data Center / CONFSERVER-22695

Confluence incompatible with DB2 HADR (High Availability Disaster Recovery)

      Confluence overrides the Hibernate definition of CLOB and BLOB columns in the
      DB2Dialect class, adding the "NOT LOGGED" option so that changes to these columns
      are excluded from the transaction log.
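
      For illustration, the effect is roughly the following generated DDL, using the
      BODYCONTENT table discussed in the comments below as an example (a sketch only;
      the actual schema Confluence emits may differ):

      ```sql
      -- Sketch of the DDL Confluence's DB2Dialect effectively produces
      -- (illustrative column set, not the exact Confluence schema).
      -- NOT LOGGED means changes to BODY never reach the transaction log,
      -- so HADR log shipping cannot replicate them to the standby.
      CREATE TABLE BODYCONTENT (
          BODYCONTENTID BIGINT NOT NULL PRIMARY KEY,
          BODY          CLOB(2G) NOT LOGGED NOT COMPACT
      );
      ```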

      This is not obvious to installation and support personnel, as it is done behind
      the scenes. Because HADR replicates changes by shipping transaction logs, CLOB and
      BLOB values are never transmitted to the standby DB2 database server, so the
      Disaster Recovery site becomes inconsistent after a switchover.

      Atlassian should provide a checkbox at installation time to optionally disable
      logging, leaving logging on by default (as Hibernate does).

      Failing that, this behaviour should be documented in the installation guide under
      the DB2 section, to warn installers that a database administrator must intervene
      "after the fact" to re-enable logging for CLOB and BLOB fields once the Confluence
      database is created.

      Issue previously created under CSP-62596 and CSP-62913, moved here at Atlassian Support's request.


            David Rodgers added a comment -

            An update:

            IBM changed the 1GB limit for CLOB/BLOB logging in release 9.7.2 and it is now possible to transmit
            values above this amount.

            We've finally obtained a workaround for the HADR CLOB/BLOB issue with the intervention of our DBA.
            This involved:

            • backing up the data in the tables affected by the limit
            • dropping the tables
            • re-creating the tables WITH logging enabled for those fields
            • importing the data back into the tables
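
            Under the assumption of a DB2 command-line (CLP) session, the steps above
            might look like the following sketch. Table name, column set, and file
            paths are illustrative; a DBA should adapt them to the real schema:

            ```sql
            -- 1. Back up the affected table's data (IXF preserves LOBs).
            EXPORT TO /tmp/bodycontent.ixf OF IXF
              LOBS TO /tmp/lobs/ MODIFIED BY LOBSINFILE
              SELECT * FROM BODYCONTENT;

            -- 2. Drop the table that was created with NOT LOGGED LOB columns.
            DROP TABLE BODYCONTENT;

            -- 3. Re-create it with logging enabled for the LOB column
            --    (DB2 9.7.2 or later for logged LOBs larger than 1 GB).
            CREATE TABLE BODYCONTENT (
                BODYCONTENTID BIGINT NOT NULL PRIMARY KEY,
                BODY          CLOB(2G) LOGGED NOT COMPACT
            );

            -- 4. Re-import the saved data.
            IMPORT FROM /tmp/bodycontent.ixf OF IXF
              LOBS FROM /tmp/lobs/ MODIFIED BY LOBSINFILE
              INSERT INTO BODYCONTENT;
            ```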


            Matt Ryall added a comment -

            Thanks for raising this, David.

            Our experience in the past was that DB2 would fail to create the Confluence schema with a CLOB type like BODYCONTENT.BODY, which is 2 GB in length. This was the reason for adding 'NOT LOGGED' to the schema for CLOB columns, as the fix for CONF-6783.
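
            To make the constraint concrete: before DB2 9.7.2, logged LOB columns were
            capped at 1 GB, so DDL along these lines would fail on older releases while
            the NOT LOGGED variant succeeded (illustrative DDL, not the exact Confluence
            schema):

            ```sql
            -- Fails on DB2 releases before 9.7.2: logged LOBs were capped at 1 GB.
            CREATE TABLE BODYCONTENT (
                BODY CLOB(2G) LOGGED
            );

            -- Succeeds, at the cost of HADR replication for this column.
            CREATE TABLE BODYCONTENT (
                BODY CLOB(2G) NOT LOGGED
            );
            ```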

            We could definitely add this to our DB2-specific documentation though. What were the required schema changes to get it working with HADR?

