Confluence Data Center / CONFSERVER-58944

Upgrade tasks to Confluence 7.0.1 will fail on MSSQL if dbo schema is not used



    Description

      Issue Summary

      During an upgrade to Confluence 7.0.1 on SQL Server, the upgrade fails if the schema used to store the Confluence tables is not dbo.

      The current documentation, Database Setup for SQL Server, does not mention the dbo schema as a requirement for using Confluence with SQL Server.

      Steps to Reproduce

      1. Install a 6.x version of Confluence using SQL Server, storing the Confluence tables in any schema other than dbo (a schema-setup sketch is shown after these steps)
      2. Upgrade to 7.0.1
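
      For illustration, a non-dbo schema can be set up before installation with statements such as the following (the schema and user names are placeholders, not values Confluence requires):

      CREATE SCHEMA confschema AUTHORIZATION confuser;
      ALTER USER confuser WITH DEFAULT_SCHEMA = confschema;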

      Expected Results

      Confluence is upgraded successfully.

      Actual Results

      The following exception is thrown in the atlassian-confluence.log file:

      2019-09-26 02:46:29,884 ERROR [Catalina-utility-1] [atlassian.confluence.plugin.PluginFrameworkContextListener] launchUpgrades Upgrade failed, application will not start: Upgrade task com.atlassian.confluence.upgrade.upgradetask.SynchronyEvictionEventsPostSchemaUpgradeTask@30f045f1 failed during the SCHEMA_UPGRADE phase due to: StatementCallback; uncategorized SQLException for SQL [create unique  index e_h_r_idx on [EVENTS] ([history], [rev])]; SQL state [S0001]; error code [1913]; The operation failed because an index or statistics with name 'e_h_r_idx' already exists on table 'EVENTS'.; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: The operation failed because an index or statistics with name 'e_h_r_idx' already exists on table 'EVENTS'.
      

      Notes

      Although it is not mentioned in the error message, this error happens because the following DDL statements, executed earlier in the upgrade, had no effect:

      2019-09-26 02:46:28,431 INFO [Catalina-utility-1] [confluence.upgrade.ddl.HibernateDdlExecutor] executeDdlStatements Executing DDL: IF OBJECT_ID('dbo.EVENTS', 'U') IS NOT NULL   DROP TABLE dbo.EVENTS;
      2019-09-26 02:46:28,446 INFO [Catalina-utility-1] [confluence.upgrade.ddl.HibernateDdlExecutor] executeDdlStatements Executing DDL: IF OBJECT_ID('dbo.SNAPSHOTS', 'U') IS NOT NULL   DROP TABLE dbo.SNAPSHOTS;
      

      These statements reference the tables as "dbo.tablename" directly, so if the Confluence tables live in any schema other than dbo, OBJECT_ID('dbo.<tablename>', 'U') returns NULL and the guarded DROP is silently skipped.
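
      A minimal sketch of the behaviour (the schema name confschema is a placeholder): with the Confluence tables in a non-dbo schema, the guard evaluates to NULL and the DROP never runs:

      SELECT OBJECT_ID('dbo.EVENTS', 'U');        -- NULL, so the guarded DROP is skipped
      SELECT OBJECT_ID('confschema.EVENTS', 'U'); -- not NULL, the old table is left in place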

      This is caused by a method introduced in Confluence 7 that is called during the upgrade tasks; its source code is shown below:

      @Override
      public String getStatement() {
          if (config.isPostgreSql() || config.isMySql() || config.isH2()) {
              final String maybeEscapedTable = (escapeTableName ? escapeHelper.escapeIdentifier(tableName) : tableName);
              return "DROP TABLE IF EXISTS " + maybeEscapedTable;
          } else if (config.isSqlServer()) {
              return "IF OBJECT_ID('dbo." + tableName + "', 'U') IS NOT NULL " +
                      "  DROP TABLE dbo." + tableName + ";";
          } else if (config.isOracle()) {
              return "BEGIN " +
                      "      EXECUTE IMMEDIATE 'DROP TABLE " + tableName + "'; " +
                      "  EXCEPTION " +
                      "      WHEN OTHERS THEN NULL; " +
                      "  END;";
          } else {
              throw new IllegalStateException("Unknown database provider");
          }
      }
      

      As seen in the source code, the statement referenced in the logs is built as "DROP TABLE dbo." + tableName, with the dbo schema hardcoded.
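
      For comparison, a guard that omits the schema prefix resolves the table name against the caller's default schema (falling back to dbo). This is only a sketch of the difference, not the statement Confluence generates:

      IF OBJECT_ID('EVENTS', 'U') IS NOT NULL
          DROP TABLE EVENTS;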

      Workaround

      There are two possible workarounds to the problem.

      The first is to move the Confluence tables to the dbo schema before the upgrade, which allows the upgrade to complete successfully.
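
      For example, tables can be transferred into dbo with ALTER SCHEMA ... TRANSFER (the schema name below is a placeholder, and every Confluence table has to be transferred, not only the two shown):

      ALTER SCHEMA dbo TRANSFER confschema.EVENTS;
      ALTER SCHEMA dbo TRANSFER confschema.SNAPSHOTS;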

      The second workaround is to manually drop the EVENTS and SNAPSHOTS tables before the upgrade:

      drop table EVENTS;
      drop table SNAPSHOTS;
      

      This allows the tables to be created automatically during server startup, and all subsequent upgrade tasks that create indexes will be applied successfully.
