Details
Type: Suggestion
Resolution: Unresolved

Description
We are taking inspiration from this link: https://bitbucket.org/atlassian/dc-deployments-automation to automate both Server and DC installations of Atlassian applications. This has worked really well with JIRA, but we are having some issues with Confluence, mainly because of the way information is stored in the file system. JIRA, for example, does not store information like the server id, license, and cluster details in dbconfig.xml, whereas Confluence keeps all of that in confluence.cfg.xml.
For that reason, we were able to automate the installation of JIRA Server and DC easily. We were also able to automate the installation of Confluence Server with a special trick: we do not copy the confluence.cfg.xml file on every build of our code, because overwriting it with a copy of the template, which has no license or server id, makes Confluence assume a fresh install.
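The trick can be sketched roughly like this (a minimal Python stand-in for the Ansible copy task; the paths and function name are illustrative, not from our actual playbook):

```python
import shutil
from pathlib import Path

def install_config(template: Path, confluence_home: Path) -> bool:
    """Copy the rendered confluence.cfg.xml only on a fresh install.

    An existing file already holds the generated server id and license;
    overwriting it would make Confluence assume a new setup, so on a
    redeploy we leave it untouched.
    """
    target = confluence_home / "confluence.cfg.xml"
    if target.exists():
        return False  # redeploy: keep the existing config as-is
    shutil.copy(template, target)
    return True  # fresh install: seed from the template
```

The same effect can be had in Ansible with `copy`/`template` guarded by a `stat` check on the destination file.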
When doing the server installation, we use the following confluence.cfg.xml template:
<?xml version="1.0" encoding="UTF-8"?>
<confluence-configuration>
  <setupStep>setupstart</setupStep>
  <setupType>custom</setupType>
  <buildNumber>0</buildNumber>
  <properties>
    <property name="confluence.database.choice">postgresql</property>
    <property name="confluence.database.connection.type">database-type-standard</property>
    <property name="hibernate.dialect">com.atlassian.confluence.impl.hibernate.dialect.PostgreSQLDialect</property>
    <property name="webwork.multipart.saveDir">${localHome}/temp</property>
    <property name="attachments.dir">${confluenceHome}/attachments</property>
    <property name="hibernate.connection.driver_class">{{ atl_db_driver }}</property>
    <property name="hibernate.connection.url">{{ atl_jdbc_url }}</property>
    <property name="hibernate.connection.username">{{ atl_jdbc_user }}</property>
    <property name="hibernate.connection.password">{{ atl_jdbc_password }}</property>
    <property name="hibernate.c3p0.min_size">{{ atl_db_poolminsize }}</property>
    <property name="hibernate.c3p0.max_size">{{ atl_db_poolmaxsize }}</property>
    <property name="hibernate.c3p0.timeout">{{ atl_db_timeout }}</property>
    <property name="hibernate.c3p0.idle_test_period">{{ atl_db_idletestperiod }}</property>
    <property name="hibernate.c3p0.max_statements">{{ atl_db_maxstatements }}</property>
    <property name="hibernate.c3p0.validate">{{ atl_db_validate }}</property>
    <property name="hibernate.c3p0.acquire_increment">{{ atl_db_acquireincrement }}</property>
    <property name="hibernate.c3p0.preferredTestQuery">select version();</property>
  </properties>
</confluence-configuration>
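As a side note, we sanity-check a rendered template before deploying it. A minimal sketch of that check (a crude stand-in for Ansible's Jinja2 rendering, assuming only simple `{{ var }}` placeholders; the function names are ours, not from any library):

```python
import re
import xml.etree.ElementTree as ET

def render(template: str, variables: dict) -> str:
    """Substitute {{ name }} placeholders (crude stand-in for Jinja2)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: variables[m.group(1)], template)

def validate_cfg(xml_text: str) -> None:
    """Fail fast if the rendered confluence.cfg.xml is not well-formed XML."""
    root = ET.fromstring(xml_text)  # raises ParseError on malformed XML
    assert root.tag == "confluence-configuration"
```

This catches broken markup before Confluence ever reads the file.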
Now, applying a variant of that template with the DC configuration does not work and causes a crash:
<confluence-configuration>
  <setupStep>setupstart</setupStep>
  <setupType>custom</setupType>
  <buildNumber>0</buildNumber>
  <properties>
    <property name="confluence.database.choice">postgresql</property>
    <property name="confluence.database.connection.type">database-type-standard</property>
    <property name="hibernate.dialect">com.atlassian.confluence.impl.hibernate.dialect.PostgreSQLDialect</property>
    <property name="webwork.multipart.saveDir">${localHome}/temp</property>
    <property name="attachments.dir">${confluenceHome}/attachments</property>
    <property name="hibernate.connection.driver_class">{{ atl_db_driver }}</property>
    <property name="hibernate.connection.url">{{ atl_jdbc_url }}</property>
    <property name="hibernate.connection.username">{{ atl_jdbc_user }}</property>
    <property name="hibernate.connection.password">{{ atl_jdbc_password }}</property>
    <property name="hibernate.c3p0.min_size">{{ atl_db_poolminsize }}</property>
    <property name="hibernate.c3p0.max_size">{{ atl_db_poolmaxsize }}</property>
    <property name="hibernate.c3p0.timeout">{{ atl_db_timeout }}</property>
    <property name="hibernate.c3p0.idle_test_period">{{ atl_db_idletestperiod }}</property>
    <property name="hibernate.c3p0.max_statements">{{ atl_db_maxstatements }}</property>
    <property name="hibernate.c3p0.validate">{{ atl_db_validate }}</property>
    <property name="hibernate.c3p0.acquire_increment">{{ atl_db_acquireincrement }}</property>
    <property name="hibernate.c3p0.preferredTestQuery">select version();</property>
    <property name="shared-home">{{ atl_product_home_shared }}</property>
    <property name="confluence.cluster">true</property>
    <property name="confluence.cluster.home">{{ atl_product_home_shared }}</property>
    <property name="confluence.cluster.aws.iam.role">{{ atl_hazelcast_network_aws_iam_role }}</property>
    <property name="confluence.cluster.aws.region">{{ atl_hazelcast_network_aws_iam_region }}</property>
    <property name="confluence.cluster.aws.host.header">{{ atl_hazelcast_network_aws_host_header }}</property>
    <property name="confluence.cluster.aws.tag.key">{{ atl_hazelcast_network_aws_tag_key }}</property>
    <property name="confluence.cluster.aws.tag.value">{{ atl_hazelcast_network_aws_tag_value }}</property>
    <property name="confluence.cluster.join.type">aws</property>
    <property name="confluence.cluster.name">{{ atl_aws_stack_name }}</property>
    <property name="confluence.cluster.ttl">1</property>
  </properties>
</confluence-configuration>
This is the error message we are getting:
com.atlassian.util.concurrent.LazyReference$InitializationException: java.lang.IllegalStateException: Spring Application context has not been set
    at com.atlassian.util.concurrent.LazyReference.getInterruptibly(LazyReference.java:149)
    at com.atlassian.util.concurrent.LazyReference.get(LazyReference.java:112)
    at com.atlassian.confluence.setup.webwork.ConfluenceXWorkTransactionInterceptor.getTransactionManager(ConfluenceXWorkTransactionInterceptor.java:29)
    at com.atlassian.xwork.interceptors.XWorkTransactionInterceptor.intercept(XWorkTransactionInterceptor.java:56)
    at com.opensymphony.xwork.DefaultActionInvocation.invoke(DefaultActionInvocation.java:165)
    at com.atlassian.confluence.xwork.SetupIncompleteInterceptor.intercept(SetupIncompleteInterceptor.java:52)
    at com.opensymphony.xwork.DefaultActionInvocation.invoke(DefaultActionInvocation.java:165)
    at com.atlassian.confluence.security.interceptors.SecurityHeadersInterceptor.intercept(SecurityHeadersInterceptor.java:39)
    at com.opensymphony.xwork.DefaultActionInvocation.invoke(DefaultActionInvocation.java:165)
    at com.opensymphony.xwork.interceptor.AroundInterceptor.intercept(AroundInterceptor.java:35)
My guess is that because the template does not contain the license and server id, Confluence fails to accept the cluster information, and that causes the crash. Any idea how this template can be used for a DC setup, please?
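One workaround we are considering, if the guess above is right: instead of overwriting confluence.cfg.xml with a DC template, append the cluster properties to the file Confluence generated during Server setup, so the server id and license entries survive. A minimal sketch (the helper name and the property values are ours; we have not confirmed this is the supported approach):

```python
import xml.etree.ElementTree as ET

# Illustrative values only; in the real playbook these would come from
# the atl_* variables shown in the templates above.
CLUSTER_PROPERTIES = {
    "confluence.cluster": "true",
    "confluence.cluster.home": "/media/atl/confluence/shared-home",  # assumed path
    "confluence.cluster.join.type": "aws",
}

def add_cluster_properties(cfg_path: str, props: dict) -> None:
    """Merge cluster properties into an existing confluence.cfg.xml,
    leaving every property Confluence already wrote (server id,
    license, database settings) untouched."""
    tree = ET.parse(cfg_path)
    properties = tree.getroot().find("properties")
    existing = {p.get("name") for p in properties.findall("property")}
    for name, value in props.items():
        if name in existing:
            continue  # never clobber a value Confluence generated
        prop = ET.SubElement(properties, "property", name=name)
        prop.text = value
    tree.write(cfg_path, encoding="UTF-8", xml_declaration=True)
```

Run after the Server setup has completed once, this would keep the generated identity intact while adding the DC settings; whether Confluence then picks up clustering cleanly is exactly what we are asking about.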