- Type: Bug
- Resolution: Unresolved
- Priority: Low
- Affects Version/s: 9.2.0, 9.2.3
- Component/s: Data Center - Core
- Severity: 3 - Minor
Issue Summary
In clustered instances, the following exception is recorded when a second (or subsequent) node starts up:
2025-04-23 07:38:41,366 ERROR [hz.confluence.event-5] [atlassian.confluence.event.ConfluenceListenerInvoker] log java.lang.RuntimeException occurred dispatching com.atlassian.confluence.cluster.hazelcast.HazelcastClusterEventWrapper to [com.atlassian.confluence.impl.cache.hazelcast.hibernate.LocalRegionCacheMaxSizeAdjuster] java.lang.RuntimeException: Listener: com.atlassian.confluence.impl.cache.hazelcast.hibernate.LocalRegionCacheMaxSizeAdjuster event: com.atlassian.confluence.cluster.hazelcast.HazelcastClusterEventWrapper
The exception appears to relate to the LocalRegionCacheMaxSizeAdjuster class, which was introduced to listen for MaxCacheSizeChangedEvent events and adjust the maximum cache size for local region caches.
Environment:
- Confluence 9.2.x (tested on 9.2.0 and 9.2.3)
- Clustered environment.
Steps to Reproduce
- Set up and start a Confluence 9.2.0 instance
- Spin up a second node as part of that cluster
Expected Results
When the second node starts up, the above exception should not be logged.
Actual Results
The following exception is thrown in the atlassian-confluence.log file:
2025-04-23 07:38:41,366 ERROR [hz.confluence.event-5] [atlassian.confluence.event.ConfluenceListenerInvoker] log java.lang.RuntimeException occurred dispatching com.atlassian.confluence.cluster.hazelcast.HazelcastClusterEventWrapper to [com.atlassian.confluence.impl.cache.hazelcast.hibernate.LocalRegionCacheMaxSizeAdjuster] java.lang.RuntimeException: Listener: com.atlassian.confluence.impl.cache.hazelcast.hibernate.LocalRegionCacheMaxSizeAdjuster event: com.atlassian.confluence.cluster.hazelcast.HazelcastClusterEventWrapper
The instance itself starts up, but the exception raises doubts about whether the LocalRegionCacheMaxSizeAdjuster is configured correctly.
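To confirm whether a node is affected, the log can be searched for ERROR lines naming the failing listener. A minimal sketch follows; it writes a sample log line to a temporary file for illustration, so point LOG at your node's real atlassian-confluence.log instead (its location varies by installation and is an assumption here):

```shell
# Sketch: detect the LocalRegionCacheMaxSizeAdjuster dispatch error in a log.
# For illustration we create a temp file containing a sample log line;
# in practice, set LOG to the node's atlassian-confluence.log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2025-04-23 07:38:41,366 ERROR [hz.confluence.event-5] [atlassian.confluence.event.ConfluenceListenerInvoker] log java.lang.RuntimeException occurred dispatching com.atlassian.confluence.cluster.hazelcast.HazelcastClusterEventWrapper to [com.atlassian.confluence.impl.cache.hazelcast.hibernate.LocalRegionCacheMaxSizeAdjuster]
EOF

# Count ERROR lines that mention the listener class.
MATCHES=$(grep -c "ERROR.*LocalRegionCacheMaxSizeAdjuster" "$LOG")
echo "matches: $MATCHES"
rm -f "$LOG"
```

A non-zero count indicates the node logged the dispatch failure at least once during startup.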
Workaround
Currently, there is no known workaround for this behavior. A workaround will be added here when one becomes available.