Type: Bug
Resolution: Obsolete
Priority: Medium
Affects Version/s: 2.3, 2.4, 2.5, 2.7, 2.8, 2.9, 3.0, 5.1.5
When a cluster node starts, it first fires up the cluster service and only then initializes the plugin subsystem. This means that for the period between cluster-service startup and plugin-subsystem initialization (ConfluencePluginManager), the cluster does not have access to plugin classes. If during this interval a distributed cache is updated with a class from a plugin on a different node, and that update is received, we get a ClassNotFoundException:
2008-07-18 14:20:52,358 ERROR [Logger@9247854 3.3.1/389] [Coherence] log
2008-07-18 14:20:52.332 Oracle Coherence GE 3.3.1/389 <Error> (thread=ReplicatedCache, member=3):
java.io.IOException: readObject failed: java.lang.ClassNotFoundException: com.atlassian.confluence.extra.jira.CacheKey
    at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1362)
    at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1208)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:242)
    at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:585)
    at com.tangosol.io.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputStream.java:68)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
    at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2084)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2202)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:3)
    at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache$CacheUpdate.onReceived(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:123)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
    at java.lang.Thread.run(Thread.java:613)
ClassLoader: com.atlassian.plugin.classloader.DelegationClassLoader@9ed2e4
    at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.java:2092)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2202)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:3)
    at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache$CacheUpdate.onReceived(ReplicatedCache.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(Service.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Service.CDB:123)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
    at java.lang.Thread.run(Thread.java:613)
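The failure mode can be reproduced in miniature with plain JDK serialization: the same bytes that serialize fine on node1 fail to deserialize on node2 while the plugin classloader is not yet available. This is only an illustrative sketch; PluginCacheKey and readWithClusterLoader are hypothetical stand-ins for the real Confluence/Coherence classes, and the "plugins not started" state is modeled by refusing to resolve the plugin class.

```java
import java.io.*;

// Simulates node2 receiving a cache update before its plugin subsystem is up.
public class PluginClassRace {

    // Hypothetical stand-in for a plugin-provided cache key such as
    // com.atlassian.confluence.extra.jira.CacheKey.
    static class PluginCacheKey implements Serializable {
        private static final long serialVersionUID = 1L;
        final String pageId;
        PluginCacheKey(String pageId) { this.pageId = pageId; }
    }

    // Deserializes bytes while resolving classes as the cluster node would.
    // Before the plugin subsystem has started, plugin classes are not visible,
    // which we model here by refusing to resolve the plugin class.
    static Object readWithClusterLoader(byte[] bytes, boolean pluginsStarted)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes)) {
            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
                if (!pluginsStarted && desc.getName().contains("PluginCacheKey")) {
                    throw new ClassNotFoundException(desc.getName());
                }
                return super.resolveClass(desc);
            }
        }) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // node1 serializes the cache update and sends the bytes over the wire
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(new PluginCacheKey("12345"));
        }
        byte[] wire = buf.toByteArray();

        // node2 receives the update before plugin initialization: deserialization fails
        try {
            readWithClusterLoader(wire, false);
        } catch (ClassNotFoundException e) {
            System.out.println("before plugin start: " + e.getClass().getSimpleName());
        }

        // once the plugin subsystem is up, the very same bytes deserialize fine
        PluginCacheKey key = (PluginCacheKey) readWithClusterLoader(wire, true);
        System.out.println("after plugin start: ok, pageId=" + key.pageId);
    }
}
```

Note that the window is purely one of timing: nothing is wrong with the serialized form, only with which classes the receiving node can resolve at the moment the update arrives.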
Steps to reproduce:
1. Start node1.
2. Access a page on node1 containing a JIRA Issues macro (wait until the issues are loaded).
3. Start node2 and observe the exception in node2's log file.
In answer to the question of whether switching to a Hazelcast cache would suffer from the same bug:
When an object is put into a Hazelcast cache, it is serialized to bytes, which are transferred to the master node and the backup node (assuming one is configured). On those nodes, the object is deserialized from the bytes.
Any request to get the object back involves a request to the master node, which serializes the master copy to bytes; these are transferred back to the requesting node, where the object is deserialized from the bytes and returned to the caller.
The upshot is that Hazelcast will suffer from the same problem.
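The put/get round trip described above can be sketched with plain JDK serialization. ByteBackedMap below is a hypothetical toy, not Hazelcast API: it stores values as serialized bytes (like the master copy on the owning member) and deserializes a fresh copy on every get, which is exactly the step where a ClassNotFoundException would surface if the value's class is not yet loadable on the requesting node.

```java
import java.io.*;
import java.util.*;

// Toy model of a distributed cache's storage behaviour: values live as
// serialized bytes, and every get() deserializes a fresh copy for the caller.
public class ByteBackedMap {

    private final Map<String, byte[]> store = new HashMap<>();

    // put: serialize the value to bytes, as a distributed cache does before
    // shipping the entry to the master (and backup) member.
    public void put(String key, Serializable value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(value);
        }
        store.put(key, buf.toByteArray());
    }

    // get: deserialize the stored bytes into a new object, as the requesting
    // member does with the bytes returned by the master. The value's class
    // must be resolvable *here*, at read time, or this throws
    // ClassNotFoundException.
    public Object get(String key) throws IOException, ClassNotFoundException {
        byte[] bytes = store.get(key);
        if (bytes == null) return null;
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ByteBackedMap map = new ByteBackedMap();
        ArrayList<String> value = new ArrayList<>(List.of("a", "b"));
        map.put("k", value);

        // The caller gets back an equal but distinct copy: every read is a
        // deserialization, never a reference to the stored object.
        Object copy = map.get("k");
        System.out.println(copy.equals(value)); // true
        System.out.println(copy == value);      // false
    }
}
```

Because every get is a deserialization, the startup race is independent of which node originally put the entry: any node whose plugin classloader is not yet registered will fail to read plugin-keyed entries, regardless of the cache product.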