Atlassian-cache Caches configured with .replicateViaInvalidation() should be fully asynchronous

The default configuration of atlassian-cache Caches in Bitbucket Data Center is .replicateAsynchronously(), but the implementation of the .replicateViaInvalidation() configuration in Bitbucket versions up to and including 4.6.0 is not fully asynchronous. Instead, a Hazelcast IMap is used internally to manage invalidations between cluster nodes. This can cause synchronous (blocking) remote operations between cluster nodes, which limits performance and allows a delay on one cluster node (e.g., due to very long GC pauses or OS/networking issues) to unnecessarily hold up other nodes in the cluster.
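
For reference, the configuration in question is selected through atlassian-cache's CacheSettingsBuilder. The following is a minimal sketch assuming the standard atlassian-cache API; the cache name and value types are purely illustrative:

{code:java}
import com.atlassian.cache.Cache;
import com.atlassian.cache.CacheManager;
import com.atlassian.cache.CacheSettings;
import com.atlassian.cache.CacheSettingsBuilder;

public class ExampleCacheFactory {
    // Builds a remote cache that replicates by invalidation; asynchronous
    // replication is the default, but is spelled out here for clarity.
    static Cache<String, String> createCache(CacheManager cacheManager) {
        CacheSettings settings = new CacheSettingsBuilder()
                .remote()
                .replicateViaInvalidation()
                .replicateAsynchronously()
                .build();
        // The cache name is illustrative; no CacheLoader is supplied, so
        // entries are added explicitly with put() and evicted with remove().
        return cacheManager.getCache("com.example:description-cache", null, settings);
    }
}
{code}

With this configuration, a put() or remove() on one node sends only an invalidation message to its peers rather than copying the new value across the cluster.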

Beginning with Bitbucket Data Center 4.7, Caches configured with both .replicateViaInvalidation() and .replicateAsynchronously() (the latter being the default) use fully asynchronous invalidation, based on a Hazelcast ITopic. This means cluster nodes do not wait unnecessarily for cache invalidations, reducing the chance that delays on one node also affect other nodes in the cluster. In the rare event that a cluster node loses its connections to other nodes but remains up and connected to the load balancer, shared database, and shared filesystem (i.e., a "split brain" scenario), any cached data that may have become stale due to lost ITopic messages is also invalidated automatically when the node later rejoins the cluster.
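
Conceptually, the ITopic-based approach is fire-and-forget publish/subscribe. The standalone Hazelcast sketch below (not Bitbucket's actual internal code; the topic name and key are illustrative) shows why a publisher does not block on slow subscribers:

{code:java}
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class InvalidationTopicSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Each node subscribes and evicts its local entry when a key is
        // announced; delivery is asynchronous, so a slow or paused
        // subscriber does not delay the publisher.
        ITopic<String> invalidations = hz.getTopic("cache-invalidation"); // illustrative name
        invalidations.addMessageListener(message ->
                System.out.println("evict local entry for key: " + message.getMessageObject()));

        // publish() returns without waiting on remote listeners, unlike a
        // blocking IMap operation, which involves a synchronous remote call
        // to the node that owns the key.
        invalidations.publish("some-cache-key");

        hz.shutdown();
    }
}
{code}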

The use of fully asynchronous .replicateViaInvalidation() caches reduces coupling between cluster nodes and scales better when large amounts of data are cached from an external "source of truth" (e.g., the shared database or filesystem) that is available to all cluster nodes at the same or similar cost to a remote operation between cluster nodes.
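
As a sketch of that pattern: the loader below treats the shared database as the source of truth, so an invalidation received by a peer node simply causes the next get() on that node to re-read the value locally rather than fetch it from another node. The UserDao interface, cache name, and method names are hypothetical, introduced only for illustration:

{code:java}
import com.atlassian.cache.Cache;
import com.atlassian.cache.CacheLoader;
import com.atlassian.cache.CacheManager;
import com.atlassian.cache.CacheSettingsBuilder;

public class UserDisplayNameCache {
    // Hypothetical DAO backed by the shared database (the "source of truth").
    interface UserDao {
        String findDisplayName(long userId);
    }

    private final Cache<Long, String> cache;

    public UserDisplayNameCache(CacheManager cacheManager, UserDao userDao) {
        cache = cacheManager.getCache(
                "com.example:user-display-name", // illustrative name
                (CacheLoader<Long, String>) userDao::findDisplayName,
                new CacheSettingsBuilder()
                        .remote()
                        .replicateViaInvalidation()
                        .replicateAsynchronously()
                        .build());
    }

    public String getDisplayName(long userId) {
        // A miss (including one caused by a replicated invalidation) falls
        // through to the database, never to another cluster node.
        return cache.get(userId);
    }

    public void onUserRenamed(long userId) {
        // Evict locally; an asynchronous invalidation message tells peers
        // to drop their copies too.
        cache.remove(userId);
    }
}
{code}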
