Suggestion
Resolution: Unresolved
Problem Definition
The Guava cache has performance and memory problems that can cause extra CPU usage or memory pressure under high load for heavily accessed caches (see JRASERVER-66356, JRASERVER-67807). A workload sketch illustrating the access pattern follows the ticket list below.
Specific tickets:
- https://github.com/google/guava/issues/2063
This is due to the cache's overhead, in particular the GC thrashing from ConcurrentLinkedQueue and a hot read counter
- https://github.com/google/guava/issues/2408
Guava LocalCache recencyQueue is 223M entries dominating 5.3GB of heap. ... The access rate has to exceed the drain (cleanUp) rate. If you have more threads than CPUs doing nothing but reading from the cache without any pause time, then the queue will grow excessively large
- https://github.com/google/guava/issues/1487
A CLQ may suffer contention at the tail due to all threads spinning in a CAS loop to append their element. This is reduced by having one per segment, by default 4, though hot entries will hash to the same segment and accessing threads may contend.
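To make the failure mode concrete, here is a minimal, hypothetical sketch of the workload the tickets above describe: a size-bounded, access-expiring Guava cache read by more threads than CPUs, all hitting a hot key with no pause time. The cache configuration, key, and thread counts are illustrative only, not taken from JIRA code; under these conditions every get() appends to the segment's recencyQueue, which can grow faster than it is drained.

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RecencyQueuePressureDemo {

    public static void main(String[] args) throws InterruptedException {
        // expireAfterAccess()/maximumSize() make the cache track access order,
        // which is what routes every read through the segment's recencyQueue.
        LoadingCache<Integer, String> cache = CacheBuilder.newBuilder()
                .maximumSize(10_000)
                .expireAfterAccess(30, TimeUnit.MINUTES)
                .build(new CacheLoader<Integer, String>() {
                    @Override
                    public String load(Integer key) {
                        return "value-" + key;
                    }
                });

        // More reader threads than CPUs, all hammering one hot key with no
        // pause time - the pattern described in guava#2408 and guava#1487.
        int threads = Runtime.getRuntime().availableProcessors() * 4;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    cache.getUnchecked(1); // hot key -> single segment
                }
            });
        }

        // Let the readers run briefly, then stop; with a profiler or heap dump
        // attached, Segment.recencyQueue growth is visible during this window.
        TimeUnit.SECONDS.sleep(30);
        pool.shutdownNow();
    }
}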
Code related to the problem
- Related variables
/**
 * The recency queue is used to record which entries were accessed for updating the access
 * list's ordering. It is drained as a batch operation when either the DRAIN_THRESHOLD is
 * crossed or a write occurs on the segment.
 */
final Queue<ReferenceEntry<K, V>> recencyQueue;

/**
 * A counter of the number of reads since the last write, used to drain queues on a small
 * fraction of read operations.
 */
final AtomicInteger readCount = new AtomicInteger();
methods:
- postReadCleanup
/**
 * Performs routine cleanup following a read. Normally cleanup happens during writes. If cleanup
 * is not observed after a sufficient number of reads, try cleaning up from the read thread.
 */
void postReadCleanup() {
  if ((readCount.incrementAndGet() & DRAIN_THRESHOLD) == 0) {
    cleanUp();
  }
}
- recordRead()
/**
 * Records the relative order in which this read was performed by adding {@code entry} to the
 * recency queue. At write-time, or when the queue is full past the threshold, the queue will
 * be drained and the entries therein processed.
 *
 * <p>Note: locked reads should use {@link #recordLockedRead}.
 */
void recordRead(ReferenceEntry<K, V> entry, long now) {
  if (map.recordsAccess()) {
    entry.setAccessTime(now);
  }
  recencyQueue.add(entry);
}
- drainRecencyQueue()
/**
 * Drains the recency queue, updating eviction metadata that the entries therein were read in
 * the specified relative order. This currently amounts to adding them to relevant eviction
 * lists (accounting for the fact that they could have been removed from the map since being
 * added to the recency queue).
 */
void drainRecencyQueue() {
  ReferenceEntry<K, V> e;
  while ((e = recencyQueue.poll()) != null) {
    // An entry may be in the recency queue despite it being removed from
    // the map. This can occur when the entry was concurrently read while a
    // writer is removing it from the segment or after a clear has removed
    // all of the segment's entries.
    if (accessQueue.contains(e)) {
      accessQueue.add(e);
    }
  }
}
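For a sense of how rarely the read path pays for a drain, here is a small standalone sketch of the postReadCleanup() arithmetic. The DRAIN_THRESHOLD value of 0x3F is an assumption taken from the Guava sources; with that mask, cleanUp() runs on roughly one read in 64, while recordRead() enqueues on every read, which matches the tickets' observation that the queue grows when the access rate exceeds the drain rate.

import java.util.concurrent.atomic.AtomicInteger;

public class DrainThresholdMath {

    // Assumed value of LocalCache.Segment's DRAIN_THRESHOLD; it is used as a
    // bitmask, so the check below fires on roughly 1 out of every 64 reads.
    static final int DRAIN_THRESHOLD = 0x3F;

    public static void main(String[] args) {
        AtomicInteger readCount = new AtomicInteger();
        int cleanups = 0;
        int reads = 1_000_000;
        for (int i = 0; i < reads; i++) {
            // Same condition as postReadCleanup(): only about 1 in 64 reads on
            // a segment triggers cleanUp(); the rest only enqueue into
            // recencyQueue and bump the shared readCount counter.
            if ((readCount.incrementAndGet() & DRAIN_THRESHOLD) == 0) {
                cleanups++;
            }
        }
        System.out.printf("%d reads triggered %d cleanups (1 per %d reads)%n",
                reads, cleanups, reads / cleanups);
    }
}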
Suggested Solution
Possible solutions:
- Avoid using expiresAfterAccess() or evictsBySize()
- This will remove contention on recencyQueue; contention on readCount will still be present (see the first sketch after this list)
- Switch to another implementation
- Caffeine looks promising - https://github.com/ben-manes/caffeine/wiki/Benchmarks
- It also has refresh semantics - https://github.com/ben-manes/caffeine/wiki/Refresh (see the Caffeine sketch after this list)
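A minimal sketch of the first option, assuming the cache's consumers can live with write-based expiry instead of access-based eviction. The loader and durations below are hypothetical and not tied to any specific JIRA cache; without expireAfterAccess() or maximumSize()/maximumWeight(), the segments should not need per-read access ordering, so reads no longer funnel through recencyQueue (readCount is still incremented on every read).

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class WriteExpiryOnlyCache {

    // Hypothetical loader standing in for whatever the real cache computes.
    private static final CacheLoader<String, String> LOADER = new CacheLoader<String, String>() {
        @Override
        public String load(String key) {
            return expensiveLookup(key);
        }
    };

    // No expireAfterAccess() and no size bound: eviction happens purely by
    // entry age, and the trade-off is unbounded size between expirations.
    static final LoadingCache<String, String> CACHE = CacheBuilder.newBuilder()
            .expireAfterWrite(10, TimeUnit.MINUTES)
            .build(LOADER);

    static String expensiveLookup(String key) {
        return "value-for-" + key; // placeholder for the real computation
    }

    public static void main(String[] args) {
        System.out.println(CACHE.getUnchecked("projectRoleActors:10000"));
    }
}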
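And a sketch of the second option: the same cache expressed with Caffeine. The sizes, durations, and loader are placeholders; the point is that Caffeine's builder mirrors Guava's (maximumSize, expireAfterAccess) while recording reads in a bounded, lossy read buffer instead of a per-segment ConcurrentLinkedQueue, and refreshAfterWrite() provides the refresh semantics mentioned above.

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class CaffeineMigrationSketch {

    static final LoadingCache<String, String> CACHE = Caffeine.newBuilder()
            .maximumSize(10_000)                     // eviction by size, as before
            .expireAfterAccess(30, TimeUnit.MINUTES) // access expiry, as before
            .refreshAfterWrite(5, TimeUnit.MINUTES)  // background refresh of stale entries
            .build(CaffeineMigrationSketch::expensiveLookup);

    static String expensiveLookup(String key) {
        return "value-for-" + key; // placeholder for the real computation
    }

    public static void main(String[] args) {
        // get() loads on miss and returns the cached value on hit, similar to
        // Guava's getUnchecked(); entries older than the refresh duration are
        // reloaded asynchronously on access.
        System.out.println(CACHE.get("projectRoleActors:10000"));
    }
}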
Workaround
None
- is related to
- JRASERVER-66356 JIRA becomes unresponsive because of growing recency queue in frequently used projectRoleActors cache. - Closed
- JRASERVER-66399 Increase number of cache segment for local cache - Gathering Interest
- relates to
- JRASERVER-67807 Increased CPU usage due to contention in docValuesCache - Closed
- JRASERVER-70518 Increase number of cache stripes for EHCache cache - Closed
- DCNG-185
- is mentioned by
- JSEV-2653