Details
- Type: Bug
- Resolution: Fixed
- Priority: Low
- Affects Version/s: 6.4.14, 7.2.9, 7.5.1, 7.2.11
- 6.04
- 3
- Severity 2 - Major
- 7
Description
Summary
CachingTaggingAvatarStore is doing a removal (and thus a replicated event) on every read request.
From Chris Fuller's comment on this Answers thread, we have a pointer to the code we can improve (point 2).
Chris Fuller Oct 25, 2016
This particular cache is doing a removal (and thus a replicated event) on every read request, which is just plain dumb.
com.atlassian.jira.avatar.CachingTaggingAvatarStore#getByIdTagged

    @Override
    public Avatar getByIdTagged(final Long avatarId) {
        try {
            return taggedAvatars.get(avatarId, () -> tagAndRetrieve(avatarId)).orElse(null);
        } finally {
            avatars.remove(avatarId);
        }
    }

That `remove` probably isn't necessary at all. The point is that once an avatar is "tagged" with the metadata that confirms it came from JIRA, we don't need to hold onto the untagged version of it anymore. But on a cache hit, that untagged one is already gone, so the remove on the untagged avatar is just wasteful. It could be moved inside the loading function or possibly removed altogether, depending on what the surrounding code does (I didn't dig in further than that).
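The fix Chris suggests (moving the remove inside the loading function) can be sketched as follows. This is a simplified model only: plain ConcurrentHashMaps and String avatars stand in for the Atlassian Cache API and Avatar type, and the removal counter exists purely to demonstrate the behavior; it is not the actual Jira code.

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the proposed fix: remove the untagged entry inside the loading
// function, so the (cluster-replicated) remove fires only on a cache miss,
// not on every read. Names mirror the Jira code but the types are stand-ins.
public class TaggedAvatarCacheSketch {
    static final ConcurrentHashMap<Long, String> avatars = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<Long, Optional<String>> taggedAvatars = new ConcurrentHashMap<>();
    static final AtomicInteger removals = new AtomicInteger(); // counts replicated removes

    // Stand-in for tagAndRetrieve: tag the untagged avatar, if present.
    static Optional<String> tagAndRetrieve(Long id) {
        return Optional.ofNullable(avatars.get(id)).map(a -> a + " [tagged]");
    }

    // Fixed version: the remove runs inside the loader (miss path only),
    // instead of in a finally block that runs on every request.
    static String getByIdTagged(Long avatarId) {
        return taggedAvatars.computeIfAbsent(avatarId, id -> {
            Optional<String> tagged = tagAndRetrieve(id);
            avatars.remove(id);          // happens at most once per key
            removals.incrementAndGet();
            return tagged;
        }).orElse(null);
    }

    public static void main(String[] args) {
        avatars.put(1L, "avatar-1");
        getByIdTagged(1L);   // miss: loads, tags, removes the untagged entry
        getByIdTagged(1L);   // hit: no removal, no replicated event
        getByIdTagged(1L);
        System.out.println("removals=" + removals.get()); // prints removals=1
    }
}
```

With the original `finally` placement the counter would read 3 after the same three calls; with the load-time placement it reads 1, which is the whole point of the change.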
Steps to Reproduce
- Load URL: /secure/projectavatar?pid=<P_ID>&avatarId=<A_ID>
Expected Results
com.atlassian.jira.avatar.CachingTaggingAvatarStore.getByIdTagged should not perform a removal (and thus trigger a replicated event) on every read request.
Actual Results
com.atlassian.jira.avatar.CachingTaggingAvatarStore.getByIdTagged removes avatarId from avatars cache on every read.
Thread dumps from customer instances have shown multiple threads waiting on cache replication resulting from getByIdTagged > ehcache.Cache.remove. Example stack trace:
"http-nio-8080-exec-371 url:/secure/projectavatar username:user1 " #89237 daemon prio=5 tid=0x00007f44e0066000 nid=0x1a86f waiting on condition [0x00007f4eb5929000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x00007f56bcc7b2d8> (a java.util.concurrent.CompletableFuture$Signaller)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        ...
        at com.atlassian.jira.cluster.distribution.JiraCacheManagerPeerProvider.waitForAll(JiraCacheManagerPeerProvider.java:95)
        at com.atlassian.jira.cluster.distribution.JiraCacheManagerPeerProvider.listRemoteCachePeers(JiraCacheManagerPeerProvider.java:63)
        ...
        at net.sf.ehcache.constructs.EhcacheDecoratorAdapter.remove(EhcacheDecoratorAdapter.java:155)
        at com.atlassian.cache.ehcache.LoadingCache.remove(LoadingCache.java:208)
        at com.atlassian.cache.ehcache.DelegatingCache.remove(DelegatingCache.java:137)
        at com.atlassian.jira.avatar.CachingTaggingAvatarStore.getByIdTagged(CachingTaggingAvatarStore.java:107)
        at com.atlassian.jira.avatar.AvatarManagerImpl.getByIdTagged(AvatarManagerImpl.java:104)
        ...
   Locked ownable synchronizers:
        - <0x00007f5134a83a00> (a java.util.concurrent.ThreadPoolExecutor$Worker)
        - <0x00007f5194be96b0> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
Notes
Also note that, since avatarId is removed from the avatars cache, repopulating the entry takes time, and every thread that needs it blocks until the data is loaded.
Thread dumps from customer instances have shown six threads waiting on cache population:
"http-nio-8080-exec-280 url:/secure/QuickCreateIssue!...a username:user2 " #69636 daemon prio=5 tid=0x00007f4434086800 nid=0x1ce2f waiting on condition [0x00007f4ec3214000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x00007f5194be96b0> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
        at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
        at net.sf.ehcache.concurrent.ReadWriteLockSync.lock(ReadWriteLockSync.java:50)
        at net.sf.ehcache.constructs.blocking.BlockingCache.acquiredLockForKey(BlockingCache.java:196)
        at net.sf.ehcache.constructs.blocking.BlockingCache.get(BlockingCache.java:158)
        at com.atlassian.cache.ehcache.LoadingCache.get(LoadingCache.java:75)
        at com.atlassian.cache.ehcache.DelegatingCache.get(DelegatingCache.java:99)
        at com.atlassian.jira.avatar.CachingTaggingAvatarStore.getById(CachingTaggingAvatarStore.java:99)
        at com.atlassian.jira.avatar.AvatarManagerImpl.getById(AvatarManagerImpl.java:99)
        at com.atlassian.jira.project.ProjectImpl.getAvatar(ProjectImpl.java:132)
        ...
Note that these threads are waiting in CachingTaggingAvatarStore.getById.
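The queuing behavior described above can be illustrated with a small sketch. This is not the Atlassian Cache or ehcache API; ConcurrentHashMap.computeIfAbsent is used as an assumed stand-in for a blocking loading cache, and the 200 ms sleep models a slow load (e.g. a database round trip) after the entry was evicted:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: with a blocking loading cache, one thread runs the
// slow loader for an evicted key while every other reader of that key waits
// for its result instead of loading in parallel.
public class BlockingLoadSketch {
    static final ConcurrentHashMap<Long, String> cache = new ConcurrentHashMap<>();

    // Several threads request the same missing key; only one executes the
    // loader, the rest block until the value is populated.
    static long loadConcurrently(long key, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long t0 = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> cache.computeIfAbsent(key, id -> {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
                return "avatar-" + id;   // slow load, e.g. DB round trip
            }));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        long elapsedMs = loadConcurrently(10100L, 6);
        // one 200 ms load serves all six threads: total wait is ~200 ms,
        // but during that window every reader of the key is parked
        System.out.println("elapsed ~" + elapsedMs + " ms, entries=" + cache.size());
    }
}
```

This is why the per-read remove is so costly in a cluster: each remove re-creates the miss, and every subsequent reader of that avatar parks until the single loader finishes.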
Also note that the following Jira actions trigger a massive number of /secure/projectavatar calls:
- /secure/ViewProjects.jspa
- /secure/project/ViewProjects.jspa
Workaround
None
Attachments
Issue Links
- is duplicated by: JRASERVER-66138 JIRA mail queue is invalidating avatars cache (Closed)
- relates to: JSDCLOUD-4502 Improve how JIRA handle avatar synchronous cache (Closed)