Lucene uses org.apache.lucene.index.SegmentNorms for full-text search.
Initially, these data structures are created as files on disk; Lucene then loads them into memory and stores them in SegmentNorms.
That means the bigger the Lucene norms are, the more JVM heap they require:
- For large Jira instances (4+ million issues), SegmentNorms can use 6 GB or more of heap, which adds extra pressure on the JVM.
- Because of JRASERVER-67125, they are recalculated on each write to the index, so it is highly advisable to upgrade to a version that contains the fix.
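To get a feel for where a figure like 6 GB can come from: classic Lucene keeps one norm byte per document per field that has norms enabled, so heap usage scales with documents × normed fields. The sketch below is a back-of-the-envelope estimate under that assumption; the field count used here is hypothetical and should be adjusted to your instance.

```java
// Rough estimate of SegmentNorms heap usage, assuming the classic Lucene
// layout of one norm byte per document per field that keeps norms.
public class NormsMemoryEstimate {
    // one byte per document per normed field
    static long normsBytes(long numDocs, long fieldsWithNorms) {
        return numDocs * fieldsWithNorms;
    }

    public static void main(String[] args) {
        long docs = 4_000_000L;   // "4+ million issues" from the text
        long fields = 1_500L;     // hypothetical custom/system field count
        long bytes = normsBytes(docs, fields);
        System.out.printf("~%.1f GB of norms%n",
                bytes / (1024.0 * 1024 * 1024));
    }
}
```

With these illustrative numbers the estimate lands in the same multi-gigabyte range as the figure quoted above, which is why instances with many custom fields are hit hardest.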
Example of a heap histogram:
||Class Name||Objects||Shallow Heap||Retained Heap||
The reader, org.apache.lucene.index.ReadOnlyDirectoryReader, loads org.apache.lucene.index.ReadOnlySegmentReader, which in turn uses org.apache.lucene.index.SegmentNorms.
Optimize memory usage of the SegmentNorms.
According to the Lucene documentation, these structures were removed in later Lucene versions, so upgrading Lucene should fix the problem.
Since memory usage is proportional to the number of Lucene documents and the size of the norms, there are a couple of workarounds, in order of preference:
- Disable the Lucene search index for unused custom fields.
- Reduce the scope of custom fields (limit them to specific projects).
- Keep norms small by running a full reindex periodically.
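The last workaround helps because norms are allocated per document slot in a segment, including slots left behind by documents that have since been updated or deleted; a full reindex rebuilds the index with only live documents. The sketch below illustrates the wasted heap under that assumption; all numbers are illustrative, not measurements from a real instance.

```java
// Hypothetical illustration: norms occupy one byte per document *slot*
// (maxDoc) per normed field, so stale slots left by updates/deletes keep
// using heap until segments are rewritten, e.g. by a full re-index.
public class StaleNormsEstimate {
    // bytes held by slots that no longer correspond to live documents
    static long wastedBytes(long maxDoc, long liveDocs, long fieldsWithNorms) {
        return (maxDoc - liveDocs) * fieldsWithNorms;
    }

    public static void main(String[] args) {
        long maxDoc = 7_000_000L;   // total slots in the index, illustrative
        long live   = 4_000_000L;   // live issues
        long fields = 1_000L;       // hypothetical normed field count
        System.out.printf("~%.1f GB held by stale norm slots%n",
                wastedBytes(maxDoc, live, fields) / (1024.0 * 1024 * 1024));
    }
}
```

Since every issue update in Jira is effectively a delete plus a re-add at the Lucene level, stale slots accumulate between full reindexes, which is why running one periodically keeps the norms footprint down.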