- Type: Bug
- Resolution: Timed out
- Priority: Low
- Affects Version/s: 4.1
- Component/s: JQL
- Severity: 3 - Minor
If you give JIRA "too much" heap (e.g. 2 or 3 GB) then you may get a stack trace that looks like:
HTTP Status 404 - Could not execute action [ViewIssue]:

java.lang.OutOfMemoryError: OutOfMemoryError likely caused by the Sun VM Bug described in https://issues.apache.org/jira/browse/LUCENE-1566; try calling FSDirectory.setReadChunkSize with a a value smaller than the current chunk size (104857600) at
If you look at LUCENE-1566 you can see that the actual problem is a bug in Sun's 32-bit JVMs that Lucene trips over. In Lucene 2.9 (which ships in JIRA 4.1) a workaround was added that handles most heap sizes, but if you go too large (say 3 GB) you still trip over the problem.
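For reference, the Lucene 2.9 workaround picks a default read chunk size based on whether the JVM is 64-bit; the 100 MB (104857600-byte) figure in the stack trace above is that 32-bit default. A minimal sketch of that selection logic, assuming Lucene's documented defaults (this is not JIRA code, just an illustration):

```java
public class ReadChunkDefaults {
    // Lucene 2.9's default read chunk size on 32-bit JVMs, the
    // LUCENE-1566 workaround value seen in the error message above.
    static final int DEFAULT_32BIT_CHUNK = 100 * 1024 * 1024; // 104857600

    // Sketch of how Lucene chooses the default: effectively unlimited
    // on 64-bit JVMs, chunked reads of 100 MB on 32-bit JVMs.
    static int defaultReadChunkSize(boolean is64Bit) {
        return is64Bit ? Integer.MAX_VALUE : DEFAULT_32BIT_CHUNK;
    }

    public static void main(String[] args) {
        System.out.println("32-bit default: " + defaultReadChunkSize(false));
        System.out.println("64-bit default: " + defaultReadChunkSize(true));
    }
}
```

On a 32-bit JVM the error message suggests calling FSDirectory.setReadChunkSize with a value below this default, which is the manual escape hatch when a very large heap still triggers the bug.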
From what I can tell this could happen on any version of JIRA. Since we haven't had users complaining, it must not be a very common occurrence. We tripped over it on SAC mostly by accident.
I think trying to have JIRA handle this would add Yet Another Configuration Knob for what appears to be a very rare case. As Chris pointed out, it isn't even clear what kind of instance you would have that would require that much heap and whether JIRA works at all with instances that large.
It seems like a better solution would be to add a SystemEnvironmentCheck that looks for this scenario and gives the user a warning and a link to a knowledge base article explaining the situation.
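The proposed check could be sketched roughly as below. The class name, the 2 GB threshold, and the warning wording are all hypothetical, not an actual JIRA API; only the JVM introspection calls (System.getProperty, Runtime.maxMemory) are standard:

```java
// Hedged sketch of the proposed SystemEnvironmentCheck-style warning;
// names and threshold are assumptions, not JIRA code.
public class LargeHeap32BitCheck {
    // Heaps around 2-3 GB on a 32-bit Sun JVM can still trip
    // LUCENE-1566 even with Lucene 2.9's chunked-read workaround.
    static final long RISKY_HEAP_BYTES = 2L * 1024 * 1024 * 1024;

    // True when running a 32-bit JVM with a heap large enough to be
    // at risk; dataModel is "32" or "64" (sun.arch.data.model).
    static boolean isRisky(String dataModel, long maxHeapBytes) {
        return "32".equals(dataModel) && maxHeapBytes >= RISKY_HEAP_BYTES;
    }

    public static void main(String[] args) {
        String dataModel = System.getProperty("sun.arch.data.model", "64");
        long maxHeap = Runtime.getRuntime().maxMemory();
        if (isRisky(dataModel, maxHeap)) {
            System.out.println("Warning: 32-bit JVM with a very large heap; "
                + "this configuration can hit LUCENE-1566. See the "
                + "knowledge base article for details.");
        }
    }
}
```

This keeps the fix out of JIRA's configuration surface: no new knob, just a one-time startup warning pointing at the documentation.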