Resolution: Support Request
We have a customer with a large JIRA instance (1M issues), and every two weeks they have to restart JIRA because it runs out of file descriptors. We increased the operating-system limit from 30K to 300K descriptors, but the limit is still eventually exhausted. We are using SSDs for the JIRA home directory.
I've done some investigation with lsof on the production JIRA: the descriptor counts on the Lucene issue index files increase every five minutes or so. On staging it happens only once or twice per day, and the rate is lower at weekends, so it appears to be driven by user activity. Each time, the same 60 index files gain new file descriptors, but the old descriptors are never closed. An error appears in the JIRA log file at the same time:
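The lsof bookkeeping above can be sketched as follows. This is a hypothetical helper, not JIRA code: it takes `lsof -p <pid>`-style lines and counts how many descriptors point at each index file (the sample paths are made up; a healthy JVM should hold roughly one descriptor per segment file, so a climbing count for the same file points at unclosed readers).

```java
import java.util.*;
import java.util.stream.*;

public class FdCount {
    // Count descriptors per Lucene index file from lsof-style output.
    // The "/indexes/" path filter is an assumption about the layout of
    // the JIRA home directory.
    static Map<String, Long> countIndexFds(List<String> lsofLines) {
        return lsofLines.stream()
            .map(line -> line.trim().split("\\s+"))
            .map(cols -> cols[cols.length - 1])       // last column is the path
            .filter(path -> path.contains("/indexes/"))
            .collect(Collectors.groupingBy(p -> p, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "java 123 jira 45r REG 8,1 0 99 /var/jira/caches/indexes/_0.cfs",
            "java 123 jira 46r REG 8,1 0 99 /var/jira/caches/indexes/_0.cfs",
            "java 123 jira 47r REG 8,1 0 98 /var/jira/caches/indexes/_1.cfs");
        // Two descriptors on _0.cfs, one on _1.cfs
        System.out.println(countIndexFds(sample));
    }
}
```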
"Tried to reopen the IndexReader, but it threw AlreadyClosedException. Opening a fresh IndexReader."
This message comes from DefaultIndexEngine, at the point where a new Lucene IndexReader is created because the previous one is believed to be closed. I suspect that the previous IndexReader was never actually closed.
There is a comment at exactly that place in the source code. It's the "don't worry" that makes me think there is still a problem here:
// JRADEV-7825: Really this shouldn't happen unless someone closes the reader from outside all
// the inscrutable code in this class (and its friends) but
// don't worry, we will just open a new one in that case.
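A minimal sketch of the suspected leak, using a toy stand-in for Lucene's IndexReader (all names here are hypothetical, not JIRA's actual code): if the AlreadyClosedException handler opens a fresh reader while the old one was never really closed, the old reader's descriptors stay open forever, which would show up as the growing lsof counts.

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for IndexReader: counts live instances so a leak is visible.
class ToyReader implements Closeable {
    static final AtomicInteger openCount = new AtomicInteger();
    private boolean closed;
    ToyReader() { openCount.incrementAndGet(); }
    @Override public void close() {
        if (!closed) { closed = true; openCount.decrementAndGet(); }
    }
}

public class ReopenSketch {
    // Suspected buggy pattern: open a fresh reader, but never close the
    // old one -- its file descriptors remain held by the JVM.
    static ToyReader reopenWithoutClosing(ToyReader old) {
        return new ToyReader();
    }

    // Safe pattern: close the old reader once the fresh one exists.
    static ToyReader reopenAndClose(ToyReader old) {
        ToyReader fresh = new ToyReader();
        old.close();
        return fresh;
    }

    public static void main(String[] args) {
        ToyReader r = new ToyReader();
        for (int i = 0; i < 5; i++) {
            r = reopenWithoutClosing(r);
        }
        // One reader is still legitimately in use; the rest are leaked.
        System.out.println("leaked readers: " + (ToyReader.openCount.get() - 1));
        // prints "leaked readers: 5"
    }
}
```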
The symptoms are similar to, but the cause apparently different from, https://confluence.atlassian.com/display/JIRAKB/Loss+of+Functionality+due+to+Too+Many+Open+Files+Error
- is related to
JRASERVER-38039 JIRA IndexReader is leaking file descriptors