    • We collect Jira feedback from various sources, and we evaluate what we've collected when planning our product roadmap. To understand how this piece of feedback will be reviewed, see our Implementation of New Features Policy.

      NOTE: This suggestion is for JIRA Server. Using JIRA Cloud? See the corresponding suggestion.

      As per this documentation: http://lucene.apache.org/core/3_2_0/api/all/org/apache/lucene/index/IndexWriter.html#deletionPolicy

      Lucene does not "out of the box" support NFS, though it could be possible at a very low level to get this to work given the instructions on the IndexWriter page.

      As this is not implemented, running JIRA's Lucene index directory over an NFS mount is currently not supported:
      https://confluence.atlassian.com/display/JIRA/Supported+Platforms
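
      For illustration, here is a minimal sketch of the kind of custom IndexDeletionPolicy the documentation above describes for NFS: instead of deleting a commit as soon as it is superseded, commits are kept for a grace period so that readers still holding an older commit open over an NFS mount can finish with it first. This is not JIRA's implementation; the class name, the grace-period value, the "timestamp" entry in the commit user data, and the analyzer/directory variables in the wiring snippet are assumptions made for the example. It is written against the Lucene 3.x API linked above, where IndexDeletionPolicy is an interface.

      import java.util.List;

      import org.apache.lucene.index.IndexCommit;
      import org.apache.lucene.index.IndexDeletionPolicy;

      // Hypothetical policy: keep superseded commits for a grace period so that
      // readers on NFS which still have an older commit open are not left with
      // deleted files underneath them.
      public class GracePeriodDeletionPolicy implements IndexDeletionPolicy {

          private final long gracePeriodMillis;

          public GracePeriodDeletionPolicy(long gracePeriodMillis) {
              this.gracePeriodMillis = gracePeriodMillis;
          }

          @Override
          public void onInit(List<? extends IndexCommit> commits) {
              onCommit(commits);
          }

          @Override
          public void onCommit(List<? extends IndexCommit> commits) {
              long cutoff = System.currentTimeMillis() - gracePeriodMillis;
              // Commits are passed oldest-first; the newest commit is always kept.
              for (int i = 0; i < commits.size() - 1; i++) {
                  IndexCommit commit = commits.get(i);
                  if (commitTimeOf(commit) < cutoff) {
                      commit.delete();
                  }
              }
          }

          private long commitTimeOf(IndexCommit commit) {
              // Assumption: the application records a wall-clock timestamp in the
              // commit user data when it commits. If it is missing, keep the commit.
              try {
                  String ts = commit.getUserData().get("timestamp");
                  return ts != null ? Long.parseLong(ts) : Long.MAX_VALUE;
              } catch (Exception e) {
                  return Long.MAX_VALUE;
              }
          }
      }

      Wiring it in would then look roughly like the following (again Lucene 3.x style), which is something only JIRA itself could do internally, and that is exactly what this suggestion asks for:

      IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_32, analyzer);
      conf.setIndexDeletionPolicy(new GracePeriodDeletionPolicy(10 * 60 * 1000L));
      IndexWriter writer = new IndexWriter(directory, conf);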

            [JRASERVER-33887] Implement JIRA Lucene Index NFS Support

            Dibyandu Roy added a comment -

            Would it be possible to implement it?

            April added a comment -

            We encounter this error frequently when using Configuration Manager to back up / import project data, which is no great surprise when a project is relatively large.

            However, I'd like to point out that Unix has no magical ointment for this issue; all of our systems are on Linux.


            Roland Vermeulen added a comment -

            We are also running into this issue. We have everything on EFS and don't see any performance issues in our Jira 8.2.5 server running in a container.

            However, the health check keeps reporting this; it would be great if we didn't get this error.

            Ulrich Seidl added a comment -

            What is stopping you from finally approving at least Linux NFS clients? We have been running a Jira, Confluence and Bitbucket server installation on NFS volumes without problems for over 4 years. Topics like disaster recovery, backups and scalability can be handled so much better using NFS. I'm waiting for an update that removes that warning when using NFS.

            Simon Poortman added a comment -

            Start

            Rob Thomas added a comment - edited

            There appears to be a bit of cargo-cult debugging happening here.

            After digging into this, the HISTORY of this issue is that, back at Jira 5, people were having issues with the number of open files allowed via NFS and with dangling file handles in the indexing job.

            Lucene is concerned about one specific thing: "This is necessary on filesystems like NFS that do not support "delete on last close" semantics, which Lucene's "point in time" search normally relies on." Someone has misread that and is assuming that ALL NFS clients are broken. This is wrong. Only the WINDOWS NFS client does not handle that correctly (or didn't, last time I looked).

            All Unix-based NFS clients handle this correctly (by renaming the file out of the way and then ACTUALLY deleting it when closed, which provides the correct experience to the application).

            Is it possible to remove this warning UNLESS the application is running on Windows, please?

            Additionally, even that NFS open-files issue is resolved in recent versions of Docker, as the default open-files limit is now 65536 on Docker 17 and higher.

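            To make the semantics described above concrete, here is a small, self-contained sketch (not JIRA or Lucene code; the class name and file name are made up for the example) of the "delete on last close" behaviour Lucene relies on: on a POSIX-compliant filesystem, and on Unix NFS clients via their rename-then-delete handling, a file deleted while still open stays readable until the last handle is closed.

            import java.io.File;
            import java.io.FileInputStream;
            import java.io.FileOutputStream;
            import java.io.IOException;

            public class DeleteOnLastCloseDemo {
                public static void main(String[] args) throws IOException {
                    File f = new File("segment-demo.bin");
                    try (FileOutputStream out = new FileOutputStream(f)) {
                        out.write(new byte[] { 1, 2, 3, 4 });
                    }

                    try (FileInputStream in = new FileInputStream(f)) {
                        // Delete the file while it is still open for reading.
                        System.out.println("delete() returned: " + f.delete());

                        // On POSIX filesystems (including Linux NFS clients) this read
                        // still succeeds; the data is only reclaimed once the stream is
                        // closed. Per the comment above, the Windows NFS client is the
                        // case that does not provide these semantics.
                        byte[] buf = new byte[4];
                        System.out.println("bytes readable after delete: " + in.read(buf));
                    }

                    // The OS check the comment asks for could be as simple as:
                    // boolean isWindows = System.getProperty("os.name").toLowerCase().contains("windows");
                }
            }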

            Tyler Mace added a comment -

            I believe Bamboo may have this behavior also.

            Intel CHD Jira Admin added a comment -

            What is completely missing from this issue is what the problem actually is - i.e. how does it show up in practice? We have our indexes on NFS storage in both Jira and Confluence and do not see any problems. What do we need to look out for?

            Petr Musil added a comment - edited

            As SPence wrote, Confluence 5.4.3 has the same behavior.

            SPence added a comment - edited

            As a note, it looks like Confluence 5.4.3 just bit us with the same issue.


            Sander D added a comment -

            Same here. NFS is the de facto standard in enterprise storage solutions. Our only workaround, with over 10,000 VMs, is to fake local storage using our VM management software. It's still NFS, but it behaves like ext4, which solves the problem.

            However, this comes with several drawbacks, including the problem that a 'local' drive embedded in the VM is harder to back up and next to impossible to access through conventional means. Shared network storage is backed up by the NFS host several times a day. The local storage is managed by the VM and is "embedded" in the virtual machine image. The only available solution is to back up the entire machine, which consists of several gigabytes of junk (i.e. the OS itself). Such a procedure can only be done once a day, given the expense of constantly backing up complete virtual machines. Mounting those images and restoring files from them is a pain in the butt as well, since only the entire machine is available, not the requested files.

            An NFS snapshot can be created from the incremental differences in the files, excluding caches and indexes, which means we can restore anything that happened on that machine within roughly a one-hour window.

            Forcing us to move to local storage means we have to either (a) make hourly backups of 15 GB worth of data just to keep our SLAs, (b) implement and build a custom solution that copies the entire work directory to NFS daily, keeping files consistent during and after this backup, while mimicking our existing backup solution (NFS) so that at least restoring will not be manual labor too, or (c) accept a huge step backwards in our JIRA's reliability and backup procedures, which will ultimately hurt the SLAs we have with our customers.

            Or a LuceneDeletionPolicy gets implemented.


            Michael Danielsson added a comment -

            Hi,

            It is very important that I get some answers to the questions above.

            Regards,
            Michael Danielsson

            Michael Danielsson added a comment -

            Hi,

            Is there someone who can answer the above questions?

            Regards,
            Michael Danielsson

            Michael Danielsson added a comment -

            Hi,

            We have some questions about Lucene and JIRA. The questions relate to the JIRA versions that do not support NFS.

            Is the information the search engine handles taken from the DB, from attachments, or both?

            Is it possible to store the search engine data (indices, cache and such) on "local" disk, separated from JIRA_HOME?

            And in that case, what happens if there is no search engine data? Will it be created automatically, and how long will that take?

            Regards,
            Michael Danielsson

              Assignee: Unassigned
              Reporter: Michael Andreacchio (mandreacchio)
              Votes: 103
              Watchers: 81