Jira Data Center / JRASERVER-68653

Asynchronous cache replication queue - leaking file descriptor when queue file corrupted


    • Type: Bug
    • Resolution: Fixed
    • Priority: Low
    • Fix Version/s: 7.13.1, 8.0.0
    • Affects Version/s: 7.9.2, 7.7.4, 7.8.4, 7.10.2, 7.11.2, 7.13.0, 7.6.9, 7.12.3, 7.6.10
    • Component/s: Data Center - Other


      If a cache replication queue file is corrupted when a node is shutting down, then on the next start of that node Jira will try to open the queue file every time it is needed (i.e. every time a cache replication message is sent to another node on that particular channel = file). If the existing file is corrupted, the attempt fails with the following error:

      ERROR      [c.a.j.c.distribution.localq.LocalQCacheManager] Error when creating cache replication queue for node: [node_name]. This node will be inconsistent. Error: File is corrupt; length stored in header is 0.

      This results in:

      • the cache replication message not being delivered
      • a file descriptor leak
        • Jira eventually hits a "Too many open files" error; reviewing lsof output shows many localq entries.

      Desired Jira behaviour

      If the file is corrupted, back it up (copy it with a corrupted_ prefix) and create a new file.
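      A minimal sketch of that recovery path (class and method names here are hypothetical, and Jira's actual queue implementation differs; this only illustrates the "back up, then recreate" step):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class QueueFileRecovery {

    /**
     * If the queue file at {@code queuePath} is considered corrupted,
     * move it aside with a "corrupted_" prefix and create a fresh,
     * empty queue file. Returns the path that is now safe to open.
     */
    static Path recover(Path queuePath) throws IOException {
        Path backup = queuePath.resolveSibling("corrupted_" + queuePath.getFileName());
        // Move (rather than copy) so the broken file no longer shadows the new one,
        // replacing any backup left over from a previous failed recovery attempt.
        Files.move(queuePath, backup, StandardCopyOption.REPLACE_EXISTING);
        // Recreate an empty file; the queue layer re-initialises its header on first use.
        return Files.createFile(queuePath);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("localq-demo");
        Path queue = dir.resolve("queue_cache_node2_0");
        Files.write(queue, new byte[]{0, 0, 0, 0}); // simulate a corrupt header
        Path fresh = recover(queue);
        System.out.println(Files.exists(dir.resolve("corrupted_queue_cache_node2_0")));
        System.out.println(Files.size(fresh));
    }
}
```

      Moving instead of deleting keeps the corrupted file available for later diagnosis while immediately unblocking replication on that channel.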


      Workaround

      Delete the corrupted queue file. Steps to identify the corrupted file can be found in comment-1917799.
      It should not be necessary to shut down the node; Jira should recreate the queue file automatically.
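      As a rough aid for spotting candidates, and assuming the corruption matches the error above (a length of zero recorded at the start of the file header; the real on-disk format may store more fields, so treat this purely as a heuristic sketch):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CorruptQueueScanner {

    /**
     * Heuristic: flag a queue file as a corruption candidate when the
     * length recorded in its first four header bytes is zero, mirroring
     * the "length stored in header is 0" error message. Files too short
     * to hold even that field are flagged as well.
     */
    static boolean looksCorrupt(Path queueFile) throws IOException {
        if (Files.size(queueFile) < 4) {
            return true; // cannot hold the length field at all
        }
        try (DataInputStream in = new DataInputStream(Files.newInputStream(queueFile))) {
            return in.readInt() == 0; // big-endian length of zero => candidate
        }
    }

    public static void main(String[] args) throws IOException {
        // Directory name is an assumption; point this at the node's localq directory.
        Path queueDir = Paths.get(args.length > 0 ? args[0] : "localq");
        try (DirectoryStream<Path> files = Files.newDirectoryStream(queueDir, "queue_*")) {
            for (Path f : files) {
                if (looksCorrupt(f)) {
                    System.out.println("Corrupt candidate: " + f);
                }
            }
        }
    }
}
```

      Confirm any candidate against the error in the Jira logs before deleting it.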

            Assignee: Maciej Swinarski (Inactive)
            Reporter: Maciej Swinarski (Inactive)
            Votes: 2
            Watchers: 13