Jira Software Data Center
JSWSERVER-16542

During rebalancing ranking operation fails to save new rank due to slow DB operation

      Not a Jira bug

      This is not a Jira core bug; the ticket was created to track the problem and for transparency.
      At this stage, we believe the problem is related to MySQL: for some reason MySQL chooses the wrong execution plan and doesn't use the index in an optimal way, which makes SQL execution slow. That being said, we will keep the ticket open for some time to monitor the problem.

      Summary

      As part of Lexorank rebalancing, each rank value needs to be updated in the AO_60DB71_LEXORANK table. At the same time, each individual ranking operation has a 1500 ms timeout for the rank lock.
      Due to the slow DB operation, the issue ranking operation fails to save the new rank because the rank lock expires. This makes the Lexorank rebalancing process stall; usually you will see no progress in the UI. Please note that this situation is different from JSWSERVER-15703
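
      The rank lock is visible as row data: the LOCK_HASH and LOCK_TIME columns appear in every query quoted in this ticket. A hypothetical diagnostic sketch, assuming a locked row carries a non-NULL LOCK_HASH while the lock is held (this is an assumption about internals, not something confirmed by the ticket):
        -- List rank rows that currently appear to hold a lock
        -- (LOCK_HASH IS NOT NULL is an assumed marker of a held lock):
        SELECT ID, ISSUE_ID, LOCK_HASH, LOCK_TIME
        FROM AO_60DB71_LEXORANK
        WHERE LOCK_HASH IS NOT NULL;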

      Environment

      • Large JIRA instance: 1M+ issues
        • more specifically, a large AO_60DB71_LEXORANK table
      • The issue was initially reported with Jira connected to MySQL, but other database engines, such as Oracle and MS SQL Server, were affected as well.

      Steps to Reproduce

      1. Run Lexorank rebalancing
      2. Monitor progress
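
      The balancer migrates rows from bucket[2] to bucket[0] (see the logs in the Notes section below), so progress can also be watched at the database level. A minimal sketch, assuming direct read access to the Jira database:
        -- The row count in the source bucket shrinks as rebalancing progresses:
        SELECT BUCKET, COUNT(*) AS ROWS_IN_BUCKET
        FROM AO_60DB71_LEXORANK
        GROUP BY BUCKET;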

      Expected Results

      • Lexorank rebalancing runs continuously and finishes successfully.
      • SQL execution is fast:
        SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,RANK,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY RANK ASC LIMIT 1; 
        SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,RANK,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY RANK DESC LIMIT 1;
        

      Actual Results

      • Lexorank rebalancing stalls
      • The issue ranking operation fails to save the new rank because the rank lock expires
      • SQL execution is slow

      Notes

      • To enable logging, set DEBUG for com.atlassian.greenhopper.service.lexorank and com.atlassian.greenhopper.manager.lexorank
      • Example of logging:
        2018-02-27 10:21:58,046 lexorank-executor-thread-0 DEBUG ServiceRunner     [c.a.g.s.lexorank.balance.LexoRankBalanceOperation] Acquired lock on rows
        2018-02-27 10:21:58,046 lexorank-executor-thread-0 DEBUG ServiceRunner     [c.a.g.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,RANK,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY RANK ASC LIMIT 1; params[15530, 2]
        2018-02-27 10:21:59,190 lexorank-executor-thread-0 DEBUG ServiceRunner     [c.a.g.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,RANK,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY RANK DESC LIMIT 1; params[15530, 0]
        2018-02-27 10:22:00,648 lexorank-executor-thread-0 DEBUG ServiceRunner     [c.a.g.s.lexorank.balance.LexoRankBalanceOperation] Balancing rank row [type=ISSUE_RANK_ROW, oldRank=2|hx8yj3:, newRank=0|i1sicv:]
        2018-02-27 10:22:00,650 lexorank-executor-thread-0 DEBUG ServiceRunner     [c.a.g.s.lexorank.balance.LexoRankBalanceOperation] Failed to save rank row, retry
        2018-02-27 10:22:00,650 lexorank-executor-thread-0 DEBUG ServiceRunner     [c.a.g.s.lexorank.balance.LexoRankBalanceOperation] Releasing lock
        
      • EXPLAIN for the slow vs. fast plan:
        explain SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,RANK,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = 11500 AND BUCKET = 2 ORDER BY RANK ASC LIMIT 1;
        
        speed | id | select_type | table              | type  | key                           | key_len | ref
        slow  | 1  | SIMPLE      | AO_60DB71_LEXORANK | ref   | index_ao_60db71_lex1569533973 | 8       | const
        fast  | 1  | SIMPLE      | AO_60DB71_LEXORANK | range | index_ao_60db71_lex1569533973 | 775     | NULL
        
        (possible_keys in both cases: index_ao_60db71_lex1569533973, index_ao_60db71_lex1694305086)
      • Note that in the case of the slow select, key_len is 8: only the FIELD_ID prefix of the composite (FIELD_ID, RANK) index is used, whereas the fast plan's key_len of 775 covers both columns.
      • To help investigate problems like this in the future, please see the KB article based on this bug: Issue ranking operation fails - JIRA Software cannot execute the rank operation at this time
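      • If you want to confirm the optimizer is picking the wrong plan, you can compare the default plan against one with an index hint, and inspect the statistics the choice is based on. A diagnostic sketch (FORCE INDEX and SHOW INDEX are standard MySQL; the FIELD_ID value 11500 is reused from the EXPLAIN example above):
        -- Does forcing the composite (FIELD_ID, RANK) index yield the fast plan (key_len 775)?
        EXPLAIN SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,RANK,TYPE
        FROM AO_60DB71_LEXORANK FORCE INDEX (index_ao_60db71_lex1569533973)
        WHERE FIELD_ID = 11500 AND BUCKET = 2 ORDER BY RANK ASC LIMIT 1;
        
        -- Inspect the cardinality estimates that drive MySQL's plan selection:
        SHOW INDEX FROM AO_60DB71_LEXORANK;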

      Workaround

      • Resolving the issue involves making MySQL select the right index. To do that, please recreate the index (there will be a small downtime during that operation):
        DROP INDEX index_ao_60db71_lex1569533973 ON AO_60DB71_LEXORANK;
        CREATE INDEX index_ao_60db71_lex1569533973 on AO_60DB71_LEXORANK (FIELD_ID,RANK);
        
        • If you can allow a longer downtime, you can instead run ANALYZE TABLE AO_60DB71_LEXORANK in MySQL (during the analysis, the table is locked with a read lock for InnoDB and MyISAM; see the MySQL analyze-table documentation).
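      • After recreating the index (or running ANALYZE TABLE), you can verify the fix by re-running the EXPLAIN from the Notes section; a healthy plan reports key_len 775 rather than 8. A sketch using the same example FIELD_ID:
        EXPLAIN SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,RANK,TYPE
        FROM AO_60DB71_LEXORANK
        WHERE FIELD_ID = 11500 AND BUCKET = 2 ORDER BY RANK ASC LIMIT 1;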

      Comments

            James Beagrie added a comment:

            We are running 8.20.11 and currently have over 700k issues to balance. We noticed the balance was completing one issue every 2 seconds, which meant it could have taken 14-16 days at that rate.

            Following this workaround worked immediately, and the greenhopper log reflects the speed-up if debugging is enabled for LexoRank.

            I've had an open issue with Atlassian and have notified them of this.

            Thanks

            Before fix (note the timings):

            2023-02-17 09:55:24,957 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalancer] Balancing next rank row for rank field [id=12501]
            2023-02-17 09:55:24,957 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Balancing next rank row to migrate from bucket[2] to bucket[0] for rank field[id=12501]
            2023-02-17 09:55:24,957 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` ASC LIMIT 1; params[12501, 2]
            2023-02-17 09:55:24,994 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] RANK_INITIAL start [fieldId=12501, issueToRankIssueId=632016, issueToRankAroundIssueId=null
            2023-02-17 09:55:24,994 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] Ranking issue [id=632016] initially for rank field[id=12501]
            2023-02-17 09:55:24,994 LexoRankReindexer:thread-1 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND ISSUE_ID = ? AND TYPE = ?; params[12501, 632016, 1]
            2023-02-17 09:55:24,994 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] Issue [id=632016] already has a rank[0|i0mclz:] for rank field[id=12501]
            2023-02-17 09:55:24,994 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] RANK_INITIAL end   [fieldId=12501, issueToRankIssueId=632016, issueToRankAroundIssueId=null
            2023-02-17 09:55:26,058 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` DESC LIMIT 1; params[12501, 0]
            2023-02-17 09:55:26,059 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Fetched rows at balance boundary
            2023-02-17 09:55:26,059 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Fetched rows at balance boundary
            2023-02-17 09:55:26,059 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation]       rowToMigrate : LexoRankRow{id=1207648, fieldId=12501, issueId=632018, lockHash='null', lockTime=null, bucket=2, rank='2|hywh7b:', type=ISSUE_RANK_ROW}
            2023-02-17 09:55:26,059 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation]       rowLastMigrated : LexoRankRow{id=1207644, fieldId=12501, issueId=632016, lockHash='null', lockTime=null, bucket=0, rank='0|i0mclz:', type=ISSUE_RANK_ROW}
            2023-02-17 09:55:26,062 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Acquired lock on rows
            2023-02-17 09:55:26,062 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` ASC LIMIT 1; params[12501, 2]
            2023-02-17 09:55:27,109 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` DESC LIMIT 1; params[12501, 0]
            2023-02-17 09:55:27,110 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Balancing rank row [type=ISSUE_RANK_ROW, oldRank=2|hywh7b:, newRank=0|i0mcm7:]
            2023-02-17 09:55:27,116 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Successfully saved rank row
            2023-02-17 09:55:27,116 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Releasing lock
            2023-02-17 09:55:27,123 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalancer] ReIndexing issue[id=632018]
             

            After fix:

            2023-02-17 10:44:58,917 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalancer] Balancing next rank row for rank field [id=12501]
            2023-02-17 10:44:58,917 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Balancing next rank row to migrate from bucket[2] to bucket[0] for rank field[id=12501]
            2023-02-17 10:44:58,917 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` ASC LIMIT 1; params[12501, 2]
            2023-02-17 10:44:58,918 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` DESC LIMIT 1; params[12501, 0]
            2023-02-17 10:44:58,925 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Fetched rows at balance boundary
            2023-02-17 10:44:58,925 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Fetched rows at balance boundary
            2023-02-17 10:44:58,925 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation]       rowToMigrate : LexoRankRow{id=1226685, fieldId=12501, issueId=641089, lockHash='null', lockTime=null, bucket=2, rank='2|hyxruf:', type=ISSUE_RANK_ROW}
            2023-02-17 10:44:58,925 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation]       rowLastMigrated : LexoRankRow{id=1226683, fieldId=12501, issueId=641615, lockHash='null', lockTime=null, bucket=0, rank='0|i0nn93:', type=ISSUE_RANK_ROW}
            2023-02-17 10:44:58,950 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Acquired lock on rows
            2023-02-17 10:44:58,950 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` ASC LIMIT 1; params[12501, 2]
            2023-02-17 10:44:58,951 lexorank-executor-thread-0 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND BUCKET = ? ORDER BY `RANK` DESC LIMIT 1; params[12501, 0]
            2023-02-17 10:44:58,952 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Balancing rank row [type=ISSUE_RANK_ROW, oldRank=2|hyxruf:, newRank=0|i0nn9b:]
            2023-02-17 10:44:59,066 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] RANK_INITIAL start [fieldId=12501, issueToRankIssueId=641615, issueToRankAroundIssueId=null
            2023-02-17 10:44:59,066 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] Ranking issue [id=641615] initially for rank field[id=12501]
            2023-02-17 10:44:59,066 LexoRankReindexer:thread-1 DEBUG      [greenhopper.manager.lexorank.LexoRankDaoImpl]  LexoRank Query : SELECT FIELD_ID,ID,ISSUE_ID,LOCK_HASH,LOCK_TIME,`RANK`,TYPE FROM AO_60DB71_LEXORANK WHERE FIELD_ID = ? AND ISSUE_ID = ? AND TYPE = ?; params[12501, 641615, 1]
            2023-02-17 10:44:59,066 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] Issue [id=641615] already has a rank[0|i0nn93:] for rank field[id=12501]
            2023-02-17 10:44:59,066 LexoRankReindexer:thread-1 DEBUG      [greenhopper.service.lexorank.LexoRankOperation] RANK_INITIAL end   [fieldId=12501, issueToRankIssueId=641615, issueToRankAroundIssueId=null
            2023-02-17 10:44:59,378 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Successfully saved rank row
            2023-02-17 10:44:59,378 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalanceOperation] Releasing lock
            2023-02-17 10:44:59,382 lexorank-executor-thread-0 DEBUG      [service.lexorank.balance.LexoRankBalancer] ReIndexing issue[id=641089]
             


            arochere added a comment (edited):

            We are currently on a 1-million-issue instance; on our sandbox we went from 30% in 8 days to 30% in 3 hours after applying this fix.

            I hope it will help on our production instance, because performance is good for about a month and a half, then drops and needs restarts.

            key_len was 8 indeed.

             

            We will confirm whether it helps our production as well (and whether it also fixes the concurrent subtask creations generating max-lexorank errors).

             

            (our production is currently stuck and not moving anymore)


            Ignat (Inactive) added a comment:

            Hi,

            Thanks ayakovlev@atlassian.com, now it's all clear.

            Cheers.


            Andriy Yakovlev [Atlassian] added a comment:

            Hi ialexeyenko,
            Thanks for your follow-up.

            Clarification regarding the tickets you mentioned:

            • JSWSERVER-12542 - this looks like a less detailed duplicate. I will close that one in favour of the current ticket.
            • JSWSERVER-15917 - that ticket is about MSSQL, so it's a different problem with the same symptom.

            To answer your questions:

            Is there a reason we keep 3 separate issues, instead clarifying the case in a single bug?

            See above.

            Is the current bug JSWSERVER-16542 a recent regression or has it existed in previous 7.2.x, 7.6.x versions?

            At this stage, we don't believe it's a regression of recent changes. The problem was present before.

            Please let me know if you'd like any further clarification.
            Cheers.


            Ignat (Inactive) added a comment:

            Hi ayakovlev@atlassian.com, the issue description is very close to those in JSWSERVER-12542 and JSWSERVER-15917. From what I see, the reason seems to be the same across all the issues: the database performance is too slow for the issue rank operation to finish.

            Two questions from my side:

            • Is there a reason we keep 3 separate issues, instead clarifying the case in a single bug?
            • Is the current bug JSWSERVER-16542 a recent regression or has it existed in previous 7.2.x, 7.6.x versions?

            Thanks!

            Cheers,
            Ignat Alexeyenko
            Jira Bugmaster


              Assignee: Unassigned
              Reporter: Andriy Yakovlev [Atlassian] (ayakovlev@atlassian.com)
              Affected customers: 24
              Watchers: 31
