
CRUC-5833: Problem creating FRXDO from frx: Index: 395


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Medium
    • Fix Version/s: None
    • Affects Version/s: 2.6.1, 3.0.0, 3.6.2, 3.8.0, 4.3.1, 4.8.0
    • Component/s: Code reviews

    Description

      Problem

      There is no bounds checking when mapping gutter comments to lines of code.
      When the comments somehow don't match the lines in the file, the entire review fails to load with the following error.
      We should handle this more gracefully.

      2011-07-13 23:57:34,151 ERROR [btpool0-6324 ] fisheye.app ViewFRXAction-execute - Problem creating FRXDO from frx
      java.lang.IndexOutOfBoundsException: Index: 395, Size: 386
      	at java.util.ArrayList.RangeCheck(ArrayList.java:547)
      	at java.util.ArrayList.get(ArrayList.java:322)
      	at com.cenqua.crucible.view.FRXDO.mapGutterComments(FRXDO.java:861)
      	at com.cenqua.crucible.view.FRXDO.mapInlineComments(FRXDO.java:768)
      	at com.cenqua.crucible.view.FRXDO.<init>(FRXDO.java:198)
      	at com.cenqua.crucible.revision.managers.DefaultContentManager.makeFRXDO(DefaultContentManager.java:104)
      	at com.atlassian.crucible.actions.ReviewBaseAction.makeFRXDO(ReviewBaseAction.java:596)
      	at com.atlassian.crucible.actions.ViewFRXAction.execute(ViewFRXAction.java:184)
              ...
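
      A minimal sketch of what more graceful handling could look like (hypothetical names, not the actual FRXDO mapping code): check the comment's stored line number against the file length before indexing into the line list, and drop or flag the comment instead of letting the whole review page fail.

      // Hypothetical sketch, not the actual FRXDO code: skip gutter comments
      // whose stored line number no longer exists in the loaded file instead
      // of letting ArrayList.get() throw and break the review page.
      import java.util.List;

      class GutterCommentMapper {
          /**
           * Returns the line the comment belongs to, or null if the stored line
           * number is out of range (e.g. stale or mismatched file content).
           */
          static String lineForComment(List<String> fileLines, int commentLineIndex) {
              if (commentLineIndex < 0 || commentLineIndex >= fileLines.size()) {
                  // Log and degrade gracefully: treat the comment as orphaned
                  // rather than failing the entire review render.
                  System.err.printf("Dropping gutter comment: line %d not in file of %d lines%n",
                          commentLineIndex, fileLines.size());
                  return null;
              }
              return fileLines.get(commentLineIndex);
          }
      }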
      

      Most probable reason

      The root cause is not in the line calculation algorithm, but in an inconsistent data store.
      When users move or restore a backup of a Crucible instance incorrectly, the database ends up out of sync with the files on disk. This can result in the database reusing unique file identifiers and Crucible overwriting a file in its cache. Where we used to have file A, we now have file B, yet we still think we have file A. In effect, when we load a review for file A, we show the contents of file B. If a comment was previously added to file A, say on line 395, and file B is only 386 lines long, we throw an IndexOutOfBoundsException when we try to place that comment on file B.

      Workaround

      This workaround only works if the source repository is available and running. It will fail if the repository is no longer available (including stopped or disabled) or if the file originally came from an attachment.

      On *nix you can perform this fix while the instance is running (not sure about Windows).

      Find CFR number

      Go to the file that doesn't load properly. The CFR number is displayed in the address bar after the '#'. Copy this number.

      SQL query to find the file names

      Run the following query, substituting <CFR> with the number you have just copied:

      SELECT
          r.cru_revision AS revision,
          r.cru_upload_item AS upload_item
      FROM
          cru_frx_revision fr,
          cru_revision r
      WHERE
          fr.cru_revision = r.cru_revision_id
      AND fr.cru_frx_id = <CFR>
      ORDER BY r.cru_upload_item;

      Delete cache files

      The last column (upload_item) gives a list of cache file ids used to display this file (one per revision). Delete files named <upload_item>.dat from these folders:

      • $FISHEYE_INST/var/tmp/encodedcontent/uploaditem
      • $FISHEYE_INST/var/data/uploads

      Reload review page

      Refresh the review twice. You should see the correct files loaded from the repository.

      Workaround - example

      In my case the address bar shows http://localhost:8080/cru/CR-3#CFR-7, so CFR = 7. The SQL query returns

      revision upload_item 
      -------- ----------- 
      2000     19          
      1999     20          
      31       23          
      30       24          
      210      25          
      211      26

      Then I look for these files:

      $ for CACHE_FOLDER in ./var/tmp/encodedcontent/uploaditem ./var/data/uploads/; do for UPLOAD_ITEM in 19 20 23 24 25 26 ; do find $CACHE_FOLDER -name $UPLOAD_ITEM.dat; done; done
      ./var/tmp/encodedcontent/uploaditem/00/00/19.dat
      ./var/tmp/encodedcontent/uploaditem/00/00/20.dat
      ./var/tmp/encodedcontent/uploaditem/00/00/23.dat
      ./var/tmp/encodedcontent/uploaditem/00/00/24.dat
      ./var/tmp/encodedcontent/uploaditem/00/00/25.dat
      ./var/tmp/encodedcontent/uploaditem/00/00/26.dat
      ./var/data/uploads//00/00/19.dat
      ./var/data/uploads//00/00/20.dat
      ./var/data/uploads//00/00/23.dat
      ./var/data/uploads//00/00/24.dat
      ./var/data/uploads//00/00/25.dat
      ./var/data/uploads//00/00/26.dat

      and delete all of them. Refresh the browser twice and voilà!

      Proper fix

      Rather than using consecutive numbers as cache file names, use a hash of the file contents instead. When loading a file, check that the hash matches. If it doesn't, invalidate that entry in the file store and pull the file from the repository again.
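
      A rough sketch of this approach (hypothetical class and file layout, not existing Crucible code; Java 17's HexFormat is used for brevity): key cache entries by a content hash and verify the hash on read, so a mismatched or overwritten entry is detected and re-fetched from the repository instead of being served as the wrong file.

      // Hypothetical sketch of a hash-keyed content cache, not existing Crucible code.
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;
      import java.util.HexFormat;

      class HashedContentCache {
          private final Path cacheDir;

          HashedContentCache(Path cacheDir) {
              this.cacheDir = cacheDir;
          }

          /** Writes the content and returns its hash, which doubles as the file name. */
          String put(byte[] content) throws IOException, NoSuchAlgorithmException {
              String hash = sha256(content);
              Files.write(cacheDir.resolve(hash + ".dat"), content);
              return hash;
          }

          /**
           * Returns the cached content, or null if the entry is missing or its hash
           * no longer matches, so the caller can pull the file from the repository again.
           */
          byte[] get(String expectedHash) throws IOException, NoSuchAlgorithmException {
              Path file = cacheDir.resolve(expectedHash + ".dat");
              if (!Files.exists(file)) {
                  return null;
              }
              byte[] content = Files.readAllBytes(file);
              if (!expectedHash.equals(sha256(content))) {
                  Files.delete(file);   // stale or overwritten entry: invalidate it
                  return null;
              }
              return content;
          }

          private static String sha256(byte[] content) throws NoSuchAlgorithmException {
              return HexFormat.of().formatHex(MessageDigest.getInstance("SHA-256").digest(content));
          }
      }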

    People

    Assignee: Unassigned
    Reporter: Nick (npellow)
    Votes: 11
    Watchers: 21
