git lfs fetch has been failing on specific files after a fresh clone.

      It fails consistently on the same files, so as an experiment I {{git rm}}ed the failing files, but now it fails on a different file. There's already an issue open against git-lfs, but this feels like an LFS server issue (the first three comments on that issue mention that they use Bitbucket).
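
      For reference, the experiment was roughly the following (the path is a placeholder for whichever file happened to fail):

      # drop the pointer for the file that keeps failing, then retry the fetch
      $ git rm path/to/failing-asset.bin
      $ git commit -m "Remove LFS file that fails to fetch"
      $ git lfs fetch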

      Concretely,

      $ GIT_LFS_SKIP_SMUDGE=1 git clone git@bitbucket.org:<repo>.git

      succeeds, but git lfs fetch fails with

      $ git lfs fetch
      fetch: Fetching reference refs/heads/master
      2019/01/30 09:27:46 Unsolicited response received on idle HTTP channel starting with "<part-of-file>"; err=<nil>
      Downloading LFS objects:  36% (1198/3313), 1.7 GB | 4.6 MB/s   Expected OID <some-hash>, got <other-hash> after 83429376 bytes written                                     
      error: failed to fetch some objects from 'https://bitbucket.org/<repo>.git/info/lfs'
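
      In case it helps with reproducing: the failing object can be isolated and fetched on its own, which fails much faster than walking all ~3300 objects (the path below is again a placeholder):

      # list LFS files with their full OIDs to match <some-hash> from the error above
      $ git lfs ls-files --long
      # fetch only the failing path
      $ git lfs fetch --include="path/to/failing-asset.bin"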
      

      Rerunning with GIT_TRACE produces

      #!log
      $ GIT_TRACE=1 git lfs fetch
      09:40:57.915993 trace git-lfs: exec: git '-c' 'filter.lfs.smudge=' '-c' 'filter.lfs.clean=' '-c' 'filter.lfs.process=' '-c' 'filter.lfs.required=false' 'rev-parse' 'HEAD' '--symbolic-full-name' 'HEAD'
      09:40:57.917093 trace git-lfs: tq: running as batched queue, batch size of 100
      09:40:57.917103 trace git-lfs: fetch <file-name> [<id>]
      09:40:57.917124 trace git-lfs: tq: sending batch of size 1
      09:40:57.917166 trace git-lfs: run_command: ssh -- git@bitbucket.org git-lfs-authenticate <repo> download
      09:40:59.010400 trace git-lfs: api: batch 1 files
      09:40:59.010777 trace git-lfs: HTTP: POST https://bitbucket.org/.../.../info/lfs/objects/batch
      09:40:59.404101 trace git-lfs: HTTP: 200
      09:40:59.404185 trace git-lfs: HTTP: {"objects": [{"oid": "<id>", "actions": {"download": {"header": {"X-Client-ID": "<hash-A>", "Authorization": "Bearer <hash-B>"}, "href": "ht
      09:40:59.404287 trace git-lfs: HTTP: tps://api.media.atlassian.com/file/<hash-C>/binary"}}, "size": 83429376}]}
      09:40:59.404469 trace git-lfs: tq: starting transfer adapter "basic"
      09:40:59.629873 trace git-lfs: xfer: Attempting to resume download of "<id>" from byte 83429376                                                          
      09:40:59.630036 trace git-lfs: HTTP: GET https://api.media.atlassian.com/file/<hash-C>/binary
      09:41:00.239427 trace git-lfs: HTTP: 400
      09:41:00.239605 trace git-lfs: HTTP: {"error":{"code":"InvalidArgumentError","title":"range first index must not be greater than last index","href":"https://api.media.atlassian.com#InvalidArgumentError"}}
      09:41:00.239809 trace git-lfs: tq: retrying object <id>: LFS: Client error: https://api.media.atlassian.com/file/<hash-C>/binary
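
      Reading the trace, the 400 looks like a broken resume: the batch response reports the object size as 83429376 bytes, the first attempt stops with "Expected OID <some-hash>, got <other-hash> after 83429376 bytes written", and the rerun then tries to resume from byte 83429376, i.e. a range whose first index is already past the last byte (83429375) of the file. Assuming the basic adapter sends a standard open-ended Range header when resuming, a request along these lines should get the same InvalidArgumentError back from the media URL in the batch response (tokens elided as above):

      # ranged GET starting at the reported object size, mirroring the resume attempt
      $ curl -i 'https://api.media.atlassian.com/file/<hash-C>/binary' \
             -H 'X-Client-ID: <hash-A>' \
             -H 'Authorization: Bearer <hash-B>' \
             -H 'Range: bytes=83429376-'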
      

            [BCLOUD-18064] git lfs fetch fails with InvalidArgumentError

            Xander Sereda added a comment - Hey - I have been traveling for work. I will have to check this next week.

            Manisha.Natambe added a comment - How did this issue get resolved?

            Apo_ added a comment - BCLOUD-18064

            Apo_ added a comment - Seems to have solved it for us as well

            Matthew Shaile added a comment - I contacted Bitbucket support directly and they issued a fix, which has done the trick for me. Here is their response:

            We have been able to reproduce this issue on our end. We have identified that it is triggered by slow network connections at the time of pulling down the LFS files, which leads to a timeout on our backend. I would like to inform you that our team has identified the issue and deployed a fix for it.

            Xander Sereda added a comment - I'm stuck on this issue as well.
