• Type: Suggestion
    • Resolution: Answered
    • We collect Bitbucket feedback from various sources, and we evaluate what we've collected when planning our product roadmap. To understand how this piece of feedback will be reviewed, see our Implementation of New Features Policy.

      We have a mirror of https://chromium.googlesource.com/chromium/src in our Stash, with all heads and tags. It is a complete mirror, updated continuously.

      Today this repo started throwing errors on git clone, like this:

      aefimov@aefimov-macbook:~/projects/sandbox $ git clone https://stash.acme.com/scm/chromium/src.git
      Cloning into 'src'...
      remote: Counting objects: 3125562, done.
      remote: Compressing objects: 100% (542218/542218), done.
      fatal: The remote end hung up unexpectedly 1.00 GiB | 11.05 MiB/s
      fatal: early EOF
      fatal: index-pack failed
      

      Or like this (it appears to be random):

      aefimov@aefimov-macbook:~/projects/sandbox $ git clone https://stash.acme.com/scm/chromium/src.git
      Cloning into 'src'...
      remote: Counting objects: 3125562, done.
      remote: Compressing objects: 100% (542218/542218), done.
      fatal: protocol error: bad line length character: Q�r 11.13 MiB/s
      error: inflate: data stream error (invalid distance too far back)
      fatal: pack has bad object at offset 1100761893: inflate returned -3
      fatal: index-pack failed
      

      Why did this happen? We use nginx as a reverse proxy in front of Stash. When we clone directly from the Java process (bypassing nginx), everything works fine:

      aefimov@aefimov-macbook:~/projects/sandbox $ git clone http://stash.acme.com:7990/scm/chromium/src.git
      Cloning into 'src'...
      remote: Counting objects: 3125562, done.
      remote: Compressing objects: 100% (542218/542218), done.
      remote: Total 3125562 (delta 2544811), reused 3092910 (delta 2512475)
      Receiving objects: 100% (3125562/3125562), 2.19 GiB | 11.12 MiB/s, done.
      Resolving deltas: 100% (2544811/2544811), done.
      Checking connectivity... done.
      Checking out files: 100% (66640/66640), done.
      

      nginx keeps a temporary buffer on disk where it spools the upstream response. For some large repositories the response exceeds the default capacity of this buffer, which overflows and truncates it. As a result you see corrupted packets in the git clone error messages.
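
      For reference, these are the relevant nginx defaults (taken from the ngx_http_proxy_module documentation; shown purely as an illustration of why a pack larger than 1 GiB overflows, not as our actual configuration):

      # nginx response buffering defaults (illustration only)
      proxy_buffering on;               # buffer the upstream response before sending it to the client
      proxy_buffers 8 4k;               # in-memory buffers are only a few KiB each (4k or 8k depending on platform)
      proxy_max_temp_file_size 1024m;   # overflow is spooled to a temp file capped at 1 GiB by default
      # when the temp file is full and the client reads slower than the upstream,
      # the rest of the response gets corrupted (the behaviour described above)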

      To avoid this, we made the following change to our nginx configuration:
      https://bitbucket.org/lelik/atlassian-stash-deb/commits/b958979dfbaebeaa385af3a92a3df5081f4f410e?at=master
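
      A minimal sketch of that kind of change (the location block, upstream address and exact size here are illustrative placeholders, not the literal contents of the commit above):

      # illustrative only; adjust the location and upstream to your own setup
      location /scm/ {
          proxy_pass http://localhost:7990;
          # raise the temp-file cap so it can hold the largest packfile you serve
          proxy_max_temp_file_size 10240m;   # 10 GB solved the problem for us
      }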

      Maybe this overflow was triggered by upgrading to Stash 3.3.0, or maybe by recent updates to the Google repository that added big files.

      So it would be nice either to verify that the Smart Git protocol did not change in the 3.3 release (I see no changes in the code around the Git protocol itself), or to provide documentation that helps customers avoid this problem on their side: simply define a minimum buffer capacity for the nginx reverse proxy that matches your Smart HTTP implementation (10 GB solved the problem in our case).
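
      An alternative worth mentioning in such documentation (only a sketch, not something verified here): per the nginx docs, setting proxy_max_temp_file_size to zero disables buffering of responses to temporary files entirely, so the pack is passed to the client as it arrives instead of being spooled to a capped temp file:

      # illustrative alternative; do not spool Smart HTTP responses to a temp file
      location /scm/ {
          proxy_pass http://localhost:7990;
          proxy_max_temp_file_size 0;   # 0 = no temp-file buffering for upstream responses
      }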

      We use git 2.1.0 on both the server and the client.
      After changing the nginx config:

      aefimov@aefimov-macbook:~/projects/sandbox $ rm -rf src/ && git clone https://stash.acme.com/scm/chromium/src.git
      Cloning into 'src'...
      remote: Counting objects: 3125732, done.
      remote: Compressing objects: 100% (518254/518254), done.
      remote: Total 3125732 (delta 2543666), reused 3118527 (delta 2536589)
      Receiving objects: 100% (3125732/3125732), 2.20 GiB | 11.13 MiB/s, done.
      Resolving deltas: 100% (2543666/2543666), done.
      Checking connectivity... done.
      Checking out files: 100% (66651/66651), done.
      


            [BSERV-5235] Nginx as frontend to Stash can break Git clone

            Roger Barnes (Inactive) added a comment - There are no changes to the HTTP processing in scm-cache between 1.4.2 (bundled with Stash 3.2) and 1.5.0 (bundled with Stash 3.3)

            Alexey Efimov added a comment - Hello, sorry for the delay. Nginx will corrupt the upstream response when the temp file overflows, see:
            https://github.com/danghvu/nginx-1.4.0/blob/137a30d7132743e968b59aac44c0deeea07c94f5/src/event/ngx_event_pipe.c#L720

            Stefan Saasen (Inactive) added a comment - Hi Alexey,

            I'm tempted to assume that this is unrelated to Stash. I'm going to double check that no other change could have possibly caused this though.

            Here is another thought.
            I noticed that the chromium repo is 2.6 GiB in pack file/pack index alone (as of 2014-09-24):

            $> du -hs .git/objects/pack
            2.6G	.git/objects/pack
            

            With a 2.6 GiB packfile and the default for proxy_max_temp_file_size being a mere 1 GiB (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size), I'd assume that downstream clients previously had to consume the packfile fast enough that the buffer didn't fill up, since it wouldn't have been big enough to hold a packfile of that size anyway (and the defaults for the in-memory buffers are in the KiB range). Did maybe something change on the consuming end?


              Assignee: Unassigned
              Reporter: Alexey Efimov
              Votes: 0
              Watchers: 4
