
[BCLOUD-6331] ssh_exchange_identification: Connection closed by remote host (BB-10785)

      I have a cron job that pulls from a bitbucket repository once an hour. Every so often, the cron job emails me

      ssh_exchange_identification: Connection closed by remote host
      fatal: The remote end hung up unexpectedly
      

      This started a few weeks ago with this happening perhaps once every few days. Now it's getting worse, happening almost every hour.

      To check, I set up the same cron job on a completely separate host, and it's showing the same behavior. So I suspect the problem is on the server's end.
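
      As a stopgap while this is investigated, a cron wrapper that retries the pull a couple of times before giving up can paper over the intermittent failures. This is only a hedged sketch; the repository path, retry count, and delay are illustrative, not taken from this report.

      #!bash
      # Hypothetical cron wrapper (illustrative paths and values): retry the pull
      # a few times with a short delay and only report failure if every try fails.
      REPO_DIR="$HOME/checkout/myrepo"   # placeholder, adjust to the real clone
      cd "$REPO_DIR" || exit 1
      for attempt in 1 2 3; do
          if git pull --quiet; then
              exit 0                     # pull succeeded, stay silent for cron
          fi
          sleep 30                       # brief backoff before the next attempt
      done
      echo "git pull failed after 3 attempts" >&2
      exit 1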


            roshangautam added a comment -

            I am getting the same errors too. I have a satis installation and it needs the cron to check packages very frequently. My cron runs every 10 mins.

            Andrew Kerr added a comment -

            We received it again today too: Error: Command failed: ssh_exchange_identification: Connection closed by remote host

            First time in about a week for us.

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            Subtle difference this time. We've implemented 3 retries due to this issue, and that was working around the problem before this last occurrence. We've had 11 times overnight where 3 consecutive tries failed. All but 1 of those failures were the ssh_exchange_identification failure; the other was the "Permission denied" noted by jsashi.

            Good luck!

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            Same problem here:

            $ git fetch --all
            Fetching origin
            ssh_exchange_identification: Connection closed by remote host
            fatal: The remote end hung up unexpectedly
            error: Could not fetch origin

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            @evzijst Ok, it happened again. Within the last 15 min, it happened 8 times. Looks like it's intermittent.

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            @evzijst I have edited my original comment saying it's not happening anymore now. But just for your info, it's the same error message with another variation at times:

            Permission denied (publickey).
            fatal: The remote end hung up unexpectedly

            However, most of the time it's:

            ssh_exchange_identification: Connection closed by remote host
            fatal: The remote end hung up unexpectedly

            ethankaminski added a comment -

            Erik - we may have experienced two recurrences of this issue, approximately 15 minutes and 30 minutes before jsashi commented. My evidence for this is some unexpected script failures - unfortunately, the script in question discards stderr, so I can't say for sure whether it's the same error, nor can I offer any specifics.

            evzijst added a comment -

            Absolutely. How severe is it though? What's the success/failure ratio? And are you looking at the exact same error message?

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            This issue is back again after a week. Can we re-open this bug?

            Edit: It's not happening anymore now. Will update if it happens again.

            evzijst added a comment -

            Excellent! I was hoping to hear that. After the initial reports of some lingering issues last week we found another issue that we addressed. Since then we've been unable to reproduce the issue ourselves, so it's encouraging to read your feedback.

            I'm going to close this issue again; however, if anyone is still seeing systemic issues, feel free to reopen once more.

            Shamasis Bhattacharya added a comment -

            @evzijst, just so you know we've faced no such incident so far (this week).

            evzijst added a comment -

            Right. So yeah, that looks like the same issue. I'm looking into it and will let you know how I go.


            Legacy Bitbucket Cloud User (Inactive) added a comment -

            Same as Mark: better, but still not resolved. To answer Erik:

            $ git pull
            ssh_exchange_identification: Connection closed by remote host
            fatal: The remote end hung up unexpectedly

            evzijst added a comment -

            That's discouraging, but thanks for letting me know. I'll dive back in.

            What errors are you seeing exactly (to confirm we're not looking at a different issue)? You're seeing ssh_exchange_identification errors?


            Legacy Bitbucket Cloud User (Inactive) added a comment -

            From 00:00 to 8:00 am EST this morning I had 20 failures of a pull that runs every minute.

            It's better, but not resolved.

            Shamasis Bhattacharya added a comment -

            Thanks.

            evzijst added a comment -

            We found the culprit and fixed the problem.

            Should anyone still see this, feel free to reopen the issue.


            robclancy added a comment -

            Guys they have reproduced and are working on it. Can we save comments for actual updates on the issue and not "me too"?


            Legacy Bitbucket Cloud User (Inactive) added a comment -

            Same issue and it disappeared for 2 days for me and it's back now.

            cpook added a comment -

            I've been seeing these periodically whilst running Composer install/update commands. The clones fail with

            #!bash
            [RuntimeException]
              Failed to execute git clone --no-checkout "git@bitbucket.org:company
              /project-dependency.git" "C:\wamp\www\project\vendor\company/dependency" && cd /D "C:\wamp\www\project\vendor\company/dependency"
              && git remote add composer "git@bitbucket.org:company/project-dependency.git" && git fetch composer
            
              ssh_exchange_identification: Connection closed by remote host
              fatal: Could not read from remote repository.
            
              Please make sure you have the correct access rights
              and the repository exists.
            
            
            


            tersmitten added a comment -

            Same issue here (ssh port 22).


            Steve Muskiewicz added a comment -

            Same issue here (ssh port 22). Not sure if the timing is significant, but I appeared to be seeing more failures than usual running at around 3am ET (though my cron entry limits when it runs). Prior to June 6th, we never saw any of these issues at all.

            rishibashi added a comment -

            Here's the same problem in hg. Details:

            running ssh hg@bitbucket.org 'hg -R uzabase/speenya serve --stdio'
            sending hello command
            sending between command
            remote: ssh_exchange_identification: Connection closed by remote host
            abort: no suitable response from remote hg!


            aMarcus (Inactive) added a comment -

            Thanks for the help everyone. We now see clearly how to reproduce this and are looking at what we need to do next. Keep an eye on this issue for more info/questions.

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            Here's what I get from ssh with verbosity turned up:

            #!python
            OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
            debug1: Reading configuration data /etc/ssh/ssh_config
            debug1: /etc/ssh/ssh_config line 19: Applying options for *
            debug2: ssh_connect: needpriv 0
            debug1: Connecting to bitbucket.com [131.103.20.172] port 22.
            debug1: Connection established.
            debug1: permanently_set_uid: 0/0
            debug3: Incorrect RSA1 identifier
            debug3: Could not load "/root/.ssh/id_rsa" as a RSA1 public key
            debug1: identity file /root/.ssh/id_rsa type 1
            debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
            debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
            debug1: identity file /root/.ssh/id_rsa-cert type -1
            debug1: identity file /root/.ssh/id_dsa type -1
            debug1: identity file /root/.ssh/id_dsa-cert type -1
            debug1: identity file /root/.ssh/id_ecdsa type -1
            debug1: identity file /root/.ssh/id_ecdsa-cert type -1
            ssh_exchange_identification: Connection closed by remote host
            fatal: The remote end hung up unexpectedly

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            Marcus, it fails immediately because SSH is refusing the initial exchange. Retrying immediately almost always works. I suspect the association with cron means there's a burst of connections at 5-, 10-, 15-minute (etc.) intervals, so you're getting spikes of many SSH handshakes.

            I recommend reviewing the MaxSessions and MaxStartups sshd options. The default for MaxSessions is only 10. It doesn't take many time-synced cron jobs to max that out.
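
            For reference, those are server-side sshd_config directives (i.e. something Bitbucket's operators would tune, not clients). A hedged sketch of what raising them might look like; the numbers are illustrative, not a recommendation from this thread:

            #!bash
            # /etc/ssh/sshd_config (server side) - illustrative values only
            # MaxStartups: concurrent unauthenticated connections before sshd starts
            # refusing new ones, in start:rate:full form (random early drop).
            MaxStartups 100:30:200
            # MaxSessions: sessions multiplexed over a single network connection.
            MaxSessions 20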

            aMarcus (Inactive) added a comment -

            Issue BCLOUD-9706 was marked as a duplicate of this issue.

            aMarcus (Inactive) added a comment -

            Issue BCLOUD-9719 was marked as a duplicate of this issue.

            aMarcus (Inactive) added a comment -

            Issue BCLOUD-9685 was marked as a duplicate of this issue.

            mohanjith added a comment -

            We are using port 22 and it happens at random. We have now switched to git submodule foreach 'git pull origin master || :' and it might fail for one submodule but will continue for others.

            Thanks!


            stanhu added a comment -

            I am using the SSH protocol (port 22) to pull from private repositories. The failure feels random, and the command fails pretty quickly when it does. Connections usually work after I retry again.


            aMarcus (Inactive) added a comment -

            Also, what port is everyone here using? 22 or our 443 alternate?

            aMarcus (Inactive) added a comment -

            For everyone following this issue, can we get a little more detail? We are struggling to reproduce this. Does it happen immediately or does the command run for a while then fail? Also, when it happens, do all connections then continue to fail until a period of time has passed with no attempts? Or does it feel random?

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            I'm having the same issue; TeamCity goes kaplooey at least 3 times a day.

            It's hard for me to tell you what TeamCity was doing at the times of failure... I'll turn up logging to see if that helps.

            What I can tell you, though, is that I have a script that runs git pull on 53 repos using 4 concurrent threads, and it appears that 1 of these concurrent requests fails with this error (remote end hung up). If you change the script to pull the repos linearly, it almost never fails. From my end, it feels related to concurrent volume.

            I'll turn up the verbosity of my script to see what that yields at the SSH layer.
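
            To illustrate the "pull linearly" workaround described above, a hedged sketch; the repository list and paths are placeholders, not from this thread:

            #!bash
            # Illustrative only: pull a set of clones one at a time instead of in
            # parallel, so the SSH handshakes are spread out rather than bursty.
            REPOS="/srv/checkouts/repo1 /srv/checkouts/repo2 /srv/checkouts/repo3"
            for dir in $REPOS; do
                (cd "$dir" && git pull --quiet) || echo "pull failed in $dir" >&2
                sleep 2    # small gap between connections
            done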

            langeuh added a comment -

            I have the same issue. Composer update fails randomly on a random bitbucket repo. Fails 4/10 times.


            christiaan added a comment -

            I'm pulling in 10 repositories in a script and after 3 or 4 I get this error on almost every try.


            Hannes Ebner added a comment -

            I am experiencing the same problem; several repositories fail to be pulled regularly and I have to retry several times before the operation completes successfully.

            aMarcus (Inactive) added a comment -

            We are reopening this issue to investigate further. Sorry about any inconvenience this may be causing your processes.

            Shamasis Bhattacharya added a comment -

            Hi,

            We at FusionCharts are experiencing the same problem too. Since last week, one in three of our automated builds has been failing with the error ssh_exchange_identification: Connection closed by remote host while cloning submodules. (We have quite a few of them in a single repository. That's a different story!)

            No doubt the frequency has reduced, but we are still facing this while cloning submodules during our CI builds at a rate of about 1 in 10. Interestingly, not all submodules fail to clone; random ones fail in builds.

            kniziol added a comment -

            Same problem here. Details:

            #!sh
            
            Failed to update git@bitbucket.org:<name-of-repository>.git, package information from this repository may be outdated (ssh_exchange_identification: Connection closed by remote host
            fatal: Could not read from remote repository.
            
            Please make sure you have the correct access rights
            and the repository exists.
            error: Could not fetch origin
            )
            
            

            When will it be fixed?

            ethankaminski added a comment -

            Looks like someone has opened issue BCLOUD-9685 for the new recurrence of this fault.

            I've seen this twice in manually-triggered script runs this week. We have a test server that's maintained using scripts, and those scripts have generated error messages about ssh_exchange_identification. I've not seen the issue (yet) when working locally.

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            We had this error for maybe half an hour, however 'git pull' is now working again.

            mohanjith added a comment -

            Having the same issue here too.


            stanhu added a comment -

            Having the same problem here too.


            Legacy Bitbucket Cloud User (Inactive) added a comment -

            I have cron-ed pulls every few minutes. This morning I came in to over 1500 failure emails.

            These pulls are how our production deployments are triggered. They need to be much more reliable.

            andrewspiers added a comment -

            I used to see this maybe once or twice a day on a job that pulls via ssh every 15 minutes. Now I am getting this almost every fifteen minutes.

            Legacy Bitbucket Cloud User (Inactive) added a comment -

            I am experiencing this same thing. My script resolves branches (using git ls-remote) every minute via ssh, and this issue happens at interesting points in time:

            #!python
            Date: Wed, 12 Mar 2014 14:00:02 +0000 (UTC)
            Date: Wed, 12 Mar 2014 15:00:02 +0000 (UTC)
            Date: Wed, 12 Mar 2014 16:30:03 +0000 (UTC)
            Date: Wed, 12 Mar 2014 18:00:03 +0000 (UTC)
            Date: Wed, 12 Mar 2014 18:00:03 +0000 (UTC)
            Date: Wed, 12 Mar 2014 22:00:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 00:00:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 02:30:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 03:00:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 03:00:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 04:20:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 04:30:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 05:00:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 05:30:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 06:00:02 +0000 (UTC)
            Date: Thu, 13 Mar 2014 06:00:02 +0000 (UTC)

            benjam_es added a comment -

            Anadi - This worked a treat for me


            Legacy Bitbucket Cloud User (Inactive) added a comment -

            Thanks Anadi, that worked for me.

            amisra added a comment -

            Hi!

            I am experiencing the same issue here, though I'm not sure if it is a misconfiguration on my side or something else. Port 22 is blocked in my office, so this is what I put in my ~/.ssh/config file:

            #!shell
            Host bitbucket.org
                User git
                Hostname altssh.bitbucket.org
                IdentityFile ~/.ssh/git_id_rsa
                IdentitiesOnly yes
                Port 443

            I saw on the Bitbucket Confluence site that we can use a URL of the form

            #!shell
            ssh://git@altssh.bitbucket.org:443/username/gitrepo

            hence these changes. Here is the output:

            #!shell
            $ ssh -vT git@bitbucket.org
            OpenSSH_6.2p2, OpenSSL 1.0.1e 11 Feb 2013
            debug1: Reading configuration data /home/misraa3/.ssh/config
            debug1: /home/misraa3/.ssh/config line 7: Applying options for bitbucket.org
            debug1: Connecting to bitbucket.org [131.103.20.168] port 443.
            debug1: Connection established.
            debug1: identity file /home/misraa3/.ssh/git_id_rsa type 1
            debug1: identity file /home/misraa3/.ssh/git_id_rsa-cert type -1
            debug1: Enabling compatibility mode for protocol 2.0
            debug1: Local version string SSH-2.0-OpenSSH_6.2
            debug1: ssh_exchange_identification: <html>
            debug1: ssh_exchange_identification: <head><title>400 Bad Request</title></head>
            debug1: ssh_exchange_identification: <body bgcolor="white">
            debug1: ssh_exchange_identification: <center><h1>400 Bad Request</h1></center>
            debug1: ssh_exchange_identification: <hr><center>nginx/1.5.3</center>
            debug1: ssh_exchange_identification: </body>
            debug1: ssh_exchange_identification: </html>
            ssh_exchange_identification: Connection closed by remote host

            evzijst added a comment -

            We found a faulty SSH daemon that we killed, which I suspect has been the cause for these issues.

            Please reopen this issue if you still see the problem going forward.


            bhrebec added a comment -

            I haven't turned up anything interesting with git (pull|fetch) --verbose (the error occurs first thing, and no additional messages are shown), so I've added ssh -vvv git@bitbucket.org to the crontab in hopes of getting more detail. Will report back if it triggers the same problem.

            To answer some of the above questions:

            • It happens randomly over every repository I've tried.
            • Some repositories are quite small and have only been updated once or twice since being moved to Bitbucket; another is medium-sized and is updated many times per day.
            • For me, it happens with a pull; I don't use clone in crontab and haven't noticed it causing a problem.
            • Based on limited testing, https doesn't have any problems, but I won't be sure until it has run a bit longer.

            evzijst added a comment -

            Any extra information @bhrebec @petere?

            Out of curiosity, have you tried switching to HTTPS (just for the sake of troubleshooting, I'm not saying you should prefer http over ssh)?
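
            For anyone who wants to try that, a hedged sketch of temporarily pointing an existing clone at the HTTPS URL and back again; the repository path is a placeholder, not from this thread:

            #!bash
            # Switch the existing remote to HTTPS for troubleshooting...
            git remote set-url origin https://bitbucket.org/youruser/yourrepo.git
            git pull
            # ...and switch back to SSH afterwards.
            git remote set-url origin git@bitbucket.org:youruser/yourrepo.git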

            Brian Nguyen (Inactive) added a comment -

            Hi Peter,

            Unfortunately, I am not able to diagnose the issue with the information at hand. It would be handy to see exactly where in the process the connection is being severed. Does git pull --verbose work?

            Just to get a better idea of the environment, could you answer a few questions:

            • Does it happen for any of your other repositories?
            • How large is the repository, and how frequently is it updated?
            • Does the cron job pull from the repository or does it create a clone?

            Cheers,
            Brian

            bhrebec added a comment -

            Just wanted to chime in that I've been seeing an identical error; it happened very frequently (almost every other request) a few weeks ago, but has decreased to the point where it only happens a couple of times per day.

            Curiously enough, I haven't been able to trigger the problem while running git manually. It only seems to occur in a cron job.


            petere added a comment -

            GIT_CURL_VERBOSE isn't going to show anything, because I'm using the SSH URL.

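            For SSH URLs, one way to get comparable verbosity is to point git at a small wrapper script via the GIT_SSH environment variable so every connection runs ssh with -vv. A hedged sketch; the wrapper path and log file name are illustrative, not from this thread:

            #!bash
            # Create a small wrapper (illustrative path) that runs ssh verbosely,
            # then tell git to use it via the GIT_SSH environment variable.
            printf '#!/bin/sh\nexec ssh -vv "$@"\n' > ~/ssh-verbose
            chmod +x ~/ssh-verbose
            GIT_SSH="$HOME/ssh-verbose" git pull origin 2> ssh-debug.log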

            petere added a comment -

            Still happening


            Brian Nguyen (Inactive) added a comment -

            Hi Peter,

            I'm closing this issue, since it has been open for a while now. Please reopen this if you are still experiencing issues.

            Cheers,
            Brian

            Brian Nguyen (Inactive) added a comment -

            Hey,

            Are you still seeing this problem? We have been experiencing some performance issues in the last couple of days that may be causing this.

            If you are still experiencing this, would you be able to run this?

            GIT_CURL_VERBOSE=1 git pull origin

            It should give us a better idea of where the problem is occurring.

            Cheers,
            Brian

              Affected customers: 34
              Watchers: 74