Bitbucket Cloud / BCLOUD-15062

Pipelines randomly fail with: "Error occurred whilst uploading artifact". (BP-1174)

      Our Bitbucket pipeline often fails with a "System error" saying "Error occurred whilst uploading artifact":

      Here is the bitbucket-pipelines.yml section for the step where the system error occurred:

      - step:
            name: Generate APIBlueprint documentation
            image:
              name: <custom docker image hosted on aws ecr>
              aws:
                access-key: $AWS_ACCESS_KEY_ID
                secret-key: $AWS_SECRET_ACCESS_KEY
            script:
              - <generate documentation.apib file>
              - cp documentation.apib dist/documentation.apib
            artifacts:
              - dist/documentation.apib
      

      Re-running the pipeline usually solves the problem, which leads me to think it's not an issue with our bitbucket-pipelines.yml file.


            Pierre B added a comment -

            Attachment 543547393-Capture d’écran 2017-10-20 à 10.59.31.png has been added with description: Originally embedded in Bitbucket issue #15062 in site/master


            Assaf Aloni added a comment -

            Attachment 3725227603-bitbucket-pipelines-artifact-error.png has been added with description: Originally embedded in Bitbucket issue #15062 in site/master

            e98cuenc added a comment -

            Attachment 500928793-Captura de pantalla 2017-12-11 a las 23.22.01.png has been added with description: Originally embedded in Bitbucket issue #15062 in site/master


            Martin added a comment -

            Attachment 3405605023-Screen Shot 2017-12-15 at 09.29.54.png has been added with description: Originally embedded in Bitbucket issue #15062 in site/master


            Seppe Stas added a comment -

            To explain my use case a bit: I use the caches to cache git repos containing build configurations (the sources), downloaded source files (downloads) and re-usable build artifacts (yocto-cache). These are used by the Yocto build system to build embedded Linux systems. I enabled pipelines on the “root” repository, making sure the build always works from a clean state (without changes to local configuration).

            I also want to store these images so they can be retrieved later. I did this in a separate step because my build image did not have cURL installed (but now it does) and because I wanted this step to be optional in the future. This is why I need the artifacts.

            Using my own storage solution requires me to manage it, and using AWS in the same region sounds a bit like a leaky abstraction. Either way the data would still be sent past the edge routers, which is not great from a performance and security perspective.

            Wouldn’t it be possible to enforce that some steps are run on the same instance? I think the main thing missing to accomplish this is some sort of “required-artifacts” property for subsequent steps.

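            For illustration only, here is a hypothetical sketch of what such a property could look like in bitbucket-pipelines.yml. Pipelines has no "required-artifacts" key; the key name and layout below are assumptions made purely to visualise the suggestion:

              # Hypothetical syntax only - "required-artifacts" is not a real Pipelines option.
              # The idea: a later step declares which artifacts it needs, so the scheduler
              # could keep it on the same instance and skip the upload/download round trip.
              pipelines:
                default:
                  - step:
                      name: Build image
                      script:
                        - make image
                      artifacts:
                        - build/image.wic
                  - step:
                      name: Upload image
                      required-artifacts:        # illustrative key, not supported by Pipelines
                        - build/image.wic
                      script:
                        - curl -T build/image.wic "$UPLOAD_URL"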

            StannousBaratheon added a comment -

            Hi Seppe,

            Indeed, the error message should be more meaningful. I will look into improving that.

            I don't fully understand your use case, but caches and artifacts are separate things. Caches are designed to persist dependencies between pipelines so that subsequent builds are faster, whereas artifacts are designed to pass files between steps so that they are available in subsequent steps (i.e. not necessarily as a performance improvement, but to achieve build-once semantics). You can use both artifacts and caches in a single pipeline to achieve different goals depending on your requirements, and each artifact/cache has a separate 1GB limit, so defining several smaller artifacts is a possible solution to your problem.

            Also note that storing artifacts in your own S3 bucket should not come with a significant performance impact compared to the built-in artifacts, as our build cluster runs in AWS (us-east-1) and the approach is similar to the solution we employ for built-in artifacts.

            We explored using Docker volumes for persistent state, but it comes with added complexity, as different steps are not necessarily run on the same underlying host instance.

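            As a reference for the distinction above, a minimal bitbucket-pipelines.yml sketch that uses a custom cache for dependencies plus several smaller artifact globs; the cache name and paths are illustrative, not taken from this issue:

              definitions:
                caches:
                  yocto-downloads: downloads    # illustrative custom cache, persisted between pipelines

              pipelines:
                default:
                  - step:
                      name: Build
                      caches:
                        - yocto-downloads       # speeds up subsequent builds
                      script:
                        - make build
                      artifacts:                # passed on to later steps in this pipeline
                        - dist/documentation.apib
                        - build/logs/**
                  - step:
                      name: Publish
                      script:
                        - ls dist/              # artifacts from the previous step are available here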

            Seppe Stas added a comment -

            Hmm, in that case it would be nice to have a more useful error like “failed to upload cache: cache exceeds maximum size of 1GB” instead of the generic “SYSTEM ERROR”.

            Using an external storage solution does not make a lot of sense to me. I thought the whole purpose of caching was to reduce internet traffic. Using the Bitbucket Pipelines cache allows everything to stay in the same data center, making it quicker and cheaper to retrieve. Uploading caches to an external storage solution kind of negates this.

            Also note that the only reason for me to use artifacts is to be able to use an image that can actually upload artifacts; the image I use for building can’t do this. For my use cases, having a shorter-lived artifact that is efficiently passed between containers with different images makes much more sense.

            Using a Docker volume that gets removed after the pipeline completes sounds like a no-brainer solution to me...

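            For comparison, the flow being suggested, sketched with plain Docker commands: a named volume shared between two containers with different images and removed afterwards. The image names and paths are placeholders, and this only works when both containers run on the same host, which is the guarantee Pipelines does not currently provide:

              # Assumes both containers run on the same Docker host (placeholder image names).
              docker volume create pipeline-scratch
              docker run --rm -v pipeline-scratch:/artifacts build-image \
                  sh -c 'make image && cp build/image.wic /artifacts/'
              docker run --rm -v pipeline-scratch:/artifacts uploader-image \
                  sh -c 'curl -T /artifacts/image.wic "$UPLOAD_URL"'
              docker volume rm pipeline-scratch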

            StannousBaratheon added a comment -

            Hi Seppe,

            Artifacts and caches are both restricted to 1GB. There are several options if you are hitting this limit: one is to reconsider whether you need to pass your entire build directory on as an artifact, since Pipelines clones the source in every step and dependencies can be downloaded again if needed. Another is to use your own storage solution, as recommended in our documentation:

            "If you need artifact storage for longer than 7 days (or more than 1 GB), we recommend using your own storage solution, like Amazon S3 or a hosted artifact repository like JFrog Artifactory." - https://confluence.atlassian.com/bitbucket/using-artifacts-in-steps-935389074.html

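            A minimal sketch of the documented alternative: uploading the artifact to your own S3 bucket from a step. The bucket name and path are placeholders, and it assumes the AWS CLI is available in the build image and that the credentials are set as repository variables:

              - step:
                  name: Publish documentation to own S3 bucket
                  script:
                    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY taken from repository variables
                    - aws s3 cp dist/documentation.apib s3://my-artifact-bucket/docs/documentation.apib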

            Seppe Stas added a comment -

            @mryall_atlassian Last time I saw the issue was 18/01. I'll rerun a build and report the issue if it occurs again.

            Note that I'm caching quite a lot of stuff (sources are ~86 MB, downloads ~6 GB and the cache ~3 GB on my local machine, but the CI build should be smaller).
            Could it be that I'm hitting some sort of upload quota, causing the upload of the artifacts to fail?

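            One way to see whether a cache or artifact is approaching the 1GB limit is to print the directory sizes in the build step, just before Pipelines uploads them. A sketch, reusing the directory names mentioned above:

              - step:
                  name: Build
                  script:
                    - make build
                    # report sizes of the cached/artifact directories before upload
                    - du -sh sources downloads yocto-cache dist || true
                  artifacts:
                    - dist/**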

            Matt Ryall added a comment -

            Resolving as this issue is currently considered fixed. If you're still experiencing problems with artifact uploads in Pipelines, please raise a support ticket at https://support.atlassian.com.


              Assignee: Unassigned
              Reporter: Pierre B
              Affected customers: 8
              Watchers: 16
