
Allow building multi-architecture Docker images (e.g. ARM images)

    • Our product teams collect and evaluate feedback from a number of different sources. To learn more about how we use customer feedback in the planning process, check out our new feature policy.

      Please allow the --privileged flag so we can build multi-arch Docker images.
      According to this article, it is possible with GitHub + Travis:
      http://blog.hypriot.com/post/setup-simple-ci-pipeline-for-arm-images/

      Register qemu-*-static for all supported processors except the current one
      docker run --rm --privileged multiarch/qemu-user-static:register

      Currently, the following error is returned when running the pipeline:

      • docker run --rm --privileged multiarch/qemu-user-static:register --reset
        docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported.
        See 'docker run --help'.

      Thanks
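
      For context, the approach from the linked article registers QEMU emulation and then builds from a multi-arch base image. A rough sketch of that flow (the image names below are examples, not taken from the article):

      # Register QEMU binfmt handlers so foreign-architecture binaries can run.
      # This is the command Pipelines currently rejects because it needs --privileged:
      docker run --rm --privileged multiarch/qemu-user-static:register --reset

      # Build an ARM image on an amd64 host; the ARM binaries run under QEMU emulation:
      docker build -t myorg/myapp:armhf -f Dockerfile.armhf .
      docker push myorg/myapp:armhf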

            [BCLOUD-15317] Allow building multi-architecture Docker images (e.g. ARM images)

            Pinned comments

            Pinned by Edmund Munday

            Edmund Munday added a comment - edited

            Hi all - as promised, we're excited to announce the release of ARM builds in the Pipelines cloud runtime.

            Head over to our announcement blog for all the details: 

            https://www.atlassian.com/blog/software-teams/announcing-arm-builds-in-cloud-for-bitbucket-pipelines

            Important note regarding multi-arch support:

            As mentioned, this initial release does make it possible to create multi-arch images using the `docker manifest` method, but does not support privileged containers or `buildx`.

            While less ergonomic than `buildx`, it should be noted that the `docker manifest` method can be significantly more performant than using `buildx` due to being able to leverage native runtimes for both architecture builds rather than qemu-based emulation which can be very slow.

            Stay tuned for future updates re: `buildx` support.
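
            In practice, the `docker manifest` flow described above amounts to building and pushing one tag per architecture (each on a step running natively on that architecture) and then stitching them together. A rough sketch with placeholder image names:

            # On an amd64 step:
            docker build -t myorg/myapp:1.0-amd64 .
            docker push myorg/myapp:1.0-amd64

            # On an arm64 step (the new ARM cloud runtime):
            docker build -t myorg/myapp:1.0-arm64 .
            docker push myorg/myapp:1.0-arm64

            # Combine the per-arch tags into a single multi-arch tag:
            docker manifest create myorg/myapp:1.0 myorg/myapp:1.0-amd64 myorg/myapp:1.0-arm64
            docker manifest push myorg/myapp:1.0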


            All comments

            Sergey Kandaurov added a comment -

            It's also possible to build multi-arch images with buildx via a Bitbucket Cloud pipeline. It's based on
            https://docs.docker.com/build-cloud/ci/.
             
            This step definition works well for us:

                      - step:
                          name: Build Docker image
                          script:
                            # Enable buildx
                            - mkdir -vp ~/.docker/cli-plugins/
                            - curl --silent -L --output ~/.docker/cli-plugins/docker-buildx "https://github.com/docker/buildx-desktop/releases/download/v0.18.0-desktop.2/buildx-v0.18.0-desktop.2.linux-amd64"
                            - chmod a+x ~/.docker/cli-plugins/docker-buildx                
                            - docker login --username $DOCKER_USER --password $DOCKER_PASSWORD
                            - docker buildx create --driver cloud flexifyio/flexify
                            # Build
                            - docker buildx build --builder cloud-flexifyio-flexify --platform linux/amd64,linux/arm64 --pull --tag flexifyio/engine:edge --push .
                          services:
                            - docker 

            It runs the actual Docker builds in Docker Build Cloud, but our Dockerfile just copies artifacts that we've built in the previous pipeline steps. It completes in seconds, so it's not a big deal.


            Sergey Kandaurov added a comment -

            Thanks 57465700c4e1! buildx seems to be much easier to use and is no less performant when used with Docker Build Cloud native builders, such as:
             

            docker buildx create --driver cloud flexifyio/flexify 
            docker buildx build --builder cloud-flexifyio-flexify ....

            Since such builds are performed in a cloud, they may actually not need privileged access.

            You may look at providing a service/builder specifically for buildx that would not require qemu-based emulation.

             



            Edmund Munday added a comment -

            Hi all - we will be releasing the ability to run steps on ARM via the cloud runtime in the next few weeks.

            This will make it possible to do multi-arch images via `docker manifest` upon release. To be clear, this will not initially include support for `buildx` - however we plan to ship `buildx` support relatively soon after shipping the ARM cloud runtime.

            This ticket will remain "in progress" until both ARM and `buildx` support are shipped.


            Andrei Dascalu added a comment -

            +1

             

            Adding this to the "unbelievable that Bitbucket still doesn't support it despite it being nearly 2025" pile, right next to the lack of OIDC support for Azure.

            Apparently Bitbucket keeps expecting people to express support for common-sense features, while the people who need them have gotten used to just employing workarounds.


            Oleg Tarassov added a comment - at this point, gitlab is the way...

            +1

            Ropelatto added a comment -

            +1


            Rohit Gautam added a comment - +1

            Leonardo Rojas added a comment - +1

            Matthew Lee added a comment - +1

            Nima Zamani added a comment - edited

            +1

            We really need this feature. How is this different from this documentation page: Building multi-architecture docker images with Bitbucket Pipelines (https://confluence.atlassian.com/bbkb/building-multi-architecture-docker-images-with-bitbucket-pipelines-1252329371.html)?


            +1

            Pieter Rombauts added a comment -

            57465700c4e1, is there a provisional or estimated date for when this functionality will be available (or, more generally, the use of `--privileged` in Bitbucket Pipelines)? I know you've said that the goal is to enable it in the medium term; I'm just wondering if you could speak to roughly when it might be available, as it's blocking a number of tasks for us. It'd be great to know whether we can expect it by the end of the year, Q1 next year, at some point in 2025, etc.


            +1

            +1

            André Déo added a comment - +1

            +1

            Lucas Santos added a comment - +1

            Marcelo Primo added a comment - +1

            +1

            Edmund Munday added a comment -

            Hi all - we're in the process of a major architectural upgrade to Pipelines right now.

            One of the things we intend for this to enable (in the medium term) is multi-arch image builds. Full disclosure, this will not be available at launch, but it's a critical step in the direction we need to go to enable this.


            pablo.hendrickx added a comment -

            When is this going to get solved? It's preventing me from onboarding like 30 services. Insane that this isn't getting more priority.


            Edmund Munday added a comment -

            Hi all - just letting you know that yesterday we shipped ARM Support on Pipelines Runners: https://bitbucket.org/blog/announcing-support-for-linux-arm-runners-in-bitbucket-pipelines

            Understandably this is not exactly what this ticket is about, but it's related, so I'm sharing this info here as it's a required step towards us supporting multi-arch Docker images.


            Oleg Tarassov added a comment -

            31994d4a81b0,
             
            I agree with you on everything except the private pipe images.
            We had no choice but to create a private pipe and run it on managed Bitbucket runners to make multi-arch builds work.

            e.g

             

            definitions:
              services:
                docker-hosted:
                  type: docker
                  image: docker:dind
            ...
                - step: &build_publish_pipe
                    name: Build and Publish Docker Latest Image
                    runs-on:
                      - self.hosted
                      - acme.hosted
                    services:
                      - docker-hosted
                    caches:
                      - docker
                    script:
                      - cat ${BITBUCKET_CLONE_DIR}/aws_docker_token | docker login --username AWS --password-stdin ${ECR_REPO}
                      - pipe: acme/pipe-multiarch-ecr-push-image:master
                        variables:
                          IMAGE_NAME: "<string>"
                          DOCKER_IMAGE_TAG: "<string>"
            
            

             

            ref: see step 11 in: https://support.atlassian.com/bitbucket-cloud/docs/write-a-pipe-for-bitbucket-pipelines/
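
            For readers unfamiliar with pipes: a private pipe like the one above is essentially a Docker image whose entrypoint script consumes the declared variables. A minimal sketch of what such an entrypoint might look like (illustrative only - not the actual acme pipe; the builder name is made up):

            #!/usr/bin/env bash
            set -euo pipefail

            # Variables passed in from the pipe definition in bitbucket-pipelines.yml
            IMAGE_NAME=${IMAGE_NAME:?IMAGE_NAME is required}
            DOCKER_IMAGE_TAG=${DOCKER_IMAGE_TAG:?DOCKER_IMAGE_TAG is required}

            # Register QEMU handlers so non-native build stages can run under emulation
            # (this works here because the self-hosted runner's daemon allows --privileged).
            docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

            # Create (or reuse) a buildx builder, then build and push both architectures.
            docker buildx create --use --name multiarch-builder 2>/dev/null || docker buildx use multiarch-builder
            docker buildx build \
              --platform linux/amd64,linux/arm64 \
              --tag "${IMAGE_NAME}:${DOCKER_IMAGE_TAG}" \
              --push .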

             

             


            Sam added a comment - edited

            @Vitaliy Zabolotskyy They've been hard at work "Reviewing" it for over 5.5 months!


            Vitaliy Zabolotskyy added a comment - edited

            Sorry to say, but Bitbucket is pretty much unusable in 2023. No CI script reusability, no loops in the pipelines YAML, no WIP PRs, no support for private pipe images, no multi-arch builds. While the existing feature set is OK for a 10-person startup, this is not an industrial-grade solution, not in the current decade.

            We are now thinking about how to move away from Bitbucket. Great job, Atlassian!


            Bohdan Astapov added a comment -

            The inability to build ARM images on Bitbucket Pipelines is a showstopper for us. This almost six-year-old issue is about to go to school.


            pklos added a comment -

            Looking forward to seeing this done too. Currently, if we want to go a little easier on the environment while using cloud computing, the ARM architecture is the way to go. It's a shame we can't build ARM Docker images in Pipelines.


            Aidan de Graaf added a comment -

            Way overdue; looking forward to seeing this prioritised in the near future. We want to switch our workloads to Graviton instances, so we have started looking into alternative build systems.


            Oleg Tarassov added a comment -

            We are saved by the Bitbucket self-hosted runners, which let us build multi-arch, but it was a pain to set up such a workflow in our pipelines.


            Phil Gooch added a comment -

            It's good to see that this is now at Reviewing status. Unfortunately, it has come too late for us, and we have moved away from Bitbucket Cloud for now.


            Mike Howells added a comment -

            This issue has gathered enough interest to be moved automatically to Reviewing status, where it will be reviewed by someone in the relevant product development team and moved on to the appropriate status.


            Fotis Papadamis added a comment -

            Clearly a huge drawback for Bitbucket. ARM64 is the go-to for a variety of scenarios, and using a self-hosted runner feels like moving a step back.


            Thomas Berthold added a comment -

            We also really need this, or else we will be forced to move away from Bitbucket...


            Will Marsman added a comment -

            Really need this feature. All ML-powered edge devices are now ARM. It's critical that we are able to build images targeting ARM platforms.


            Jakub Badecki added a comment -

            This is quite crucial functionality, especially with the growing number of M1 users.
            Are there any plans to move forward with handling ARM without having to use a self-hosted runner?


            Alex Bailey added a comment - edited

            This is becoming really critical to us.

            Most of our services are hosted in AWS ECS. There are substantial performance improvements and cost savings we would like to take advantage of but we're being held back by the lack of ARM support from Bitbucket Pipelines.

            Not only is it key for supporting embedded systems, ARM is becoming increasingly prevalent in both local and cloud computing. Apple is going 'all in' with the M1 chip and AWS is pushing strong benefits for their ARM chips in the cloud.

            By choosing not to support this, Atlassian is really showing how out-of-touch they are becoming with the development community. 

             


            xtremecontrols added a comment -

            This is a business requirement, and we will need to look at moving away from Bitbucket if it is not resolved. Please let us know if it's going to be.


            Stewart Ritchie added a comment -

            Appreciate that there are probably very good reasons to block this.

             

            But to throw in our two cents: we really need this as well. More and more of our development is done on M1 Macs, and deployments are on ARM64.


            Oleg Tarassov added a comment -

            If it's not possible now, can we at least get an estimate of when to expect it? At that point, to each their own, but at least those choices could be made appropriately.

             

            thanks.


            Phil Gooch added a comment -

            We really need this - Bitbucket, please could you fix this? We'd love to keep using Bitbucket Pipelines, but we need support for Graviton!


            Mike Nacey added a comment - GitHub Actions works as well.

            Andrey added a comment - edited

            Just found that AWS CodeBuild has no issues building multi-architecture images, so we'll be moving that part of the pipeline (or the whole pipeline) to AWS.


            Mike Nacey added a comment - edited

            Guys, 

            The time for this is definitely yesterday. EKS is now saving loads of cash on Graviton instances, and we are handcuffed?

            MacBooks are all coming with M1 from now on.

             

             


            Andrey added a comment - edited

            Created in 2017 and still gathering interest


            ahmed.sobhy added a comment -

            It is a very important feature to enable non-amd64 Docker containers to run. I hope this gets supported soon.


            Nick de Palézieux added a comment -

            I certainly agree with @Jon Link.

            That being said, I did get it working with the following pipeline. The trick was to use a different dind image that has experimental features enabled.

            pipelines:
              default:
                - step:
                    name: Build for arm64
                    size: 4x # 1x = 4GB, 2x = 8GB, 4x = 16GB, 8x = 32GB RAM
                    runs-on:
                      - self.hosted
                    services:
                      - docker
                    script:
                      - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
                      - docker buildx create --use
                      - docker buildx build --platform linux/arm64 ...
            
            definitions:
              services:
                docker: # can only be used with a self-hosted runner
                  image: igoratencompass/docker-dind:19.03.0 # a dind image with experimental features enabled in the daemon
            

            Depending on your self-hosted runner machine, building emulated images is very slow though. And since I'm forced to use my own machines anyway, I'd rather just use an ARM device directly.
            This turned out to be quite easy to do, using the SSH pipe, with which I can run my commands on any device I want:

            pipelines:
              default:
                - step:
                    name: Build for ARM64
                    script:
                      - pipe: atlassian/ssh-run:0.4.0
                        variables:
                          SSH_USER: $ARM_BUILD_USER
                          SERVER: $ARM_BUILD_SERVER
                          PORT: $ARM_BUILD_SSH_PORT
                          MODE: "script"
                          COMMAND: "tools/ci-build.sh"
                          ENV_VARS: >-
                            BITBUCKET_BRANCH='${BITBUCKET_BRANCH}'
                            BITBUCKET_COMMIT='${BITBUCKET_COMMIT}'
            


            Jon Link added a comment -

            With all due respect, self-hosted runners are not the solution here. We're not paying to run this on a second machine. I get the issue with running it as privileged, but by saying "run it on your own machine", you're really saying that Pipelines is basically useless when it comes to Docker.


            Nick de Palézieux added a comment - edited

            I'm also trying to get this to work and am having problems. I have configured a self-hosted runner and am trying to get this pipeline to work:

            definitions:
              services:
                docker: # can only be used with a self-hosted runner
                  image: docker:dind
            
            pipelines:
              default:
                - step:
                    name: Compile
                    runs-on:
                      - 'self.hosted'
                    clone:
                      skip-ssl-verify: true
                    size: 2x
                    services:
                      - docker
                    script:
                      - docker version
                      - docker info
                      - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes; docker buildx create --use

            The pipeline fails at the last step due to `--privileged`.
             
            Here is the log:

            Runner matching labels:
                - linux
                - self.hosted
            Runner name: dontthinkpad
            Runner labels: self.hosted, linux
            Runner version:
                current: 1.252
                latest: 1.252
            + umask 000
            
            ...
            
            Images used:
                build: atlassian/default-image@sha256:3a09dfec7e36fe99e3910714c5646be6302ccbca204d38539a07f0c2cb5902d4
                docker: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-docker-daemon@sha256:5f95befdbd73f8a85ec3b7fb5a88d52a651979aff97b1355efc18df8a9811aef
            
            + docker version
            Client: Docker Engine - Community
             Version:           19.03.15
             API version:       1.40
             Go version:        go1.13.15
             Git commit:        99e3ed8
             Built:             Sat Jan 30 03:11:43 2021
             OS/Arch:           linux/amd64
             Experimental:      false
            
            Server: Docker Engine - Community
             Engine:
              Version:          20.10.5
              API version:      1.41 (minimum version 1.12)
              Go version:       go1.13.15
              Git commit:       363e9a8
              Built:            Tue Mar  2 20:18:31 2021
              OS/Arch:          linux/amd64
              Experimental:     false
             containerd:
              Version:          v1.4.3
              GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
             runc:
              Version:          1.0.0-rc93
              GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
             docker-init:
              Version:          0.19.0
              GitCommit:        de40ad0
            
            + docker info
            Client:
             Debug Mode: false
            
            Server:
             Containers: 0
              Running: 0
              Paused: 0
              Stopped: 0
             Images: 0
             Server Version: 20.10.5
             Storage Driver: overlay2
              Backing Filesystem: extfs
              Supports d_type: true
              Native Overlay Diff: true
             Logging Driver: json-file
             Cgroup Driver: cgroupfs
             Plugins:
              Volume: local
              Network: bridge host ipvlan macvlan null overlay
              Authorization: pipelines
              Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
             Swarm: inactive
             Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
             Default Runtime: runc
             Init Binary: docker-init
             containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
             runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
             init version: de40ad0
             Security Options:
              apparmor
              seccomp
               Profile: default
              userns
             Kernel Version: 5.8.0-59-generic
             Operating System: Alpine Linux v3.13 (containerized)
             OSType: linux
             Architecture: x86_64
             CPUs: 8
             Total Memory: 15.37GiB
             Name: fe0fc7a1fae5
             ID: YE3P:ABKU:L4T5:FFN3:YH6R:74DT:Q7EG:NEZV:SJV5:E7WM:76PB:YD2K
             Docker Root Dir: /var/lib/docker/165536.165536
             Debug Mode: false
             Registry: https://index.docker.io/v1/
             Labels:
             Experimental: false
             Insecure Registries:
              127.0.0.0/8
             Registry Mirrors:
              http://localhost:5000/
             Live Restore Enabled: false
             Product License: Community Engine
            
            WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
                     Access to the remote API is equivalent to root access on the host. Refer
                     to the 'Docker daemon attack surface' section in the documentation for
                     more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
            
            + docker run --rm --privileged multiarch/qemu-user-static --reset -p yes; docker buildx create --use
            docker: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed.
            See 'docker run --help'.
            Searching for test report files in directories named [test-reports, test-results, surefire-reports, failsafe-reports] down to a depth of 4
            Finished scanning for test reports. Found 0 test report files.
            Merged test suites, total number tests is 0, with 0 failures and 0 errors.
            

            I already tried starting the runner's Docker container on my machine with `--privileged`, but that didn't help:

            docker container run -it --rm --privileged ... docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
            

            @Justin Thomas, how do I allow running `docker run --privileged` inside the runner pipeline?


            Justin Thomas added a comment -

            bc42126cf290 Can you please get in touch with Atlassian Support? They should be able to help you with the error.


            Carlos Augusto added a comment -

            Hi Justin Thomas,

            Sorry for the late message, but I need to run just one step as self-hosted for an ARM architecture build.

            The other steps would run normally in Bitbucket on your infrastructure.

            However, when I put the docker:dind image in the docker service under "definitions", it gives an error when running the other steps, as this option only works on self-hosted runners.

            I even tried to create a "docker-custom" service and include the image only in it, but I got a memory error when running; it did not pick up the 3072 definition.

            I can't isolate this setting to just the "master" branch, as the staging (homologation) machine I have is ARM64 (which Bitbucket does not support for builds), while the development machine is a normal AMD64.

            The only way I could think of was to have a separate bitbucket-pipelines.yml for each branch; however, when merging, the content gets overwritten even though I included it in .gitignore and set the merge method in .gitattributes. Everything is ignored.

            How can I solve this problem?

            I'm doing automated deploys with Rancher and Continuous Delivery (Fleet integrated), which reads a specific branch that I update at the end of the build in the pipeline.

             

            Below is my bitbucket-pipelines.yml:

            options:
              docker: true
            
            pipelines:
              branches:
                develop:
                  - step:
                      name: Build
                      image:
                        name: golang:stretch
                        username: $DOCKER_HUB_USERNAME
                        password: $DOCKER_HUB_PASSWORD
                        email: $DOCKER_HUB_EMAIL
                      services:
                        - docker
                      condition:
                        changesets:
                          includePaths:
                            - "app/**"
                      script:
                        - export APP_NAME=`echo ${BITBUCKET_REPO_SLUG} | sed 's/_/-/ig'`
                        - echo $APP_NAME
                        - export DEPLOY_BRANCH="deploy-dev"
                        - export DEPLOY_TYPE="DEV"
                        - export DEPLOY_VERSION="$BITBUCKET_BUILD_NUMBER"
                        - export DEPLOY_TAG="$DEPLOY_TYPE-$DEPLOY_VERSION"
                        - export IMAGE_NAME=$DOCKER_HUB_USERNAME/$APP_NAME:$DEPLOY_TAG
                        - echo "Deploying to environment"
                        - cd app/
                        - ls -l
                        - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
                        - docker build -t $IMAGE_NAME .
                        - docker push $IMAGE_NAME
                        - git config --global user.email email@dominio
                        - git config --global user.name "Desenvolvimento"
                        - echo ''$BITBUCKET_GIT_SSH_ORIGIN''
                        - git remote set-url origin ${BITBUCKET_GIT_SSH_ORIGIN}
                        - cd /opt/atlassian/pipelines/agent/build
                        - git clone --branch="$DEPLOY_BRANCH" --depth 5 ${BITBUCKET_GIT_SSH_ORIGIN} /$DEPLOY_BRANCH
                        - cd /$DEPLOY_BRANCH
                        - sed -i 's/image:\ dockerhubuser.*$/image:\ dockerhubuser\/'$APP_NAME':'$DEPLOY_TAG'/' $APP_NAME-deployment.yaml
                        - git add --all
                        - git commit -m 'Deploy '$APP_NAME' '$DEPLOY_TAG''
                        - git push --set-upstream origin $DEPLOY_BRANCH
                master:
                  - step:
                      name: Build
                      image:
                        name: guglio/dind-buildx:latest
                        username: $DOCKER_HUB_USERNAME
                        password: $DOCKER_HUB_PASSWORD
                        email: $DOCKER_HUB_EMAIL
                      runs-on: self.hosted
                      services:
                        - docker
                      condition:
                        changesets:
                          includePaths:
                            - "app/**"
                      script:
                        - export APP_NAME=`echo ${BITBUCKET_REPO_SLUG} | sed 's/_/-/ig'`
                        - echo $APP_NAME
                        - export DEPLOY_BRANCH="deploy-dev"
                        - export DEPLOY_TYPE="DEV"
                        - export DEPLOY_VERSION="$BITBUCKET_BUILD_NUMBER"
                        - export DEPLOY_TAG="$DEPLOY_TYPE-$DEPLOY_VERSION"
                        - export IMAGE_NAME=$DOCKER_HUB_USERNAME/$APP_NAME:$DEPLOY_TAG
                        - echo "Deploying to environment"
                        - cd app/
                        - ls -l
                        - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
                        - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes; docker buildx create --use --name $APP_NAME
                        - docker buildx build -t "$IMAGE_NAME" --platform linux/amd64,linux/arm64 --push .
                        - docker buildx imagetools inspect "$IMAGE_NAME"
                        - git config --global user.email email@dominio
                        - git config --global user.name "Desenvolvimento"
                        - echo ''$BITBUCKET_GIT_SSH_ORIGIN''
                        - git remote set-url origin ${BITBUCKET_GIT_SSH_ORIGIN}
                        - cd /opt/atlassian/pipelines/agent/build
                        - git clone --branch="$DEPLOY_BRANCH" --depth 5 ${BITBUCKET_GIT_SSH_ORIGIN} /$DEPLOY_BRANCH
                        - cd /$DEPLOY_BRANCH
                        - sed -i 's/image:\ dockerhubuser.*$/image:\ dockerhubuser\/'$APP_NAME':'$DEPLOY_TAG'/' $APP_NAME-deployment.yaml
                        - git add --all
                        - git commit -m 'Deploy '$APP_NAME' '$DEPLOY_TAG''
                        - git push --set-upstream origin $DEPLOY_BRANCH
            
            definitions:
              services:
                docker:
                  image: docker:dind
                  memory: 3072

             

            I appreciate any help.

            Best regards,

            Carlos


            Justin Thomas added a comment -

            29434169bd5d The image on the step is used to start the container that executes your scripts, while the docker service container is used to start the Docker daemon against which the docker commands are executed. Hope this helps.

            e698327a1ed7 Unfortunately, we currently only support building multi-arch images using self-hosted runners.
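
            To illustrate the distinction above with a minimal sketch (the step image tag is just an example):

            - step:
                name: Example
                image: atlassian/default-image:4   # container that runs the script (where the docker CLI executes)
                services:
                  - docker                         # sidecar container that runs the Docker daemon
                script:
                  - docker info                    # CLI in the step image talks to the daemon in the service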


              57465700c4e1 Edmund Munday
              fd665ffff158 f4b1en
              Votes: 467
              Watchers: 267
