Bitbucket Cloud / BCLOUD-15317

Allow building multi-architecture Docker images (e.g. ARM images)

    • Our product teams collect and evaluate feedback from a number of different sources. To learn more about how we use customer feedback in the planning process, check out our new feature policy.

      Please allow the --privileged flag so that multi-arch Docker images can be built.
      According to this article, it is possible with GitHub + Travis:
      http://blog.hypriot.com/post/setup-simple-ci-pipeline-for-arm-images/

      Register qemu-*-static for all supported processors except the current one:
      docker run --rm --privileged multiarch/qemu-user-static:register

      Currently, the following error is returned when running the pipeline:

      • docker run --rm --privileged multiarch/qemu-user-static:register --reset
        docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported.
        See 'docker run --help'.

      Thanks
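      For context, the end-to-end flow being requested would look roughly like the sketch below (illustration only: the registry and image name are placeholders, and every command assumes an environment where --privileged is allowed):

      # Register qemu emulators via binfmt_misc (this is the step that needs --privileged)
      docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
      # Create and select a buildx builder, then build and push for several platforms at once
      docker buildx create --name multiarch --use
      docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t registry.example.com/myapp:latest --push .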


            Pinned comments

            Pinned by Edmund Munday

            Edmund Munday added a comment - - edited

            Hi all - as promised, we're excited to announce the release of ARM builds in the Pipelines cloud runtime.

            Head over to our announcement blog for all the details: 

            https://www.atlassian.com/blog/software-teams/announcing-arm-builds-in-cloud-for-bitbucket-pipelines

            Important note regarding multi-arch support:

            As mentioned, this initial release does make it possible to create multi-arch images using the `docker manifest` method, but does not support privileged containers or `buildx`.

            While less ergonomic than `buildx`, the `docker manifest` method can be significantly more performant, because it can use native runtimes for both architecture builds instead of qemu-based emulation, which can be very slow.

            Stay tuned for future updates re: `buildx` support.
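            To make the `docker manifest` route concrete, here is a minimal sketch (image names are placeholders, and each per-architecture build is assumed to run on a build machine of that architecture):

            # Build and push one image per architecture, each on a native machine of that architecture
            docker build -t registry.example.com/myapp:1.0-amd64 .
            docker push registry.example.com/myapp:1.0-amd64
            docker build -t registry.example.com/myapp:1.0-arm64 .
            docker push registry.example.com/myapp:1.0-arm64

            # Stitch the per-architecture images into a single multi-arch tag
            docker manifest create registry.example.com/myapp:1.0 registry.example.com/myapp:1.0-amd64 registry.example.com/myapp:1.0-arm64
            docker manifest push registry.example.com/myapp:1.0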


            All comments

            +1

            Ropelatto added a comment -

            +1

            Rohit Gautam added a comment - +1

            Leonardo Rojas added a comment - +1

            Matthew Lee added a comment - +1

            Nima Zamani added a comment - - edited

            +1

            we really need this feature. How is this different from this knowledge base article: Building multi-architecture docker images with Bitbucket Pipelines (https://confluence.atlassian.com/bbkb/building-multi-architecture-docker-images-with-bitbucket-pipelines-1252329371.html)?


            +1

            Pieter Rombauts added a comment -

            57465700c4e1 is there a provisional or estimated date for when this functionality will be available (or, more generally, the use of `--privileged` in Bitbucket Pipelines)? I know you've said that the goal is to enable it in the medium term; just wondering if you could speak to roughly when it might be available, as it's blocking a number of tasks for us. It'd be great to know whether we can expect it by the end of the year, Q1 next year, at some point in 2025, etc.

            +1

            +1

            +1

            +1

            Lucas Santos added a comment - +1

            Marcelo Primo added a comment - +1

            +1

            Edmund Munday added a comment -

            Hi all - we're in the process of a major architectural upgrade to Pipelines right now.

            One of the things we intend for this to enable (in the medium term) is multi-arch image builds. Full disclosure, this will not be available at launch, but it's a critical step in the direction we need to go to enable this.

            pablo.hendrickx added a comment -

            When is this going to get solved? It's preventing me from onboarding something like 30 services. It's insane that this isn't getting more priority.

            Edmund Munday added a comment -

            Hi all - just letting you know that yesterday we shipped ARM Support on Pipelines Runners: https://bitbucket.org/blog/announcing-support-for-linux-arm-runners-in-bitbucket-pipelines

            Understandably this is not exactly what this ticket is about, but it's related so sharing this info here as this is a required step towards us supporting multi-arch Docker Images.

            Oleg Tarassov added a comment -

            31994d4a81b0, I agree with you on everything except the private pipe images.
            We had no choice but to create a private pipe and run it on managed Bitbucket runners to make multi-arch builds work.

            e.g.:

            definitions:
              services:
                docker-hosted:
                  type: docker
                  image: docker:dind
            ...
                - step: &build_publish_pipe
                    name: Build and Publish Docker Latest Image
                    runs-on:
                      - self.hosted
                      - acme.hosted
                    services:
                      - docker-hosted
                    caches:
                      - docker
                    script:
                      - cat ${BITBUCKET_CLONE_DIR}/aws_docker_token | docker login --username AWS --password-stdin ${ECR_REPO}
                      - pipe: acme/pipe-multiarch-ecr-push-image:master
                        variables:
                          IMAGE_NAME: "<string>"
                          DOCKER_IMAGE_TAG: "<string>"
            
            

             

            ref: see step 11 in: https://support.atlassian.com/bitbucket-cloud/docs/write-a-pipe-for-bitbucket-pipelines/
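            The pipe in the comment above is private, so its contents are not shown; purely as an illustration, the entrypoint of such a multi-arch ECR pipe might boil down to something like the following, reusing the ECR_REPO, IMAGE_NAME and DOCKER_IMAGE_TAG variables from the snippet (everything else is an assumption):

            #!/usr/bin/env sh
            # Hypothetical entrypoint for a multi-arch ECR push pipe - not the actual private pipe
            set -e
            # Register qemu emulators so foreign-architecture builds can run
            # (needs --privileged, which is why this only works on a self-hosted runner)
            docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
            # Build for both architectures and push a single multi-arch tag to ECR
            docker buildx create --name pipebuilder --use
            docker buildx build --platform linux/amd64,linux/arm64 -t "${ECR_REPO}/${IMAGE_NAME}:${DOCKER_IMAGE_TAG}" --push .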

            Sam added a comment - - edited

            @Vitaliy Zabolotskyy They've been hard at work "Reviewing" it for over 5.5 months!


            Vitaliy Zabolotskyy added a comment - - edited

            Sorry to say, but Bitbucket is pretty much unusable in 2023: no CI script reusability, no loops in pipelines YAML, no WIP PRs, no support for private pipe images, no multi-arch builds. While the existing feature set is OK for a 10-person startup, this is not an industrial-grade solution - not in the current decade.

            We are now thinking about how to move away from Bitbucket. Great job, Atlassian!

            Bohdan Astapov added a comment -

            Having no way to build ARM images on Bitbucket Pipelines is a showstopper for us. This issue is almost six years old - it's about to start school.

            pklos added a comment -

            Looking forward to seeing this done too. Currently, if we want to be a little kinder to the environment while using cloud computing, the ARM architecture is the way to go. It's a shame we can't build ARM Docker images in Pipelines.

            Aidan de Graaf added a comment -

            Way overdue; looking forward to seeing this prioritised in the near future. We want to switch our workloads to Graviton instances, so we have started looking into alternative build systems.

            Oleg Tarassov added a comment -

            We are saved by the Bitbucket host runners, which let us build multi-arch, but it was a pain to set up such a workflow in our pipelines.

            Phil Gooch added a comment -

            It's good to see that this is now at Reviewing status. Unfortunately, it has come too late for us, and we have moved away from Bitbucket Cloud for now.

            Mike Howells added a comment -

            This issue has gathered enough interest to be moved automatically to Reviewing status, where it will be reviewed by someone in the relevant product development team and moved on to the appropriate status.

            Fotis Papadamis added a comment -

            Clearly a huge drawback for Bitbucket. ARM64 is the go-to for a variety of scenarios, and using a self-hosted runner feels like taking a step back.

            Thomas Berthold added a comment -

            We also really need this; otherwise we are forced to move away from Bitbucket...

            Will Marsman added a comment -

            Really need this feature. All ML-powered edge devices are now ARM. It's critical that we are able to build images targeting ARM platforms.

            Jakub Badecki added a comment -

            This is quite crucial functionality, especially with the growing number of M1 users.
            Are there any plans to move forward with handling ARM without having to use a self-hosted runner?

            Alex Bailey added a comment - - edited

            This is becoming really critical to us.

            Most of our services are hosted in AWS ECS. There are substantial performance improvements and cost savings we would like to take advantage of, but we're being held back by the lack of ARM support in Bitbucket Pipelines.

            Not only is it key for embedded systems support; ARM is becoming increasingly prevalent in both local and cloud computing. Apple is going 'all in' with the M1 chip, and AWS is pushing strong benefits for its ARM chips in the cloud.

            By choosing not to support this, Atlassian is really showing how out-of-touch they are becoming with the development community. 

             


            xtremecontrols added a comment -

            This is a business requirement, and we will need to look at moving away from Bitbucket if it is not resolved. Please let us know whether it is going to be.

            Stewart Ritchie added a comment -

            Appreciate that there are probably very good reasons to block this.

            But to throw in our two cents as well - we really need this. More and more of our development is done on M1 Macs, and deployments are on ARM64.

            Oleg Tarassov added a comment -

            If it's not possible now, can we at least get an estimate of when to expect it? At that point, to each their own, but at least those choices can be made appropriately.

            Thanks.

            Phil Gooch added a comment -

            We really need this - Bitbucket, please could you fix this? We'd love to keep using Bitbucket Pipelines, but we need support for Graviton!

            Mike Nacey added a comment - GitHub Actions works as well.

            Andrey added a comment - - edited

            Just found AWS CodeBuild has no issues building multi-architecture images so we'll be moving that part of the (or whole) pipeline to AWS.


            Mike Nacey added a comment - - edited

            Guys, 

            The time for this is definitely yesterday. EKS is now saving loads of cash on Graviton instances, and we are handcuffed?

            MacBooks are all coming M1 from now on.

             

             


            Andrey added a comment - - edited

            Created in 2017 and still gathering interest


            ahmed.sobhy added a comment -

            It is a very important feature to enable non-amd64 Docker containers to run. I hope this gets supported soon.

            Nick de Palézieux added a comment -

            I certainly agree with @Jon Link.

            That being said, I did get it working with the following pipeline. The trick was to use a different dind image that has experimental features enabled.

            pipelines:
              default:
                - step:
                    name: Build for arm64
                    size: 4x # 1x = 4GB, 2x = 8GB, 4x = 16GB, 8x = 32GB RAM
                    runs-on:
                      - self.hosted
                    services:
                      - docker
                    script:
                      - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
                      - docker buildx create --use
                      - docker buildx build --platform linux/arm64 ...
            
            definitions:
              services:
                docker: # can only be used with a self-hosted runner
                  image: igoratencompass/docker-dind:19.03.0 # a dind image with experimental features enabled in the daemon
            

            Depending on your self-hosted runner machine, building emulated images can be very slow, though. And since I'm forced to use my own machines anyway, I'd rather just use an ARM device directly.
            This turned out to be quite easy to do using the SSH pipe, with which I can run my commands on any device I want:

            pipelines:
              default:
                - step:
                    name: Build for ARM64
                    script:
                      - pipe: atlassian/ssh-run:0.4.0
                        variables:
                          SSH_USER: $ARM_BUILD_USER
                          SERVER: $ARM_BUILD_SERVER
                          PORT: $ARM_BUILD_SSH_PORT
                          MODE: "script"
                          COMMAND: "tools/ci-build.sh"
                          ENV_VARS: >-
                            BITBUCKET_BRANCH='${BITBUCKET_BRANCH}'
                            BITBUCKET_COMMIT='${BITBUCKET_COMMIT}'
            

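            The tools/ci-build.sh referenced above is not shown in the comment; purely as an illustration, a script executed natively on the ARM device via the SSH pipe might look like this (the registry and image name are placeholders):

            #!/usr/bin/env sh
            # Hypothetical tools/ci-build.sh, run on the ARM device - builds natively, no qemu emulation
            set -e
            docker build -t registry.example.com/myapp:${BITBUCKET_COMMIT}-arm64 .
            docker push registry.example.com/myapp:${BITBUCKET_COMMIT}-arm64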

            Jon Link added a comment -

            With all due respect, self-hosted runners are not the solution here. We're not paying to run this on a second machine. I get the issue with running it as privileged, but by saying "run it on your own machine", you're really saying that Pipelines is basically useless when it comes to Docker.

            Nick de Palézieux added a comment - - edited

            I'm also trying to get this to work and am having problems. I have configured a self-hosted runner and am trying to get this pipeline to work:

            definitions:
              services:
                docker: # can only be used with a self-hosted runner
                  image: docker:dind

            pipelines:
              default:
                - step:
                    name: Compile
                    runs-on:
                      - 'self.hosted'
                    clone:
                      skip-ssl-verify: true
                    size: 2x
                    services:
                      - docker
                    script:
                      - docker version
                      - docker info
                      - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes; docker buildx create --use

            The pipeline fails at the last step due to `--privileged`.

            Here is the log:

            Runner matching labels:
                - linux
                - self.hosted
            Runner name: dontthinkpad
            Runner labels: self.hosted, linux
            Runner version:
                current: 1.252
                latest: 1.252
            + umask 000
            
            ...
            
            Images used:
                build: atlassian/default-image@sha256:3a09dfec7e36fe99e3910714c5646be6302ccbca204d38539a07f0c2cb5902d4
                docker: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-docker-daemon@sha256:5f95befdbd73f8a85ec3b7fb5a88d52a651979aff97b1355efc18df8a9811aef
            
            + docker version
            Client: Docker Engine - Community
             Version:           19.03.15
             API version:       1.40
             Go version:        go1.13.15
             Git commit:        99e3ed8
             Built:             Sat Jan 30 03:11:43 2021
             OS/Arch:           linux/amd64
             Experimental:      false
            
            Server: Docker Engine - Community
             Engine:
              Version:          20.10.5
              API version:      1.41 (minimum version 1.12)
              Go version:       go1.13.15
              Git commit:       363e9a8
              Built:            Tue Mar  2 20:18:31 2021
              OS/Arch:          linux/amd64
              Experimental:     false
             containerd:
              Version:          v1.4.3
              GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
             runc:
              Version:          1.0.0-rc93
              GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
             docker-init:
              Version:          0.19.0
              GitCommit:        de40ad0
            
            + docker info
            Client:
             Debug Mode: false
            
            Server:
             Containers: 0
              Running: 0
              Paused: 0
              Stopped: 0
             Images: 0
             Server Version: 20.10.5
             Storage Driver: overlay2
              Backing Filesystem: extfs
              Supports d_type: true
              Native Overlay Diff: true
             Logging Driver: json-file
             Cgroup Driver: cgroupfs
             Plugins:
              Volume: local
              Network: bridge host ipvlan macvlan null overlay
              Authorization: pipelines
              Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
             Swarm: inactive
             Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
             Default Runtime: runc
             Init Binary: docker-init
             containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
             runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
             init version: de40ad0
             Security Options:
              apparmor
              seccomp
               Profile: default
              userns
             Kernel Version: 5.8.0-59-generic
             Operating System: Alpine Linux v3.13 (containerized)
             OSType: linux
             Architecture: x86_64
             CPUs: 8
             Total Memory: 15.37GiB
             Name: fe0fc7a1fae5
             ID: YE3P:ABKU:L4T5:FFN3:YH6R:74DT:Q7EG:NEZV:SJV5:E7WM:76PB:YD2K
             Docker Root Dir: /var/lib/docker/165536.165536
             Debug Mode: false
             Registry: https://index.docker.io/v1/
             Labels:
             Experimental: false
             Insecure Registries:
              127.0.0.0/8
             Registry Mirrors:
              http://localhost:5000/
             Live Restore Enabled: false
             Product License: Community Engine
            
            WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
                     Access to the remote API is equivalent to root access on the host. Refer
                     to the 'Docker daemon attack surface' section in the documentation for
                     more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
            
            + docker run --rm --privileged multiarch/qemu-user-static --reset -p yes; docker buildx create --use
            docker: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed.
            See 'docker run --help'.
            Searching for test report files in directories named [test-reports, test-results, surefire-reports, failsafe-reports] down to a depth of 4
            Finished scanning for test reports. Found 0 test report files.
            Merged test suites, total number tests is 0, with 0 failures and 0 errors.
            

            I already tried running the Docker container that I'm running on my machine with `--privileged`, but that didn't help:

            docker container run -it --rm --privileged ... docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
            

            @Justin Thomas, how do I allow running `docker run --privileged` inside the runner pipeline?


            Justin Thomas added a comment -

            bc42126cf290 Can you please get in touch with Atlassian Support? They should be able to help you with the error.

            Carlos Augusto added a comment -

            Hi Justin Thomas,

            Sorry for the late message, but I need to run just one step as self-hosted for an ARM architecture build.

            The other steps would normally run in Bitbucket using your structure.

            However, when I put the docker:dind image in the docker service under "definitions", the other steps give an error when building, as this option only works on self-hosted runners.

            I even tried to create a "docker-custom" service and include the image only in it, but I got a memory error when running; it did not respect the 3072 definition.

            I can't isolate this setting to only the "master" branch, as the homologation machine I have is ARM64 (which is not supported by Bitbucket for compilation), while the development machine is a normal AMD64.

            The only way I could think of was to have a separate bitbucket-pipelines.yml for each branch; however, when merging, the content gets overwritten even though I included it in .gitignore and set the merge method in .gitattributes. Everything is ignored.

            How can I solve this problem?

            I'm doing automated deploys with Rancher and Continuous Delivery (Fleet integrated), which reads a specific branch that I change at the end of the build in the pipeline.

            Below is my bitbucket-pipelines.yml:

            options:
              docker: true

            pipelines:
              branches:
                develop:
                  - step:
                      name: Build
                      image:
                        name: golang:stretch
                        username: $DOCKER_HUB_USERNAME
                        password: $DOCKER_HUB_PASSWORD
                        email: $DOCKER_HUB_EMAIL
                      services:
                        - docker
                      condition:
                        changesets:
                          includePaths:
                            - "app/**"
                      script:
                        - export APP_NAME=`echo ${BITBUCKET_REPO_SLUG} | sed 's/_/-/ig'`
                        - echo $APP_NAME
                        - export DEPLOY_BRANCH="deploy-dev"
                        - export DEPLOY_TYPE="DEV"
                        - export DEPLOY_VERSION="$BITBUCKET_BUILD_NUMBER"
                        - export DEPLOY_TAG="$DEPLOY_TYPE-$DEPLOY_VERSION"
                        - export IMAGE_NAME=$DOCKER_HUB_USERNAME/$APP_NAME:$DEPLOY_TAG
                        - echo "Deploying to environment"
                        - cd app/
                        - ls -l
                        - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
                        - docker build -t $IMAGE_NAME .
                        - docker push $IMAGE_NAME
                        - git config --global user.email email@dominio
                        - git config --global user.name "Desenvolvimento"
                        - echo ''$BITBUCKET_GIT_SSH_ORIGIN''
                        - git remote set-url origin ${BITBUCKET_GIT_SSH_ORIGIN}
                        - cd /opt/atlassian/pipelines/agent/build
                        - git clone --branch="$DEPLOY_BRANCH" --depth 5 ${BITBUCKET_GIT_SSH_ORIGIN} /$DEPLOY_BRANCH
                        - cd /$DEPLOY_BRANCH
                        - sed -i 's/image:\ dockerhubuser.*$/image:\ dockerhubuser\/'$APP_NAME':'$DEPLOY_TAG'/' $APP_NAME-deployment.yaml
                        - git add --all
                        - git commit -m 'Deploy '$APP_NAME' '$DEPLOY_TAG''
                        - git push --set-upstream origin $DEPLOY_BRANCH
                master:
                  - step:
                      name: Build
                      image:
                        name: guglio/dind-buildx:latest
                        username: $DOCKER_HUB_USERNAME
                        password: $DOCKER_HUB_PASSWORD
                        email: $DOCKER_HUB_EMAIL
                      runs-on: self.hosted
                      services:
                        - docker
                      condition:
                        changesets:
                          includePaths:
                            - "app/**"
                      script:
                        - export APP_NAME=`echo ${BITBUCKET_REPO_SLUG} | sed 's/_/-/ig'`
                        - echo $APP_NAME
                        - export DEPLOY_BRANCH="deploy-dev"
                        - export DEPLOY_TYPE="DEV"
                        - export DEPLOY_VERSION="$BITBUCKET_BUILD_NUMBER"
                        - export DEPLOY_TAG="$DEPLOY_TYPE-$DEPLOY_VERSION"
                        - export IMAGE_NAME=$DOCKER_HUB_USERNAME/$APP_NAME:$DEPLOY_TAG
                        - echo "Deploying to environment"
                        - cd app/
                        - ls -l
                        - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
                        - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes; docker buildx create --use --name $APP_NAME
                        - docker buildx build -t "$IMAGE_NAME" --platform linux/amd64,linux/arm64 --push .
                        - docker buildx imagetools inspect "$IMAGE_NAME"
                        - git config --global user.email email@dominio
                        - git config --global user.name "Desenvolvimento"
                        - echo ''$BITBUCKET_GIT_SSH_ORIGIN''
                        - git remote set-url origin ${BITBUCKET_GIT_SSH_ORIGIN}
                        - cd /opt/atlassian/pipelines/agent/build
                        - git clone --branch="$DEPLOY_BRANCH" --depth 5 ${BITBUCKET_GIT_SSH_ORIGIN} /$DEPLOY_BRANCH
                        - cd /$DEPLOY_BRANCH
                        - sed -i 's/image:\ dockerhubuser.*$/image:\ dockerhubuser\/'$APP_NAME':'$DEPLOY_TAG'/' $APP_NAME-deployment.yaml
                        - git add --all
                        - git commit -m 'Deploy '$APP_NAME' '$DEPLOY_TAG''
                        - git push --set-upstream origin $DEPLOY_BRANCH

            definitions:
              services:
                docker:
                  image: docker:dind
                  memory: 3072

             

            I appreciate any help.

            Best regards,

            Carlos


            Justin Thomas added a comment -

            29434169bd5d The image on the step is used to start a container for executing your scripts, while the docker service container is used to start the Docker daemon against which the docker commands are executed. Hope this helps.

            e698327a1ed7 Unfortunately, we currently only support building multi-arch images using self-hosted runners.
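            Building on that explanation, one way to keep the cloud steps on the default docker service while giving only the self-hosted step a custom dind daemon is to define a second, separately named docker service, as in the sketch below (step names and the $IMAGE_NAME variable are illustrative; the docker-custom pattern and the guglio/dind-buildx step image follow snippets elsewhere in this thread):

            pipelines:
              branches:
                develop:
                  - step:
                      name: Build (cloud, amd64)
                      services:
                        - docker           # default Pipelines docker service
                      script:
                        - docker build -t $IMAGE_NAME .
                master:
                  - step:
                      name: Build (self-hosted, multi-arch)
                      image: guglio/dind-buildx:latest   # step image that ships the buildx CLI
                      runs-on:
                        - self.hosted
                      services:
                        - docker-custom    # custom dind image, only allowed on self-hosted runners
                      script:
                        - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
                        - docker buildx create --use
                        - docker buildx build --platform linux/amd64,linux/arm64 -t "$IMAGE_NAME" --push .

            definitions:
              services:
                docker-custom:
                  type: docker
                  image: docker:dind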

            Florian added a comment -

            Hi Justin,

            I got the build process running. Thanks for the advice.
            What I don't understand is that there are two definitions of image: one in the step and one in the service definition. The one in the service is not used, as far as I tested it; it only worked for me when I added the image definition within the step.

            ...
            - step:
                runs-on: self.hosted
                image: docker:dind
                name: create Docker Image
                script:
                  ...

            definitions:
              services:
                docker: # default docker service
                  memory: 512
                docker-custom:
                  type: docker
                  image: docker:dind

             

            @Monica, from my understanding you need the "runs-on:" definition pointing to one of your runners within a step in order to use a custom docker service, and then additionally you need the definition of the custom docker service within the services.

            Monica Gordillo added a comment -

            hmmm... I'm getting this:

            The 'services' section in your bitbucket-pipelines.yml file contains a custom docker service. Remove 'image', 'variables' and 'environment' from the docker service definition or use a self-hosted runner for the step.

            Monica Gordillo added a comment - Is there any way around this without the runner?

            Justin Thomas added a comment -

            Hi 29434169bd5d,

            An example bitbucket-pipelines.yml

            pipelines:
              default:
                - step:
                    name: Macbook Runner 
                    image: guglio/dind-buildx:latest
                    runs-on:
                      - macbook
                      - self.hosted
                    services:
                      - docker
                    script:
                      - echo "Executing on self-hosted runner" 
                      - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes; docker buildx create --use
                      - docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 .
            definitions:
              services:
                docker: # can only be used with a self-hosted runner
                  image: docker:dind
            

            Hope this helps, please let me know if it doesn't work.

            Cheers,

            Justin Thomas 


            Florian added a comment - - edited

            @Justin, I tried to use a self.hosted runner to get buildx to work (my plan was to use the docker:dind image and install the latest buildx binary), but it always fails due to an authorization restriction. The plugin that denies the call to docker with the privileged flag set is 'pipelines', and it is enabled within the runner container. Can you please show me how to use a self.hosted runner with buildx?
            I always get this error: `docker: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed.`

            Thanks

            Hi everyone,

            This is Justin from the Bitbucket team. Thank you for your feedback on building multi-architecture images.

            We do understand the importance of this feature to you, but due to the security implications of allowing privileged docker commands, we won't be adding support for building multi-architecture images in the short term.

            The good news is that we just released self-hosted runners, which allows you to run privileged commands. You can use self-hosted runners to build multi-architecture images.

            Thank you again for your feedback.

            Best,
            Justin Thomas


            Roger Rodrigo added a comment - +1

            +1

            rud31mp added a comment -

            +1


            Connor Anderson added a comment - Switch to AWS CodeBuild; it has way better support for this kind of thing.

            Pooja Chawla added a comment - +1

            krishna maurya added a comment - +1

            Abhinav Bussa added a comment - +1

            Gautam Mukoo added a comment - +1

            +1

            Simone Vollaro added a comment - +1

            Joe added a comment -

            +1


            Michael Abbott added a comment - This would be incredibly useful; as it is, we have no option other than to find another provider for our pipeline.

            Benjamin Grauer added a comment - Raspberry Pi?

            Edgar Vonk added a comment -

            Might not be a solution for everyone, but we have now switched to using https://github.com/GoogleContainerTools/jib to build our cross-platform Docker images. This way you avoid the dependency on the actual Docker daemon to build your images. PS: you do still need the Docker service in your pipeline step.

            Still, Atlassian should just fix this issue, I think.
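            For anyone wondering what that looks like in practice, here is a rough sketch of a Pipelines step driving jib for a Gradle project (the image tag is a placeholder, registry credentials are omitted, and the jib.from.platforms property name is quoted from memory - check the jib documentation for the exact multi-platform configuration your jib version supports):

            - step:
                name: Build multi-arch image with jib
                image: eclipse-temurin:17-jdk
                services:
                  - docker   # per the comment above, the docker service is still needed in the step
                script:
                  # jib builds and pushes the image directly from the JVM build tool
                  - ./gradlew jib -Djib.to.image=registry.example.com/myapp:latest -Djib.from.platforms=linux/amd64,linux/arm64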

            danielv added a comment -

            Moved to hosted Jenkins.


            sjkummer added a comment -

            This issue made us move to GitLab (hey, it's free to use your own runners there).

            Sorry, Atlassian, for advertising a competing product here - but you forced me to log in just to stop watching this issue (there is no direct unsubscribe button in your notification mails 😬).

            Edgar Vonk added a comment -

            I think before Atlassian can enable buildx (which is what we need) they will need to upgrade their Bitbucket Cloud Pipelines Docker runtime. I can't believe they are still on version 19:

            Server Version: 19.03.15
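            As a side note on that runtime: in the 19.03 release, buildx ships as an experimental CLI plugin, so even where the plugin is installed it has to be switched on explicitly. A quick way to check what a given environment supports (assuming the buildx plugin binary is present at all):

            # buildx is experimental in the Docker 19.03 CLI and must be enabled explicitly
            export DOCKER_CLI_EXPERIMENTAL=enabled
            docker buildx version   # errors out if the buildx plugin is missing or disabled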

            Edgar Vonk added a comment - +1, we really need this feature because without it we cannot use Bitbucket Cloud to build the Docker images we run on a Raspberry Pi, i.e. we cannot use Bitbucket Cloud in our CI/CD pipeline.

            Joel Schofield added a comment - +1

            +1

            Euhn Lee added a comment - Bumped, please enable buildx

            Terry Bell added a comment - I will add my voice to @Alexander_Sadovskyi ↑: please enable buildx

            Alexander Sadovskyi added a comment - - edited

            Can you just enable buildx? That will resolve all problems...

            A simple fix for a major problem.

            Dominik Fretz added a comment - This could be resolved by having the binfmt configuration for qemu-arm-static registered on the host system. The qemu-arm-static binary can then be supplied by the image itself.

            The problem with the register scripts mentioned above (mount failed) is due to the fact that the script tries to mount something in /proc in order to register the binfmt settings. Essentially, that step tells the kernel which interpreter to use when it encounters an ELF binary for ARM.
            https://ownyourbits.com/2018/06/13/transparently-running-binaries-from-any-architecture-in-linux-with-qemu-and-binfmt_misc/
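
            To illustrate the registration step described above, this is roughly what it looks like on a host where privileged containers are allowed (a sketch only; it will not work inside the current Pipelines runtime, for exactly the reason this ticket exists):

            # Register QEMU interpreters for foreign architectures via binfmt_misc (needs --privileged).
            # "-p yes" sets the "fix binary" flag so the interpreter keeps working inside containers.
            docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

            # The registered interpreters are then visible to the kernel
            ls /proc/sys/fs/binfmt_misc/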

            mkrastev added a comment - +1

            sjkummer added a comment - +1

            aaron added a comment - +1

            valkyrie00 added a comment - +1, following too

            Nathanaël Lécaudé added a comment - We're looking for this also!

            Johnson Shao added a comment - - edited

            Any updates on this? As a commercial user, I'm planning to move to GitHub.

            Ralph Lawrence added a comment - A real shame this isn't supported... We may end up having to move all our pipelines away for this reason too, due to our migration to ARM-based stacks.

            fiLLLip added a comment - Solved this by moving build pipelines to AWS CodeBuild. Very flexible! At first glance I think we will move all pipelines there. For now I see no reason not to.

            Ian Rogers added a comment - We have started using GitHub Actions as they have ARM support. It's a bit of a pain as we have been pushing our relevant repositories to two remotes. We will likely move all of our repositories over there in the long run so everything is in one place.

            Benjamin Hess added a comment - I can't believe this is still in status "gathering interest". Enough people develop for ARM, and the architecture is clearly the future! Even Microsoft has ported Windows 10 to ARM, and Apple will switch to it completely. So how is ARM support still not important to Atlassian???

            fiLLLip added a comment -

            Cannot use Bitbucket Pipelines as-is for building Docker containers for ARM. Tried

            export DOCKER_CLI_EXPERIMENTAL=enabled

            but it does not work...

            vincent-nelis added a comment - Can someone give a date for when this feature will be available? ARM is basically king for embedded systems; I can't believe there is no support for it...

            Thomas Jetzinger added a comment - Isn't it possible to just make the multi-arch build in another docker container and use this new docker container in the pipeline?

            Connor Anderson added a comment - We as well have migrated to another cloud build system because Bitbucket doesn't support this. Please allow enabling experimental features in order to support these architectures.

            florian_irasun added a comment - - edited

            Hi Bitbucket team,

            We cannot move all our embedded code to Bitbucket yet because this feature is missing from Pipelines. Is there any update on when this will be possible? It would be great to have all the code in one place, which is not possible at the moment with this feature missing from Bitbucket. Does anyone have a working workaround?

            Thanks,

            Florian

            Franklin Dattein added a comment - - edited

            Just a heads up, cross-compilation with Docker got a lot simpler with `docker buildx`:

            - export DOCKER_CLI_EXPERIMENTAL=enabled 
            - docker buildx build --platform linux/arm/v7 -t my-image:my-tag --push .

            Unfortunately, Docker experimental features cannot be enabled in Bitbucket, so the commands above don't work.

            Please consider enabling experimental features in Bitbucket's Docker engine.

            Thanks,
            Franklin
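
            For reference, the full buildx flow on a machine where privileged containers and the buildx plugin are available typically looks something like this (a sketch; the builder name, image name, and platform list are placeholders):

            # One-time setup: register QEMU emulators and create a buildx builder
            docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
            docker buildx create --name multiarch-builder --use

            # Build and push a single tag covering several platforms
            docker buildx build \
              --platform linux/amd64,linux/arm/v7,linux/arm64 \
              -t my-image:my-tag \
              --push .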

            Benjamin Hess added a comment - Same here, we need to build for Raspberry Pi. Would be nice if we could build ARM images via the pipeline. The native way, using an ARM-hosted instance on EC2, seems preferable...

            j0nl1 added a comment -

            Hello,

            We have the same situation; currently we are doing our builds in our own CI, but we would prefer to use Bitbucket Pipelines.

            Thanks

            Musarraf Hossain Sekh added a comment - Hello Bitbucket support team, this is a very obvious need for our organization. There must be a solution.

            N8 added a comment - We need this too, to deploy software to Raspberry Pis. Why has this thread fallen silent?

            Lukas Solanka added a comment - Any update on this? I believe this is also relevant for multi-architecture builds in pipelines, which is not currently possible.

            Mike Howells added a comment - - edited

            99b7cb543ca8 said:

            Bumping this thread as ARM64 is a present need on Bitbucket Pipelines. Please give an update on the availability of QEMU or native ARM64 instances.

            (Original comment missed during migration to jira.atlassian.com due to timing of export.)

              Votes: 469
              Watchers: 269