
      I have two environments I deploy to, and the pipelines for these deployments are multi-step, with identical steps in each. I use variables to pass details from Bitbucket into the scripts that run, and I use these variables across the steps. Ideally, I'd like my scripts to be able to use variables in their steps, where the values of those variables vary according to the "deployment".

      I can't find a way of using deployments and deployment variables to run the same pipeline against different environments. It doesn't appear possible to have multiple steps in the same pipeline with the same deployment value (test/staging/production).

      Deployment variables only seem to be useful for different steps in the same pipeline (i.e. step 1 deploys to test, step 2 deploys to staging, step 3 deploys to production). It would be useful if multiple steps could use the same deployment variables, or if a pipeline could run against a particular set of deployment variables (i.e. test/staging/prod).
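      A minimal sketch of the duplication this forces today (the environment names `test` and `production`, the script `deploy.sh`, and the variable `DEPLOY_TARGET` are hypothetical): the shared step body can be reused via a YAML anchor, but because a deployment can only be attached to a single step, each environment still needs its own step or pipeline.

      ```yaml
      definitions:
        - step: &deploy
            name: Deploy
            script:
              # DEPLOY_TARGET is expected to come from the deployment
              # environment's variables, with a different value per environment
              - ./deploy.sh "$DEPLOY_TARGET"

      pipelines:
        custom:
          deploy-test:
            - step:
                <<: *deploy          # reuse the shared step body
                deployment: test     # ...but the deployment must be set per step
          deploy-production:
            - step:
                <<: *deploy
                deployment: production
      ```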

            [BCLOUD-18261] Support multi-step deployments

            Luke Phillips added a comment - f4dbaae2a132 I don't know why this is closed. The issues are still around after 5+ years. https://community.atlassian.com/t5/Bitbucket-questions/Usage-of-Deployment-variables-on-multiple-steps/qaq-p/1103443?anon_like=2238885#U2948012

            Jędrzej Frankowski added a comment - d7fa1e6e13fa Nah, not at all. You're using the framework to build something completely new. I must say I'm pretty happy with the results. I wrote an article on this:

            https://medium.com/@jedfra6/how-i-finally-solved-cicd-in-bitbucket-after-2-years-of-trying-080e2159a798

            Since then I've also solved an issue with admin checks for specific jobs, which is great.

            Would I prefer to have all of that OOTB in Bitbucket? For sure. But I'm happy that I at least have tools at hand to help me move forward.

            Marcin Kielar added a comment - edited - badacfdb410f Dynamic Pipelines are indeed powerful, but building your own pipeline engine on top of Bitbucket Pipelines seems like using TypeScript to write JavaScript: it's all bells and whistles, until you realize you have to implement TypeScript yourself first.

            This is not a solution, and Atlassian should just do better at providing the basic functionality that all modern (and old) CI/CD platforms have provided for years.

            Jędrzej Frankowski added a comment - If it's of any help: if any of you have Bitbucket Premium, you should be able to solve those problems with Bitbucket Dynamic Pipelines. That's what I did.

            Shubham Pai added a comment - edited - So in my use case, stages were a good alternative, but then parallel steps within a stage aren't supported... bleh. One alternative after another after another for things to barely work. +1 to 64cb4dfcddfb's comment the other day. This is why I'm moving my pipelines to GitLab instead.

            david.resnick added a comment - I think what we all need to understand is that our friends at Atlassian have their hands full making changes of dubious value to their UIs. Why would they work on making their tools do a better job of meeting our actual requirements when they can just move UI elements around to make them harder to find?

            Adam Sambridge added a comment - +1. It's amazing how many basic features that are staples of CI/CD development are disallowed, limited, or outright impossible in Bitbucket Pipelines. Do Atlassian realise that their main competitors in this space, GitHub and GitLab, have been blowing Bitbucket Pipelines away since the middle of the last decade? This ticket, and the dozen or so other tickets asking for basic capabilities like variablising deployment variables, conditional steps, deployments with parallel steps, and more than one step with a deployment in a stage, really hinder engineers' ability to use Bitbucket Pipelines properly. These tickets have been open for literally years, with no updates and no comments from anyone at Atlassian, and it's very poor.

            Jędrzej Frankowski added a comment - Do "Dynamic Pipelines" solve any of this?

            Alexandra Boarna added a comment - "Stage" does not solve the issue for more complex pipelines, where you want multiple steps grouped under a stage but also want a manual trigger or conditions on the individual steps. It is still a big limitation and definitely unresolved.
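            For reference, a minimal sketch of the `stage` syntax being discussed (the stage name, step names, and scripts are hypothetical): a deployment and a manual trigger can be set on the stage as a whole, but not on the individual steps inside it, which is the limitation described above.

            ```yaml
            pipelines:
              branches:
                main:
                  - stage:
                      name: Deploy to staging
                      deployment: staging   # shared by all steps in the stage
                      trigger: manual       # applies to the whole stage...
                      steps:
                        - step:
                            name: Build
                            script:
                              - ./build.sh   # hypothetical build script
                        - step:
                            # ...individual steps inside a stage cannot carry
                            # their own trigger or condition settings
                            name: Deploy
                            script:
                              - ./deploy.sh  # hypothetical deploy script
            ```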

            Victoria Chen added a comment - I don't believe the `stage` solution works for my use case, where I would also like to share a deployment across multiple steps. We are using our Bitbucket pipeline to plan and apply Terraform changes across multiple environments. In this case, we want to group our steps into a "terraform plan" for all regions, then a "terraform apply" for all regions. We would like to share a deployment across two parallel steps that cannot be grouped by `stage`.

            This would allow us to review the Terraform plan for all regions before applying it.

            See the example below.

            ---
            definitions:
              - step: &plan
                  script:
                    - make save_plan CELL_ID=$CELL_ID UNIT=pipeline AWS_ACCOUNT=$AWS_ACCOUNT
              - step: &apply
                  script:
                    - make apply_plan CELL_ID=$CELL_ID UNIT=pipeline AWS_ACCOUNT=$AWS_ACCOUNT

            pipelines:
              default:
                - parallel:
                    - step:
                        <<: *plan
                        name: "Plan dev region-1"
                        deployment: dev-region-1
                    - step:
                        <<: *plan
                        name: "Plan dev region-2"
                        deployment: dev-region-2
                - step:
                    name: "Apply all the things in Internal Dev"
                    trigger: manual
                    script:
                      - "true"
                - parallel:
                    - step:
                        <<: *apply
                        name: "Apply dev region-1"
                        deployment: dev-region-1
                    - step:
                        <<: *apply
                        name: "Apply dev region-2"
                        deployment: dev-region-2

              57465700c4e1 Edmund Munday
              f4dbaae2a132 Gary Breavington
              Votes: 1067
              Watchers: 502
