Suggestion
Resolution: Fixed
Our product teams collect and evaluate feedback from a number of different sources. To learn more about how we use customer feedback in the planning process, check out our new feature policy.
I have 2 environments I deploy to, and the Pipelines for these deployments are multi-step but the steps are the same. I use variables to pass in details from BitBucket into the scripts that run, and use these variables across the steps. Ideally, I'd like my scripts to be able to use variables in their steps, but the values of those variables vary according to the "deployment".
I can't find a way of using Deployments and Deployment variables to use the same pipeline for the different environments. It doesn't appear possible to have multiple steps in the same pipeline with the same deployment value (test/staging/production).
Deployment variables only seem to be useful for different steps in the same pipeline (i.e. step 1 deploys to test, step 2 deploys to staging, step 3 deploys to prod). It would be useful if multiple steps could use the same deployment variables, or if a pipeline could run against a particular set of deployment variables (i.e. test/staging/prod).
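In other words, the configuration one would naturally write, reusing a single deployment across sequential steps, is rejected by Pipelines. A minimal hypothetical sketch (step names and scripts are illustrative, not from the reporter's config):

```yaml
pipelines:
  branches:
    main:
      - step:
          name: "Deploy app to test"
          deployment: test        # first use of the 'test' environment
          script:
            - ./deploy.sh app
      - step:
          name: "Deploy workers to test"
          deployment: test        # rejected: same environment used twice in one pipeline
          script:
            - ./deploy.sh workers
```

Validation fails here because an environment may appear on only one step per pipeline, which is exactly the limitation this ticket tracks.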
Attachments (screenshots by Edmund Munday, 2023-03-09): image-2023-03-09-13-59-07-821.png (48 kB), image-2023-03-09-13-58-51-174.png (48 kB), image-2023-03-09-13-58-24-903.png (21 kB), image-2023-03-09-13-58-10-409.png (97 kB), image-2023-03-09-13-57-34-286.png (168 kB), image-2023-03-09-13-57-14-674.png (169 kB)
[BCLOUD-18261] Support multi-step deployments
d7fa1e6e13fa naaah, not at all. You're using the framework to build something completely new. I must say I'm pretty happy with the results. I wrote an article on this.
Since that time I also solved an issue with admin checks for specific jobs, which is great.
Would I prefer to have all that out of the box in Bitbucket? For sure. But I'm happy that at least I have tools at hand to help me move forward.
badacfdb410f Dynamic Pipelines are indeed powerful, but building your own pipeline engine on top of Bitbucket Pipelines seems like using TypeScript to write JavaScript. It's all bells and whistles, until you realize you have to implement TypeScript yourself first.
This is not a solution, and Atlassian should just do better at providing the basic functionality that all modern (and old) CI/CD platforms have provided for years.
If it's of any help - if any of you have Bitbucket Premium, you should be able to solve those problems with Bitbucket Dynamic Pipelines. That's what I did.
So in my use case stages were a good alternative, but then parallel steps within a stage aren't supported... bleh. One alternative after another after another just for things to barely work. +1 to 64cb4dfcddfb's comment the other day. This is why I'm moving my pipelines to GitLab instead.
I think what we all need to understand is that our friends at Atlassian have their hands full making changes of dubious value to their UIs. Why would they work on making their tools do a better job of meeting our actual requirements when they can just move UI elements around to make them harder to find?
+1
Amazing how many basic features that are staples of CI/CD development are just not allowed, or a limitation, or totally impossible in Bitbucket Pipelines. Do Atlassian realise that their main competitors in this space, i.e. GitHub/GitLab, are totally blowing Bitbucket Pipelines out of the water? This ticket, and the dozen or so other tickets asking for basic capabilities (variablising deployment variables, conditional steps, deployments with parallel steps, more than one step with a deployment in a stage, and other very simple things), really hinders engineers' ability to use Bitbucket Pipelines properly. These tickets have been open for literally years, with no updates and no comments from anyone at Atlassian, and it's very poor.
"Stage" does not solve the issue when looking at more complex pipelines, where you would want multiple steps to be grouped under a stage but you would want to be able to have manual trigger or conditions on the steps as well. It is still a big limitation and definitely unresolved.
I don't believe the `stage` solution works for my use case where I would also like to share deployment across multiple steps. We are using our BB pipeline to plan and apply terraform changes across multiple environment. In this case, we want to group our steps by "terraform plan" for all regions then group steps by "terraform apply" for all regions. We would like to share deployment across two different parallel steps that cannot be grouped by 'stage'.
This will allow us to review the terraform plan for all regions before applying it.
See example below.
```yaml
---
definitions:
  - step: &plan
      script:
        - make save_plan CELL_ID=$CELL_ID UNIT=pipeline AWS_ACCOUNT=$AWS_ACCOUNT
  - step: &apply
      script:
        - make apply_plan CELL_ID=$CELL_ID UNIT=pipeline AWS_ACCOUNT=$AWS_ACCOUNT

pipelines:
  default:
    - parallel:
        - step:
            <<: *plan
            name: "Plan dev region-1"
            deployment: dev-region-1
        - step:
            <<: *plan
            name: "Plan dev region-2"
            deployment: dev-region-2
    - step:
        name: "Apply all the things in Internal Dev"
        trigger: manual
        script:
          - "true"
    - parallel:
        - step:
            <<: *apply
            name: "Apply dev region-1"
            deployment: dev-region-1
        - step:
            <<: *apply
            name: "Apply dev region-2"
            deployment: dev-region-2
```
I cannot secure my Terraform workflows properly without that 😡
```yaml
image: hashicorp/terraform:latest
pipelines:
  branches:
    master:
      - step:
          name: plan platform-prod module
          deployment: prod
          script:
            - terraform -chdir="./modules/platform-prod" init
            - terraform -chdir="./modules/platform-prod" plan
      - step:
          name: apply platform-prod module
          deployment: prod
          script:
            - terraform -chdir="./modules/platform-prod" init
            - terraform -chdir="./modules/platform-prod" apply -auto-approve
          trigger: manual
```
I think what we were looking for is documentation on how to use the fix for this issue. From what I can see, if you use a stage, you end up with something like the below. The deployment variables are shared by step 1 and step 2 inside the stage.
```yaml
branches:
  some-branch:
    - stage:
        name: Deploy to Dev
        deployment: DEV
        steps:
          - step: *step1
          - step: *step2
    - stage:
        name: Deploy to Staging
        deployment: STG
        steps:
          - step: *step1
          - step: *step2
    - step: *some-other-steps
...
```
@gedeminas You've been able to share deployment variables across steps for a while now; it is called a "stage". See https://support.atlassian.com/bitbucket-cloud/docs/stage-options/
The above comment by Edmund highlights the limitations of it and what still needs to be implemented.
Some people are unhappy about this fact, but I've been using stages for ages now and it works fine for my use-cases. In this screenshot all the items below "Deploy To Production" are inside the stage and they all share the deployment variables assigned to "production". By using Terraform we can deploy to a single or multiple places at once.
EDIT: Hmm, I can't seem to post screenshots.
We have the following if you can't see it.
> Deploy To Production [Deploy] () Terraform Production () Build Application () Deploy Container () Send Deployment Marker () Purge Web Cache
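That listing might correspond to a stage along these lines (a reconstruction from the description above; the step scripts and structure are assumptions, not the commenter's actual config):

```yaml
pipelines:
  branches:
    main:
      - stage:
          name: Deploy To Production
          deployment: production   # variables shared by every step in the stage
          steps:
            - step:
                name: Terraform Production
                script:
                  - terraform apply -auto-approve
            - step:
                name: Build Application
                script:
                  - ./build.sh
            - step:
                name: Deploy Container
                script:
                  - ./deploy.sh
            - step:
                name: Send Deployment Marker
                script:
                  - ./marker.sh
            - step:
                name: Purge Web Cache
                script:
                  - ./purge-cache.sh
```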
I don't understand what the last comment means exactly.
Is the issue fixed? Can we already use the same deployment environment on different steps, as of late 2022?
Which of the proposed follow-up tickets relates to this problem?
Hi all - as the first iteration of this feature was shipped in late 2022, I'm going to be closing this ticket and referring people to please vote/comment on the specific sub-tickets for the enhancements they are wanting to see with this feature.
For context, we are currently working on adding manual triggers inside stages in a limited capacity. We plan to have this ready in the next few months.
Follow on tickets:
- Manual Triggers: https://jira.atlassian.com/browse/BCLOUD-22223
- Parallel Steps in a Stage: https://jira.atlassian.com/browse/BCLOUD-22214
- Conditional Step in Stage: https://jira.atlassian.com/browse/BCLOUD-22216
- Parallel Stages: https://jira.atlassian.com/browse/BCLOUD-22215
Any updates on this one? We've just moved over from GitLab due to their pricing hikes, and I'm pretty disappointed this is an issue in Bitbucket. We should have checked this properly before we moved (GitLab could do this).
Hopefully enough votes and comments on this might move it into someone's backlog!
Edit: Maybe this might help BCLOUD-20821
Bumping the thread. Trying to solve the same problem of using my deployment within multiple steps.
Same issue here. I want to be able to use the same deployment environment on different steps and different stages; please work on this.
I still get this error when I use the same deployment on more than one step:

> The deployment environment 'Production' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline. Please refer to our documentation for valid environments and their ordering.
Generally this works for me @mifa. They should probably close this issue as the base issue is done?
We have implemented
Deploy to Staging (stage)
- Terraform
- Build App
- Build & Deploy Container
- Purge Caches

Deploy to Production (stage)
- Terraform
- Build App
- Build & Deploy Container
- Purge Caches
which works fine and we can share deployment variables in each part of the stage.
There are a number of other issues to further enhance the implementation as listed in this post https://jira.atlassian.com/browse/BCLOUD-18261?focusedCommentId=3196834&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-3196834
It looks like they're not working on this.
If there were an award for the worst-supported product, Bitbucket might take first place.
If anybody wants the ability to set manual triggers for individual steps in a stage (avoiding the workaround solutions like Dawid above) here is the exact issue to look at and vote up:
Below you can see how we have used "other workarounds", to carry some environment variables over, while using manually triggered steps.
API deployment:
```yaml
definitions:
  steps:
    - step: &save-envs
        name: "Save env variables"
        script:
          - echo "export APP_REGION=$APP_REGION" > .envs
          - echo "export AWS_REGION=$AWS_REGION" >> .envs
          - echo "export ENV_NAME=$ENV_NAME" >> .envs
          - echo "export PROJECT=$PROJECT" >> .envs
        artifacts:
          - .envs
    - step: &deployment
        runs-on:
          - 'self.hosted'
          - 'linux.shell'
          - 'aws.deployer'
        script:
          - source .envs

pipelines:
  branches:
    rc-api:
      - step:
          <<: *save-envs
          deployment: "RC"
      - step:
          <<: *deployment
          name: "Deploy to RC env"
          script:
            - source .envs
            - /srv/api-deployer/deploy.sh
      - step:
          <<: *deployment
          name: "Switch to RC env"
          trigger: manual
          script:
            - source .envs
            - /srv/api-deployer/switch.sh
```
Frontend deployment:
```yaml
definitions:
  steps:
    - step: &save-envs
        name: "Save env variables"
        script:
          - echo "export APP_REGION=$APP_REGION" > .envs
          - echo "export AWS_REGION=$AWS_REGION" >> .envs
          - echo "export ENV_NAME=$ENV_NAME" >> .envs
          - echo "export PROJECT=$PROJECT" >> .envs
        artifacts:
          - .envs
    - step: &deployment
        runs-on:
          - 'self.hosted'
          - 'linux.shell'
          - 'aws.deployer'
        script:
          - source .envs

pipelines:
  branches:
    rc-frontend:
      - step:
          <<: *save-envs
          deployment: "RC"
      - step:
          <<: *deployment
          name: "Deploy to RC env"
          script:
            - source .envs
            - /srv/fe-deployer/deploy.sh
      - step:
          <<: *deployment
          name: "Switch to RC env"
          trigger: manual
          script:
            - source .envs
            - /srv/fe-deployer/switch.sh
```
Same problem here. It's already uncomfortable to have to use different stages just to be able to use the same deployment multiple times, but with a manual trigger in the pipeline it's not working anymore. Why is it so difficult to implement such a simple thing? Looks like we'll have to migrate to GitLab...
Just wanted to say this is one of the features I'm really waiting for. Not sure when it might come; it might be too late for us.
Stages are nice, but we need manual triggers too and they are not available in stages yet.
We build our backend (API) and frontend in separate pipelines, because they deploy to separate servers. Backend builds quickly, but frontend takes 15+ minutes of waiting. We don't want to deploy newer API version, before the new frontend is ready.
To mitigate this, we prepare a folder with the new API version and another folder with the new frontend version. Once both API and frontend are ready, we re-create (switch) symlinks, so that they point to the newest API and frontend versions. This last step needs to be triggered manually, because devs who deploy API and frontend are not the same people and API guys need to synchronize with the frontend guys.
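The symlink re-creation described above is commonly done with an atomic rename, so the switch is instantaneous and readers never see a half-updated link. A minimal sketch under that assumption (directory names are illustrative, not the commenter's actual layout; `mv -T` is GNU coreutils):

```shell
#!/bin/sh
set -eu

root=$(mktemp -d)                               # stand-in for the web root
mkdir -p "$root/releases/api-v1" "$root/releases/api-v2"
ln -s "$root/releases/api-v1" "$root/current"   # currently serving v1

# switch.sh equivalent: repoint "current" at the new release atomically.
# Build the new link under a temp name, then rename over the old one;
# rename(2) replaces the existing symlink in a single step.
ln -sfn "$root/releases/api-v2" "$root/current.tmp"
mv -T "$root/current.tmp" "$root/current"

readlink "$root/current"                        # now ends in .../releases/api-v2
```

The same two-command switch would be run once both the API and frontend release directories are ready.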
API deployment is fast:
```yaml
rc-api:
  - step:
      name: "Deploy to RC env"
      deployment: "RC"
      runs-on:
        - 'self.hosted'
        - 'linux.shell'
        - 'aws.deployer'
      script:
        - echo "Deploying to RC env"
        - /srv/api-deployer/deploy.sh
  - step:
      name: "Switch to RC env"
      trigger: manual
      runs-on:
        - 'self.hosted'
        - 'linux.shell'
        - 'aws.deployer'
      script:
        - echo "Switching to RC env"
        - /srv/api-deployer/switch.sh
```
Frontend deployment is slow:
```yaml
rc-frontend:
  - step:
      name: "Deploy to RC env"
      deployment: "RC"
      runs-on:
        - 'self.hosted'
        - 'linux.shell'
        - 'aws.deployer'
      script:
        - echo "Deploying to RC env"
        - /srv/fe-deployer/deploy.sh
  - step:
      name: "Switch to RC env"
      trigger: manual
      runs-on:
        - 'self.hosted'
        - 'linux.shell'
        - 'aws.deployer'
      script:
        - echo "Switching to RC env"
        - /srv/fe-deployer/switch.sh
```
We would like to be able to add deployment: "RC" to the switch step too as we need to carry some environment variables over. Right now we need to pass them from deploy.sh to switch.sh by using other workarounds, which is uncomfortable.
9860fbca7b8b - this should have been the way Stages worked since it was shipped late last year.
Let me know if you're still having issues - feel free to ping me directly on emunday@atlassian.com
Hi @Edmund, when was this change made? The last time I tested this use case (which is the reason I filed against this ticket), about a month ago, it wasn't working yet. I'll double-check on our end and get back to you. Thanks for the example and the update.
Hi 9860fbca7b8b - forgive me if I'm not understanding your message correctly, but the current iteration of Pipeline Stages should be able to handle your use-case perfectly fine.
The screenshots below demonstrate two Pipelines steps running within a single Stage. You'll notice in the logs that the variables are shared between both of the steps, as they are both referencing the same deployment in the .yaml file.
```yaml
pipelines:
  default:
    - stage:
        deployment: production
        steps:
          - step:
              script:
                - echo "Step 1"
                - env
          - step:
              script:
                - echo "Step 2"
                - env
```
(Screenshots of the Step 1 and Step 2 logs omitted.)
This has been going around for over 2 years now. I keep getting updates about requirements, as far as I understand the title of the ticket describes the requirements clearly enough. The whole point of deployments is to track two things: environment state and environment variables. We want to be able to use the exact same environment variables on multiple steps. Given that, can someone please clarify what the complexity of this is and why so much back and forth with multiple customers?
Thanks ef62eeda4196 - so if I'm understanding correctly, you have a build step that is effectively dynamic and dependent on the environment that the artifact is going to be deployed to? For example, you might be baking environment-specific configuration into the actual artifact for situations where that artifact is being deployed to an environment where remote config isn't possible.
So ideally in this case you'd want to:
A: Be able to run multiple stages in parallel, one for each of the different environments, allowing you to parallelise your build/deploy stage.
B: Be able to execute that set of manual triggers you have in your Build/Deploy steps within the Stages.
70690f83c0ef - can you please provide more context on which of the 3 major outstanding requirements are required for your use-case? Please vote/comment on those 3 tickets too as this main thread will most likely be closed off soon so we can track the individual work-streams with more granularity.
Happy 4 year anniversary! They've logged 45 minutes, so don't lose hope!
Hi @Edmund Munday,
This is the problem we have: I need a build step and a deploy step for the same deployment. To get around this limitation we group the two steps into one.
This is a problem because we can't reuse the build for other deploys.
```yaml
tags:
  "*":
    - step: *test
    - step: *sonar
    - step:
        <<: *build
        trigger: manual
        deployment: production  # <<< here
    - step:
        <<: *deploy
        name: "Deploy to PRD"
        trigger: manual
        # same deployment in multiple steps
        deployment: production
    - step: *Notify
    - step: *FlushCDN
```
b4244317ab32 & ef62eeda4196 - can you please provide more context on which of the 3 major outstanding requirements are required for your use-case? Please vote/comment on those 3 tickets too as this main thread will most likely be closed off soon so we can track the individual work-streams with more granularity.
Same problem here at the company. Is the solution so complicated that it takes this long?
7703d6f79705, 68450903da24, c6d8ebf6ba94, 20646e0f507f - Please ensure you vote & follow on the relevant BCLOUD tickets for the enhancements your teams are looking for.
If you could comment on those tickets with example use-cases too, that would be extremely helpful. In particular, examples of use-cases for parallel steps within a stage would be greatly appreciated.
https://jira.atlassian.com/browse/BCLOUD-22214
https://jira.atlassian.com/browse/BCLOUD-22223
https://jira.atlassian.com/browse/BCLOUD-22216
Stage should support internal manual trigger for individual steps and parallel steps.
```yaml
pipelines:
  custom:
    deploy-to-npd-env:
      - variables:
          - name: Environment
            default: Integration
            allowed-values:
              - Develop
              - Integration
              - Beta
              - Staging
      - step:
          <<: *detect-changes-from-latest-commit-deployed
      - stage:
          name: Terraform Plan&Apply
          steps:
            - step:
                <<: *create-dotenv-file
            - step:
                <<: *terraform-plan-per-infra-modules
            - step:
                <<: *terraform-apply-per-infra-modules
                trigger: manual
```
Thanks for the work! It's a serious impairment for us that stages don't support parallel steps, to the point that we won't be using stages, because parallelization is more important. Can we please get the manual-trigger and parallel-steps features added to stages?
It's been said before but the fact that steps within stages won't accept triggers and conditions is a deal-breaker for our workflow too.
Our use case is as follows:
```yaml
- stage: test
  deployment: test
  - step: terraform plan
  - step: terraform apply
    trigger: manual
- stage: staging
  deployment: staging
  - step: terraform plan
  - step: terraform apply
    trigger: manual
  trigger: manual
- stage: production
  deployment: production
  - step: terraform plan
  - step: terraform apply
    trigger: manual
  trigger: manual
```
IMO, it does look reasonable.
68450903da24 - thanks for the feedback on the documentation. I've requested an update be made to the docs to incorporate your feedback and make it more clear how multiple steps within a Stage behave. As you pointed out, the new behaviour which shares an environment lock across multiple sequential steps within a Stage is the largest foundational change that's been done as part of this feature so far.
We are currently working on adding additional capabilities to flesh out Pipeline Stages and hope to have some updates to share soon.
We did the same as Dardo Sordi, and it seems GitLab CI is vastly superior.
Bitbucket Pipelines unfortunately seems to be playing catch up to every other CI/CD tool in the market.
Thanks for getting this one into beta, guys! I think it would really help to mention in https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/ that, in addition to allowing steps to be grouped into a deployment, it critically allows steps to share a common deployment environment.
It's a matter of wording: the docs kind of say that, and if you know how Deployments work you'll assume the steps share the deployment environment, but they don't really call attention to what was achieved here and what problems it addresses.
It sounds more like a cosmetic grouping, esp. since the example doesn't show a deployment in use (which is understandable, you're trying to show the base feature). But then it should be mentioned more powerfully in the paragraph above.
It is possible now with the 'stages' beta feature.
But the stages-based solution has a drawback: it is not possible to have parallel steps inside a stage.
So this looks less like a complete solution and more like a trade-off to me, as the backend and frontend of the same app can't be built simultaneously while using the same deployment variable set. We have 'choose what fits you better' instead.
In the sample below staging deployment will be shared across all steps.
```yaml
pipelines:
  custom:
    deploy-to-staging:
      - stage:
          name: My Stage
          deployment: staging
          steps:
            - step:
                <<: *step1
            - step:
                <<: *step2
            - step:
                <<: *step3
            - step:
                <<: *step4
```
Another vote for this issue - this prevents us from using concurrency control where there are two steps pertaining to one environment.
I joined such a session before, and one of the talking points was multi-step deployments. I provided a lot of feedback on it too. It took multiple years to implement, and even then only by restricting other features. We are unable to use stages.
f4dbaae2a132 I don't know why this is closed. The issues are still around after 5+ years.
https://community.atlassian.com/t5/Bitbucket-questions/Usage-of-Deployment-variables-on-multiple-steps/qaq-p/1103443?anon_like=2238885#U2948012