Suggestion
Resolution: Unresolved
It would be useful to have an init: block where environment variables can be set from within bitbucket-pipelines.yml.
An example use case would be setting up globals for the repo. This way, I could declare the project name, Docker image name, and so on within the file rather than through the Bitbucket UI, which makes it easier to set up new builds and new projects. This is particularly useful in a microservice architecture, where services are stood up rapidly and change just as rapidly.
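For illustration, something like this at the top of the file would cover it (the init keyword and its shape are purely hypothetical; this is a sketch of the idea, not existing Pipelines syntax):

#!yaml
# Hypothetical: declare repo-wide globals once, visible to every step
init:
  PROJECT_NAME: DAS.Data.ETL.HomeNet
  DOCKER_IMAGE: gcr.io/dasplatform-production/das/data.etl.homenet
  DEPLOYMENT_NAME: etl-homenet

Every step's script (and ideally non-script sections like artifacts) would then see these as ordinary environment variables.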
Here's a sample pipeline config for one of my projects. Notice that in the artifacts sections I have to hardcode the path, because there's no environment variable set and no opportunity to run an export command the way I can in the script sections. This could be made more generic if I had that option.
#!yaml
image: microsoft/dotnet:sdk

.set_environment: &setenv |
  export PROJECT_NAME=DAS.Data.ETL.HomeNet
  export DOCKER_IMAGE=gcr.io/dasplatform-production/das/data.etl.homenet
  export DEPLOYMENT_NAME=etl-homenet

.gcloud_auth: &gcloudAuth |
  echo $GCP_CREDS_DEV | base64 --decode --ignore-garbage > ./gcloud-api-key.json
  gcloud auth activate-service-account --key-file gcloud-api-key.json
  gcloud auth configure-docker --quiet
  gcloud config set project $GCP_PROJECT
  gcloud container clusters get-credentials reticle --region us-central1-a

pipelines:
  default:
    - step: &buildtest
        name: Build and Unit Test
        caches:
          - dotnetcore
        artifacts:
          - src/**/bin/Release/**.nupkg
        script:
          - dotnet restore src -s $NUGET_SOURCE -s https://api.nuget.org/v3/index.json
          - dotnet build src -c Release -p:BuildNumber=$BITBUCKET_BUILD_NUMBER
          - dotnet tool install -g trx2junit
          - export PATH="$PATH:/root/.dotnet/tools"
          - |
            function convert_tests {
              ls test-results/*.trx | xargs trx2junit
              rm test-results/*.trx
            }
            trap convert_tests EXIT
          - dotnet vstest src/**/bin/Release/**/**.Tests.dll --logger:trx --ResultsDirectory:test-results
  branches:
    dev:
      - parallel:
          - step: *buildtest
          - step:
              name: Publish netcore artifacts
              artifacts:
                - src/DAS.Data.ETL.HomeNet/out/**
              script:
                - *setenv
                - dotnet restore src -s $NUGET_SOURCE -s https://api.nuget.org/v3/index.json
                - dotnet publish src/$PROJECT_NAME/$PROJECT_NAME.csproj -c Release -o out
      - step:
          name: Build and push docker image
          image: google/cloud-sdk:latest
          script:
            - *setenv
            - *gcloudAuth
            - docker build -t $DOCKER_IMAGE:$BITBUCKET_COMMIT src/$PROJECT_NAME/
            - docker push $DOCKER_IMAGE
          services:
            - docker
      - step:
          name: Deploy to Dev
          deployment: test
          image: google/cloud-sdk:latest
          script:
            - *setenv
            - *gcloudAuth
            - kubectl apply -f kube/dev.yaml # this is where helm could be nice
            - kubectl set image -n dev deploy/$DEPLOYMENT_NAME $DEPLOYMENT_NAME=$DOCKER_IMAGE:$BITBUCKET_COMMIT
    master:
      # these can't be run in parallel because the nuget publish needs the artifacts of the first step
      - step: *buildtest
      - step:
          name: Publish netcore artifacts
          artifacts:
            - src/DAS.Data.ETL.HomeNet/out/**
          script:
            - *setenv
            - dotnet restore src -s $NUGET_SOURCE -s https://api.nuget.org/v3/index.json
            - dotnet publish src/$PROJECT_NAME/$PROJECT_NAME.csproj -c Release -o out
            - dotnet nuget push src/**/bin/Release/**.nupkg -s $NUGET_SOURCE
      - step:
          name: Build and push docker image
          image: google/cloud-sdk:latest
          script:
            # master builds also tag latest
            - *setenv
            - *gcloudAuth
            - docker build -t $DOCKER_IMAGE:$BITBUCKET_COMMIT src/$PROJECT_NAME/
            - docker tag $DOCKER_IMAGE:$BITBUCKET_COMMIT $DOCKER_IMAGE:latest
            - docker push $DOCKER_IMAGE:$BITBUCKET_COMMIT
            - docker push $DOCKER_IMAGE:latest
          services:
            - docker
      - step:
          name: Deploy to Staging/QA
          deployment: staging
          image: google/cloud-sdk:latest
          script:
            # this can't really be tested well in QA so for now, it goes straight from dev to prod
            - echo 'There is no staging environment at this time. This is just here to enforce the standard pipeline.'
      - step:
          name: Deploy to Production
          deployment: production
          trigger: manual
          image: google/cloud-sdk:latest
          script:
            - *setenv
            - *gcloudAuth
            - kubectl apply -f kube/prod.yaml # this is where helm could be nice
            - kubectl set image -n prod deploy/$DEPLOYMENT_NAME $DEPLOYMENT_NAME=$DOCKER_IMAGE:$BITBUCKET_COMMIT
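With something like the hypothetical init: block above, the repeated - *setenv lines and the hardcoded artifact paths could both go away. For example, the publish step could be written like this (again assuming that made-up syntax, and assuming variables would be expanded in non-script sections):

#!yaml
- step:
    name: Publish netcore artifacts
    artifacts:
      # $PROJECT_NAME would come from the hypothetical init: block instead of being hardcoded
      - src/$PROJECT_NAME/out/**
    script:
      - dotnet restore src -s $NUGET_SOURCE -s https://api.nuget.org/v3/index.json
      - dotnet publish src/$PROJECT_NAME/$PROJECT_NAME.csproj -c Release -o out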