[BAM-13276] Smoke testing reports in deployment environments

    • Type: Suggestion
    • Resolution: Unresolved
    • Components: Deployments, Tests
    • Our product teams collect and evaluate feedback from a number of different sources. To learn more about how we use customer feedback in the planning process, check out our new feature policy.

      It is not possible to report on smoke test failures when a set of smoke tests (Selenium, etc.) is executed as part of an environment's task list.

      We should add reporting around the deployment result to record and track smoke test failures.


            Pablo Garcia added a comment -

            This is really bad. No news from Atlassian about an issue with 130 votes.

            How many votes are needed to have this task under consideration?

            Thanks.


            Bob Swift {Appfire} added a comment -

            I think the link you wanted to share is: https://jira.atlassian.com/issues/?jql=project%20%3D%20BAM%20AND%20status%20%3D%20Open%20ORDER%20BY%20votes%20DESC

            Atlassian DRT added a comment -

            Can someone from Atlassian give an update about this issue? 119 votes and no official heads-up in two years!

            According to Atlassian's feature request policy, they should be giving us constant updates about the top 20 issues: https://jira.atlassian.com/browse/BAM-13276?jql=project%20%3D%20BAM%20AND%20status%20%3D%20Open%20ORDER%20BY%20votes%20DESC

            Thanks.


            Bob Swift added a comment -

            I see it has been almost 4 years since my last comment here regarding this omission. Since then, we have continued to mostly use unique (regular) builds for each environment. However, if I really wanted to use the deployment support as is, I would add a script or Bamboo CLI task to the build that runs a test build using the queueBuild action and the wait parameter. This means the task will fail if the test build fails.
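            A minimal sketch of this queue-a-build-and-wait workaround, done against Bamboo's REST queue and result resources instead of the CLI; the server URL, plan key, credentials, polling interval and JSON field names below are illustrative assumptions, not taken from this issue or verified against a specific Bamboo version:

{code:python}
#!/usr/bin/env python3
"""Sketch: queue a smoke-test plan and wait for its result, so the
surrounding deployment task fails when the tests fail."""
import sys
import time
import requests  # third-party HTTP client

BAMBOO = "https://bamboo.example.com"          # hypothetical server
PLAN_KEY = "PROJ-SMOKE"                        # hypothetical smoke-test plan
AUTH = ("bamboo_user", "bamboo_password")      # placeholder credentials
HEADERS = {"Accept": "application/json"}

# Queue a run of the smoke-test plan.
resp = requests.post(f"{BAMBOO}/rest/api/latest/queue/{PLAN_KEY}",
                     auth=AUTH, headers=HEADERS)
resp.raise_for_status()
build_number = resp.json()["buildNumber"]      # assumed response field

# Poll the build result until it finishes.
result_url = f"{BAMBOO}/rest/api/latest/result/{PLAN_KEY}-{build_number}"
while True:
    result = requests.get(result_url, auth=AUTH, headers=HEADERS).json()
    if result.get("lifeCycleState") == "Finished":
        break
    time.sleep(15)

# Exit non-zero so the deployment task (and the deployment) fails with the tests.
if result.get("buildState") != "Successful":
    print(f"Smoke tests failed: {result_url}", file=sys.stderr)
    sys.exit(1)
print("Smoke tests passed")
{code}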


            Jacob Briggs added a comment -

            This kind of negates the possibility of ever using Bamboo as a continuous delivery tool, which is a must for modern CI platforms. Deployment without a test is a big, big omission... that means we are back to our old build plans, which don't support the level of parallelism, etc. we require.

            Can we review this status and move it from low to blocker?  Is there any chance of looking at this?


            Bhagyashree added a comment -

            The Plan Runners add-on adds a task that runs an integration-test build plan from a deployment and passes or fails the deployment based on the test status:
            https://marketplace.atlassian.com/plugins/com.mdb.plugins.planrunners/server/overview
            Hope this helps.


            Michael Valentin added a comment -

            Interesting. Thanks for the suggestion. As for the API, I agree, there's not much there.

            Good conversation, T.W.!


            T. W. added a comment -

            An API call is a good idea as well. However, depending on the build duration it might give users a cumbersome experience (first showing the build is OK, which then probably gets invalidated again later). So I'd rather suggest doing the QA for the build within the build stage (by provisioning a temporary VM and running the tests). That way you can surface the build feedback to users directly and guarantee that the release in your library is bullet-proof for deployments.
            And as far as I can see, the API only provides methods to add comments and labels to a result: https://docs.atlassian.com/bamboo/REST/5.11.0/#d2e1865 (which makes sense from a workflow perspective).
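            A minimal sketch of what those comment/label resources allow: recording a smoke-test outcome on a build result so it is at least visible and searchable. The endpoint paths follow the result resource referenced above, but the server URL, result key and JSON field names are illustrative assumptions:

{code:python}
#!/usr/bin/env python3
"""Sketch: tag and comment on a build result after smoke tests run."""
import requests

BAMBOO = "https://bamboo.example.com"      # hypothetical server
RESULT_KEY = "PROJ-PLAN-42"                # hypothetical build result key
AUTH = ("bamboo_user", "bamboo_password")  # placeholder credentials
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

# Label the result so failed smoke tests are searchable...
requests.post(f"{BAMBOO}/rest/api/latest/result/{RESULT_KEY}/label",
              json={"name": "smoke-tests-failed"},       # assumed payload shape
              auth=AUTH, headers=HEADERS).raise_for_status()

# ...and leave a human-readable note on the result.
requests.post(f"{BAMBOO}/rest/api/latest/result/{RESULT_KEY}/comment",
              json={"content": "Smoke tests failed during deployment to staging."},
              auth=AUTH, headers=HEADERS).raise_for_status()
{code}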


            Michael Valentin added a comment -

            Thanks T.W. That makes perfect sense. We do a similar two-stage setup as you do. Our trouble is in that 2nd deployment where, for the reasons you stated, we re-run our tests. When we re-run those tests, we don't see a clear way of taking the parsable XML and having it impact the "success" or "failure" of the deployment plan. Nor can we easily see the test results the way Bamboo presents them in the Tests tab of a build. That would be perfect in our opinion. I would be willing to use the Bamboo API if there were a function we could call that allowed us to "set" the status of the deployment based on the errors, or lack thereof, from our test harness.

            Any thoughts on that end? I would have thought someone had thought of how to improve the deployment process. We've been thinking of going back to just using build plans, or Go or some other deployment pipeline service.
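            One common way to make re-run test results affect a deployment, absent native support: a Script task in the deployment environment that scans the JUnit-style XML reports and exits non-zero on any failure, which in turn fails the deployment. A minimal sketch, with the report location as an illustrative assumption (it does not recreate Bamboo's Tests tab, only the pass/fail outcome):

{code:python}
#!/usr/bin/env python3
"""Sketch: fail the deployment task if any JUnit XML report records failures."""
import glob
import sys
import xml.etree.ElementTree as ET

REPORT_GLOB = "test-reports/**/*.xml"   # hypothetical location of the re-run results

failures = 0
for path in glob.glob(REPORT_GLOB, recursive=True):
    root = ET.parse(path).getroot()
    # JUnit XML: <testsuite failures="N" errors="M" ...>, possibly under <testsuites>.
    suites = [root] if root.tag == "testsuite" else root.iter("testsuite")
    for suite in suites:
        failures += int(suite.get("failures", 0)) + int(suite.get("errors", 0))

if failures:
    print(f"{failures} smoke test failure(s); failing the deployment task", file=sys.stderr)
    sys.exit(1)
print("All smoke tests passed")
{code}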


            T. W. added a comment -

            Well, we actually perform two deployments: one during the build stage, where an SSH task triggers Ansible. Ansible provisions a temp VM, deploys the code, generates the necessary test data and triggers the tests on the VM. Afterwards it copies the resulting XML reports back to the Bamboo server into the build directory. After the SSH task you can then use a Test Result Parser task to parse your result XML files and update your Jira tickets and users.
            If the build and tests were successful, you have a ready release to be deployed on a static dev, QA or staging platform. This is the second stage, the deployment stage. IMO there's no need to run the tests again. In case your staging platform consists of more servers and deployment might fail for other reasons, you could still use the same SSH task in the deployment stage to trigger the tests on that platform as well. However, that would be more for manual debugging in that case.
            Makes sense?


              Assignee: Unassigned
              Reporter: James Dumay (jdumay)
              Votes: 136
              Watchers: 108
