
      I have a Docker image I want to use: https://hub.docker.com/r/xueshanf/s3fs/

      The image mounts an S3 bucket for me. I think it could be really helpful for running Maven builds that like to download the world: I could have a custom settings.xml point to the S3 bucket where all my external dependencies exist, and my build/deploy times would drop dramatically.

      There are a few caveats about configuring the container securely. It requires that the host has a .s3fs file containing my AWS access and secret keys. I could host the image in a private repository, but that's not ideal if this is going to be a solution for all your users.

      I'd love some feedback on what you think about this solution and how you could potentially add it to Bitbucket Pipelines.

      Cheers,
      Bjorn

            [BCLOUD-13368] Pipeline Docker image that mounts S3 repository

            Katherine Yabut made changes -
            Status: Resolved → Closed (workflow updated: JAC Suggestion Workflow → JAC Suggestion Workflow 3)
            Matt Ryall made changes -
            Status: Gathering Interest → Resolved

            StannousBaratheon added a comment -

            Hi Bjorn,

            The image doesn't start s3fs by default, so you'll have to add the s3fs command from the docker-compose example to your yaml file. If you run it after writing the .s3fs file, timing shouldn't be an issue.

            Regards,
            Sam
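
            A sketch of what that step order could look like in the build script. The bucket name and mount point are placeholders, and the s3fs invocation shape follows the docker-compose example in the image's repo; in Pipelines the credentials would come from secured variables rather than the fallback values used here:

            ```shell
            # Write the .s3fs credentials file first (ACCESS_KEY:SECRET_KEY format).
            # The fallback values are placeholders for illustration only.
            AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID:-AKIAEXAMPLEKEY}"
            AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY:-exampleSecret}"
            printf '%s:%s\n' "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > "$HOME/.s3fs"
            chmod 600 "$HOME/.s3fs"   # s3fs rejects credential files readable by others

            # Only after the file exists, start the mount (bucket and mount
            # point are placeholders; command shape per the image's docker-compose example):
            # s3fs my-maven-bucket /mnt/s3-maven -o passwd_file="$HOME/.s3fs"
            ```
            
            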

            Bjorn Harvold added a comment -

            Hi Sam,

            The question here is about timing. If I specify an image to use, that image already expects the .s3fs file to be present; by the time we get to my yaml file, it's too late. The image also comes with a docker-compose.yml file to start it properly. You can find it here: https://github.com/xueshanf/docker-s3fs

            Let me know if this could work. If not, I see this as a great feature for Pipelines.

            Cheers,
            Bjorn

            StannousBaratheon added a comment -

            You could also use secured Pipelines variables to store your AWS credentials and write them to the .s3fs file as part of your build. This is what we currently recommend for SSH keys; please see this answer for more information: https://answers.atlassian.com/questions/39243415/how-can-i-use-ssh-in-bitbucket-pipelines
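
            Put together, a rough sketch of what the secured-variables approach could look like in bitbucket-pipelines.yml. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are assumed to be secured repository variables; the bucket name and mount path are placeholders:

            ```yaml
            # bitbucket-pipelines.yml sketch -- credentials come from secured
            # repository variables, written to .s3fs before the mount starts.
            image: xueshanf/s3fs
            pipelines:
              default:
                - step:
                    script:
                      - printf '%s:%s\n' "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" > ~/.s3fs
                      - chmod 600 ~/.s3fs
                      - s3fs my-maven-bucket /mnt/s3-maven -o passwd_file=$HOME/.s3fs  # placeholder bucket/mount
                      - mvn -s settings.xml install
            ```

            One caveat worth noting: FUSE mounts like s3fs generally need extra container privileges (e.g. access to /dev/fuse), which may not be available in the build environment.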
            Geoff created issue -

              Assignee: Unassigned
              Reporter: Bjorn Harvold
              Votes: 0
              Watchers: 3