  Server Deployments and Scale / SCALE-23

Defining the DC node disk size during the CloudFormation template phase

    • Type: Suggestion
    • Resolution: Fixed
    • AWS Quick Start
    • Our product teams collect and evaluate feedback from a number of different sources. To learn more about how we use customer feedback in the planning process, check out our new feature policy.

      Issue Summary

      The CloudFormation template currently defaults the node disk size to 8 GB. The log directory Bitbucket_Home/log is stored on the node, and the log size increases drastically, causing performance issues with the nodes. Log rotation, which can compress and archive log files, is only a temporary fix.

      It would be great to have an option to increase the node disk size during the CloudFormation template phase.

      Suggestion

      Suggestion from the customer: similar to the additional storage for your repository data feature, provide an option to add a secondary EBS volume to store the log, cache and temp files.
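      For illustration only, a rough sketch of what such a secondary volume could look like as an extra entry in the cluster node's BlockDeviceMappings. The device name, size and purpose below are hypothetical (not part of the current Quick Start), and the volume would still need to be formatted and mounted for the log/cache/tmp paths, e.g. via user data:

          BlockDeviceMappings:
            # Hypothetical secondary EBS volume dedicated to log, cache and temp files
            - DeviceName: /dev/xvdf        # example device name, not taken from the Quick Start
              Ebs:
                VolumeSize: 20             # GiB; size to suit expected log growth
                VolumeType: gp2
                DeleteOnTermination: true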

            [SCALE-23] Defining the DC node disk size during the CloudFormation template phase

            Rogier.Timmermans added a comment - edited

            Thanks for the quick pick-up!

             

            Edit: Fix confirmed! Below is a quick lsblk/df against a web node from a newly deployed 7.6.0 stack. Many thanks!

             

            [root@ip-192-168-106-32 log]# lsblk
            NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
            nvme0n1       259:0    0  50G  0 disk
            ├─nvme0n1p1   259:1    0  50G  0 part /
            └─nvme0n1p128 259:2    0   1M  0 part
            [root@ip-192-168-106-32 log]# df -h
            Filesystem                                  Size  Used Avail Use% Mounted on
            devtmpfs                                    3.8G     0  3.8G   0% /dev
            tmpfs                                       3.8G     0  3.8G   0% /dev/shm
            tmpfs                                       3.8G  396K  3.8G   1% /run
            tmpfs                                       3.8G     0  3.8G   0% /sys/fs/cgroup
            /dev/nvme0n1p1                               50G  2.7G   48G   6% /
            192.168.106.23:/media/atl/bitbucket/shared  100G  135M  100G   1% /media/atl/bitbucket/shared
            tmpfs                                       763M     0  763M   0% /run/user/1000


            Ben P (Inactive) added a comment

            Hi d8d4f7618995, thanks for letting us know. It looks like there was a mistake when we tried to implement this and the volume was added to the Bitbucket NFS node instead of the Bitbucket webapp node. We've corrected that, so the parameter should now work as intended and increase the root volume size of the Bitbucket webapp EC2 instance. You can get the latest version of the template here: https://aws.amazon.com/quickstart/architecture/bitbucket/

            Rogier.Timmermans added a comment - edited

            Executed a dry-run test of the AWS Quick Start template on 18-09-2020. The size of the web nodes (ClusterNodeVolumeSize) can indeed be specified in the template; however, the created resources remain at their default 8 GB.

            I would expect a newly spawned node to come up with a 50 GB disk, but they emerge with an 8 GB disk.

            Fix does not seem effective.

            Dylan Rathbone added a comment - Fixed via: https://github.com/aws-quickstart/quickstart-atlassian-bitbucket/commit/89cd62a4be68dba385cf6c93b7a3d75b842002dd

            Dylan Rathbone added a comment - Fix applied to BB via this commit: https://github.com/aws-quickstart/quickstart-atlassian-bitbucket/commit/89cd62a4be68dba385cf6c93b7a3d75b842002dd

            Rogier.Timmermans added a comment

            Details of the change request:

            The issue can be resolved via a modification of the template "quickstart-atlassian-bitbucket".

            It would require the following change(s):

            1) A variable needs to be introduced that is filled in during the CloudFormation parameter phase; the default is 8 GB, so that can also be set as the default value for this variable.

            2) The ClusterNodeLaunchConfig Properties would need to be adjusted with a BlockDeviceMappings section where the variable controls the size of the volume:

                BlockDeviceMappings:
                  - DeviceName: /dev/xvdh
                    Ebs:
                      VolumeSize: 50     # use the value of the variable here
                      VolumeType: gp2    # you may want to introduce this as a choice/variable as well, but it's not a requirement from my side
                      Encrypted: true    # you may want to introduce this as a choice/variable as well, but it's not a requirement from my side

             

            This should be an easy change and would allow a more flexible deployment from the get-go for any user, without diverging from the ATL templates.

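            For reference, a minimal sketch of how the two suggested changes could be wired together. The parameter name ClusterNodeVolumeSize matches the one the Quick Start later exposed, but the resource type, root device name and abbreviated properties below are assumptions for illustration, not the exact contents of the template:

                Parameters:
                  ClusterNodeVolumeSize:
                    Type: Number
                    Default: 8                                      # keep today's 8 GB as the default
                    Description: Root EBS volume size (GiB) for each cluster node

                Resources:
                  ClusterNodeLaunchConfig:
                    Type: AWS::AutoScaling::LaunchConfiguration     # assumed resource type
                    Properties:
                      # ...existing properties such as ImageId and InstanceType...
                      BlockDeviceMappings:
                        - DeviceName: /dev/xvda                     # assumed root device; verify against the AMI in use
                          Ebs:
                            VolumeSize: !Ref ClusterNodeVolumeSize
                            VolumeType: gp2
                            DeleteOnTermination: true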

            Rogier.Timmermans added a comment - edited

            The log growth we're seeing is mostly caused by a customer that is doing an abnormal amount of update checks, which makes the access logs grow really fast; this then swallows up cache/tmp space, which ultimately leads to the service crashing.

            We can obviously mitigate this (also see SCALE-22), but anything we do on the node will not be a permanent solution after a terminate/re-init event, where the node resets to its defaults - admittedly these events are rare, but they are also out of our direct control, so they are an unknown factor.

            Still, I do feel that 8 GB might be a bit tight moving forward, and having the ability to configure the size of the nodes from the CloudFormation template would be a good feature.


              Assignee: Unassigned
              Reporter: Baskar Annamalai (Inactive)
              Votes: 1
              Watchers: 4
