- Bug
- Resolution: Fixed
- Low
- all
- None
- 1
- Severity 3 - Minor
-
The document *Deploying enterprise-scale Confluence on AWS: a step-by-step guide* suggests the c5.xlarge instance type as the default for cluster nodes:
Cluster Nodes

| Parameter label (name) | Default | Description |
|---|---|---|
| Cluster node instance type (ClusterNodeInstanceType) | c5.xlarge | Choose the application node type that matches your size and preferred node configuration in Application and database nodes. For example, if you prefer Performance for Large, choose c5.4xlarge. |
| Maximum number of cluster nodes (ClusterNodeMax) | 1 | Both of these parameters default to 1. Do not change this default, even if Application and database nodes recommends a specific number of nodes. You'll need to deploy Confluence Data Center with one application node, and then scale it up later after configuring Confluence. |
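For illustration, these Quick Start parameters could be supplied as a CloudFormation parameters file. The parameter keys come from the table above; the file itself and its use are an assumption, not part of the guide:

```json
[
  { "ParameterKey": "ClusterNodeInstanceType", "ParameterValue": "c5.xlarge" },
  { "ParameterKey": "ClusterNodeMax", "ParameterValue": "1" }
]
```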
The guide also recommends 8 GB of JVM heap memory:
Application tuning

| Parameter label (name) | Default | Description |
|---|---|---|
| Confluence Heap Size Override (JvmHeapOverride) | N/A | Set this to 8g. This is the same heap size used in the tests that form the basis for our recommendations. |
However, per the AWS documentation, a `c5.xlarge` has only 8 GB of system RAM. This creates memory pressure when starting Confluence, and the JVM fails to reserve the full 8 GB heap:
```
failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/atlassian/confluence/hs_err_pid8953.log
```
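The failure is simple arithmetic: the requested heap (8589934592 bytes) is exactly the 8 GiB of total system RAM on a c5.xlarge, leaving no headroom for the OS, page cache, or the JVM's own non-heap memory. A minimal sketch of that calculation (the figures come from the error log above and AWS's published instance spec; the script itself is illustrative):

```python
# Compare the JVM heap requested by the guide against c5.xlarge system RAM.
GIB = 1024 ** 3

requested_heap_bytes = 8589934592   # mmap size from the JVM error above (8g heap)
c5_xlarge_ram_bytes = 8 * GIB       # c5.xlarge system RAM per AWS documentation

print(f"requested heap: {requested_heap_bytes / GIB:.1f} GiB")
headroom = (c5_xlarge_ram_bytes - requested_heap_bytes) / GIB
print(f"headroom left for OS and JVM native memory: {headroom:.1f} GiB")  # 0.0 GiB
```

With zero headroom, the `mmap` call for the heap cannot succeed once the OS and other processes have claimed any memory at all, which matches the `errno=12` (ENOMEM) failure shown.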
Expected Results
The document should be updated so that users are not confused and do not provision undersized instances.
Workaround
None; this is a documentation bug.