Suggestion
Resolution: Unresolved
Bitbucket's adaptive throttling sub-system is a protection mechanism that uses tickets to throttle "heavyweight" Git hosting operations such as clone, fetch, and push. When the system is too busy to handle a Git hosting operation immediately, the request is queued; if it has not reached the front of the queue and been executed within 5 minutes, it is rejected.
The inputs to this dynamic throttling include the following (a sketch of the ticketing idea follows the list):
- Number of CPU cores
- %CPU usage
- Total system memory
- Various constraints that can be tuned
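In simplified form, the ticketing idea looks roughly like the sketch below. The class and method names are hypothetical, and the real sub-system also varies the number of tickets using the CPU and memory inputs listed above rather than a fixed count; this is only an illustration of "queue, then reject after 5 minutes".

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; not Bitbucket's actual implementation.
// Heavyweight hosting operations must obtain a ticket before running;
// callers that wait longer than 5 minutes are rejected.
public class HostingThrottle {
    private final Semaphore tickets;

    public HostingThrottle(int maxConcurrentOperations) {
        // Fair semaphore so queued requests are granted in arrival order.
        this.tickets = new Semaphore(maxConcurrentOperations, true);
    }

    /** Runs a hosting operation under a ticket, or rejects it after a 5 minute wait. */
    public void run(Runnable hostingOperation) throws InterruptedException {
        if (!tickets.tryAcquire(5, TimeUnit.MINUTES)) {
            throw new IllegalStateException("Rejected: no ticket became available within 5 minutes");
        }
        try {
            hostingOperation.run(); // e.g. fork a Git process to serve the clone/fetch/push
        } finally {
            tickets.release();
        }
    }
}
```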
One of the more commonly used tunables is `throttle.resource.scm-hosting.adaptive.limit.max`. This sets the upper limit on the number of SCM hosting operations, meaning pushes and pulls over HTTP or SSH, that may run concurrently.
A common reason for tuning this is a system with insufficient memory relative to the size of its largest repositories. On such a system, if no tuning is carried out, the Git processes Bitbucket forks can exhaust system memory.
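For example, an administrator who has worked out how many concurrent hosting operations the host's memory can sustain might cap the adaptive limit in `bitbucket.properties`. The value below is purely illustrative, not a recommendation:

```
# bitbucket.properties -- illustrative value only; size it against your host's
# memory and the largest repositories it serves
throttle.resource.scm-hosting.adaptive.limit.max=12
```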
A useful improvement to adaptive throttling would be to account for RAM dynamically, potentially taking into account:
- Available RAM at the time the ticket is granted, not just the total RAM calculated when Bitbucket starts; and/or
- Average or maximum RAM usage of forked Git processes for each ticket granted
This wouldn't eliminate out-of-memory situations entirely, because Bitbucket can't predict exactly how much memory a Git process will consume, but it could reduce the chance of one occurring on a system that is not appropriately configured.
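As a rough sketch of the suggestion, the effective limit could be recomputed from the current memory headroom each time a ticket is considered. All names and the per-operation memory estimate here are assumptions for illustration, not existing Bitbucket behaviour:

```java
import java.util.function.LongSupplier;

// Sketch of the suggested behaviour (hypothetical names and heuristic):
// before granting a ticket, compare the RAM that is free right now with an
// estimate of how much a forked Git process tends to use, rather than relying
// only on figures captured when Bitbucket starts.
public class RamAwareLimiter {
    private final LongSupplier availableRamBytes;   // read from the OS at grant time
    private final long estimatedBytesPerOperation;  // e.g. average or maximum RAM of past Git processes
    private final int hardMax;                      // upper bound from configuration

    public RamAwareLimiter(LongSupplier availableRamBytes,
                           long estimatedBytesPerOperation,
                           int hardMax) {
        this.availableRamBytes = availableRamBytes;
        this.estimatedBytesPerOperation = estimatedBytesPerOperation;
        this.hardMax = hardMax;
    }

    /** How many hosting tickets the current memory headroom appears to support. */
    public int currentLimit() {
        long supportable = availableRamBytes.getAsLong() / estimatedBytesPerOperation;
        return (int) Math.max(1, Math.min(hardMax, supportable));
    }
}
```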