Suggestion
Resolution: Fixed
We have monorepos with lots of configuration files separated along two axes, modelled simply as a two-level directory tree (e.g. abc/xyz/file1, abc/xyz/file2, def/foo/file1, etc.).
Today, when we do a pipeline run for one combination (e.g. abc with xyz), we need to clone the entire repo, which is around 1GB. The actual data that a sparse-checkout then yields is only around 10MB per such abc/xyz directory combination.
With "git clone --filter" support, the filtering would happen on the server side, so at the end of the day only about 10MB would be transmitted, cutting network transfer and speeding up our builds significantly.
At the moment this filter is not implemented on Bitbucket Cloud:
git clone --filter=combine:blob:none+tree:0 --no-checkout --branch ${DEPLOY_TARGET_BRANCH} ${REPO_URL}
results in a warning:
warning: filtering not recognized by server, ignoring
Update: Thu, 08 Aug 2024
Growing demand from large-scale monorepos requires the ability to clone a specific hierarchy within a repository (sparse checkout). Being able to clone lightweight repositories and expand them on demand helps our users avoid upfront latency and storage costs. Bitbucket has fully rolled out support for clone filters for git clone/fetch over HTTPS and SSH.
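For reference, the partial-clone plus sparse-checkout workflow described above can be sketched end to end. This is a minimal, self-contained example: it substitutes a throwaway local repository (served over file:// so the filter is actually exercised) for ${REPO_URL}, uses the simpler blob:none filter, and borrows the abc/xyz directory names from the example above; the config keys uploadpack.allowFilter and uploadpack.allowAnySHA1InWant are only needed because we are acting as our own server here.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a toy "monorepo" with the two-level layout described above.
git init -q -b main src
cd src
git config uploadpack.allowFilter true          # let clients request filters
git config uploadpack.allowAnySHA1InWant true   # allow on-demand blob fetches
mkdir -p abc/xyz def/foo
echo one > abc/xyz/file1
echo two > def/foo/file1
git add -A
git -c user.name=ci -c user.email=ci@example.invalid commit -qm init
cd ..

# Blobless partial clone: commits and trees come down now, blobs only on demand.
git clone -q --filter=blob:none --no-checkout "file://$tmp/src" combo
cd combo
git sparse-checkout set abc/xyz   # restrict the worktree to one combination
git checkout -q main              # fetches only the blobs under abc/xyz
ls abc/xyz                        # file1 is present; def/foo was never materialised
```

The same shape works against a hosted remote once the server advertises filter support: clone with --filter and --no-checkout, narrow with sparse-checkout, then check out the branch.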