We have monorepos with many configuration files organized along two axes, modelled as a simple two-level directory tree (e.g. abc/xyz/file1, abc/xyz/file2, def/foo/file1, etc.).
Today, a pipeline run for a given combination (e.g. abc with xyz) has to clone the entire repository, which is around 1 GB. A subsequent sparse-checkout then reduces the working tree to only about 10 MB per such abc/xyz directory combination.
With "git clone --filter" support, the filtering would happen on the server side, so at the end of the day only about 10 MB would be transmitted per run, avoiding most of the network transfer and speeding up our builds significantly.
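To illustrate, here is a sketch of the intended partial-clone plus sparse-checkout workflow, run end-to-end against a throwaway local repository (the src/dst names and the abc/xyz layout are illustrative; a real server would need to allow filters, which is emulated here by setting uploadpack.allowFilter on the source repo):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a tiny "monorepo" with the two-level layout described above.
git init -q src
cd src
mkdir -p abc/xyz def/foo
echo a > abc/xyz/file1
echo b > def/foo/file1
git add .
git -c user.email=ci@example.com -c user.name=ci commit -qm init
# Emulate server-side support for partial-clone filters.
git config uploadpack.allowFilter true
cd ..

# Partial clone: fetch commits and trees, but no blobs up front;
# --sparse starts with a sparse-checkout of the top level only.
# file:// forces the normal transport instead of hardlinking.
git clone -q --filter=blob:none --sparse "file://$tmp/src" dst
cd dst

# Narrow the working tree to one combination; only the blobs
# needed for abc/xyz are fetched lazily from the remote.
git sparse-checkout set abc/xyz
```

With a server that honours the filter, only the blobs under abc/xyz ever cross the network; def/foo/file1 is never downloaded.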
At the moment this filter is not implemented on Bitbucket Cloud; attempting a partial clone results in a warning: