Feedback from: https://community.atlassian.com/t5/Bitbucket-questions/Integrate-Bitbucket-Pipelines-cache-with-buildkit/qaq-p/2590637
When building images that require BuildKit features, caching with the standard docker driver becomes unavailable.
There are a couple of workarounds at the moment, the primary one being caching in a repository. This is only feasible when the repository in question already exists, which in the case of e.g. CDK is not true on the initial build, and it doesn't give you access to the latest tags anyway.
The way we currently work around it is to export the cache with --cache-to=type=local to a known directory, and then use a defined cache in bitbucket-pipelines.yml to cache that directory, with the Dockerfile as the cache key. Since our layer cache maxes out at around 80-100 MB uncompressed, this works okay-ish, but it obviously invalidates the whole cache whenever something in a later step of the Dockerfile changes, even though that should not affect earlier layers.
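For reference, a minimal sketch of that workaround is below; the cache name, cache directory (.buildx-cache) and image tag (my-app) are placeholders, and the exact file-based cache key syntax may need adjusting for your setup:

    definitions:
      caches:
        buildkit-layers:
          key:
            files:
              - Dockerfile         # cache key: any Dockerfile change invalidates everything
          path: .buildx-cache      # directory populated by --cache-to=type=local

    pipelines:
      default:
        - step:
            services:
              - docker
            caches:
              - buildkit-layers
            script:
              # the local cache exporter needs a non-default driver
              - docker buildx create --driver docker-container --use
              - docker buildx build
                  --cache-from type=local,src=.buildx-cache
                  --cache-to type=local,dest=.buildx-cache,mode=max
                  -t my-app .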
Now here's my recommendation:
BuildKit already has a cache backend for the GitHub Actions cache (type=gha), so it can already talk to that cache service directly. Bitbucket could implement the GitHub Actions cache protocol in its own cache service, which would let builds use the gha cache type directly, with no changes required upstream.
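If that protocol were implemented, usage could look something like this sketch; CACHE_SERVICE_URL and CACHE_TOKEN are hypothetical variables Pipelines would have to inject, while the url/token/mode parameters are ones the existing gha backend already accepts:

    docker buildx build \
      --cache-from type=gha,url=$CACHE_SERVICE_URL,token=$CACHE_TOKEN \
      --cache-to type=gha,url=$CACHE_SERVICE_URL,token=$CACHE_TOKEN,mode=max \
      -t my-app .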
Alternatively, Atlassian could contribute a cache backend for the Bitbucket Pipelines cache to BuildKit (e.g. type=bp), which would let docker cache from and to the Bitbucket cache directly.
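Such a backend does not exist today, but the usage being proposed would presumably look like this (type=bp is just the hypothetical name from the suggestion above):

    docker buildx build \
      --cache-from type=bp \
      --cache-to type=bp,mode=max \
      -t my-app .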
Considering that buildx is becoming the standard, an out-of-the-box caching solution should be found.
This is a must: my team discovered that after the Docker engine on our pipelines was upgraded, caching stopped working altogether, because BuildKit has been the default builder since 2024/2/1.
We contacted the team about this, and the solution was to roll back to the previously used Docker engine. That is fine for now, but this needs to be fixed for BuildKit to be viable.