On Wed, Oct 30, 2019 at 06:08:18PM +0100, Simon Holmberg wrote:

> I've been experimenting with the new Partial Clone feature, attempting
> to use it to filter out the otherwise full history of the large binary
> resources in our repos. It works really well on the initial clone. But
> once you start jumping around in history a lot, the repo will grow out
> of proportion again as the promised objects are fetched.
>
> Are there any plans to add a --filter parameter to git gc as well,
> that would be able to prune past history of objects and convert them
> back into promises? Or am I wrong in assuming that this could ever
> act as a native replacement for LFS? Without this, a repo would only
> continue to grow ad infinitum, resulting in the same issues as before
> unless you actively choose to delete your entire clone and re-clone
> it from upstream once in a while.

I don't recall seeing anybody actively working on this, but I think it
would be a good idea. You'd probably want to be able to specify it in
your config somehow, so that subsequent repacks prune as necessary
without you having to remember to do it each time.

You could naively just drop everything that matches the filter, and
then re-fetch it as needed. But for efficiency, you may want to keep
some other objects:

  - objects mentioned directly in the index, or the tree of HEAD; you'd
    end up re-fetching these the next time you "git checkout" (see the
    enumeration sketch below)

  - perhaps objects fetched recently are more worth keeping (e.g., ones
    with an mtime less than a day or two; see the mtime sketch below).
    I don't know if that helps, though. What you really care about is
    how recently they were accessed (assuming there's some locality
    there), not written. A frequently-accessed object may have been
    fetched immediately after you cloned, giving it an old mtime.

Since we can get any of the objects again if we want and we're just
optimizing, this is really just a cache-expiration problem. But it may
be hard to implement any of the stock algorithms without having logs of
which objects were accessed.

-Peff
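
P.S. A few sketches to make the above concrete. The kind of filtered
clone being discussed looks like this (the URL and the 1m threshold
are just placeholders):

  # omit blobs larger than 1 MB from the initial clone; missing
  # blobs are fetched on demand when a checkout needs them
  git clone --filter=blob:limit=1m https://example.com/repo.git

  # or omit every blob, fetching each one lazily
  git clone --filter=blob:none https://example.com/repo.git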
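
The gc interface being asked about might then look something like this
(purely hypothetical: neither the option nor the config key exists, and
both names are made up for illustration):

  # one-shot: drop already-fetched blobs that match the filter
  git gc --filter=blob:limit=1m

  # or configured once, so every subsequent repack applies it
  git config gc.repackFilter blob:limit=1m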
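
For the first bullet, the keep-set is easy to enumerate with existing
plumbing; something along these lines would list the blob ids you'd
want to exempt from pruning:

  # blobs referenced by the index (mode, oid, stage, path per line)
  git ls-files -s | awk '{print $2}'

  # blobs reachable from the tree of HEAD
  git ls-tree -r HEAD | awk '{print $3}'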
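
And for the second bullet, a crude version of the mtime heuristic could
just look at when the packs were written (with the caveat above that
write time is not access time):

  # packs written within the last two days
  find .git/objects/pack -name '*.pack' -mtime -2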