Patrick Steinhardt <ps@xxxxxx> writes:

> While this is a nice safeguard, I wonder whether it is sufficient.
> Suppose, for example, you have a non-bare repository that already has
> blobs checked out that would be removed by the filtering repack --
> does Git handle this situation gracefully?
>
> A quick check seems to indicate that it does. But not quite as well
> as I'd have hoped: when I switch to a detached HEAD with an arbitrary
> commit and then execute `git repack --filter=blob:none`, it also
> removes blobs that are referenced by the currently checked-out
> commit. This may or may not be what the user is asking for, but I'd
> rather lean towards this behaviour being surprising.

Hmph, the user asked not to keep blobs that came from the remote
locally and instead to refetch them from the promisor on demand, so I
would expect some pruning to happen (I am not a lazy-clone user,
though). As long as we do the pruning sensibly, that is.

Unless you are always following somebody else without doing any work
of your own, you are likely to have objects that exist only locally
and nowhere else. It would be unexpected and surprising if we lost
them only because they are of type 'blob' and because there is a
promisor remote configured. Even if that is documented, it would be an
unacceptable foot-gun misfeature. This is not just local repository
corruption that can be recovered by cloning from elsewhere; we are
looking at lost work that cannot be recovered.

I wonder if this topic can be salvaged by making the pruning less
aggressive, perhaps by traversing from the tips of the remote-tracking
branches of the promisor remote to identify which blobs can safely be
pruned (by definition, a promisor remote cannot lose objects that it
once published, or its cloners would immediately have corrupt
repositories). That may turn this from a misfeature into a feature.

Thanks.
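
To illustrate, a sketch of the scenario Patrick describes might look
like this (the URL, the commit, and the path are placeholders, and it
assumes the "--filter" option this series adds to "git repack"):

    $ git clone --filter=blob:none https://example.com/repo.git repo
    $ cd repo
    $ git checkout --detach HEAD~20   # lazily fetches this commit's blobs
    $ git repack -a -d --filter=blob:none
    $ git cat-file blob HEAD:README   # gone again; triggers a re-fetch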
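
And the "safe to prune" set suggested above could be approximated with
existing plumbing, something like the following rough sketch (it
assumes the promisor remote is called "origin", and passes
"--missing=allow-promisor" so the traversal does not trip over objects
we do not have locally):

    # Objects reachable from the promisor's remote-tracking tips; the
    # promisor guarantees we can re-fetch these, so blobs in this set
    # are candidates for pruning.
    $ git rev-list --objects --missing=allow-promisor --remotes=origin

    # Objects reachable from local branches, tags, or HEAD but not from
    # any remote-tracking tip; these may exist nowhere else and must be
    # kept.
    $ git rev-list --objects --missing=allow-promisor \
          --branches --tags HEAD --not --remotes=origin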