On Monday, 10 February 2025 12:38:17 CET Han Young wrote:
> We repack promisor packs by reading all the objects in promisor packs
> (in repack.c), and send them to pack-objects. pack-objects then writes a
> single pack containing all the promisor objects. The actual old promisor
> pack deletion happens in repack.c.
>
> So simply copying the keep-pack logic to repack_promisor_objects()
> does not prevent the kept promisor packs from being repacked.

I don't know much about the internals, so maybe I misinterpreted what I
saw, but the patch seems to fix the issue I observed: I have two 40 GiB
promisor packs, and as soon as 50 additional small packs had accumulated
(from fetches), gc triggered a repack that included the two big packs,
even though they are above the bigPackThreshold and should therefore have
been kept. Repacking them is very disruptive, both because of the CPU and
RAM load it produces, and because this is on btrfs with snapshots, so
rewriting those 80 GiB into a new file consumes that much extra disk
space for no good reason.

With my patch, gc did not touch these two big packs but still collected
all the small ones into one new pack, as expected. Everything else also
seems to work fine.

According to the man page for git-pack-objects, this looks like how it is
meant to work: the description for --keep-pack says "This flag causes an
object already in the given pack to be ignored, even if it would have
otherwise been packed." (and something similar for --honor-pack-keep). To
my untrained eye, that also seems to be how want_found_object() and
add_object_entry() in pack-objects.c handle it.
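
In case it is useful, here is a rough sketch of how I set things up to
check the behaviour; the threshold value and the listing commands are
illustrative, not the exact commands from my repository:

    # keep any pack larger than 10 GiB out of future repacks
    git config gc.bigPackThreshold 10g

    # two large promisor packs plus many small packs from fetches
    ls -lh .git/objects/pack/*.pack

    # gc consolidates only the small packs; packs above the threshold
    # are passed to repack via --keep-pack, and with the patch the big
    # promisor packs are left alone as well
    git gc

    ls -lh .git/objects/pack/*.pack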