Taylor Blau <me@xxxxxxxxxxxx> writes:

> When using merge-tree often within a repository[^1], it is possible to
> generate a relatively large number of loose objects, which can result in
> degraded performance, and inode exhaustion in extreme cases.

Well, be it "git merge-tree" or "git merge", new loose objects tend
to accumulate until "gc" kicks in, so it is not a new problem for
mere mortals, is it?

As one "interesting" use case of "merge-tree" is for a Git hosting
site with bare repositories to offer trial merges, without which the
majority of the objects their repositories acquire would have been
in packs pushed by their users, "Gee, loose objects consume many
inodes in exchange for easier selective pruning" becomes an issue,
right?

Just like it hurts performance to have too many loose object files,
presumably it would also hurt performance to keep too many packs,
each coming from such a trial merge. Do we have a "gc" story offered
for these packs created by the new feature? E.g., "once merge-tree
is done creating a trial merge, we can discard the objects created
in the pack, because we never expose new objects in the pack to the
outside world (processes running simultaneously), so instead of
closing the new packfile by calling flush_bulk_checkin_packfile(),
we can safely unlink the temporary pack. We do not even need to run
a 'gc' that spends cycles to enumerate what is still reachable", or
something like that?

Thanks.