Junio C Hamano <gitster@xxxxxxxxx> writes:

>> Yes.  But I think the default limit for the number of loose objects, 7000,
>> gives us small overhead when we do enumeration of all objects.
>
> Hmph, I didn't see the code that does the estimation of loose object
> count before starting to enumerate, though.

Another thing the code could do to avoid negative consequences on
projects that look quite different from yours (e.g. the other side does
not have an insane number of refs, but there are locally quite a few
loose objects) is to count how many entries are on the *refs list before
we decide to enumerate all loose objects.  When the refs list is
relatively shorter than the estimated number of loose objects (you can
actually do the estimation based on sampling, or just rely on your
assumed 7k), it may be a win _not_ to trigger the new code you are
adding to this codepath with this patch.

I would imagine that the simplest implementation may just count

	for (ref = *refs, count = 0; ref && count++ < LIMIT; ref = ref->next)
		;
	use_oidset_optim = (LIMIT <= count);

assuming your "up to 7k loose objects" and then experimenting to
determine the LIMIT, which is a rough number of refs that makes the
oidset optimization worthwhile.  We need a bit better/descriptive name
for the LIMIT, if we go that route, though.

Thanks.