On Tue, Jan 19, 2021 at 03:33:47PM +0100, Jacob Vosmaer wrote:

> What I learned is that by default, a fetch ends up using the
> '--include-tag' command line option of git-pack-objects. This causes
> git-pack-objects to iterate through all the tags of the repository to
> see if any should be included in the pack because they point to packed
> objects. The problem is that this "iterate through all the tags" uses
> for_each_ref, which iterates through all references in the repository,
> and in doing so loads each associated object into memory to check if
> the ref is broken. But all we need for '--include-tag' is to iterate
> through refs/tags/.
>
> On the repo we were testing this on, there are about
> 500,000 refs but only 2,000 tags. So we had to load a lot of objects
> just for the sake of '--include-tag'. It was common to see more than
> half the CPU time in git-pack-objects being spent in do_for_each_ref,
> and that in turn was dominated by ref_resolves_to_object.

Some of these details may be useful in the commit message, too. :)

Your "load a lot of objects" had me worried for a moment. We try hard
not to load objects during such an iteration, even when peeling them
(because the packed-refs format has a magic shortcut there). But I
think that is all working as intended. What you were seeing was just
tons of has_object_file() calls to make sure each ref was not corrupt
(so finding the entry in a packfile, but not actually inflating the
object contents).

Arguably both upload-pack and pack-objects could use the INCLUDE_BROKEN
flag to avoid even checking this. We'd notice the problem when somebody
actually tried to fetch the object in question. That would speed things
up further on top of your patch, because we wouldn't need to check the
existence of even the tags. But it's definitely orthogonal, and should
be considered separately.

-Peff
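
For readers following along, the shape of the fix under discussion is
roughly the following. This is a sketch against the public iteration
API in refs.h, not Jacob's actual patch; add_ref_tag here is a
stand-in for the real callback in builtin/pack-objects.c:

  #include "refs.h"

  /* Stand-in for the real add_ref_tag callback, which decides
   * whether a tag (and the chain of objects it peels to) belongs
   * in the pack. */
  static int add_ref_tag(const char *refname, const struct object_id *oid,
                         int flag, void *cb_data)
  {
          return 0;
  }

  /* Before: visits every ref in the repository (~500,000 on the
   * repo above), paying the per-ref corruption check each time. */
  for_each_ref(add_ref_tag, NULL);

  /* After: visits only refs/tags/ (~2,000 refs here). Note that
   * for_each_ref_in() hands the callback refnames with the
   * "refs/tags/" prefix already trimmed, so a callback that
   * filtered on starts_with(refname, "refs/tags/") needs a small
   * adjustment. */
  for_each_ref_in("refs/tags/", add_ref_tag, NULL);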
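
And to make the "not actually inflating" point concrete: the per-ref
cost is an existence check along these lines (paraphrased from
refs/files-backend.c; the exact code may differ). has_object_file()
only looks the oid up, e.g. a binary search in a pack .idx, and never
inflates the object contents:

  static int ref_resolves_to_object(const char *refname,
                                    const struct object_id *oid,
                                    unsigned int flags)
  {
          if (flags & REF_ISBROKEN)
                  return 0;
          if (!has_object_file(oid)) {
                  error(_("%s does not point to a valid object!"), refname);
                  return 0;
          }
          return 1;
  }

Iterating with broken refs included (DO_FOR_EACH_INCLUDE_BROKEN
internally) skips this check entirely, which is the further,
orthogonal speedup suggested above.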