On Sun, May 02, 2021 at 06:06:57PM +0700, Bagas Sanjaya wrote:

> Recently I stumbled upon git unpack-objects documentation, which says:
>
> > Read a packed archive (.pack) from the standard input, expanding the
> > objects contained within and writing them into the repository in
> > "loose" (one object per file) format.
>
> However, I have some questions:
>
> 1. When I do git fetch, what is the threshold/limit for "Unpacking
>    objects", in other words what is the minimum number of objects for
>    invoking "Resolving deltas" instead of "Unpacking objects"?
>
> 2. Can the threshold between unpacking objects and resolving deltas be
>    configurable?

See the fetch.unpackLimit config. The default is 100 objects.

> 3. Why in some cases Git do unpacking objects where resolving deltas
>    can be done?

I don't know if the documentation discusses this tradeoff anywhere, but
off the top of my head:

  - storing packs can be more efficient in disk space (because of deltas
    within the pack, but also fewer inodes for small objects). This
    effect is bigger the more objects you have.

  - storing packs can be less efficient, because thin packs will be
    completed with duplicates of already-stored objects. The overhead is
    bigger the fewer objects you have.

Which I suspect is the main logic driving the object count (I didn't dig
in the history or the archive, though; you might find more discussion
there). AFAIK the number 100 doesn't have any real scientific basis.

There are some other subtle effects, too:

  - storing packs from the wire may make git-gc more efficient (you can
    often reuse deltas sent by the other side)

  - storing packs from the wire may produce a worse outcome after
    git-gc, because you are reusing deltas produced by the client for
    their push (who might not have spent as much CPU looking for them as
    you would)

-Peff
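
P.S. If you want to experiment with forcing one behavior or the other,
something like the following should work. The values here are arbitrary
illustrations, not recommendations; as far as I recall, the fetched pack
is kept (and indexed) when the object count is at or above the limit,
and exploded into loose objects otherwise:

  # always keep the fetched pack (index-pack, "Resolving deltas")
  git config fetch.unpackLimit 1

  # effectively always explode into loose objects ("Unpacking objects")
  git config fetch.unpackLimit 1000000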