On Wed, Oct 03, 2018 at 12:08:15PM -0700, Stefan Beller wrote:

> I share these concerns in a slightly more abstract way, as
> I would bucket the actions into two separate bins:
>
> One bin that throws away information.
> this would include removing expired reflog entries (which
> I do not think are garbage, or collection thereof), but their
> usefulness is questionable.
>
> The other bin would be actions that optimize but
> do not throw away any information, repacking (without
> dropping files) would be part of it, or the new
> "write additional files".
>
> Maybe we can move all actions of the second bin into a new
> "git optimize" command, and git gc would do first the "throw away
> things" and then the optimize action, whereas clone would only
> go for the second optimizing part?

One problem with that world-view is that some of the operations do
_both_, for efficiency. E.g., repacking will drop unreachable objects
in too-old packs.

We could actually be more aggressive in combining things here. For
instance, a full object graph walk in linux.git takes 30-60 seconds,
depending on your CPU. But we do it at least twice during a gc: once
to repack, and then again to determine reachability for pruning.

If you generate bitmaps during the repack step, you can use them
during the prune step. But by itself, the cost of generating the
bitmaps generally outweighs the extra walk. So it's not worth
generating them _just_ for this (but it is an obvious optimization for
a server which would be generating them anyway).

-Peff
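
For illustration, a minimal sketch of the two walks when driven by
hand (the repack and prune invocations below use standard flags;
having the prune step actually consult the bitmaps is the optimization
discussed above, not something these commands already do on their own):

    # walk #1: repack all reachable objects into a single pack and
    # write a bitmap index alongside it
    git repack -a -d --write-bitmap-index

    # walk #2: prune performs its own reachability walk to decide
    # which loose objects are safe to delete
    git prune --expire=2.weeks.ago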