On Mon, 27 Sep 2010, Jan Krüger wrote:

> In 479b56ba ('make "repack -f" imply "pack-objects --no-reuse-object"'),
> git repack -f was changed to include recompressing all objects on the
> zlib level on the assumption that if the user wants to spend that much
> time already, some more time won't hurt (and recompressing is useful if
> the user changed the zlib compression level).
>
> However, "some more time" can be quite long with very big repositories,
> so some users are going to appreciate being able to choose. Hence, this
> adds a new -F option that uses the old behaviour of recalculating deltas
> only and keeping the zlib compression intact.
>
> Measurements taken using this patch on a current clone of git.git
> indicate a 17% decrease in time being made available to users:
>
> git repack -Adf  38.79s user 0.56s system 133% cpu 29.394 total
> git repack -AdF  34.84s user 0.56s system 145% cpu 24.388 total
>
> Signed-off-by: Jan Krüger <jk@xxxxx>
> ---
>
> The concrete case that prompted me to write this patch was a repository
> of 25 GB that some guys were trying to repack. 17% of the time needed to
> repack -f that much data is... substantial.
>
> Discussion point: it might make more sense to switch the meanings
> around, making -F do the 'bigger' routine and reverting -f to what it
> used to be. I don't feel strongly about that, however.

That's exactly what I was about to propose before I read through your
email down to this part.

I personally don't find --no-reuse-object particularly useful. I can
hardly imagine that people change the pack compression level that often,
if at all. So I doubt that moving the current --no-reuse-object behavior
to -F and reverting -f to --no-reuse-delta would cause any serious
inconvenience. It certainly won't _break_ anything. So you have my ACK
to make that change.

In addition to that change, perhaps a note could be added to the
documentation for pack.compression indicating that, for the new setting
to take effect for existing packs, they must be repacked with -F.

Nicolas
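
For illustration, here is a rough sketch of the swapped option mapping
being ACKed above, together with the kind of note suggested for
pack.compression. The exact documentation wording and file location are
assumptions made for this sketch, not part of the patch under discussion:

    # assumed semantics after the swap:
    git repack -Adf   # passes --no-reuse-delta: recompute deltas only,
                      # keeping the existing zlib streams
    git repack -AdF   # passes --no-reuse-object: also recompress every
                      # object, so a changed pack.compression takes effect

    # hypothetical wording for the note (e.g. in Documentation/config.txt):
    pack.compression::
        ...
        Note that changing this value does not recompress objects that
        are already stored in existing packs; to apply the new level to
        them, repack with `git repack -F`.

Both --no-reuse-delta and --no-reuse-object are existing pack-objects
options; only their mapping onto -f and -F here follows the proposal
discussed above.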