For a couple of years now I've had a maintenance script which repacks
all the repos at @dayjob thusly:

  git config repack.usedeltabaseoffset true
  git config pack.compression 9
  git config pack.indexversion 2
  git config gc.autopacklimit 4
  git config gc.packrefs true
  git config gc.reflogexpire never
  git config gc.reflogexpireunreachable never
  git gc --auto --aggressive --prune

This has worked fine on repos large and small. However, as of a couple
of days ago git started running out of memory while repacking a
relatively modest repo[*] on a Linux box with 12GB memory (+ 12GB
swap). I am able to gc the repo by either removing --aggressive or
.keep'ing the oldest pack (see the P.S. below for a sketch of the
latter).

[*] Stats:

  du -hs objects
  141M    objects

  git count-objects -v
  count: 0
  size: 0
  in-pack: 57656
  packs: 37
  size-pack: 143811
  prune-packable: 0
  garbage: 0

  git version 1.7.10

I've since found a message from Shawn recommending against using
--aggressive:

  http://groups.google.com/group/repo-discuss/msg/d2462eed67813571

> Junio Hamano and I looked at things a few weeks ago; it turns out the
> --aggressive flag doesn't generally provide a benefit like we thought
> it would. It would be safe to remove from your GC script, and will
> speed things up considerably.

A couple of questions:

1) If --aggressive does not generally provide a benefit, should it be
   made a no-op?

2) Is it expected that gc --aggressive would run out of memory on this
   repo?

I've posted the repo in case anyone wants to take a look:

  http://dl.dropbox.com/u/2138120/WebKit-trimmed.git.zip

j.
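
P.S. For concreteness, the .keep workaround mentioned above looks
roughly like this. It assumes the commands run from inside the bare
repository (where the objects/ directory from the stats above lives)
and that "oldest pack" means the pack file with the oldest mtime:

  # Mark the oldest pack as kept; repack/gc will leave it alone.
  oldest=$(ls -t objects/pack/pack-*.pack | tail -n 1)
  touch "${oldest%.pack}.keep"
  # gc then only rewrites the remaining packs, which keeps the
  # aggressive repack's memory use down on this repo.
  git gc --auto --aggressive --prune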