On Tue, Apr 17, 2012 at 12:16:15PM -0400, Jay Soffian wrote:

> For a couple years now I've had a maintenance script which repacks all
> the repos at @dayjob thusly:
>
> git config repack.usedeltabaseoffset true
> git config pack.compression 9
> git config pack.indexversion 2
> git config gc.autopacklimit 4
> git config gc.packrefs true
> git config gc.reflogexpire never
> git config gc.reflogexpireunreachable never
> git gc --auto --aggressive --prune
>
> This has worked fine on repos large and small. However, starting a
> couple days ago git started running out of memory on a relatively
> modest repo[*] while repacking on a Linux box with 12GB memory (+ 12GB
> swap). I am able to gc the repo by either removing --aggressive or
> .keep'ing the oldest pack.

I wonder where the memory is going. In theory, the memory consumption
for packing comes from keeping all of the objects for a given window in
memory (so we are looking for a delta for object X, and we have a
window of Y[0]..Y[$window] objects that we will consider). And for a
multi-threaded pack, that's per-thread.

How many cores are there on this box? Have you tried setting
pack.windowMemory to (12 / # of cores) or thereabouts?

> 1) If --aggressive does not generally provide a benefit, should it be
> made a no-op?

In your case, I think it is overkill. But it seems lame that git
_can't_ do a full repack on such a beefy machine. You don't want to do
it all the time, but you might want to do it at least once.

-Peff
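
[Editorial note: a rough, untested sketch of the pack.windowMemory
suggestion above. It assumes nproc is available and that pack.threads
is left at its default (one thread per core); the per-thread budget is
only a starting point, not something the original mail specifies.]

  # split roughly 12GB of RAM across the pack threads (git uses one
  # thread per core by default), expressed in megabytes
  cores=$(nproc)
  git config pack.windowMemory "$(( 12 * 1024 / cores ))m"
  git gc --aggressive --prune

Rounding the per-thread budget down rather than up leaves some headroom
for the rest of the packing bookkeeping.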