On Thu, 14 Feb 2008, Brandon Casey wrote:

> I have successfully repacked this repo a few times on a 2.1GHz system
> with 16G.
>
> The smallest attained pack was about 1.45G (1556569742B).
> [...]
>
> * Multi threaded (250m window)
>   [pack]
>           threads = 4
>           windowmemory = 250m
>           compression = 9
>   [repack]
>           usedeltabaseoffset = true
>
>   pack size: 1767405703
>   time: 3 hours
>
> First >99% took 50min. Last 10000 objects took 2 hours.

Right.  That is because the algorithm distributing the load between
threads lets a thread steal work from the others as soon as it is done
with its own share.  The easy objects are therefore quickly dealt with
by a few threads, until all threads converge onto the hard ones.  In
the non-threaded case, the slowdown occurs around the 12% mark.

It looks like those hard objects are huge binary blobs.  If they could
be removed from the repository entirely, and regenerated as needed
instead of being carried around, then I expect the repository size
would fall below the 500MB mark.

Nicolas
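The work distribution described above can be sketched as a toy model.
This is only an illustration, assuming a simple per-thread queue with
tail-stealing; git's real delta-search code is C and partitions the
object list differently, and all names here are made up:

```python
import threading
from collections import deque

def worker(queues, results, lock, my_id):
    """Drain our own queue first; once it is empty, steal from the
    peer with the most work left.  This mirrors the behaviour above:
    threads that finish their easy objects early all converge on
    whichever queues still hold the hard ones."""
    while True:
        with lock:
            if queues[my_id]:
                obj = queues[my_id].popleft()
            else:
                # Our share is done: pick the busiest peer to steal from.
                donor = max(range(len(queues)), key=lambda i: len(queues[i]))
                if not queues[donor]:
                    return                  # every queue is empty: all done
                obj = queues[donor].pop()   # steal from the tail
            # Stand-in for deltifying obj.  In real code the expensive
            # work would of course happen outside the lock.
            results.append((my_id, obj))

def pack(objects, num_threads=4):
    """Round-robin split of the objects, then run the workers."""
    queues = [deque() for _ in range(num_threads)]
    for i, obj in enumerate(objects):
        queues[i % num_threads].append(obj)
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker,
                                args=(queues, results, lock, t))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Every object is processed exactly once, but nothing stops several
threads from ending up stuck on the same donor's remaining hard
objects, which is the long tail observed in the timings above.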
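Removing such blobs from history can be done with stock git commands.
The following is a self-contained toy demonstration (it builds a
scratch repository first so it can run anywhere); in a real repository
you would run only the filter-branch/expire/repack steps, with the
hypothetical path big-blobs/ replaced by the actual offenders, and note
that filter-branch rewrites history, so every clone must re-fetch:

```shell
set -e
# --- scratch repository with one "huge" blob, for demonstration only ---
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
mkdir big-blobs
head -c 1048576 /dev/zero > big-blobs/huge.bin   # 1MB stand-in blob
echo 'int main(void) { return 0; }' > main.c
git add . && git commit -qm 'initial import'

# --- the actual removal ---
# Rewrite every commit on every ref with the blobs dropped from the index.
export FILTER_BRANCH_SQUELCH_WARNING=1
git filter-branch --index-filter \
        'git rm -r --cached --ignore-unmatch big-blobs/' \
        --tag-name-filter cat -- --all

# The old objects only disappear once the backup refs and reflog
# entries are gone and the repository is repacked.
rm -rf .git/refs/original
git reflog expire --expire=now --all
git repack -a -d

git ls-tree -r HEAD --name-only   # big-blobs/ is no longer present
```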