On Sat, Aug 11, 2007 at 05:12:24PM -0400, Jon Smirl wrote:
> If anyone is bored and looking for something to do, making the delta
> code in git repack multithreaded would help. Yesterday I did a big
> repack that took 20 minutes and it only used one of my four cores. It
> was compute bound the entire time.

First, how much time is used by the write phase and how much by the
deltify phase? If the writing phase uses too much time and you have
enough free memory, you can try to raise the config variable
pack.deltacachelimit (default 1000); see the example command at the
end of this mail. It saves an additional delta operation for every
object whose delta is smaller than pack.deltacachelimit, by caching
that delta.

Have you considered the impact on memory usage if there are large
blobs in the repository? While repacking, git keeps $window_size
(default: 10) objects unpacked in memory. For all but one of them, it
additionally stores a delta index, which has about the same size as
the object. So the worst-case memory usage is

  sizeof(biggest object) * (2 * $window_size - 1)

If you have blobs >= 100 MB, you need some GB of memory (with the
default window, that is 100 MB * 19, i.e. about 1.9 GB, for the delta
window alone).

Partitioning the problem is not trivial:

* To avoid worse packing results, we must first sort all objects by
  type, path and size. Then we can split the list into one part per
  task and deltify each part independently (a rough sketch follows at
  the end of this mail). The problems are:
  - We need more memory, as each task keeps its own window of
    $window_size objects (plus delta indexes) in memory.
  - The list must be split into parts that require the same amount of
    time. This is difficult, as it depends on the size of the objects
    as well as on how they are stored (delta chain length).

* On the other hand, we could run all try_delta operations for one
  object in parallel. This way we would not need much more memory,
  but it requires more synchronisation (and more complex code).

Regards,
Martin Kögler
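
P.S.: A concrete example of raising the cache limit mentioned above;
the value 10000 is only an illustration, pick whatever fits the
typical delta size and free memory of your repository:

  git config pack.deltacachelimit 10000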
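
P.P.S.: Below is a rough sketch (C with pthreads) of the "split the
sorted list" approach. It is not git's pack-objects code: struct
object_entry is reduced to a stub and find_deltas_in_range() stands in
for the existing sliding-window deltify loop; only the chunking and
thread handling are meant to illustrate the idea.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the real object_entry in pack-objects.c. */
struct object_entry {
	unsigned long size;	/* uncompressed object size */
	/* ... type, path hash, delta pointers, ... */
};

/* One contiguous part of the sorted object list, handled by one task. */
struct chunk {
	struct object_entry *list;
	unsigned int nr;
	unsigned int window;
};

/*
 * Placeholder for the existing single-threaded deltify loop: for each
 * object, try_delta() against the previous 'window' objects.
 */
static void find_deltas_in_range(struct object_entry *list,
				 unsigned int nr, unsigned int window)
{
	(void)list;
	(void)nr;
	(void)window;
}

static void *deltify_task(void *arg)
{
	struct chunk *c = arg;
	find_deltas_in_range(c->list, c->nr, c->window);
	return NULL;
}

/*
 * Split the sorted list into one contiguous chunk per task and deltify
 * the chunks in parallel.  Splitting by object count is the naive
 * choice; balancing by expected work is the hard part mentioned above.
 */
static int deltify_parallel(struct object_entry *objects, unsigned int nr,
			    unsigned int window, unsigned int ntasks)
{
	pthread_t *threads = malloc(ntasks * sizeof(*threads));
	struct chunk *chunks = malloc(ntasks * sizeof(*chunks));
	unsigned int i, base = 0;

	if (!threads || !chunks) {
		free(threads);
		free(chunks);
		return -1;
	}
	for (i = 0; i < ntasks; i++) {
		unsigned int n = (nr - base) / (ntasks - i);

		chunks[i].list = objects + base;
		chunks[i].nr = n;
		chunks[i].window = window;
		base += n;
		pthread_create(&threads[i], NULL, deltify_task, &chunks[i]);
	}
	for (i = 0; i < ntasks; i++)
		pthread_join(threads[i], NULL);
	free(threads);
	free(chunks);
	return 0;
}

int main(void)
{
	/* toy example: 1000 dummy objects, window 10, 4 tasks */
	unsigned int nr = 1000, i;
	struct object_entry *objs = calloc(nr, sizeof(*objs));

	if (!objs)
		return 1;
	for (i = 0; i < nr; i++)
		objs[i].size = i + 1;
	if (deltify_parallel(objs, nr, 10, 4))
		return 1;
	printf("deltified %u objects in 4 tasks\n", nr);
	free(objs);
	return 0;
}

Note that each task keeps its own private window, so the worst-case
memory estimate above grows with the number of tasks.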