On Wed, Mar 1, 2017 at 9:57 AM, Marius Storm-Olsen <mstormo@xxxxxxxxx> wrote:
>
> Indeed, I did do a
>
>     -c pack.threads=20 --window-memory=6g
>
> to 'git repack', since the machine is a 20-core (40 threads) machine
> with 126GB of RAM.
>
> So I guess with these sized objects, even at 6GB per thread, it's not
> enough to get a big enough window for proper delta-packing?

Hmm. The 6GB window should be plenty good enough, unless your blobs are
in the gigabyte range too.

> This repo took >14hr to repack on 20 threads though ("compression"
> step was very fast, but stuck 95% of the time in "writing objects"),
> so I can only imagine how long a pack.threads=1 will take :)

Actually, it's usually the compression phase that should be slow - but
if something is limiting finding deltas (so that we abort early), then
that would certainly tend to speed up compression.

The "writing objects" phase should be mainly about the actual IO, which
should be much faster *if* you actually find deltas.

> But aren't the blobs sorted by some metric for reasonable delta-pack
> locality, so even with a 6GB window it should have seen ~25 similar
> objects to deltify against?

Yes, they are. The sorting for delta packing tries to make sure that
the window is effective.

However, the sorting is also just a heuristic, and it may well be that
your repository layout ends up screwing up the sorting, so that the
windows just work very badly.

For example, the sorting code thinks that objects with the same name
across the history are good sources of deltas. But it may be that for
your case, the binary blobs that you have don't tend to actually change
in the history, so that heuristic doesn't end up doing anything.

The sorting does use the size and the type too, but the "filename hash"
(which isn't really a hash, it's something nasty to give reasonable
results for the case where files get renamed) is the main sort key.

So you might well want to look at the sorting code too. If the
filenames (particularly the ends of the filenames) of the blobs aren't
good hints for the sorting code, that sort might end up spreading all
the blobs out rather than sorting them by size (a rough sketch of that
filename hash is appended below).

And again, if that happens, the "can I delta these two objects" code
will notice that the sizes of the objects are wildly different and
won't even bother trying. Which speeds up the "compressing" phase, of
course, but then because you don't get any good deltas, the "writing
out" phase sucks donkey balls because it does zlib compression on big
objects and writes them out to disk.

So there are certainly multiple possible reasons for the deltification
to not work well for you.

How sensitive is your material? Could you make a smaller repo with some
of the blobs that still show the symptoms? I don't think I want to
download 206GB of data even if my internet access is good.

                 Linus
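
The "filename hash" sort key discussed above lives in git's
pack-objects code (pack_name_hash(), if memory serves). Below is a
rough, self-contained sketch of the idea - an approximation for
illustration, not the verbatim git source - and the example paths in
main() are made up purely to show the behaviour:

/*
 * Rough sketch of the "filename hash" heuristic described above.
 * It folds roughly the last 16 non-whitespace characters of a path
 * into a 32-bit number, with the final characters counting the most,
 * so paths with similar endings sort next to each other in the delta
 * window even across renames.
 */
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t name_hash(const char *name)
{
	uint32_t c, hash = 0;

	if (!name)
		return 0;
	while ((c = (unsigned char)*name++) != 0) {
		if (isspace(c))
			continue;
		/* Earlier characters decay toward the low bits; the last ones dominate. */
		hash = (hash >> 2) + (c << 24);
	}
	return hash;
}

int main(void)
{
	/* Made-up paths, purely to show which ones hash to nearby values. */
	const char *paths[] = {
		"drivers/net/foo.c",
		"drivers/scsi/foo.c",
		"assets/model-v1.bin",
		"assets/model-v2.bin",
	};
	size_t i;

	for (i = 0; i < sizeof(paths) / sizeof(*paths); i++)
		printf("%08x  %s\n", (unsigned)name_hash(paths[i]), paths[i]);
	return 0;
}

Paths that end the same way (same extension, same basename after a
rename) fold to nearby values and land close together in the window;
blobs whose names carry no such hints get spread out, which is exactly
the failure mode described above.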