We have a serious performance problem with one of our large repos. The repo is our internal version of the android platform/manifest project. After a clean "repack -A -d -F", the repo is close to 8G in size, has over 700K refs, and contains over 8M objects.

The repo takes around 40min to clone locally (same disk to same disk) using git 1.8.2.1 on a high-end machine (56 processors, 128GB RAM)! It takes around 10min before reaching the "resolving deltas" phase, which then takes most of the remaining time. While this is a fairly large repo, a straight cp -r of it takes less than 2min, so I would expect a clone to be on the same order of magnitude in time. For perspective, I have a kernel/msm repo with a third of the ref count and double the object count which takes only around 20min to clone on the same machine (still slower than I would like).

I mention 1.8.2.1 because we have many old machines which still need it. However, I also tested with git v2.18, and it is actually even slower (~140min).

Reading advice on the net, people seem to think that repacking with shorter delta chains would help. I have not had any success with that yet (an example of the kind of repack invocation I mean is sketched at the end of this mail).

I have been thinking about this problem, and I suspect that the compute time is actually spent doing SHA1 calculations. Is that possible? Some basic back-of-the-envelope math and scripting seem to show that the repo may actually contain about 2TB of data if you add up the sizes of all the objects in it (a sketch of the kind of command I mean is also at the end). Some quick research on the net seems to indicate that we might be able to expect something around 500MB/s throughput for computing SHA1s; does that seem reasonable? If I really have 2TB of data, should it then take around 66min to compute the SHA1s for all that data (2TB / 500MB/s ≈ 4,000s ≈ 66min)? Could my repo clone time really be dominated by SHA1 math?

Any advice on how to speed up cloning this repo, or on what to pursue further in my investigation?

Thanks,

-Martin

--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
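
P.S. For reference, the "shorter delta chains" suggestions I have seen boil down to something like the invocation below. This is only an illustration of the kind of command people recommend; the particular --depth and --window values are placeholders, not values I am claiming are right for this repo:

  # Shorter delta chains: cap chain length with --depth, widen the delta
  # search window with --window (both documented in git-repack(1)).
  git repack -A -d -F --depth=10 --window=250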
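
P.P.S. The back-of-the-envelope object-size total came from something roughly along these lines (a sketch of the idea, not necessarily exactly what I ran; it needs a git new enough to have --batch-all-objects, e.g. the v2.18 I tested with):

  # Sum the (uncompressed) size of every object in the repository.
  git cat-file --batch-all-objects --batch-check='%(objectsize)' \
    | awk '{ sum += $1 } END { print sum }'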