On Wed, Jul 14, 2021 at 10:32:56AM -0700, Elijah Newren wrote:

> > It's slightly sad that we only got a 30% CPU improvement in the past
> > 10 years. Are you just counting clock speed as a short-hand here? I
> > think that doesn't tell the whole story. But all of this is a
> > side-note anyway. What I care about is your actual timings. :)
>
> I'm using shorthand when discussing file sizes above (though I used
> actual measurements when picking new values below). But the 30% came
> from measuring the timings with the exact same sample file as you and
> using a lightly modified version of your original script (tweaked to
> also change file basenames) on an AWS c5xlarge instance. My timings
> showed they were only about 30% faster than what you got when you
> last bumped the limits.

Interesting. My timings are much faster. With a 20k file, I get (on my
laptop, which is an i9-9880H):

  N       CPU (2008)   CPU (now)
  10         0.43s       0.007s
  100        0.44s       0.071s
  200        1.40s       0.226s
  400        4.87s       0.788s
  800       18.08s       2.887s
  1000      27.82s       4.431s

The 2008 timings are from the old email you linked in your commit
message, and the new ones are from running the revised script you
showed. The savings seem like more than 30%; I don't know if that's all
CPU or if something changed in the code.

Using a 3k file (the median for ls-tree), the numbers are similar but a
little smaller (my n=1300 is about 1.4s).

So I think we're both in the same ballpark (and certainly an AWS
machine is a perfectly fine representative sample of where people might
run Git).

-Peff
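For readers who want to reproduce numbers along these lines, below is a
minimal sketch of a rename-detection benchmark. It is not the script
discussed in the thread: the file layout, the ~20k file size, the tweak
applied before renaming, and the -l10000 rename-limit override are all
assumptions made for illustration; only standard git commands are used.

  import os
  import resource
  import subprocess
  import tempfile

  # Hypothetical reconstruction (not the script from the thread): create
  # n files of roughly `size` bytes, commit them, rename them all while
  # tweaking the content slightly, and measure the CPU time git spends
  # detecting the renames in the resulting diff.

  def run(args, cwd):
      subprocess.run(args, cwd=cwd, check=True, stdout=subprocess.DEVNULL)

  def child_cpu():
      # Cumulative user+system CPU of all waited-for child processes
      # (POSIX only), so a delta around one git call isolates its cost.
      ru = resource.getrusage(resource.RUSAGE_CHILDREN)
      return ru.ru_utime + ru.ru_stime

  def benchmark(n, size=20_000):
      with tempfile.TemporaryDirectory() as repo:
          run(["git", "init", "-q"], repo)
          filler = b"x" * 70 + b"\n"
          for i in range(n):
              with open(os.path.join(repo, f"old-{i}.txt"), "wb") as f:
                  f.write(b"file %d\n" % i)          # make each file unique
                  f.write(filler * (size // len(filler)))
          run(["git", "add", "."], repo)
          run(["git", "-c", "user.name=bench", "-c",
               "user.email=bench@example.com",
               "commit", "-q", "-m", "before"], repo)
          for i in range(n):
              # Change both the basename and a bit of the content so git
              # has to run the inexact (content-similarity) rename pass
              # rather than matching files by identical blobs.
              os.rename(os.path.join(repo, f"old-{i}.txt"),
                        os.path.join(repo, f"new-{i}.txt"))
              with open(os.path.join(repo, f"new-{i}.txt"), "ab") as f:
                  f.write(b"tweaked\n")
          run(["git", "add", "-A"], repo)
          before = child_cpu()
          # -M enables rename detection; -l keeps the rename limit high
          # enough that detection is not skipped for large n.
          run(["git", "diff", "--cached", "-M", "-l10000", "--stat"], repo)
          return child_cpu() - before

  if __name__ == "__main__":
      for n in (10, 100, 200, 400, 800, 1000):
          print(f"{n:5d}  {benchmark(n):7.3f}s")

Absolute numbers from a sketch like this will of course depend on the
machine, the git version, and the chosen file contents; it is only meant
to show the shape of the measurement, not to reproduce the table above.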