On Tue, Mar 28, 2017 at 03:50:34PM -0400, Jeff Hostetler wrote:

> It was a convenient way to isolate, average, and compare
> read_index() times, but I suppose we could do something
> like that.
>
> I did confirm that an ls-files does show a slight 0.008
> second difference on the 58K file Linux tree when toggled
> on or off.

Yeah, I agree it helps isolate the change. I'm just not sure we want to
carry a bunch of function-specific perf-testing code. And one of the
nice things about testing a real command is that it's...a real command.
So it's an actual improvement a user might see.

> But I'm tempted to suggest that we just omit my helper exe
> and not worry about a test -- since we don't have any test
> repos large enough to really demonstrate the differences.
> My concern is that that 0.008 would be lost in the noise
> of the rest of the test and make for an unreliable result.

Yeah, I think that would be fine. You _could_ write a t/perf test and
then use your 400MB monstrosity as GIT_PERF_LARGE_REPO. But given that
most people don't have such a thing, there's not much value over you
just showing off the perf improvement in the commit message.

We could also have a t/perf test that generates a monstrous index and
shows that it's faster. But frankly, I don't think this is all that
interesting as a performance regression test. It's not like there's
something subtle about the performance improvement; we stopped
computing the SHA-1, and (gasp!) it takes exactly one SHA-1
computation's less time.

So just mentioning the test case and the improvement in the commit
message is sufficient, IMHO.

-Peff
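
[Editor's note: for readers unfamiliar with t/perf, a script along the
lines Peff describes might look roughly like the sketch below. It
assumes the stock perf-lib.sh helpers; the p9999 file name, the test
title, and the choice of ls-files as the measured command are
illustrative and not part of this thread.]

    #!/bin/sh
    # t/perf/p9999-read-index.sh (hypothetical name, for illustration only)

    test_description='time index reads via a real command (ls-files)'

    . ./perf-lib.sh

    # Honors GIT_PERF_LARGE_REPO when set, so a suitably large index
    # can be measured instead of the small default test repository.
    test_perf_large_repo

    test_perf 'git ls-files (forces an index read)' '
    	git ls-files >/dev/null
    '

    test_done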