On Tue, Oct 05, 2021 at 01:45:03PM -0400, Taylor Blau wrote:

> > 	GIT_PERF_REPEAT_COUNT=3 \
> > 	test_perf "status" "
> > 		git status
> > 	"
> >
> > 	GIT_PERF_REPEAT_COUNT=1 \
> > 	test_perf "checkout other" "
> > 		git checkout other
> > 	"
> [...]
>
> Well explained, and makes sense to me. I didn't know we set
> GIT_PERF_REPEAT_COUNT inline with the performance tests themselves, but
> grepping shows that we do it in the fsmonitor tests.

Neither did I. IMHO that is a hack that we would do better to avoid,
since the point of the variable is to let the user drive the trade-off
between time spent and quality of results. So the first example above
spends extra time that the user may have asked us not to, and the
second gets less significant results by not repeating the trial.

Presumably the issue in the second one is that the test modifies state.
The "right" solution there is to give test_perf() a way to set up the
state between trials (you can do it in the test_perf block, but you'd
want to avoid letting the setup step affect the timing). See the
postscript for a sketch.

I'd also note that:

  GIT_PERF_REPEAT_COUNT=1 \
  test_perf ...

in the commit message is a bad pattern. On some shells, a one-shot
variable assignment before a function call will persist after the
function returns (so it would accidentally tweak the count for later
tests, too). There's a small demo in the second postscript.

All that said, I do think cleaning up the test_time files after each
test_perf is a good precaution, even if I don't think it's a good idea
in general to flip the REPEAT_COUNT variable in the middle of a test.

-Peff
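
P.S. Here is a rough sketch of the setup-between-trials idea. The
"--setup" option is hypothetical (perf-lib has nothing like it as I
write this), and the branch name "main" is made up:

	# hypothetical: perf-lib would run the --setup snippet before
	# each trial, but start the timer only for the test body itself
	test_perf "checkout other" --setup '
		git checkout main
	' '
		git checkout other
	'

That way every trial starts from the same state, only the checkout of
"other" is measured, and GIT_PERF_REPEAT_COUNT stays entirely in the
user's hands.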
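
P.P.S. A minimal demo of the one-shot variable gotcha (the function
name is made up); try it in a couple of shells:

	some_func() {
		echo "inside: $GIT_PERF_REPEAT_COUNT"
	}

	GIT_PERF_REPEAT_COUNT=1 some_func
	echo "after: $GIT_PERF_REPEAT_COUNT"

POSIX leaves it unspecified whether the assignment outlives the
function call. bash throws it away, so "after:" shows whatever the
variable held before; ksh keeps it and prints "after: 1", which is
exactly how a later test_perf in the same script could silently run
with the tweaked count.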