On Mon, Oct 04, 2021 at 10:29:03PM +0000, Jeff Hostetler via GitGitGadget wrote:
> From: Jeff Hostetler <jeffhost@xxxxxxxxxxxxx>
>
> Teach test_perf_() to remove the temporary test_times.* files

Small nit: s/test_times/test_time here and throughout.

> at the end of each test.
>
> test_perf_() runs a particular GIT_PERF_REPEAT_COUNT times and creates
> ./test_times.[123...]. It then uses a perl script to find the minimum
> over "./test_times.*" (note the wildcard) and writes that time to
> "test-results/<testname>.<testnumber>.result".
>
> If the repeat count is changed during the pXXXX test script, stale
> test_times.* files (from previous steps) may be included in the min()
> computation. For example:
>
>     ...
>     GIT_PERF_REPEAT_COUNT=3 \
>     test_perf "status" "
>         git status
>     "
>
>     GIT_PERF_REPEAT_COUNT=1 \
>     test_perf "checkout other" "
>         git checkout other
>     "
>     ...
>
> The time reported in the summary for "XXXX.2 checkout other" would
> be "min( checkout[1], status[2], status[3] )".
>
> We prevent that error by removing the test_times.* files at the end of
> each test.

Well explained, and makes sense to me. I didn't know we set
GIT_PERF_REPEAT_COUNT inline with the performance tests themselves, but
grepping shows that we do it in the fsmonitor tests.

Dropping any test_times files makes sense as the right thing to do. I
have no opinion on whether it should happen before running a perf test,
or after generating the results. So what you did here looks good to me.

An alternative approach might be to only read the test_time.n files we
know should exist based on GIT_PERF_REPEAT_COUNT, perhaps like:

    test_seq "$GIT_PERF_REPEAT_COUNT" |
    perl -lne 'print "test_time.$_"' |
    xargs "$TEST_DIRECTORY/perf/min_time.perl" >"$base".result

but I'm not convinced that the above is at all better than what you
wrote, since leaving the extra files around is the footgun we're trying
to avoid in the first place.

Thanks,
Taylor
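
P.S. To be concrete about the "after generating the results" option: below is
roughly how I picture the cleanup sitting in test_perf_(). This is only a
sketch pieced together from the commit message's description, not a quote of
the patch; the surrounding lines and the exact "rm" invocation are my guess at
what perf-lib.sh looks like around the min_time.perl call.

    # ...timing loop above has run the test body $GIT_PERF_REPEAT_COUNT
    # times, writing test_time.1, test_time.2, ...; $base is already the
    # test-results/<testname>.<testnumber> prefix...
    "$TEST_DIRECTORY"/perf/min_time.perl test_time.* >"$base".result &&
    rm -f test_time.*	# drop the timings so the next test_perf starts clean

The important bit is just that the removal happens after the result file is
written and before the next test_perf can run with a smaller repeat count.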