Re: [PATCH] t/perf/perf-lib.sh: remove test_times.* at the end of test_perf_()

On 10/10/21 5:26 PM, SZEDER Gábor wrote:
> On Tue, Oct 05, 2021 at 01:45:03PM -0400, Taylor Blau wrote:
> > On Mon, Oct 04, 2021 at 10:29:03PM +0000, Jeff Hostetler via GitGitGadget wrote:
> > > From: Jeff Hostetler <jeffhost@xxxxxxxxxxxxx>
> > >
> > > Teach test_perf_() to remove the temporary test_times.* files

> > Small nit: s/test_times/test_time here and throughout.

> > > at the end of each test.
> > >
> > > test_perf_() runs a particular test GIT_PERF_REPEAT_COUNT times and
> > > creates ./test_times.[123...].  It then uses a perl script to find
> > > the minimum over "./test_times.*" (note the wildcard) and writes
> > > that time to "test-results/<testname>.<testnumber>.result".
> > >
> > > If the repeat count is changed during the pXXXX test script, stale
> > > test_times.* files (from previous steps) may be included in the
> > > min() computation.  For example:

> > > ...
> > > GIT_PERF_REPEAT_COUNT=3 \
> > > test_perf "status" "
> > > 	git status
> > > "
> > >
> > > GIT_PERF_REPEAT_COUNT=1 \
> > > test_perf "checkout other" "
> > > 	git checkout other
> > > "
> > > ...

> > > The time reported in the summary for "XXXX.2 checkout other" would
> > > be "min( checkout[1], status[2], status[3] )".
> > >
> > > We prevent that error by removing the test_times.* files at the
> > > end of each test.
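
For illustration, here is roughly what that cleanup looks like inside
test_perf_() (a sketch only; the function body is heavily abbreviated,
and names like "$base" just follow the description above):

	test_perf_ () {
		# ... run the test body $GIT_PERF_REPEAT_COUNT times, each
		# run writing its timings to its own test_time.$i file ...

		# min_time.perl picks the smallest sample out of
		# test_time.* and writes it to this test's result file.
		"$TEST_DIRECTORY"/perf/min_time.perl test_time.* >"$base".result

		# Remove the per-run timing files so that a later test_perf
		# with a smaller GIT_PERF_REPEAT_COUNT cannot pick up stale
		# samples.
		rm test_time.*
	}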

> > Well explained, and makes sense to me.  I didn't know we set
> > GIT_PERF_REPEAT_COUNT inline with the performance tests themselves,
> > but grepping shows that we do it in the fsmonitor tests.
> >
> > Dropping any test_times files makes sense as the right thing to do.
> > I have no opinion on whether it should happen before running a perf
> > test, or after generating the results.  So what you did here looks
> > good to me.

> I think it's better to remove those files before running the perf
> test, and leave them behind after the test has finished.  This would
> give developers an opportunity to use the timing results for whatever
> other statistics they might be interested in.

I could see doing it before.  I'd like to leave it as is for now.
Let's fix the correctness issue now; we can fine-tune it later (with
your suggestion below).
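
For comparison, SZEDER's variant would simply hoist the removal to the
top of the function (again only a sketch, with the same abbreviations
as above):

	test_perf_ () {
		# Clear out timing files left over from a previous
		# test_perf invocation; -f because there may be none.
		rm -f test_time.*

		# ... run the repetitions and compute the minimum as
		# before, leaving test_time.* behind for later analysis ...
	}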

> > That makes me wonder if it would be better to have the script
> > keep all of the test time values?  That is, create something like
> > test_time.$test_count.$test_seq.  Then you could look at all of
> > the timings over the whole test script, rather than just those of
> > the one where you stopped it.
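
Something like this, perhaps (a hypothetical sketch; run_one_repetition
stands in for the framework's actual per-run helper, and $test_count is
the perf framework's test counter):

	for i in $(test_seq 1 $GIT_PERF_REPEAT_COUNT)
	do
		run_one_repetition "$2"	# writes test_time.$i, as today
		# keep every sample, namespaced by test number:
		mv test_time.$i test_time.$test_count.$i
	done

	# min() then only ever sees this test's own samples ...
	"$TEST_DIRECTORY"/perf/min_time.perl test_time.$test_count.* >"$base".result
	# ... and every test_time.<count>.<seq> survives for later analysis.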

> And yes, I think it would be better if 'make test' left behind
> 't/test-results' with all the test trace output for later analysis as
> well.  E.g. grepping through the test logs can uncover bugs like this:
>
>    https://public-inbox.org/git/20211010172809.1472914-1-szeder.dev@xxxxxxxxx/
>
> and I've fixed several similar test bugs that I've found when looking
> through 'test-results/*.out'.  Alas, it's always a bit of a hassle to
> comment out 'make clean-except-prove-cache' in 't/Makefile'.
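
Once test-results/ survives a run, even a crude scan over it can
surface problems across the whole suite.  For instance (the patterns
here are only illustrative):

	# flag any test whose trace output mentions a crash or an
	# internal error:
	grep -l -e 'Segmentation fault' -e 'BUG:' test-results/*.out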


Jeff


