On Wed, Jan 30 2019, Johannes Schindelin wrote:

> Hi Ævar,
>
> On Wed, 30 Jan 2019, Ævar Arnfjörð Bjarmason wrote:
>
>> On Wed, Jan 30 2019, Johannes Schindelin wrote:
>>
>> > On Tue, 29 Jan 2019, Ævar Arnfjörð Bjarmason wrote:
>> >
>> >> On Tue, Jan 29 2019, Jeff King wrote:
>> >>
>> >> > On Tue, Jan 29, 2019 at 06:56:08AM -0800, Derrick Stolee via
>> >> > GitGitGadget wrote:
>> >> >
>> >> >> From: Derrick Stolee <dstolee@xxxxxxxxxxxxx>
>> >> >>
>> >> >> When running the test suite for code coverage using
>> >> >> 'make coverage-test', a single test failure stops the
>> >> >> test suite from completing. This leads to significant
>> >> >> undercounting of covered blocks.
>> >> >>
>> >> >> Add two new targets to the Makefile:
>> >> >>
>> >> >> * 'prove' runs the test suite using 'prove'.
>> >> >>
>> >> >> * 'coverage-prove' compiles the source using the
>> >> >>   coverage flags, then runs the test suite using
>> >> >>   'prove'.
>> >> >>
>> >> >> These targets are modeled after the 'test' and
>> >> >> 'coverage-test' targets.
>> >> >
>> >> > I think these are reasonable to have (and I personally much prefer
>> >> > "prove" to the raw "make test" output anyway).
>> >>
>> >> I wonder if anyone would mind if we removed the non-prove path.
>> >>
>> >> When I added it in 5099b99d25 ("test-lib: Adjust output to be valid
>> >> TAP format", 2010-06-24) there were still some commonly shipped OSes
>> >> that had a crappy old "prove", but now, almost a decade later, that's
>> >> not a practical problem: it's installed by default with perl, and we
>> >> already depend on perl for the tests.
>> >
>> > It's not only about crappy old `prove`, it is also about requiring
>> > Perl (and remember, Perl is not really native in Git for Windows'
>> > case;
>>
>> We require perl now for testing; NO_PERL is just for the installed
>> version of git.
>
> Which is confusing, to put it nicely.
>
>> If you change the various places in test-lib.sh and
>> test-lib-functions.sh that unconditionally use "perl" or "$PERL_PATH",
>> hundreds or thousands of tests fail (I didn't take an exact count, I
>> just watched the failures scroll by).
>
> I know. Oh boy, I know.
>
> But we do not have to keep that status quo, nor do we have to make it
> worse.
>
> It would not surprise me in the least if we could accelerate our entire
> test suite by reducing our heavy reliance on scripting (including Perl)
> to the point that it really takes too little time *not* to run. (Right
> now, if you are on Windows, you had better think twice before you start
> the test suite: it will easily take over 3h (!!!) to run in a regular
> developer setup. Even on a regular Mac, I would think twice before
> starting a run that blocks my machine for easily 20 minutes straight.
> Needless to say, few developers, if any, use it to validate their
> patches, in particular on Windows. Meaning: for all practical purposes,
> the test suite is nearly useless on Windows.)
>
> So let's not bake *even more* Perl usage into our test suite. Thanks.
>
>> So my assumption is that anyone running the tests now has perl anyway,
>> and thus a further hard dependency on it won't hurt anything.
>
> By that token, the effort to turn many a script into a built-in for
> better performance and substantially better error checking would be
> totally nonsensical. "Because anyone running Git used those scripts
> anyway, so making them a hard dependency won't hurt anything"?
>
> I do not believe for even a fraction of a second that that effort is
> nonsensical.
> Just like I do not believe for even a fraction of a second that it makes
> sense for our test suite to rely on scripting so much. Or for us to make
> that reliance even bigger, for that matter.
>
>> > I still have a hunch that we could save on time *dramatically* by
>> > simply running through regular `make` rather than through `prove`).
>>
>> My hunch is that on the OSes where this would matter (e.g. Windows) the
>> overhead is mainly in spawning the processes, and it doesn't matter
>> whether it's make or perl doing the spawning, but I have nothing to
>> back that up...
>
> I have at least the experience of several thousand runs of the test
> suite on Windows, together with a couple dozen hours spent recently
> *just* on making the CI of GitGitGadget at least bearable.
>
> So I do not quite understand why you offered a contrary opinion when
> you have nothing to back it up.
>
> I mean, I would really like to have an informed discussion with you, to
> benefit from your skills and from your experience to make the entire
> design of our test suite better (there is so much room for improvement,
> we should really be able to put our knowledge together to enhance it).
> It needs to be based on facts, of course.

Let's get some numbers then. On master, go to the "t" directory and run
this:

    for f in t[0-9]*.sh
    do
        (echo '#!/bin/sh' &&
         echo "echo ok 1 $f" &&
         echo sleep 1 &&
         echo echo 1..1) >$f
    done

That effectively turns all our tests into a "hello world" with a sleep
of 1 second. Then run both:

    time prove -j12 t00[0-9]*.sh

And:

    time make -j12 t00[0-9]*.sh

For some value of -j12 and t00[0-9]*.sh.

In my testing "make" is a bit faster, but not by any amount that would
matter when this is run for real.
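
For reference, each stub the loop writes should end up looking like this
(t0000-basic.sh here is just a stand-in for whatever $f happens to be):

    #!/bin/sh
    echo ok 1 t0000-basic.sh
    sleep 1
    echo 1..1

That is, when run it prints a single "ok" line, sleeps for a second, and
then emits the "1..1" plan, which prove accepts as a passing one-test
script. Note that the loop clobbers the real test scripts in your working
tree, so once you have your numbers you can restore them with e.g.
"git checkout -- ." from the "t" directory.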