Re: [PATCH v2 1/1] Makefile: add a prerequisite to the coverage-report target

On Mon, Apr 11 2022, Junio C Hamano wrote:

> Ævar Arnfjörð Bjarmason <avarab@xxxxxxxxx> writes:
>
>> I haven't come up with a patch for these coverage targets, but I think
>> it would be much more useful to:
>>
>>  * Not have the target itself compile git, i.e. work like SANITIZE=leak;
>>    there's no reason you shouldn't be able to e.g. combine the two
>>    easily, it's just a complementary set of flags.
>>
>>  * We should be able to run this in parallel, see
>>    e.g. https://stackoverflow.com/questions/14643589/code-coverage-using-gcov-on-parallel-run
>>    for a trick for how to do that on gcc 9+, on older gcc GCOV_PREFIX*
>>    can be used.
>>
>>    I.e. we'd save these in t/test-results/t0001.coverage or whatever,
>>    and then "join" them at the end.
>
> I can see how this might lead to "Ah, *.coverage file exists so we
> run report to show that existing result", but it is not reasonable
> to say "we didn't touch t0001 so we do not have to rerun the script
> under coverage-test" because whatever git subcommand we use may have
> been updated (we _could_ describe the dependency fully so we only
> re-run t0001 if any of t0001-init.sh, "git init", "git config", and
> "git" is newer than the existing t0001.coverage; I do not know if
> that is sensible, though).  And ...

I've done some experimenting with having "make -C t t0001-init.sh" only
run if the underlying code changed, which we can do by scraping the
trace2 output and the generated *.d files, and e.g. making t0001-init.sh
depend on whatever builtin/init-db.c and the rest depend on.

But that's not what I'm talking about here, I'm just saying that we'd do
a normal "make test" where we write the gcov data per-test into
t/test-results/t0001 and join it at the end of the run.

So the equivalent of a FORCE run, just like now.

>> I wonder if the issue this patch is trying to address would then just go
>> away, i.e. isn't it OK that we'd re-run the tests to get the report
>> then? gcov doesn't add that much runtime overhead.
>
> ... I don't think overhead of gcov matters all that much.  Overhead
> of "Having to" rerun tests primarily comes from running the tests,
> with or without gcov enabled, so...

No, on a multi-core machine the inability to run with -jN is the main
factor making this run slow. E.g. on my 8-core box the tests run in
2-3 minutes with -j8; with -j1 it's 20-25 minutes.

(-j1 numbers from wetware memory, I didn't want to wait for that slower
run while writing this, which is pretty much the point...).

So I'm wondering if the desire to keep the old coverage report around is
really just a consequence of the current implementation running so
slowly.

We could also make that work, e.g. with order-only dependencies, but if
this runs in ~3m with -jN instead of ~30m with -j1, perhaps we could
just treat it like we do "make test" itself.
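For what an order-only-dependency version could look like, a sketch
(target and tool names here are illustrative, not git's actual rules):
the prerequisite after "|" is built if missing, but its being newer does
not by itself force the report to be regenerated.

```make
# Sketch only: "generate-report" and these target names are hypothetical.
coverage-report.txt: | coverage-data
	generate-report coverage-data >$@

coverage-data:
	$(MAKE) coverage-test	# runs the suite with gcov enabled
```

That preserves an existing report across reruns, at the cost of the
staleness problem discussed above.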

> Or are you suggesting that we'd enable gcov in all our test runs by
> default?

No, just that when you do want to generate this report that we can make
it happen much faster than we currently do.

The GCC manual discusses several ways to make gcov work with
parallelism; these rules just don't use any of them.
