On Tue, Sep 06, 2022 at 06:52:26PM -0400, Eric Sunshine wrote:

> On Tue, Sep 6, 2022 at 6:35 PM Eric Wong <e@xxxxxxxxx> wrote:
> > Eric Sunshine via GitGitGadget <gitgitgadget@xxxxxxxxx> wrote:
> > > +unless ($Config{useithreads} && eval {
> > > +    require threads; threads->import();
> >
> > Fwiw, the threads(3perl) manpage has this since 2014:
> >
> >   The use of interpreter-based threads in perl is officially discouraged.
>
> Thanks for pointing this out. I did see that, but as no better
> alternative was offered, and since I did want this to work on Windows,
> I went with it.

I did some timings the other night, and I found something quite curious
with the thread stuff.

Here's a hyperfine run of "make" in the t/ directory before any of your
patches. It uses "prove" to do parallelism under the hood:

  Benchmark 1: make
    Time (mean ± σ):     68.895 s ±  0.840 s    [User: 620.914 s, System: 428.498 s]
    Range (min … max):   67.943 s … 69.531 s    3 runs

So that gives us a baseline. Now the first thing I wondered is how bad
it would be to just run chainlint.pl once per script. So I applied up
to that patch:

  Benchmark 1: make
    Time (mean ± σ):     71.289 s ±  1.302 s    [User: 673.300 s, System: 417.912 s]
    Range (min … max):   69.788 s … 72.120 s    3 runs

I was quite surprised that it made things slower! It's nice that we're
only calling it once per script instead of once per test, but it seems
the startup overhead of the script is really high.

And since in this mode we're only feeding it one script at a time, I
tried reverting the "chainlint.pl: validate test scripts in parallel"
commit.
And indeed, now things are much faster:

  Benchmark 1: make
    Time (mean ± σ):     61.544 s ±  3.364 s    [User: 556.486 s, System: 384.001 s]
    Range (min … max):   57.660 s … 63.490 s    3 runs

And you can see the same thing just running chainlint by itself:

  $ time perl chainlint.pl /dev/null
  real    0m0.069s
  user    0m0.042s
  sys     0m0.020s

  $ git revert HEAD^{/validate.test.scripts.in.parallel}
  $ time perl chainlint.pl /dev/null
  real    0m0.014s
  user    0m0.010s
  sys     0m0.004s

I didn't track down the source of the slowness. Maybe it's loading
extra modules, or maybe it's opening /proc/cpuinfo, or maybe it's the
thread setup. But it's a surprising slowdown.

Now of course your intent is to do a single repo-wide invocation. And
that is indeed a bit faster. Here it is without the parallel code:

  Benchmark 1: make
    Time (mean ± σ):     61.727 s ±  2.140 s    [User: 507.712 s, System: 377.753 s]
    Range (min … max):   59.259 s … 63.074 s    3 runs

The wall-clock time didn't improve much, but the CPU time did.
Restoring the parallel code does improve the wall-clock time a bit,
but at the cost of some extra CPU:

  Benchmark 1: make
    Time (mean ± σ):     59.029 s ±  2.851 s    [User: 515.690 s, System: 380.369 s]
    Range (min … max):   55.736 s … 60.693 s    3 runs

which makes sense. If I do a with/without comparison of just "make
test-chainlint", the parallelism is buying a few seconds of wall-clock
time (parallel run first, then serial):

  Benchmark 1: make test-chainlint
    Time (mean ± σ):     900.1 ms ± 102.9 ms    [User: 12049.8 ms, System: 79.7 ms]
    Range (min … max):   704.2 ms … 994.4 ms    10 runs

  Benchmark 1: make test-chainlint
    Time (mean ± σ):      3.778 s ±  0.042 s    [User: 3.756 s, System: 0.023 s]
    Range (min … max):    3.706 s …  3.833 s    10 runs

I'm not sure what it all means. For Linux, I think I'd be just as
happy with a single non-parallelized test-chainlint run for each file.
But maybe the startup overhead is worse on Windows? OTOH, the whole
test run is so much slower there that one process per script is not
going to matter much in relative terms either way.
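The ~55ms gap between the two chainlint.pl timings could be probed a
bit further. Here's a crude sketch (my own guess at a next step, not
something from this thread) comparing bare interpreter startup against
startup plus loading the threads module:

```shell
# Crude probe of perl startup cost with and without the threads module.
# This hypothesis and invocation are mine, not from the mail; the
# second timing needs a perl built with ithreads.
time perl -e 'exit 0'
if perl -Mthreads -e 'exit 0' 2>/dev/null; then
	time perl -Mthreads -e 'exit 0'
else
	echo 'this perl lacks ithreads; skipping'
fi
```

If the second timing is markedly slower, module loading and thread
bootstrap would be a likelier culprit than opening /proc/cpuinfo.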
And if we did cache the results and avoid extra invocations via
"make", then we'd want all of the parallelism to move to there anyway.

Maybe that gives you more food for thought about whether perl's "use
threads" is worth having.

-Peff
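The caching idea mentioned above could be sketched with stamp files so
that only changed scripts are re-linted; `lint_cached` and the stamp
naming are hypothetical illustrations, not anything in git's tree:

```shell
# Sketch: skip re-linting scripts that have not changed since their
# last clean run. Function and stamp names are made up for
# illustration; git's Makefile does not work this way today.
lint_cached () {
	linter=$1; shift
	for t in "$@"; do
		stamp=".${t##*/}.chainlint-stamp"
		# re-lint only if there is no stamp yet, or the
		# script is newer than its stamp
		if [ ! -e "$stamp" ] || [ "$t" -nt "$stamp" ]; then
			$linter "$t" && touch "$stamp"
		fi
	done
}
```

With per-file stamps expressed as make rules, "make -j" would supply
the parallelism, and unchanged scripts would cost nothing on re-runs.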