On Mon, Jun 5, 2017 at 9:03 PM, Stefan Beller <sbeller@xxxxxxxxxx> wrote:

>> That's never going to be a problem on a less beefy machine with
>> --state=slow,save, since the 30s test is going to be long over by the
>> time the rest of the tests run.
>>
>> Cutting down on these long tail tests allows me to e.g. replace this:
>>
>>     git rebase -i --exec '(make -j56 all && cd t && prove -j56 <some limited glob>)'
>>
>> With a glob that runs the entire test suite, with the rebase only
>> taking marginally longer in most cases while getting much better test
>> coverage than I'd otherwise bother with.
>
> I wonder if this functionality is rather best put into prove?

It would be nice to have a general facility to abort & kill tests based
on some criteria as they're run by Test::Harness, but making that work
reliably with all the edge cases prove needs to deal with (tens/hundreds
of thousands of test suites) is a much bigger project than this.

> Also prove doesn't know which tests are "interesting",
> e.g. if you were working on interactive rebase, then you really
> want the longest test to be run in full?

If I were hacking rebase or another feature that has such a long-running
test, then the long-running test without the timeout would be part of my
"regular" testing. The point of this feature is that most tests aren't
like that, so you can use this and run the full test suite every time.

> And this "judge by time, not by interest" doesn't bode well with
> me.

They're not mutually exclusive.

> I have a non-beefy machine such that this particular problem
> doesn't apply to me, but instead the whole test suite takes just
> long to run.
>
> For that I reduce testing intelligently, i.e. I know where I am
> working on, so I run only some given tests (in case of
> submodules I'd go with "prove t74*") which would also fix
> your issue IIUC?

No, because even when you're working on e.g. "grep", something you're
doing occasionally breaks some completely unrelated test, because it
happens to cover an aspect of grep that isn't part of the main grep
tests.

I ran into this recently while hacking on the wildmatch()
implementation. There are dozens of tests all over the test suite
that'll break in subtle ways if wildmatch() breaks, often in cases
where the main wildmatch test is still passing.

Running the whole thing, even in a limited-timeout fashion, has a much
higher chance of catching whatever I've screwed up early, before I do an
occasional full test suite run. Running the tests in 10 or 15s is a much
shorter time to wait for during an edit/compile/test cycle.
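FWIW, for anyone wanting to approximate this today without patching the
test harness at all: you can get a crude version of the per-script cap
by having prove run every script under coreutils timeout(1) via its
--exec option. This is only a sketch with made-up demo scripts, not the
--state=slow,save machinery discussed above (which is smarter, since it
remembers which tests are actually slow):

```shell
#!/bin/sh
# Sketch: emulate a per-script timeout by telling prove to run each
# test script with "timeout 2 sh" instead of plain "sh". Scripts that
# exceed the cap are killed (timeout exits 124) and reported as
# failures by prove. The two .t files here are hypothetical demos.

dir=$(mktemp -d)

# A fast test that finishes well within the cap.
cat >"$dir/fast.t" <<'EOF'
echo 1..1
echo ok 1
EOF

# A slow test: timeout(1) kills it after 2s, so it never emits "ok 1"
# and prove counts it as a failure.
cat >"$dir/slow.t" <<'EOF'
echo 1..1
sleep 60
echo ok 1
EOF

# --exec sets the interpreter prove uses for each script.
prove --exec 'timeout 2 sh' "$dir/fast.t" "$dir/slow.t"
```

Against git's own suite the equivalent would be roughly "cd t && prove
-j$(nproc) --exec 'timeout 30 sh' t[0-9]*.sh". The obvious downside is
that it's one blunt cap for everything, with no save/skip state between
runs.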