On Wed, Nov 22, 2023 at 08:17:46AM -0800, Darrick J. Wong wrote:
> On Wed, Nov 22, 2023 at 04:44:58PM +0200, Nikolai Kondrashov wrote:
> > On 11/20/23 00:54, Theodore Ts'o wrote:
> > > So as for *me*, I'm going to point people at:
> > >
> > > https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md
> >
> > ...
> >
> > > (And note that I keep the xfstests-bld repos on kernel.org and
> > > github.com both up to date, and I prefer using the github.com
> > > URL because it's easier for the new developer to read and understand
> > > it.)
> >
> > I already queued a switch to the kernel.org URL, which Darrick has suggested.
> > I'll drop it now, but you guys would have to figure it out between yourselves,
> > which one you want :D
> >
> > Personally, I agree that the one on GitHub is more reader-friendly, FWIW.
>
> For xfstests-bld links, I'm ok with whichever domain Ted wants.
>
> > > And similarly, just because the V: line might say, "kvm-xfstests
> > > smoke", someone could certainly use kdevops if they wanted to. So
> > > perhaps we need to be a bit clearer about what we expect the V: line
> > > to mean?
> >
> > I tried to handle some of that with the "subsets", so that you can run a wider
> > test suite and still pass the Tested-with: check. I think this has to be
> > balanced between allowing all the possible ways to run the tests and a
> > reasonable way to certify the commit was tested automatically.
> >
> > E.g. name the test "xfstests", and list all the ways it can be executed, thus
> > communicating that it should still say "Tested-with: xfstests" regardless of
> > the way. And if there is a smaller required subset, name it just "xfstests
> > smoke" and list all the ways it can be run, including the simplest
> > "kvm-xfstests smoke", but accept just "Tested-with: xfstests-smoke".
> >
> > I'm likely getting things wrong, but I hope you get what I'm trying to say.
>
> Not entirely -- for drive-by contributions and obvious bugfixes, a quick
> "V: xfstests-bld: kvm-xfstests smoke" / "V: fstests: ./check -g smoke"
> run is probably sufficient.

For trivial drive-by contributions and obvious bug fixes, I think this is
an unnecessary burden on these potential contributors. If the change is
trivial, there's little burden on the reviewer/maintainer to actually test
it, whilst there is significant burden on the one-off contributor who must
set up an entirely new, unfamiliar testing environment just to fix
something trivial.

If you want every patch tested, then follow the lead of the btrfs
developers: set up a CI mechanism on github, ask people to submit changes
there first, and then provide a link to the successful test completion
ticket with the patch submission.

> (Insofar as n00bs running ./check isn't sufficient, but that's something
> that fstests needs to solve...)
>
> For nontrivial code tidying, the author really ought to run the whole
> test suite. It's still an open question as to whether xfs tidying
> should run the full fuzz suite too, since that increases the runtime
> from overnightish to a week.

Yes, the auto group tests should be run before non-trivial patch sets are
submitted. That is the entire premise of the auto group existing - it's
the set of regression tests we expect to pass for any change.

However, the full on-disk format fuzz tests should not be required to be
run. Asking people to run tests that take a week for general cleanups and
code quality improvements is just crazy talk - the cost-benefit equation
is all out of whack here, especially if the changes have no interaction
with the on-disk format at all.
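For reference, running the smoke or auto groups needs little more than a
couple of spare block devices and a minimal local.config in the fstests
checkout - something like the sketch below, where the device and mount
paths are just placeholders for whatever your test machine has:

```shell
# Minimal fstests local.config sketch. TEST_DEV/TEST_DIR hold a
# long-lived filesystem; SCRATCH_DEV/SCRATCH_MNT get remade by tests
# that need a scratch device. All four paths here are placeholders.
export TEST_DEV=/dev/vdb
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/vdc
export SCRATCH_MNT=/mnt/scratch

# Then, from the top of the fstests checkout:
#   ./check -g smoke    # quick sanity run for trivial fixes
#   ./check -g auto     # the full regression set for non-trivial work
```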
IMO, extensive fuzz testing is something that should be done
post-integration - requiring individual developers to run tests that take
at least a week before they can submit a patchset for review/inclusion is
an excessive burden, and we don't need every developer to run such tests
every time they do something even slightly non-trivial.

It is the job of the release manager to co-ordinate with the testing
resources to run extensive, long-running tests after code has been
integrated into the tree. Forcing individual developers to run this sort
of testing just isn't an efficient use of resources....

> For /new features/, the developer(s) ought to come up with a testing
> plan and run that by the community. Eventually those will merge into
> fstests or ktest or wherever.

That's how it already works, isn't it?

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx