Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx> writes:

> Leah Rumancik <leah.rumancik@xxxxxxxxx> writes:
>
>> Last year we covered the new process for backporting to XFS. There are
>> still remaining pain points: establishing a baseline for new branches
>> is time consuming, testing resources aren't easy to come by for
>> everyone, and selecting appropriate patches is also time consuming. To
>> avoid the need to establish a baseline, I'm planning on converting to
>> a model in which I only run failed tests on the baseline. I test with
>> gce-xfstests and am hoping to automate a relaunch of failed tests.
>> Perhaps the logic to process the results and form new ./check
>> commands could live in fstests-dev in case it is useful for other
>> testing infrastructures.
>
> Nice idea. Another pain point to add -
> 4k blocksize gets tested a lot, but as soon as we switch to large block
> size testing, either with LBS or on a system with a larger pagesize,
> we quickly start seeing problems. Most of them could be testcase
> failures, so if this could help establish a baseline, that would be
> helpful.
>
> Also, if we could collaborate on excludes/known failures w.r.t.
> different test configs, that might come in handy for people who are
> looking to help in this effort. In fact, why not have the different
> filesystem cfg files and their corresponding exclude files as part of
> the fstests repo itself? I know xfstests-bld maintains them here
> [1][2][3], and it is very convenient to point this out to anyone who
> asks me which test configs to test with, or which tests are considered
> testcase failures with a given fs config.
>
> So it would be very helpful if we could have a mechanism such that all
> of these fs configs (and their corresponding excludes) could be
> maintained in fstests itself, and anyone who is looking to test any fs
> config would quickly be able to test it with ./check <fs_cfg_params>.
> Has this already been discussed before?
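A minimal sketch of what that relaunch automation could look like, assuming a summary file with a "Failures: <test> ..." line in the shape the fstests check script writes to results/check.log; the SUMMARY path and the decision to only print the command are illustrative, and other wrappers (e.g. gce-xfstests) may store results elsewhere:

```shell
#!/bin/sh
# Sketch: rebuild a ./check command line from the previous run's
# failures. Assumes the summary contains a "Failures: <t> <t> ..."
# line, similar to what fstests' check script appends to
# results/check.log; adjust the path for your test wrapper.
summary=${SUMMARY:-results/check.log}

# Pull the failed test names out of the most recent summary block.
failed=$(grep '^Failures:' "$summary" 2>/dev/null | tail -n 1 | \
         sed 's/^Failures: *//')

if [ -n "$failed" ]; then
    echo "./check $failed"     # relaunch only what failed last time
else
    echo "no failures recorded in $summary"
fi
```

Feeding that command back into the harness would give the "run failed tests only on the baseline" loop without re-running the full suite.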
> Does this sound helpful for people who are looking to contribute to
> this effort of fs testing?
>
> [1] [ext4]:
> https://github.com/tytso/xfstests-bld/tree/master/test-appliance/files/root/fs/ext4/cfg

Looking at the expunge comments, I think many of those entries should
just be turned into inline checks in the test preamble and skipped with
_notrun. The way I see it, expunged tests should be kept to a minimum,
and the goal should be to eventually remove them from the list. They
are tests that are known to be broken or flaky now, and can be safely
ignored when doing unrelated work, but that will be fixed in the
future. Tests that will always fail because the feature doesn't exist
in the filesystem, or because they ask for an impossible situation in a
specific configuration, should be checked inline and skipped, IMO.

+1 for the idea of having this in fstests. Even if we lack the
infrastructure to do anything useful with it in ./check, having them in
fstests will improve collaboration throughout the different fstests
wrappers (kernelci, xfstests-bld, etc.)

> [2] [xfs]: https://github.com/tytso/xfstests-bld/tree/master/test-appliance/files/root/fs/xfs/cfg
> [3] [fs]: https://github.com/tytso/xfstests-bld/tree/master/test-appliance/files/root/fs/
>
> -ritesh

--
Gabriel Krisman Bertazi
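As a sketch of the inline-check idea, the condition below replaces a hypothetical "never passes in this config" expunge entry. _notrun is the real fstests helper from common/rc that marks a test as skipped; it is stubbed here, and the 4k-blocksize requirement is purely illustrative, so that the fragment runs standalone:

```shell
#!/bin/sh
# Sketch: turn a permanent expunge entry into an inline skip.
# In a real fstests test, _notrun comes from common/rc and ends the
# test with a "[not run]" status; stubbed here for a standalone demo.
_notrun() { echo "[not run] $*"; }

# Hypothetical preamble check: this test only makes sense with a 4k
# filesystem block size, so skip it everywhere else instead of
# carrying the test name in every other config's exclude file.
check_blocksize() {
    blksz=$1
    if [ "$blksz" -ne 4096 ]; then
        _notrun "test requires 4k block size, got $blksz"
        return 1
    fi
    return 0
}

check_blocksize 4096 && echo "running test body"
```

With checks like this, the per-config exclude lists shrink to the genuinely temporary entries (known-broken or flaky tests awaiting fixes), which matches the "keep expunges to a minimum" goal above.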