On Tue, Mar 08, 2022 at 11:40:18AM -0500, Theodore Ts'o wrote:
> One of my team members has been working with Darrick to set up a set
> of xfs configs[1] recommended by Darrick, and she's stood up an
> automated test spinner using gce-xfstests which can watch a git branch
> and automatically kick off a set of tests whenever it is updated.

I think it's important to note, as we all know, that contrary to most
other subsystems, as far as blktests and fstests are concerned, a test
passing once does not mean there is no issue, given that some tests
fail with a failure rate of, say, 1/1,000.

How many times you want to run a full set of fstests against a
filesystem varies depending on your filesystem, your requirements, and
the resources you have. It also depends on how much time you are
willing to dedicate to it. To capture these concepts I ended up calling
this a kernel-ci steady state goal on kdevops:

  │ CONFIG_KERNEL_CI_STEADY_STATE_GOAL:
  │
  │ The maximum number of positive successes to have before bailing out
  │ a kernel-ci loop and report success. This value is currently used for
  │ all workflows. A value of 100 means 100 tests will run before we
  │ bail out and report we have achieved steady state for the workflow
  │ being tested.

For fstests for XFS and btrfs, when testing for enterprise, I ended up
going with a steady state goal of 500, that is, 500 consecutive runs of
fstests without any failure. This takes about one full week to run, and
one of my eventual goals is to reduce that time.

Perhaps it makes more sense to talk generally about how to optimize
these sorts of tests, or to share experiences like these.

Do we want to define a steady state goal for stable for XFS?

  Luis
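
P.S. Since "how many runs is enough" keeps coming up, here is a rough
back-of-the-envelope sketch. It assumes, perhaps naively, that a flaky
test fails independently on each run; the 1/1,000 rate and the 500-run
goal are just the numbers from this thread, not kdevops defaults:

  # Back-of-the-envelope only: assumes a flaky test fails independently
  # on each run, which real flakes (timing, load, memory pressure) often
  # do not.
  import math

  def p_detect(rate, runs):
      # Probability of seeing at least one failure in `runs` consecutive runs
      return 1.0 - (1.0 - rate) ** runs

  def runs_needed(rate, confidence):
      # Consecutive runs needed to surface the flake with `confidence`
      return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - rate))

  rate = 1 / 1000
  print("P(detect within 500 runs): %.0f%%" % (100 * p_detect(rate, 500)))  # ~39%
  print("Runs for 95%% confidence:  %d" % runs_needed(rate, 0.95))          # ~2995

Under those assumptions, 500 consecutive clean runs only give roughly a
40% chance of surfacing a single 1/1,000 flake, and something closer to
3,000 runs would be needed for 95% confidence, so the goal is very much
a trade-off against wall-clock time rather than a hard guarantee.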