On Sat, Jul 02, 2022 at 01:01:22PM -0400, Theodore Ts'o wrote:
> Note: I recommend that you skip using the loop device xfstests
> strategy, which Luis likes to advocate. From the perspective of
> *likely* regressions caused by the Folio patches, I claim they are
> going to cause you more pain than they are worth. If there are some
> strange Folio/loop device interactions, they aren't likely going to be
> obvious/reproducible failures that will cause pain to linux-next
> testers. While it would be nice to find **all** possible bugs before
> patches go upstream to Linus, if it slows down your development
> velocity to a near-standstill, it's not worth it. We have to be
> realistic about things.

Regressions in the loopback block driver do creep up; we used to be
much worse about them, and we have gotten better at catching them.
Testing on top of the loopback driver can certainly mean running into
a regression in the loopback driver itself, but *some* block driver
must back the test devices in the end.

> What about other file systems? Well, first of all, xfstests only has
> support for the following file systems:
>
>     9p btrfs ceph cifs exfat ext2 ext4 f2fs gfs glusterfs jfs msdos
>     nfs ocfs2 overlay pvfs2 reiserfs tmpfs ubifs udf vfat virtiofs xfs
>
> {kvm,gce}-xfstests supports these 16 file systems:
>
>     9p btrfs exfat ext2 ext4 f2fs jfs msdos nfs overlay reiserfs
>     tmpfs ubifs udf vfat xfs
>
> kdevops has support for these file systems:
>
>     btrfs ext4 xfs

Thanks for this list, Ted!

And so adding support for a new filesystem in kdevops amounts to:

* a Kconfig symbol for the filesystem, plus one symbol per mkfs
  config option you want to support
* a configuration file for it; this can be as elaborate as the xfs
  one, which covers many different mkfs config options [0], or as
  small as one with just one or two mkfs config options [1]. The
  default section just carries the shared settings.

A rough sketch of what the configuration file side might look like
for a new filesystem is appended at the end of this mail.

[0] https://github.com/linux-kdevops/kdevops/blob/master/playbooks/roles/fstests/templates/xfs/xfs.config
[1] https://github.com/linux-kdevops/kdevops/blob/master/playbooks/roles/fstests/templates/ext4/ext4.config

> There are more complex things you could do, such as running a baseline
> set of tests 500 times (as Luis suggests),

I advocate 100 runs, and I suggest that as a good goal for enterprise
kernels. I also personally want that level of confidence in a
baseline for stable kernels if *I* am going to backport changes. A
trivial way to script such a loop is appended below as well.

> but I believe that for your
> use case, it's not a good use of your time. You'd need to spend
> several weeks finding *all* the flaky tests up front, especially if
> you want to do this for a large set of file systems. It's much more
> efficient to check whether a suspected test regression is really a
> flaky test result when you come across one.

Or you work with a test runner that already carries the list of known
failures / flaky tests for a target configuration, such as one using
loopback devices. That is why I tend to curate these lists for xfs,
btrfs, and ext4 when I have time; an example of how such a list is
consumed is also appended below. My goal has been to work towards a
baseline of at least 100 successful runs without failure, tracking
upstream.
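To make the configuration file piece concrete, here is a rough sketch
of what such a template might look like for a hypothetical new
filesystem, using ext2 as the example since xfstests already supports
it. The path, section names, and Kconfig symbol name below are
illustrative, modeled on the ext4 template in [1], not copied from the
tree:

    # playbooks/roles/fstests/templates/ext2/ext2.config (hypothetical
    # path). [default] carries the shared settings; every other
    # section is one mkfs variant, each gated behind its own Kconfig
    # symbol (say FSTESTS_EXT2_4K, a made-up name).
    [default]
    FSTYP=ext2

    [ext2_4k]
    MKFS_OPTIONS='-b 4096'

    [ext2_1k]
    MKFS_OPTIONS='-b 1024'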
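As for the 100-run baseline, nothing fancy is needed. A minimal
sketch, assuming fstests is installed under /var/lib/xfstests and a
local.config is already in place (both assumptions about your setup):

    cd /var/lib/xfstests
    # Run the auto group 100 times; stop at the first failure so the
    # failing run's results are left intact for inspection.
    for i in $(seq 1 100); do
            ./check -g auto || break
    done

Recent fstests can also do this natively with check -i/-I if memory
serves, but a plain loop works everywhere.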
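And for the known/flaky failure lists: fstests can already consume an
exclusion file via check -E, which is what expunge lists build on. A
small sketch, with a hypothetical file path and example entries:

    # expunges/loopback/ext4.txt (hypothetical path): one test per
    # line; a comment per entry noting why it is expunged pays off
    # later.
    generic/388   # example entry: flaky on this config
    generic/475   # example entry: known failure, fix pending

    ./check -g auto -E expunges/loopback/ext4.txt

Curating these per target configuration is the real time sink; the
mechanics themselves are simple.

Luis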