Re: [LSF/MM TOPIC] FS, MM, and stable trees

On Wed, Mar 09, 2022 at 05:28:28PM -0800, Luis Chamberlain wrote:
> On Wed, Mar 09, 2022 at 04:19:21PM -0500, Josef Bacik wrote:
> > On Wed, Mar 09, 2022 at 11:00:49AM -0800, Luis Chamberlain wrote:
> > > On Wed, Mar 09, 2022 at 01:49:18PM -0500, Josef Bacik wrote:
> > > > On Wed, Mar 09, 2022 at 10:41:53AM -0800, Luis Chamberlain wrote:
> > > > > On Tue, Mar 08, 2022 at 11:40:18AM -0500, Theodore Ts'o wrote:
> > > > > > One of my team members has been working with Darrick to set up a set
> > > > > > of xfs configs[1] recommended by Darrick, and she's stood up an
> > > > > > automated test spinner using gce-xfstests which can watch a git branch
> > > > > > and automatically kick off a set of tests whenever it is updated.
> > > > > 
> > > > > I think it's important to note, as we all know, that contrary to
> > > > > most other subsystems, insofar as blktests and fstests are concerned,
> > > > > simply passing a test once does not mean there is no issue, given that
> > > > > some tests can fail at a rate of, say, 1 in 1,000.
> > > > > 
> > > > 
> > > > FWIW we (the btrfs team) have been running nightly runs of fstests against our
> > > > devel branch for over a year and tracking the results.
> > > 
> > > That's wonderful, what is your steady-state goal? And are the configurations
> > > you use public, along with your baseline, somewhere? I think the latter in
> > > particular could be very useful to everyone.
> > > 
> > 
> > Yeah I post the results to http://toxicpanda.com, you can see the results from
> > the runs, and http://toxicpanda.com/performance/ has the nightly performance
> > numbers and graphs as well.
> 
> That's great!
> 
> But although this runs nightly, it seems it runs fstests *once* to
> check whether there are any regressions. Is that right?
> 

Yup, once per config, so 8 full fstests runs.

> > This was all put together with the aim of building it into something a little
> > more polished, but clearly, priorities being what they are, this is as far as
> > we've taken it.  For configuration you can see my virt-scripts here:
> > https://github.com/josefbacik/virt-scripts, which are what I use to generate
> > the VMs I run xfstests in.
> > 
> > The kernel config I use is in there.  I use a variety of btrfs mount options
> > and mkfs options; not sure how interesting those are for people outside of btrfs.
> 
> Extremely useful.
> 

[root@fedora-rawhide ~]# cat /xfstests-dev/local.config
[btrfs_normal_freespacetree]
TEST_DIR=/mnt/test
TEST_DEV=/dev/mapper/vg0-lv0
SCRATCH_DEV_POOL="/dev/mapper/vg0-lv7 /dev/mapper/vg0-lv6 /dev/mapper/vg0-lv5 /dev/mapper/vg0-lv4 /dev/mapper/vg0-lv3 /dev/mapper/vg0-lv2 /dev/mapper/vg0-lv1 "
SCRATCH_MNT=/mnt/scratch
LOGWRITES_DEV=/dev/mapper/vg0-lv8
PERF_CONFIGNAME=jbacik
MKFS_OPTIONS="-K -f -O ^no-holes"
MOUNT_OPTIONS="-o space_cache=v2"
FSTYP=btrfs

[btrfs_compress_freespacetree]
MOUNT_OPTIONS="-o compress=zlib,space_cache=v2"
MKFS_OPTIONS="-K -f -O ^no-holes"

[btrfs_normal]
TEST_DIR=/mnt/test
TEST_DEV=/dev/mapper/vg0-lv0
SCRATCH_DEV_POOL="/dev/mapper/vg0-lv9 /dev/mapper/vg0-lv8 /dev/mapper/vg0-lv7 /dev/mapper/vg0-lv6 /dev/mapper/vg0-lv5 /dev/mapper/vg0-lv4 /dev/mapper/vg0-lv3 /dev/mapper/vg0-lv2 /dev/mapper/vg0-lv1 "
SCRATCH_MNT=/mnt/scratch
LOGWRITES_DEV=/dev/mapper/vg0-lv10
PERF_CONFIGNAME=jbacik
MKFS_OPTIONS="-K -O ^no-holes -R ^free-space-tree"
MOUNT_OPTIONS="-o discard=async"

[btrfs_compression]
MOUNT_OPTIONS="-o compress=zstd,discard=async"
MKFS_OPTIONS="-K -O ^no-holes -R ^free-space-tree"

[kdave]
MKFS_OPTIONS="-K -O no-holes -R ^free-space-tree"
MOUNT_OPTIONS="-o discard,space_cache=v2"

[root@xfstests3 ~]# cat /xfstests-dev/local.config
[btrfs_normal_noholes]
TEST_DIR=/mnt/test
TEST_DEV=/dev/mapper/vg0-lv0
SCRATCH_DEV_POOL="/dev/mapper/vg0-lv9 /dev/mapper/vg0-lv8 /dev/mapper/vg0-lv7 /dev/mapper/vg0-lv6 /dev/mapper/vg0-lv5 /dev/mapper/vg0-lv4 /dev/mapper/vg0-lv3 /dev/mapper/vg0-lv2 /dev/mapper/vg0-lv1 "
SCRATCH_MNT=/mnt/scratch
LOGWRITES_DEV=/dev/mapper/vg0-lv10
PERF_CONFIGNAME=jbacik
MKFS_OPTIONS="-K -O no-holes -f -R ^free-space-tree"

[btrfs_compress_noholes]
MKFS_OPTIONS="-K -O no-holes -f -R ^free-space-tree"
MOUNT_OPTIONS="-o compress=lzo"

[btrfs_noholes_freespacetree]
MKFS_OPTIONS="-K -O no-holes -f"
MOUNT_OPTIONS="-o space_cache=v2"
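
For reference, driving a multi-section config like the above through one pass
per section might look roughly like the sketch below; the section names come
from the first config, and the -s/-g flags are standard options of fstests'
check script (the loop itself is only an illustration, not the actual harness):

cd /xfstests-dev

# One full pass per section; each -s run picks up that section's
# MKFS_OPTIONS/MOUNT_OPTIONS from local.config.
for section in btrfs_normal_freespacetree btrfs_compress_freespacetree \
               btrfs_normal btrfs_compression kdave; do
        ./check -s "$section" -g auto
done

# Alternatively, ./check -g auto with no -s iterates over every section
# defined in local.config.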


> > Right now I have a box with ZNS drives waiting for me to set this up on so that
> > we can also be testing btrfs zoned support nightly, as well as my 3rd
> > RaspberryPi that I'm hoping doesn't blow up this time.
> 
> Great to hear you will be covering ZNS as well.
> 
> > I have another virt setup that uses btrfs snapshots to create a one-off chroot
> > to run smoke tests for my development using virtme-run.  I want to replace the
> > libvirtd VMs with virtme-run; however, I've got about a 2x performance difference
> > between virtme-run and libvirtd that I'm trying to figure out, so right now all
> > the nightly test VMs are using libvirtd.
> > 
> > Long, long term the plan is to replace my janky home setup with AWS VMs that
> > can be fired off from GitHub Actions whenever we push branches.  That way
> > individual developers can get results for their patches before they're merged,
> > and we don't have to rely on my terrible python+html for test results.
> 
> If you do move to AWS, just keep in mind that using loopback drives backed by
> truncated files *finds* more issues than not.  So when I used AWS I got two
> spare NVMe drives and used one to hold the truncated files.
> 
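
For reference, a loopback setup backed by truncated (sparse) files, as Luis
describes, might look like the sketch below; the paths, sizes, and device
count are assumptions, while truncate/losetup and the fstests variables are
standard:

# Sparse backing files kept on a spare drive mounted at /media/truncated (assumed path).
mkdir -p /media/truncated /mnt/test /mnt/scratch
for i in $(seq 0 5); do
        truncate -s 20G /media/truncated/disk$i.img      # sparse backing file
        losetup /dev/loop$i /media/truncated/disk$i.img  # attach loop device
done

# local.config then points fstests at the loop devices, e.g.:
# TEST_DEV=/dev/loop0
# SCRATCH_DEV_POOL="/dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5"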

My plan was to get ones with attached storage and do the LVM thing I do for my
VMs.
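
The "LVM thing" presumably carves one attached disk into the vg0-lvN logical
volumes the configs above refer to; a minimal sketch, with the disk name and
LV sizes as assumptions:

pvcreate /dev/nvme1n1                    # attached-storage disk (assumed name)
vgcreate vg0 /dev/nvme1n1
for i in $(seq 0 10); do
        lvcreate -L 20G -n lv$i vg0      # yields /dev/mapper/vg0-lv$i
done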

> > > Yes, everyone's test setup can be different, but this is why I went with
> > > a loopback/truncated-file setup; it does find more issues, and so far
> > > these have all been real.
> > > 
> > > It kind of raises the question of whether we should adopt something like
> > > kconfig in fstests to help enable a few test configs we can agree on. Thoughts?
> > > 
> > 
> > For us (and I imagine other filesystems) the kconfigs are not interesting;
> > it's the combination of different filesystem features that can be toggled on
> > and off via mkfs, as well as different mount options.  For example, I run all
> > the different mkfs features with normal mount options, and then again with
> > compression turned on.  Thanks,
> 
> So what I mean by kconfig is not the Linux kernel kconfig, but rather
> the kdevops kconfig options. kdevops essentially has a kconfig symbol
> per mkfs-param-mount config we test, and it runs *one* guest for each
> of these. For example:
> 
> config FSTESTS_XFS_SECTION_REFLINK_1024
> 	bool "Enable testing section: xfs_reflink_1024"
> 	default y
> 	help
> 	  This will create a host to test the baseline of fstests using the
> 	  following configuration which enables reflink using 1024 byte block
> 	  size.
> 
> 	[xfs_reflink]
> 	MKFS_OPTIONS='-f -m reflink=1,rmapbt=1, -i sparse=1,'
> 	FSTYP=xfs
> 
> The other ones can be found here for XFS:
> 
> https://github.com/mcgrof/kdevops/blob/master/workflows/fstests/xfs/Kconfig
> 
> So indeed, exactly what you mean. What I'm getting at is that it would
> be good to construct these with the community. That raises the question
> of whether we should embrace, for instance, the kconfig language as a way
> to configure fstests (yes, I know it is xfstests, but I think we lose new
> people who tend to assume that xfstests is only for XFS, so I always call
> it fstests).
> 
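
To make the mapping concrete, one of the btrfs sections pasted earlier could
hypothetically be expressed as a kdevops-style symbol like the sketch below;
the symbol name and help text are invented for illustration, mirroring the
xfs_reflink_1024 example above:

config FSTESTS_BTRFS_SECTION_NORMAL_FREESPACETREE
	bool "Enable testing section: btrfs_normal_freespacetree"
	default y
	help
	  Hypothetical example only: this would create a host to test the
	  fstests baseline for btrfs with the free-space tree (space_cache=v2)
	  enabled, mirroring the btrfs_normal_freespacetree section above.

	  [btrfs_normal_freespacetree]
	  MKFS_OPTIONS="-K -f -O ^no-holes"
	  MOUNT_OPTIONS="-o space_cache=v2"
	  FSTYP=btrfs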

Got it, that's pretty cool; I pasted my configs above.  Once I figure out why
virtme is so much slower than libvirtd I'll give kdevops a try and see if I can
make it work for my setup.  Thanks,

Josef


