> > > This was all put together to build into something a little more polished, but
> > > clearly priorities being what they are this is as far as we've taken it.  For
> > > configuration you can see my virt-scripts here
> > > https://github.com/josefbacik/virt-scripts which are what I use to generate the
> > > VM's to run xfstests in.
> > > 
> > > The kernel config I use is in there, I use a variety of btrfs mount options and
> > > mkfs options, not sure how interesting those are for people outside of btrfs.
> > 
> > Extremely useful.
> > 
> 
> [root@fedora-rawhide ~]# cat /xfstests-dev/local.config
> [btrfs_normal_freespacetree]
> TEST_DIR=/mnt/test
> TEST_DEV=/dev/mapper/vg0-lv0
> SCRATCH_DEV_POOL="/dev/mapper/vg0-lv7 /dev/mapper/vg0-lv6 /dev/mapper/vg0-lv5 /dev/mapper/vg0-lv4 /dev/mapper/vg0-lv3 /dev/mapper/vg0-lv2 /dev/mapper/vg0-lv1 "
> SCRATCH_MNT=/mnt/scratch
> LOGWRITES_DEV=/dev/mapper/vg0-lv8
> PERF_CONFIGNAME=jbacik
> MKFS_OPTIONS="-K -f -O ^no-holes"
> MOUNT_OPTIONS="-o space_cache=v2"
> FSTYP=btrfs
> 
> [btrfs_compress_freespacetree]
> MOUNT_OPTIONS="-o compress=zlib,space_cache=v2"
> MKFS_OPTIONS="-K -f -O ^no-holes"
> 
> [btrfs_normal]
> TEST_DIR=/mnt/test
> TEST_DEV=/dev/mapper/vg0-lv0
> SCRATCH_DEV_POOL="/dev/mapper/vg0-lv9 /dev/mapper/vg0-lv8 /dev/mapper/vg0-lv7 /dev/mapper/vg0-lv6 /dev/mapper/vg0-lv5 /dev/mapper/vg0-lv4 /dev/mapper/vg0-lv3 /dev/mapper/vg0-lv2 /dev/mapper/vg0-lv1 "
> SCRATCH_MNT=/mnt/scratch
> LOGWRITES_DEV=/dev/mapper/vg0-lv10
> PERF_CONFIGNAME=jbacik
> MKFS_OPTIONS="-K -O ^no-holes -R ^free-space-tree"
> MOUNT_OPTIONS="-o discard=async"
> 
> [btrfs_compression]
> MOUNT_OPTIONS="-o compress=zstd,discard=async"
> MKFS_OPTIONS="-K -O ^no-holes -R ^free-space-tree"
> 
> [kdave]
> MKFS_OPTIONS="-K -O no-holes -R ^free-space-tree"
> MOUNT_OPTIONS="-o discard,space_cache=v2"
> 
> [root@xfstests3 ~]# cat /xfstests-dev/local.config
> [btrfs_normal_noholes]
> TEST_DIR=/mnt/test
> TEST_DEV=/dev/mapper/vg0-lv0
> SCRATCH_DEV_POOL="/dev/mapper/vg0-lv9 /dev/mapper/vg0-lv8 /dev/mapper/vg0-lv7 /dev/mapper/vg0-lv6 /dev/mapper/vg0-lv5 /dev/mapper/vg0-lv4 /dev/mapper/vg0-lv3 /dev/mapper/vg0-lv2 /dev/mapper/vg0-lv1 "
> SCRATCH_MNT=/mnt/scratch
> LOGWRITES_DEV=/dev/mapper/vg0-lv10
> PERF_CONFIGNAME=jbacik
> MKFS_OPTIONS="-K -O no-holes -f -R ^free-space-tree"
> 
> [btrfs_compress_noholes]
> MKFS_OPTIONS="-K -O no-holes -f -R ^free-space-tree"
> MOUNT_OPTIONS="-o compress=lzo"
> 
> [btrfs_noholes_freespacetree]
> MKFS_OPTIONS="-K -O no-holes -f"
> MOUNT_OPTIONS="-o space_cache=v2"
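
Each bracketed header above is an xfstests config section; check's -s flag runs just one of them against the settings in that section. A sketch of what the nightly driver would invoke (this only prints the commands, since it assumes the /xfstests-dev checkout from the quoted configs rather than requiring it to exist):

```shell
# Build the per-section check invocations. Section names are the bracketed
# headers from local.config; /xfstests-dev is the checkout path quoted above.
XFSTESTS_DIR=/xfstests-dev
CMDS=""
for section in btrfs_normal btrfs_compression; do
    CMDS="$CMDS cd $XFSTESTS_DIR && ./check -s $section -g auto;"
done
echo "$CMDS"
```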

Thanks, I can eventually bake these into kdevops (or patches welcome),
modulo that I use loopback/truncated files. It is possible to add an
option to use dm linear too if that is really desirable.
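
For reference, the truncated-file/loopback setup can be sketched roughly like this (paths and sizes are made up for illustration; the losetup/dmsetup steps need root, so they are shown commented out):

```shell
# Create a sparse 10 GiB backing file; this allocates almost no real disk.
truncate -s 10G /tmp/scratch1.img

# Attach it as a block device (requires root):
#   losetup -f --show /tmp/scratch1.img    # prints e.g. /dev/loop0
#
# The dm linear alternative instead carves segments out of a real drive
# (requires root; /dev/nvme1n1 and the 10 GiB sector count are examples):
#   dmsetup create lv0 --table "0 20971520 linear /dev/nvme1n1 0"

# The file's apparent size is the full 10 GiB even though it is sparse:
stat -c %s /tmp/scratch1.img
```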

> > > Right now I have a box with ZNS drives waiting for me to set this up on so that
> > > we can also be testing btrfs zoned support nightly, as well as my 3rd
> > > RaspberryPi that I'm hoping doesn't blow up this time.
> > 
> > Great to hear you will be covering ZNS as well.
> > 
> > > I have another virt setup that uses btrfs snapshots to create a one off chroot
> > > to run smoke tests for my development using virtme-run.  I want to replace the
> > > libvirtd vms with virtme-run, however I've got about a 2x performance difference
> > > between virtme-run and libvirtd that I'm trying to figure out, so right now all
> > > the nightly test VM's are using libvirtd.
> > > 
> > > Long, long term the plan is to replace my janky home setup with AWS VM's that
> > > can be fired from GitHub actions whenever we push branches, that way individual
> > > developers can get results for their patches before they're merged, and we don't
> > > have to rely on my terrible python+html for test results.
> > 
> > If you do move to AWS, just keep in mind that using loopback drives +
> > truncated files *finds* more issues than not. So when I used AWS, I
> > got an instance with two spare nvme drives and used one to store the
> > truncated files.
> > 
> 
> My plan was to get ones with attached storage and do the LVM thing I do for my
> vms.

The default AWS instance type for kdevops is m5ad.4xlarge (~$0.824 per
hour), which comes with 61 GiB of RAM, 16 vCPUs, one 8 GiB main drive,
and two additional 300 GiB nvme drives. The nvme drives are used to also
mimic the KVM setup kdevops uses for local virtualization.
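
For a rough sense of scale, the quoted on-demand rate works out as follows (assuming 24/7 uptime over a 30-day month, and ignoring spot/reserved pricing and storage costs):

```shell
# Back-of-the-envelope monthly cost of one m5ad.4xlarge at the quoted rate.
HOURLY=0.824
MONTHLY=$(awk -v h="$HOURLY" 'BEGIN { printf "%.2f", h * 24 * 30 }')
echo "~\$$MONTHLY/month"
```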

FWIW, the kdevops AWS kconfig is at terraform/aws/Kconfig

  Luis


