Re: [PATCH] ceph/005: verify correct statfs behaviour with quotas

On Wed, 25 May 2022 09:53:53 +0100, Luís Henriques wrote:

> David Disseldorp <ddiss@xxxxxxx> writes:
> 
> > Hi Luís,
> >
> > It looks like this one is still in need of review...  
> 
> Ah! Thanks for reminding me about it, David!
> 
> >
> > On Wed, 27 Apr 2022 15:34:09 +0100, Luís Henriques wrote:
> >  
> >> When using a directory with 'max_bytes' quota as a base for a mount,
> >> statfs shall use that 'max_bytes' value as the total disk size.  That
> >> value shall be used even when using a subdirectory as the base for the mount.
> >> 
> >> A bug was found where, when this subdirectory also had a 'max_files'
> >> quota, the real filesystem size would be returned instead of the parent
> >> 'max_bytes' quota value.  This test case verifies this bug is fixed.
> >> 
> >> Signed-off-by: Luís Henriques <lhenriques@xxxxxxx>
> >> ---
> >>  tests/ceph/005     | 40 ++++++++++++++++++++++++++++++++++++++++
> >>  tests/ceph/005.out |  2 ++
> >>  2 files changed, 42 insertions(+)
> >>  create mode 100755 tests/ceph/005
> >>  create mode 100644 tests/ceph/005.out
> >> 
> >> diff --git a/tests/ceph/005 b/tests/ceph/005
> >> new file mode 100755
> >> index 000000000000..0763a235a677
> >> --- /dev/null
> >> +++ b/tests/ceph/005
> >> @@ -0,0 +1,40 @@
> >> +#! /bin/bash
> >> +# SPDX-License-Identifier: GPL-2.0
> >> +# Copyright (C) 2022 SUSE Linux Products GmbH. All Rights Reserved.
> >> +#
> >> +# FS QA Test 005
> >> +#
> >> +# Make sure statfs reports correct total size when:
> >> +# 1. using a directory with 'max_bytes' quota as base for a mount
> >> +# 2. using a subdirectory of the above directory with 'max_files' quota
> >> +#
> >> +. ./common/preamble
> >> +_begin_fstest auto quick quota
> >> +
> >> +_supported_fs ceph
> >> +_require_scratch
> >> +
> >> +_scratch_mount
> >> +mkdir -p $SCRATCH_MNT/quota-dir/subdir
> >> +
> >> +# set quotas
> >> +quota=$((1024*10000))
> >> +$SETFATTR_PROG -n ceph.quota.max_bytes -v $quota $SCRATCH_MNT/quota-dir
> >> +$SETFATTR_PROG -n ceph.quota.max_files -v $quota $SCRATCH_MNT/quota-dir/subdir
> >> +_scratch_unmount
> >> +
> >> +SCRATCH_DEV=$SCRATCH_DEV/quota-dir _scratch_mount  
> >
> > Aside from the standard please-quote-your-variables gripe, I'm a little  
> 
> Sure, I'll fix those in the next iteration.
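> 
> For the setfattr calls, for instance, the quoted form would be
> something like this (same commands as in the patch, just with the
> variables quoted):
> 
>   $SETFATTR_PROG -n ceph.quota.max_bytes -v "$quota" "$SCRATCH_MNT/quota-dir"
>   $SETFATTR_PROG -n ceph.quota.max_files -v "$quota" "$SCRATCH_MNT/quota-dir/subdir"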
> 
> > confused with the use of SCRATCH_DEV for this test. Network FSes where
> > mkfs isn't provided don't generally use it. Is there any way that this
> > could be run against TEST_DEV, or does the umount / mount complicate
> > things too much?  
> 
> When I looked at other tests doing similar things (i.e. changing the mount
> device during the test), they all seemed to be using SCRATCH_DEV.  I guess
> I could override TEST_DEV instead.  I'll revisit this and see if that
> works.
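> 
> Something along these lines, I suppose (untested, and assuming
> _test_mount honours a TEST_DEV override the same way _scratch_mount
> does):
> 
>   TEST_DEV=$TEST_DEV/quota-dir _test_mount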
> 
> Anyway, regarding the usage of SCRATCH_DEV in cephfs, I've used several
> different approaches:
> 
> - Use 2 different filesystems created on the same cluster,
> - Use 2 volumes on the same filesystem, or
> - Simply use 2 directories in the same filesystem.
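> 
> For the last of these, the local config would be something along these
> lines (a sketch only; the monitor address and paths are made up):
> 
>   TEST_DEV=192.168.0.1:6789:/test
>   TEST_DIR=/mnt/test
>   SCRATCH_DEV=192.168.0.1:6789:/scratch
>   SCRATCH_MNT=/mnt/scratch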

Looking at _scratch_mkfs (with $FSTYP=ceph), there is support for scratch
filesystem reinitialization, so I suppose this should be okay. With
cephfs we could actually go one step further and call "ceph fs rm/new",
but that's something for another day :-).
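
Roughly something like the following, I'd imagine (completely untested,
and the filesystem and pool names are made up; recreating an fs on
previously-used pools may need extra --force flags):

  ceph fs fail $FSNAME
  ceph fs rm $FSNAME --yes-i-really-mean-it
  ceph fs new $FSNAME cephfs_metadata cephfs_data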

Cheers, David



