Re: [RFC PATCH] xfs: make sure our default quota warning limits and grace periods survive quotacheck

On Thu, Feb 20, 2020 at 12:31:44PM +0800, Zorro Lang wrote:
> On Tue, Feb 18, 2020 at 04:34:23PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > 
> > Make sure that the default quota grace period and maximum warning limits
> > set by the administrator survive quotacheck.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > ---
> > This is the testcase to go with 'xfs: preserve default grace interval
> > during quotacheck', though Eric and I haven't figured out how we're
> > going to land that one...
> > ---
> >  tests/xfs/913     |   69 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  tests/xfs/913.out |   13 ++++++++++
> >  tests/xfs/group   |    1 +
> >  3 files changed, 83 insertions(+)
> >  create mode 100755 tests/xfs/913
> >  create mode 100644 tests/xfs/913.out
> > 
> > diff --git a/tests/xfs/913 b/tests/xfs/913
> 
> Hi,
> 
> Could "_require_xfs_quota_foreign" help turn this into a generic test case?
> 
> > new file mode 100755
> > index 00000000..94681b02
> > --- /dev/null
> > +++ b/tests/xfs/913
> > @@ -0,0 +1,69 @@
> > +#! /bin/bash
> > +# SPDX-License-Identifier: GPL-2.0-or-later
> > +# Copyright (c) 2020, Oracle and/or its affiliates.  All Rights Reserved.
> > +#
> > +# FS QA Test No. 913
> > +#
> > +# Make sure that the quota default grace period and maximum warning limits
> > +# survive quotacheck.
> > +
> > +seq=`basename $0`
> > +seqres=$RESULT_DIR/$seq
> > +echo "QA output created by $seq"
> > +
> > +here=`pwd`
> > +tmp=/tmp/$$
> > +status=1    # failure is the default!
> > +trap "_cleanup; exit \$status" 0 1 2 3 15
> > +
> > +_cleanup()
> > +{
> > +	cd /
> > +	rm -f $tmp.*
> > +}
> > +
> > +# get standard environment, filters and checks
> > +. ./common/rc
> > +. ./common/filter
> > +. ./common/quota
> > +
> > +# real QA test starts here
> > +_supported_fs xfs
> > +_supported_os Linux
> > +_require_quota
> > +
> > +rm -f $seqres.full
> > +
> > +# Format filesystem and set up quota limits
> > +_scratch_mkfs > $seqres.full
> > +_qmount_option "usrquota"
> > +_scratch_mount >> $seqres.full
> > +
> > +$XFS_QUOTA_PROG -x -c 'timer -u 300m' $SCRATCH_MNT
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> > +
> > +# Remount and check the limits
> > +_scratch_mount >> $seqres.full
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> > +
> > +# Run repair to force quota check
> > +_scratch_xfs_repair >> $seqres.full 2>&1
> 
> I've sent a case that looks like it does a similar test to this one:
>   [PATCH 1/2] generic: per-type quota timers set/get test
> 
> But it doesn't run fsck before the cycle mount. And ...[below]
> 
> > +
> > +# Remount (this time to run quotacheck) and check the limits.  There's a bug
> > +# in quotacheck where we would reset the ondisk default grace period to zero
> > +# while the incore copy stays at whatever was read in prior to quotacheck.
> > +# This will show up after the /next/ remount.
> > +_scratch_mount >> $seqres.full
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> > +
> > +# Remount and check the limits
> > +_scratch_mount >> $seqres.full
> > +$XFS_QUOTA_PROG -x -c 'state' $SCRATCH_MNT | grep 'grace time'
> > +_scratch_unmount
> 
> It doesn't do the cycle mount twice either. Do you think the fsck is necessary?

This test is looking for a bug in quotacheck, so I use xfs_repair to
force a quotacheck.
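(For reference, the grace-time check the test keys on can be exercised standalone. The mountpoint, device, and `state` text below are illustrative placeholders, not output from a real run; on a live system the text would come from `xfs_quota -x -c state $SCRATCH_MNT` after each remount in the repair/remount sequence above:)

```shell
#!/bin/bash
# Minimal sketch of the grace-time filter the test applies to
# `xfs_quota -x -c state` output.  /mnt/scratch and /dev/sdb1 are
# placeholder names, and sample_state is canned text, not a live query.

grace_times() {
	grep 'grace time'
}

# Abridged `state` output after `xfs_quota -x -c 'timer -u 300m'`
# (300 minutes == 05:00:00):
sample_state='User quota state on /mnt/scratch (/dev/sdb1)
  Accounting: ON
  Enforcement: ON
Blocks grace time: [0 days 05:00:00]
Inodes grace time: [0 days 05:00:00]
Realtime Blocks grace time: [0 days 05:00:00]'

# The golden output expects these three lines to survive every remount;
# with the quotacheck bug, the ondisk default is clobbered and the times
# would no longer read 05:00:00 after the post-repair remounts.
printf '%s\n' "$sample_state" | grace_times
```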

> And do you think these two cases can be merged into one case?

<shrug> Probably.  I don't see a problem in having one testcase poke at
related problems, and it can always come after the bits that are already
in the growing pile of quota tests (see the one that Eric sent...)

--D

> Thanks,
> Zorro
> 
> > +
> > +# success, all done
> > +status=0
> > +exit
> > diff --git a/tests/xfs/913.out b/tests/xfs/913.out
> > new file mode 100644
> > index 00000000..ee989388
> > --- /dev/null
> > +++ b/tests/xfs/913.out
> > @@ -0,0 +1,13 @@
> > +QA output created by 913
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > +Blocks grace time: [0 days 05:00:00]
> > +Inodes grace time: [0 days 05:00:00]
> > +Realtime Blocks grace time: [0 days 05:00:00]
> > diff --git a/tests/xfs/group b/tests/xfs/group
> > index 056072fb..87b3c75d 100644
> > --- a/tests/xfs/group
> > +++ b/tests/xfs/group
> > @@ -539,4 +539,5 @@
> >  910 auto quick inobtcount
> >  911 auto quick bigtime
> >  912 auto quick label
> > +913 auto quick quota
> >  997 auto quick mount
> > 
> 


