Re: [PATCH] shared: new test to use up free inodes

On Wed, Mar 19, 2014 at 05:27:49PM +0800, Eryu Guan wrote:
> Stress test fs by using up all inodes and check fs.
> 
> Also a regression test for xfsprogs commit
> d586858 xfs_repair: fix sibling pointer tests in verify_dir2_path()
> 
> Signed-off-by: Eryu Guan <eguan@xxxxxxxxxx>
> ---
>  tests/shared/006     | 96 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  tests/shared/006.out |  2 ++
>  tests/shared/group   |  1 +
>  3 files changed, 99 insertions(+)
>  create mode 100755 tests/shared/006
>  create mode 100644 tests/shared/006.out
> 
> diff --git a/tests/shared/006 b/tests/shared/006
> new file mode 100755
> index 0000000..a3b13b6
> --- /dev/null
> +++ b/tests/shared/006
> @@ -0,0 +1,96 @@
> +#! /bin/bash
> +# FS QA Test No. shared/006
> +#
> +# Stress test fs by using up all inodes and check fs.
> +#
> +# Also a regression test for xfsprogs commit
> +# d586858 xfs_repair: fix sibling pointer tests in verify_dir2_path()
> +#
> +#-----------------------------------------------------------------------
> +# Copyright (c) 2014 Red Hat Inc.  All Rights Reserved.
> +#
> +# This program is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU General Public License as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it would be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write the Free Software Foundation,
> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> +#-----------------------------------------------------------------------
> +#
> +
> +seq=`basename $0`
> +seqres=$RESULT_DIR/$seq
> +echo "QA output created by $seq"
> +
> +here=`pwd`
> +tmp=/tmp/$$
> +status=1	# failure is the default!
> +trap "_cleanup; exit \$status" 0 1 2 3 15
> +
> +_cleanup()
> +{
> +    cd /
> +    rm -f $tmp.*
> +}
> +
> +create_file()
> +{
> +	local dir=$1
> +	local nr_file=$2
> +	local prefix=$3
> +	local i=0
> +
> +	while [ $i -lt $nr_file ]; do
> +		touch $dir/${prefix}_${i}

echo -n > $dir/${prefix}_${i}

will create a zero-length file using only shell builtins, without
forking and execing touch(1), so it has much lower overhead and
creates the files significantly faster.
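A minimal runnable sketch of create_file with that change applied (the
demo directory and file counts are illustrative, not part of the patch):

```shell
#!/bin/bash
# Sketch of create_file using the suggested builtin redirection:
# "echo -n >" creates an empty file with no fork/exec per file.
create_file()
{
	local dir=$1
	local nr_file=$2
	local prefix=$3
	local i=0

	while [ $i -lt $nr_file ]; do
		echo -n > $dir/${prefix}_${i}
		let i=$i+1
	done
}

# Throwaway demo, not part of the test itself:
dir=$(mktemp -d)
create_file $dir 5 demo
nfiles=$(ls $dir | wc -l)
echo "$nfiles files created"	# 5 files created
rm -rf $dir
```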

> +		let i=$i+1
> +	done
> +}
> +
> +# get standard environment, filters and checks
> +. ./common/rc
> +. ./common/filter
> +
> +# real QA test starts here
> +_supported_fs ext4 ext3 ext2 xfs
> +_supported_os Linux
> +
> +_require_scratch
> +
> +rm -f $seqres.full
> +echo "Silence is golden"
> +
> +_scratch_mkfs_sized $((1024 * 1024 * 1024)) >>$seqres.full 2>&1
> +_scratch_mount

If this is going to be a stress test, you should build a scale factor
into this.
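One way to do that, assuming the usual fstests TIME_FACTOR knob and
keeping the 1GiB base size from the patch (a sketch, not the patch
itself):

```shell
#!/bin/bash
# Sketch: scale the scratch filesystem size by TIME_FACTOR, the
# fstests scaling variable, defaulting to 1 when the harness has
# not set it.
fs_size=$((1024 * 1024 * 1024 * ${TIME_FACTOR:-1}))
echo $fs_size
# then: _scratch_mkfs_sized $fs_size >>$seqres.full 2>&1
```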

> +
> +i=0
> +free_inode=`df -iP $SCRATCH_MNT | tail -1 | awk '{print $2}'`

Use $DF_PROG here rather than bare df(1).

> +loop=$((free_inode / 1000 + 1))

And probably build LOAD_FACTOR into this to scale parallelism.
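Something like the following sketch, where LOAD_FACTOR is the standard
fstests parallelism knob and the free_inode value is illustrative:

```shell
#!/bin/bash
# Sketch: multiply the creator loop count by LOAD_FACTOR (default 1)
# so heavier test configurations spawn more parallel creators.
free_inode=250000	# illustrative; the test reads this from df output
loop=$(( (free_inode / 1000 + 1) * ${LOAD_FACTOR:-1} ))
echo $loop
```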

> +mkdir -p $SCRATCH_MNT/testdir
> +
> +echo "Create $((loop * 1000)) files in $SCRATCH_MNT/testdir" >>$seqres.full
> +while [ $i -lt $loop ]; do
> +	create_file $SCRATCH_MNT/testdir 1000 $i >>$seqres.full 2>&1 &
> +	let i=$i+1
> +done
> +wait

On XFS, that will create at least 500 threads creating 1000 inodes each
all in the same directory. This doesn't give you any extra
parallelism at all over just creating $free_inode files in a single
directory with a single thread. Indeed, it will probably be slower
due to the contention on the directory mutex.

If you want to scale this in terms of parallelism to keep the
creation time down, each loop needs to write into a different
directory. i.e. something like:


echo "Create $((loop * 1000)) files in $SCRATCH_MNT/testdir" >>$seqres.full
while [ $i -lt $loop ]; do
	mkdir -p $SCRATCH_MNT/testdir/$i
	create_file $SCRATCH_MNT/testdir/$i 1000 $i >>$seqres.full 2>&1 &
	let i=$i+1
done
wait

And even then I'd suggest that you'd be much better off with 10,000
files to a sub-directory....
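With that layout the loop count would come from a files-per-directory
constant rather than the fixed 1000, rounding up so every free inode is
consumed (a sketch with an illustrative inode count, not the patch):

```shell
#!/bin/bash
# Sketch: derive the number of creator loops from a larger
# files-per-directory constant, rounding up.
files_per_dir=10000
free_inode=250000	# illustrative; the test reads this from df output
loop=$(( (free_inode + files_per_dir - 1) / files_per_dir ))
echo $loop	# 25 sub-directories of 10000 files each
```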

> +# log inode status in $seqres.full for debug purpose
> +echo "Inode status after taking all inodes" >>$seqres.full
> +df -i $SCRATCH_MNT >>$seqres.full
> +
> +_check_scratch_fs
> +
> +# Check again after removing all the files
> +rm -rf $SCRATCH_MNT/testdir

That can be parallelised as well when you have multiple subdirs:

for d in $SCRATCH_MNT/testdir/*; do
	rm -rf $d &
done
wait

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



