Re: [PATCH] xfs: test log recovery for extent frees right after growfs

On Tue, Sep 10, 2024 at 12:13:29PM -0400, Brian Foster wrote:
> On Tue, Sep 10, 2024 at 05:10:53PM +0200, Christoph Hellwig wrote:
> > On Tue, Sep 10, 2024 at 10:19:50AM -0400, Brian Foster wrote:
> > > No real issue with the test, but I wonder if we could do something more
> > > generic. Various XFS shutdown and log recovery issues went undetected
> > > for a while until we started adding more of the generic stress tests
> > > currently categorized in the recoveryloop group.
> > > 
> > > So for example, I'm wondering if you took something like generic/388 or
> > > 475 and modified it to start with a smallish fs, grew it in 1GB or
> > > whatever increments on each loop iteration, and then ran the same
> > > generic stress/timeout/shutdown/recovery sequence, would that eventually
> > > reproduce the issue you've fixed? I don't think reproducibility would
> > > need to be 100% for the test to be useful, fwiw.
> > > 
> > > Note that I'm assuming we don't have something like that already. I see
> > > growfs and shutdown tests in tests/xfs/group.list, but nothing in both
> > > groups and I haven't looked through the individual tests. Just a
> > > thought.
> > 
> > It turns out reproducing this bug was surprisingly complicated.
> > After a growfs we can now dip into reserves, which made the test1
> > file start filling up the existing AGs first for a while, and thus
> > the error injection would hit on that and never even reach a new
> > AG.
> > 
> > So while I agree with your sentiment and like the high-level idea, I
> > suspect it will need a fair amount of work to actually be useful.
> > Right now I'm too busy with various projects to look into it
> > unfortunately.
> > 
> 
> Fair enough, maybe I'll play with it a bit when I have some more time.
> 
> Brian
> 
> 

FWIW, here's a quick hack at such a test. This is essentially a copy of
xfs/104, tweaked to remove some of the output noise and whatnot, and
hacked in some bits from generic/388 to do a shutdown and mount cycle
per iteration.

I'm not sure if this reproduces your original problem, but this blows up
pretty quickly on 6.12.0-rc2. I see a stream of warnings that start like
this (buffer readahead path via log recovery):

[ 2807.764283] XFS (vdb2): xfs_buf_map_verify: daddr 0x3e803 out of range, EOFS 0x3e800
[ 2807.768094] ------------[ cut here ]------------
[ 2807.770629] WARNING: CPU: 0 PID: 28386 at fs/xfs/xfs_buf.c:553 xfs_buf_get_map+0x184e/0x2670 [xfs]

... and then end up with an unrecoverable/unmountable fs. From the title
it sounds like this may be a different issue though... hm?
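In case it's useful, this is how I've been running it. A minimal
sketch, assuming an fstests checkout with SCRATCH_DEV/SCRATCH_MNT set
in local.config and the diff below applied (the chmod is only needed
if your patch tool doesn't preserve the new file mode):

  # from the root of the fstests checkout, with the diff below applied
  chmod +x tests/xfs/609
  ./check xfs/609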

Brian

--- 8< ---

diff --git a/tests/xfs/609 b/tests/xfs/609
new file mode 100755
index 00000000..b9c23869
--- /dev/null
+++ b/tests/xfs/609
@@ -0,0 +1,100 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2000-2004 Silicon Graphics, Inc.  All Rights Reserved.
+#
+# FS QA Test No. 609
+#
+# XFS growfs stress test with a shutdown/log-recovery cycle per iteration
+#
+. ./common/preamble
+_begin_fstest growfs ioctl prealloc auto stress shutdown recoveryloop
+
+# Import common functions.
+. ./common/filter
+
+_create_scratch()
+{
+	_scratch_mkfs_xfs $@ >> $seqres.full
+
+	if ! _try_scratch_mount 2>/dev/null
+	then
+		echo "failed to mount $SCRATCH_DEV"
+		exit 1
+	fi
+
+	# fix the reserve block pool to a known size so that the enospc
+	# calculations work out correctly.
+	_scratch_resvblks 1024 > /dev/null 2>&1
+}
+
+_fill_scratch()
+{
+	$XFS_IO_PROG -f -c "resvsp 0 ${1}" $SCRATCH_MNT/resvfile
+}
+
+_stress_scratch()
+{
+	procs=3
+	nops=1000
+	# -w ensures that the only ops are ones which cause write I/O
+	FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
+	    -n $nops $FSSTRESS_AVOID`
+	$FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
+}
+
+_require_scratch
+_require_xfs_io_command "falloc"
+
+_scratch_mkfs_xfs | tee -a $seqres.full | _filter_mkfs 2>$tmp.mkfs
+. $tmp.mkfs	# extract blocksize and data size for scratch device
+
+endsize=`expr 550 \* 1048576`	# stop after growing this big
+incsize=`expr  42 \* 1048576`	# grow in chunks of this size
+modsize=`expr   4 \* $incsize`	# pause after this many increments
+
+[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
+
+nags=4
+size=`expr 125 \* 1048576`	# 125 megabytes initially
+sizeb=`expr $size / $dbsize`	# in data blocks
+logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
+_create_scratch -lsize=${logblks}b -dsize=${size} -dagcount=${nags}
+
+for i in `seq 125 -1 90`; do
+	fillsize=`expr $i \* 1048576`
+	out="$(_fill_scratch $fillsize 2>&1)"
+	echo "$out" | grep -q 'No space left on device' && continue
+	test -n "${out}" && echo "$out"
+	break
+done
+
+#
+# Grow the filesystem while actively stressing it...
+# Kick off more stress threads on each iteration, grow; repeat.
+#
+while [ $size -le $endsize ]; do
+	echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
+	_stress_scratch
+	size=`expr $size + $incsize`
+	sizeb=`expr $size / $dbsize`	# in data blocks
+	echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
+	xfs_growfs -D ${sizeb} $SCRATCH_MNT >> $seqres.full
+	echo AGCOUNT=$agcount >> $seqres.full
+	echo >> $seqres.full
+
+	sleep $((RANDOM % 3))
+	_scratch_shutdown
+	ps -e | grep fsstress > /dev/null 2>&1
+	while [ $? -eq 0 ]; do
+		killall -9 fsstress > /dev/null 2>&1
+		wait > /dev/null 2>&1
+		ps -e | grep fsstress > /dev/null 2>&1
+	done
+	_scratch_cycle_mount || _fail "cycle mount failed"
+done > /dev/null 2>&1
+wait	# wait for any remaining stress processes
+
+_scratch_unmount
+
+status=0
+exit
diff --git a/tests/xfs/609.out b/tests/xfs/609.out
new file mode 100644
index 00000000..1853cc65
--- /dev/null
+++ b/tests/xfs/609.out
@@ -0,0 +1,7 @@
+QA output created by 609
+meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
+data     = bsize=XXX blocks=XXX, imaxpct=PCT
+         = sunit=XXX swidth=XXX, unwritten=X
+naming   =VERN bsize=XXX
+log      =LDEV bsize=XXX blocks=XXX
+realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX




