On Wed, Nov 01, 2017 at 02:47:42PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
>
> If filling up the filesystem causes us to hit ENOSPC earlier than we
> thought we would (the sizing estimates become less and less accurate as
> we add more metadata) then just bail out -- we're checking that the fs
> is robust enough to cut us off before we actually run out of space for
> writing metadata and crash the fs.
>
> Signed-off-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> ---
>  tests/generic/204 | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
>
>
> diff --git a/tests/generic/204 b/tests/generic/204
> index 4c203a2..1e2c1e1 100755
> --- a/tests/generic/204
> +++ b/tests/generic/204
> @@ -82,8 +82,15 @@ echo files $files, resvblks $resv_blks >> $seqres.full
>  _scratch_resvblks $resv_blks >> $seqres.full 2>&1
>
>  for i in `seq 1 $files`; do
> -	echo -n > $SCRATCH_MNT/$i
> -	echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX > $SCRATCH_MNT/$i
> +	# Open/truncate file, write stuff. If we run out of space early,
> +	# we can bail out of the loop.
> +	out="$($XFS_IO_PROG \
> +		-c "open -f -t $SCRATCH_MNT/$i" \
> +		-c 'close' \
> +		-c "open -f -t $SCRATCH_MNT/$i" \
> +		-c 'pwrite -q -S 0x58 0 36' 2>&1 | _filter_scratch)"
> +	echo "${out}" | grep -q 'No space left on device' && break

This doesn't look correct to me. This test is meant to catch spurious
ENOSPC; it's designed to be a "delayed allocation ENOSPC test"[1] that
writes lots of single-block files and catches early ENOSPC. IMHO, this
change ignores ENOSPC, which defeats the test's purpose.

And I don't quite understand the purpose of the truncating
open/close/truncating open/write sequence.

Also, this significantly increases the test runtime, from 4s to ~120s
for me.

Thanks,
Eryu

[1] commit 143368a047ea ("xfstests: add test 204, a simple delayed
    allocation ENOSPC test")

> +	test -n "${out}" && echo "${out}"
>  done
>
>  # success, all done
> --
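
For illustration only, a minimal sketch (not from the patch or this
thread) of one way the loop could stay strict about premature ENOSPC
while still tolerating it right at the computed fill point. It reuses
the test's existing names ($files, $XFS_IO_PROG, $SCRATCH_MNT,
_filter_scratch); the "slack" tolerance is an assumption, not an
existing xfstests convention:

# Hypothetical sketch: only accept ENOSPC in the last ~1% of the
# expected files; anything earlier is reported as a test failure.
slack=$((files / 100))
for i in `seq 1 $files`; do
	out="$($XFS_IO_PROG -f -c 'pwrite -q -S 0x58 0 36' \
		$SCRATCH_MNT/$i 2>&1 | _filter_scratch)"
	if echo "${out}" | grep -q 'No space left on device'; then
		# Early ENOSPC is exactly what this test exists to catch.
		test $i -lt $((files - slack)) && \
			echo "premature ENOSPC at file $i of $files"
		break
	fi
	test -n "${out}" && echo "${out}"
done

This would keep the test's original purpose (a spurious early ENOSPC
still produces output and fails the golden-image comparison) while
allowing a graceful stop if the sizing estimate is only slightly off.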