[PATCH] shared/010: avoid dedupe test being blocked on large fs

When testing on a large fs (--large-fs), xfstests preallocates a large
file in $SCRATCH_MNT/ first. Duperemove then takes far too long to
process that large file (many days on a 500T XFS). So move the working
directory to a sub-directory under $SCRATCH_MNT/.

Signed-off-by: Zorro Lang <zlang@xxxxxxxxxx>
---

Hi,

Besides fixing that issue, this patch fixes another issue in passing:
I had left a bad variable named "testdir" in this case, and this patch
cleans that up as well.

If the maintainer feels I should do that in a separate patch, please
tell me :-P

Thanks,
Zorro
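
For reference, here is a minimal sketch (not part of the patch) of the
idea, assuming a throwaway mount at /mnt/scratch; the file and directory
names below are hypothetical, not taken from xfstests:

    SCRATCH_MNT=/mnt/scratch

    # A --large-fs style preallocation: one huge file sitting directly in
    # the scratch mount (only 1G here so the sketch runs quickly).
    fallocate -l 1G "$SCRATCH_MNT/large-prealloc-file"

    # Put the working files in a sub-directory instead of the mount root.
    testdir="$SCRATCH_MNT/dir"
    mkdir -p "$testdir"
    dd if=/dev/urandom of="$testdir/data" bs=1M count=8 2>/dev/null
    cp "$testdir/data" "$testdir/data.copy"

    # Deduping only the sub-directory hashes just the small test files;
    # deduping $SCRATCH_MNT/ would also hash the huge preallocated file.
    duperemove -dr --dedupe-options=same "$testdir"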

 tests/shared/010 | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tests/shared/010 b/tests/shared/010
index 1817081b..04f55890 100755
--- a/tests/shared/010
+++ b/tests/shared/010
@@ -65,15 +65,17 @@ function end_test()
 sleep_time=$((50 * TIME_FACTOR))
 
 # Start fsstress
+testdir="$SCRATCH_MNT/dir"
+mkdir $testdir
 fsstress_opts="-r -n 1000 -p $((5 * LOAD_FACTOR))"
-$FSSTRESS_PROG $fsstress_opts -d $SCRATCH_MNT -l 0 >> $seqres.full 2>&1 &
+$FSSTRESS_PROG $fsstress_opts -d $testdir -l 0 >> $seqres.full 2>&1 &
 dedup_pids=""
 dupe_run=$TEST_DIR/${seq}-running
 # Start several dedupe processes on same directory
 touch $dupe_run
 for ((i = 0; i < $((2 * LOAD_FACTOR)); i++)); do
 	while [ -e $dupe_run ]; do
-		$DUPEREMOVE_PROG -dr --dedupe-options=same $SCRATCH_MNT/ \
+		$DUPEREMOVE_PROG -dr --dedupe-options=same $testdir \
 			>>$seqres.full 2>&1
 	done &
 	dedup_pids="$! $dedup_pids"
-- 
2.14.4



