Re: [PATCH] btrfs: add test case to verify the behavior with large RAID0 data chunks

On 2023/6/21 14:35, Anand Jain wrote:


On 21/06/2023 14:00, Qu Wenruo wrote:


On 2023/6/21 13:47, Anand Jain wrote:



+for i in $SCRATCH_DEV_POOL; do
+    devsize=$(blockdev --getsize64 "$i")
+    if [ "$devsize" -lt $((2 * 1024 * 1024 * 1024)) ]; then
+        _notrun "device $i is too small, need at least 2G"



Also, you need to check if those devices support discard.


How about this?


btrfs/292       - output mismatch (see /xfstests-dev/results//btrfs/292.out.bad)
     --- tests/btrfs/292.out    2023-06-21 13:27:12.764966120 +0800
     +++ /xfstests-dev/results//btrfs/292.out.bad    2023-06-21 13:54:01.863082692 +0800
     @@ -1,2 +1,3 @@
      QA output created by 292
     +fstrim: /mnt/scratch: the discard operation is not supported
      Silence is golden
     ...
     (Run 'diff -u /xfstests-dev/tests/btrfs/292.out /xfstests-dev/results//btrfs/292.out.bad' to see the entire diff)

HINT: You _MAY_ be missing kernel fix:
       xxxxxxxxxxxx btrfs: fix u32 overflows when left shifting @stripe_nr

That's true, I'll add the needed require line in the next update.
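
For reference, a minimal sketch of what that require line could look like,
assuming fstests' _require_batched_discard helper (which _notrun's the test
when FITRIM is unsupported on the mount); the exact placement in the next
revision may differ:

    _scratch_pool_mkfs -m raid1 -d raid0 >> $seqres.full 2>&1
    _scratch_mount
    # Skip the test early if the scratch devices do not support discard.
    _require_batched_discard $SCRATCH_MNT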

Thanks,
Qu




Uneven device sizes will alter the distribution of chunk allocation,
since the default chunk allocation is based on the device sizes and
their free space.

That is not a big deal. If all 6 devices are larger than 2G, we're
already allocating 1G device stripes anyway, so we're guaranteed a 6G
RAID0 chunk no matter whether the sizes are uneven.

Ah. RAID0. Got it.

It's only the next new data chunk that would be affected, but our
workload only needs the initial RAID0 chunk, so it's totally fine to
have uneven disk sizes.
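
(For concreteness, a back-of-the-envelope sketch of the chunk geometry
using the numbers above; treating 4G as the critical u32 byte-offset
boundary is an assumption inferred from the fix's subject line:)

    # 6 devices x 1G per-device stripes = one 6G RAID0 data chunk.
    ndevs=6
    stripe_bytes=$(( 1024 * 1024 * 1024 ))     # 1G device stripe
    chunk_bytes=$(( ndevs * stripe_bytes ))    # 6G RAID0 data chunk
    # Byte offsets inside the chunk past 4G no longer fit in a u32,
    # so filling 5G guarantees extents beyond that boundary.
    u32_limit=$(( 4 * 1024 * 1024 * 1024 ))
    echo "chunk=$chunk_bytes u32_limit=$u32_limit"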



+    fi
+done
+
+_scratch_pool_mkfs -m raid1 -d raid0 >> $seqres.full 2>&1
+_scratch_mount
+
+# Fill the data chunk with 5G data.
+for (( i = 0; i < $nr_files; i++ )); do
+    $XFS_IO_PROG -f -c "pwrite -i /dev/urandom 0 $filesize" \
+        "$SCRATCH_MNT/file_$i" > /dev/null
+done
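
($nr_files and $filesize are defined earlier in the patch and are not
quoted here; purely for illustration, one hypothetical combination that
adds up to 5G would be:)

    # Hypothetical values only -- the actual patch defines its own:
    nr_files=512
    filesize=$(( 10 * 1024 * 1024 ))   # 512 x 10M = 5G total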
+sync
+echo "=== With initial 5G data written ===" >> $seqres.full
+$BTRFS_UTIL_PROG filesystem df $SCRATCH_MNT >> $seqres.full
+
+_scratch_unmount
+
+# Make sure we haven't corrupted anything.
+$BTRFS_UTIL_PROG check --check-data-csum $SCRATCH_DEV >> $seqres.full 2>&1
+if [ $? -ne 0 ]; then
+    _fail "data corruption detected after initial data filling"
+fi
+
+_scratch_mount
+# Delete half of the data, and do discard
+rm -rf -- "$SCRATCH_MNT"/*[02468]

Are there any specific chunks that need to be deleted to successfully
reproduce this test case?
No, there is, and will only ever be, one data chunk.


  Right. I missed the point that it is RAID0.


We're only creating holes here to generate extra trim workload.


Thanks, Anand


+sync
+$FSTRIM_PROG $SCRATCH_MNT

Do we need fstrim if we use mount -o discard=sync instead?

There is not much difference.
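
(For comparison, a sketch of the mount-option alternative mentioned
above; with discard=sync, extents are discarded synchronously as they
are freed, so the separate fstrim run would not be needed:)

    # Alternative sketch: discard extents synchronously at free time
    # instead of running a batched fstrim afterwards.
    _scratch_mount -o discard=sync
    rm -rf -- "$SCRATCH_MNT"/*[02468]
    sync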





