Re: [PATCH] btrfs/253: update the data chunk size to the correct one




On Thu, Sep 22, 2022 at 10:49 AM Qu Wenruo <wqu@xxxxxxxx> wrote:
>
> [BUG]
> After kernel commit 5da431b71d4b ("btrfs: fix the max chunk size and
> stripe length calculation") the test case btrfs/253 will always fail.
>
> [CAUSE]
> Btrfs calculates the to-be-allocated chunk size using both
> chunk and stripe size limits.
>
> By default the data chunk size is limited to 10G, while the data
> stripe size limit is only 1G.
> Thus the real chunk size limit would be:
>
>   min(num_data_devices * max_stripe_size, max_chunk_size)
>
> For single disk case (the test case), the value would be:
>
>   min(        1        *       1G       ,       10G) = 1G.
>
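[Editorial aside: the limit calculation above can be sketched in shell. This is a minimal illustration using only the default values quoted in the commit message (1G data stripe limit, 10G data chunk limit, one data device), not values read from a live filesystem.]

```shell
#!/bin/bash
# Sketch of the effective data chunk size limit:
#   min(num_data_devices * max_stripe_size, max_chunk_size)
# Values below are the defaults from the commit message, hardcoded
# for illustration rather than read from sysfs.
max_stripe_size=$((1024 * 1024 * 1024))        # 1G data stripe limit
max_chunk_size=$((10 * 1024 * 1024 * 1024))    # 10G data chunk limit
num_data_devices=1                             # single-disk test setup

limit=$((num_data_devices * max_stripe_size))
if [ "$limit" -gt "$max_chunk_size" ]; then
	limit=$max_chunk_size
fi
echo "effective data chunk size limit: ${limit} bytes"
# prints: effective data chunk size limit: 1073741824 bytes
```

With a single device the stripe side of the min() dominates, which is why the test can only ever see a 1G data chunk here.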
> The test case can only pass during a small window:
>
> - Commit f6fca3917b4d ("btrfs: store chunk size in space-info struct")
>
>   This changed the max chunk size limit to 1G, which would cause far
>   more chunks to be allocated for profiles like RAID0/10/5/6, thus it
>   is considered a regression.
>
> - Commit 5da431b71d4b ("btrfs: fix the max chunk size and stripe length calculation")
>
>   This is the fix for the above regression, which reverts the data
>   chunk size limit back to 10G.
>
> [FIX]
> Fix the failure by introducing a hardcoded expected data chunk size (1G).
>
> Since the test case uses a fixed fs size (10G) and only one disk,
> that 1G size works for all possible data profiles (SINGLE and DUP).
>
> The more elegant solution would be to export the stripe size limit
> through the sysfs interface, but that would take at least 2 kernel cycles.
>
> For now, just use a hotfix to make the test case work again.
>
> Since we are here, also remove the leading "t" in the error message.
>
> Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>

Looks good, thanks for fixing this.

Reviewed-by: Filipe Manana <fdmanana@xxxxxxxx>


> ---
>  tests/btrfs/253 | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/tests/btrfs/253 b/tests/btrfs/253
> index c746f41e..5fbce070 100755
> --- a/tests/btrfs/253
> +++ b/tests/btrfs/253
> @@ -107,7 +107,21 @@ SCRATCH_BDEV=$(_real_dev $SCRATCH_DEV)
>
>  # Get chunk sizes.
>  echo "Capture default chunk sizes."
> -FIRST_DATA_CHUNK_SIZE_B=$(_get_fs_sysfs_attr ${SCRATCH_BDEV} allocation/data/chunk_size)
> +
> +# The sysfs interface only exports chunk_size (10G by default);
> +# stripe_size (1G by default) is not exported.
> +#
> +# Btrfs calculates the real chunk size to be allocated using both
> +# limits, thus the default 10G chunk size can only be reached by
> +# something like a 10-disk RAID0.
> +#
> +# The proper solution would be to export the stripe_size limit too,
> +# but that may take several kernel cycles, so for now just use a
> +# hard coded 1G as the expected chunk size.
> +FIRST_DATA_CHUNK_SIZE_B=$((1024 * 1024 * 1024))
> +
> +# For metadata, we are safe to use the exported value, as the default
> +# metadata chunk size limit is already smaller than its stripe size.
>  FIRST_METADATA_CHUNK_SIZE_B=$(_get_fs_sysfs_attr ${SCRATCH_BDEV} allocation/metadata/chunk_size)
>
>  echo "Orig Data chunk size    = ${FIRST_DATA_CHUNK_SIZE_B}"     >> ${seqres}.full
> @@ -226,7 +240,7 @@ FIRST_METADATA_CHUNK_SIZE_MB=$(expr ${FIRST_METADATA_CHUNK_SIZE_B} / 1024 / 1024
>
>  # if [[ $(expr ${FIRST_DATA_CHUNK_SIZE_MB} + ${DATA_SIZE_START_MB}) -ne $(expr ${FIRST_DATA_SIZE_MB}) ]]; then
>  if [[ $(expr ${FIRST_DATA_CHUNK_SIZE_MB} + ${DATA_SIZE_START_MB}) -ne ${FIRST_DATA_SIZE_MB} ]]; then
> -       echo "tInitial data allocation not correct."
> +       echo "Initial data allocation not correct."
>  fi
>
>  if [[ $(expr ${FIRST_METADATA_CHUNK_SIZE_MB} + ${METADATA_SIZE_START_MB}) -ne ${FIRST_METADATA_SIZE_MB} ]]; then
> --
> 2.37.2
>


