Re: [PATCH v2] fstests: btrfs/049: add regression test for compress-force mount options

On Wed, Nov 24, 2021 at 12:58 PM Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote:
>
>
>
> On 2021/11/24 20:51, Filipe Manana wrote:
> > On Wed, Nov 24, 2021 at 12:28 PM Qu Wenruo <quwenruo.btrfs@xxxxxxx> wrote:
> >>
> >>
> >>
> >> On 2021/11/24 19:22, Filipe Manana wrote:
> >>> On Wed, Nov 24, 2021 at 9:24 AM Qu Wenruo <wqu@xxxxxxxx> wrote:
> >>>>
> >>>> Since kernel commit d4088803f511 ("btrfs: subpage: make lzo_compress_pages()
> >>>> compatible"), lzo compression no longer respects the max compressed page
> >>>> limit, and can cause a kernel crash.
> >>>>
> >>>> The upstream fix is 6f019c0e0193 ("btrfs: fix a out-of-bound access in
> >>>> copy_compressed_data_to_page()").
> >>>>
> >>>> This patch adds a regression test for all possible compress-force
> >>>> mount options: lzo, zstd and zlib.
> >>>>
> >>>> And since we're here, also make sure the content of the file matches
> >>>> after a mount cycle.
> >>>>
> >>>> Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
> >>>> ---
> >>>> Changelog:
> >>>> v2:
> >>>> - Also test zlib and zstd
> >>>> - Add file content verification check
> >>>> ---
> >>>>    tests/btrfs/049     | 56 +++++++++++++++++++++++++++++++++++++++++++++
> >>>>    tests/btrfs/049.out |  6 +++++
> >>>>    2 files changed, 62 insertions(+)
> >>>>    create mode 100755 tests/btrfs/049
> >>>>
> >>>> diff --git a/tests/btrfs/049 b/tests/btrfs/049
> >>>> new file mode 100755
> >>>> index 00000000..264e576f
> >>>> --- /dev/null
> >>>> +++ b/tests/btrfs/049
> >>>> @@ -0,0 +1,56 @@
> >>>> +#! /bin/bash
> >>>> +# SPDX-License-Identifier: GPL-2.0
> >>>> +# Copyright (C) 2021 SUSE Linux Products GmbH. All Rights Reserved.
> >>>> +#
> >>>> +# FS QA Test 049
> >>>> +#
> >>>> +# Test if btrfs will crash when using a compress-force mount option
> >>>> +# against incompressible data
> >>>> +#
> >>>> +. ./common/preamble
> >>>> +_begin_fstest auto quick compress dangerous
> >>>> +
> >>>> +# Override the default cleanup function.
> >>>> +_cleanup()
> >>>> +{
> >>>> +       cd /
> >>>> +       rm -r -f $tmp.*
> >>>> +}
> >>>> +
> >>>> +# Import common functions.
> >>>> +. ./common/filter
> >>>> +
> >>>> +# real QA test starts here
> >>>> +
> >>>> +# Modify as appropriate.
> >>>> +_supported_fs btrfs
> >>>> +_require_scratch
> >>>> +
> >>>> +pagesize=$(get_page_size)
> >>>> +workload()
> >>>> +{
> >>>> +       local compression
> >>>> +       compression=$1
> >>>
> >>> Could be shorter by doing it in one step:
> >>>
> >>> local compression=$1
> >>>
> >>>> +
> >>>> +       echo "=== Testing compress-force=$compression ==="
> >>>> +       _scratch_mkfs -s "$pagesize" >> $seqres.full
> >>>> +       _scratch_mount -o compress-force="$compression"
> >>>> +       $XFS_IO_PROG -f -c "pwrite -i /dev/urandom 0 $pagesize" \
> >>>> +               "$SCRATCH_MNT/file" > /dev/null
> >>>> +       md5sum "$SCRATCH_MNT/file" > "$tmp.$compression"
> >>>
> >>> This doesn't really check if everything we asked to write was actually written.
> >>> pwrite(2), write(2), etc, return the number of bytes written, which
> >>> can be less than what we asked for.
> >>>
> >>> And using the checksum verification in that way, we are only checking
> >>> that what we had before unmounting is the same after mounting again.
> >>> I.e. we are not checking that what was actually written is what we
> >>> asked for.
> >>>
> >>> We could do something like:
> >>>
> >>> data=$(dd count=4096 bs=1 if=/dev/urandom)
> >>> echo -n "$data" > file
> >>
> >> The reason I didn't want to use dd is that it doesn't have a good
> >> enough wrapper in fstests.
> >> (Thus I guess that's also the reason why you used the above command to
> >> work around it.)
> >>
> >> If you're really concerned about the block size, it can easily be
> >> changed using the "-b" option of pwrite, to achieve the same behavior
> >> as the dd command.
> >>
> >> Furthermore, since we're reading from urandom, isn't it already
> >> ensured that we won't get blocked or get a short read unless we're
> >> reading very large blocks?
> >
> > I gave dd as an example, but I don't care about dd at all; it can be
> > xfs_io or anything else.
> >
> > My whole point was about verifying that what's written to the file is
> > what we intended to write.
> >
> > Verifying the checksum is fine when we know exactly what we asked to
> > write, i.e. when the test hard codes what we want to write.
> >
> > In this case we're asking to write random content, so doing an md5sum
> > of the file after writing and then comparing it to what we get after
> > we unmount and mount again is not a safe way to test that we have the
> > expected content.
> >
> >>
> >> Thus a very basic filter on the pwrite output should be enough to make
> >> sure we really got page-sized data written.
> >
> > It's not just about checking whether the size of what was written
> > matches what we asked to write.
> > It's all about verifying that the written content matches what we asked
> > to write in the first place (which implicitly ends up verifying the
> > size as well when using a checksum).
>
> OK, then what we should do is first read 4K from urandom into some known
> good place (thus I guess that's why you only used dd to read into
> stdout), then do the csum, and compare it after the mount cycle?

Yep, that's it. Maybe store the data in /tmp/ and then compare.

I now realize that the example was bad, because storing binary data
in shell variables is not safe.

Thanks.

>
> Thanks,
> Qu
>
> >
> >>
> >> Thanks,
> >> Qu
> >>
> >>>
> >>> _scratch_cycle_mount
> >>>
> >>> check that the md5sum of the file is the same as:  echo -n "$data" | md5sum
> >>>
> >>> As it is, the test is enough to trigger the original bug, but having
> >>> such additional checks is more valuable IMO in the long run, and can
> >>> help prevent other types of regressions too.
> >>>
> >>> Thanks Qu.
> >>>
> >>>
> >>>> +
> >>>> +       # When unpatched, compress-force=lzo would crash at data writeback
> >>>> +       _scratch_cycle_mount
> >>>> +
> >>>> +       # Make sure the content matches
> >>>> +       md5sum -c "$tmp.$compression" | _filter_scratch
> >>>> +       _scratch_unmount
> >>>> +}
> >>>> +
> >>>> +workload lzo
> >>>> +workload zstd
> >>>> +workload zlib
> >>>> +
> >>>> +# success, all done
> >>>> +status=0
> >>>> +exit
> >>>> diff --git a/tests/btrfs/049.out b/tests/btrfs/049.out
> >>>> index cb0061b3..258f3c09 100644
> >>>> --- a/tests/btrfs/049.out
> >>>> +++ b/tests/btrfs/049.out
> >>>> @@ -1 +1,7 @@
> >>>>    QA output created by 049
> >>>> +=== Testing compress-force=lzo ===
> >>>> +SCRATCH_MNT/file: OK
> >>>> +=== Testing compress-force=zstd ===
> >>>> +SCRATCH_MNT/file: OK
> >>>> +=== Testing compress-force=zlib ===
> >>>> +SCRATCH_MNT/file: OK
> >>>> --
> >>>> 2.34.0
> >>>>
> >>>
> >>>
> >
> >
> >



-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”



