Re: [PATCH 3/3] btrfs: test mount fails on physical device with configured dm volume




On Thu, Mar 7, 2024 at 12:51 PM Anand Jain <anand.jain@xxxxxxxxxx> wrote:
>
> When a flakey device is configured, we have access to both the physical
> device and the DM flaky device. Ensure that when the flakey device is
> configured, the physical device mount fails.

Does it need to be flakey? Can't it be any other DM target?
The way it's phrased gives the idea that this is somehow all flakey specific.

Just be clear and mention any DM device on top of the physical device.

>
> Signed-off-by: Anand Jain <anand.jain@xxxxxxxxxx>
> ---
>  tests/btrfs/318     | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  tests/btrfs/318.out |  3 +++
>  2 files changed, 48 insertions(+)
>  create mode 100755 tests/btrfs/318
>  create mode 100644 tests/btrfs/318.out
>
> diff --git a/tests/btrfs/318 b/tests/btrfs/318
> new file mode 100755
> index 000000000000..015950fbd93c
> --- /dev/null
> +++ b/tests/btrfs/318
> @@ -0,0 +1,45 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (c) 2024 Oracle.  All Rights Reserved.
> +#
> +# FS QA Test 318
> +#
> +# Create multiple device nodes with the same device try mount
> +#
> +. ./common/preamble
> +_begin_fstest auto volume tempfsid

Also 'quick' group.

> +
> +# Override the default cleanup function.
> +_cleanup()
> +{
> +       umount $extra_mnt &> /dev/null
> +       rm -rf $extra_mnt &> /dev/null
> +       _unmount_flakey
> +       _cleanup_flakey
> +       cd /
> +       rm -r -f $tmp.*
> +}
> +
> +# Import common functions.
> +. ./common/filter
> +. ./common/dmflakey
> +
> +# real QA test starts here
> +_supported_fs btrfs
> +_require_scratch
> +_require_dm_target flakey
> +
> +_scratch_mkfs >> $seqres.full
> +_init_flakey
> +
> +_mount_flakey
> +extra_mnt=$TEST_DIR/extra_mnt
> +rm -rf $extra_mnt
> +mkdir -p $extra_mnt
> +_mount $SCRATCH_DEV $extra_mnt 2>&1 | _filter_testdir_and_scratch
> +
> +_flakey_drop_and_remount

Why? Please add a comment.

Either this is a leftover from a copy-paste from other tests that
exercise fsync and power failure, or the goal is to do an unmount
followed by a mount and check that the mount succeeds.
If it's the latter case, then it's confusing to use
_flakey_drop_and_remount, because that drops writes, unmounts, allows
writes again and then mounts - i.e. here we don't want to simulate
power loss by silently dropping writes.
Just call _unmount_flakey and _mount_flakey, and add a comment
mentioning why we are doing it.
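
For example, something like this (untested, and the comment wording is
only a suggestion) on top of the patch:

    -_flakey_drop_and_remount
    +# Cycle mount the flakey device to verify that it still mounts
    +# fine after the failed mount attempt on the physical device.
    +_unmount_flakey
    +_mount_flakey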

Finally, a link to the report that motivated this, due to a bug in a
patch that ended up not being sent to Linus, would be useful:

https://lore.kernel.org/linux-btrfs/CAL3q7H5wx5rKmSzGWP7mRqaSfAY88g=35N4OBrbJB61rK0mt2w@xxxxxxxxxxxxxx/

I'm also not convinced that we need this test, because the bug could
be reliably reproduced by running all the existing tests, or just a
subset of them, as I reported there.

Thanks.

> +
> +# success, all done
> +status=0
> +exit
> diff --git a/tests/btrfs/318.out b/tests/btrfs/318.out
> new file mode 100644
> index 000000000000..5cdbea8c4b2a
> --- /dev/null
> +++ b/tests/btrfs/318.out
> @@ -0,0 +1,3 @@
> +QA output created by 318
> +mount: TEST_DIR/extra_mnt: wrong fs type, bad option, bad superblock on SCRATCH_DEV, missing codepage or helper program, or other error.
> +       dmesg(1) may have more information after failed mount system call.
> --
> 2.39.3
>
>
