Re: [PATCH v5 00/14] dm-raid/md/raid: fix v6.7 regressions

On Fri 09 Feb 2024 at 14:37, Song Liu <song@xxxxxxxxxx> wrote:

> On Thu, Feb 8, 2024 at 3:17 PM Benjamin Marzinski <bmarzins@xxxxxxxxxx> wrote:

> [...]
>>>
>>> I am not able to get reliable results from shell/lvconvert-repair-raid.sh
>>> either. For 6.6.0 kernel, the test fails. On 6.8-rc1 kernel, the test
>>> fails sometimes.
>>>
>>> Could you please share more information about your test setup?
>>> Specifically:
>>> 1. Which tree/branch/tag are you testing?
>>> 2. What's the .config used in the tests?
>>> 3. How do you run the test suite? One test at a time, or all of them
>>>    together?
>>> 4. How do you handle "test passes sometimes" cases?

>> So, I have been able to recreate the case where lvconvert-repair-raid.sh
>> keeps failing. It happens when I run the reproducer on a virtual machine
>> made from a cloud image instead of one that I installed manually. I'm not
>> sure why there is a difference, but I can show you how to reliably
>> recreate the errors I'm seeing.


>> Create a new Fedora 39 virtual machine with the following command. (I'm
>> not sure whether this can be reproduced on a machine with less memory and
>> fewer CPUs, but I can try that if you need me to. You probably also want
>> to pick a faster Fedora mirror for the image location.)
>>
>> # virt-install --name repair-test --memory 8192 --vcpus 8 \
>>       --disk size=40 --graphics none --extra-args "console=ttyS0" \
>>       --osinfo detect=on,name=fedora-unknown \
>>       --location https://download.fedoraproject.org/pub/fedora/linux/releases/39/Server/x86_64/os/
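
(For reference, if you get detached from the guest after the installer
reboots, reattaching might look like the sketch below; `repair-test` matches
the --name above, everything else is stock libvirt tooling, and Ctrl+]
detaches from the serial console:)

# virsh start repair-test
# virsh console repair-test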


> virt-install doesn't work well on my daily dev server. I will try on a
> different machine.

>> Install to the whole virtual drive, using the default LVM partitioning.
>> Then ssh into the VM and run the following commands to set up the
>> lvm2-testsuite and the 6.6.0 kernel:

>> [...]
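
(The exact commands are elided above; a typical from-source lvm2 test-suite
setup, assuming the upstream GitLab repository and stock Fedora build tools,
might look roughly like the following. The package list is approximate:)

# dnf install -y git gcc make autoconf libaio-devel libblkid-devel
# git clone https://gitlab.com/lvmteam/lvm2.git && cd lvm2
# ./configure
# make -j"$(nproc)"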


>> Rerun the lvm2-testsuite with the same commands as before:

>> # mount -o remount,dev /tmp

> This mount trick helped me run tests without a full image (I use
> CONFIG_9P_FS to reuse host file systems instead). Thanks!
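
(For context, sharing a host directory into a qemu guest over 9p might look
like the sketch below; the tag `hostshare` and the paths are made up for
illustration, and the guest kernel needs CONFIG_9P_FS plus
CONFIG_NET_9P_VIRTIO:)

host# qemu-system-x86_64 -m 8192 -smp 8 -enable-kvm \
          -drive file=repair-test.qcow2,if=virtio \
          -virtfs local,path=/srv/lvm2,mount_tag=hostshare,security_model=none \
          -nographic
guest# mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt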

>> # cd ~/lvm2
>> # make check T=lvconvert-repair-raid.sh
>>
>> This fails about 20% of the time, usually at either line 146 or 164 of
>> the test script. You can check by running the following command when the
>> test fails.
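
(To put a number on a test that only fails sometimes, one option is simply to
loop it and count failures; the 25-run count below is an arbitrary choice:)

# fail=0
# for i in $(seq 1 25); do make check T=lvconvert-repair-raid.sh || fail=$((fail + 1)); done
# echo "failed $fail of 25 runs"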

> However, I am seeing lvconvert-repair-raid.sh pass all the time with both
> the 6.6 kernel and the 6.8+v5 patchset. My host system is CentOS 8.


shell/lvconvert-repair-raid.sh fails for SLES 15 SP5 + upstream lvm2 +
the v6.8+v5 patchset, but not with the v6.6 kernel.

--
Su

> I guess we will have to run more tests.
>
> DM folks, please also review the set. We won't be able to ship the
> dm changes without your thorough reviews.
>
> Thanks,
> Song




