Re: IMSM: Drive removed during I/O is set to faulty but not removed from volume

On 2024/07/18 22:57, Mateusz Kusiak wrote:
Hello,
recently we discovered an issue regarding drive removal during I/O.

Description:
A drive removed during I/O from an IMSM R1D2 array is set to faulty but is not removed from the volume. I/O on the array hangs.

The scenario is as follows (see the setup sketch after this list):
1. Create an R1D2 IMSM array.
2. Create a single partition, format it as ext4 and mount it somewhere.
3. Start multiple checksum test processes (more on that below) and wait a while.
4. Unplug one RAID member.
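
For illustration, roughly what steps 1 and 2 look like on our side. Device names, the volume name and the mount point are placeholders, and the partition node may show up under a different name (e.g. /dev/md126p1):

# mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sdb /dev/sdc
# mdadm --create /dev/md/r1d2 --level=1 --raid-devices=2 /dev/md/imsm0
# parted -s /dev/md/r1d2 mklabel gpt
# parted -s /dev/md/r1d2 mkpart primary ext4 0% 100%
# mkfs.ext4 /dev/md/r1d2p1
# mkdir -p /mnt/r1d2 && mount /dev/md/r1d2p1 /mnt/r1d2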

About "Checksum test":
Checksum test creates ~3GB file and calculates it's checksum twice. It basically does the following: # dd if=/proc/kcore bs=1024 count=3052871 status=none | tee <filename> | md5sum
...and then recalculates checksum to verify if it matches.
In this scenario we use it to simulate I/O, by running multiple tests.
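
A minimal sketch of such a test as a script (the script name, the file-name argument and the degree of parallelism are assumptions):

#!/bin/bash
# checksum_test.sh (hypothetical): write ~3GB from /proc/kcore to the given file,
# checksum the stream while writing, then re-read the file and verify the checksum.
file="$1"
first=$(dd if=/proc/kcore bs=1024 count=3052871 status=none | tee "$file" | md5sum | awk '{print $1}')
second=$(md5sum "$file" | awk '{print $1}')
[ "$first" = "$second" ] || { echo "checksum mismatch on $file" >&2; exit 1; }

Several instances can then be started in parallel on the mounted volume, for example:
# for i in $(seq 8); do ./checksum_test.sh /mnt/r1d2/file$i & done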

Expected result:
The RAID member is removed from both the volume and the container, and the array continues operation on one drive.

Actual result:
The RAID member is set to faulty in the volume and does not disappear (it is not removed), but it is removed from the container. I/O on the mounted volume hangs.

Additional notes:
The issue reproduces on kernel-next. We bisected it, and the likely cause is the patch "md: use new apis to suspend array for adding/removing rdev from state_store()" (cfa078c8b80d0daf8f2fd4a2ab8e26fa8c33bca1), as it is the first commit on which we observe the issue on our reproduction setup.
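
For anyone who wants to repeat the bisection, a rough outline (the good/bad revisions are placeholders; each step means building, booting the suggested revision and running the reproduction):

# git bisect start
# git bisect bad <kernel-next revision that hangs>
# git bisect good <last revision that did not hang>
(build, boot, run the reproduction, then mark the result)
# git bisect good        ...or "git bisect bad", until the first bad commit is reported
# git bisect reset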

Having said that, we also observed the issue, for example, on SLES15SP6 with kernel 6.4.0-150600.10-default, which might indicate that the problem was already present but only became apparent for some reason (a race condition or something else).

Hi,

After some discussion and log collection, it looks like this is a deadlock
introduced by:

https://lore.kernel.org/r/20230825031622.1530464-8-yukuai1@xxxxxxxxxxxxxxx

The root cause is:

1) New I/O is blocked because the array is suspended;
2) md_start_sync suspends the array and waits for inflight I/O to complete;
3) inflight I/O waits for md_start_sync to finish, via
md_write_start->flush_work().

Can you give the following patch a test?

Thanks!
Kuai

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 64693913ed18..10c2d816062a 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8668,7 +8668,6 @@ void md_write_start(struct mddev *mddev, struct bio *bi)
        BUG_ON(mddev->ro == MD_RDONLY);
        if (mddev->ro == MD_AUTO_READ) {
                /* need to switch to read/write */
-               flush_work(&mddev->sync_work);
                mddev->ro = MD_RDWR;
                set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
                md_wakeup_thread(mddev->thread);


I will work on simplifying the scenario and try to provide a script for reproduction.

Thanks,
Mateusz
