Re: Kernel bug in async_xor_offs during RAID5 recovery

Hi Xiao,
I simplified the scenario.

On 06.05.2021 12:57, Xiao Ni wrote:
> Hi Oleksandr Shchirskyi
>
> Can this happen only with PPL, IMSM, and NVMe disks? My machine doesn't support creating RAID devices with NVMe disks.
> Could you try to create the array with IMSM_NO_PLATFORM=1?
>
> Also, rotational disks don't have /sys/block/nvme1n1/device/device/remove. What does writing 1
> to that remove file do?
>
> I tried to create an IMSM RAID with rotational disks and PPL, then removed and re-added a disk to trigger recovery. It works
> well.
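To answer the question about the remove file: it is the PCI hot-unplug node for the drive, so writing 1 to it detaches the device as if it were surprise-removed. A minimal sketch (the device name nvme1n1 is just an example from my setup):

```shell
# Writing 1 to the PCI "remove" node under the NVMe block device detaches
# the drive as if it were hot-unplugged.
dev=/sys/block/nvme1n1/device/device/remove
if [ -w "$dev" ]; then
    echo 1 > "$dev"
    # A PCI bus rescan brings the device back afterwards:
    echo 1 > /sys/bus/pci/rescan
else
    echo "skipping: $dev not present or not writable" >&2
fi
```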
I verified that drive removal is not crucial here.
The main trick is to force PPL recovery. I did that with the following
scenario:
1. Create the array (I was able to reproduce it with a 1 GiB volume):
# mdadm -CR imsm -e imsm -n3 /dev/nvme[456]n1
# mdadm -CR vol2 -l5 -n3 /dev/nvme[456]n1 -z 1G -c64 --assume-clean --consistency-policy=ppl

2. Get the mdmon pid:
# ps -ef | grep mdmon

3. Write data to the array, then kill mdmon (this forces the
array to stay dirty):
# dd of=/dev/md126 if=/dev/urandom bs=4M oflag=direct; kill -9 {mdmon_pid}

4. Stop the array:
# mdadm -Ss

5. Start the array (reassembling the dirty array triggers PPL recovery):
# mdadm -As
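For convenience, the five steps above can be sketched as one script. The device names (/dev/nvme[456]n1) and the md node (/dev/md126) are assumptions from my setup; adjust them for yours. It needs root and mdadm, so it guards against running on a machine without those devices:

```shell
#!/bin/bash
# Sketch of the full reproduction scenario, steps 1-5 above.
# Device names are examples from my setup, not a general recipe.
set -u

reproduce() {
    local disks=(/dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1)

    # 1. Create the IMSM container and a small RAID5 volume with PPL.
    mdadm -CR imsm -e imsm -n3 "${disks[@]}"
    mdadm -CR vol2 -l5 -n3 "${disks[@]}" -z 1G -c64 \
          --assume-clean --consistency-policy=ppl

    # 2. Locate the mdmon process for the container.
    local mdmon_pid
    mdmon_pid=$(pgrep -x mdmon | head -n1)

    # 3. Write data, then kill mdmon so the array stays dirty.
    dd of=/dev/md126 if=/dev/urandom bs=4M oflag=direct || true
    kill -9 "$mdmon_pid"

    # 4 + 5. Stop and reassemble; assembling the dirty array
    # forces PPL recovery.
    mdadm -Ss
    mdadm -As
}

# Only run against real hardware, as root.
if [ "$(id -u)" -eq 0 ] && [ -b /dev/nvme4n1 ]; then
    reproduce
else
    echo "skipping: need root and /dev/nvme4n1" >&2
fi
```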

Thanks,
Mariusz
