Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file

On 3/23/2021 5:36 PM, Nigel Croxon wrote:
> Oleksandr,
> Can you post your dmesg output when running the commands?
>
> I've backed down from 5.11 to 5.8 and I still see:
> [  +0.042694] md/raid0:md126: raid5 must have missing parity disk!
> [  +0.000001] md: md126: raid0 would not accept array
>
> Thanks, Nigel

Hello Nigel,

I've switched to the 4.18.0-240.el8.x86_64 kernel (I have RHEL 8.3) and I get the same results; the issue is still easily reproducible when patch 4ae96c8 is applied.

Trimmed test logs with and without your patch:

# git log -n1 --oneline
f94df5c (HEAD -> master, origin/master, origin/HEAD) imsm: support for third Sata controller
# make clean; make; make install-systemd; make install
# mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0-3]n1 && mdadm -CR volume -l0 --chunk 64 --size=10G --raid-devices=1 /dev/nvme0n1 --force
# mdadm -G /dev/md/imsm0 -n2
# dmesg -c
[  393.530389] md126: detected capacity change from 0 to 10737418240
[  407.139318] md/raid:md126: device nvme0n1 operational as raid disk 0
[  407.153920] md/raid:md126: raid level 4 active with 1 out of 2 devices, algorithm 5
[  407.246037] md: reshape of RAID array md126
[  407.357940] md: md126: reshape interrupted.
[  407.388144] md: reshape of RAID array md126
[  407.398737] md: md126: reshape interrupted.
[  407.403486] md: reshape of RAID array md126
[  459.414250] md: md126: reshape done.
# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md126 : active raid4 nvme3n1[2] nvme0n1[0]
      10485760 blocks super external:/md127/0 level 4, 64k chunk, algorithm 0 [3/2] [UU_]

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
      4420 blocks super external:imsm

unused devices: <none>

# mdadm -Ss; wipefs -a /dev/nvme[0-3]n1
# dmesg -C
# git revert 4ae96c802203ec3cfbb089240c56d61f7f4661b3
# make clean; make; make install-systemd; make install
# mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0-3]n1 && mdadm -CR volume -l0 --chunk 64 --size=10G --raid-devices=1 /dev/nvme0n1 --force
# mdadm -G /dev/md/imsm0 -n2
# dmesg -c
[  623.772039] md126: detected capacity change from 0 to 10737418240
[  644.823245] md/raid:md126: device nvme0n1 operational as raid disk 0
[  644.838542] md/raid:md126: raid level 4 active with 1 out of 2 devices, algorithm 5
[  644.928672] md: reshape of RAID array md126
[  697.405351] md: md126: reshape done.
[  697.409659] md126: detected capacity change from 10737418240 to 21474836480
# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md126 : active raid0 nvme3n1[2] nvme0n1[0]
      20971520 blocks super external:/md127/0 64k chunks

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
      4420 blocks super external:imsm

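For convenience, here is the reproduction sequence from above collected into one script. This is only a sketch of what I run by hand: it assumes four free NVMe devices /dev/nvme[0-3]n1 and that the mdadm source tree is the current directory, and it is destructive to those devices. By default it only prints the commands; set EXECUTE=1 and run as root to actually execute them.

```shell
#!/usr/bin/env bash
# Sketch of the reproduction steps from this thread.
# WARNING: with EXECUTE=1 this wipes /dev/nvme[0-3]n1.
set -eu

run() {
    # Print each command; execute it only when EXECUTE=1.
    echo "# $*"
    if [ "${EXECUTE:-0}" = "1" ]; then
        "$@"
    fi
}

# Build and install the mdadm under test.
run make clean
run make
run make install-systemd
run make install

# Create the IMSM container, a single-disk RAID0 volume, then grow to 2 disks.
run mdadm -CR imsm0 -e imsm -n4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
run mdadm -CR volume -l0 --chunk 64 --size=10G --raid-devices=1 /dev/nvme0n1 --force
run mdadm -G /dev/md/imsm0 -n2

# Capture the resulting kernel messages and array state.
run dmesg -c
run cat /proc/mdstat
```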

Do you need more detailed logs, or details of my system/drive configuration?

Regards,
Oleksandr Shchirskyi


