Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file

On 4/1/2021 10:49 PM, Jes Sorensen wrote:
On 3/26/21 7:59 AM, Nigel Croxon wrote:
----- Original Message -----
From: "Oleksandr Shchirskyi" <oleksandr.shchirskyi@xxxxxxxxxxxxxxx>
To: "Nigel Croxon" <ncroxon@xxxxxxxxxx>
Cc: linux-raid@xxxxxxxxxxxxxxx, "Mariusz Tkaczyk" <mariusz.tkaczyk@xxxxxxxxxxxxxxx>, "Jes Sorensen" <jes@xxxxxxxxxxxxxxxxxx>
Sent: Tuesday, March 23, 2021 4:58:27 PM
Subject: Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file

On 3/23/2021 5:36 PM, Nigel Croxon wrote:
Oleksandr,
Can you post your dmesg output when running the commands?

I've backed the kernel down from 5.11 to 5.8 and I still see:
[  +0.042694] md/raid0:md126: raid5 must have missing parity disk!
[  +0.000001] md: md126: raid0 would not accept array

Thanks, Nigel

Hello Nigel,

I've switched to the 4.18.0-240.el8.x86_64 kernel (I have RHEL 8.3) and I
still get the same results; the issue is still easily reproducible when
patch 4ae96c8 is applied.

Cropped test logs with and without your patch:

# git log -n1 --oneline
f94df5c (HEAD -> master, origin/master, origin/HEAD) imsm: support for
third Sata controller
# make clean; make; make install-systemd; make install
# mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0-3]n1 && mdadm -CR volume -l0
--chunk 64 --size=10G --raid-devices=1 /dev/nvme0n1 --force
# mdadm -G /dev/md/imsm0 -n2
# dmesg -c
[  393.530389] md126: detected capacity change from 0 to 10737418240
[  407.139318] md/raid:md126: device nvme0n1 operational as raid disk 0
[  407.153920] md/raid:md126: raid level 4 active with 1 out of 2 devices,
algorithm 5
[  407.246037] md: reshape of RAID array md126
[  407.357940] md: md126: reshape interrupted.
[  407.388144] md: reshape of RAID array md126
[  407.398737] md: md126: reshape interrupted.
[  407.403486] md: reshape of RAID array md126
[  459.414250] md: md126: reshape done.
# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md126 : active raid4 nvme3n1[2] nvme0n1[0]
        10485760 blocks super external:/md127/0 level 4, 64k chunk,
algorithm 0 [3/2] [UU_]

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
        4420 blocks super external:imsm

unused devices: <none>

# mdadm -Ss; wipefs -a /dev/nvme[0-3]n1
# dmesg -C
# git revert 4ae96c802203ec3cfbb089240c56d61f7f4661b3
# make clean; make; make install-systemd; make install
# mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0-3]n1 && mdadm -CR volume -l0
--chunk 64 --size=10G --raid-devices=1 /dev/nvme0n1 --force
# mdadm -G /dev/md/imsm0 -n2
# dmesg -c
[  623.772039] md126: detected capacity change from 0 to 10737418240
[  644.823245] md/raid:md126: device nvme0n1 operational as raid disk 0
[  644.838542] md/raid:md126: raid level 4 active with 1 out of 2 devices,
algorithm 5
[  644.928672] md: reshape of RAID array md126
[  697.405351] md: md126: reshape done.
[  697.409659] md126: detected capacity change from 10737418240 to 21474836480
# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md126 : active raid0 nvme3n1[2] nvme0n1[0]
        20971520 blocks super external:/md127/0 64k chunks

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
        4420 blocks super external:imsm


Do you need more detailed logs, or details of my system/drive configuration?

Regards,
Oleksandr Shchirskyi




From f0c80c8e90b2ce113b6e22f919659430d3d20efa Mon Sep 17 00:00:00 2001
From: Nigel Croxon <ncroxon@xxxxxxxxxx>
Date: Fri, 26 Mar 2021 07:56:10 -0400
Subject: [PATCH] mdadm: fix growing containers

This fixes growing containers, which was broken by
commit 4ae96c802203ec3c (mdadm: fix reshape from RAID5 to RAID6 with
backup file).

The issue is that containers use the function
wait_for_reshape_imsm, which expects a numeric value rather than
the string value "max". The change is to test for external
metadata before setting the correct value.

Signed-off-by: Nigel Croxon <ncroxon@xxxxxxxxxx>
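
Since the diff itself was not quoted in this message, here is a minimal
standalone C sketch of the logic the commit message describes. It is
illustrative only: format_sync_max() and is_external are hypothetical
names, not the actual mdadm code; only wait_for_reshape_imsm() is named
by the commit message itself.

#include <stdio.h>

/*
 * Sketch of the described fix: external-metadata (container) paths
 * such as wait_for_reshape_imsm() parse sync_max as a number, so
 * they must be handed a numeric value, while native md also accepts
 * the literal string "max".
 */
static void format_sync_max(int is_external, unsigned long long max_progress,
			    char *buf, size_t len)
{
	if (is_external)
		snprintf(buf, len, "%llu", max_progress); /* number for imsm */
	else
		snprintf(buf, len, "max"); /* keyword accepted by native md */
}

int main(void)
{
	char buf[32];

	format_sync_max(1, 20971520ULL, buf, sizeof(buf));
	printf("external: sync_max=%s\n", buf); /* prints 20971520 */

	format_sync_max(0, 20971520ULL, buf, sizeof(buf));
	printf("native:   sync_max=%s\n", buf); /* prints max */

	return 0;
}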

I was about to revert the problematic patch. Oleksandr, can you confirm
whether this fix resolves the issues you were seeing?

Thanks,
Jes


Hi Jes,

Yes, I can confirm that the issue has been resolved with this patch.

Thanks,
Oleksandr Shchirskyi


