Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file

----- Original Message -----
From: "Oleksandr Shchirskyi" <oleksandr.shchirskyi@xxxxxxxxxxxxxxx>
To: "Nigel Croxon" <ncroxon@xxxxxxxxxx>, linux-raid@xxxxxxxxxxxxxxx
Cc: "Mariusz Tkaczyk" <mariusz.tkaczyk@xxxxxxxxxxxxxxx>, "Jes Sorensen" <jes@xxxxxxxxxxxxxxxxxx>
Sent: Monday, March 22, 2021 12:21:11 PM
Subject: Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file

Hello Nigel,

I have collected more info regarding this issue.
I can confirm what Mariusz said: it's a regression caused by patch 4ae96c802203.
The reason for the failure during the reshape is that this patch sets the
sync_max value to "max", but the function wait_for_reshape_imsm, used in some
reshape scenarios, relies on this parameter and does not expect that the value
can be "max". This causes the reshape to fail.
Here's an example of a debug log from this method when the issue is hit:

mdadm: wait_for_reshape_imsm: wrong next position to set 4096 (2048)
mdadm: imsm_manage_reshape: wait_for_reshape_imsm returned error!
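
For illustration only, here is a minimal, hypothetical C sketch (not the actual
mdadm code; read_sync_max is a made-up helper) of why a routine that expects
md/sync_max to hold a number breaks once the file can hold the literal string
"max":

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical reader for the sysfs md/sync_max value: returns 0 and
 * fills *out when the buffer holds a number, -1 when it holds "max"
 * or anything else that cannot be parsed as a number. */
static int read_sync_max(const char *buf, unsigned long long *out)
{
	char *end;

	if (strncmp(buf, "max", 3) == 0)
		return -1;	/* caller has to handle "max" explicitly */

	*out = strtoull(buf, &end, 10);
	if (end == buf)
		return -1;	/* not a number at all */
	return 0;
}

int main(void)
{
	unsigned long long pos;

	/* After the offending patch, sync_max can contain "max" ... */
	if (read_sync_max("max\n", &pos) < 0)
		fprintf(stderr, "sync_max is \"max\": no numeric next position\n");

	/* ... while a position-based reshape loop expects a number. */
	if (read_sync_max("2048\n", &pos) == 0)
		printf("next position to set: %llu\n", pos);

	return 0;
}

The "wrong next position" error above is consistent with this: sync_max no
longer carries the numeric value that wait_for_reshape_imsm relies on.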

With this patch reverted, the issue is not observed. See my logs below:

# mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0-3]n1 && mdadm -CR volume -l0 --chunk 64 --size=10G --raid-devices=1 /dev/nvme0n1 --force
# mdadm -D /dev/md/volume
/dev/md/volume:
          Container : /dev/md/imsm0, member 0
         Raid Level : raid0
         Array Size : 10485760 (10.00 GiB 10.74 GB)
       Raid Devices : 1
      Total Devices : 1
              State : clean
...
# mdadm -G /dev/md/imsm0 -n2
# mdadm -D /dev/md/volume
/dev/md/volume:
          Container : /dev/md/imsm0, member 0
         Raid Level : raid4
         Array Size : 10485760 (10.00 GiB 10.74 GB)
      Used Dev Size : 10485760 (10.00 GiB 10.74 GB)
       Raid Devices : 3
      Total Devices : 2
              State : clean, degraded
...
# git revert 4ae96c802203ec3cfbb089240c56d61f7f4661b3
Auto-merging Grow.c
[master 1166854] Revert "mdadm: fix reshape from RAID5 to RAID6 with backup file"
  1 file changed, 2 insertions(+), 5 deletions(-)
# mdadm -Ss; wipefs -a /dev/nvme[0-3]n1
# make clean; make; make install-systemd; make install
# mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0-3]n1 && mdadm -CR volume -l0 --chunk 64 --size=10G --raid-devices=1 /dev/nvme0n1 --force
# mdadm -G /dev/md/imsm0 -n2
# mdadm -D /dev/md/volume
/dev/md/volume:
          Container : /dev/md/imsm0, member 0
         Raid Level : raid0
         Array Size : 20971520 (20.00 GiB 21.47 GB)
       Raid Devices : 2
      Total Devices : 2
              State : clean
...
#

On 3/16/2021 4:59 PM, Nigel Croxon wrote:
> ----- Original Message -----
> From: "Mariusz Tkaczyk" <mariusz.tkaczyk@xxxxxxxxxxxxxxx>
> To: "Jes Sorensen" <jes@xxxxxxxxxxxxxxxxxx>, "Nigel Croxon" <ncroxon@redhat=
> .com>, linux-raid@xxxxxxxxxxxxxxx, xni@xxxxxxxxxx
> Sent: Tuesday, March 16, 2021 10:54:22 AM
> Subject: Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup fil=
> e
> 
> Hello Nigel,
> 
> Blame told us that your patch introduces a regression in the following
> scenario:
> 
> #mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0125]n1
> #mdadm -CR volume -l0 --chunk 64 --raid-devices=1 /dev/nvme0n1 --force
> #mdadm -G /dev/md/imsm0 -n2
> 
> At the end of the reshape, the level doesn't go back to RAID0.
> Could you look into it?
> Let me know if you need support.
> 
> Thanks,
> Mariusz
> 
> I'm trying your scenario without my patch (it's reverted) and I'm not seeing
> success.
> See the dmesg log.
> 
> 
> [root@fedora33 mdadmupstream]# mdadm -CR volume -l0 --chunk 64 --raid-devices=1 /dev/nvme0n1 --force
> mdadm: /dev/nvme0n1 appears to be part of a raid array:
>        level=container devices=0 ctime=Wed Dec 31 19:00:00 1969
> mdadm: Creating array inside imsm container md127
> mdadm: array /dev/md/volume started.
> 
> [root@fedora33 mdadmupstream]# cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4] [raid0]
> md126 : active raid0 nvme0n1[0]
>       500102144 blocks super external:/md127/0 64k chunks
> 
> md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
>       4420 blocks super external:imsm
> 
> unused devices: <none>
> [root@fedora33 mdadmupstream]# mdadm -G /dev/md/imsm0 -n2
> [root@fedora33 mdadmupstream]# cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4] [raid0]
> md126 : active raid4 nvme3n1[2] nvme0n1[0]
>       500102144 blocks super external:-md127/0 level 4, 64k chunk, algorithm 5 [2/1] [U_]
> 
> md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
>       4420 blocks super external:imsm
> 
> unused devices: <none>
> 
> 
> dmesg says:
> [Mar16 11:46] md/raid:md126: device nvme0n1 operational as raid disk 0
> [  +0.011147] md/raid:md126: raid level 4 active with 1 out of 2 devices, algorithm 5
> [  +0.044605] md/raid0:md126: raid5 must have missing parity disk!
> [  +0.000002] md: md126: raid0 would not accept array
> 
> -Nigel
> 

-- 
Regards,
Oleksandr Shchirskyi


I still see this in dmesg when testing your commands (with my patch reverted).

[ +15.062999]  nvme3n1:
[  +0.027625]  nvme0n1:
[  +0.014124] md126: detected capacity change from 0 to 204800
[  +0.011697]  nvme0n1:
[  +0.016679]  nvme0n1:
[  +0.007536]  nvme3n1:
[  +0.022917]  md126:
[  +0.069564]  nvme0n1:
[ +10.069299] md/raid:md126: device nvme0n1 operational as raid disk 0
[  +0.010772] md/raid:md126: raid level 4 active with 1 out of 2 devices, algorithm 5
[  +0.041509] md/raid0:md126: raid5 must have missing parity disk!
[  +0.000003] md: md126: raid0 would not accept array
