Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file

On 16.03.2021 16:59, Nigel Croxon wrote:


----- Original Message -----
From: "Mariusz Tkaczyk" <mariusz.tkaczyk@xxxxxxxxxxxxxxx>
To: "Jes Sorensen" <jes@xxxxxxxxxxxxxxxxxx>, "Nigel Croxon" <ncroxon@xxxxxxxxxx>, linux-raid@xxxxxxxxxxxxxxx, xni@xxxxxxxxxx
Sent: Tuesday, March 16, 2021 10:54:22 AM
Subject: Re: [PATCH] mdadm: fix reshape from RAID5 to RAID6 with backup file

Hello Nigel,

Blame told us that your patch introduced a regression in the following
scenario:

#mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0125]n1
#mdadm -CR volume -l0 --chunk 64 --raid-devices=1 /dev/nvme0n1 --force
#mdadm -G /dev/md/imsm0 -n2

At the end of the reshape, the level doesn't go back to RAID0.
Could you look into it?
Let me know if you need support.

Thanks,
Mariusz

I tried your scenario without my patch (it's reverted) and I'm still not seeing success.
See the dmesg log below.


[root@fedora33 mdadmupstream]# mdadm -CR volume -l0 --chunk 64 --raid-devices=1 /dev/nvme0n1 --force
mdadm: /dev/nvme0n1 appears to be part of a raid array:
       level=container devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: Creating array inside imsm container md127
mdadm: array /dev/md/volume started.

[root@fedora33 mdadmupstream]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md126 : active raid0 nvme0n1[0]
      500102144 blocks super external:/md127/0 64k chunks

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
      4420 blocks super external:imsm

unused devices: <none>
[root@fedora33 mdadmupstream]# mdadm -G /dev/md/imsm0 -n2
[root@fedora33 mdadmupstream]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md126 : active raid4 nvme3n1[2] nvme0n1[0]
      500102144 blocks super external:-md127/0 level 4, 64k chunk, algorithm 5 [2/1] [U_]

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
      4420 blocks super external:imsm

unused devices: <none>


dmesg says:
[Mar16 11:46] md/raid:md126: device nvme0n1 operational as raid disk 0
[  +0.011147] md/raid:md126: raid level 4 active with 1 out of 2 devices, algorithm 5
[  +0.044605] md/raid0:md126: raid5 must have missing parity disk!
[  +0.000002] md: md126: raid0 would not accept array

-Nigel

Hello Nigel,
This looks strange. Could you try to reproduce it with --size set to less
than the smallest drive in the array (e.g. 10G)?

If that doesn't help, please provide your kernel version and I will try to
reproduce it myself.
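For reference, a sketch of the suggested reproduction with an explicit --size. This is only an illustration based on the commands quoted above; the device names and the 10G size are assumptions and need adjusting for the actual test system (requires root and real/virtual NVMe devices):

```shell
# Create the IMSM container and a single-disk RAID0 volume, this time
# capped with --size so the member size is below the smallest drive.
mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0125]n1
mdadm -CR volume -l0 --chunk 64 --raid-devices=1 --size 10G /dev/nvme0n1 --force

# Grow the container to two raid devices; the reshape runs through an
# intermediate raid4 personality.
mdadm -G /dev/md/imsm0 -n2

# After the reshape completes, check whether md126 returns to raid0.
cat /proc/mdstat
```

With a smaller --size the reshape finishes much faster, which makes it easier to observe whether the final level transition back to raid0 happens or fails as in the dmesg output above.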

Thanks,
Mariusz
