I’m trying your scenario without my patch (it’s reverted) and I’m not seeing it succeed.

[root@fedora33 mdadmupstream]# mdadm -CR volume -l0 --chunk 64 --raid-devices=1 /dev/nvme0n1 --force
mdadm: /dev/nvme0n1 appears to be part of a raid array:
       level=container devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: Creating array inside imsm container md127
mdadm: array /dev/md/volume started.
[root@fedora33 mdadmupstream]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md126 : active raid0 nvme0n1[0]
      500102144 blocks super external:/md127/0 64k chunks

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
      4420 blocks super external:imsm

unused devices: <none>
[root@fedora33 mdadmupstream]# mdadm -G /dev/md/imsm0 -n2
[root@fedora33 mdadmupstream]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0]
md126 : active raid4 nvme3n1[2] nvme0n1[0]
      500102144 blocks super external:-md127/0 level 4, 64k chunk, algorithm 5 [2/1] [U_]

md127 : inactive nvme3n1[3](S) nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
      4420 blocks super external:imsm

unused devices: <none>

dmesg says:

[Mar16 11:46] md/raid:md126: device nvme0n1 operational as raid disk 0
[ +0.011147] md/raid:md126: raid level 4 active with 1 out of 2 devices, algorithm 5
[ +0.044605] md/raid0:md126: raid5 must have missing parity disk!
[ +0.000002] md: md126: raid0 would not accept array

> On Mar 16, 2021, at 10:54 AM, Tkaczyk, Mariusz <mariusz.tkaczyk@xxxxxxxxxxxxxxx> wrote:
>
> Hello Nigel,
>
> Blame told us that your patch introduces a regression in the following
> scenario:
>
> #mdadm -CR imsm0 -e imsm -n4 /dev/nvme[0125]n1
> #mdadm -CR volume -l0 --chunk 64 --raid-devices=1 /dev/nvme0n1 --force
> #mdadm -G /dev/md/imsm0 -n2
>
> At the end of the reshape, the level doesn't go back to RAID0.
> Could you look into it?
> Let me know if you need support.
>
> Thanks,
> Mariusz
>
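
P.S. For reference, this is the rough check I'm running after the grow to see whether the volume actually gets handed back to raid0 once the reshape finishes. It's only a sketch: it assumes md126 is the IMSM volume and md127 the container, exactly as in the mdstat output above, and the mdadm/sysfs queries are just my own way of poking at the state, not part of Mariusz's reproducer.

#!/bin/bash
# Sketch only: assumes md126 is the IMSM volume and md127 the container,
# and that the reshape was started with "mdadm -G /dev/md/imsm0 -n2".
set -x

# Block until any reshape/resync activity on the volume has finished.
mdadm --wait /dev/md126

# After the reshape, md is expected to take the array back over to raid0.
# Check which level we actually ended up at.
cat /proc/mdstat
cat /sys/block/md126/md/level
mdadm --detail /dev/md126 | grep -i 'Raid Level'

In the failing case above this still reports raid4, which matches the "raid0 would not accept array" message in dmesg.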