Hi,
On 10.04.24 at 03:56, Li Nan wrote:
Hi, Köhler
On 2024/4/9 7:31, Sven Köhler wrote:
Hi,
I was shocked to find that upon reboot, my Linux machine was detecting
/dev/sd[abcd] as members of a RAID array. It would assign those
members to /dev/md4. It would not run the RAID arrays /dev/mdX with
members /dev/sd[abcd]X for X=1,2,3,4 as it had for the past couple of
years.
My server was probably a unicorn in the sense that it used metadata
version 0.90. This version of software RAID metadata is stored at the
_end_ of a partition. In my case, /dev/sda4 would be the last
partition on drive /dev/sda. I confirmed with mdadm --examine that
metadata with the identical UUID would be found on both /dev/sda4 and
/dev/sda.
I am trying to reproduce it, but after reboot, md0 started correctly
with members /dev/sd[bc]2. And mdadm warns when assembling with
'mdadm -A'.
# mdadm -CR /dev/md0 -l1 -n2 /dev/sd[bc]2 --metadata=0.9
# mdadm -S --scan
# mdadm -A --scan
mdadm: WARNING /dev/sde2 and /dev/sde appear to have very similar superblocks.
      If they are really different, please --zero the superblock on one
      If they are the same or overlap, please remove one from the
      DEVICE list in mdadm.conf.
mdadm: No arrays found in config file or automatically
Can you tell me how you created and configured the RAID?
I should have mentioned the mdadm and kernel versions. I am using
mdadm 4.3-2 and linux-lts 6.6.23-1 on Arch Linux.
I created the array very similarly to what you did:

mdadm --create /dev/md4 --level=6 --raid-devices=4 --metadata=0.90 \
    /dev/sd[abcd]4
My mdadm.conf looks like this:
DEVICE partitions
ARRAY /dev/md/4 metadata=0.90 UUID=...
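(For reference, an ARRAY line like that is what mdadm prints itself;
it can be regenerated with "mdadm --detail --scan" while the array is
running.)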
And /proc/partitions looks like this:
major minor  #blocks  name

   8        0 2930266584 sda
   8        1    1048576 sda1
   8        2   33554432 sda2
   8        3   10485760 sda3
   8        4 2885176775 sda4
   8       16 2930266584 sdb
   8       17    1048576 sdb1
   8       18   33554432 sdb2
   8       19   10485760 sdb3
   8       20 2885176775 sdb4
   8       32 2930266584 sdc
   8       33    1048576 sdc1
   8       34   33554432 sdc2
   8       35   10485760 sdc3
   8       36 2885176775 sdc4
   8       48 2930266584 sdd
   8       49    1048576 sdd1
   8       50   33554432 sdd2
   8       51   10485760 sdd3
   8       52 2885176775 sdd4
Interestingly, the whole-disk devices sda, sdb, etc. are listed there
too. So "DEVICE partitions" actually considers them as well.
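As a workaround, the DEVICE line can be restricted so that whole disks
are never scanned; a sketch, assuming all array members are partitions
on sd* disks:

DEVICE /dev/sd*[0-9]

That makes mdadm look only at partition nodes, though it doesn't fix
the underlying ambiguity.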
Here's what I think went wrong: I believe either the kernel or mdadm
(likely the latter) was seeing the metadata at the end of /dev/sda and
ignored the fact that the location of the metadata was actually owned
by a partition (namely /dev/sda4). The same happened for /dev/sd[bcd],
and thus I ended up with /dev/md4 being started with members
/dev/sd[abcd] instead of members /dev/sd[abcd]4.
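That theory is easy to check: a 0.90 superblock lives in the last
64 KiB-aligned 64 KiB of the device. A sketch that computes the
absolute on-disk byte offset of the superblock from both points of
view (assuming 512-byte sectors; paths from my setup):

# start=$(cat /sys/block/sda/sda4/start)
# psz=$(blockdev --getsize64 /dev/sda4)
# dsz=$(blockdev --getsize64 /dev/sda)
# echo "seen from sda:  $(( dsz / 65536 * 65536 - 65536 ))"
# echo "seen from sda4: $(( start * 512 + psz / 65536 * 65536 - 65536 ))"

If both print the same offset, the whole disk and its last partition
share a single superblock.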
This behavior started recently. I saw in the logs that I had updated
mdadm but also the Linux kernel. mdadm and an appropriate mdadm.conf
are part of my initcpio. My mdadm.conf lists the arrays with their
metadata version and their UUID.
Starting a RAID array with members /dev/sd[abcd] somehow removed the
partitions of the drives. The partition table would still be present,
but the partitions would disappear from /dev. So /dev/sd[abcd]1-3 were
not visible anymore, and thus /dev/md1-3 would not be started.
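For anyone who ends up in the same spot: stopping the mis-assembled
array and re-reading the partition table should bring the partition
nodes back, along the lines of:

# mdadm --stop /dev/md4
# blockdev --rereadpt /dev/sda

(or partprobe instead of blockdev --rereadpt).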
I strongly believe that mdadm should ignore any metadata - regardless
of the version - that is at a location owned by any of the partitions.
While I'm not 100% sure how to implement that, the following might
also work: first scan the partitions for metadata, then ignore
metadata on a parent device if its UUID was already found on one of
its partitions.
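Purely as an illustration of that ordering - a hypothetical shell
script, not how mdadm is structured internally:

# Collect UUIDs found on partitions first, then skip any whole disk
# whose superblock UUID was already claimed by one of its partitions.
seen=""
for part in /dev/sd[a-d][1-4]; do
    uuid=$(mdadm --examine "$part" 2>/dev/null | awk '/UUID/ { print $3; exit }')
    [ -n "$uuid" ] && seen="$seen $uuid"
done
for disk in /dev/sd[a-d]; do
    uuid=$(mdadm --examine "$disk" 2>/dev/null | awk '/UUID/ { print $3; exit }')
    [ -n "$uuid" ] || continue
    case " $seen " in
        *" $uuid "*) echo "$disk: UUID already claimed by a partition, ignoring" ;;
        *)           echo "$disk: standalone superblock" ;;
    esac
done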
I did the right thing and converted my RAID arrays to metadata 1.2,
but I'd like to save others from the adrenaline shock.
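After the conversion, it's worth double-checking that --examine on the
whole-disk devices no longer finds anything, the expected output being:

# mdadm --examine /dev/sda
mdadm: No md superblock detected on /dev/sda.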
Kind Regards,
Sven