Hello,

until recently I was using Unraid with 8 disks in an XFS array. I installed Debian on a spare SSD in the same machine and started migrating disk by disk from the Unraid array to an mdadm raid5 array, using a second server as temporary storage. So I switched between Debian and Unraid a lot to copy data and to remove/add drives from/to the arrays.

From the beginning I always had to assemble the array without /dev/sdd, add it afterwards and let it rebuild - since the array was working fine afterwards, I didn't think much of it. Apparently Unraid overwrote the superblock of that one disk (and later of a second one; /dev/sdc and /dev/sdd) every time I switched between the two OSs, and now mdadm doesn't recognize those 2 disks, so obviously I can't assemble the array anymore. At least that's what I think happened, since file reports XFS filesystem data once the first 32768 bytes are skipped:
# losetup -o 32768 -f /dev/sdd
# file -s /dev/loop0
/dev/loop0: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
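A quick way to see the difference between the members is mdadm --examine: on the surviving disks it still prints a full 1.2 superblock (level, chunk size, data offset, device role), while on the two overwritten disks it doesn't find any md superblock:

# mdadm --examine /dev/sdc /dev/sdd /dev/sde /dev/sdf1 /dev/sdg /dev/sdh /dev/sdi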
Right now mdadm only assembles 5 (instead of 7) disks as spares into an inactive array at boot:

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde[6](S) sdf1[5](S) sdg[1](S) sdh[3](S) sdi[8](S)
      21487980896 blocks super 1.2

unused devices: <none>

My system:
Linux titan 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux
mdadm - v4.1 - 2018-10-01

Maybe I could try to assemble it with --assume-clean and read-only? I found some pages in the wiki, but I'm not 100% sure that they will solve my problem and I don't want to make things worse:
https://raid.wiki.kernel.org/index.php/Recovering_a_damaged_RAID
https://raid.wiki.kernel.org/index.php/RAID_Recovery
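To make that concrete, what I had in mind is roughly the following - the chunk size and the device order below are pure placeholders, I would take the real values from mdadm --examine on the surviving members first (and ideally run the whole thing on top of the overlay devices described on the RAID_Recovery page). First stop the half-assembled inactive array:

# mdadm --stop /dev/md0

Then recreate it in place without a resync; as far as I understand, every parameter (level, chunk size, metadata version, data offset, device order) has to match the original creation exactly:

# mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=7 \
        --metadata=1.2 --chunk=512 \
        /dev/sdc /dev/sdd /dev/sde /dev/sdf1 /dev/sdg /dev/sdh /dev/sdi

And finally mark the array read-only and only try a read-only mount before trusting anything:

# mdadm --readonly /dev/md0
# mount -o ro /dev/md0 /mnt

If any of those parameters are wrong this would make things worse, which is exactly why I haven't tried it yet.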
I attached the remaining system information as text files in a zip file, since the outputs are pretty long - I hope that's OK.

Thanks, best regards
Clemens
<<attachment: missing_superblock_raid5.zip>>