On 5/16/2021 11:16 PM, Christopher Thomas wrote:
Hi all,
I've updated my system & migrated my 3 raid5 component drives from the
old to the new, but now can't reassemble the array - mdadm just
doesn't recognize that these belong to an array at all.
The scenario:
For many years, I've run a raid5 array on a virtual Linux server
(Ubuntu 12.04) in VirtualBox on a Windows 10 host, with three 2.7TB drives
attached to the virt in "Raw Disk" mode, and assembled into an array.
I recently upgraded to a completely different physical machine, but
still running Windows 10 and VirtualBox. I'm reasonably sure that the
last time I shut it down, the array was clean. Or at the very least,
the drives had superblocks. I plugged the old drives into it,
migrated the virtual machine image to the new system, and attached
them as raw disks, just as in the old system. And they show up as
/dev/sd[b-d], as before. However, it's not recognized automatically
as an array at boot, and manual attempts to assemble & start the array
fail with 'no superblock'.
The closest I've found online as a solution is to --create the array
again using the same parameters. But it sounds like if I don't get
the drive order exactly the same, I'll lose the data. Other solutions
hint at playing with the partition table, but I'm equally nervous
about that. So I thought it was a good time to stop & ask for advice.
The details:
Here's my arrangement of disks now, where sd[bcd] are the components:
==========
chris@ursula:~$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 20.1G  0 disk
├─sda1   8:1    0 19.2G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0  976M  0 part [SWAP]
sdb      8:16   0  2.7T  0 disk
└─sdb1   8:17   0  128M  0 part
sdc      8:32   0  2.7T  0 disk
└─sdc1   8:33   0  128M  0 part
sdd      8:48   0  2.7T  0 disk
└─sdd1   8:49   0  128M  0 part
sr0     11:0    1 1024M  0 rom
==========
The first thing you need to do is copy those drives onto safe media.
I know this means buying a new drive, but an 8TB drive is not that
expensive. I would format the new drive and mount it on some directory:
mkfs.ext4 /dev/sdX    # substitute the new drive's real device name;
                      # ext4 matters here, since plain mkfs gives you
                      # ext2, whose ~2TiB file-size limit is smaller
                      # than these 2.7T images
mkdir /Safety
mount /dev/sdX /Safety
cd /Safety
# image each component; the mapfile lets an interrupted copy resume
ddrescue /dev/sdb disk1 /Safety/disk1-map
ddrescue /dev/sdc disk2 /Safety/disk2-map
ddrescue /dev/sdd disk3 /Safety/disk3-map
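A nice side effect of the mapfiles is that ddrescue can be stopped and
restarted without losing progress, and if any sectors fail to read you
can go back for extra passes over just the bad areas, for example:

ddrescue -d -r3 /dev/sdb disk1 /Safety/disk1-map

(-d uses direct disc access, -r3 retries the bad sectors up to three
more times.)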
As mentioned earlier in this thread, you can create overlays of the
original disks (sketched below). That, or you can attach the backup
images as loop devices.
Actually, if it were me, I would create two sets of backups and work on
the second set, but then I am hyper-anal about such things. I don't
just employ a belt and suspenders. I use a belt, suspenders, large
numbers of staples, super glue, and a braided steel strap welded in
place. Use whatever level of redundancy with which you feel
comfortable, but I *DEFINITELY* do not recommend working with the
original media. Indeed, I do not recommend attempting to recover the
original media, at all. Once you have a solution identified, I would
employ new drives, keeping the old as backups. (Which rather raises the
question, "Why don't you have backups of the data, so you could simply
create an entirely new empty array and copy the data to the new array
from the backup?")
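If you do want the overlay route, here is a minimal sketch using
device-mapper snapshots, so that all writes land in a scratch file
instead of on the original disk. The /Safety paths and the "sdb-ovl"
name are just placeholders; repeat for sdc and sdd:

# sparse scratch file, same size as the original disk
truncate -s $(blockdev --getsize64 /dev/sdb) /Safety/overlay-sdb
cow=$(losetup -f --show /Safety/overlay-sdb)
# reads come from /dev/sdb, writes go to the scratch file
dmsetup create sdb-ovl --table "0 $(blockdev --getsz /dev/sdb) snapshot /dev/sdb $cow P 8"

You would then hand /dev/mapper/sdb-ovl (and friends) to mdadm instead
of the raw disks.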
I would then attach each backup image to a loop device (--show makes
losetup print which /dev/loopN each image was given):
losetup -fP --show disk1
losetup -fP --show disk2
losetup -fP --show disk3
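You can double-check which image went to which device at any time with:

losetup -l

which matters because the create command below depends on getting that
mapping right.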
Then I would try recreating the RAID based upon the earlier Examine
report. Note --assume-clean: it stops md from kicking off a resync,
which would rewrite parity (and trash data if the guessed order turns
out to be wrong):

mdadm -C -f --assume-clean -n 3 -l 5 -e 1.2 -c 512 -p ls /dev/md99 \
    /dev/loop2 /dev/loop0 /dev/loop1
You may notice some of the command switches are defaults. Remember
what I said about a belt and suspenders? Personally, in such a case I
would not rely on defaults.
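Before touching the filesystem, it is also worth comparing the new
array's geometry against the old Examine report, in particular the
data offset, since a newer mdadm may pick a different default than the
one the array was originally created with:

mdadm -D /dev/md99
mdadm -E /dev/loop0
cat /proc/mdstat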
Now try a read-only check on the assembled array (-n makes fsck report
problems without attempting any repairs):

fsck -n /dev/md99
If that fails, shut down the array with

mdadm -S /dev/md99

and then try creating the array with a different drive order. Three
disks can be ordered in 3! = 6 ways, so there are five other
permutations to try (a loop for that is sketched below). If none of
those work, you have some more serious problems.
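Since trying those by hand gets tedious, here is a rough sketch of a
loop over every order. It assumes the images ended up on /dev/loop0
through /dev/loop2 and that your mdadm accepts --run to suppress the
"Continue creating array?" prompt; treat it as a starting point rather
than gospel:

for order in "0 1 2" "0 2 1" "1 0 2" "1 2 0" "2 0 1" "2 1 0"; do
    mdadm -S /dev/md99 2>/dev/null    # tear down the previous attempt
    devs=""
    for i in $order; do devs="$devs /dev/loop$i"; done
    mdadm -C -f --run --assume-clean -n 3 -l 5 -e 1.2 -c 512 -p ls \
        /dev/md99 $devs
    if fsck -n /dev/md99 >/dev/null 2>&1; then
        echo "order $order looks sane"
    fi
done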