On 23/06/2014 05:04 PM, Theodotos Andreou wrote:
Hi to all,
I have a RAID 1 / RAID 10 setup that failed. I booted with a recovery
USB (grml) to try to recover the system. Let me explain the setup to you.
This is my parted listing:
http://pastebin.com/6QdyXRQN
The first partitions (/dev/sd[ad]1) are for EFI. No RAID here.
The second partitions (/dev/sd[ad]2) hold the /boot filesystem. They
used to make up /dev/md0, a RAID 1 array.
The third partitions (/dev/sd[ad]3) used to make up /dev/md1, a RAID 10
array, which is the LVM physical volume that hosts everything else.
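For context, the layout can be inspected from the rescue environment
with something like this (I am assuming the four members are sda
through sdd):

  # What the kernel currently sees
  cat /proc/mdstat

  # RAID superblocks on the member partitions
  mdadm --examine /dev/sd[abcd]2
  mdadm --examine /dev/sd[abcd]3

  # The LVM physical volume that should sit on top of /dev/md1
  pvs
  lvs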
From the parted listing it looks like there is some partition table
corruption on /dev/sdd.
When I try 'mdadm --verbose --assemble --scan' I get:
http://pastebin.com/iqGF9En7
The output of 'mdadm -Evvvvs' is:
http://pastebin.com/kizjT7xE
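I was under the impression that with one member missing the RAID 10
array can still be started degraded with something like the following
(member names assumed from the layout above):

  # Assemble explicitly from the surviving partitions and start degraded
  mdadm --assemble --run /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3

  # Check the result
  cat /proc/mdstat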
Assuming I replace the sdd disk and create the appropriate partition
scheme, what is the correct methodology to restore my md devices? I
don't care much about /dev/md0, but mainly about /dev/md1, which holds
all the data.
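Is something like the following roughly the right direction? The device
names and the sgdisk step are assumptions on my part, not something I
have tried yet:

  # Copy the GPT layout from a healthy disk (sda) onto the new disk (sdd),
  # then randomise the GUIDs on the copy
  sgdisk -R /dev/sdd /dev/sda
  sgdisk -G /dev/sdd

  # Add the new partitions back into the degraded arrays
  mdadm --manage /dev/md0 --add /dev/sdd2
  mdadm --manage /dev/md1 --add /dev/sdd3

  # Watch the rebuild
  cat /proc/mdstat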
Regards
Theo
It turns out the sdd disk was unplugged and I had mistaken the USB
drive for the internal disk. This explains why the UUIDs did not match
the device names.
After I plugged the sdd disk back in, everything went back to normal.
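For anyone hitting this thread later, a quick way to double-check that
everything is really back is roughly:

  cat /proc/mdstat
  mdadm --detail /dev/md0
  mdadm --detail /dev/md1

  # And that LVM sees its physical volume on /dev/md1 again
  pvs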
So next time... Don't panic! :)
Sorry for the false alarm, guys.