Re: Defective RAID

On Tue, 22 May 2018, Axel Spallek IT-Dienstleistungen wrote:

> I read a howto that said I should recreate the RAID:
> mdadm --create --level=5 --raid-devices=4 /dev/md1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
> That worked, but I could not mount it.
> I realized that the RAID was marked degraded and tried to recover by writing to /dev/sde1.
> I panicked because all drives appeared OK (AAAA) when I scanned them, so I stopped the RAID.
> Then I reassembled the array with three devices but never managed to mount it.
> I read that the order of the drives is important.
> Is that true?
> Did I destroy the RAID?

If you got the order incorrect, then most likely you have destroyed your RAID, yes. It will have scrambled your filesystem blocks and overwritten lots of them with the wrong data. (Recreating a RAID5 with "mdadm --create" starts the array degraded and immediately rebuilds the last listed device from the other three, so with the wrong order that rebuild overwrites it with garbage.)
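
Before you write anything further to those disks, stick to read-only checks. A rough sketch of a first step (device and array names taken from your mail; the filesystem type is a guess, so check it with blkid before running fsck):

  mdadm --stop /dev/md1                  # stop whatever is currently assembled
  mdadm --assemble --readonly /dev/md1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  cat /proc/mdstat                       # confirm the array is up and read-only
  blkid /dev/md1                         # is a filesystem signature still visible?
  fsck.ext4 -n /dev/md1                  # -n = report only, write nothing (assuming ext4)

If fsck finds only garbage, the device order you gave to --create was most likely not the original one.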

https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn is very explicit about this:

"In particular NEVER NEVER NEVER use "mdadm --create" on an already-existing array unless you are being guided by an expert."

--
Mikael Abrahamsson    email: swmike@xxxxxxxxx