Re: Failed RAID 6 array advice

On Tue, 1 Mar 2011, jahammonds prost wrote:

What's the correct process for adding the failed /dev/sde1 back into the array so I can start it. I don't want to rush into this and make things worse.

There are a lot of discussions about this in the archives, but basically I recommend the following:

Make sure you're running the latest mdadm; right now that's 3.1.4. Compile it yourself if you have to. After that, stop the array and use --assemble --force to get it up and running again with the drives you know are good (make sure you don't include any drive that was offlined a long time ago).
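
As a sketch of that sequence, assuming the array is /dev/md0 and the known-good members are /dev/sda1 through /dev/sdd1 plus /dev/sdf1 (hypothetical names, substitute your real devices; check the event counts first so you know which members are stale):

    # compare the Events counter on each member; the long-dead drive
    # will be far behind the others
    mdadm --examine /dev/sd[a-f]1 | egrep 'dev|Events'

    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 \
        /dev/sdc1 /dev/sdd1 /dev/sdf1

--force lets mdadm accept members whose event counts are slightly behind, which is what lets a recently-failed drive back in; just don't list the one that dropped out long ago.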

What's the correct process for replacing the 2 other drives? I am presuming that I need to --fail, then --remove, then --add the drives (one at a time?), but I want to make sure.

Yes. Once you have a working degraded array, you just add them; a resync will start automatically, and if it succeeds everything should be OK.
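
A sketch, again assuming /dev/md0 and two hypothetical replacement partitions /dev/sdg1 and /dev/sdh1:

    mdadm --add /dev/md0 /dev/sdg1
    cat /proc/mdstat        # wait for the first recovery to finish
    mdadm --add /dev/md0 /dev/sdh1

There's usually no need to --fail/--remove first: in a degraded array the failed slots are already empty, so --add is enough (use --remove only if the old device still shows up in mdadm --detail /dev/md0). Adding one at a time, as you suggest, makes the rebuild easy to watch in /proc/mdstat.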

--
Mikael Abrahamsson    email: swmike@xxxxxxxxx

