Re: RAID 10 with 2 failed drives

On 9/20/19 8:59 AM, Wols Lists wrote:
> On 19/09/19 21:45, Liviu Petcu wrote:
>> Hello,
>>
>> Please let me know whether, in the situation detailed below, there is
>> a chance of restoring the RAID 10 array, and how I can do it safely.
>> Thank you!
>
> This is linux raid 10, not some form of raid 1+0? That's what it looks
> like to me. I notice it says the array is active! That, I think, is
> good news!

I thought there was supposed to be a flag like 'degraded' if the array was actually running degraded. I can't find the kernel documentation any more.
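
(For anyone else checking: the running state is visible in a couple of
places. A sketch, assuming the array is /dev/md0 - substitute the real
name:

    $ cat /proc/mdstat          # missing members show as _ in the [UU__] map
    $ mdadm --detail /dev/md0   # look for "State : clean, degraded"

If I remember right there is also a sysfs attribute,
/sys/block/md0/md/degraded, that reports how many member devices are
missing or failed.)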


> Can you mount it read-only and read it? I would be surprised if you
> can't, which means the array is running fine in degraded mode. NOT GOOD
> but not a problem provided nothing further goes wrong. I notice it's
> also using version 0.9 metadata - is it an old array? Have the drives
> themselves failed? (which I guess is probably the case :-( ) I guess
> the drives effectively have just the one partition - 2 - and 1 is
> something unimportant?
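
(For the record, the read-only mount suggested above would be something
like the below - /dev/md0 and /mnt are assumptions, substitute the real
array and mount point:

    $ mount -o ro /dev/md0 /mnt
)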

What you said is definitely true for a near layout with an even number of devices and n=2: the two copies of each chunk always sit on the same fixed pair of devices, so the mirror pairs are disjoint.
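
To illustrate (near layout, n=2 copies, 4 devices, letters denoting
chunks as in the article linked below):

    device:  0   1   2   3
             A   A   B   B
             C   C   D   D

The copies never straddle the (0,1)/(2,3) boundary, so the only fatal
two-drive combinations are the mirror pairs themselves.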

I thought the offset layout meant that losing any two adjacent raid devices is data loss, assuming this is accurate:

http://www.ilsistemista.net/index.php/linux-a-unix/35-linux-software-raid-10-layouts-performance-near-far-and-offset-benchmark-analysis.html?start=1
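
For what it's worth, the claim is easy to brute-force. This is a sketch
of the chunk-to-device mappings as I read them from that article (n=2
copies), not the md driver's actual code, so treat the formulas as
assumptions:

from itertools import combinations

def near_devs(chunk, d, n=2):
    # near layout: the n copies of chunk c land on consecutive
    # devices, packed as (c*n + k) mod d
    return {(chunk * n + k) % d for k in range(n)}

def offset_devs(chunk, d, n=2):
    # offset layout: each stripe is repeated on the next row, shifted
    # one device to the right, so copies sit on adjacent devices
    return {(chunk % d + k) % d for k in range(n)}

def fatal_pairs(layout, d, n=2):
    # a two-device failure is fatal if some chunk has all of its
    # copies on exactly those two devices
    chunks = range(d * n)   # one full cycle of the mapping
    return [pair for pair in combinations(range(d), 2)
            if any(layout(c, d, n) <= set(pair) for c in chunks)]

print("near,   4 devices:", fatal_pairs(near_devs, 4))
print("offset, 4 devices:", fatal_pairs(offset_devs, 4))

For 4 devices that prints only the two mirror pairs for near, but every
adjacent pair (including the 3-0 wrap-around) for offset, which is why
I think any two adjacent failures are fatal there.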

--Sarah


