Re: raid10 redundancy

Adam Goryachev writes:

> Jumping into this one late, but I thought the main risk was related to 
> the fact that for every read there is a chance the device will fail to 
> read the data successfully, and so the more data you need to read in 
> order to restore redundancy, the greater the risk of not being able to 
> regain redundancy.

There is also the assumption that drives tend to fail after roughly the
same number of reads; since every drive in the array has seen about the
same number of reads, by the time you get the first failure, a second is
likely not far behind.

Both of these assumptions are about as flawed as the common
misconception that, based on the bit error rates published by drive
manufacturers, reading an entire multi-TB drive gives you quite good
odds of hitting an uncorrectable error.  I've tried it many times and
it doesn't work that way.
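For what it's worth, here is the naive arithmetic behind that belief,
as a quick Python sketch.  The numbers are illustrative assumptions
(a commonly quoted spec of 1 unrecoverable error per 1e14 bits, and a
hypothetical 12 TB drive), not from any particular datasheet:

    import math

    # Assumed, illustrative figures -- not from any specific drive.
    BER = 1e-14           # ~1 unrecoverable error per 1e14 bits read
    DRIVE_BYTES = 12e12   # hypothetical 12 TB drive
    bits = DRIVE_BYTES * 8

    # Expected unrecoverable read errors over one full-drive read,
    # and the chance of at least one, treating each bit as independent.
    expected_errors = bits * BER
    p_at_least_one = 1 - math.exp(-expected_errors)  # Poisson approx.

    print(f"expected UREs per full read: {expected_errors:.2f}")  # ~0.96
    print(f"naive P(>=1 URE):            {p_at_least_one:.2f}")   # ~0.62

That independent-bit model is what makes the odds look so bad on paper;
the experience described above suggests full-drive reads succeed far
more often than this arithmetic predicts.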



