Re: raid10 redundancy

On 19/5/21 23:02, Phillip Susi wrote:
> Adam Goryachev writes:
>
>> Jumping into this one late, but I thought the main risk was related to
>> the fact that for every read there is a chance the device will fail to
>> read the data successfully, and so the more data you need to read in
>> order to restore redundancy, the greater the risk of not being able to
>> regain redundancy.
>>
>> There is also the assumption that drives tend to fail after about the
>> same number of reads, and since all of the drives in the array have had
>> about the same number of reads, by the time you get the first failure,
>> a second is likely not far behind.
>
> Both of these assumptions are about as flawed as the mistaken belief,
> held by many, that based on the bit error rates published by drive
> manufacturers, if you read an entire multi-TB drive the odds are quite
> good that you will get an uncorrectable error.  I've tried it many
> times and it doesn't work that way.

Except that is not what I said. I said the risk increases with each read required; I didn't say that you *will* experience a read failure.

It's all about reducing risk without increasing costs to the point where there is no net benefit. Your costs and benefits will differ from the next person's, so there is no single answer that suits everyone: some people will insist on a minimum of triple-mirror RAID10, while others will be fine with RAID5 or even RAID0.
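
To put a rough number on the per-read risk, here is a minimal sketch, assuming the 1-in-10^14 bits unrecoverable-read-error spec commonly published for consumer drives, and assuming every bit fails independently, which is exactly the idealisation Phillip's experience argues against:

import math

def p_at_least_one_ure(bytes_read, ber=1e-14):
    """Probability of at least one unrecoverable read error while
    reading bytes_read bytes, given ber errors per bit read."""
    bits = bytes_read * 8
    # 1 - (1 - ber)**bits, computed via log1p/expm1 for precision
    return -math.expm1(bits * math.log1p(-ber))

for tb in (1, 4, 12):
    print(f"{tb} TB read: P(>=1 URE) = {p_at_least_one_ure(tb * 1e12):.1%}")

Taken at face value, that spec already gives several percent risk per TB read, and a full rebuild of a large array multiplies it further. Whether the published figure reflects real drives is a separate question, but the "more reads, more risk" relationship holds either way.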

It sounds like you are trying to say that, no matter how many reads are required, you will never experience a read failure? That doesn't seem to match what the manufacturers are saying.

Regards,
Adam



