Re: raid10 redundancy

On 18.05.21 at 20:51, Phillip Susi wrote:

Reindl Harald writes:

it's common sense that additional load on drives which have the same
history makes a failure on one of them more likely

"It's common sense" = the logical fallacy of hand waving.  Show me
statistical evidence.  I have had lightly loaded drives die in under 2
years and heavily loaded ones last 10 years.  I have replaced failed
drives in a raid and the other drives with essentially the same wear on
them lasted for years without another failure.  There does not appear to
be a strong correlation between usage and drive failure.  Certainly not
one so strong that you can claim with a straight face that after the
first failure, a second one can be expected within X IOPS, and that the
IOPS needed to rebuild the array are a significant fraction of X.
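[A rough back-of-the-envelope sketch of the quantity being disputed here: it estimates the chance that one of the surviving drives fails during the rebuild window, assuming independent failures at a constant rate (the very assumption under debate) and illustrative MTBF and rebuild-time figures that are not taken from either poster.]

import math

def p_second_failure(surviving_drives, rebuild_hours, mtbf_hours):
    # Exponential failure model with assumed, illustrative inputs:
    # P(at least one of n drives fails in t hours) = 1 - exp(-n * t / MTBF)
    return 1.0 - math.exp(-surviving_drives * rebuild_hours / mtbf_hours)

# Illustrative assumptions only: 3 surviving drives of a 4-drive RAID10,
# a 12-hour rebuild, and a 1,000,000-hour datasheet MTBF.
print(f"{p_second_failure(3, 12.0, 1_000_000.0):.5%}")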

do what you want - others prefer to be safe rather than sorry, especially when there is no longer any redundancy and you don't survive any error until the rebuild is finished

and yes, last week I replaced a Seagate *desktop drive* that had been running 365/24 for years in a RAID10, with 50k power-on hours - but that doesn't imply you can expect that


