2017-03-21 17:49 GMT+01:00 Phil Turmel <philip@xxxxxxxxxx>:
> The correlation is effectively immaterial in a non-degraded raid5 and
> singly-degraded raid6 because recovery will succeed as long as any two
> errors are in different 4k block/sector locations. And for non-degraded
> raid6, all three UREs must occur in the same block/sector to lose
> data. Some participants in this discussion need to read the statistical
> description of this stuff here:
>
> http://marc.info/?l=linux-raid&m=139050322510249&w=2
>
> As long as you are 'check' scrubbing every so often (I scrub weekly),
> the odds of catastrophe on raid6 are the odds of something *else* taking
> out the machine or controller, not the odds of simultaneous drive
> failures.

This is true, but disk failures happen much more often than multiple UREs
on the same stripe. I think that on RAID6 it is much easier to lose data
to multiple disk failures.

Last year I lost a server to 4 (of 6) disk failures in less than an hour,
during a rebuild. The first failure was detected in the middle of the
night: a disconnection/reconnection of a single disk. The reconnection
triggered a resync. During the resync another disk failed. RAID6 recovered
even from that double failure, but at about 60% of the rebuild the third
disk failed, bringing the whole array down. I was woken up by our
monitoring system, and when I looked at the server there was also a fourth
disk down :) 4 disks down in less than an hour. All the disks were
enterprise drives: 15K SAS, not desktop drives.

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
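[Archive note] The quoted claim, that on a non-degraded RAID6 you only lose
data to UREs when three of them land in the same 4k block position, can be
illustrated with a rough back-of-the-envelope sketch. This is not the math
from Phil's linked post; the URE rate, disk size, and array width below are
illustrative assumptions only:

```python
from math import comb, expm1, log1p

# Assumptions (illustrative, not measurements):
URE_RATE = 1e-15            # assumed UREs per bit read (a common enterprise spec)
DISK_BYTES = 4 * 10**12     # assumed 4 TB per disk
BLOCK_BYTES = 4096          # 4k block/sector recovery granularity
N_DISKS = 6                 # assumed 6-disk RAID6 (4 data + 2 parity)

blocks_per_disk = DISK_BYTES // BLOCK_BYTES
bits_per_block = BLOCK_BYTES * 8

# Probability that reading one 4k block hits a URE, in a numerically
# stable form: 1 - (1 - r)^bits computed via expm1/log1p.
p_block = -expm1(bits_per_block * log1p(-URE_RATE))

# Probability that, at one given block position, >= 3 of the disks hit a
# URE at once (binomial tail over the disks in the stripe).
p_stripe = sum(comb(N_DISKS, k) * p_block**k * (1 - p_block)**(N_DISKS - k)
               for k in range(3, N_DISKS + 1))

# Expected number of unrecoverable stripes in one full 'check' scrub.
expected_bad_stripes = blocks_per_disk * p_stripe

print(f"per-block URE probability:  {p_block:.3e}")
print(f"expected bad stripes/scrub: {expected_bad_stripes:.3e}")
```

With these assumed numbers the expectation comes out vanishingly small,
many orders of magnitude below the chance of a second or third whole-disk
failure during a rebuild, which is consistent with the experience
described above.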