On Thu, 2 Jun 2016, Brad Campbell wrote:
> People keep saying that. I've never encountered it. I suspect it's just not
Well, I have had drives that would occasionally throw a read error, but MD
requires that read error to happen three times before it re-writes the
sector, and that never happened; see the earlier discussions I had with
Neil on the topic. But you're correct, I don't see this on normally
functioning drives. I swapped out that drive (it didn't have any specific
SMART errors either) and everything was fine afterwards. I don't know what
was wrong with it; it might have been something flying around in there
causing spurious problems.
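If you want to check whether a drive throws occasional read errors outside
of MD, a raw sequential read pass will surface them (the same idea as
looping dd over the device). A minimal sketch in Python; the device path
and chunk size are assumptions for illustration, run it read-only as root:

#!/usr/bin/env python3
# Sequentially read a block device and log any read errors.
import os
import sys

dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"  # hypothetical path
chunk = 1024 * 1024  # read 1 MiB at a time

fd = os.open(dev, os.O_RDONLY)
offset = 0
errors = 0
try:
    while True:
        try:
            data = os.pread(fd, chunk, offset)
            if not data:        # end of device
                break
            offset += len(data)
        except OSError as e:    # a media error typically shows up as EIO
            errors += 1
            print(f"read error at byte {offset}: {e}")
            offset += chunk     # skip past the bad region and keep scanning
finally:
    os.close(fd)
print(f"scanned {offset} bytes, {errors} read error(s)")

Note this goes through the page cache, so for repeated passes you'd want
to drop caches between runs; it's a sketch, not a replacement for a proper
surface scan or a SMART long test.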
> the problem that the hysterical ranting makes it out to be (either that or
> the pile of cheap and nasty drives I have here are model citizens). I've
> *never* seen a read error unless the drive was in trouble, and that
> includes running dd reads in a loop over multiple days continuously. If it
> were that bad I'd see drives failing SMART long tests routinely also, and
> that does not happen either.
I've seen enough read errors that I nowadays only run RAID6, never RAID5.
Considering the number of people who come to the list and to the
#linux-raid IRC channel with "RAID5, one drive failed, and now I have a
read error on another drive so my array doesn't resync, what should I
do?", I'd say this is a real problem. It's not, however, as if reading a
good drive five times will yield a read error; the vendor bit error rate
specification doesn't work like that, so I totally agree with you there.
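To put a rough number on the rebuild risk: take the commonly quoted
consumer spec of one unrecoverable read error (URE) per 1e14 bits and,
purely as a simplifying assumption, treat bit errors as independent. A
back-of-envelope sketch:

#!/usr/bin/env python3
# Probability of hitting at least one URE while reading N bytes, assuming
# independent bit errors at the vendor-quoted rate. The 1e-14 figure is
# the commonly quoted consumer-drive spec, not a measurement.
import math

BER = 1e-14  # assumed spec: one URE per 1e14 bits read

def p_at_least_one_ure(nbytes: float) -> float:
    bits = nbytes * 8
    # P(>=1) = 1 - (1 - BER)^bits, computed stably with log1p/expm1
    return -math.expm1(bits * math.log1p(-BER))

for tb in (1, 4, 12):
    print(f"{tb:2d} TB read: P(>=1 URE) ~ {p_at_least_one_ure(tb * 1e12):.0%}")

On that model a full 12 TB rebuild read has better-than-even odds of at
least one URE, which is exactly the degraded-RAID5 failure mode above. The
same model overstates what a healthy drive actually does, though, since
the spec is quoted as an upper bound rather than a typical per-drive rate,
which is why reading a good drive five times doesn't reliably produce an
error.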
--
Mikael Abrahamsson    email: swmike@xxxxxxxxx