On 06/05/17 12:21, Ravi (Tom) Hale wrote:
>> Bear in mind also, that any *within* *spec* drive can have an "accident"
>> every 10TB and still be considered perfectly okay. Which means that if
>> you do what you are supposed to do (rewrite the block) you're risking
>> the drive remapping the block - and getting closer to the drive bricking
>> itself. But if you trap the error yourself and add it to the badblocks
>> list, you are risking throwing away perfectly decent blocks that just
>> hiccuped.
>
> For hiccups, having a bad-read-count for each suspected-bad block could
> be sensible. If that number goes above <small-threshold> it's very
> likely that the block is indeed bad and should be avoided in future.

Except you have the second law of thermodynamics in play - "what man
proposes, nature opposes". This could well screw up big time.

DRAM needs to be refreshed by a read-write cycle every few
milliseconds. Hard drives are the same, actually, except that the
interval is measured in years, not milliseconds. Fill your brand new
hard drive with data, then hammer it gently over a few years.
Especially if a block's neighbours are repeatedly rewritten while this
particular block is never touched, its magnetisation is likely to fade
until it becomes unreadable.

So it will fail your test - reads will repeatedly fail and push it over
your threshold - but if the firmware were given a look-in (by rewriting
the block), the data would be refreshed and the sector would be
perfectly serviceable again. Your counter would end up permanently
blacklisting a block that was never bad in the first place.

And as Nix said, once a drive starts getting a load of errors, chances
are something is catastrophically wrong and things are going to get
exponentially worse.

Cheers,
Wol
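
PS: for concreteness, here is roughly what a bad-read-count that still
gives the firmware its look-in might look like. This is a user-space
toy in C, not anything from md or badblocks; every name in it
(READ_ERR_THRESHOLD, handle_read_error, scratch.img, the zero-fill
standing in for recovered data) is invented for the sketch:

/*
 * Toy sketch of Ravi's "bad-read-count" idea, amended so the drive
 * firmware gets a chance to refresh or remap the sector before we
 * blacklist it.  All names and numbers here are made up.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

#define BLOCK_SIZE          4096
#define READ_ERR_THRESHOLD  3      /* Ravi's <small-threshold> */
#define MAX_TRACKED         1024

struct suspect { uint64_t block; int errs; };
static struct suspect hiccups[MAX_TRACKED];
static int ntracked;

/* Remember one more failed read of 'block'; return its error count. */
static int bump_err_count(uint64_t block)
{
    for (int i = 0; i < ntracked; i++)
        if (hiccups[i].block == block)
            return ++hiccups[i].errs;
    if (ntracked < MAX_TRACKED)
        hiccups[ntracked++] = (struct suspect){ block, 1 };
    return 1;
}

/*
 * Called when a read of 'block' fails.  Returns 1 if the block should
 * go on the badblocks list, 0 if it was only a hiccup - or a faded
 * sector that the rewrite refreshed.
 */
static int handle_read_error(int fd, uint64_t block)
{
    char buf[BLOCK_SIZE];
    off_t off = (off_t)block * BLOCK_SIZE;

    if (bump_err_count(block) < READ_ERR_THRESHOLD)
        return 0;                   /* benign hiccup, so far */

    /*
     * Threshold hit.  Before blacklisting, rewrite the block (with
     * data recovered from parity in real life; zeros in this toy) so
     * a merely-faded sector gets refreshed or remapped by the drive.
     */
    memset(buf, 0, sizeof(buf));
    if (pwrite(fd, buf, sizeof(buf), off) != (ssize_t)sizeof(buf))
        return 1;                   /* write failed: genuinely bad */
    fsync(fd);
    if (pread(fd, buf, sizeof(buf), off) != (ssize_t)sizeof(buf))
        return 1;                   /* still unreadable: blacklist */

    return 0;                       /* firmware sorted it out */
}

/* Toy usage: pretend reads of block 42 of a scratch file kept failing. */
int main(void)
{
    int fd = open("scratch.img", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return 1;
    for (int i = 0; i < READ_ERR_THRESHOLD; i++) {
        if (handle_read_error(fd, 42))
            puts("block 42: genuinely bad, blacklist it");
        else
            puts("block 42: hiccup or refreshed, leave it alone");
    }
    close(fd);
    return 0;
}

The point being: only blacklist once the rewrite-and-reread *also*
fails. A bare counter, as proposed, throws away sectors that had
merely faded and would have been fine after one rewrite.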