2017-03-03 22:41 GMT+01:00 Anthony Youngman <antlists@xxxxxxxxxxxxxxx>:
> Isn't that what raid 5 does?

Nothing to do with raid-5.

> Actually, iirc, it doesn't read every stripe and check parity on a read,
> because it would clobber performance. But I guess you could have a switch
> to turn it on. It's unlikely to achieve anything.
>
> Barring bugs in the firmware, it's pretty near 100% that a drive will
> either return what was written, or return a read error. Drives don't
> return dud data, they have quite a lot of error correction built in.

This is wrong. Sometimes a drive returns data that differs from what was stored, or stores data that differs from the original. In that case, if the real data is "1" and the drive stores "0", reading back "0" produces no read error, but the data is still corrupted.

With bit-rot protection this can be caught: you checksum the "1" at the source, write the checksum to disk along with the data, and if you later read back "0", the checksum no longer matches. This is what ZFS does. This is what Gluster does. This is what Btrfs does.

Adding this to mdadm could be an interesting feature.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
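A minimal sketch in Python of the checksum-on-read idea described above. The helper names (`write_block`, `read_block`) are hypothetical illustrations, not mdadm, ZFS, or Btrfs code; the point is only that a source-side checksum catches silent corruption that the drive's own ECC reports as a successful read:

```python
import hashlib

def write_block(data: bytes) -> tuple[bytes, bytes]:
    # Compute the checksum at the source, before the data reaches the drive,
    # and store it alongside the data (as ZFS/Btrfs do in their metadata).
    return data, hashlib.sha256(data).digest()

def read_block(stored: bytes, checksum: bytes) -> bytes:
    # On read, recompute and compare. A mismatch means the drive returned
    # data without a read error even though it differs from what was written.
    if hashlib.sha256(stored).digest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return stored

# Write "1", then simulate a drive that silently returns "0" instead:
data, csum = write_block(b"1")
try:
    read_block(b"0", csum)   # no drive-level read error occurred here
except IOError:
    pass                     # but the source checksum still catches it
```

With plain md RAID there is no such source checksum, so the flipped block would be passed up to the filesystem as good data; that is the gap the feature would close.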