On 02/02/18 14:50, David Brown wrote:
> What are these cases? We have already eliminated the rebuild situation
> I described. And in particular, which use-cases are you thinking of
> where you would not be better off with alternative integrity improvements
> (like higher redundancy levels) without killing performance?

In particular, when you KNOW you've got a damaged raid and you want to
know which files are affected. The whole point of my technique is that
it either uses the raid to recover (if it can) or propagates a read
error back to the application. It does NOT "fix" the data and leave a
corrupted file behind.

> That does not make sense. The bad block list described by Neil will do
> the job correctly. hdparm bad block marking could also work, but it
> does so at a lower level and the sector is /not/ corrected
> automatically, AFAIK. It also would not help if the raid1 were not
> directly on a hard disk (think disk partition, another raid, an LVM
> partition, an iSCSI disk, a remote block device, an encrypted block
> device, etc.).

If that's true, then the bad block list does not correct the error
automatically either. The bad block list fakes a read error; hdparm
causes a real read error. When the raid-5 scrub hits, either version
triggers a rewrite.

The thing about the bad-block list is that the disk block is NOT
rewritten. It's moved, and that disk space is LOST. With hdparm, the
block gets rewritten, and if the rewrite succeeds the space is
recovered.

Cheers,
Wol
--
To unsubscribe from this list: send the line "unsubscribe linux-raid"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
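For the archives, the hdparm approach described above can be sketched as
the following command sequence. This is a hedged illustration, not a
tested procedure: the device name /dev/sdb, the array name /dev/md0, and
the LBA 123456 are placeholders, and --make-bad-sector deliberately
destroys the data in that sector, so nobody should run this against a
disk they care about.

```shell
# 1. Mark a sector on the member disk as uncorrectable, so that
#    subsequent reads of it fail with a real I/O error.
#    (hdparm requires the extra flag for this dangerous operation.)
hdparm --yes-i-know-what-i-am-doing --make-bad-sector 123456 /dev/sdb

# 2. Confirm the sector now returns a read error.
hdparm --read-sector 123456 /dev/sdb

# 3. Kick off a scrub on the array. When md hits the read error, it
#    reconstructs the data from the other members/parity and rewrites
#    the sector in place, which is what recovers the space.
echo repair > /sys/block/md0/md/sync_action

# 4. Wait for the scrub to finish (sync_action reads "idle" when done),
#    then check that the sector is readable again.
cat /sys/block/md0/md/sync_action
hdparm --read-sector 123456 /dev/sdb
```

The key point of the thread is visible in step 3: because the error is a
real read error at the disk level, the scrub rewrites the block rather
than relocating around it, unlike the md bad-block-list behaviour.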