Is there a case for better bad block handling in Linux RAID?

Disk errors are reported to be increasingly frequent in today's systems:
storage capacity keeps growing, and with it the likelihood of disk
errors. I am not sure of the status of bad block handling in Linux RAID.
Can somebody enlighten me?

I think there are a number of techniques that could be employed:

1. When checking an array (a full scan), occasionally examine the SMART
error log, e.g. every gigabyte, and rewrite the data at the sectors with
reported errors. Then check the SMART error log again; for the errors
still reported, move the sector to a bad block area and remap the
affected data.

2. When irrecoverable errors have occurred, determine which drive was
affected and remap the bad data on that disk using the data from the
sound drive (for RAID levels with redundant data).

Are these viable techniques? I think something like this could be built
into the RAID check functionality.

best regards
keld
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
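For the first technique, the existing md interface already offers a
starting point: writing "check" to an array's sync_action file triggers
a full scan, and smartmontools can report the drive's own error log. A
rough dry-run sketch of that sequence (it only prints the commands it
would issue; the device names md0 and /dev/sda and the sector number are
placeholders, not taken from the original post):

```shell
#!/bin/sh
# Dry-run sketch of technique 1: scan, consult SMART, rewrite bad sectors.
# md0, /dev/sda, and sector 12345 are placeholders for a real setup.
MD=md0
MEMBER=/dev/sda
run() { echo "would run: $*"; }    # replace the echo with "$@" to really run

# 1. Kick off a full array scan using md's built-in check.
run "echo check > /sys/block/$MD/md/sync_action"

# 2. Inspect the member drive's SMART error log for reported sectors.
run "smartctl -l error $MEMBER"

# 3. Rewriting a reported sector gives the drive a chance to remap it
#    internally (hdparm --write-sector is destructive, hence the dry run).
run "hdparm --write-sector 12345 --yes-i-know-what-i-am-doing $MEMBER"

# 4. Re-check the error log; sectors still failing would then be moved
#    to a bad block area, as the post suggests.
run "smartctl -l error $MEMBER"
```

The real policy decisions (how often to consult SMART during a scan,
where the bad block area lives) are exactly what the post is asking
about; this just shows that the plumbing exists today.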