On Fri, Feb 02, 2007 at 05:58:04PM -0500, Mark Lord wrote:
> Matt Mackall wrote:
> >..
> >Also worth considering is that spending minutes trying to reread
> >damaged sectors is likely to accelerate your death spiral. More data
> >may be recoverable if you give up quickly in a first pass, then go
> >back and manually retry damaged bits with smaller I/Os.
>
> All good input. But what was being debated here is not so much
> the retrying of known-bad sectors, but rather what to do about
> the kiBs or MiBs of sectors remaining in a merged request after
> hitting a single bad sector mid-way.

Yep, that's precisely what was addressed in the part you snipped.

My main point being that what to do about the remaining workload
should be dependent on the size of the I/O. If we encounter errors on
sectors 4,5,6,7,8.. of a 1MB request, we should have a threshold for
giving up. It's not unreasonable for that threshold to be larger than
1, but it should not be 2048. And if we do the I/O as four 256KB
requests, we should have approximately the same number of retries
(assuming the whole region's bad).

-- 
Mathematics is the supreme nostalgia of our time.