Sean Hildebrand wrote:
<snip>
To answer that: The drive was brand new. The thing I find odd about this failure is that it was integrated into the array without issue, meaning the disk had no trouble writing to the bad sectors, just reading them back. Never seen that before. In any case, I'm very glad to have recovered my data with minimal loss. And to think, all this could have been avoided if I'd just made my array a RAID6 when it was first built. Certainly when I have a new fifth disk the array will be rebuilt as such.
When you get your new disk (or any disk for that matter) run badblocks -svw on it. Note that -w is a destructive write test, so only run it on a disk holding no data you care about. It takes about 8 hours on today's average drive sizes, but it guards precisely against the problem you faced. Additionally the drive gets a hefty dose of "break-in", so you know it performed well under stress for at least several hours.
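For reference, a minimal sketch of the invocation (the device node /dev/sdX is a placeholder, substitute your actual new disk; the real badblocks run is left commented out since -w erases everything on the device and needs root):

```shell
# Destructive surface test for a brand-new (empty!) disk.
# Flags: -s show progress, -v verbose, -w write-mode test (ERASES the disk).
# /dev/sdX is a placeholder -- never point this at a disk holding data.
DEV=/dev/sdX
CMD="badblocks -svw $DEV"
echo "Would run (as root, ~8 hours): $CMD"
# Uncomment to actually run the destructive test:
# $CMD
```

After it finishes with zero bad blocks reported, the drive has survived four full write/read passes and is reasonably well broken in.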