On Fri, Sep 01, 2017 at 03:15:41PM -0500, Thomas C. Bishop wrote:
> I messed up my raid5 array.

That's an understatement...

> I know a two of HDs are "failure
> prediction" and one is out..

RAID 5 with three failed drives: the chances of survival are very low.
You should never let things get this far. Timeouts? Doesn't matter!
You either have no disk monitoring at all, or you never acted on it.

ddrescue the broken drives to new ones first. Then always use overlays
for recovery experiments:

https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file

(A sketch of the ddrescue/overlay setup is at the end of this mail.)

Experiments could be, for example:

 *) --assemble --force
 *) --assemble --update=force-no-bbl
 *) --create --metadata=1.0 --chunk=128 with one 'missing' drive

Again, use overlays for everything. (Example commands for these
experiments are also at the end of this mail.)

> Bad Block Log : 512 entries available at offset -8 sectors - bad
> blocks present.

You have bbl entries on more than one drive; use --examine-badblocks
to see whether they are identical (see the loop at the end of this
mail). You have to clear those, or md will either not work at all or
keep giving read errors even after the drives have been replaced. Bad
block list issues were discussed on this list before; you may find
those threads by searching for "no-bbl".

> === START OF READ SMART DATA SECTION ===
> SMART Health Status: OK

Never trust this unconditionally. It's a false friend. Always look at
the detailed output (reallocated sectors and the like). Run selftests
regularly, detect disk errors early, and replace failing drives
immediately.

Good luck
Andreas Klauer
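
Roughly, the ddrescue/overlay setup from the wiki recipe looks like
the sketch below. All device names, sizes and paths here are examples,
not taken from your mail; adjust them for your system.

  # 1) Copy each failing drive to a fresh one with GNU ddrescue.
  #    /dev/sdX = dying source, /dev/sdY = blank replacement,
  #    sdX.map records progress so the copy can be stopped and resumed.
  ddrescue -f -n /dev/sdX /dev/sdY /root/sdX.map   # quick first pass, no scraping
  ddrescue -f -r3 /dev/sdX /dev/sdY /root/sdX.map  # then retry the bad areas

  # 2) Put a copy-on-write overlay over every array member, so the
  #    experiments never write to the disks themselves.
  for d in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
      ovl=/overlays/$(basename "$d").ovl
      truncate -s 10G "$ovl"             # sparse file that receives all writes
      loop=$(losetup -f --show "$ovl")
      size=$(blockdev --getsz "$d")      # device size in 512-byte sectors
      # dm snapshot: reads come from $d, writes land in the overlay file
      echo "0 $size snapshot $d $loop P 8" | dmsetup create "overlay-$(basename "$d")"
  done

Experiments then run against /dev/mapper/overlay-sdb1 etc.; to start
over, remove the snapshots (dmsetup remove) and recreate the files.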
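
The experiments, run only against the overlay devices, might then look
like this. Array name, member list and --raid-devices are made up;
metadata version, chunk size and device order for --create have to
match your original array, so check mdadm --examine on all members
first.

  # 1) Forced assemble:
  mdadm --assemble --force /dev/md0 \
      /dev/mapper/overlay-sdb1 /dev/mapper/overlay-sdc1 /dev/mapper/overlay-sdd1

  # 2) Forced assemble, dropping the bad block lists at the same time:
  mdadm --assemble --force --update=force-no-bbl /dev/md0 \
      /dev/mapper/overlay-sdb1 /dev/mapper/overlay-sdc1 /dev/mapper/overlay-sdd1

  # 3) Last resort: re-create on top of the overlays with the original
  #    parameters and one drive left out ('missing'), so the array comes
  #    up degraded and no parity sync runs:
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --metadata=1.0 --chunk=128 \
      /dev/mapper/overlay-sdb1 /dev/mapper/overlay-sdc1 missing

Between experiments, mdadm --stop /dev/md0 and reset the overlays. If
an assemble succeeds, verify the data first (fsck -n, mount read-only)
before touching the real disks.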
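
And to compare the bad block lists across drives (device names again
just examples):

  for d in /dev/mapper/overlay-sdb1 /dev/mapper/overlay-sdc1 /dev/mapper/overlay-sdd1; do
      echo "== $d =="
      mdadm --examine-badblocks "$d"
  done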