Hello everybody!
Yesterday I ran into a problem with one of my RAID5 arrays, built by mdadm on
three (sd[bcd]) 1.5T devices.
I found the array in a degraded state with sdd failed. The drive went into
the failed state after a power surge.
The server is supplied by a UPS, but that apparently was not good enough -
the server did not reboot, but one drive, as I said, went into the failed state.
I simply re-added it and the array started rebuilding, but the rebuild
failed after a couple of percent, with sdc now marked failed as well!
I assembled the array again with sd[bc] and tried to attach sdd once more:
the same picture - the rebuild failed.
So now I have sdb in sync, sdc failed, and sdd as a spare. I checked the
SMART data on the drives to understand the reason for this behavior and
found it clean on all devices. Then I tried dd if=/dev/sd[bcd]
of=/dev/null and found that dd also fails with an I/O error.
After the dd runs, bad blocks started to appear in SMART :)
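What I'm considering first (not sure this is the right approach, and the
/mnt/spare paths are just placeholders): imaging each drive with GNU
ddrescue, which, unlike plain dd, keeps going past read errors and records
what it could not read in a map file:

```shell
# Sketch, assuming GNU ddrescue is installed and /mnt/spare has enough room.
# -d = direct disk access, -r3 = retry unreadable sectors up to 3 times;
# the .map file records which sectors were recovered and which failed.
ddrescue -d -r3 /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.map
ddrescue -d -r3 /dev/sdc /mnt/spare/sdc.img /mnt/spare/sdc.map
ddrescue -d -r3 /dev/sdd /mnt/spare/sdd.img /mnt/spare/sdd.map
```

That way any further experiments run against the copies, not the failing
drives.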
Finally I have:
sdb - sync
sdc - failed
sdd - spare
and a number of bad blocks on each HDD in random places...
Could anyone suggest how I can assemble this array now in read-only
mode to try to copy the data?
Theoretically the data on sdd should not have been overwritten, so it should
still be possible to try to recover the data (given that the bad blocks
appear in quite different places on each drive)...
Maybe you know of a utility that helps recover data, or a way to start the
array in read-only mode, preventing it from going degraded,
and to force the md device to recover data using the readable areas of each
device?
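Something like this sketch is what I had in mind (untested; loop device
numbers and paths are placeholders) - assembling read-only from imaged
copies over loop devices, so the original drives stay untouched:

```shell
# Sketch: attach drive images as loop devices, then assemble read-only.
# losetup -f --show prints the loop device it allocated (e.g. /dev/loop0).
losetup -f --show /mnt/spare/sdb.img
losetup -f --show /mnt/spare/sdd.img

# --readonly starts the array read-only, so md writes nothing back;
# --force may be needed since the members disagree about array state.
mdadm --assemble --readonly --force /dev/md0 /dev/loop0 /dev/loop1

# Mount read-only and copy off whatever is readable.
mount -o ro /dev/md0 /mnt/recovered
```

I don't know whether this is safe with members in these mixed states, hence
the question.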
Any other ideas are appreciated too! Thanks anyway...
Best regards,
Evgeny.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html