Re: Lost raid 5 volume

Good morning Neil,

[Etiquette on kernel.org is to trim replies and either bottom-post or
interleave.]

On 12/13/2014 11:30 PM, Neil . wrote:
> smartctl says Overall health...: passed for all drives.

This is good, but not conclusive.  Lots of desktop drives report
"PASSED" even while they are part of the problem, because the overall
health check only reflects the drive's own internal thresholds.  Please
show the complete output of "smartctl -x" for the two troublesome
drives, and ideally for the other two as well.
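
For reference, a quick way to capture all four reports in one pass,
assuming the members really are /dev/sda through /dev/sdd (adjust the
letters to match your system):

for d in a b c d; do smartctl -x /dev/sd$d > smart-sd$d.txt; done

Then attach or paste the resulting files.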

>     Update Time : Sat Dec  6 14:00:02 2014
>          Events : 3

>     Update Time : Sat Dec  6 14:00:02 2014
>          Events : 3

>     Update Time : Sun Dec  7 11:18:06 2014
>          Events : 8

>     Update Time : Sun Dec  7 11:18:06 2014
>          Events : 8

This looks strange.  The mismatched event counts and update times say
these two drives dropped out of the array on Saturday afternoon, a full
day before the other two were last updated, and well before your
reboot.  If you still have dmesg or kernel logs from Saturday
afternoon, they might be enlightening.
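
If dmesg has already scrolled, the rotated kernel logs may still have
it.  Something like this should pull the relevant window (the log paths
are an assumption; they vary by distro):

grep -i 'ata\|md[0-9]\|sd[a-d]' /var/log/kern.log /var/log/kern.log.1

or, on a systemd machine:

journalctl -k --since '2014-12-06 12:00' --until '2014-12-07 00:00'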

However, a four-drive raid5 can only run with one member missing, so
you are stuck unless you include at least one of the two stale drives.
The correct tool for this is forced assembly:

mdadm --assemble --force --verbose /dev/mdX /dev/sd[abcd]2

If it fails, show its output.
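
If it succeeds, check the result before mounting anything:

cat /proc/mdstat
mdadm --detail /dev/mdX

(Here /dev/mdX stands in for your actual array device, as above.)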

Whether it succeeds or not, you need to investigate why the drives
were dropped.  Two drives dropping at the same moment points to shared
hardware, such as cabling, power, or a controller, rather than two
independent disk failures.
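
One frequent culprit with desktop drives is an error-recovery timeout
mismatch: the drive retries a bad sector longer than the kernel is
willing to wait, and the link gets reset.  You can check whether your
drives support SCT ERC, and what the kernel's timeout is (the device
name here is a placeholder):

smartctl -l scterc /dev/sda
cat /sys/block/sda/device/timeout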

Phil