Re: Assembly failure

On Tue, Jul 10, 2012 at 05:33:45PM +0100, Brian Candler wrote:
> metadata (see below) suggests that some drives think members 1/3/4 are
> missing, but those drives think the array is fine.  The "Events" counts are
> different on some members though.

I have had this problem before; in fact, it is the usual behavior when a
drive begins to fail.  If the three drives in question fail to assemble,
it is usually because they aren't readable/writable by your system, and
therefore can't have their metadata updated to reflect the degraded
state of the array.  I would check the SMART status of the drives and
look through your logs for any ATA errors, but my suspicion is that, at
assembly time, none of those drives was talking to your system.
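For example, something along these lines (the device names are just
placeholders for whichever members refused to assemble; smartctl comes
from the smartmontools package):

    # overall SMART health plus the drive's own error log
    smartctl -a /dev/sdb
    # kernel-side view: look for ATA link resets or read errors
    dmesg | grep -i ata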

If you feel that the drives are fine and that this was some random fluke,
you can simply add the drives back to the array (you may have to wipe
their metadata blocks first) and use --assume-clean so that the data
already on those drives is kept.
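For what it's worth, that would look roughly like this with mdadm (the
device names, RAID level, and member count below are made up; adjust
them to match your array, and double-check the original device order
before re-creating, since --create rewrites the superblocks):

    # wipe the stale metadata on the members you're putting back
    mdadm --zero-superblock /dev/sdb1 /dev/sdd1 /dev/sde1
    # re-create the array in its original order; --assume-clean skips
    # the initial resync so the data already on the drives is kept
    mdadm --create /dev/md0 --level=6 --raid-devices=5 --assume-clean \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1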

Good luck!

pants.

