Re: Determining if a stripe/RAID0 has failed

On Tue, 9 Jul 2013 15:33:29 -0600 Curtis <serverascode@xxxxxxxxx> wrote:

> Hi All,
> 
> I'm wondering what the best way is to determine when a RAID0 has failed?
> 
> We have some stateless servers that use a stripe/RAID0, but we'll need
> to know if it failed so we can pull it out of the "cluster" and
> rebuild it. It would be better to find out sooner rather than later
> that the stripe has failed.
> 
> I know from reading the man page that I can't use mdadm to monitor the
> stripe. Is it basically just that the device becomes unusable in some
> fashion?
> 

How would you determine if a lone drive had failed?
Presumably by error messages in the kernel logs, or similar.
Use exactly the same mechanism to test if a RAID0 has failed.
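
As a minimal sketch of that approach (not a polished tool), something like
the following scans the kernel log for I/O error messages mentioning the
array or its member drives.  The array and member names here are
illustrative assumptions; substitute your own, and note that `dmesg` may
need root (`journalctl -k` is an alternative source).

#!/usr/bin/env python3
# Sketch: flag a RAID0 as suspect by scanning the kernel log,
# exactly as you would for a lone drive.
# ARRAY and MEMBERS are hypothetical names; substitute your own.
import re
import subprocess

ARRAY = "md0"
MEMBERS = ["sda", "sdb"]

# Phrases the kernel commonly logs when a drive misbehaves.
ERROR_RE = re.compile(r"I/O error|Medium Error|critical target error",
                      re.IGNORECASE)

def suspect_devices():
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    hits = set()
    for line in log.splitlines():
        if ERROR_RE.search(line):
            for dev in [ARRAY] + MEMBERS:
                if dev in line:
                    hits.add(dev)
    return sorted(hits)

if __name__ == "__main__":
    bad = suspect_devices()
    if bad:
        print("kernel errors seen on:", ", ".join(bad))
    else:
        print("no kernel I/O errors seen for", ARRAY)

Run it from cron or your monitoring system and pull the node out of the
cluster when it reports errors.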

(A "RAID0" doesn't fail as a whole.  Bits of it might fail while other bits
keep working, just as a drive can lose some sectors while other sectors keep
working.  Certainly a whole drive can fail if its logic board dies.
Similarly, a whole RAID0 can fail if the SATA/SCSI/USB controller dies.)
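
To illustrate the partial-failure point, here is a rough sketch that
spot-reads a few offsets across the array device and reports which ones
return an I/O error.  The device path and probe count are illustrative
assumptions, and the page cache can mask errors on re-reads, so treat it
as a sketch only.

import os

DEVICE = "/dev/md0"   # hypothetical array device path
PROBES = 8            # number of evenly spaced spot reads

def probe(path=DEVICE, probes=PROBES, block=4096):
    # Read one block at several offsets; collect any that fail.
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        bad = []
        for i in range(probes):
            off = (size // probes) * i
            off -= off % block          # keep reads block-aligned
            try:
                os.pread(fd, block, off)
            except OSError as e:
                bad.append((off, e.errno))
        return bad
    finally:
        os.close(fd)

if __name__ == "__main__":
    for off, err in probe():
        print("read failed at offset %d (errno %d)" % (off, err))

A clean run proves only that those particular stripes are readable; a
failed probe tells you part of the array is gone even if the rest still
responds.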

NeilBrown


