raid0: "clean" state on drive failure/removal

Hi,

A basic question on raid0 behavior.

When a drive fails or is removed in a raid0 array, "mdadm --detail" reports the
array state as "clean" instead of "failed".

It appears that the drive is whacked out of the array, but that array slot
continues to show the "active sync" state.

I/O to the failed drive errors out, but I/O to the other drives in the raid0
array appears to succeed. How would the user learn of the array state and the
potential data loss? Should the array information be cached, and drive
failures/removals be monitored and checked against the cached information?
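The cache-and-compare idea above could be sketched roughly like this. This is a
minimal illustration, not a real monitoring tool: parse_detail and diff_devices
are hypothetical helpers, and the regexes assume the usual "mdadm --detail"
text layout (a "State :" line followed by a device table whose rows end with a
state and a /dev path).

```python
import re

def parse_detail(output):
    """Return (array_state, {device_path: state}) from `mdadm --detail` text."""
    array_state = None
    devices = {}
    for line in output.splitlines():
        # First "State : ..." line is the overall array state.
        m = re.match(r"\s*State\s*:\s*(.+)", line)
        if m and array_state is None:
            array_state = m.group(1).strip()
            continue
        # Device-table rows: four numeric columns, a state, then /dev/<name>.
        m = re.match(r"\s*\d+\s+\d+\s+\d+\s+\d+\s+(.+?)\s+(/dev/\S+)$", line)
        if m:
            devices[m.group(2)] = m.group(1).strip()
    return array_state, devices

def diff_devices(cached, current):
    """Report devices whose state changed, or that vanished from the array."""
    changes = {}
    for dev, state in cached.items():
        now = current.get(dev, "missing")
        if now != state:
            changes[dev] = (state, now)
    return changes
```

A monitor would cache the device map from one run of "mdadm --detail", re-run
it periodically, and alert on any non-empty diff_devices() result, since the
array state itself may still read "clean".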

Could anyone please explain why a raid0 array is not reported as failed in this
situation?

Thanks,
Sushma
