Re: Monitoring for failed drives

Brian Candler wrote:
The problem is, how to detect and report this? At the md RAID level,
`cat /proc/mdstat` and `mdadm --detail` show nothing amiss.

     # cat /proc/mdstat
     Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
     md127 : active raid0 sdk[8] sdf[4] sdb[0] sdj[9] sdc[1] sde[2] sdd[3] sdi[6] sdg[5] sdh[7] sdv[20] sdw[21] sdl[11] sdu[19] sdt[18] sdn[13] sds[17] sdq[14] sdm[10] sdx[22] sdr[16] sdo[12] sdp[15] sdy[23]
           70326362112 blocks super 1.2 512k chunks

Brian,

I know that you know this, but this is a RAID0, which does not have any redundancy. What would you expect md to do? It cannot kick the drive from the array, since doing so would bring the entire array down.
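
For a RAID0 the detection therefore has to happen below md, e.g. at the SMART level. A rough sketch of the kind of check you could run from cron (assuming smartmontools is installed; the device glob and the message wording are just examples):

     # Rough sketch -- assumes smartmontools; device glob is only an example
     # ATA drives report "PASSED", SCSI/SAS drives report "OK"
     for dev in /dev/sd[a-y]; do
         smartctl -H "$dev" | grep -Eq 'PASSED|OK' \
             || echo "WARNING: SMART health check failed for $dev"
     done

smartd from the same package can do this monitoring continuously and mail you, which may be less work than a home-grown script.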

Unlike RAID0, the redundant RAID levels can afford to kick a failed drive from the array, because its contents can be reconstructed from the parity (or mirror) information.
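
To illustrate the difference (hypothetical array and device names), on a redundant level a member can be failed explicitly and it shows up immediately, which is what md does on its own when a member errors out:

     # Hypothetical RAID1/5/6 array -- names are made up for illustration
     mdadm --manage /dev/md0 --fail /dev/sdb1    # mark the member faulty
     cat /proc/mdstat                            # member now shown as (F), array degraded
     mdadm --manage /dev/md0 --remove /dev/sdb1  # kick it from the array

On the RAID0 above there is nothing equivalent: md has no redundant data to fall back on, so the failure only surfaces as I/O errors in the kernel log.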

Jan
--

