Busted disks caused healthy ones to fail

An odd thing happened this weekend.  We were doing some heavy I/O when
one of our servers had two drives, in two separate RAID1 mirrors, pop.
That by itself was not odd: these drives are old, and others from the
same batch have been failing on other boxen as well.  What is odd is that
our brand new disks, which the OS resides on (2 drives in RAID1), also
partially failed.

There are 4 md devices:

md0
md1
md2
md3

md3, md2, and md1 all lost the second drive in the array (sdh3, sdh6, and
sdh5).  md0, however, was fine, and sdh1 still looks healthy.  Why would
losing disks in other arrays cause a seemingly healthy disk to go astray?
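
For reference, this is roughly how I am checking the array state, and what
I would try for recovery once I trust the disk again (assuming the pairing
above is md3/sdh3, md2/sdh6, md1/sdh5; the --re-add step is just my guess
at the right procedure, so correct me if there is a better way):

  # current state of all arrays
  cat /proc/mdstat
  mdadm --detail /dev/md1
  mdadm --detail /dev/md2
  mdadm --detail /dev/md3

  # sanity-check the disk itself before trusting it again
  smartctl -a /dev/sdh

  # if sdh really is healthy, put the kicked members back
  mdadm /dev/md1 --re-add /dev/sdh5
  mdadm /dev/md2 --re-add /dev/sdh6
  mdadm /dev/md3 --re-add /dev/sdh3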

P.S. I have pulled out tons of syslog excerpts showing the two bad disks
failing, if that would help.
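
If it helps, here is roughly how I am pulling the relevant lines out of
the logs (the log path may differ per distro; the pattern is just what I
am using to catch md, raid, and sdh events):

  grep -iE 'md[0-9]|raid|sdh' /var/log/messages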


Thanks,
Ben

