RAID hot_replace behavior with another disk failure

Hello,

I am trying to understand how hot replace works, because I have suffered
data loss from multiple disk failures in the past.

I initiated data migration from an active disk to a spare in a RAID6 array
(via echo want_replacement), and the source disk then failed. The procedure
continued, reconstructing the data from the other disks, and finished as
expected.
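For reference, I triggered the replacement through the md sysfs interface
roughly like this (the md device and member names below are just
placeholders from my test setup):

    # mark the member for replacement by a spare
    echo want_replacement > /sys/block/md0/md/dev-sdc1/state
    # progress of the copy onto the spare is visible here
    cat /proc/mdstat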

I also tested hot replace when another active disk failed during the
procedure. The replacement still ran to completion and then marked the
original (source) drive as faulty. (Under kernel 3.4.11 the array remained
degraded even after an additional spare was added; under 3.8.4 the
automatic rebuild started once a spare was added.)
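In that test the recovery step looked roughly like this (again, the device
names are only illustrative):

    # add a fresh spare after the second failure
    mdadm /dev/md0 --add /dev/sde1
    # under 3.8.4 the rebuild onto it then starts automatically
    cat /proc/mdstat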

In the event of another disk failure, I expected hot replace to cancel the
copy from the still-functioning source disk to the spare and instead use
the spare to rebuild the array.

Is my expectation wrong?
Thank you,

Vlad
--



