Quoting Leslie Rhorer <lrhorer@xxxxxxxxxxx>:
I have a pair of drives, each with 3 partitions that are members of a set of 3 RAID arrays. One of the two drives had a flaky power connection which I thought I had fixed, but apparently not, because the drive was taken offline again on Tuesday. The significant issue, however, is that both times the drive failed, mdadm behaved really oddly. The first time I thought it might just be some odd anomaly, but the second time it did precisely the same thing. Both times, when the drive was de-registered by udev, the first two arrays properly responded to the failure, but the third array did not. Here is the layout:
[snip lots of technical details]
So what gives? /dev/sdk3 no longer even exists, so why hasn't it been failed and removed on /dev/md3 like it has on /dev/md1 and /dev/md2?
Is it possible there has been no I/O request for /dev/md3 since /dev/sdk failed?
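If nothing has touched /dev/md3, md may simply not have had a chance to notice that the member is gone, since it only marks a device faulty when an I/O to it actually fails. One harmless way to check might be to force a small read through the array and then look at its state, roughly like this (just a sketch, untested here; device names are the ones from your message):

    cat /proc/mdstat                                         # current view of all arrays
    dd if=/dev/md3 of=/dev/null bs=1M count=1 iflag=direct   # force a read through md3, bypassing the page cache
    mdadm --detail /dev/md3                                  # see whether the missing member is now marked faulty

If md3 still lists the vanished /dev/sdk3 as active after that, you could fail it by hand with "mdadm /dev/md3 --fail /dev/sdk3" and then "--remove" it (recent mdadm also accepts "--fail detached" for members whose device node has disappeared, if I recall correctly).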
--
Jeff Woods <jeff@xxxxxxxxxxxx>