> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of Jeff Woods
> Sent: Saturday, February 26, 2011 1:36 AM
> To: lrhorer@xxxxxxxxxxx
> Cc: 'Linux RAID'
> Subject: Re: OK, Now this is really weird
>
> Quoting Leslie Rhorer <lrhorer@xxxxxxxxxxx>:
> > I have a pair of drives, each of whose 3 partitions is a member of
> > a set of 3 RAID arrays. One of the two drives had a flaky power
> > connection which I thought I had fixed, but I guess not, because the
> > drive was taken offline again on Tuesday. The significant issue,
> > however, is that both times the drive failed, mdadm behaved really
> > oddly. The first time I thought it might just be some odd anomaly,
> > but the second time it did precisely the same thing. Both times, when
> > the drive was de-registered by udev, the first two arrays properly
> > responded to the failure, but the third array did not. Here is the
> > layout:
>
> [snip lots of technical details]
>
> > So what gives? /dev/sdk3 no longer even exists, so why hasn't it
> > been failed and removed on /dev/md3 like it has on /dev/md1 and
> > /dev/md2?
>
> Is it possible there has been no I/O request for /dev/md3 since
> /dev/sdk failed?

Well, I thought about that. It's swap space, so I suppose it's
possible. I would have thought, however, that mdadm would fail a missing
member whether there is any I/O or not.
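
If it really is just a lack of I/O, something along these lines might
shake it loose. This is only a sketch, not tested against this exact
failure: the "detached" keyword needs a reasonably recent mdadm, and the
swapoff/swapon cycle assumes there is enough free memory to absorb
whatever is currently swapped out:

	# See what md3 currently thinks of its members
	mdadm --detail /dev/md3
	cat /proc/mdstat

	# Force some I/O through the array (idle swap generates none)
	swapoff /dev/md3 && swapon /dev/md3

	# Or ask mdadm directly to fail and remove any member whose
	# underlying device has vanished from the system
	mdadm /dev/md3 --fail detached --remove detached

Either way, the missing member ought to end up marked faulty and
removed, the same as it did on /dev/md1 and /dev/md2.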