Re: Failure propagation of concatenated raids ?

> I think in your case you're better off stopping an array that has lost
> more drives than its parity can absorb, either using a udev rule or
> using mdadm --monitor.
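
(A minimal sketch, just to make that suggestion concrete, of the kind of
handler one can hang off "mdadm --monitor --scan --program=...": mdadm
calls the program with the event name, the md device and sometimes a
component device. The handler path and the RAID6-style "two drives of
parity" threshold below are assumptions of mine, not anything decided in
this thread.)

  #!/usr/bin/env python3
  # Hypothetical /usr/local/sbin/md-event, invoked by mdadm --monitor as
  #   md-event <event> <md device> [<component device>]
  import subprocess
  import sys

  def beyond_parity(md_dev):
      # Crude parse of "mdadm --detail": consider the array lost once
      # fewer than (Raid Devices - 2) members are still active, i.e. a
      # RAID6 that has lost more than its two drives of parity.
      out = subprocess.run(["mdadm", "--detail", md_dev],
                           capture_output=True, text=True).stdout
      raid_devs = active = 0
      for line in out.splitlines():
          if "Raid Devices" in line:
              raid_devs = int(line.split(":")[1])
          elif "Active Devices" in line:
              active = int(line.split(":")[1])
      return active < raid_devs - 2

  def main():
      event, md_dev = sys.argv[1], sys.argv[2]
      if event in ("Fail", "DegradedArray") and beyond_parity(md_dev):
          # This is the step that never works cleanly in practice: by the
          # time we get here, writers are already stuck in the kernel and
          # the layers above (LVM, ext4) won't let go of the array.
          subprocess.run(["mdadm", "--stop", md_dev])

  if __name__ == "__main__":
      main()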

I actually have been unsuccessful in these attempts so far. What
happens is that processes trying to write very quickly get stuck
indefinitely (indefinitely as in "waiting on a very, very long kernel
timeout"), so the ext4 layer either becomes unresponsive on those
threads or takes a very long time to respond. Killing the processes
also takes a very long time, because they are blocked inside a kernel
operation, and if more processes can spawn back up, the automated
script ends up playing an interesting game of whack-a-mole just to
unmount the filesystem.
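
(For what it's worth, the teardown loop is roughly shaped like the sketch
below; the mount point and LVM names are placeholders, and it glosses
over a lot:)

  import subprocess
  import time

  MNT = "/srv/data"          # hypothetical mount point
  LV = "vg_data/lv_data"     # hypothetical LVM logical volume

  def try_teardown():
      for _ in range(30):
          # Kill whatever still has the filesystem open. Processes stuck
          # in uninterruptible kernel I/O shrug this off for a very long
          # time, and new ones may appear in the meantime: that is the
          # whack-a-mole part.
          subprocess.run(["fuser", "-km", MNT])
          if subprocess.run(["umount", MNT]).returncode == 0:
              break
          time.sleep(1)
      else:
          return False
      # Only once the unmount succeeds can the layers underneath be
      # released (otherwise they just report "device is busy").
      subprocess.run(["lvchange", "-an", LV])
      return True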

And you can't stop the underlying arrays without first tearing down
the whole chain (unmount, deactivate the LVM volume, etc.); otherwise
you simply get "device is busy" errors, hence the whack-a-mole
process killing. The only method I've managed to implement
successfully is to programmatically loop over all the drives involved
in the filesystem, across all the RAIDs involved, and flag every one
of them as failed. That way you really do get to pull the "emergency
brake". I find it a very, very scary method, however.


