On Tuesday July 11, htejun@xxxxxxxxx wrote:
> Christian Pernegger wrote:
> > The fact that the disk had changed minor numbers after it was plugged
> > back in bugs me a bit (it was sdc before, sde after). Additionally udev
> > removed the sdc device file, so I had to manually recreate it to be
> > able to remove the 'faulty' disk from its md array.
>
> That's because md is still holding onto sdc in failed mode. A
> hotplug script which checks whether a removed device is in an md array
> and, if so, removes it from the array would solve the problem. I'm not
> sure whether that would be the correct approach though.

Checking first whether the to-be-removed device is in an md array, or in
use in any other way, definitely sounds like the right approach to me.
Exactly what to do if the device is in use is somewhat less obvious.
If the array is completely quiescent then you don't necessarily want to
fail/remove the device from the array....

I think the best approach would be to have plug-ins that are called if
an unplugged device is in use, and if it is still in use after those
calls, then don't delete the device.

Maybe it would also be good if hotplug were told when a device was no
longer in use, so it could remove the /dev entry then....

NeilBrown
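
For illustration, a minimal sketch of the hotplug-script idea Tejun
describes: on a block-device "remove" event it scans /proc/mdstat for
members belonging to the departing disk and asks mdadm to fail and
remove them from their arrays. This is an assumption about how such a
handler could be wired up, not an existing script; the invocation
convention (kernel device name as argv[1]) and the simple prefix match
on partition names are both hypothetical.

#!/usr/bin/env python3
"""Hypothetical hotplug remove handler for md member disks (sketch).

Assumes it is invoked on a block-device "remove" event with the kernel
name (e.g. "sdc") as the first argument, and that mdadm is in PATH.
"""
import re
import subprocess
import sys


def md_members():
    """Yield (array, member) pairs parsed from /proc/mdstat."""
    with open("/proc/mdstat") as f:
        for line in f:
            m = re.match(r"^(md\d+)\s*:\s*(.*)", line)
            if not m:
                continue
            array, rest = m.group(1), m.group(2)
            # Members appear as "sdc1[2]" or "sdc1[2](F)" for failed ones.
            for dev in re.findall(r"(\w+)\[\d+\]", rest):
                yield array, dev


def handle_remove(disk):
    """Fail and remove every member of `disk` from its containing array."""
    for array, member in md_members():
        # Crude match: the whole disk or any of its partitions (sdc, sdc1, ...).
        if member == disk or member.startswith(disk):
            # Mark the member failed first so the subsequent remove succeeds.
            subprocess.call(["mdadm", "/dev/" + array, "--fail", "/dev/" + member])
            subprocess.call(["mdadm", "/dev/" + array, "--remove", "/dev/" + member])


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: md-unplug-handler <kernel device name, e.g. sdc>")
    handle_remove(sys.argv[1])

Whether such a script should run unconditionally is exactly the open
question above: for a quiescent array it may be preferable to leave the
member in place rather than fail it.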