On Thu, 20 Jun 2013 06:22:32 +0000 "Baldysiak, Pawel" <pawel.baldysiak@xxxxxxxxx> wrote:

> Hi Neil,
>
> We have observed strange behavior of a RAID1 volume when all of its drives fail.
> Here is our test case:
>
> Steps to reproduce:
> 1. Create a 2-drive RAID1 (tested with both native and IMSM metadata)
> 2. Wait for the end of the initial resync
> 3. Hot-unplug both drives of the RAID1 volume
>
> Actual behavior:
> The RAID1 volume is still present in the OS as a degraded one-drive array

That is what I expect.

>
> Expected behavior:
> Should the RAID volume disappear from the OS?

How exactly? If the filesystem is mounted, that would be impossible.

>
> I see that when a drive is removed from the OS, udev runs "mdadm -If <>" for the
> missing member, which tries to write "faulty" to the state of that array member.
> I also see that the md driver prevents this operation for the last drive in a
> RAID1 array, so when two drives fail, nothing really happens to the one that
> fails second.
>
> This can be very dangerous, because if the user has a filesystem mounted on this
> array it can lead to unstable system behavior or even a system crash. Moreover,
> the user does not get proper information about the state of the array.

It shouldn't lead to a crash, but it could certainly cause problems. Unplugging
active devices often does.

>
> How should this work according to the design? Should mdadm stop the volume when
> all of its members disappear?

Have a look at the current code in my "master" branch. When this happens it will
try to stop the array (which will fail if the array is mounted), and will try to
get "udisks" to unmount the array (which will fail if the filesystem is in use).
So it goes a little way in the direction you want, but I think that what you are
asking for is impossible with Linux.

NeilBrown
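For readers unfamiliar with the mechanics discussed above, here is a minimal sketch
(not mdadm's actual code) of the two operations in question: writing "faulty" to a
member's sysfs state file, which is what "mdadm -If" attempts for a member that has
vanished, and asking the kernel to stop the array via the STOP_ARRAY ioctl. The
device names /dev/md127 and dev-sdb1 are placeholders; substitute your own array and
member. On a mounted array the stop attempt is expected to fail with EBUSY, and for
the last working member of a RAID1 the md driver refuses the "faulty" write, which
is the behaviour being discussed.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/major.h>      /* MD_MAJOR, used by the ioctl definitions */
#include <linux/raid/md_u.h>  /* STOP_ARRAY */

/* Write a short string to a sysfs attribute; returns 0 or -errno. */
static int write_sysfs(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	if (fd < 0)
		return -errno;
	ssize_t n = write(fd, val, strlen(val));
	int err = (n < 0) ? -errno : 0;
	close(fd);
	return err;
}

int main(void)
{
	/* 1) Mark the (placeholder) member faulty via sysfs, as udev's
	 *    "mdadm -If" path effectively does.  For the last working
	 *    member of a RAID1 the md driver rejects this. */
	int err = write_sysfs("/sys/block/md127/md/dev-sdb1/state", "faulty");
	if (err)
		fprintf(stderr, "writing 'faulty' failed: %s\n", strerror(-err));

	/* 2) Try to stop the array.  EBUSY is expected while a filesystem
	 *    on /dev/md127 is still mounted. */
	int fd = open("/dev/md127", O_RDONLY);
	if (fd < 0) {
		perror("open /dev/md127");
		return 1;
	}
	if (ioctl(fd, STOP_ARRAY, NULL) < 0)
		perror("STOP_ARRAY");
	close(fd);
	return 0;
}

Run as root against a test array only; the errors printed in each case are meant to
illustrate why neither the udev failure path nor a stop attempt can make a mounted,
fully-failed array disappear cleanly.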