Problem with Raid1 when all drives failed

Hi Neil,

We have observed strange behavior in a RAID1 volume after all of its drives failed.
Here is our test case:

Steps to reproduce:
1. Create a two-drive RAID1 array (tested with both native and IMSM metadata)
2. Wait for the initial resync to finish
3. Hot-unplug both drives of the RAID1 volume (a command sketch for these steps follows below)
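
For reference, here is a rough command sketch of the reproduction, assuming two test drives /dev/sdb and /dev/sdc and native metadata (the device names and the sysfs-based unplug are only illustrative):

    # create a two-drive RAID1 array (native metadata)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # wait for the initial resync to complete
    mdadm --wait /dev/md0

    # simulate hot-unplug of both drives via sysfs
    # (physically pulling the disks gives the same effect)
    echo 1 > /sys/block/sdb/device/delete
    echo 1 > /sys/block/sdc/device/delete

    # the array is still reported, now as a degraded one-drive volume
    cat /proc/mdstat
    mdadm --detail /dev/md0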

Actual behavior:
The RAID1 volume is still present in the OS as a degraded, one-drive array.

Expected behavior:
Should the RAID volume disappear from the OS?

I see that when a drive is removed from the OS, udev runs "mdadm -If <>" for the missing member, which tries to write "faulty" to the state of that array member.
I also see that the md driver prevents this operation for the last drive in a RAID1 array, so when both drives fail, nothing really happens to the one that fails second.
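
As far as I can tell, what "mdadm -If" ends up doing is roughly equivalent to a sysfs write like the one below (the md device and member names are hypothetical); for the last working drive of a RAID1 the md driver rejects this write, which matches what we see:

    # mark member sdb of array md0 as faulty via sysfs;
    # the kernel refuses this for the last working RAID1 member
    echo faulty > /sys/block/md0/md/dev-sdb/state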

This can be very dangerous: if the user has a file system mounted on this array, it can lead to unstable system behavior or even a system crash. Moreover, the user does not have accurate information about the state of the array.

How should this work by design? Should mdadm stop the volume when all of its members disappear?
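
In other words, should mdadm (or the udev rules) automatically do what an administrator would otherwise have to do by hand once all members are gone, roughly (array name illustrative):

    # manual cleanup of a stale array whose members have all disappeared
    umount /dev/md0        # if a file system was still mounted
    mdadm --stop /dev/md0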

Pawel Baldysiak