dm can't mount when BIOS RAID is degraded.

Dear all,
I have an Asus P5LD2 mainboard with the onboard Intel RAID controller (ISW). I've defined a RAID1 set with two 500 GB SATA2 drives. In the FC6 installer I saw one volume, ISW_HCHAJGCAG_RaidVolume0, which I partitioned and installed FC6 on. Everything now boots normally. But when I degrade the RAID1 by removing one drive (this drive has a lot of bad sectors), the boot proceeds correctly until the moment device-mapper sees that there should be a mirror and reports a missing drive. From that point on it won't start the mapper, my partitions can't be found, and the kernel comes to a screeching halt. End of line.
When I reconnect the second drive, everything works correctly. But the whole reason people use RAID1 is that when one hard drive fails, we can at least carry on. My BIOS also reports a missing drive but still defines the logical drive, so Linux boots until device-mapper starts.
 
Does anybody know how to stop device-mapper from trying to assemble the RAID itself and just use the logical RAID drive that the BIOS defines? (I would like to send the failed drive back to the supplier for a swap, but I can't take the server down for a whole week.)
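One possible workaround, sketched below with assumed device names (/dev/sda, isw_single): since each half of a RAID1 mirror carries a complete copy of the data, and the ISW metadata sits at the end of the disk, the surviving member can be mapped as a plain linear device with dmsetup, bypassing dmraid's degraded-set handling entirely. This is untested on your setup, so treat it as a sketch, not a recipe.

```shell
DISK=/dev/sda            # assumed name of the surviving mirror member
# In practice, get the member's size in 512-byte sectors with:
#   SECTORS=$(blockdev --getsz "$DISK")
SECTORS=976773168        # example value for a ~500 GB drive

# device-mapper table format: <start> <length> linear <device> <offset>
TABLE="0 $SECTORS linear $DISK 0"
echo "$TABLE"

# As root, the mapping would then be created with:
#   echo "$TABLE" | dmsetup create isw_single
#   kpartx -a /dev/mapper/isw_single   # expose the partitions underneath
```

Mapping from offset 0 should be safe for a RAID1 half because the data layout is identical to a bare disk; only verify that the metadata really is at the end before trusting writes to it.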
 
With kind regards,
Michel van der Breggen
 
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
