Bug with RAID1 hot spares?

Greetings,
	I was just testing a server I was about to put into production on kernel
2.6.18.1. The server has three SCSI disks, with "md1" set up as a RAID1 with
two active mirrors and one spare: the mirrors are sda3 and sdb3, the spare is
sdc3. I manually failed sdb3 and, as expected, sdc3 was activated. Strangely
enough, /proc/mdstat never indicated that sdc3 was being resynced. I thought
spares weren't kept mirrored until needed, so shouldn't a rebuild onto sdc3
have started?
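	In case anyone wants to reproduce this, the first step boiled down to
roughly the following (exact mdadm invocations reconstructed from memory, so
take them with a grain of salt):

    # fail one of the active mirrors; the spare should take over
    mdadm /dev/md1 --fail /dev/sdb3
    # look for a recovery/resync line -- none ever appeared for sdc3
    cat /proc/mdstat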
	To check whether sdc3 really held a complete copy, I then manually failed
sda3 as well, leaving only sdc3 (the original spare) active. I ran "find /"
for a while to see whether any errors cropped up, and none did; however, when
I added sda3 and sdb3 back to the array and a resync started, I was soon
faced with what appeared to be a _very_ corrupted reiserfs.
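	Again, roughly (from memory):

    # fail the remaining original mirror, leaving only the old spare active
    mdadm /dev/md1 --fail /dev/sda3
    # exercise the filesystem to look for read errors
    find / > /dev/null
    # remove the failed members and add them back, which kicked off a resync
    mdadm /dev/md1 --remove /dev/sda3
    mdadm /dev/md1 --add /dev/sda3
    mdadm /dev/md1 --remove /dev/sdb3
    mdadm /dev/md1 --add /dev/sdb3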
	Strangely enough, after booting a live CD and assembling md1 with just
sda3, I was able to add sdb3 and sdc3 back, after which the array resynced,
leaving sdb3 as an active mirror and sdc3 as a spare.
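	The recovery from the live CD was along these lines (again from memory):

    # assemble the array degraded, with only sda3 as a member
    mdadm --assemble --run /dev/md1 /dev/sda3
    # add the other two partitions back; sdb3 resynced as a mirror
    # and sdc3 ended up as a spare again
    mdadm /dev/md1 --add /dev/sdb3
    mdadm /dev/md1 --add /dev/sdc3
    cat /proc/mdstat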
	So something odd is definitely happening here... why did no resync onto
the sdc3 spare start when I failed sdb3?

Thanks,
Chase
