Questions: understanding fail/remove steps from RAID1

I've been experiencing as-yet-undiagnosed kernel panics on a server
with two RAID1 arrays. (x86_64; 2.6.10 FC3 kernel; Promise SATA with
Seagate drives.)

After a recent panic and reboot, both RAID1 arrays had dropped their
member partitions on one particular drive (sdb).

Based on previous experience with simulated failures, I had expected
the devices to still be listed by name as part of the array, but as
failed/faulty. Instead, they were not listed in either /proc/mdstat
or 'mdadm --detail' output.
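
For reference, this is roughly how I had simulated failures in the
past (a sketch from memory; md0/sdb1 are just example names):

    # mark one member faulty by hand
    mdadm --manage /dev/md0 --fail /dev/sdb1

    # the member is still listed, flagged (F) / "faulty"
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # it only disappears from the listing after an explicit remove
    mdadm --manage /dev/md0 --remove /dev/sdb1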

Were my expectations wrong -- is this by design?

Looking over /var/log/messages, I can find no clear indication of when
or why the sdb partitions were dropped from the arrays.

Are RAID array events logged somewhere else?
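
In case it matters, this is all I've searched so far (just a rough
pass; log paths may differ on other setups):

    # look for md / raid messages around the time of the panic
    grep -iE 'md[0-9]+|raid1|sdb' /var/log/messages
    dmesg | grep -i 'md:'

I understand mdadm in monitor mode (e.g. 'mdadm --monitor --scan
--daemonise --mail root') can report events, but that would only
catch events going forward.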

Thanks for any insights,

- Gordon
