split brain mode after reboot

Hi all

A colleague of mine created a RAID 1 array on a fairly recent machine:

Kernel 3.5.0-sabayon

mdadm - v3.2.3 - 23rd December 2011

During operation, sda seems to have been disconnected by the
system/motherboard/whatever, but this was not detected before a reboot
was done. After the reboot, sda re-appeared, but of course with a much
older version of the mirror:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5]
[raid4] [multipath] [faulty]
md126 : active raid1 sdb1[1]
      4194240 blocks [2/1] [_U]

md127 : active raid1 sdb3[1]
      235808704 blocks [2/1] [_U]

md0 : active raid1 sda1[0]
      4194240 blocks [2/1] [U_]

md1 : active raid0 sda2[0] sdb2[1]
      8387584 blocks 512k chunks

md2 : active raid1 sda3[0]
      235808704 blocks [2/1] [U_]

unused devices: <none>
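
For reference, a minimal sketch of how one might confirm which half is
newer and fold the stale disk back in (assuming the usual mdadm tooling;
the device names simply match the output above):

   # compare the superblock event counters / update times on both halves
   mdadm --examine /dev/sda3 /dev/sdb3 | grep -E 'Events|Update Time'

   # stop the stale, separately assembled array and re-add its member
   # to the current one, letting it resync from the fresh copy
   mdadm --stop /dev/md2
   mdadm /dev/md127 --add /dev/sda3

(The same would apply to md0/md126 with sda1/sdb1.)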

As no vital information was on these disks, my question for the list is
just whether this is expected/wanted behaviour after such an event, and
what one could do to prevent it (besides monitoring via mdadm).
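
One thing that at least limits the damage, as a sketch rather than a
recommendation (paths may differ per distribution), is a write-intent
bitmap, so a member that drops out and is re-added only resyncs the
regions that changed, plus recording the arrays by UUID in mdadm.conf:

   # add an internal write-intent bitmap to an existing array
   mdadm --grow /dev/md2 --bitmap=internal

   # record the arrays by UUID (path varies, e.g. /etc/mdadm/mdadm.conf)
   mdadm --detail --scan >> /etc/mdadm.conf

Whether that would have avoided the duplicate assembly in this exact
case I don't know, which is partly why I'm asking.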

Cheers

Carsten

-- 
Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
phone/fax: +49 511 762-17185 / -17193
https://wiki.atlas.aei.uni-hannover.de/foswiki/bin/view/ATLAS/WebHome
