On Tue, 20 Nov 2012 15:08:45 +0100 Carsten Aulbert <Carsten.Aulbert@xxxxxxxxxx> wrote:

> Hi all
>
> A colleague of mine created a RAID1 on a fairly recent machine:
>
> Kernel 3.5.0-sabayon
> mdadm - v3.2.3 - 23rd December 2011
>
> During operation sda seems to have been disconnected by the
> system/motherboard/whatever, but this was not detected before a reboot
> was done. After the reboot, sda re-appeared, but of course with a much
> older version of the mirror:
>
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5]
> [raid4] [multipath] [faulty]
> md126 : active raid1 sdb1[1]
>       4194240 blocks [2/1] [_U]
>
> md127 : active raid1 sdb3[1]
>       235808704 blocks [2/1] [_U]
>
> md0 : active raid1 sda1[0]
>       4194240 blocks [2/1] [U_]
>
> md1 : active raid0 sda2[0] sdb2[1]
>       8387584 blocks 512k chunks
>
> md2 : active raid1 sda3[0]
>       235808704 blocks [2/1] [U_]
>
> unused devices: <none>
>
> As no vital information was on these disks, my question for the list is
> simply whether this is expected/wanted behaviour after such an event, and
> what one could do to prevent it (besides monitoring via mdadm).

Expected - probably.  Wanted - no.

I think one half gets assembled by "mdadm --incremental" run from udev,
and the other by a subsequent "mdadm -As" or similar.

mdadm-3.3, which is still under development, has a fix for this so that
--incremental and --assemble don't trip over each other.

http://git.neil.brown.name/?p=mdadm.git;a=commitdiff;h=0431869cec4c673309d9aa

NeilBrown
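
For reference, a rough recovery sketch once such a split has already
happened. The device and array names below are taken from the mdstat
output above and are illustrative only; check the event counts on your
own system before stopping or re-adding anything.

    # Compare the superblocks of the two halves; the member with the
    # lower event count is the stale copy.
    mdadm --examine /dev/sda1 /dev/sdb1 | grep -E 'Events|Update Time'

    # Stop the array that was assembled from the stale half ...
    mdadm --stop /dev/md0

    # ... then add that member back into the surviving array so it
    # resyncs from the up-to-date copy.
    mdadm /dev/md126 --add /dev/sda1

The same steps would then be repeated for the second split pair
(md2/md127 with sda3/sdb3).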