Re: degraded raid 6 (1 bad drive) showing up inactive, only spares

On Thu, 7 Jun 2012 18:49:49 +0200 Martin Ziler <martin.ziler@xxxxxxxxxxxxxx>
wrote:

> 2012/6/7 NeilBrown <neilb@xxxxxxx>
> 
> > On Thu, 7 Jun 2012 13:55:32 +0200 Martin Ziler <martin.ziler@xxxxxxxxxxxxxx>
> > wrote:
> >
> > > Hello everybody,
> > >
> > > I am running a 9-disk raid6 without hot spares. I already had one drive
> > > go bad, which I could replace and keep using the array without any
> > > degraded-raid messages. Recently another drive started going bad according
> > > to its SMART info. As it wasn't quite dead yet, I left the array as it was,
> > > barely using it, while waiting for the replacement drive I had ordered.
> > > When I booted the machine to replace the drive I was greeted by an
> > > inactive array with all devices showing up as spares.
> > >
> > > md0 : inactive sdh2[0](S) sdi2[7](S) sde2[6](S) sdd2[5](S) sdf2[1](S) sdg2[2](S) sdc1[9](S) sdb2[3](S)
> > >       15579088439 blocks super 1.2
> > >
> > > mdadm --examine confirms that. I already searched the web quite a bit
> > > and found this mailing list. Maybe someone in here can give me some input.
> > > Normally a degraded raid should still be active, so I am quite surprised
> > > that my array goes inactive with only one drive missing. I attached the
> > > output mdadm --examine gives for all the drives. However, the first two
> > > should probably suffice, as only /dev/sdk differs from the rest. The faulty
> > > drive - sdk - is still recognized as a raid6 member, whereas all the others
> > > show up as spares. With lots of bad sectors, sdk isn't accessible anymore.
> >
> > You must be running 3.2.1 or 3.3 (I think).
> >
> > You've been bitten by a rather nasty bug.
> >
> > You can get your data back, but it will require a bit of care, so don't
> > rush it.
> >
> > The metadata on almost all the devices has been seriously corrupted.  The
> > only way to repair it is to recreate the array.
> > Doing this just writes new metadata and assembles the array.  It doesn't
> > touch
> > the data so if we get the --create command right, all your data will be
> > available again.
> > If we get it wrong, you won't be able to see your data, but we can easily
> > stop
> > the array and create again with different parameters until we get it right.
> >
> > First thing to do is to get a newer kernel.  I would recommend the latest
> > in the 3.3.y series.
> >
> > Then you need to:
> >  - make sure you have a version of mdadm which sets the data offset to 1M
> >   (2048 sectors).  I think 3.2.3 or earlier does that - don't upgrade to
> >   3.2.5.
> >  - find the chunk size - it looks like it is 4M, as sdk2 isn't corrupt.
> >  - find the order of devices.  This should be in your kernel logs in a
> >    "RAID conf printout".  Hopefully device names haven't changed.  (A
> >    sketch of how to check the offset and the printout follows below.)
> >
> >  Then (with new kernel running)
> >
> >  mdadm --create /dev/md0 -l6 -n9 -c 4M -e 1.2 /dev/sdb2 /dev/sdc2 /dev/sdd2 \
> >     /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2 /dev/sdi2 missing \
> >     --assume-clean
> >
> >  Make double-sure you add that --assume-clean.
> >
> >  Note the last device is 'missing'. That corresponds to sdk2 (which we
> >  know is device 8 - the last of 9 (0..8)).  It failed, so it is not part of
> >  the array any more.  For the others I just guessed the order.  You should
> >  try to verify it before you proceed (see RAID conf printout in kernel logs).
> >
> >  After the 'create' use "mdadm -E" to look at one device and make sure
> >  the Data Offset, Avail Dev Size and Array Size are the same as we saw
> >  on sdk2.
> >  If they are, try "fsck -n /dev/md0". That assumes ext3 or ext4.  If you had
> >  something else on the array some other command might be needed.
> >
> >  If that looks bad, "mdadm -S /dev/md0" and try again with a different
> > order.
> >  If it looks good, "echo check > /sys/block/md0/md/sync_action" and watch
> >  "mismatch_cnt" in the same directory.  If it stays low (a few hundred at
> >  most) all is good.  If it goes up to thousands something is wrong - try
> >  another order.
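> >
> >  For example (just one way to kick off the check and keep an eye on the
> >  counter):
> >
> >    echo check > /sys/block/md0/md/sync_action
> >    watch -n 10 cat /sys/block/md0/md/mismatch_cnt /proc/mdstat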
> >
> >  Once you have the array working again,
> >    "echo repair > /sys/block/md0/md/sync_action"
> >  then add your new device to be rebuilt.
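> >  Adding it would look something like this (the partition name is only a
> >  placeholder - use whatever name the new disk actually gets):
> >
> >    mdadm /dev/md0 --add /dev/sdX2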
> >
> > Good luck.
> > Please ask if you are unsure about anything.
> >
> > NeilBrown
> >
> >
> 
> Hello Neil,
> 
> thank you very much for this detailed input. My last reply didn't make it
> into the mailing list due to the format of my mail client (OS X Mail). My
> kernel (Ubuntu) was 3.2.0; I upgraded to 3.3.8. The mdadm version was fine.
> 
> I searched the log files I have and was unable to find anything concerning
> my array. Maybe that sort of thing isn't logged on Ubuntu. I did find some
> mails concerning a degraded raid that do not correlate with my current
> breakage. I received the following two messages:
> 
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md0 : active (auto-read-only) raid6 sdi2[1] sdh2[0] sdg2[8] sdc1[9] sdd2[5] sdb2[3] sdf2[7] sde2[6]
>       13586485248 blocks super 1.2 level 6, 4096k chunk, algorithm 2 [9/8] [UU_UUUUUU]
> 
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md0 : active (auto-read-only) raid6 sdj2[2] sdg2[8] sdd2[5] sde2[6] sdb2[3] sdf2[7] sdc1[9]
>       13586485248 blocks super 1.2 level 6, 4096k chunk, algorithm 2 [9/7] [__UUUUUUU]
> 
> I conclude that my setup must have been sdh2 [0], sdi2 [1], sdj2 [2], sdb2 [3],
> sdd2 [5], sde2 [6], sdf2 [7], sdg2 [8], sdc1 [9].

Unfortunately these numbers are not the roles of the devices in the array.  They
are the order in which the devices were added to the array.
So 0-8 are very likely roles 0-8 in the array.  '9' is then the first spare,
and it stays as '9' even when it becomes active.  So as there is no '4', it
does look likely that 'sdc1' should come between 'sdb2' and 'sdd2'.
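
Also note that device names can change between boots, so before running
--create it is worth confirming which physical disk currently has which name.
On most distributions something like

   ls -l /dev/disk/by-id/ata-*

maps each drive's model and serial number to its current sdX name.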

NeilBrown


> sdc1 is the replacement for my first drive that went bad. It's somewhat
> strange that it is now listed as device 9 and not 4, isn't it? I reckon
> that I have to recreate the array in that order regardless.
> 
> regards,
> Martin
