Re: Recovering a raid5 array with strange event count

On Friday April 13, chris@xxxxxxx wrote:
> Dear All,
> 
> I have an 8-drive raid-5 array running under 2.6.11. This morning it 
> bombed out, and when I brought
> it up again, two drives had incorrect event counts:
> 
> 
> sda1: 0.8258715
> sdb1: 0.8258715
> sdc1: 0.8258715
> sdd1: 0.8258715
> sde1: 0.8258715
> sdf1: 0.8258715
> sdg1: 0.8258708
> sdh1: 0.8258716
> 
> 
> sdg1 is out of date (expected), but sdh1 has received an extra event.
> 
> Any attempt to restart with mdadm --assemble --force results in an
> un-startable array with an event count of 0.8258715.
> 
> Can anybody advise on the correct command to use to get it started again?
> I'm assuming I'll need to use mdadm --create --assume-clean - but I'm 
> not sure
> which drives should be included/excluded when I do this.

A difference of 1 in event counts is not supposed to cause a problem.
Have you tried simply assembling the array without including sdg1?
e.g.
  mdadm -A /dev/md0 /dev/sd[abcdefh]1
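For readers in the same situation, the event counts can be confirmed from each member's superblock before assembling, and the stale drive re-added afterwards so md resyncs it. A rough sketch only, not a tested recovery procedure; the device names follow the poster's layout and must be adjusted to match your own array:

```shell
# Read the event count recorded in each member's superblock.
# Drives that are in sync should all report the same "Events" value.
for d in /dev/sd[a-h]1; do
    echo -n "$d: "
    mdadm --examine "$d" | grep Events
done

# Assemble from the members whose counts agree, leaving out the
# stale drive (sdg1 in this case):
mdadm --assemble /dev/md0 /dev/sd[abcdefh]1

# With the array running degraded, re-add the stale drive so md
# rebuilds it onto the array:
mdadm /dev/md0 --add /dev/sdg1
```

Recreating the array with mdadm --create --assume-clean rewrites the superblocks and is easy to get wrong (drive order, chunk size), so it is best kept as a last resort when assembly fails outright.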

NeilBrown