Re: Help recovering RAID6 failure


On Tuesday December 16, kmshanah@xxxxxxxxxxxxxx wrote:
> 
> Oh, and here's what gets added to dmesg after running that command:
> 

> raid5: cannot start dirty degraded array for md5

I thought that might be the case.  --force is meant to fix that by
removing the 'dirty' flag from the array.
> 
> This is run on Linux 2.6.26.9, mdadm 2.6.7.1 (Debian)

Hmm.. and there goes that theory.  There was a bug in mdadm prior to
2.6 which caused --force not to work for raid6 with 2 drives missing.

It looks like some of your devices are marked 'clean' and some are
'active'.  mdadm is noticing one that is 'clean' and not bothering to
mark the others as 'clean'.  The kernel is then seeing one that is
still 'active' and complaining.
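
(To check what each superblock says, something like this should do -
I'm assuming the member devices are /dev/sd[cefghijkl]1, adjust to
suit:

  # print the 'State' recorded in each device's superblock
  for d in /dev/sd[cefghijkl]1; do
      echo "$d:"
      mdadm --examine "$d" | grep 'State :'
  done

Each device should report either 'clean' or 'active'.)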

The devices that are 'active' are sd[efl]1.  Maybe if you list one of
those last it will work.
e.g.

  mdadm -A --force --verbose /dev/md5 /dev/sd[cfghijk]1 /dev/sde1

If not, try listing it first.
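e.g. (same device set, just with sde1 moved to the front):

  mdadm -A --force --verbose /dev/md5 /dev/sde1 /dev/sd[cfghijk]1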

I'll try to fix mdadm so that it gets this right.

Thanks,
NeilBrown