Re: "cannot start dirty degraded array"

On Sun, Jun 14, 2009 at 08:32:22PM -0300, Carlos Carvalho wrote:

> You can try to use mdadm -A -f /dev/md2 <list of devices> to force the
> array to assemble. Should work if all disks stopped simultaneously.

I appreciate your response, Carlos.  I did try that before sending the
machine for recovery.  We're now working with a service that seems good
to me.  Here's their initial report.
	Md0: is made up of the first 16 physical drives, and the first 8
	drives are out of sync with the second eight.  The event counts are
	inconsistent.  It appears that someone tried to start the raid (as
	in force) with only eight drives.  This raid will not reassemble
	without fixing the superblock hex structure and getting it back
	into alignment.

	Md1: is made up of the next 16 physical drives.  The first 8
	drives think the second set of 8 are faulty, but the event counts
	are OK.

	Md2: is made up of the last set of 16 physical drives.  The
	first two drives in this array think that everything is OK, but
	all the other drives show all manner of faults and drive removals.

They quoted $36K for standard recovery.  We're still working on it.
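
The out-of-sync event counts the service describes are recorded in each
member's superblock and can be compared with "mdadm --examine /dev/sdX".
A rough sketch of that comparison follows; the device excerpts and event
values here are invented for illustration, not taken from the failed array:

```shell
# Illustrative only: on real hardware these strings would come from
# "mdadm --examine" on each member device.  Values are made up.
examine_first_half='Events : 0.842'
examine_second_half='Events : 0.517'

# Pull the event counter (third field) out of an "Events : N" line.
events() { printf '%s\n' "$1" | awk '{print $3}'; }

# Members with differing event counts have fallen out of sync, which
# is why the kernel refuses to assemble without a forced -A -f.
if [ "$(events "$examine_first_half")" != "$(events "$examine_second_half")" ]; then
    echo "event counts differ; members are out of sync"
fi
```

Forced assembly (mdadm -A -f) papers over small event-count gaps, but
when half the members disagree with the other half, as reported above,
the stale half's data can no longer be trusted.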

The drives all appear to be alright.  I suspect that there was a
kernel/controller problem.

Thank you.

--kyler
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
