I will pay money for the correct RAID recovery instructions

I've been trying to fix a degraded array for a couple of months now
and it's getting frustrating enough that I'm willing to put a bounty
on the correct solution.  The array can start in a degraded state and
the data is accessible, so I know this is possible to fix.  Any
takers?  I'll bet someone could use some beer money or a contribution
to their web hosting costs.

Here's how the system is set up:  There are six 3 TB drives.  Each
drive has a BIOS boot partition, and the rest of the space on each
drive is a large GPT partition; those six partitions are combined
into a RAID 10 array.  On top of the array sit four LVM logical
volumes: /boot, /root, swap, and /srv.
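
If it helps, I can post the output of commands like the following;
this is just a sketch of how I'd inspect the stack, and the device
and volume names are simply whatever they are on my box (the array
is md0):

  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # partition / md / LVM stack on each of the six drives
  cat /proc/mdstat                     # current state of the RAID 10 array and its members
  mdadm --detail /dev/md0              # which member partitions are active, faulty, or spare
  pvs; vgs; lvs                        # the physical volume on md0, the volume group, and the four LVs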

Here's the problem:  /dev/sdf failed.  I replaced it but as it was
resyncing, read errors on /dev/sde kicked the new sdf out and made it
a spare.  The array is now in a precarious degraded state.  All it
would take for the entire array to fail is for /dev/sde to fail, and
it's already showing signs that it will.  I have tried forcing the
array to assemble using /dev/sd[abcde]2 and then forcing it to add
/dev/sdf2 (roughly the commands sketched below), but sdf2 still comes
back as a spare.  I've also tried
"echo check > /sys/block/md0/md/sync_action", but that finishes
immediately and changes nothing.
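
To be concrete, what I ran was along these lines (paraphrasing; the
exact invocations may have differed slightly, and md0 is the array
device on my box):

  mdadm --stop /dev/md0                               # stop the array before the forced reassembly
  mdadm --assemble --force /dev/md0 /dev/sd[abcde]2   # comes up degraded with 5 of the 6 members
  mdadm --manage /dev/md0 --add /dev/sdf2             # sdf2 ends up as a spare instead of rebuilding
  echo check > /sys/block/md0/md/sync_action          # finishes immediately, nothing changes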

Can anyone solve this?  I'd be happy to pay you for your knowledge.