raid6 recovery with read errors

Hi,

I have a rather large RAID6 array where multiple disks have developed
read errors. The problem is that the RAID6 is built on LVM-mapped disks,
which (I think) isolated the MD driver from the write errors, so it
believed that its re-writes of the bad sectors had succeeded. WRONG. As a
result, the bad disks were never dropped from the array.

Then yesterday, three of these TByte disks failed in _exactly_ the same
1k-sized spot. So, no more RAID6. :-(
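
A toy sketch (Python, nothing to do with mdadm's actual code) of why that
third hole is fatal: P is just the byte-wise XOR of the data blocks, so one
missing block per stripe can be rebuilt from the survivors plus P, and Q is
a second, independent syndrome over GF(2^8) that covers a second missing
block. Three members bad at the same offset means three unknowns in the
same stripe and only two parity equations, so parity alone can't solve it.

    import os
    from functools import reduce

    def xor_blocks(blocks):
        # byte-wise XOR across a list of equal-sized blocks
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    chunk = 1024                                  # toy chunk size
    data = [os.urandom(chunk) for _ in range(4)]  # 4 data members of one stripe
    p = xor_blocks(data)                          # P parity of that stripe

    # one hole per stripe is fine: XOR the survivors with P to get it back
    survivors = data[:2] + data[3:]
    assert xor_blocks(survivors + [p]) == data[2]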

So, how do I get the data back?

I've copied the individual partitions with ddrescue, which conveniently
left me a log file pointing to the sectors that need to be recovered.
However, as far as I can see, there's no way to tell the kernel about
individual "bad spots".

Is there a standalone program that can do that?

-- 
Matthias Urlichs

