Re: Suggestion needed for fixing RAID6

Hi,

----- Original Message ----- From: "Stefan /*St0fF*/ Hübner" <stefan.huebner@xxxxxxxxxxxxxxxxxx>
To: "Janos Haar" <janos.haar@xxxxxxxxxxxx>
Sent: Thursday, April 22, 2010 10:18 PM
Subject: Re: Suggestion needed for fixing RAID6


Hi Janos,

I'd ddrescue the failing drives one by one to replacement drives.  Set a
very high retry-count for this action.
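Something like "ddrescue -d -r 8 /dev/olddisk /dev/newdisk olddisk.map" should do it
(I'm writing this from memory, so check the ddrescue manpage; the device names are
of course placeholders).  -d reads the input with direct disc access, -r sets how
many retry passes are made over the bad areas, and the mapfile records what could
not be read.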

I know what I am doing, trust me. ;-)
I have much more professional tools for this than ddrescue, and I have the
list of defective sectors as well.
Now I am imaging the second of the failing drives, and this one has >1800
failing sectors.


The logfile ddrescue creates shows the unreadable sectors afterwards.
The hard part would now be to incorporate the raid-algorithm into some
tool to just restore the missing sectors...
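Roughly, such a tool could look like the sketch below (Python, untested; the chunk
size, the image paths and the assumption that only one chunk of a stripe is
unreadable are placeholders, and the stripe/parity layout of the real array still
has to be mapped in):

CHUNK = 64 * 1024  # assumed md chunk size; the real one is shown by mdadm --detail

def bad_extents(mapfile):
    """Yield (offset, size) of the unreadable areas recorded in a ddrescue
    mapfile, i.e. the data lines whose status character is '-'."""
    seen_status_line = False
    with open(mapfile) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            if not seen_status_line:   # first non-comment line describes the rescue itself
                seen_status_line = True
                continue
            pos, size, status = line.split()[:3]
            if status == '-':
                yield int(pos, 0), int(size, 0)

def read_chunk(image, offset, size=CHUNK):
    """Read one chunk from a member image (a ddrescue copy of the disk)."""
    with open(image, 'rb') as f:
        f.seek(offset)
        return f.read(size)

def recover_data_chunk(other_data_chunks, p_chunk):
    """RAID6 with exactly one missing data chunk in the stripe: the missing
    chunk is P XORed with all the other data chunks of that stripe.  The Q
    chunk must be left out; two missing chunks per stripe would need the Q
    syndrome and GF(2^8) arithmetic instead of a plain XOR."""
    out = bytearray(p_chunk)
    for chunk in other_data_chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# The missing piece is the geometry: for a given bad sector you have to work
# out which stripe it belongs to, which members hold the data, P and Q chunks
# of that stripe (this rotates with the default left-symmetric layout), and at
# what offset - that is exactly the md knowledge that would have to be copied.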

I can do that, but it is not a good game for a 15TB array, or even for some
hundreds of sectors to fix by hand....
The Linux md knows how to recalculate these errors; I want to find this
way... somehow...
I am thinking of making a RAID1 from each defective drive, so if the kernel
re-writes the bad sectors, the copy will get them.
But I don't know how to prevent md from reading from the copy. :-/
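(Maybe the write-mostly flag of md RAID1 would cover that last part?  Something
like "mdadm /dev/mdX --add --write-mostly /dev/copydisk" is supposed to make md
read from that member only when the other one fails - but this is only a guess
from the manpage, I have not tested it with a setup like this.)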

Thanks for your suggestions,

Janos


I hope this helps a bit.
Stefan

On 22.04.2010 12:09, Janos Haar wrote:
Hello Neil, list,

I am trying to fix a RAID6 array which has 12x 1.5TB (Samsung) drives.
Currently the array has 1 missing drive, and 3 which have some bad
sectors!
Generally, because it is RAID6, there is no data loss, since the bad
sectors are not at the same address on more than one drive, but I can't
rebuild the missing drive, because the kernel drops the bad-sector drives
out one by one during the rebuild process.

My question is: is there any way to force the array to keep the members
in, even if they have some read errors?
Or is there a way to re-add the bad-sector drives after the kernel has
dropped them out, without stopping the rebuild process?
Normally, after an 18-hour sync, at 97.9% the 3rd drive is always dropped
out and the rebuild stops.

Thanks,
Janos Haar
