Re: 3-disk fail on raid-6, examining my options...

Wols Lists wrote:
> On 18/07/17 18:20, Maarten wrote:
> > Now from what I've gathered over the years and from earlier incidents, I
> > have now 1 (one) chance left to rescue data off this array; by hopefully
> > cloning the bad 3rd-failed drive with the aid of dd_rescue and
> > re-assembling --force the fully-degraded array. (Only IF that drive is
> > still responsive and can be cloned)
> 
> If it clones successfully, great. If it clones, but with badblocks, I
> keep on asking - is there any way we can work together to turn
> dd-rescue's log into a utility that will flag failed blocks as "unreadable"?

I wrote a shell script that will output a device-mapper table to do this.
It will map failed blocks to either zero or error targets.  It's not
automatic and it does require a block device (use a loop device for files).
I've used this several times at work and it works for me.

I'm not sure if this is what you're talking about or not, but if you want
the script, I'll post it.
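For the archives, here is a minimal sketch of the idea (not my actual work
script): it parses a GNU ddrescue mapfile and prints a dmsetup table, mapping
rescued extents to a linear target on the clone device and everything else to
the error (or zero) target.  The function name `ddrescue2dm` and the exact
mapfile layout handled are my assumptions; check them against your ddrescue
version before trusting the output.

```shell
#!/bin/sh
# Hypothetical sketch: turn a GNU ddrescue mapfile into a device-mapper table.
# Rescued blocks -> "linear" target on the clone device; everything else
# (bad, untried, untrimmed) -> "error" or "zero" target.
# Usage: ddrescue2dm MAPFILE CLONE_DEV [error|zero] | dmsetup create rescued

ddrescue2dm() {
    map=$1; dev=$2; bad=${3:-error}

    awk -v dev="$dev" -v bad="$bad" '
        # hex(): portable hex-string to number (strtonum is gawk-only)
        function hex(s,    n, i) {
            n = 0
            s = tolower(substr(s, 3))            # strip leading "0x"
            for (i = 1; i <= length(s); i++)
                n = n * 16 + index("0123456789abcdef", substr(s, i, 1)) - 1
            return n
        }
        /^#/ { next }                            # skip mapfile comments
        NF == 3 && $1 ~ /^0x/ && $2 ~ /^0x/ {    # data line: pos size status
            start = hex($1) / 512                # byte offsets -> 512B sectors
            len   = hex($2) / 512
            if ($3 == "+")                       # "+" = successfully read
                printf "%d %d linear %s %d\n", start, len, dev, start
            else                                 # bad/untried/untrimmed area
                printf "%d %d %s\n", start, len, bad
        }' "$map"
}
```

Pipe the output into `dmsetup create rescued` and you get a
/dev/mapper/rescued device that returns I/O errors (or zeros) exactly where
ddrescue could not read, which is what you want before handing the clone to
mdadm --assemble --force.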

-- 
 Microsoft has beaten Volkswagen's world record.  Volkswagen only created 22
 million bugs.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
