Re: using dd (or dd_rescue) to salvage array

On 2012-02-06, Stefan *St0fF* Huebner <st0ff@xxxxxxx> wrote:
>
>  From the logical point of view those lost 8k would create bad data - 
> i.e. a filesystem problem OR simply corrupted data.  That depends on 
> which blocks exactly are bad.  If you were using lvm it could even be 
> worse, like broken metadata.

I am using LVM, so I'll just have to hope for the best.  I haven't yet
done an xfs_repair, but I will do that soon.  I just made my volume
active, and vgchange didn't complain, so I'm guessing that's a good
sign.
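
In case it's useful to anyone reading the archive later, what I did boils
down to something like this (the VG/LV names are just placeholders for my
setup, the filesystem has to be unmounted, and the -n flag makes xfs_repair
report problems without changing anything):

    vgchange -ay myvg                    # activate the volume group
    xfs_repair -n /dev/myvg/mylv         # no-modify check first: report only
    xfs_repair /dev/myvg/mylv            # real repair once the report looks sane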

> It would be good if those 8k were "in a row" - that way at max 3 
> fs-blocks (when using 4k fs-blocksize) would be corrupted.

It was; it looks like it was really just that one spot on the drive.  So
I am hopeful that any errors resulting from the lost 8k will be
repairable by xfs_repair.
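
For completeness, the "at most 3 blocks" figure is just block arithmetic: an
8k run that starts on a 4k boundary touches exactly two filesystem blocks,
while one that starts mid-block straddles a third.  A quick shell sketch (the
offset below is invented purely for illustration):

    bs=4096; offset=6144; length=8192          # 8k hole starting mid-block
    first=$(( offset / bs ))
    last=$(( (offset + length - 1) / bs ))
    echo $(( last - first + 1 ))               # prints 3 here; 2 if offset were 4k-aligned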

--keith


-- 
kkeller@xxxxxxxxxxxxxxxxxxxxxxxxxx



