Re: data corruption after rebuild

On Tue, 19 Jul 2011 19:05:39 +0200
Pavel Herrmann <morpheus.ibis@xxxxxxxxx> wrote:

> > How it got there and how to prevent that from
> > happening in the future - that's a whole different question.
> 
> would ZFS in raidz2 mode be much better than raid6+ext4? I understand it's 
> not the topic of this list, but file-level checksummed rebuild looks like a 
> nice feature

Personally, I prefer not to bother with ZFS. It brings too many complications
into the choice of software: I want to use my favorite GNU/Linux distro, not
Solaris, and I don't want to trust 12 TB of data to a third-party kernel
module or a FUSE driver that is barely tested and has an uncertain future.
I'd put more hope in btrfs RAID5, but that one is also a long way from
becoming a viable option.

Regarding mdadm+raid6: AFAIK it currently does not try to heal itself from
silent corruption inside a single chunk, even though that should be possible
with RAID6 - with two independent parity syndromes (P and Q) and at most one
corrupted chunk per stripe, the bad chunk can be both located and
reconstructed. On a "repair", if the data chunks are readable with no I/O
error, they are treated as the gold standard, and all parity chunks are
simply recalculated from the data and overwritten (incrementing mismatch_cnt
if they changed). So implementing a more advanced repair could give
protection against silent corruption not much weaker than what per-file
checksumming RAID implementations offer.
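To illustrate the point, here is a minimal sketch (not the md driver's actual
code) of how the P and Q syndromes can locate and repair a single silently
corrupted data chunk. The GF(2^8) construction (polynomial 0x11d, generator 2)
matches the one commonly described for Linux RAID6; the chunk size and data
are made up for the example:

```python
# Sketch: locating a single silently corrupted chunk with RAID6 P/Q parity.
# GF(2^8) with polynomial 0x11d, generator 2 (the usual RAID6 field).

GF_EXP = [0] * 512
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def compute_pq(chunks):
    """P is the plain XOR of the data chunks; Q weights chunk i by g**i."""
    p = bytearray(len(chunks[0]))
    q = bytearray(len(chunks[0]))
    for i, chunk in enumerate(chunks):
        g_i = GF_EXP[i]                        # generator ** i
        for j, byte in enumerate(chunk):
            p[j] ^= byte
            q[j] ^= gf_mul(g_i, byte)
    return bytes(p), bytes(q)

def locate_and_fix(chunks, p, q):
    """With exactly one bad data chunk, the ratio of the Q and P syndromes
    reveals its index z (since dq = g**z * dp byte-wise), and XORing the
    P syndrome back into that chunk repairs it."""
    p2, q2 = compute_pq(chunks)
    dp = bytes(a ^ b for a, b in zip(p, p2))   # P syndrome
    dq = bytes(a ^ b for a, b in zip(q, q2))   # Q syndrome
    if not any(dp) and not any(dq):
        return None                            # stripe is clean
    for j in range(len(dp)):
        if dp[j]:
            z = (GF_LOG[dq[j]] - GF_LOG[dp[j]]) % 255
            break
    chunks[z] = bytes(a ^ b for a, b in zip(chunks[z], dp))
    return z

# Demo: 4 data chunks; silently corrupt chunk 2, then recover it.
stripe = [bytearray(b"AAAA"), bytearray(b"BBBB"),
          bytearray(b"CCCC"), bytearray(b"DDDD")]
p, q = compute_pq(stripe)
stripe[2] = bytearray(b"CxCC")                 # silent corruption
bad = locate_and_fix(stripe, p, q)
print(bad, bytes(stripe[2]))                   # -> 2 b'CCCC'
```

The current "repair" behavior, by contrast, would just recompute P and Q from
the (corrupted) data, silently enshrining the bad chunk.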

-- 
With respect,
Roman

Attachment: signature.asc
Description: PGP signature

