Re: the true behavior of mdadm's RAID-1 with regard to vertical parity and silent error detection/scrubbing - confirmation or feature request

On Wed, 18 Aug 2010, Michael Tokarev wrote:

> But to me, the question is whether there's a real reason/demand for doing so.

ZFS does it, and people who are paranoid about bit rot really want it. It gives more protection against memory errors and the like, i.e. corruption outside the drive while the bits are in transit from the drive through cables/controllers/drivers/the block subsystem. Of course it's not perfect, but it gives some added protection.

Whether the cost/benefit analysis holds up I don't know, because I don't know the complexity. Having a 64 KiB stripe in md actually occupy 68 KiB on disk and store a checksum might make sense, but a single checksum per stripe doesn't give great granularity. Perhaps that extra 4 KiB could instead hold one checksum per 4 KiB block within the 64 KiB stripe (16 blocks, leaving 256 bytes per checksum slot), so that an error can be pinned down to a single 4 KiB block, and if parity or a mirror copy is available, the block can be re-read from there and the problem corrected.
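
To make the layout concrete, here is a minimal sketch in Python (this is not md's actual on-disk format; the 68 KiB-per-stripe layout, the SHA-256 checksums, and all helper names are assumptions for the example). It splits a 64 KiB stripe into sixteen 4 KiB blocks, packs one checksum per block into a trailing 4 KiB metadata block, and shows the RAID-1 repair path: a block that fails verification on one mirror is rewritten from the other mirror, provided the same block is not bad on both.

import hashlib
import os

STRIPE = 64 * 1024           # data bytes per stripe
BLOCK = 4 * 1024             # checksum granularity
NBLOCKS = STRIPE // BLOCK    # 16 data blocks per stripe
SLOT = BLOCK // NBLOCKS      # 256 bytes per checksum slot; SHA-256 (32 B) fits easily

def checksum(data):
    return hashlib.sha256(data).digest()

def encode_stripe(data):
    """Return the 68 KiB on-disk image: 64 KiB of data plus a 4 KiB checksum block."""
    assert len(data) == STRIPE
    meta = bytearray(BLOCK)
    for i in range(NBLOCKS):
        c = checksum(data[i * BLOCK:(i + 1) * BLOCK])
        meta[i * SLOT:i * SLOT + len(c)] = c
    return data + bytes(meta)

def verify_stripe(image):
    """Return indices of the 4 KiB blocks whose stored checksum does not match.
    (In this toy, a corrupted checksum slot shows up as a bad data block.)"""
    data, meta = image[:STRIPE], image[STRIPE:]
    bad = []
    for i in range(NBLOCKS):
        c = checksum(data[i * BLOCK:(i + 1) * BLOCK])
        if meta[i * SLOT:i * SLOT + len(c)] != c:
            bad.append(i)
    return bad

def scrub_mirror_pair(img_a, img_b):
    """RAID-1 style repair: copy each bad 4 KiB block from the mirror whose
    checksum still matches. A block bad on both mirrors stays bad."""
    bad_a, bad_b = set(verify_stripe(img_a)), set(verify_stripe(img_b))
    for i in bad_a - bad_b:
        img_a[i * BLOCK:(i + 1) * BLOCK] = img_b[i * BLOCK:(i + 1) * BLOCK]
    for i in bad_b - bad_a:
        img_b[i * BLOCK:(i + 1) * BLOCK] = img_a[i * BLOCK:(i + 1) * BLOCK]

# Demo: silently corrupt one block on mirror A; the scrub locates and repairs it.
a = bytearray(encode_stripe(os.urandom(STRIPE)))
b = bytearray(a)
a[5 * BLOCK] ^= 0xFF
assert verify_stripe(a) == [5] and verify_stripe(b) == []
scrub_mirror_pair(a, b)
assert a == b and verify_stripe(a) == []

The point of the per-4 KiB slots is that a scrub hit only has to re-read 4 KiB from the healthy mirror instead of invalidating the whole 64 KiB stripe, and 256 bytes per slot leaves room for a much stronger checksum (or extra per-block metadata) than strictly needed.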

--
Mikael Abrahamsson    email: swmike@xxxxxxxxx