Hello list,

I have been looking into the repair functionality of the raid6 implementation in recent kernels to find out how the md driver handles parity mismatches. As I understand it, the handle_parity_checks6 function simply regenerates P and Q from the data blocks if they do not match. While this makes perfect sense for a single parity mismatch, a mismatch in both may instead indicate an error in one of the data blocks.

When repairing a full raid6 with no missing drives (raid-devices = n+2), a single inconsistent data block could be detected as follows: for each of the n data blocks, assume its device is missing, recover the block from P, regenerate Q' from the resulting stripe and compare it with the actual Q. If there is exactly one block for which Q' equals Q, rewrite that data block.

I understand the usual failure mode is to remove a drive from the array, or to use I/O errors reported by the kernel to identify incorrect data blocks. However, this assumes we recognize the error at the time it occurs and thus know which data block is incorrect. That is not always the case: the drives could be inconsistent after a multi-drive failure, an unclean shutdown, bit rot, or because one raid drive was replaced outside the realm of md using dd(rescue).

Is there a reason this approach is not currently taken? The performance implications seem low: there is no increased I/O (in fact, it may decrease, since the write-back shrinks by up to a factor of 2 when a single data block is rewritten instead of both parity blocks), and the number of parity calculations increases by a factor of n only in the error case (and both the error rate and n can be assumed to be low).

Cheers
Robert
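P.S. To make the proposal concrete, below is a minimal user-space sketch of the detection loop. This is not the md driver's code; the stripe layout, block size and the helper names (gf_mul, compute_pq, find_bad_data_block) are all mine. Only the GF(2^8) arithmetic (reduction polynomial 0x11d, generator g = 2, disk i weighted by g^i) matches what lib/raid6 uses.

/*
 * find_bad_block.c -- user-space demo of the proposed check.
 * Build: cc -std=c99 -Wall find_bad_block.c
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NDISKS 4   /* n data disks per stripe (arbitrary for the demo) */
#define BLKSZ  16  /* tiny block size to keep the demo readable */

/* Multiply in GF(2^8) with the raid6 reduction polynomial 0x11d. */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
	uint8_t r = 0;

	while (b) {
		if (b & 1)
			r ^= a;
		a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
		b >>= 1;
	}
	return r;
}

/* P = xor of all data blocks; Q = sum over i of g^i * D_i (Horner form). */
static void compute_pq(uint8_t d[NDISKS][BLKSZ], uint8_t *p, uint8_t *q)
{
	memset(p, 0, BLKSZ);
	memset(q, 0, BLKSZ);
	for (int i = NDISKS - 1; i >= 0; i--) {
		for (int j = 0; j < BLKSZ; j++) {
			p[j] ^= d[i][j];
			q[j] = gf_mul(q[j], 2) ^ d[i][j];
		}
	}
}

/*
 * For each data block k: assume it is the bad one, recover it from P,
 * recompute Q' and compare against the stored Q.  Return k if exactly
 * one candidate matches, -1 otherwise (in which case regenerating P
 * and Q, as handle_parity_checks6 does today, remains the only repair).
 */
static int find_bad_data_block(uint8_t d[NDISKS][BLKSZ],
			       const uint8_t *p, const uint8_t *q)
{
	int found = -1;

	for (int k = 0; k < NDISKS; k++) {
		uint8_t trial[NDISKS][BLKSZ], p2[BLKSZ], q2[BLKSZ];

		memcpy(trial, d, sizeof(trial));
		/* Recover block k from P and the other data blocks. */
		memcpy(trial[k], p, BLKSZ);
		for (int i = 0; i < NDISKS; i++)
			if (i != k)
				for (int j = 0; j < BLKSZ; j++)
					trial[k][j] ^= d[i][j];
		compute_pq(trial, p2, q2);
		/* P matches by construction; the real test is Q. */
		if (memcmp(q2, q, BLKSZ) == 0) {
			if (found >= 0)
				return -1;  /* ambiguous, give up */
			found = k;
		}
	}
	return found;
}

int main(void)
{
	uint8_t d[NDISKS][BLKSZ], p[BLKSZ], q[BLKSZ];

	for (int i = 0; i < NDISKS; i++)
		for (int j = 0; j < BLKSZ; j++)
			d[i][j] = (uint8_t)(i * 37 + j + 1);
	compute_pq(d, p, q);

	d[2][5] ^= 0xa5;  /* silently corrupt one data block */
	printf("suspect block: %d (expected 2)\n",
	       find_bad_data_block(d, p, q));
	return 0;
}

Note that with a single corrupt data block the match really is unique: two matching candidates k and m would require (g^k + g^m) * delta = 0 in GF(2^8) for a nonzero corruption delta, which cannot happen for distinct exponents below 255.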