>> For errors occurring on the level of hard disk blocks (signature: most
>> bytes of the block have D errors, all with the same z), the probability
>> for multi-disk corruption to go undetected is ((n-1)/256)**512. This
>> might pose a problem in the limiting case of n=255; for practical
>> applications, however, this probability is negligible, as it drops off
>> exponentially with decreasing n:
>
> That assumes fully random data distribution, which is almost certainly a
> false assumption.

Agreed. This means that the formula only specifies a lower limit on the
probability. However, is there an argument for why a pathological case
would be probable, i.e. why the actual probability would be likely to
*vastly* deviate from the theoretical limit? And if there is, would that
argument not also apply to other RAID-6 operations (like "check")?

And would it help to use different Galois field generators at different
positions in a sector, instead of a uniform generator? (A numerical
check of the quoted formula and a toy model of the signature are
appended after my signature.)

Kind regards,

Thiemo Nagel
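
PS: A quick numerical check of the quoted formula, under the same
fully-random-data assumption. Since ((n-1)/256)**512 underflows double
precision for small n, the sketch below (plain Python, variable names my
own) evaluates it in log space; it shows the exponential drop-off, and
that only n close to 255 is worrying:

import math

# P(n) = ((n-1)/256)**512: probability (under the random-data
# assumption) that multi-disk corruption of a 512-byte sector is
# misattributed to a single disk on an array of n data disks.
for n in (4, 8, 16, 32, 64, 128, 255):
    log10_p = 512 * (math.log10(n - 1) - math.log10(256))
    print("n = %3d:  P ~ 10^%.1f" % (n, log10_p))

For n=255 this gives P ~ 10^-1.7 (about 2%), while already at n=128 it
is down to roughly 10^-156.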
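
PPS: For reference, a toy model (plain Python, naming my own) of the
per-byte error location that the signature above relies on, using the
standard RAID-6 field GF(2^8) with polynomial 0x11D and generator {02}.
With a uniform generator, a single corrupted disk yields the same
z = log_{02}(Q_syn / P_syn) at every byte position; using different
generators at different positions would amount to swapping the exp/log
tables per byte below.

import random

POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, the RAID-6 field polynomial

# exp/log tables for the generator {02}
gf_exp = [0] * 512
gf_log = [0] * 256
x = 1
for i in range(255):
    gf_exp[i] = x
    gf_log[x] = i
    x <<= 1
    if x & 0x100:
        x ^= POLY
for i in range(255, 512):
    gf_exp[i] = gf_exp[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return gf_exp[gf_log[a] + gf_log[b]]

def locate(p_syn, q_syn):
    # per-byte disk index z = log_{02}(Q_syn / P_syn)
    if p_syn == 0 or q_syn == 0:
        return None
    return (gf_log[q_syn] - gf_log[p_syn]) % 255

n = 8        # data disks
nbytes = 16  # bytes per "sector" (512 in reality)
data = [[random.randrange(256) for _ in range(nbytes)] for _ in range(n)]

# P and Q parity per byte position: P = sum D_z, Q = sum g^z * D_z
P = [0] * nbytes
Q = [0] * nbytes
for z in range(n):
    for b in range(nbytes):
        P[b] ^= data[z][b]
        Q[b] ^= gf_mul(gf_exp[z], data[z][b])

# corrupt a single disk; every byte position then reports the same z
bad = 3
for b in range(nbytes):
    data[bad][b] ^= random.randrange(1, 256)

for b in range(nbytes):
    p_syn, q_syn = P[b], Q[b]
    for z in range(n):
        p_syn ^= data[z][b]
        q_syn ^= gf_mul(gf_exp[z], data[z][b])
    print(b, locate(p_syn, q_syn))  # prints z = 3 for every byte

Multi-disk corruption is misattributed to a single disk only when all
512 byte positions happen to agree on one valid z in 0..n-1, which
motivates the per-byte factor of (n-1)/256 in the quoted formula.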