Hi,

> The docs say that for both raid 5 & 6 the repair function simply
> rewrites the parity drive(s).  For raid-5 I can understand this, as
> there's no way to tell whether the data is incorrect or the parity is
> incorrect, since there's only one parity.  And while I don't know the
> details of the algorithms involved in raid-6, couldn't you do
> something like:
>
>   Calculate replacement data for both parity drives
>   If one of the two parity drives doesn't match its replacement data
>     assume that drive is bad
>   Else if both parity drives don't match their replacement data
>     one of the data drives must be bad
>     calculate replacement data for each data drive and find the one
>     that doesn't match
>     If more than one data drive doesn't match its replacement data
>       we have a multiple-drive failure (could be any combination of
>       parity & data drives) and can't determine which ones
>   Else
>     the world is ok
>
> It's probably a heck of a lot more computationally expensive, but it
> can isolate which drive is the bad one.  But again, I'm not
> knowledgeable about the internal details of raid-6 and might just be
> completely off my rocker.

Welcome to the club! It seems this topic pops up more or less every
6 months... Unfortunately, it also seems that the core developers do
not give high priority to this one.

BTW, did anybody else look into this? Is there any possibility to
perform this kind of check in user space?

bye,

--
piergiorgio
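P.S. For anyone who wants to experiment, here is a minimal user-space
sketch of the check described in the quoted pseudocode. It assumes the
standard RAID-6 syndromes (P = XOR of the data bytes, Q = sum of
g^i * D_i over GF(2^8) with generator 2 and reducing polynomial 0x11d)
and works on a single byte column of one stripe; the function names and
layout are purely illustrative and are not md's actual code or on-disk
format.

# Illustrative sketch only -- not md's actual layout or code.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8), reducing polynomial 0x11d."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
    return r

def gf_exp2(i):
    """Return 2**i in GF(2^8) (the generator raised to drive index i)."""
    v = 1
    for _ in range(i):
        v = gf_mul(v, 2)
    return v

def syndromes(data):
    """Expected P and Q parity bytes for one byte column of data drives."""
    p, q = 0, 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_exp2(i), d)
    return p, q

def check_column(data, p, q):
    """Classify one byte column, following the quoted pseudocode.

    data -- bytes read from the data drives
    p, q -- bytes read from the P and Q parity drives
    """
    ep, eq = syndromes(data)
    p_ok, q_ok = ep == p, eq == q
    if p_ok and q_ok:
        return "ok"
    if p_ok != q_ok:
        # exactly one parity disagrees with its recomputed value:
        # assume that parity drive itself holds the bad byte
        return "Q drive bad" if p_ok else "P drive bad"
    # both parities disagree: try each data drive in turn, rebuild it
    # from P and the other data drives, and see whether the repaired
    # column then satisfies Q as well
    suspects = []
    for z in range(len(data)):
        trial = list(data)
        trial[z] = p
        for i, d in enumerate(data):
            if i != z:
                trial[z] ^= d
        if syndromes(trial) == (p, q):
            suspects.append(z)
    if len(suspects) == 1:
        return "data drive %d bad" % suspects[0]
    return "multiple-drive failure, cannot isolate"

A real check would have to run this over every byte of every stripe
read from the member devices, which is why it would be a lot more
expensive than simply rewriting the parity, as the original poster
already suspected.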