> You can get parity sets of the appropriate size (and the necessary non-multiple
> intersection) by calculating parity vertically as well as horizontally.
> EVENODD utilizes this with fixed parity drives, IIRC. The illustration of
> RAID 6 at http://www.acnc.com/04_01_06.html implies this, though the
> layout they show will not work (losing the first and last drive would
> lose data block A0 as well as parity A and parity 0; rule of thumb: a
> chunk that is a member of two parity sets cannot have both parity blocks
> on the same drive). However, there ARE schemes that will work with simple
> encoding. The original RAID 6 implementation did not use simple encoding;
> IIRC it used Huffman codes.
>
> The reason it hasn't been used (I know of no commercial RAID 6
> implementations) is performance, but not because of the trivial parity
> calculation time. RAID 6 is very performance-expensive for the same reason
> that RAID 5 is performance-expensive: it's not the calculation of the
> parity, it's the disk I/O for writes. A small write to a RAID 5 can result
> in two reads followed by two writes. A small write to a RAID 6 can result
> in three reads followed by three writes, and with vertical parity striping
> the likelihood of a full-stripe write goes way down.
>
> That said, I'd like to get around to making a RAID 6 driver sometime. I
> think using small RAID 6 chunk sizes and pulling in a full parity page on
> each access might get respectable performance. But it will be quite a bit
> more hairy than the RAID 5 driver.

If performance is not really an issue, perhaps you should consider
Reed-Solomon codes. These would allow flexible N-disk redundancy and would
let you not only reconstruct after multiple device failures, but also
identify a device that gives erroneous information. R-S codes are what they
use on CDs and tapes. This "RAID-N" would be a real CPU hog, but arrays of
dozens of disks would be possible (and who's using all of his 2GHz CPU
anyway?)
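Not from the mail above, but to make the small-write penalty concrete: the
driver avoids reading the whole stripe by reading only the old data chunk and
the old parity, since new_parity = old_parity XOR old_data XOR new_data. That
is the 2 reads + 2 writes; a second parity block makes it 3 + 3. A toy sketch
(data values and chunk sizes are made up):

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical 4-byte chunks on a 3-data-disk stripe.
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
parity = xor(xor(d0, d1), d2)          # full-stripe parity

# Small write: replace d1 without touching d0 or d2.
new_d1 = b"\xff\xff\xff\xff"
# Read old d1 and old parity (2 reads), then write new d1 and new parity
# (2 writes) -- d0 and d2 are never touched.
new_parity = xor(xor(parity, d1), new_d1)

# Same result as recomputing from the whole stripe.
assert new_parity == xor(xor(d0, new_d1), d2)
```

The trade-off the mail describes follows directly: the more parity blocks a
chunk belongs to, the more of these read-modify-write pairs each small write
triggers.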
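For the curious, a sketch of the Reed-Solomon flavor of this (my own
illustration, not anyone's driver): the classic P+Q scheme is a tiny R-S code
over GF(2^8). P is plain XOR; Q weights each data byte by a power of the
generator g = 2, and together they let you rebuild any two lost devices. One
byte per "disk" here, and all the values are made up:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) modulo the polynomial 0x11d."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    # The multiplicative group has order 255, so a^254 is a's inverse.
    return gf_pow(a, 254)

data = [0x37, 0xc2, 0x5a, 0x01]        # one byte per data disk
P = 0
Q = 0
for k, d in enumerate(data):
    P ^= d                              # ordinary XOR parity
    Q ^= gf_mul(gf_pow(2, k), d)        # weighted parity: sum of g^k * d_k

# Pretend data disks 1 and 3 died; rebuild from the survivors plus P and Q.
i, j = 1, 3
Pp = data[0] ^ data[2]                                  # P over survivors
Qp = gf_mul(gf_pow(2, 0), data[0]) ^ gf_mul(gf_pow(2, 2), data[2])
A = P ^ Pp                              # A = d_i ^ d_j
B = Q ^ Qp                              # B = g^i*d_i ^ g^j*d_j
# Substituting d_i = A ^ d_j gives d_j = (B ^ g^i*A) / (g^i ^ g^j).
dj = gf_mul(B ^ gf_mul(gf_pow(2, i), A), gf_inv(gf_pow(2, i) ^ gf_pow(2, j)))
di = A ^ dj
assert (di, dj) == (data[1], data[3])
```

Generalizing to N parity devices means more weighted sums and a bigger linear
solve per write, which is exactly the CPU-hog part.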