On Dec 12, 2012, at 7:23 PM, NeilBrown wrote:

>> This has the effect of improving the redundancy of the array. We can
>> always sustain at least one failure, but sometimes more than one can
>> be handled. In the first example, the pairs of devices that CANNOT
>> fail together are:
>>   (1,2) (2,3) (3,4) (4,5) (5,6) (1,6)  [40% of possible pairs]
>> In the example where the copies are instead shifted by 3, the pairs
>> of devices that cannot fail together are:
>>   (1,4) (2,5) (3,6)  [20% of possible pairs]
>>
>> Performing the shifting in this way produces more redundancy and works
>> especially well when the number of devices is a multiple of the number
>> of copies.
>
> Unfortunately it doesn't bring any benefit (I think) when the number of
> devices is not a multiple of the number of copies. And if we are going
> to make a change, we should do the best we can.
>
> An approach that has previously been suggested is to divide the devices
> up into sets which are ncopies in size, or (for the last set) a little
> more, and rotate within those sets.
> So with 5 devices and two copies there are 2 sets, one of 2, one of 3.
>
>   A B C D E
>   B A D E C
>
> The only pairs where we cannot survive the failure of both are pairs
> that are in the same set. This is as good as your scheme when ncopies
> divides raid_disks, but better when it doesn't.
>
> So unless there is a good reason not to, I would rather we go with the
> scheme that gives the best result in all cases.

I think I've got something now that works like this. I've got to test
and re-document it. I'll repost tomorrow or the day after.

 brassow
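
For reference, a quick standalone sketch of the set-based layout Neil
describes (illustration only; this is not the md/raid10 code, and the
file and function names are my own invention). It splits the devices
into sets of ncopies each, with the last set absorbing any remainder,
rotates each copy row within its set, and then enumerates the device
pairs whose joint failure would lose data:

/*
 * layout_sketch.c - illustration only; not the md/raid10 implementation.
 *
 * Devices are split into sets of ncopies each, with the last set
 * absorbing any remainder; copy k within a set is the set rotated by
 * k positions.  The program prints the layout and then enumerates the
 * device pairs whose joint failure would lose data.
 *
 * Build: cc -o layout_sketch layout_sketch.c
 * Run:   ./layout_sketch 5 2
 */
#include <stdio.h>
#include <stdlib.h>

static int raid_disks, ncopies, nsets;

static int set_base(int d)              /* first device of d's set */
{
	int s = d / ncopies;
	if (s >= nsets)                 /* remainder joins the last set */
		s = nsets - 1;
	return s * ncopies;
}

static int set_size(int base)           /* devices in the set at 'base' */
{
	return base == (nsets - 1) * ncopies ? raid_disks - base : ncopies;
}

static int copy_dev(int L, int k)       /* device holding copy k of chunk L */
{
	int base = set_base(L), size = set_size(base);
	return base + ((L - base - k) % size + size) % size;
}

int main(int argc, char **argv)
{
	raid_disks = argc > 1 ? atoi(argv[1]) : 5;
	ncopies    = argc > 2 ? atoi(argv[2]) : 2;
	nsets      = raid_disks / ncopies;
	if (nsets < 1)
		nsets = 1;

	/* Row k shows which chunk's k-th copy each device holds. */
	for (int k = 0; k < ncopies; k++) {
		for (int d = 0; d < raid_disks; d++) {
			int base = set_base(d);
			printf("%c ", 'A' + base + (d - base + k) % set_size(base));
		}
		printf("\n");
	}

	/* A pair is fatal iff some chunk keeps ALL its copies on it. */
	int fatal = 0, total = raid_disks * (raid_disks - 1) / 2;
	printf("fatal pairs:");
	for (int a = 0; a < raid_disks; a++)
		for (int b = a + 1; b < raid_disks; b++)
			for (int L = 0; L < raid_disks; L++) {
				int lost = 1;
				for (int k = 0; k < ncopies; k++) {
					int d = copy_dev(L, k);
					if (d != a && d != b)
						lost = 0;
				}
				if (lost) {
					printf(" (%c,%c)", 'A' + a, 'A' + b);
					fatal++;
					break;
				}
			}
	printf("  [%d of %d]\n", fatal, total);
	return 0;
}

With 5 devices and 2 copies it reproduces Neil's two rows above and
reports the same-set pairs (A,B) (C,D) (C,E) (D,E), i.e. 4 of 10; with
6 devices and 2 copies it reports 3 of 15, matching the 20% figure from
the shifted-by-3 example.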