Re: [PATCH 1 of 2] MD RAID10: Improve redundancy for 'far' and 'offset' algorithms

On Wed, 12 Dec 2012 10:45:05 -0600 Jonathan Brassow <jbrassow@xxxxxxxxxx>
wrote:

> MD RAID10:  Improve redundancy for 'far' and 'offset' algorithms
> 
> The MD RAID10 'far' and 'offset' algorithms make copies of entire stripe
> widths - copying them to a different location on the same devices after
> shifting the stripe.  An example layout of each follows below:
> 
> 	        "far" algorithm
> 	dev1 dev2 dev3 dev4 dev5 dev6
> 	==== ==== ==== ==== ==== ====
> 	 A    B    C    D    E    F
> 	 G    H    I    J    K    L
> 	            ...
> 	 F    A    B    C    D    E  --> Copy of stripe0, but shifted by 1
> 	 L    G    H    I    J    K
> 	            ...
> 
> 		"offset" algorithm
> 	dev1 dev2 dev3 dev4 dev5 dev6
> 	==== ==== ==== ==== ==== ====
> 	 A    B    C    D    E    F
> 	 F    A    B    C    D    E  --> Copy of stripe0, but shifted by 1
> 	 G    H    I    J    K    L
> 	 L    G    H    I    J    K
> 	            ...
> 
> Redundancy for these algorithms is gained by shifting the copied stripes
> a certain number of devices - in this case, 1.  This patch proposes that the
> number of devices the copy is shifted by be changed from:
> 	device# + near_copies
> to
> 	device# + raid_disks/far_copies
> 
> The above "far" algorithm example would now look like:
> 	        "far" algorithm
> 	dev1 dev2 dev3 dev4 dev5 dev6
> 	==== ==== ==== ==== ==== ====
> 	 A    B    C    D    E    F
> 	 G    H    I    J    K    L
> 	            ...
> 	 D    E    F    A    B    C  --> Copy of stripe0, but shifted by 3
> 	 J    K    L    G    H    I
> 	            ...
> 
> This has the effect of improving the redundancy of the array.  We can
> always sustain at least one failure, but sometimes more than one can
> be handled.  In the first examples, the pairs of devices that CANNOT fail
> together are:
> 	(1,2) (2,3) (3,4) (4,5) (5,6) (1,6)  [40% of possible pairs]
> In the example where the copies are instead shifted by 3, the pairs of
> devices that cannot fail together are:
> 	(1,4) (2,5) (3,6)                    [20% of possible pairs]
> 
> Shifting the copies in this way produces more redundancy and works especially
> well when the number of devices is a multiple of the number of copies.
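
As a quick illustration (a standalone sketch, not the actual drivers/md/raid10.c
code; the names and constants here are made up for the 6-device, 2-copy example
above), the copy placement and the unsafe-pair count for both shifts can be
checked with something like:

/*
 * Sketch: where the second copy of each chunk lands in a 6-device,
 * 2-copy 'far' layout, for the current shift (near_copies = 1) and the
 * proposed shift (raid_disks / far_copies = 3), plus a count of the
 * device pairs that cannot fail together.
 */
#include <stdio.h>

#define RAID_DISKS 6
#define FAR_COPIES 2

static void count_unsafe_pairs(const char *name, int shift)
{
	int unsafe[RAID_DISKS][RAID_DISKS] = { { 0 } };
	int n = 0, total = RAID_DISKS * (RAID_DISKS - 1) / 2;

	for (int d = 0; d < RAID_DISKS; d++) {
		/* The copy of the chunk on device d lives on device c. */
		int c = (d + shift) % RAID_DISKS;
		int lo = d < c ? d : c;
		int hi = d < c ? c : d;

		unsafe[lo][hi] = 1;	/* losing both d and c loses data */
	}

	printf("%s (shift %d):", name, shift);
	for (int i = 0; i < RAID_DISKS; i++)
		for (int j = i + 1; j < RAID_DISKS; j++)
			if (unsafe[i][j]) {
				printf(" (%d,%d)", i + 1, j + 1);
				n++;
			}
	printf("  -> %d of %d possible pairs\n", n, total);
}

int main(void)
{
	count_unsafe_pairs("current", 1);	/* shift by near_copies */
	count_unsafe_pairs("proposed", RAID_DISKS / FAR_COPIES);
	return 0;
}

Running it reports six unsafe pairs for shift 1 and three for shift 3, matching
the 40% and 20% figures quoted above.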

Unfortunately it doesn't bring any benefit (I think) when the number of
devices is not a multiple of the number of copies.  And if we are going to
make a change, we should do the best we can.

An approach that has previously been suggested is to divide the devices up
into sets which are ncopies in size, or (for the last set) a little more, and
rotate within those sets.
So with 5 devices and two copies there are 2 sets, one of 2, one of 3.

  A  B  C  D  E
  B  A  D  E  C

The only pairs whose simultaneous failure we cannot survive are pairs within
the same set.  This is as good as your scheme when the number of copies
divides raid_disks, but better when it doesn't.
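
A rough standalone sketch of that placement (illustrative C only, not a
proposed implementation; the set-splitting rule and rotation direction are
just one reading of the description) for the 5-device, 2-copy example:

/*
 * Sketch: split raid_disks into sets of ncopies devices (the last set
 * absorbs any remainder) and rotate the copy within each set.
 */
#include <stdio.h>

#define RAID_DISKS 5
#define NCOPIES    2

int main(void)
{
	const char chunk[RAID_DISKS] = { 'A', 'B', 'C', 'D', 'E' };
	char copy[RAID_DISKS];
	int nsets = RAID_DISKS / NCOPIES;	/* 2 sets: {1,2} and {3,4,5} */

	for (int d = 0; d < RAID_DISKS; d++) {
		int set = d / NCOPIES;
		if (set >= nsets)		/* fold leftovers into the last set */
			set = nsets - 1;
		int first = set * NCOPIES;
		int size = (set == nsets - 1) ? RAID_DISKS - first : NCOPIES;

		/* Device d's copy slot holds the chunk of the next device in its set. */
		int next = first + (d - first + 1) % size;
		copy[d] = chunk[next];
	}

	printf("first copy :");
	for (int d = 0; d < RAID_DISKS; d++)
		printf(" %c", chunk[d]);
	printf("\nsecond copy:");
	for (int d = 0; d < RAID_DISKS; d++)
		printf(" %c", copy[d]);
	printf("\n");
	return 0;
}

It prints A B C D E over B A D E C; the only pairs that cannot fail together
are (1,2), (3,4), (3,5) and (4,5) - 4 of the 10 possible pairs.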

So unless there is a good reason not to, I would rather we go with the scheme
that gives the best result in all cases.


NeilBrown
