Re: expand raid10

On Thu, Apr 14, 2011 at 09:36:57AM +1000, NeilBrown wrote:
> On Wed, 13 Apr 2011 14:34:15 +0200 David Brown <david@xxxxxxxxxxxxxxx> wrote:
> 
> > On 13/04/2011 13:17, NeilBrown wrote:
> > > On Wed, 13 Apr 2011 13:10:16 +0200 Keld Jørn Simonsen <keld@xxxxxxxxxx> wrote:
> > >
> > >> On Wed, Apr 13, 2011 at 07:47:26AM -0300, Roberto Spadim wrote:
> > >>> raid10 with other layout i could expand?
> > >>
> > >> My understanding is that you currently cannot expand raid10,
> > >> but there are things in the works. Expansion of raid10,far
> > >> was not on the list from Neil; raid10,near was. But it should be fairly
> > >> easy to expand raid10,far. You can just treat one of the copies as your
> > >> reference data, and copy that data to the other raid0-like parts of the
> > >> array.  I wonder if Neil thinks he could leave that as an exercise for
> > >> me to implement... I would like to be able to combine it with a
> > >> reformat to a more robust layout of raid10,far that in some cases can
> > >> survive more than one disk failure.
> > >>
> > >
> > > I'm very happy for anyone to offer to implement anything.
> > >
> > > I will of course require the code to be of reasonable quality before I accept
> > > it, but I'm also happy to give helpful review comments and guidance.
> > >
> > > So don't wait for permission, if you want to try implementing something, just
> > > do it.
> > >
> > > Equally if there is something that I particularly want done I won't wait for
> > > ever for someone else who says they are working on it.  But RAID10 reshape is
> > > a long way from the top of my list.
> > >
> > 
> > I know you have other exciting things on your to-do list - there was 
> > a lot in your roadmap thread a while back.
> > 
> > But I'd like to put in a word for raid10,far - it is an excellent choice 
> > of layout for small or medium systems, combining redundancy with 
> > near-raid0 speed.  It is particularly well suited to 2- or 3-disk systems. 
> > The only disadvantage is that it can't be resized or re-shaped.  The 
> > algorithm suggested by Keld sounds simple to implement, but it would 
> > leave the disks in a non-redundant state during the resize/reshape. 
> > That would be good enough for some uses (and better than nothing), but 
> > not good enough for all uses.  It might also be extended to cover both 
> > resizing (replacing each disk with a bigger one) and adding another disk 
> > to the array.
> > 
> > Currently, it /is/ possible to get an approximate raid10,far layout that 
> > is resizeable and reshapeable.  You can divide the member disks into two 
> > partitions and pair them off appropriately in mirrors.  Then use these 
> > mirrors to form a degraded raid5 with "parity-last" layout and a missing 
> > last disk - this is, as far as I can see, equivalent to a raid0 layout 
> > but can be re-shaped to more disks and resized to use bigger disks.
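> > 
> > As a minimal sketch of that construction (hypothetical device names, 
> > two disks each split into two equal partitions, cross-paired so each 
> > mirror survives a whole disk failing):
> > 
> >     # mirror partition 1 of each disk against partition 2 of the other
> >     mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2
> >     mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sda2
> >     # degraded raid5, parity-last, last member missing: no parity is
> >     # ever written, so it stripes like raid0, yet it can later be
> >     # grown with "mdadm --grow --raid-devices=N"
> >     mdadm --create /dev/md0 --level=5 --layout=parity-last \
> >           --raid-devices=3 /dev/md1 /dev/md2 missing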
> > 
> 
> There is an interesting idea in here....
> 
> Currently, if the devices in an md/raid array with redundancy (1,4,5,6,10) are
> of different sizes, they are all treated as being the size of the smallest
> device.
> However this doesn't really make sense for RAID10-far.
> 
> For RAID10-far it would make more sense for the second slab of data to start
> not at 50% of the smallest device (in the far-2 case), but at 50% of the
> current device.
> 
> Then, after replacing all the devices in a RAID10-far with larger devices,
> the size of the array could be increased with no further data
> rearrangement.
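> 
> For example (sizes for illustration only): in a far-2 array of two 1TB
> devices, each second copy starts at the 500GB mark.  Replace the devices
> one at a time with 2TB ones: with the per-device offset, each rebuild
> writes the second copy at the 1TB mark of its new device, so once both
> are replaced the array can grow to use the full 2TB without moving any
> existing data.  With today's layout the second copy would still sit at
> the 500GB mark and would have to be relocated first.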
> 
> A lot of care would be needed to implement this as the assumption that all
> drives are only as big as the smallest is pretty deep.  But it could be done
> and would be sensible.
> 
> That would make point 2 of http://neil.brown.name/blog/20110216044002#11 a
> lot simpler.

Hmm, I am not sure I understand. E.g. for the simple case of growing a
2-disk raid10-far to a 3-disk or 4-disk array, how would that be done? I
think you need to rewrite the whole array. But I think you also need to
do that when growing most of the other array types.

Quoting point 2 of http://neil.brown.name/blog/20110216044002#11:

> 2/ Device size of 'far' arrays cannot be changed easily. Increasing
> device size of 'far' would require re-laying out a lot of data. We would
> need to record the 'old' and 'new' sizes which metadata doesn't
> currently allow. If we spent 8 bytes on this we could possibly manage a
> 'reverse reshape' style conversion here.
> 
> EDIT: if we stored data on drives a little differently this could be a
> lot easier. Instead of starting the second slab of data at the same
> location on all devices, we start it an appropriate fraction into the
> size of 'this' device, then replacing all devices in a raid10-far with
> larger drives would be very effective. However just increasing the size
> of the device (e.g. using LVM) would not work very well 

I am not sure I understand the problem here. Are you saying that there
is no room in the metadata to hold info on the reshape while it is in
progress?

For a simple grow with more partitions of the same size, I see problems
in just keeping the old data where it is. I think that would hurt the
striping performance.

And I don't understand what is meant by "we start it an appropriate
fraction" - what fraction would that be? E.g. when growing from 2 to 3 disks?

If you want integrity of the data, understood as always having the
required number of copies available, then you could copy from the end of
the half-array and keep a pointer that records how far the process has
progressed. There may be some initial problems with consistency, but
maybe there are some recovery areas in the new array layout that could be
used for bootstrapping the process - once you are past an initial size,
you are no longer overwriting old data.
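
A rough sketch of that copy-from-the-end pattern (plain dd over
hypothetical $src/$dst regions with an assumed $total_chunks count - not
real reshape code, just the backwards walk with a resumable pointer):

    chunk=$((64 * 1024))                # 64 KiB chunks, illustrative
    pos=$total_chunks                   # chunks still to move
    while [ "$pos" -gt 0 ]; do
        pos=$((pos - 1))
        # move one chunk, working from the end towards the start
        dd if="$src" of="$dst" bs=$chunk count=1 \
           skip=$pos seek=$pos conv=notrunc 2>/dev/null
        echo "$pos" > /tmp/reshape.pointer   # resume point after a crash
    done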

Best regards
keld
