Re: Resize RAID-0?

On Thu, Mar 11, 2010 at 7:39 PM, Patrick J. LoPresti <lopresti@xxxxxxxxx> wrote:
> From reading the documentation and source code, I gather "mdadm
> --grow" is not supported for RAID-0 devices.
>
> In my application, I am using md RAID-0 to stripe among several
> (identical) hardware RAID-0 chassis.  I would like to extend my setup
> by adding another chassis, but apparently, I cannot (?).
>
> The next time I build a system like this, is there any way to get what
> I want using Linux?  That is, striping among devices, no parity, but
> with the ability to grow in the future?
>
> Thanks!
>
>  - Pat
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

I think LVM may allow you to do that.  I know the newer metadata
format supports /mixing/ stripe counts across different segments of
storage.  So you may be able to, with some slightly non-trivial steps
but using existing commands and a little free space, shuffle your data
out of an M-device striped segment into an N-device striped segment.
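As a rough sketch of that LVM route (the commands and flags are real
LVM2 commands, but the volume group name "vg0", the device paths, and
the sizes are made up for illustration), one way with existing tools is
to add the new chassis as a PV, build a fresh LV striped across all N
devices, and copy the data over offline:

```shell
# Illustrative only: assumes a VG "vg0" already spanning the original
# three chassis, with the new chassis appearing as /dev/sdd1.
pvcreate /dev/sdd1
vgextend vg0 /dev/sdd1

# New LV striped across all 4 PVs (-i = stripe count, -I = stripe
# size in KB); size and name are placeholders.
lvcreate -i 4 -I 64 -L 500G -n data_new vg0

# Copy the old LV's contents over while nothing is mounted, then
# swap the names so consumers see the new, wider stripe set.
dd if=/dev/vg0/data of=/dev/vg0/data_new bs=4M
lvrename vg0 data data_old
lvrename vg0 data_new data

# After verifying the copy, reclaim the old segment:
# lvremove vg0/data_old
```

The obvious cost is needing enough free space for a second full copy;
the upside is that every step is a stock LVM command and the original
data stays intact until you remove it.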

I don't actually see any technical reason why restriping RAID-0, or
converting it to another RAID level, should be impossible to support.
I just don't believe the feature currently exists, and the metadata
might need changes to allow different segments of backing storage and
layout within the same md container.  However, if it were possible to
store two layouts at once, only a small critical window would need
backup, and everything else could sync through the largest
non-overlapping region.  The worst case would be a same-size to
same-size reshape; a literal nightmare performance-wise, since the
whole set would be one critical section.  Even reserving a fraction of
a megabyte at creation time would at least allow the operation on
/that/ space to be done as a non-critical section.  Alternatively, if
a write-intent bitmap were in use, it could be destroyed and its space
used for the critical section.  The bitmap might then flop between the
front and back of the device across successive reshapes or shifts.
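The overlap argument above can be made concrete with a toy model.  In
the sketch below (my own simplification, not md's actual layout code),
logical chunk k of a RAID-0 set over n devices lives on device k mod n
at per-device offset k // n.  Copying chunks in increasing logical
order, a destination write is only dangerous if it lands on a slot
still holding an unread source chunk; the function reports the deepest
per-device offset where that happens:

```python
# Toy model of RAID-0 chunk placement, purely to illustrate the
# critical-window argument; real md layouts are more involved.

def location(k, ndev):
    """(device index, per-device chunk offset) for logical chunk k."""
    return k % ndev, k // ndev

def max_overlap_offset(nchunks, old_n, new_n):
    """Deepest per-device offset at which writing the new layout would
    clobber a not-yet-read chunk of the old layout, assuming chunks
    are copied in increasing logical order.  -1 means no conflict."""
    worst = -1
    for k in range(nchunks):
        dev_new, off_new = location(k, new_n)
        # Any old-layout chunk with a higher logical index (unread so
        # far) sitting at the same physical slot would be destroyed.
        for j in range(k + 1, nchunks):
            if location(j, old_n) == (dev_new, off_new):
                worst = max(worst, off_new)
    return worst

# Growing 3 -> 4 devices: writes always land at or before the read
# frontier, so no backup window is needed at all in this model.
print(max_overlap_offset(120, 3, 4))   # -1

# Shrinking 4 -> 3 (any layout change that maps data onto still-
# occupied offsets behaves similarly): conflicts reach deep into the
# array, so a large region is effectively one critical section.
print(max_overlap_offset(120, 4, 3))   # 29
```

The grow case coming out conflict-free, while the shrink case
conflicts across most of the set, is exactly the asymmetry that makes
a small reserved scratch area (or a sacrificed write-intent bitmap)
attractive for the bad direction.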

I just got done glancing at a few places (website, documentation, and
kernel code) and didn't see any representation of what the metadata
block looks like, so I can't say for sure whether it matches what I
would expect from the behavior.