Re: Roadmap for md/raid ???

It needn't be all that difficult. Consider the following:

you have a block layer driver which performs the following duties:

- it manages a pool of N block devices
- it exposes a portion of that storage as a single block device
- the unexposed portion goes toward parity (in the vein of providing at
  least N bits of parity protection)
- the level of parity protection is applied to each block of size X; for
  example, a 64K block might consist of 48K of data and 16K of parity
- on top of that, the data+parity could be mirrored across the other
  devices, anywhere, so long as each block lands on at least M of them
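A minimal sketch of the per-block split described above, in Python. The function name and the idea of expressing protection as a fraction are hypothetical illustrations, not an existing md interface; only the 64K = 48K data + 16K parity figures come from the example:

```python
# Sketch of the per-block data/parity split described above.
# split_block() and the parity_fraction parameter are hypothetical,
# chosen just to make the 64K example concrete.

BLOCK_SIZE = 64 * 1024  # total block size X, in bytes


def split_block(block_size, parity_fraction):
    """Return (data_bytes, parity_bytes) for one block of block_size."""
    parity = int(block_size * parity_fraction)
    return block_size - parity, parity


data, parity = split_block(BLOCK_SIZE, 0.25)
# 64K block -> 48K of data + 16K of parity, matching the example above
```

Raising parity_fraction would trade usable capacity for stronger protection on a per-block basis, which is the knob the paragraph above is gesturing at.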

I'm just kinda thinking out loud. The duplicate block stuff could just
as easily be an md1 or md10 layer, so a new raid level might only have
to handle the parity stuff.


On Mon, Jan 19, 2009 at 2:18 PM, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
> On Mon, Jan 19, 2009 at 3:00 PM, Jon Nelson
> <jnelson-linux-raid@xxxxxxxxxxx> wrote:
>> I guess what I'd like to see more than anything else is not raid5 or
>> raid6 but raidN where N can be specified at the start and grown. While
>> I'm not a fan of ZFS's rule-breaking, one thing it does (or claims
>> to do) strikes me as "the future" - the ability to specify X
>> protection bits and to increase or decrease X as needs change. It
>> also strikes me that there are several ways to do this, but
>> fundamentally it boils down to "we don't trust our drives anymore
>> and therefore we wish to protect our data against their failure".
>> Given 5 or 10 or 50 drives
>> how does one really protect their data effectively and allow their
>> data pool to grow without large quantities of hoop-jumping?
>>
>> I'd really like to see a re-thinking of data protection (parity or
>> data duplication) at the block layer - it need not be RAID as we know
>> it but IMO something has to be done - rapidly do we near the
>> precipice!
>
> If I understand what you're saying, HP started supporting "Raid
> Equivalent" protection in some of their storage arrays years ago.
>
> You simply put a bunch of disk drives in the unit, then tell it you
> want a 50GB logical volume with Raid 5 equivalent protection, etc.
>
> I think it might, for example, allocate 10 GB from each of 6 disk
> drives (i.e. 5 for data + 1 for parity).
>
> Then you ask for 30 GB with raid 10 equivalent protection and it might
> allocate 10GB more from those same 6 drives.
>
> Then you go back and increase the size of the 50GB raid5 volume, and
> it would allocate more space on the drives as required, always
> ensuring the data was protected at least at raid5 levels.
>
> I sort of thought of it as an integrated LVM and Raid manager.
>
> I suspect putting that together is a pretty large amount of effort.
>
> Greg
> --
> Greg Freemyer
> Litigation Triage Solutions Specialist
> http://www.linkedin.com/in/gregfreemyer
> First 99 Days Litigation White Paper -
> http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
>
> The Norcross Group
> The Intersection of Evidence & Technology
> http://www.norcrossgroup.com
>
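The allocation arithmetic in the quoted example can be sketched as follows. The helper names are hypothetical; only the 50GB raid5-equivalent across 6 drives and the 30GB raid10-equivalent figures come from the text above:

```python
# Sketch of the raw-capacity arithmetic in the HP-style example above.
# raid5-equivalent over n drives: each stripe is n-1 parts data,
# 1 part parity; raid10-equivalent: every block stored twice.
# Helper names are illustrative, not any vendor's actual API.

def raw_needed_raid5(logical_gb, n_drives):
    """Raw GB needed to hold logical_gb with raid5-equivalent parity."""
    return logical_gb * n_drives / (n_drives - 1)


def raw_needed_raid10(logical_gb):
    """Raw GB needed with raid10-equivalent mirroring (one extra copy)."""
    return logical_gb * 2


# 50 GB at raid5-equivalent across 6 drives -> 60 GB raw,
# i.e. 10 GB from each of the 6 drives, as in the example.
raw = raw_needed_raid5(50, 6)
per_drive = raw / 6
```

The same arithmetic gives 60 GB raw for the 30 GB raid10-equivalent request, i.e. another 10 GB from each of the same 6 drives.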



-- 
Jon
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
