Re: [dm-devel] raid10 ... (was: Re: ANNOUNCE: mdadm 1.7.0)

That is what you get from an LVM2 setup with one VG containing all your
disks and mirrored LVs.

I haven't checked lately, but the PE allocators certainly need to be more
intelligent about not placing mirror members on the same spindle.
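To illustrate (a rough Python sketch of the idea, not LVM2's actual
allocator; all names here are made up): the allocator just has to exclude
spindles that already hold another leg of the same LV.

    # Sketch only -- not LVM2 code. Place each mirror leg on a distinct
    # spindle, preferring the emptiest PV that still has room.
    def place_mirror_legs(pvs, legs_needed, extents_per_leg):
        """pvs: dict mapping PV name -> free extent count."""
        placement = {}  # leg number -> PV name
        for leg in range(legs_needed):
            candidates = [pv for pv, free in pvs.items()
                          if free >= extents_per_leg
                          and pv not in placement.values()]
            if not candidates:
                raise RuntimeError("cannot keep mirror legs on distinct spindles")
            pv = max(candidates, key=lambda p: pvs[p])
            placement[leg] = pv
            pvs[pv] -= extents_per_leg
        return placement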

regards,
cvaroqui

> Imagine having a pool of drives, where chunks of data are distributed evenly
> across all drives in a redundant manner. If one drive dies, the chunks that
> are no longer redundant get new copies on the remaining drives, provided
> there is enough space left; if one or more drives are added to the
> array, new chunks are written there until the balance is reached again.
> 
> Disk space could be the first key for balancing across the drives, with
> transfer rate or seek time maybe added later. Maybe the pool could even
> adapt dynamically to the I/O patterns ...
> 
> Am I dreaming (it's well past 4am here :)? Or is something like this
> possible? Maybe not with an md personality, but with some daemon that
> would be taking care of a dm map?
> 
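For what it's worth, the policy you describe can be sketched in a few
lines of Python (purely illustrative -- the drive names and chunk-level
bookkeeping are made up, and a real implementation would still need the
actual data-copy machinery behind a dm map):

    # Sketch of a self-balancing redundant pool: every chunk lives on
    # REPLICAS distinct drives, and placement always targets the drives
    # with the most free space, so a new drive attracts new copies and a
    # dead drive's chunks get re-replicated elsewhere.
    REPLICAS = 2

    class Pool:
        def __init__(self, drives):
            self.free = dict(drives)   # drive name -> free chunk slots
            self.chunks = {}           # chunk id -> set of drive names

        def _pick(self, exclude):
            # Emptiest drive that does not already hold this chunk.
            candidates = [d for d in self.free
                          if d not in exclude and self.free[d] > 0]
            if not candidates:
                raise RuntimeError("pool full, or too few drives left")
            return max(candidates, key=lambda d: self.free[d])

        def write(self, chunk_id):
            homes = set()
            for _ in range(REPLICAS):
                d = self._pick(exclude=homes)
                self.free[d] -= 1
                homes.add(d)
            self.chunks[chunk_id] = homes

        def drive_failed(self, name):
            # Re-replicate every chunk that just lost a copy.
            del self.free[name]
            for homes in self.chunks.values():
                if name in homes:
                    homes.discard(name)
                    d = self._pick(exclude=homes)
                    self.free[d] -= 1
                    homes.add(d)

        def drive_added(self, name, capacity):
            # New writes naturally favour the empty newcomer; a real
            # rebalancer would also migrate existing chunks onto it.
            self.free[name] = capacity

So Pool({"sda": 1000, "sdb": 1000, "sdc": 1000}) plus a stream of write()
calls keeps the three drives evenly filled, and drive_failed() restores
redundancy as long as space remains.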

