Re: Full use of varying drive sizes?

On 22/09/2009 14:07, Majed B. wrote:
When I first put up a storage box, it was built out of 4x 500GB disks;
later on, I expanded to 1TB disks.

What I did was partition the 1TB disks into 2x 500GB partitions, then
create 2 RAID arrays, one from each set of partitions:
md0: sda1, sdb1, sdc1, ...etc.
md1: sda2, sdb2, sdc2, ...etc.

All of those below LVM.

This worked for a while, but when more 1TB disks started making their
way into the array, performance dropped because each disk had to serve
reads from 2 partitions, and even worse: when a disk failed, both
arrays were affected, and things only got nastier with time.

Sorry, I don't quite see what you mean. Sure, if half your drives are 500GB and half are 1TB, and you therefore have two arrays on the 1TB drives, with the arrays as PVs for LVM and one filesystem over the lot, you're going to get twice as many read/write ops on the larger drives - but you'd get that just by concatenating the drives with JBOD. I wasn't suggesting you let LVM stripe across the arrays, though; that would be performance suicide.
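
To make that concrete, something along these lines is what I had in mind (just a sketch - the VG name and sizes are made up, and md0/md1 stand for the two arrays from your example):

  # Each md array becomes its own PV; both go into one VG.
  pvcreate /dev/md0 /dev/md1
  vgcreate bigvg /dev/md0 /dev/md1

  # Default LVM allocation is linear, i.e. effectively concatenation:
  lvcreate -n data -l 100%FREE bigvg

  # What I was NOT suggesting - striping the LV across both PVs,
  # which would hit both halves of every 1TB disk for each large I/O:
  # lvcreate -n data -i 2 -l 100%FREE bigvg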

I would not recommend that you create arrays of partitions that rely
on each other.

Again, I don't see what you mean by "rely on each other"; they're just PVs to LVM.
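
To spell out "just PVs": the two arrays in your layout would be created with something like this (device names taken from your mail; the RAID level is only an assumption for illustration):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

As far as LVM is concerned, md0 and md1 are just two independent block devices handed to it as PVs - it doesn't know or care that they live on the same spindles, so there's no dependency between them at that layer.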

Cheers,

John.

