Re: RAID-10 explicitly defined drive pairs?

On 1/6/2012 2:11 PM, Jan Kasprzak wrote:
> And I suspect that XFS swidth/sunit
> settings will still work with RAID-10 parameters even over plain
> LVM logical volume on top of that RAID 10, while the settings would
> be more tricky when used with interleaved LVM logical volume on top
> of several RAID-1 pairs (LVM interleaving uses LE/PE-sized stripes, IIRC).
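
(Indeed; e.g. for a hypothetical 24-drive near-2 md RAID-10 with a
512KiB chunk, numbers made up for illustration, su is the chunk size
and sw is the number of mirror pairs, so something like:

$ mkfs.xfs -d su=512k,sw=12 [device]

and when run directly against the md device mkfs.xfs will usually
detect that geometry by itself.)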

If one is using many RAID1 pairs, s/he probably isn't after single
large file performance anyway, or s/he would just use RAID10.  Thus
sunit/swidth settings aren't tricky in this case; with no striping
there is nothing for them to describe.  One would use a linear
concatenation and get drive parallelism from XFS allocation groups,
i.e. for a 24 drive chassis you'd set up an mdraid or lvm linear array
of 12 RAID1 pairs (sketched below) and format it with something like:

$ mkfs.xfs -d agcount=24 [device]
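
The md side underneath would be roughly as follows.  A sketch only,
with drive names purely illustrative; adjust to your hardware:

# pair 24 drives into 12 RAID1 mirrors, then concatenate the mirrors
drives=(/dev/sd{a..x})
for i in $(seq 0 11); do
    mdadm --create /dev/md$i --level=1 --raid-devices=2 \
          "${drives[2*i]}" "${drives[2*i+1]}"
done
mdadm --create /dev/md12 --level=linear --raid-devices=12 /dev/md{0..11}

[device] above would then be /dev/md12, or an LV carved from it.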

As long as one's workload writes files relatively evenly across 24 or
more directories, one gets excellent concurrency/parallelism: in this
case 24 concurrent allocation streams, 2 to each mirror pair.  With
15K SAS drives this is far more than sufficient to saturate the seek
bandwidth of the drives.  One may need more AGs to achieve the
concurrency necessary to saturate good SSDs.
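
One can sanity-check the resulting layout with xfs_info, mounting with
inode64 (mount point hypothetical):

$ mount -o inode64 /dev/md12 /mnt/array
$ xfs_info /mnt/array | grep agcount

With inode64, XFS rotors new directories across AGs, so 24+ top level
directories naturally spread their files across all 12 mirror pairs.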

-- 
Stan