Re: RAID-10 explicitly defined drive pairs?

>> And I suspect that XFS swidth/sunit settings will still work
>> with RAID-10 parameters even over a plain LVM logical volume
>> on top of that RAID-10, while the settings would be trickier
>> when used with an interleaved LVM logical volume on top of
>> several RAID-1 pairs (LVM interleaving uses LE/PE-sized
>> stripes, IIRC).

Stripe alignment is only relevant for parity RAID types, as it
is meant to minimize read-modify-write cycles. There is no RMW
problem with RAID0, RAID1, or combinations of the two. There is,
however, a case for 'sunit'/'swidth' with single flash-based
SSDs, as they have an RMW-like issue with erase blocks. In other
cases their benefit is rather questionable.
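For illustration, on a parity array where alignment does matter
(say a hypothetical 6-drive RAID6 with a 64KiB chunk, i.e. 4
data-bearing drives; the device name is made up), the geometry
would be passed to XFS along these lines:

  $ mkfs.xfs -d su=64k,sw=4 /dev/md0

Here 'su' is the stripe unit (the per-drive chunk) and 'sw' the
number of data-bearing drives, so su*sw is the full stripe that
XFS will try to align allocations to.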

> One would use a linear concatenation and drive parallelism
> with XFS allocation groups, i.e. for a 24-drive chassis you'd
> set up an mdraid or LVM linear array of 12 RAID1 pairs and
> format with something like: $ mkfs.xfs -d agcount=24 [device]
> As long as one's workload writes files relatively evenly
> across 24 or more directories, one gets fantastic
> concurrency/parallelism, in this case 24 concurrent
> transactions, 2 to each mirror pair.

That to me sounds a bit too fragile; RAID0 is almost always
preferable to "concat", even with AG multiplication, and I would
avoid LVM more than I would avoid MD.
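As a rough sketch of the alternative (all device names made up),
one can let MD stripe across explicitly defined RAID1 pairs by
building the pairs first and putting a RAID0 on top:

  $ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  $ mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
  ... (repeat for the remaining pairs) ...
  $ mdadm --create /dev/md12 --level=0 --raid-devices=12 /dev/md0 /dev/md1 ... /dev/md11

A single 'mdadm --create --level=10' over all 24 drives gives a
similar striped-mirror layout in one step.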

