Oliver Martin wrote:
> Interesting. I'm seeing a 20% performance drop too, with default RAID
> and LVM chunk sizes of 64K and 4M, respectively. Since 64K divides 4M
> evenly, I'd think there shouldn't be such a big performance penalty.
I am no expert, but from what I have read, having compatible chunk sizes
(which is easy, and most often the case) is not enough. You also have to
stripe-align the LVM data area, so that every LVM chunk spans a whole number
of RAID stripes (not RAID chunks). Check the output of `dmsetup table`: the
last number is the offset (in 512-byte sectors) into the underlying block
device at which the LVM data portion starts. It must be divisible by the RAID
stripe length (which varies with the RAID level and the number of devices).
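For illustration, this is roughly what a stripe-aligned volume looks like
(the LV name, md device and numbers below are made up, not from a real
system, so adjust for your own setup):

  # Hypothetical 4-disk RAID5 with 64KiB chunks:
  # stripe length = 3 data disks * 64KiB = 192KiB = 384 sectors of 512 bytes
  $ dmsetup table vg0-lv0
  0 209715200 linear 9:0 384
  # The trailing 384 is where the LV data starts on the underlying md device;
  # 384 % 384 == 0, so this mapping is stripe aligned.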
Currently LVM does not offer an easy way to do such alignment; you have to do
it manually when running pvcreate. With the --metadatasize option you can
specify the size of the area between the LVM header (64KiB) and the start of
the data area. So you supply STRIPE_SIZE - 64 (in KiB) as the metadata
size[*], and the result is a stripe-aligned LVM physical volume.
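As a concrete (again hypothetical, device and geometry assumed) example, for
a 4-disk RAID5 with 64KiB chunks the stripe length is 3 * 64KiB = 192KiB, so
following the formula above:

  # stripe = 192KiB, header = 64KiB, so metadatasize = 192 - 64 = 128KiB
  $ pvcreate --metadatasize 128k /dev/md0
  # If your LVM version supports the pe_start field, check that the data
  # area really starts at 192KiB:
  $ pvs -o +pe_start /dev/md0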
This information is unverified; I just compiled it from various list threads
and the like. I did this to my own arrays/volumes and I get close to 100% of
the raw speed. If someone else can confirm the validity of this, it would be
great.
Peter
* The supplied number is always rounded up to be divisible by 64KiB, so the
smallest possible total LVM header is 128KiB.