On Fri, Apr 23, 2010 at 06:26:20PM +0400, Michael Tokarev wrote:
> This is most likely due to the read-modify-write cycle which is
> present on lvm-on-raid[456] if the number of data drives is not a
> power of two. LVM requires the block size to be a power of two, so
> if you can't fit a whole number of LVM blocks into one RAID stripe,
> your write speed is expected to be ~3 times worse...
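(For illustration, a minimal sketch of the arithmetic behind this claim; the 64 KiB chunk size, three data disks, and 4 MiB extent size are assumed example values, not figures from the thread:)

CHUNK = 64 * 1024              # RAID chunk size per data disk (assumed)
DATA_DISKS = 3                 # e.g. a 4-drive RAID5 has 3 data disks
STRIPE = CHUNK * DATA_DISKS    # full stripe = 192 KiB, not a power of two

PE = 4 * 1024 * 1024           # LVM physical extent, always a power of two

# A power of two is never a multiple of 3 * 2^k, so extent boundaries
# drift relative to stripe boundaries, and anything laid out on them
# inherits the misalignment: writes straddle stripes and force a
# read-modify-write of the parity.
print(PE % STRIPE)             # 65536 -> a 64 KiB remainder, misaligned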
uh? PE size != block size. The PE size is not used for I/O; it is only used for laying out data. It will influence data alignment, but I believe the issue can be bypassed if we make PE size == chunk_size and do all creation/extension of LVs in multiples of data_disks; the resulting device-mapper tables should then be aligned (see the sketch below).

L.

-- 
Luca Berra -- bluca@xxxxxxxxxx
Communication Media & Services S.r.l.
 /"\
 \ /   ASCII RIBBON CAMPAIGN
  X     AGAINST HTML MAIL
 / \
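(A rough sketch of the alignment check behind the proposal above; the chunk size, disk count, and extent counts are assumptions for illustration, and the PV metadata offset, which must also be stripe-aligned, is ignored:)

CHUNK = 64 * 1024            # assumed RAID chunk size
DATA_DISKS = 3
STRIPE = CHUNK * DATA_DISKS  # 192 KiB full stripe

PE = CHUNK                   # PE size == chunk_size, still a power of two

def lv_is_stripe_aligned(start_pe, n_pes):
    # An LV segment is stripe-aligned if it starts on a stripe boundary
    # and covers a whole number of stripes.
    return (start_pe * PE) % STRIPE == 0 and (n_pes * PE) % STRIPE == 0

# Allocating every LV in multiples of DATA_DISKS extents keeps each
# segment, and the device-mapper table built from it, stripe-aligned.
print(lv_is_stripe_aligned(0, 300))   # True: 300 PEs = 100 whole stripes
print(lv_is_stripe_aligned(3, 300))   # True: starts on a stripe boundary
print(lv_is_stripe_aligned(1, 100))   # False: starts mid-stripe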