Re: LVM on raid10,f2 performance issues

Holger Mauermann wrote:
Keld Jørn Simonsen wrote:
How is it if you use the raid10,f2 without lvm?
What are the numbers?

After a fresh installation LVM performance is now somewhat better. I
don't know what was wrong before. However, it is still not as fast as
the raid10...

dd on raw devices
-----------------

raid10,f2:
  read : 409 MB/s
  write: 212 MB/s

raid10,f2 + lvm:
  read : 249 MB/s
  write: 158 MB/s


sda:  sdb:  sdc:  sdd:
----------------------
YYYY  ....  ....  XXXX
....  ....  ....  ....
XXXX  YYYY  ....  ....
....  ....  ....  ....



Regarding the layout from your first mail - this is how it's supposed to be. LVM's header took 3*64KB (you can control that with --metadatasize, and verify with e.g. pvs -o+pe_start), and then the first 4MB extent (controlled with --physicalextentsize) of the first logical volume started on sdd and continued on sda. The mirrored data was placed "far" from that and shifted one disk to the right - exactly as expected from raid10,f2.
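
If you ever want the data to start at a rounder boundary, you can pad the header at pvcreate time and verify the result - just a sketch, with /dev/md0 standing in for your array:

#pvcreate --metadatasize 1m /dev/md0
#pvs -o +pe_start /dev/md0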

As for performance, hmmm. Overall, there are a few things to consider when doing lvm on top of the raid:

- stripe vs. extent alignment
- stride vs. stripe vs. extent size
- filesystem's awareness that there's also a raid layer below
- lvm's readahead (iirc, only the uppermost layer's setting matters - functioning as a hint for the filesystem)

But this is particularly important for raids with parity. Here everything is aligned already, and there is no parity involved.
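
You can sanity-check that alignment if you like: compare the chunk size reported by mdadm against the pe_start from pvs - the latter should be a multiple of the former. A sketch, device names are placeholders:

#mdadm --detail /dev/md... | grep Chunk
#pvs -o +pe_start /dev/md...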

But the last point can be relevant - and you did test with a filesystem after all. Try setting the readahead with blockdev or lvchange (the latter will be permanent across lv activations). E.g.

#lvchange -r 2048 /dev/mapper...

and compare to raw raid10:

#blockdev --setra 2048 /dev/md...
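
You can check what each stack currently uses with the matching getter - a quick sanity check, device names as above:

#blockdev --getra /dev/md...
#blockdev --getra /dev/mapper/...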

If you did your tests with ext2/3, also try creating the filesystem with the -E stride=n,stripe-width=m options in both cases. Similarly with sunit/swidth if you used xfs.
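
For example, with a 64KB chunk, 4KB filesystem blocks and all 4 disks counted as data disks (far layout stripes reads over all of them, though counting only 2 is also defensible), stride = 64/4 = 16 and stripe-width = 16*4 = 64. Just a sketch - plug in your actual chunk size:

#mkfs.ext3 -E stride=16,stripe-width=64 /dev/mapper/...
#mkfs.xfs -d su=64k,sw=4 /dev/md...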

You might also create the volume group with a larger extent - such as 512MB (4MB granularity is often overkill). Performance-wise it shouldn't matter in this case though.
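
E.g. (vgname is a placeholder):

#vgcreate -s 512m vgname /dev/md...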

