Poor LVM performance.

Hi,

I have been doing some testing with KVM and Virtuozzo (container-based virtualisation) against various storage devices, and I have some results I would like help analyzing. I have a nice big ZFS box from Oracle (yes, evil, but Solaris NFS is amazing), connected to my cluster over 10GbE and InfiniBand. The cluster is four HP servers (E5-2670, 144GB RAM), each with a local RAID10 of 600k SAS drives.
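For reference, the KVM guests attach the logical volumes as raw block devices. The libvirt disk stanza looks roughly like this (the VG/LV names are placeholders, and I am not certain my cache/io settings are the right ones for LVM-backed guests):

```xml
<disk type='block' device='disk'>
  <!-- cache='none' and io='native' are the settings most often
       recommended for raw block-device storage; other values
       (e.g. cache='writethrough') can hurt write performance -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- placeholder volume group / logical volume names -->
  <source dev='/dev/vg0/guest1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```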

Please open these pictures side by side.

https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-12-04%20at%202.50.33%20PM.png
https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-12-04%20at%203.18.03%20PM.png

You will notice that KVM/LVM on the local RAID10 completely destroys performance, whereas the container-based virtualisation is excellent and as fast as the NFS.

The 4, 8, 12, 16... VMs axis is the aggregate performance of the benchmark across that number of VMs: 4 = 1 VM on each node, 8 = 2 VMs on each node, and so on. "TPCC warehouses" is the number of TPC-C warehouses the benchmark used; 1 warehouse is about 150MB, so 10 warehouses means roughly 1.5GB of data held in the InnoDB buffer pool.
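As a quick sanity check on working-set sizes (using the ~150MB-per-warehouse figure above; the 4-VMs-per-node case corresponds to the 16-VM data point):

```shell
#!/bin/sh
# Rough TPC-C dataset sizing, assuming ~150MB per warehouse (see above).
MB_PER_WAREHOUSE=150
WAREHOUSES=10
VMS_PER_NODE=4    # the 16-VM case: 4 VMs on each of 4 nodes

per_vm=$((WAREHOUSES * MB_PER_WAREHOUSE))
per_node=$((per_vm * VMS_PER_NODE))

echo "dataset per VM:   ${per_vm} MB"    # 1500 MB, i.e. ~1.5GB
echo "dataset per node: ${per_node} MB"  # 6000 MB, far below 144GB RAM
```

So even at 16 VMs the aggregate dataset per node is only a few GB, which should fit comfortably in cache on these machines.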

Why does LVM performance suffer so badly compared to a single-filesystem approach? What am I doing wrong?

Thanks,

Andrew
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

