RE: VM outperforming host

I have experienced a similar problem caused by Hyper-Threading (HT); you can try disabling HT in the BIOS settings.
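If you want to test this without rebooting into the BIOS, HT can also be checked and switched off on the running host. A rough sketch; the CPU numbers below are only an example and depend on your topology:

# Sibling lists with two entries (e.g. "0,16") mean HT is active
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

# Take the HT sibling offline (repeat for each sibling thread)
echo 0 > /sys/devices/system/cpu/cpu16/online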

-----Original Message-----
From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx] On Behalf Of John Paul Walters
Sent: Monday, December 30, 2013 11:59 AM
To: kvm@xxxxxxxxxxxxxxx
Subject: VM outperforming host

Hi,

I’ve been benchmarking several GPU-enabled applications on both physical hardware and within KVM.  To my surprise, I’ve found a small subset of benchmarks that outperform the host system by as much as 15% in some cases, and I’m hoping someone can offer insight into the cause, or at least where to start looking. The system in question:

Host:
* Arch Linux with 3.12 kernel
* qemu 1.7
* 2 Xeon E5-2670 (total of 16 cores), 48 GB RAM (split evenly over 2 NUMA nodes)
* 1 NVIDIA K20m GPU, with gigabit ethernet networking
* 10GbE and InfiniBand adapters, but neither is in use

Guest:
* CentOS 6.4 with 2.6.32-358.23.2 kernel
* 20 GB RAM and 8 physical cores from NUMA node 0
* default networking
* K20m GPU using PCIe passthrough

The qemu command line:
qemu-system-x86_64 -enable-kvm -M q35 -m 20576 -cpu host \
  -smp 8,sockets=1,cores=8,threads=1 \
  -device ahci,bus=pcie.0,id=ahci \
  -bios /usr/share/qemu/bios.bin \
  -drive file=/root/centos_6.4/centos_flat.img,id=disk,format=raw \
  -device ide-hd,bus=ahci.0,drive=disk \
  -vnc 0.0.0.0:1 -redir tcp:52109::22 \
  -device pci-assign,host=08:00.0
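For completeness, the host-side preparation that legacy pci-assign needs: the GPU has to be detached from any host driver and handed to pci-stub before qemu starts. Roughly the following; the vendor:device pair below is a placeholder, take the real IDs from lspci:

# Find the GPU's vendor:device ID
lspci -n -s 08:00.0

modprobe pci-stub
echo "10de 1028" > /sys/bus/pci/drivers/pci-stub/new_id    # example IDs
echo 0000:08:00.0 > /sys/bus/pci/devices/0000:08:00.0/driver/unbind
echo 0000:08:00.0 > /sys/bus/pci/drivers/pci-stub/bind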

I’ve ensured that the VM runs entirely within a single NUMA node by creating a cpuset with the appropriate physical cores and memory node, and I’ve done the same for the host system tests.  I’ve also loaded the host system with CentOS 6.4 and rerun the same experiments, in case the issue was related to the host kernel or to Arch Linux.  It wasn’t.
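Concretely, the cpuset setup looks roughly like this (a sketch assuming a legacy cpuset mount; on a cgroup mount the files are named cpuset.cpus and cpuset.mems, and the core/node numbers depend on your topology):

mkdir -p /dev/cpuset
mount -t cpuset cpuset /dev/cpuset      # if not already mounted
mkdir /dev/cpuset/vm0
echo 0-7 > /dev/cpuset/vm0/cpus         # physical cores on node 0
echo 0 > /dev/cpuset/vm0/mems           # memory node 0 only
echo $QEMU_PID > /dev/cpuset/vm0/tasks  # $QEMU_PID: pid of the qemu process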

So far, I’ve tried disabling unused PCIe devices on the host, hoping that doing so would speed up the host-side experiments; it didn’t.  I’ve disabled transparent huge pages after noticing that the VM’s memory appears to be backed by them; this reduced the guest’s performance slightly, but came nowhere near canceling out the VM’s gains.  I’ve experimented with several combinations of NUMA-related scheduler options, with virtually no effect.  Drivers and libraries are identical between host and guest.

Does anyone have suggestions for tracking down either where I’m losing performance on the host, or gaining performance in the VM?
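For anyone who wants to reproduce the THP and NUMA experiments, the knobs involved are roughly these (a sketch; paths are the standard sysfs/procfs ones, and the awk sum is just a quick way to total the guest's THP backing):

# Disable transparent huge pages (and THP defrag)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Check how much of the qemu process is THP-backed
grep AnonHugePages /proc/$(pidof qemu-system-x86_64)/smaps | awk '{s+=$2} END {print s " kB"}'

# Toggle automatic NUMA balancing, on kernels that support it
echo 0 > /proc/sys/kernel/numa_balancing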

thanks for any help or suggestions,
JP

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



