Re: slow guest performance with build load, looking for ideas

On 07/02/2009 12:41 AM, Erik Jacobson wrote:
I wanted to post the latest test run to the thread.

Avi Kivity provided some ideas to try.  I had mixed luck.  I'd like to try
this again if we have any thoughts on the vpid/ept issue, or any other
ideas for drilling down on this.  Avi Kivity mentioned LVM in the thread.
I continued to just export the whole /dev/sdb to the guest. I'm happy to
try LVM in some form if you think it would help.

Exporting an entire drive is even better than LVM (in terms of performance; flexibility obviously suffers). Just make sure to use cache=none (which I see in your command line below).
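For reference, the cache mode is set per -drive; an illustrative fragment (the surrounding options are elided):

```shell
# Illustrative only: pass the whole disk through with host caching
# disabled, so guest writes aren't double-buffered in the host page cache.
qemu-kvm ... \
    -drive file=/dev/sdb,if=virtio,index=1,cache=none \
    ...
```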

  * I could NOT find vpid and ept parameters on the host.  They weren't here:
    /sys/module/kvm_intel/parameters
    nor here
    /sys/module/kvm/parameters
    So the check for those parameters resulted in no information.
    Didn't see them elsewhere either:
    # pwd
    /sys
    # find . -name vpid -print
    # find . -name ept -print


Apparently the parameters were only exposed in 2.6.30; previously they were only available at modprobe time. Since you're using Nehalem, let's assume they're set correctly (since that's the default).
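As a sketch of how one might verify this (the paths match the ones you searched; the helper function itself is illustrative):

```shell
# check_kvm_flag: print a kvm_intel flag value if the kernel exposes it
# under /sys/module (2.6.30+), else note that it isn't exposed.
check_kvm_flag() {
    dir=$1; flag=$2
    if [ -r "$dir/$flag" ]; then
        cat "$dir/$flag"
    else
        echo "$flag not exposed"
    fi
}

check_kvm_flag /sys/module/kvm_intel/parameters ept
check_kvm_flag /sys/module/kvm_intel/parameters vpid

# On pre-2.6.30 kernels the flags are modprobe-time only:
#   modinfo kvm_intel | grep -E 'parm:.*(ept|vpid)'
#   modprobe kvm_intel ept=1 vpid=1
```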


I had done some setup work for the test, including a build I didn't count.

GUEST time (make -j12 && make -j12 modules), work-area disk, no cache param
---------------------------------------------------------------------------
kvm_stat output BEFORE running this test:

kvm statistics

  efer_reload                 13       0
  exits                 27145076    1142
  fpu_reload             1298729       0
  halt_exits             2152011     189
  halt_wakeup             494689     123
  host_state_reload      4998646     837
  hypercalls                   0       0
  insn_emulation        10165593     302
  insn_emulation_fail          0       0
  invlpg                       0       0
  io_exits               2096834     643
  irq_exits              6469071       8
  irq_injections         4765189     190
  irq_window              279385       0
  largepages                   0       0
  mmio_exits                   0       0
  mmu_cache_miss           18670       0
  mmu_flooded                  0       0
  mmu_pde_zapped               0       0
  mmu_pte_updated              0       0
  mmu_pte_write            10440       0
  mmu_recycled                 0       0

Nice and quiet.

qemu-kvm command:
/usr/bin/qemu-kvm -M pc -m 4096 -smp 8 -name f11-test -uuid b7b4b7e4-9c07-22aa-0c95-d5c8a24176c5 -monitor pty -pidfile /var/run/libvirt/qemu//f11-test.pid -drive file=/var/lib/libvirt/images/f11-test.img,if=virtio,index=0,boot=on -drive file=/dev/sdb,if=virtio,index=1 -net nic,macaddr=54:52:00:46:48:0e,model=virtio -net user -serial pty -parallel none -usb -usbdevice tablet -vnc cct201:1 -soundhw es1370 -redir tcp:5555::22

-usbdevice tablet is known to cause large interrupt loads. I suggest dropping it. If it helps your vnc session, drop your vnc client and use vinagre instead.

test run timing:
real	12m36.165s
user	27m28.976s
sys	8m32.245s

12 minutes real vs. ~36 CPU minutes -> scaling only about 3:1 on -smp 8.


kvm_stat output after this test run
kvm statistics

  efer_reload                 13       0
  exits                 47097981    2003
  fpu_reload             2168308       0
  halt_exits             3378761     301
  halt_wakeup             707171     241
  host_state_reload      7545990    1538
  hypercalls                   0       0
  insn_emulation        17809066     462
  insn_emulation_fail          0       0
  invlpg                       0       0
  io_exits               2801221    1232
  irq_exits             11959063       7
  irq_injections         8395980     304
  irq_window              531641       3
  largepages                   0       0
  mmio_exits                   0       0
  mmu_cache_miss           28419       0
  mmu_flooded                  0       0
  mmu_pde_zapped               0       0
  mmu_pte_updated              0       0
  mmu_pte_write            10440       0
  mmu_recycled              7193       0


Nice and quiet too, but what's needed is kvm_stat (or kvm_stat -1) output during the run. Many of the 47M exits are unaccounted for; there's a gap in the stats-gathering code.

vmstat 1 on host and guest during the run would also help.
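To make the before/after numbers easier to read, one could also snapshot kvm_stat at both ends of the run and diff the counters; a minimal sketch (file names are illustrative):

```shell
# diff_counts: per-counter deltas between two kvm_stat dumps,
# assuming lines of the form "name total rate" as in the tables above.
diff_counts() {
    awk 'NR==FNR { if ($2 ~ /^[0-9]+$/) before[$1] = $2; next }
         ($1 in before) && $2 ~ /^[0-9]+$/ {
             printf "%-22s %12d\n", $1, $2 - before[$1]
         }' "$1" "$2"
}

# Usage sketch:
#   kvm_stat -1 > before.txt    # just before the build
#   kvm_stat -1 > after.txt     # right after it finishes
#   diff_counts before.txt after.txt
```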

HOST time (make -j12 && make -j12 modules) with no guest running
----------------------------------------------------------------
real	6m50.936s
user	29m12.051s
sys	5m50.867s


~35 CPU minutes in ~7 minutes of real time, so scaling about 5:1. User time is almost the same; system time differs, but not by enough to account for the large difference in run time.
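For reference, the scaling figure is just (user + sys) / real; a quick sketch with the times above rounded to whole seconds:

```shell
# scaling: CPU-time-to-wall-time ratio from time(1) output, in seconds.
scaling() {
    awk -v real="$1" -v user="$2" -v sys="$3" \
        'BEGIN { printf "%.1f\n", (user + sys) / real }'
}

scaling 756 1649 512    # guest run: 12m36s real, 27m29s user, 8m32s sys -> 2.9
scaling 411 1752 351    # host run:   6m51s real, 29m12s user, 5m51s sys -> 5.1
```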

I'm due to get my own Nehalem next week; I'll try to reproduce your results here.

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
