Re: Windows Server 2008R2 KVM guest performance issues

On 8/27/2013 11:09 AM, Paolo Bonzini wrote:
On 27/08/2013 16:44, Brian Rak wrote:
On 26/08/2013 21:15, Brian Rak wrote:
Samples: 62M of event 'cycles', Event count (approx.): 642019289177
   64.69%  [kernel]                    [k] _raw_spin_lock
    2.59%  qemu-system-x86_64          [.] 0x00000000001e688d
    1.90%  [kernel]                    [k] native_write_msr_safe
    0.84%  [kvm]                       [k] vcpu_enter_guest
    0.80%  [kernel]                    [k] __schedule
    0.77%  [kvm_intel]                 [k] vmx_vcpu_run
    0.68%  [kernel]                    [k] effective_load
    0.65%  [kernel]                    [k] update_cfs_shares
    0.62%  [kernel]                    [k] _raw_spin_lock_irq
    0.61%  [kernel]                    [k] native_read_msr_safe
    0.56%  [kernel]                    [k] enqueue_entity
Can you capture the call graphs, too (perf record -g)?
Sure.  I'm not entirely certain how to use perf effectively.  I've used
`perf record`, then manually expanded the call stacks in `perf report`.
If this isn't what you wanted, please let me know.
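
(For reference, a minimal way to capture call graphs like this; the PID and the
30-second duration are placeholders, not necessarily the exact invocation used here:

  perf record -g -p <pid of qemu-system-x86_64> -- sleep 30
  perf report

`perf report` then lets you expand the recorded call chains interactively.)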

https://gist.github.com/devicenull/7961f23e6756b647a86a/raw/a04718db2c26b31e50fb7f521d47d911610383d8/gistfile1.txt

This is actually quite useful!

-  41.41%  qemu-system-x86  [kernel.kallsyms]                                                                     0xffffffff815ef6d5 k [k] _raw_spin_lock
    - _raw_spin_lock
       - 48.06% futex_wait_setup
            futex_wait
            do_futex
            SyS_futex
            system_call_fastpath
          - __lll_lock_wait
               99.32% 0x10100000002
       - 44.71% futex_wake
            do_futex
            SyS_futex
            system_call_fastpath
          - __lll_unlock_wake
               99.33% 0x10100000002

This could be multiple VCPUs competing on QEMU's "big lock" because the pmtimer
is being read by different VCPUs at the same time.  This can be fixed, and
probably will be in 1.7 or 1.8.
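
(As an aside, for anyone wondering why lock contention shows up as
futex_wait/futex_wake in the profile above: glibc mutexes park waiting threads
in the kernel through the futex syscall, so many threads serializing on one
mutex produce exactly this signature.  A minimal, generic sketch using plain
pthreads, not QEMU code:

  /* contended.c: build with `gcc -pthread -o contended contended.c`.
   * Several threads fight over a single mutex, loosely analogous to
   * VCPU threads serializing on QEMU's global lock.  Profiling it with
   * `perf record -g ./contended` shows futex_wait/futex_wake near the
   * top of the report, much like the trace above. */
  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 8
  #define ITERS    1000000

  static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
  static unsigned long counter;

  static void *worker(void *arg)
  {
      int i;
      for (i = 0; i < ITERS; i++) {
          pthread_mutex_lock(&big_lock);   /* contended: futex_wait */
          counter++;                       /* stand-in for device emulation */
          pthread_mutex_unlock(&big_lock); /* wakes a waiter: futex_wake */
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t[NTHREADS];
      int i;
      for (i = 0; i < NTHREADS; i++)
          pthread_create(&t[i], NULL, worker, NULL);
      for (i = 0; i < NTHREADS; i++)
          pthread_join(t[i], NULL);
      printf("counter = %lu\n", counter);
      return 0;
  }
)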


I've successfully applied the patch set and have seen a significant performance increase. Kernel time no longer accounts for half of all CPU usage, and my insn_emulation counts are down to ~2,000/s rather than ~20,000/s.

I did end up having to patch qemu in a terrible way to get this working: I've just enabled the TSC optimizations whenever hv_vapic is enabled. This is far from the best way of doing it, but I'm not really a C developer and we'll always want the TSC optimizations on our Windows VMs. In case anyone wants to do the same, it's a pretty simple patch:

*** clean/qemu-1.6.0/target-i386/kvm.c  2013-08-15 15:56:23.000000000 -0400
--- qemu-1.6.0/target-i386/kvm.c        2013-08-27 11:08:21.388841555 -0400
*************** int kvm_arch_init_vcpu(CPUState *cs)
*** 477,482 ****
--- 477,484 ----
          if (hyperv_vapic_recommended()) {
              c->eax |= HV_X64_MSR_HYPERCALL_AVAILABLE;
              c->eax |= HV_X64_MSR_APIC_ACCESS_AVAILABLE;
+           c->eax |= HV_X64_MSR_TIME_REF_COUNT_AVAILABLE;
+           c->eax |= 0x200;
          }

          c = &cpuid_data.entries[cpuid_i++];
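
For anyone copying the patch: as far as I can tell, the magic 0x200 is bit 9 of
CPUID leaf 0x40000003 EAX, which the Hyper-V spec describes as the partition
reference TSC feature; newer Linux headers name it
HV_X64_MSR_REFERENCE_TSC_AVAILABLE.  If (and only if) your headers have that
define, the second added line can be spelled out instead of using the literal:

            c->eax |= HV_X64_MSR_TIME_REF_COUNT_AVAILABLE;
            c->eax |= HV_X64_MSR_REFERENCE_TSC_AVAILABLE; /* == 0x200 */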

It also seems that if the guest has useplatformclock set to yes, it will not use the enlightened TSC. `bcdedit /set useplatformclock no` followed by a reboot will correct that.
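
(To verify the setting from an elevated command prompt, something like this
should work; useplatformclock should read No, or simply not appear at all:

  bcdedit /enum {current}

and `bcdedit /deletevalue useplatformclock` removes the override entirely
instead of setting it to no.)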

Are there any guidelines for what I should be seeing from kvm_stat? This is pretty much average for me now (cumulative totals and current per-second rates):

 exits               1362839114  195453
 fpu_reload           199991016   34100
 halt_exits           187767718   33222
 halt_wakeup          198400078   35628
 host_state_reload    222907845   36212
 insn_emulation        22108942    2091
 io_exits              32094455    3132
 irq_exits             88852031   15855
 irq_injections       332358611   60694
 irq_window            61495812   12125

(all the other ones do not change frequently)

The only real way I know to judge this is by the performance of the guest. Are there any thresholds for these numbers that would indicate a problem?






