Re: Windows Server 2008R2 KVM guest performance issues

On 27/08/2013 16:44, Brian Rak wrote:
>> On 26/08/2013 21:15, Brian Rak wrote:
>>> Samples: 62M of event 'cycles', Event count (approx.): 642019289177
>>>   64.69%  [kernel]                    [k] _raw_spin_lock
>>>    2.59%  qemu-system-x86_64          [.] 0x00000000001e688d
>>>    1.90%  [kernel]                    [k] native_write_msr_safe
>>>    0.84%  [kvm]                       [k] vcpu_enter_guest
>>>    0.80%  [kernel]                    [k] __schedule
>>>    0.77%  [kvm_intel]                 [k] vmx_vcpu_run
>>>    0.68%  [kernel]                    [k] effective_load
>>>    0.65%  [kernel]                    [k] update_cfs_shares
>>>    0.62%  [kernel]                    [k] _raw_spin_lock_irq
>>>    0.61%  [kernel]                    [k] native_read_msr_safe
>>>    0.56%  [kernel]                    [k] enqueue_entity
>> Can you capture the call graphs, too (perf record -g)?
> 
> Sure.  I'm not entirely certain how to use perf effectively.  I've used
> `perf record`, then manually expanded the call stacks in `perf report`.
> If this isn't what you wanted, please let me know.
> 
> https://gist.github.com/devicenull/7961f23e6756b647a86a/raw/a04718db2c26b31e50fb7f521d47d911610383d8/gistfile1.txt
> 

This is actually quite useful!

-  41.41%  qemu-system-x86  [kernel.kallsyms]                                                                     0xffffffff815ef6d5 k [k] _raw_spin_lock
   - _raw_spin_lock
      - 48.06% futex_wait_setup
           futex_wait
           do_futex
           SyS_futex
           system_call_fastpath
         - __lll_lock_wait
              99.32% 0x10100000002
      - 44.71% futex_wake
           do_futex
           SyS_futex
           system_call_fastpath
         - __lll_unlock_wake
              99.33% 0x10100000002

This could be multiple VCPUs competing on QEMU's "big lock" because the pmtimer
is being read by different VCPUs at the same time.  This can be fixed, and
probably will be in 1.7 or 1.8.
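The contention pattern can be sketched in miniature (plain Python with
hypothetical names; this is not QEMU source): every timer read is funneled
through one global lock, so adding vCPUs adds futex wait/wake traffic rather
than throughput, which is exactly the _raw_spin_lock / futex profile above.

```python
import threading
import time

big_lock = threading.Lock()      # stands in for QEMU's global mutex
reads_done = 0

def read_pmtimer():
    # A guest read of the ACPI PM timer exits to userspace and must
    # take the big lock before the device model runs.
    global reads_done
    with big_lock:
        _ = time.monotonic_ns()  # the timer read itself is cheap...
        reads_done += 1          # ...the serialization is the expensive part

def vcpu(n_reads):
    # Each "vCPU" thread hammers the timer; all of them serialize
    # on big_lock, so they mostly wait on each other.
    for _ in range(n_reads):
        read_pmtimer()

threads = [threading.Thread(target=vcpu, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(reads_done)
```

With 4 threads x 10000 reads this prints 40000: all reads complete correctly,
but strictly one at a time, no matter how many threads (vCPUs) are added.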

Thanks,

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



