Are there any EOI/APIC optimizations I can apply for a Windows
2008R2 guest? I'm seeing a significant amount of kernel CPU usage in
kvm_ioapic_update_eoi, and I can't find any information on further
optimizations for this.
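One direction I wondered about is APICv, which as I understand it moves
EOI handling into hardware, but I believe the Sandy Bridge E5-2630s here
predate it. For anyone checking their own host, something like this
should show whether it's in use (assuming the kvm_intel module parameter
name):

# cat /sys/module/kvm_intel/parameters/enable_apicv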
Sample of trace output:
https://gist.github.com/devicenull/d1a918879d38955053dd/raw/3aed63b8e60e98c3e7fe21a42ca123d8bf309e0c/trace
Host setup:
3.10.9-1.el6.x86_64 #1 SMP Tue Aug 27 15:27:08 EDT 2013 x86_64 x86_64
x86_64 GNU/Linux, with this patchset applied:
http://www.spinics.net/lists/kvm/msg91214.html
CentOS 6
qemu 1.6.0 (also patched with the above enlightenment patchset)
2x Intel E5-2630 (virtualization extensions enabled; 24 cores total,
including hyperthread cores)
24GB memory
swap file is enabled, but unused
Guest setup:
Windows Server 2008R2 (64 bit)
24 vCPUs
20 GB memory
VirtIO disk drivers
SR-IOV for network (with Intel I350 network chipset)
/usr/libexec/qemu-kvm -name VMID109 -S \
    -machine pc-i440fx-1.6,accel=kvm,usb=off \
    -cpu host,hv_relaxed,hv_vapic,hv_spinlocks=0x1000 \
    -m 20480 \
    -smp 24,sockets=1,cores=12,threads=2 \
    -uuid 6a7517f5-3b1c-43c2-aa71-96b143356b3d \
    -no-user-config -nodefaults \
    -chardev socket,id=charmonitor,path=//var/lib/libvirt/qemu/VMID109.monitor,server,nowait \
    -mon chardev=charmonitor,id=monitor,mode=readline \
    -rtc base=utc,driftfix=slew \
    -no-hpet -boot c -usb \
    -drive file=/dev/vmimages/VMID109,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
    -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 \
    -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
    -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
    -vnc 127.0.0.1:109 -k en-us -vga cirrus \
    -device pci-assign,host=02:10.0,id=hostdev0,bus=pci.0,addr=0x3,rombar=1,romfile=/usr/share/gpxe/80861520.rom \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
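(For clarity on the -smp line: the guest sees 1 socket x 12 cores x 2
threads = 24 vCPUs, i.e. every logical CPU the host has, since 2x E5-2630
is 2 sockets x 6 cores x 2 threads = 24. So I assume any host-side
spinning shows up directly as time stolen from the vCPUs.)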
I removed the zero-count entries from the perf output below:
# perf stat -e 'kvm:*' -a sleep 1m
Performance counter stats for 'sleep 1m':
 9,707,680  kvm:kvm_entry            [100.00%]
     8,199  kvm:kvm_hv_hypercall     [100.00%]
   188,418  kvm:kvm_pio              [100.00%]
         6  kvm:kvm_cpuid            [100.00%]
 3,983,787  kvm:kvm_apic             [100.00%]
 9,715,744  kvm:kvm_exit             [100.00%]
 4,028,354  kvm:kvm_inj_virq         [100.00%]
 3,245,823  kvm:kvm_msr              [100.00%]
   185,573  kvm:kvm_pic_set_irq      [100.00%]
   741,665  kvm:kvm_apic_ipi         [100.00%]
 2,518,242  kvm:kvm_apic_accept_irq  [100.00%]
 2,506,003  kvm:kvm_eoi              [100.00%]
   125,532  kvm:kvm_emulate_insn     [100.00%]
   187,912  kvm:kvm_userspace_exit   [100.00%]
   309,091  kvm:kvm_set_irq          [100.00%]
   186,014  kvm:kvm_ioapic_set_irq   [100.00%]
   124,458  kvm:kvm_msi_set_irq      [100.00%]
 1,475,484  kvm:kvm_ack_irq          [100.00%]
 1,295,360  kvm:kvm_fpu              [100.00%]
60.001063613 seconds time elapsed
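To put those counts in perspective over the 60 s window, that works out
to roughly:

  9,715,744 exits / 60 s  ~= 162,000 VM exits per second
  2,506,003 EOIs  / 60 s  ~=  42,000 EOIs per second, or about 1,750 per
  vCPU per second across the 24 vCPUs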
perf top -G output:
- 25.65%  [kernel]  [k] _raw_spin_lock
   - _raw_spin_lock
      - 98.63% kvm_ioapic_update_eoi
           kvm_ioapic_send_eoi
           apic_set_eoi
           apic_reg_write
           kvm_hv_vapic_msr_write
           set_msr_hyperv
           kvm_set_msr_common
           vmx_set_msr
           handle_wrmsr
           vmx_handle_exit
           vcpu_enter_guest
           __vcpu_run
           kvm_arch_vcpu_ioctl_run
           kvm_vcpu_ioctl
           do_vfs_ioctl
           SyS_ioctl
           system_call_fastpath
      + __GI___ioctl
--
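Since 98.63% of that spinlock time funnels through kvm_ioapic_update_eoi,
it looks like every EOI from all 24 vCPUs is contending on the single
in-kernel IOAPIC lock. As a diagnostic (my own guess, not a fix), I could
re-run the same perf stat with hv_vapic dropped from the -cpu flags:

    -cpu host,hv_relaxed,hv_spinlocks=0x1000

to see whether this is tied to the Hyper-V APIC-assist MSR path
(kvm_hv_vapic_msr_write above) or to the IOAPIC lock itself; presumably a
plain APIC EOI write still ends up in kvm_ioapic_update_eoi, so I'd
expect the contention to remain either way.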