On 12/03/2009 03:46 PM, Andrew Theurer wrote:
> I am running a Windows workload which has 26 Windows VMs running many
> instances of a J2EE workload. There are 13 pairs of an application
> server VM and a database server VM. There seem to be quite a lot of
> vm_exits, and it looks like over a third of them (123554 of 337139,
> ~37%) are mmio_exits:
> efer_reload              0
> exits               337139
> fpu_reload          247321
> halt_exits           19092
> halt_wakeup          18611
> host_state_reload   247332
> hypercalls               0
> insn_emulation      184265
> invlpg                   0
> io_exits             69184
> irq_exits            52953
> irq_injections       48115
> irq_window            2411
> largepages              19
> mmio_exits          123554
> I collected a kvmtrace, and below is a very small portion of that. Is
> there a way I can figure out what device the mmio's are for?
We want 'info physical_address_space' in the monitor.
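Meanwhile, the fixed PC platform windows already tell the story for this
trace: the hpet block decodes at 0xfed00000 (offset 0xf0 is the main
counter register) and the local apic page at 0xfee00000 (offset 0xb0 is
EOI). A quick user-space sketch (not kvm or qemu code) that classifies
the gpas seen in the trace below:

#include <stdint.h>
#include <stdio.h>

/* Standard PC platform MMIO windows: ioapic at 0xfec00000, hpet at
 * 0xfed00000, local apic page at 0xfee00000. */
static const char *classify_gpa(uint64_t gpa)
{
        if (gpa >= 0xfed00000 && gpa < 0xfed00400)
                return (gpa & 0x3ff) == 0xf0 ? "hpet (main counter)" : "hpet";
        if (gpa >= 0xfee00000 && gpa < 0xfee01000)
                return (gpa & 0xfff) == 0xb0 ? "apic (EOI)" : "apic";
        if (gpa >= 0xfec00000 && gpa < 0xfec01000)
                return "ioapic";
        return "unknown";
}

int main(void)
{
        /* the two hot gpas from the trace */
        printf("0xfed000f0 -> %s\n", classify_gpa(0xfed000f0));
        printf("0xfee000b0 -> %s\n", classify_gpa(0xfee000b0));
        return 0;
}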
> Also, is it normal to have lots of ept_violations? This is a 2-socket
> Nehalem system with SMT on.
So long as pf_fixed is low, these are all mmio or apic accesses: an
ept_violation on a gpa backed by a memslot is fixed up in the page
tables (counted as pf_fixed), while one on an unbacked gpa is emulated
as mmio.
> qemu-system-x86-19673 [014] 213577.939624: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939627: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19673 [014] 213577.939629: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f214d
hpet
> qemu-system-x86-19673 [014] 213577.939631: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939633: kvm_exit: reason ept_violation rip 0xfffff8000160ef8e
> qemu-system-x86-19673 [014] 213577.939634: kvm_page_fault: address fed000f0 error_code 181
hpet - was this the same exit? If so, we ought to skip over the
emulated instruction instead of re-executing it.
> qemu-system-x86-19673 [014] 213577.939693: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939696: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
hpet
> qemu-system-x86-19332 [008] 213577.939699: kvm_exit: reason ept_violation rip 0xfffff80001b3af8e
> qemu-system-x86-19332 [008] 213577.939700: kvm_page_fault: address fed000f0 error_code 181
> qemu-system-x86-19673 [014] 213577.939702: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0xfb8f3da6
hpet
> qemu-system-x86-19332 [008] 213577.939706: kvm_mmio: mmio unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
> qemu-system-x86-19563 [010] 213577.939707: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19332 [008] 213577.939713: kvm_mmio: mmio read len 4 gpa 0xfed000f0 val 0x29a105de
hpet ...
> qemu-system-x86-19673 [014] 213577.939908: kvm_ioapic_set_irq: pin 11 dst 1 vec=130 (LowPrio|logical|level)
> qemu-system-x86-19673 [014] 213577.939910: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.939912: kvm_exit: reason apic_access rip 0xfffff800016a050c
> qemu-system-x86-19673 [014] 213577.939914: kvm_mmio: mmio write len 4 gpa 0xfee000b0 val 0x0
apic eoi
> qemu-system-x86-19332 [008] 213577.939958: kvm_mmio: mmio write len 4 gpa 0xfee000b0 val 0x0
> qemu-system-x86-19673 [014] 213577.939958: kvm_pic_set_irq: chip 1 pin 3 (level|masked)
> qemu-system-x86-19332 [008] 213577.939958: kvm_apic: apic_write APIC_EOI = 0x0
apic eoi
> qemu-system-x86-19673 [014] 213577.940010: kvm_exit: reason cr_access rip 0xfffff800016ee2b2
> qemu-system-x86-19673 [014] 213577.940011: kvm_cr: cr_write 4 = 0x678
> qemu-system-x86-19673 [014] 213577.940017: kvm_entry: vcpu 0
> qemu-system-x86-19673 [014] 213577.940019: kvm_exit: reason cr_access rip 0xfffff800016ee2b5
> qemu-system-x86-19673 [014] 213577.940019: kvm_cr: cr_write 4 = 0x6f8
The guest is toggling CR4.PGE (global pages; 0x678 ^ 0x6f8 == 0x80) to
flush global TLB entries; we can avoid those exits with
CR4_GUEST_HOST_MASK.
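The idea, sketched along the lines of vmx.c (untested, and
cr4_guest_owned_bits is an assumed field): a CR4 bit that is clear in
CR4_GUEST_HOST_MASK is owned by the guest, so guest writes to it no
longer exit. With ept we don't shadow the guest page tables, so PGE
flips don't need trapping:

/* Sketch only: let the guest own CR4.PGE (bit 7, 0x80) when ept is
 * enabled, so the 0x678 -> 0x6f8 -> 0x678 toggling above runs without
 * a vmexit. */
static void vmx_setup_cr4_guest_owned_bits(struct vcpu_vmx *vmx)
{
        vmx->vcpu.arch.cr4_guest_owned_bits = 0;
        if (enable_ept)
                vmx->vcpu.arch.cr4_guest_owned_bits |= X86_CR4_PGE;
        /* bits clear in the mask are read from the guest's cr4 directly */
        vmcs_writel(CR4_GUEST_HOST_MASK,
                    ~vmx->vcpu.arch.cr4_guest_owned_bits);
}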
So, tons of hpet and eois. We can accelerate both with the Hyper-V
enlightenments; we already have some (unmerged) code for eoi, so this
should improve soon.
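Roughly, the eoi enlightenment works like this (a sketch only; the
unmerged patch may differ): the Hyper-V spec defines HV_X64_MSR_EOI
(0x40000070), and a guest that sees the matching cpuid bit issues eoi
as a wrmsr instead of a write to the xAPIC page, so the exit needs no
instruction emulation:

/* Sketch of the wrmsr side; kvm_lapic_set_eoi() stands in for the
 * in-kernel apic's eoi handler. */
static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
{
        switch (msr) {
        case HV_X64_MSR_EOI:            /* 0x40000070 */
                kvm_lapic_set_eoi(vcpu);
                return 0;
        default:
                return 1;               /* not a hyper-V msr we handle */
        }
}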
> Here is oprofile:
> 4117817 62.2029  kvm-intel.ko        vmx_vcpu_run
> 338198   5.1087  qemu-system-x86_64  /usr/local/qemu/48bb360cc687b89b74dfb1cac0f6e8812b64841c/bin/qemu-system-x86_64
> 62449    0.9433  kvm.ko              kvm_arch_vcpu_ioctl_run
> 56512    0.8537  vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1  copy_user_generic_string
We ought to switch to put_user/get_user; rep movs has quite a slow
start-up.
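For the small, fixed-size guest accesses kvm makes, the difference
looks like the fragment below (illustrative only, not kvm code; hva
stands for a hypothetical user-space pointer into guest memory):

        u64 val;

        /* copy_from_user() on x86-64 is copy_user_generic_string(),
         * i.e. rep movs - the string op's startup cost dominates for
         * an 8-byte copy */
        if (copy_from_user(&val, hva, sizeof(val)))
                return -EFAULT;

        /* __get_user() compiles to a single mov plus an exception
         * table entry */
        if (__get_user(val, (u64 __user *)hva))
                return -EFAULT;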
> 52373    0.7911  vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1  native_write_msr_safe
hpet in kernel or hyper-V timers will reduce this.
> 34847    0.5264  vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1  schedule
> 34678    0.5238  vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1  fget_light
and this.
> 29894    0.4516  kvm.ko           paging64_walk_addr
> 27778    0.4196  kvm.ko           gfn_to_hva
> 24563    0.3710  kvm.ko           x86_decode_insn
> 23900    0.3610  vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1  do_select
> 21123    0.3191  libc-2.10.90.so  memcpy
> 20694    0.3126  kvm.ko           x86_emulate_insn
Hyper-V APIC and timers will reduce all of the above (except memcpy).
--
error compiling committee.c: too many arguments to function