Re: win7 bad i/o performance, high insn_emulation and exits


On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote:
> On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
> > Hi,
> > 
> > I came a across an issue with a Windows 7 (32-bit) as well as with a
> > Windows 2008 R2 (64-bit) guest.
> > 
> > If I transfer a file from the VM via CIFS or FTP to a remote machine,
> > I get very poor read performance (around 13MB/s). The VM peaks at 100%
> > CPU and I see a lot of insn_emulation and all kinds of exits in kvm_stat:
> > 
> > efer_reload                    0         0
> > exits                    2260976     79620
> > fpu_reload                  6197        11
> > halt_exits                114734      5011
> > halt_wakeup               111195      4876
> > host_state_reload        1499659     60962
> > hypercalls                     0         0
> > insn_emulation           1577325     58488
> > insn_emulation_fail            0         0
> > invlpg                         0         0
> > io_exits                  943949     40249
> Hmm, too many of those.
> 
> > irq_exits                 108679      5434
> > irq_injections            236545     10788
> > irq_window                  7606       246
> > largepages                   672         5
> > mmio_exits                460020     16082
> > mmu_cache_miss               119         0
> > mmu_flooded                    0         0
> > mmu_pde_zapped                 0         0
> > mmu_pte_updated                0         0
> > mmu_pte_write              13474         9
> > mmu_recycled                   0         0
> > mmu_shadow_zapped            141         0
> > mmu_unsync                     0         0
> > nmi_injections                 0         0
> > nmi_window                     0         0
> > pf_fixed                   22803        35
> > pf_guest                       0         0
> > remote_tlb_flush             239         2
> > request_irq                    0         0
> > signal_exits                   0         0
> > tlb_flush                  20933         0
> > 
> > If I run the same VM with an Ubuntu 10.04.4 guest, I get around 60MB/s
> > throughput. The kvm_stat numbers look a lot more sane.
> > 
> > efer_reload                    0         0
> > exits                    6132004     17931
> > fpu_reload                 19863         3
> > halt_exits                264961      3083
> > halt_wakeup               236468      2959
> > host_state_reload        1104468      3104
> > hypercalls                     0         0
> > insn_emulation           1417443      7518
> > insn_emulation_fail            0         0
> > invlpg                         0         0
> > io_exits                  869380      2795
> > irq_exits                 253501      2362
> > irq_injections            616967      6804
> > irq_window                201186      2161
> > largepages                  1019         0
> > mmio_exits                205268         0
> > mmu_cache_miss               192         0
> > mmu_flooded                    0         0
> > mmu_pde_zapped                 0         0
> > mmu_pte_updated                0         0
> > mmu_pte_write            7440546         0
> > mmu_recycled                   0         0
> > mmu_shadow_zapped            259         0
> > mmu_unsync                     0         0
> > nmi_injections                 0         0
> > nmi_window                     0         0
> > pf_fixed                   38529        30
> > pf_guest                       0         0
> > remote_tlb_flush             761         1
> > request_irq                    0         0
> > signal_exits                   0         0
> > tlb_flush                      0         0
> > 
> > I use virtio-net (with vhost-net) and virtio-blk. I tried disabling
> > hpet (which basically eliminated the mmio_exits but did not increase
> > performance) and also commit 39a7a362e16bb27e98738d63f24d1ab5811e26a8
> > - no improvement.
> > 
> > My commandline:
> > /usr/bin/qemu-kvm-1.0 -netdev
> > type=tap,id=guest8,script=no,downscript=no,ifname=tap0,vhost=on
> > -device virtio-net-pci,netdev=guest8,mac=52:54:00:ff:00:d3 -drive format=host_device,file=/dev/mapper/iqn.2001-05.com.equallogic:0-8a0906-eeef4e007-a8a9f3818674f2fc-lieven-windows7-vc-r80788,if=virtio,cache=none,aio=native
> > -m 2048 -smp 2 -monitor tcp:0:4001,server,nowait -vnc :1 -name
> > lieven-win7-vc -boot order=dc,menu=off -k de -pidfile
> > /var/run/qemu/vm-187.pid -mem-path /hugepages -mem-prealloc -cpu
> > host -rtc base=localtime -vga std -usb -usbdevice tablet -no-hpet
> > 
> > What further information is needed to debug this?
> > 
> Which kernel version (looks like something recent)?
> Which host CPU (looks like something old)?
Also send the output of cat /proc/cpuinfo.

> Which Windows' virtio drivers are you using?
> 
> Take a trace as described at http://www.linux-kvm.org/page/Tracing
> (with -no-hpet, please).
> 
And also the "info pci" output from the qemu monitor, while we are at it.
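As a rough cross-check of the kvm_stat numbers quoted above, a short Python sketch (figures copied from the per-second columns in this thread; the `share` helper is just for illustration) shows that port I/O plus MMIO exits, and likewise instruction emulation, dominate the Windows guest's exit rate far more than the Ubuntu guest's:

```python
# Per-second kvm_stat figures quoted earlier in this thread.
win7 = {"exits": 79620, "io_exits": 40249, "mmio_exits": 16082,
        "insn_emulation": 58488}
ubuntu = {"exits": 17931, "io_exits": 2795, "mmio_exits": 0,
          "insn_emulation": 7518}

def share(stats, *keys):
    """Fraction of total exits accounted for by the given counters."""
    return sum(stats[k] for k in keys) / stats["exits"]

print(f"win7   io+mmio: {share(win7, 'io_exits', 'mmio_exits'):.0%}")    # 71%
print(f"ubuntu io+mmio: {share(ubuntu, 'io_exits', 'mmio_exits'):.0%}")  # 16%
print(f"win7   insn_emulation: {share(win7, 'insn_emulation'):.0%}")     # 73%
print(f"ubuntu insn_emulation: {share(ubuntu, 'insn_emulation'):.0%}")   # 42%
```

That is consistent with the remark above that there are too many io_exits: most of the Windows guest's CPU time is being burned on emulated device access rather than useful work.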

> Try to use -cpu host,+x2apic. It may help Linux guest performance.
> 
> --
> 			Gleb.

--
			Gleb.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

