On 20.02.2012 19:40, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
Hi,
I came across an issue with a Windows 7 (32-bit) as well as a
Windows 2008 R2 (64-bit) guest.
If I transfer a file from the VM via CIFS or FTP to a remote machine,
I get very poor read performance (around 13 MB/s). The VM peaks at 100%
CPU and I see a lot of insn_emulations and all kinds of exits in kvm_stat:
efer_reload                0      0
exits                2260976  79620
fpu_reload              6197     11
halt_exits            114734   5011
halt_wakeup           111195   4876
host_state_reload    1499659  60962
hypercalls                 0      0
insn_emulation       1577325  58488
insn_emulation_fail        0      0
invlpg                     0      0
io_exits              943949  40249
Hmm, too many of those.
irq_exits             108679   5434
irq_injections        236545  10788
irq_window              7606    246
largepages               672      5
mmio_exits            460020  16082
mmu_cache_miss           119      0
mmu_flooded                0      0
mmu_pde_zapped             0      0
mmu_pte_updated            0      0
mmu_pte_write          13474      9
mmu_recycled               0      0
mmu_shadow_zapped        141      0
mmu_unsync                 0      0
nmi_injections             0      0
nmi_window                 0      0
pf_fixed               22803     35
pf_guest                   0      0
remote_tlb_flush         239      2
request_irq                0      0
signal_exits               0      0
tlb_flush              20933      0
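(For reference, these numbers are the per-VM counters that kvm_stat displays from debugfs. A quick way to capture a 10-second delta by hand, assuming debugfs is mounted at /sys/kernel/debug, which is where kvm_stat reads from:)

  # snapshot all KVM counters twice and diff to see which ones move
  cd /sys/kernel/debug/kvm
  grep . * > /tmp/kvm.before
  sleep 10
  grep . * > /tmp/kvm.after
  diff /tmp/kvm.before /tmp/kvm.after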
If I run the same VM with an Ubuntu 10.04.4 guest, I get around 60 MB/s
throughput. The kvm_stat output looks a lot more sane:
efer_reload                0      0
exits                6132004  17931
fpu_reload             19863      3
halt_exits            264961   3083
halt_wakeup           236468   2959
host_state_reload    1104468   3104
hypercalls                 0      0
insn_emulation       1417443   7518
insn_emulation_fail        0      0
invlpg                     0      0
io_exits              869380   2795
irq_exits             253501   2362
irq_injections        616967   6804
irq_window            201186   2161
largepages              1019      0
mmio_exits            205268      0
mmu_cache_miss           192      0
mmu_flooded                0      0
mmu_pde_zapped             0      0
mmu_pte_updated            0      0
mmu_pte_write        7440546      0
mmu_recycled               0      0
mmu_shadow_zapped        259      0
mmu_unsync                 0      0
nmi_injections             0      0
nmi_window                 0      0
pf_fixed               38529     30
pf_guest                   0      0
remote_tlb_flush         761      1
request_irq                0      0
signal_exits               0      0
tlb_flush                  0      0
I use virtio-net (with vhost-net) and virtio-blk. I tried disabling
HPET (which basically eliminated the mmio_exits but did not increase
performance) and also tried commit 39a7a362e16bb27e98738d63f24d1ab5811e26a8
- no improvement.
My command line:
/usr/bin/qemu-kvm-1.0 \
  -netdev type=tap,id=guest8,script=no,downscript=no,ifname=tap0,vhost=on \
  -device virtio-net-pci,netdev=guest8,mac=52:54:00:ff:00:d3 \
  -drive format=host_device,file=/dev/mapper/iqn.2001-05.com.equallogic:0-8a0906-eeef4e007-a8a9f3818674f2fc-lieven-windows7-vc-r80788,if=virtio,cache=none,aio=native \
  -m 2048 -smp 2 -monitor tcp:0:4001,server,nowait -vnc :1 \
  -name lieven-win7-vc -boot order=dc,menu=off -k de \
  -pidfile /var/run/qemu/vm-187.pid -mem-path /hugepages -mem-prealloc \
  -cpu host -rtc base=localtime -vga std -usb -usbdevice tablet -no-hpet
What further information is needed to debug this?
Which kernel version (looks like something recent)?
2.6.38 with kvm-kmod 3.2
Which host CPU (looks like something old)?
Why? I guess it's (quite) new.
vendor_id        : GenuineIntel
cpu family       : 6
model            : 44
model name       : Intel(R) Xeon(R) CPU L5640 @ 2.27GHz
stepping         : 2
cpu MHz          : 1596.000
cache size       : 12288 KB
physical id      : 1
siblings         : 6
core id          : 10
cpu cores        : 6
apicid           : 52
initial apicid   : 52
fpu              : yes
fpu_exception    : yes
cpuid level      : 11
wp               : yes
flags            : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx
smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt aes lahf_lm
arat dts tpr_shadow vnmi flexpriority ept vpid
bogomips         : 2254.43
clflush size     : 64
cache_alignment  : 64
address sizes    : 40 bits physical, 48 bits virtual
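(The flags above already show vmx, ept and vpid, so this Westmere-EP part should be fine. To double-check that kvm_intel is actually running with EPT/VPID rather than falling back to shadow paging, something like this should do; a sketch, assuming the kvm_intel module is loaded:)

  # host-side sanity check for hardware-assisted paging
  cat /sys/module/kvm_intel/parameters/ept     # expect Y
  cat /sys/module/kvm_intel/parameters/vpid    # expect Y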
Which Windows virtio drivers are you using?
I used to use 0.1-16 and today also tried 0.1-22 from
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/
Take a trace like described here http://www.linux-kvm.org/page/Tracing
(with -no-hpet please).
Will prepare this.
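(For the archive, the capture described on that wiki page boils down to roughly the following; run it while the slow transfer is in progress, buffer size as suggested there:)

  # record all kvm tracepoints (kvm_exit, kvm_pio, kvm_mmio,
  # kvm_emulate_insn, ...) while reproducing the problem
  trace-cmd record -b 20000 -e kvm
  # stop with Ctrl-C after ~30s, then generate a readable report
  trace-cmd report > kvm-trace.txt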
Try to use -cpu host,+x2apic. It may help Linux guest performance.
Thanks, it improved throughput a little while lowering the
CPU usage. Does Windows not support this?
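(For reference, the flag just extends the -cpu option from the command line above, and a Linux guest makes the effect easy to sanity-check; a sketch, exact dmesg wording varies by kernel version:)

  # host: unchanged command line except for the -cpu option
  -cpu host,+x2apic
  # inside a Linux guest: is the feature bit visible?
  grep -m1 -o x2apic /proc/cpuinfo
  # inside a Linux guest: did the kernel switch APIC drivers?
  dmesg | grep -i x2apic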
Thanks
Peter
--
Gleb.