Hi

For some reason I am not able to get good network performance using virtio/vhost-net on a Debian KVM host (this is perhaps also valid for Ubuntu hosts). Disk IO is very good and the guests feel snappy, so nothing seems seriously wrong - just something suboptimal about the networking. The guests are Debian Wheezy (the performance issue seems to be related to the host only).

The test:
------------
iperf -s
iperf -c <iperf-server> -i 2 -t 33333

Problem description:
----------------------------
Guest to guest via local bridge: ~2.3 Gbit/s, very high CPU usage on vhost-$PID and the kvm process on the host
Physical server to guest on the same subnet: ~940 Mbit/s, but with very high CPU usage on vhost-$PID and the kvm process on the host
Physical server to guest via router: ~850 Mbit/s, with very high CPU usage on vhost-$PID and the kvm process on the host (why is routed traffic to the guest slower than switched traffic?)
Physical server to KVM host via router (just to verify that the router is not the issue): ~940 Mbit/s with almost no CPU usage

Expected results:
-------------------------
Guest to guest via local bridge: ~20 Gbit/s, high CPU usage
Physical server to guest on the same subnet: ~940 Mbit/s, with low CPU usage on vhost-$PID and a bit higher on the kvm process on the host
Physical server to guest via router: ~940 Mbit/s, with low CPU usage on vhost-$PID and a bit higher on the kvm process on the host
Physical server to KVM host via router: ~940 Mbit/s with almost no CPU usage (the same as my current result)

The expected results are based on the following (identical guests, network setup and hardware in all cases; only the host OS changes):
- Fedora 17 alpha as a 1:1 replacement for Debian Wheezy as KVM host gives the expected results
- Proxmox 1.9 and 2.0 (a Debian-based distro, but using RHEL 6.x-based kernels as far as I know) give the expected results
- VMware ESXi 5 with VMXNET3 gives even slightly better network performance

Details on host:
---------------------
OS: Debian Wheezy (testing), kernel 3.2.0-2-amd64, currently based on 3.2.12

virsh qemu-monitor-command --hmp mail 'info version':
1.0.0 (Debian qemu-kvm 1.0+dfsg-9)

virsh qemu-monitor-command --hmp mail 'info kvm':
kvm support: enabled

lsmod | grep kvm:
kvm_intel             121968  9
kvm                   287572  1 kvm_intel

lsmod | grep vhost:
vhost_net              27436  3
tun                    18337  7 vhost_net
macvtap                17598  1 vhost_net

Output from ps -ef for the running guest:
/usr/bin/kvm -S -M pc-0.15 -cpu core2duo,+lahf_lm,+rdtscp,+avx,+osxsave,+xsave,+aes,+popcnt,+x2apic,+sse4.2,+sse4.1,+pdcm,+xtpr,+cx16,+tm2,+est,+smx,+vmx,+ds_cpl,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+ds -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name mail -uuid ccace357-783d-ce9f-444a-419445ee601d -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/mail.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/dev/raid10/mail,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:f7:25:33,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:2 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
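If offload settings could matter here, I am happy to post them as well. This is how I would gather them on the host for the physical NIC, the bridge and the guest's tap device (the interface names eth0/br0/vnet0 are examples from my setup and may differ):

ethtool -k eth0           # show offload settings (tso, gso, gro, ...) on the physical NIC
ethtool -k br0            # same for the bridge the guest is attached to
ethtool -k vnet0          # same for the guest's tap device
ethtool -K eth0 gro off   # example: disable generic receive offload on the NIC for a test run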
Server hardware (it seems to be the same issue regardless of the server used):
- Intel(R) Xeon(R) CPU E31220 @ 3.10GHz, quad core
- 16 GB ECC RAM
- Supermicro X9SCI-LN4F (quad Intel server NICs using e1000e)
- System disk: Corsair Force Series 3 SSD, 60 GB
- Storage for guests: LVM images on directly attached RAID10

Guest:
---------
OS: Debian Wheezy (testing), kernel 3.2.0-2-amd64, currently based on 3.2.12

root@mail:~# lsmod | grep virtio:
virtio_balloon         12832  0
virtio_blk             12874  3
virtio_net             17808  0
virtio_pci             13207  0
virtio_ring            12969  4 virtio_pci,virtio_net,virtio_blk,virtio_balloon
virtio                 13093  5 virtio_ring,virtio_pci,virtio_net,virtio_blk,virtio_balloon

I have tried:
----------------
- Replacing Debian Wheezy with Debian Squeeze (stable, kernel 2.6.32-xx) - even worse results
- Replacing kernel 3.2.0-2-amd64 with a vanilla 3.4-rc2 kernel using a config based on Debian's included config - no apparent change
- Extracting the kernel config from Fedora 17 alpha's kernel and using it to build a new kernel from Debian Wheezy's kernel source (rough steps in the PS below) - slightly worse results
- ...in addition to replacing Debian with Fedora 17 alpha, Proxmox 1.9 and 2.0, and ESXi 5, all of which give the expected network performance using virtio

So I am at a loss here. It does not seem to be kernel-config related (using Fedora's config on the Debian kernel source did not help), so I suspect it is either a kernel patch that the Red Hat kernel based distros carry to make virtio/vhost much more efficient, or perhaps something in Debian's qemu version, its bridging, or similar.

Do you have any idea how to get the same performance from virtio/vhost networking on Debian? On the KVM hosts that perform as expected, guest-to-guest throughput matches running iperf against localhost inside the guest, so something has changed somewhere and it should be achievable on Debian too.

I would really appreciate some input.

Regards,
Hans-Kristian Bakke
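PS: In case the exact steps matter, this is roughly how I built the test kernel from Debian Wheezy's kernel source with Fedora 17 alpha's config (the config file path below is a placeholder; the steps are from memory):

# run inside the unpacked Debian kernel source tree
cp /path/to/fedora-17-alpha-config .config   # the config extracted from the Fedora kernel
make oldconfig                               # answer prompts for any options missing from the copied config
make -j4 deb-pkg                             # build installable .deb kernel packages the Debian way
dpkg -i ../linux-image-*.deb                 # install the resulting kernel image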