Hi folks,

We have some fundamental questions about vhost-net performance. We are testing KVM network I/O performance by connecting two server-class machines (Cisco UCS boxes with 32 GB of memory and 24 cores) back to back, with and without vhost-net.

Setup:
- QEMU 0.15, latest libvirt
- Fedora 16
- virtio drivers in both cases
- 1GbE interfaces connected back to back
- the vhost_net module is loaded for the vhost test case

We ran netperf UDP_STREAM from a client on one UCS box to a netserver in the VM:
- 64B packets: 122 Mbps (megabits) with vhost, 146 Mbps without vhost
- 256B packets: 482 Mbps with vhost, 404 Mbps without vhost

The interface configuration in the domain XML is:

<interface type='network'>
  <source network='mvnet'/>
  <model type='virtio'/>
  <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='on'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

The corresponding QEMU command-line parameters for the network interface are:

-netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24
-device virtio-net-pci,tx=bh,ioeventfd=on,event_idx=on,netdev=hostnet0,id=net0,mac=52:54:00:ba:4f:3d,bus=pci.0,addr=0x3

Questions: Why are we seeing such low throughput for 64B packets? Is there a sample test scenario/machine description for the 8x improvement observed and documented at http://www.linux-kvm.org/page/VhostNet ?

Appreciate any pointers in the right direction.

thx
-a
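P.S. For context on the 64B question, the measured throughputs can be converted into implied packet rates. A quick back-of-the-envelope sketch (assuming the netperf figures count UDP payload bits only, which is netperf's default reporting):

```python
def packets_per_second(throughput_mbps: float, payload_bytes: int) -> float:
    """Convert a payload throughput in megabits/s into packets/s."""
    return throughput_mbps * 1e6 / (payload_bytes * 8)

# Rates implied by the numbers above:
print(round(packets_per_second(122, 64)))   # vhost, 64B   -> 238281 pps
print(round(packets_per_second(146, 64)))   # no vhost, 64B -> 285156 pps
print(round(packets_per_second(482, 256)))  # vhost, 256B  -> 235352 pps
```

The 64B and 256B vhost cases land at roughly the same packets-per-second figure, which may suggest the bottleneck is per-packet processing rather than raw bandwidth; small-packet Mbps numbers are therefore expected to look low even on a 1GbE link.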