On Mon, Sep 13, 2010 at 4:32 AM, Thibault VINCENT
<thibault.vincent@xxxxxxxxxxxx> wrote:
> Hello
>
> I'm trying to achieve higher than gigabit transfers over a virtio
> NIC, with no success, and I can't find a recent bug report or
> discussion about such an issue.
>
> The simplest test consists of two VMs running on a high-end blade
> server with 4 cores and 4GB RAM each, and a virtio NIC dedicated to
> the inter-VM communication. On the host, the two vnet interfaces
> are enslaved into a bridge. I use a combination of 2.6.35 on the
> host and 2.6.32 in the VMs.
> Running iperf or netperf on these VMs, with TCP or UDP, results in
> ~900Mbit/s transfers. This is what could be expected of a 1G
> interface, and indeed the e1000 emulation performs similarly.
>
> Changing the txqueuelen, MTU, and offload settings on every
> interface (bridge/tap/virtio_net) didn't improve the speed, nor did
> installing irqbalance or adding CPU and RAM.
>
> Is this normal? Is the multiqueue patch intended to address this?
> It's quite possible I missed something :)

I'm able to achieve quite a bit more than 1Gbps using virtio-net
between 2 guests on the same host connected via an internal bridge.
With the virtio-net TX bottom half handler I can easily hit 7Gbps TCP
and 10+Gbps UDP using netperf (TCP_STREAM/UDP_STREAM tests). Even
without the bottom half patches (not yet in qemu-kvm.git), I can get
~5Gbps.

Maybe you could describe your setup further: host details, bridge
setup, guests, specific tests, etc.

Thanks,

Alex
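P.S. In case concrete command lines make the comparison easier, here
is a minimal sketch of the kind of setup I mean. All names (br0,
vnet0/vnet1, the disk images) are placeholders, not my exact
configuration:

    # On the host: create two taps and enslave them into a bridge
    tunctl -t vnet0
    tunctl -t vnet1
    brctl addbr br0
    brctl addif br0 vnet0
    brctl addif br0 vnet1
    ip link set vnet0 up
    ip link set vnet1 up
    ip link set br0 up

    # Launch each guest with a virtio NIC bound to its tap
    qemu-kvm -m 4096 -smp 4 -drive file=guest-a.img \
        -net nic,model=virtio -net tap,ifname=vnet0,script=no
    qemu-kvm -m 4096 -smp 4 -drive file=guest-b.img \
        -net nic,model=virtio -net tap,ifname=vnet1,script=no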
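The tests themselves are plain netperf stream runs, along these lines
(guest-b stands for the address of the receiving guest):

    # In guest-b: start the netperf daemon
    netserver

    # In guest-a: 60-second TCP and UDP stream tests
    netperf -H guest-b -l 60 -t TCP_STREAM
    netperf -H guest-b -l 60 -t UDP_STREAM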
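And since offload settings were mentioned: GSO/TSO on the guest NIC
and the host tap make a large difference above 1Gbps, so it's worth
confirming they actually took effect rather than just being set.
Again a sketch, with eth0 standing for the guest's virtio interface:

    # In the guest: current offload state of the virtio NIC
    ethtool -k eth0

    # On the host: same check on the tap and the bridge
    ethtool -k vnet0
    ethtool -k br0

    # Larger tap transmit queue, in case transmits are being dropped
    ifconfig vnet0 txqueuelen 1000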