RE: Network throughput limits for local VM <-> VM communication

> Subject: Re: Network throughput limits for local VM <-> VM
> communication
> 
> On Tue, 2009-06-09 at 11:06 +0000, Fischer, Anna wrote:
> 
> > I am testing network throughput between two guests residing on the
> > same physical machine. I use a bridge to pass packets between those
> > guests and the virtio NIC model. I am wondering why the throughput
> > only goes up to about 970Mbps. Should we not be able to achieve much
> > higher throughput if the packets do not actually go out on the
> > physical wire? What are the limitations for throughput performance
> > under KVM/virtio? I can see that by default the interfaces (the tap
> > devices) have TX queue length set to 500, and I wonder if increasing
> > this would make any difference? Also, are there other things I would
> > need to consider to achieve higher throughput numbers for local guest
> > <-> guest communication? The CPU is not maxed out at all, and shows
> as
> > being idle for most of the time while the throughput does not
> increase
> > any more.
> >
> > I run KVM under standard Fedora Core 10 with a Linux kernel 2.6.27.
> 
> The first thing to check is that GSO is enabled - you can check with
> "ethtool -k eth0" in the guests.
> 
> Are you starting qemu from the command line or e.g. using libvirt? The
> libvirt version in F-10 didn't know how to enable IFF_VNET_HDR on the
> tapfd before passing it to qemu.
> 
> Really, I'd suggest updating to F-11 before digging further - you'll
> have qemu-kvm-0.10.5, linux-2.6.29.4 and libvirt-0.6.2.

I use libvirt (virt-manager), but I do run the latest libvirt 0.6.4 compiled from source. Does that make any difference? Upgrading to FC11 would not be ideal for me at this point, so it would be nice to get around that.
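In case it helps anyone else, a rough way to check whether libvirt actually enabled IFF_VNET_HDR on the tap device is to look at the tun_flags sysfs attribute on the host. This is only a sketch: the flag bit (0x4000) comes from the kernel tun driver headers, and "vnet0" is an assumed device name -- substitute whatever your tap is called.

```shell
# Sketch: decide whether a tap device was created with IFF_VNET_HDR.
# IFF_VNET_HDR is bit 0x4000 in the tun driver's flags; each tap exposes
# its flags via /sys/class/net/<tap>/tun_flags.
has_vnet_hdr() {
    # $1: hex flags value as read from /sys/class/net/<tap>/tun_flags
    [ $(( $1 & 0x4000 )) -ne 0 ]
}

# On a live host (assuming the tap is named vnet0):
#   has_vnet_hdr "$(cat /sys/class/net/vnet0/tun_flags)" \
#       && echo "IFF_VNET_HDR enabled" || echo "IFF_VNET_HDR not set"
```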

GSO was enabled. Switching it off does not make any difference.
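For the record, these are the kinds of commands involved; the device names (eth0 in the guest, vnet0 on the host) are assumptions and need to be adjusted to the actual setup:

```shell
# Inside the guest: show offload settings (GSO among them)
ethtool -k eth0

# On the host: raise the tap device's TX queue from the default of 500
# ("vnet0" is an assumed tap name -- check with "ip link" or "brctl show")
ip link set dev vnet0 txqueuelen 2000
```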

Something else I notice: I run 3 KVM guests, one acting as a router (with 2 vNICs) and the other two communicating with each other, so packets pass from one guest through the router to the other guest. When I stress this set-up with high-throughput network testing tools, the network hangs after a few seconds. This happens for all TCP tests. With UDP rate-limited to less than 10 Mbps it works fine, but without any rate limiting it breaks completely. I have tried both virtio and the emulated QEMU virtual NICs; it makes no difference.

It looks as if there is an overflow somewhere when QEMU/virtio cannot cope with the network load any more, after which the virtual interfaces stop transmitting altogether. Things mostly start working again when I shut down and bring back up the router's interfaces inside the guest. I use two bridges (and VLANs) to pass packets between the sending/receiving guests and the routing guest. The set-up works fine for ping and other low-throughput traffic.
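For reproducibility, this is roughly the shape of the tests (shown here with iperf as an example tool; the addresses and flags are illustrative, not my exact configuration):

```shell
# Assumed topology: guest A (10.0.1.2) -> router guest -> guest B (10.0.2.2)

# On guest B (receiver):
iperf -s

# On guest A: plain TCP test -- this is the case that hangs after a few seconds
iperf -c 10.0.2.2 -t 30

# On guest A: UDP rate-limited below 10 Mbit/s -- this case works fine
iperf -c 10.0.2.2 -u -b 8M -t 30
```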

Cheers,
Anna

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
