Re: Network throughput limits for local VM <-> VM communication

On 06/17/2009 10:36 AM, Fischer, Anna wrote:

> /usr/bin/qemu-system-x86_64 -m 1024 -smp 2 -name FC10-2 -uuid b811b278-fae2-a3cc-d51d-8f5b078b2477 -boot c -drive file=,if=ide,media=cdrom,index=2 -drive file=/var/lib/libvirt/images/FC10-2.img,if=virtio,index=0,boot=on -net nic,macaddr=54:52:00:11:ae:79,model=e1000 -net tap -net nic,macaddr=54:52:00:11:ae:78,model=e1000 -net tap -serial pty -parallel none -usb -vnc 127.0.0.1:2 -k en-gb -soundhw es1370


Okay, as I suspected, qemu has a trap here and you walked into it. The -net option plugs the device you specify into a virtual hub. The command line you provided plugs the two virtual NICs and the two tap devices into the same virtual hub, so any packet received from any of the four clients will be propagated to the other three.

To get this to work right, specify the vlan= parameter, which says which virtual hub a component is plugged into. Note that this has nothing to do with 802.blah vlans.

So your command line should look like

qemu ... -net nic,...,vlan=0 -net tap,...,vlan=0 -net nic,...,vlan=1 -net tap,...,vlan=1

This will give you two virtual hubs, each bridging a virtual nic to a tap device.
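Applied to your command line, the fixed invocation would look something like this (everything is copied from your original command; only the vlan= parameters are added and the dropped "-" before the second -net nic is restored):

# one hub per nic/tap pair: vlan=0 for the first, vlan=1 for the second
/usr/bin/qemu-system-x86_64 -m 1024 -smp 2 -name FC10-2 \
    -uuid b811b278-fae2-a3cc-d51d-8f5b078b2477 -boot c \
    -drive file=,if=ide,media=cdrom,index=2 \
    -drive file=/var/lib/libvirt/images/FC10-2.img,if=virtio,index=0,boot=on \
    -net nic,macaddr=54:52:00:11:ae:79,model=e1000,vlan=0 -net tap,vlan=0 \
    -net nic,macaddr=54:52:00:11:ae:78,model=e1000,vlan=1 -net tap,vlan=1 \
    -serial pty -parallel none -usb -vnc 127.0.0.1:2 -k en-gb -soundhw es1370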


> This is my "routing VM" that has two network interfaces and routes packets between two subnets. It has one interface plugged into bridge virbr0 and the other interface is plugged into virbr1:
>
> brctl show
> bridge name     bridge id               STP enabled     interfaces
> virbr0          8000.8ac1d18c63ec       no              vnet0
>                                                         vnet1
> virbr1          8000.2ebfcbb9ed70       no              vnet2
>                                                         vnet3

Please redo the tests with qemu vlans but without 802.blah vlans, so we see what happens without packet duplication.
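One quick way to check, as a sketch: watch one of the tap devices while pinging across the router. With everything on one hub you should see each frame more than once; with separate vlans each frame should appear exactly once (vnet0 here is just taken from your brctl output, substitute whichever tap device you want to watch):

tcpdump -n -e -i vnet0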

> If I use the e1000 virtual NIC model, I see performance drop significantly compared to using virtio_net. However, with virtio_net the network stalls after a few seconds of high-throughput traffic (as I mentioned in my previous post). Just to reiterate my scenario: I run three guests on the same physical machine; one guest is my routing VM, which routes IP network traffic between the other two guests.
>
> I am also wondering why CPU utilization does not get maxed out in this case while throughput does not go any higher. I do not understand what is stopping KVM from using more CPU for guest I/O processing; there is nothing else running on my machine. I have analyzed how much CPU each KVM thread uses, and the thread running the VCPU of the routing VM, which processes the interrupts of the e1000 virtual network card, uses the most. Is there any way I can optimize my network set-up? Maybe some specific configuration of the e1000 driver within the guest? Are there any known issues with this?

There are known issues with lack of flow control when sending packets out of a guest. If the guest runs TCP, that tends to correct for it, but if you run a lower-level protocol that doesn't have its own flow control, the guest may spend a lot of CPU generating packets that are eventually dropped. We are working on fixing this.
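You can see whether this is what's happening by comparing send and receive rates for a protocol with no flow control. A sketch with netperf, assuming it is installed in both endpoint guests and with 10.0.0.2 standing in for the receiving guest's address:

netperf -H 10.0.0.2 -t UDP_STREAM -- -m 1400

UDP_STREAM reports both the local send rate and the remote receive rate; a large gap between the two means the sending guest is burning CPU on packets that are later dropped.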

> I also see very different CPU utilization and network throughput figures when pinning threads to CPU cores using taskset. At one point I managed to double the throughput, but I could not reproduce that setup for some reason. What are the major issues I would need to pay attention to when pinning threads to cores in order to optimize my specific set-up so that I can achieve better network I/O performance?

It's black magic, unfortunately. But please retry with the fixed configuration and we'll continue from there.
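In the meantime, if you do experiment with pinning, at least record the thread IDs and pin each one explicitly so the runs are reproducible. A sketch, with all PIDs/TIDs made up:

# list all threads of the routing VM's qemu process
ps -eLf | grep qemu-system

# pin one thread (TID 12346 is hypothetical) to core 2
taskset -p -c 2 12346

Keeping a guest's vcpu thread and the I/O it serves on cores that share a cache, and away from the cores serving the other guests, is a reasonable first experiment.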

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

