Re: virtio + vhost-net performance issue - preadv ?

So far, I have given this another try.

After correcting permissions...
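
(For reference, a sketch of the sort of permission fix meant here: the
/dev/vhost-net node ships root-only, as Ben's listing below shows. The
kvm group is an assumption; use whatever group your qemu runs as.)

  # chown root:kvm /dev/vhost-net
  # chmod 0660 /dev/vhost-net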

When you create a VM (using qemu-kvm 1.1 or 1.2 with a modern
libvirtd), you get this:

  qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier:
  vhost VQ 0 notifier binding failed: 38
  qemu-kvm: unable to start vhost net: 38: falling back on userspace virtio
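
Error 38 here is a raw errno; on Linux, errno 38 is ENOSYS ("Function
not implemented"), which fits a host kernel that lacks the
eventfd/ioeventfd support vhost needs. A quick way to decode it (the
one-liner is mine, not from the original report):

  $ python -c 'import errno, os; print errno.errorcode[38], os.strerror(38)'
  ENOSYS Function not implemented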


This seems related to ioeventfd support, which is present in the Red Hat 6.1 kernel but not in Red Hat 5.x.

I am using the ELRepo kernel with the vhost_net module for these tests.
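
For completeness, getting the module in place looks like this (device
path and expected output are as in Ben's message quoted below):

  # modprobe vhost_net
  # lsmod | grep vhost
  # ls -l /dev/vhost-net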

So even if you set the network driver to "vhost" in the domain XML, it
falls back to userspace qemu.


Setting ioeventfd="off" on the interface in the domain XML has no effect either.
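
For reference, the interface element I am testing against looks roughly
like this (bridge name taken from Ben's config quoted below; the driver
line is the knob in question):

  <interface type="bridge">
    <source bridge="br0"/>
    <model type="virtio"/>
    <driver name="vhost" ioeventfd="off"/>
  </interface>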

And that's all for these tests.

David

2012/11/14 Ben Clay <rbclay@xxxxxxxx>:
> I have a working copy of libvirt 0.10.2 + qemu 1.2 installed on a vanilla
> up-to-date (2.6.32-279.9.1) CentOS 6 host, and get very good VM <-> VM
> network performance (both running on the same host) using virtio.  I have
> cgroups set to cap the VMs at 10Gbps and iperf shows I'm getting exactly
> 10Gbps.
>
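(A sketch of the measurement described above, assuming iperf2 and
made-up guest addresses:)

  vm1$ iperf -s
  vm2$ iperf -c 192.168.100.11 -t 30
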
> I copied these VMs to a CentOS 5 host and installed libvirt 1.0 + qemu 1.2.
> However, the best performance I can get in between the VMs (again running on
> the same host) is ~2Gbps.  In both cases, this is over a bridged interface
> with static IPs assigned to each VM.  I've also tried virtual networking
> with NAT or routing, yielding the same results.
>
> I figured it was due to vhost-net missing on the older CentOS 5 kernel, so I
> installed 2.6.39-4.2 from ELRepo and got the /dev/vhost-net device and vhost
> processes associated with each VM:
>
> ]$ lsmod | grep vhost
> vhost_net              28446  2
> tun                    23888  7 vhost_net
>
> ]$ ps aux | grep vhost-
> root      9628  0.0  0.0      0     0 ?        S    17:57   0:00
> [vhost-9626]
> root      9671  0.0  0.0      0     0 ?        S    17:57   0:00
> [vhost-9670]
>
> ]$ ls /dev/vhost-net -al
> crw------- 1 root root 10, 58 Nov 13 15:19 /dev/vhost-net
>
> After installing the new kernel, I also tried rebuilding libvirt and qemu,
> to no avail.  I also disabled cgroups, just in case it was getting in the
> way, as well as iptables.  I can see the virtio_net module loaded inside the
> guest, and using virtio raises my performance from <400Mbps to 2Gbps, so it
> does make some improvement.
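
(One quick way to confirm the guest NIC driver, assuming the interface
shows up as eth0 inside the guest:)

  guest$ ethtool -i eth0 | head -1
  driver: virtio_net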
>
> The only differences between the two physical hosts that I can find are:
>
> - qemu on the CentOS 5 host builds without preadv support - would this make
> such a huge performance difference?  CentOS 5 only ships an old glibc, which
> is missing preadv (see the quick check after this list)
> - qemu on the CentOS 5 host builds without PIE
> - libvirt 1.0 was required on the CentOS 5 host, since 0.10.2 had a build
> bug. I don't think this should matter.
> - I haven't tried rebuilding the VMs from scratch on the CentOS 5 host, which
> I guess is worth a try.
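
(A quick way to confirm the preadv gap called out in the first item:
glibc only gained preadv() in 2.10, and CentOS 5 ships 2.5, so the grep
should come back empty there. The libc path assumes x86_64:)

  $ objdump -T /lib64/libc.so.6 | grep -w preadv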
>
> The qemu process is being started with virtio + vhost:
>
> /usr/bin/qemu-system-x86_64 -name vmname -S -M pc-1.2 -enable-kvm -m 4096 \
>   -smp 8,sockets=8,cores=1,threads=1 \
>   -uuid 212915ed-a34a-4d6d-68f5-2216083a7693 -no-user-config -nodefaults \
>   -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname.monitor,server,nowait \
>   -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown \
>   -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
>   -drive file=/mnt/vmname/disk.img,if=none,id=drive-virtio-disk0,format=raw,cache=none \
>   -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
>   -netdev tap,fd=16,id=hostnet0,vhost=on,vhostfd=18 \
>   -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:11:22:33:44:55,bus=pci.0,addr=0x3 \
>   -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
>   -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -vga cirrus \
>   -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>
> The relevant part of my libvirt config is below; I've tried omitting the
> target, alias and address elements with no difference in performance:
>
>   <interface type="bridge">
>     <mac address="00:11:22:33:44:55"/>
>     <source bridge="br0"/>
>     <target dev="vnet0"/>
>     <model type="virtio"/>
>     <alias name="net0"/>
>     <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0"/>
>   </interface>
>
> Is there something else that could be getting in the way here?
>
> Thanks!
>
> Ben Clay
> rbclay@xxxxxxxx
>
>
>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

