Re: vhost-[pid] 100% CPU

On Tue, 2014-04-08 at 16:49 -0400, Simon Chen wrote:
> A little update on this..
> 
> I turned on multiqueue for vhost-net. The receiving VM is now getting
> traffic over all four queues, judging by the CPU usage of the four
> vhost-[pid] threads. For some reason, the sender is now pegging 100%
> on one vhost-[pid] thread, even though four are available.
> 

You need to check how many vcpus the sender uses; multiqueue chooses the
txq based on the processor id. If only one vcpu is used, this result is
expected.
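
Something like the following (assuming the guest NIC is eth0) should show
whether the sender is actually spreading load across vcpus and queues:

  # inside the sending guest
  nproc                          # number of online vcpus
  ethtool -l eth0                # current TX/RX channel count
  grep virtio /proc/interrupts   # per-queue interrupt counters; if only
                                 # one output queue is climbing, all the
                                 # traffic is being sent from a single vcpu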
> Do I need to change anything inside the VM to leverage all four TX
> queues? I ran "ethtool -L eth0 combined 4", but that doesn't seem to
> be sufficient.

No other configuration is needed. I don't use iperf, but I can easily
make full use of all the queues when I start multiple netperf sessions.
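
For example, a rough sketch that should exercise all four queues (10.0.0.2
is just a placeholder for the receiver, which needs netserver running):

  for cpu in 0 1 2 3; do
      taskset -c $cpu netperf -H 10.0.0.2 -l 30 &
  done
  wait

Pinning each session to a different vcpu matters here, since the txq is
picked per processor.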

Btw, on my Xeon(R) CPU E5-2650 machine I can easily get 15Gbps+ of VM to
VM throughput with the net-next tree, without any optimization.
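
For reference, a minimal multiqueue setup looks roughly like the following
(the interface names and the queue count of 4 are only an example; vectors
should be 2*queues+2):

  qemu-system-x86_64 ... \
      -netdev tap,id=net0,vhost=on,queues=4 \
      -device virtio-net-pci,netdev=net0,mq=on,vectors=10

and then inside the guest:

  ethtool -L eth0 combined 4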
> 
> Thanks.
> -Simon
> 
> 
> On Sun, Apr 6, 2014 at 3:03 PM, Simon Chen <simonchennj@xxxxxxxxx> wrote:
> > Yes, I am aware of SR-IOV and its pros and cons. I don't think
> > OpenStack supports the orchestration very well at this point, and you
> > lose the flexible filtering provided by iptables at the hypervisor layer.
> >
> > At this point, I am trying to see how much throughput a more
> > software-based solution can achieve. Like I said, I've seen people
> > achieve 6Gbps+ VM-to-VM throughput using Open vSwitch and VXLAN
> > software tunneling. I am more curious to find out why my setup cannot
> > do that...
> >
> > Thanks.
> >
> > On Sun, Apr 6, 2014 at 1:35 PM, Bronek Kozicki <brok@xxxxxxxxxxx> wrote:
> >> On 06/04/2014 15:06, Simon Chen wrote:
> >>>
> >>> Hello,
> >>>
> >>> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net
> >>> in a typical OpenStack setup: VM1 -> tap -> Linux bridge -> OVS ->
> >>> host1 -> physical network -> host2 -> OVS -> Linux bridge -> tap -> VM2.
> >>>
> >>> It seems that under heavy network load, the vhost-[pid] processes on
> >>> the receiving side are using 100% CPU. The sender side is over 85%
> >>> utilized.
> >>>
> >>> I am seeing unsatisfactory VM-to-VM network performance (using iperf
> >>> with 16 concurrent TCP connections, I can only get 1.5Gbps, while
> >>> I've heard of people getting over 6Gbps at least), and I wonder if it
> >>> has something to do with vhost-net maxing out on CPU. If so, is there
> >>> anything I can tune in the system?
> >>
> >>
> >> You could dedicate a network card to your virtual machine, using PCI
> >> passthrough.
> >>
> >>
> >> B.
> >>
> >>


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



