Re: vhost-[pid] 100% CPU

Yes, I am aware of SR-IOV and its pros and cons. I don't think
OpenStack supports the orchestration very well at this point, and you
lose the flexible filtering provided by iptables at the hypervisor layer.
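
For concreteness, the kind of per-VM filtering I mean is a rule like
the one below, installed on the hypervisor and matched against the VM's
tap device (the interface name tap0 and the rule itself are only
placeholders); traffic that bypasses the host stack via SR-IOV never
traverses rules like this:

  # placeholder rule on the hypervisor, keyed to the VM's tap device
  iptables -A FORWARD -m physdev --physdev-in tap0 -p tcp --dport 22 -j DROP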

At this point, I am trying to see how much throughput a more
software-based solution can achieve. Like I said, I've seen people
achieve 6Gbps+ VM-to-VM throughput using OpenVSwitch and VXLAN
software tunneling. I am more curious to find out why my setup cannot
do that...
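
In case it helps narrow things down, these are the kinds of checks I'm
running on the hosts; eth0 below is only a placeholder for the uplink
that carries the VXLAN traffic:

  # per-thread CPU usage; the vhost workers show up as vhost-<qemu pid>
  pidstat -t 1 | grep vhost

  # offload settings on the uplink; if TSO/GSO/GRO are not effective
  # across the tunnel, VXLAN throughput tends to be CPU-bound
  ethtool -k eth0 | grep -E 'segmentation|checksum|generic-receive'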

Thanks.

On Sun, Apr 6, 2014 at 1:35 PM, Bronek Kozicki <brok@xxxxxxxxxxx> wrote:
> On 06/04/2014 15:06, Simon Chen wrote:
>>
>> Hello,
>>
>> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net in
>> a typical OpenStack setup: VM1->tap->linux
>> bridge->OVS->host1->physical network->host2->OVS->linux
>> bridge->tap->VM2.
>>
>> It seems that under heavy network load, the vhost-[pid] processes on
>> the receiving side are using 100% CPU. The sender side is over 85%
>> utilized.
>>
>> I am seeing unsatisfactory VM-to-VM network performance (using iperf
>> with 16 concurrent TCP connections, I can only get 1.5Gbps, while I've
>> heard of people getting over 6Gbps), and I wonder if it has something
>> to do with vhost-net maxing out on CPU. If so, is there anything I can
>> tune in the system?
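
One knob I have been looking at for exactly this question, though I
have not confirmed it solves the problem: multiqueue virtio-net, which
spreads the load over several vhost threads instead of a single one
per device. A rough sketch, assuming a libvirt-managed guest and
QEMU/kernel versions new enough to support it (the bridge name, guest
interface name and queue count are placeholders):

  <interface type='bridge'>
    <source bridge='br0'/>
    <model type='virtio'/>
    <driver name='vhost' queues='4'/>
  </interface>

  # inside the guest, enable the extra queues
  ethtool -L eth0 combined 4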
>
>
> You could dedicate a network card to your virtual machine using PCI
> passthrough.
>
>
> B.
>
>
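
Regarding the PCI passthrough suggestion above: for reference, a
passthrough of the NIC would look roughly like the libvirt domain XML
below. The PCI address is a placeholder (take the real one from lspci
on the host), and the host needs the IOMMU (VT-d/AMD-Vi) enabled:

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <!-- placeholder address; use the one lspci reports for the NIC -->
      <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </source>
  </hostdev>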