Re: provide vhost thread per virtqueue for forwarding scenario

On 2013/5/20 15:43, Michael S. Tsirkin wrote:
> On Mon, May 20, 2013 at 02:11:19AM +0000, Qinchuanyu wrote:
>> The vhost thread provides both tx and rx service for virtio-net.
>> In forwarding scenarios, tx and rx share one vhost thread, so throughput is limited by that single thread.
>>
>> So I wrote a patch that provides a vhost thread per virtqueue, rather than per vhost_net.
>>
>> Of course, multi-queue virtio-net is the final solution, but it requires a new virtio-net driver in the guest.
>> If you have to run SUSE 10/11 or Red Hat 5.x as the guest and want to improve forwarding throughput,
>> a vhost thread per queue seems to be the only solution.
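
For context, a minimal sketch of the per-virtqueue-worker idea, loosely modeled on the 3.x vhost worker in drivers/vhost/vhost.c; the structure and function names below are illustrative assumptions, not the actual patch:

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/list.h>

/* Minimal stand-in for struct vhost_work from drivers/vhost/vhost.h;
 * the real structure carries extra flush/sequence state. */
struct vhost_work {
	struct list_head node;                /* must be INIT_LIST_HEAD'ed */
	void (*fn)(struct vhost_work *work);
};

/* Hypothetical: each virtqueue owns a worker instead of sharing the
 * single per-vhost_dev kthread. */
struct vhost_vq_worker {
	struct task_struct *worker;  /* kthread serving only this virtqueue */
	spinlock_t work_lock;
	struct list_head work_list;  /* pending work items for this vq */
};

/* Queue a work item on the per-vq worker rather than the per-device one. */
static void vhost_vq_work_queue(struct vhost_vq_worker *w,
				struct vhost_work *work)
{
	unsigned long flags;

	spin_lock_irqsave(&w->work_lock, flags);
	if (list_empty(&work->node)) {
		list_add_tail(&work->node, &w->work_list);
		wake_up_process(w->worker);  /* tx and rx no longer contend */
	}
	spin_unlock_irqrestore(&w->work_lock, flags);
}

With tx and rx each owning a worker, the two directions can run on separate cores, which is consistent with the doubled forwarding throughput reported below.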
> Why is that? If multi-queue works well for you, just update the drivers in
> the guests that you care about. A guest driver backport is not that hard.
>
> In my testing, the performance of a thread per vq varies: some workloads
> gain throughput, but you get more IPIs and more scheduling overhead, so
> you waste more host CPU per byte. As you create more VMs, this stops
> being a win.

>> I did the test with kernel 3.0.27 and qemu-1.4.0, with SUSE 11 SP2 as the
>> guest; two vhost threads delivered double the tx/rx forwarding throughput
>> of a single vhost thread.
>> vhost_blk has only one virtqueue, so it still uses one vhost thread, unchanged.
>>
>> Is there something wrong with this solution? If not, I will post the patch later.
>>
>> Best regards
>> King
> Yes. I don't think we want to create threads even more aggressively
> in all cases; I'm worried about scalability as it is.
> I think we should explore a more flexible approach: use a thread pool
> (for example, a workqueue) to share threads between virtqueues, and
> switch to a separate thread only if there is a free CPU and the existing
> threads are busy. Ideally, threads would be shared between vhost instances too.
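
A minimal sketch of the shared-pool direction suggested above, using the kernel's unbound workqueues; vhost_wq, vhost_vq_poll, and vhost_vq_handle are hypothetical names, not existing vhost code:

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

/* One workqueue shared by every virtqueue of every vhost instance; the
 * workqueue core grows or shrinks its worker pool with CPU availability. */
static struct workqueue_struct *vhost_wq;

struct vhost_vq_poll {
	struct work_struct work;  /* one work item per virtqueue */
};

static void vhost_vq_handle(struct work_struct *work)
{
	/* the owning virtqueue's handle_tx()/handle_rx() would run here */
}

static int __init vhost_pool_init(void)
{
	/* WQ_UNBOUND: items may run on any CPU, so busy virtqueues can
	 * spill onto free cores instead of serializing on one kthread. */
	vhost_wq = alloc_workqueue("vhost", WQ_UNBOUND, 0);
	return vhost_wq ? 0 : -ENOMEM;
}

/* Called on a guest kick: schedule the virtqueue for service. The item
 * is set up once with INIT_WORK(&poll->work, vhost_vq_handle). */
static void vhost_vq_kick(struct vhost_vq_poll *poll)
{
	queue_work(vhost_wq, &poll->work);
}

Note that queue_work() ignores a submission while the same item is still pending, which roughly matches the list_empty() deduplication in today's vhost worker.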
On the Xen platform, the network backend PV driver model has already evolved
this way: netbacks from all DomUs share a thread pool, and the number of
threads equals the number of CPU cores.
Is there any plan for the KVM platform?


