Re: [PATCH] virtio-net: parameterize min ring num_free for virtio receive

On 2019/7/18 at 10:43 PM, Michael S. Tsirkin wrote:
On Thu, Jul 18, 2019 at 10:42:47AM -0400, Michael S. Tsirkin wrote:
On Thu, Jul 18, 2019 at 10:01:05PM +0800, Jason Wang wrote:
On 2019/7/18 at 9:04 PM, Michael S. Tsirkin wrote:
On Thu, Jul 18, 2019 at 12:55:50PM +0000, ? jiang wrote:
This change makes the ring buffer reclaim threshold num_free configurable
for better performance; it is currently hard-coded to 1/2 of the queue size.
According to our tests with qemu + dpdk, packet drops happen when the
guest is not able to provide free buffers in the avail ring in time.
A smaller num_free value does reduce the number of packet drops in our
tests, as it makes virtio_net reclaim buffers earlier.

At the very least, we should make the value configurable by the user
while keeping the default of 1/2 of the queue size.

Signed-off-by: jiangkidd <jiangkidd@xxxxxxxxxxx>
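
For context, the threshold in question is the refill check in virtnet_receive()
in drivers/net/virtio_net.c, which today refills once more than half of the
vring is unused. Below is a minimal sketch of what a configurable threshold
could look like; the parameter name min_ring_num_free is made up here, while
receive_queue, try_fill_recv() and the refill work item are the driver's
existing internals:

/* Sketch only, not the patch under review: make the RX refill
 * threshold a module parameter instead of the hard-coded
 * "half of the vring".
 */
static unsigned int min_ring_num_free; /* 0 = default: vring size / 2 */
module_param(min_ring_num_free, uint, 0644);
MODULE_PARM_DESC(min_ring_num_free,
                 "Refill the RX ring once num_free exceeds this value");

        /* in virtnet_receive(), after the NAPI receive loop: */
        unsigned int thresh = min_ring_num_free ?:
                              virtqueue_get_vring_size(rq->vq) / 2;

        if (rq->vq->num_free > thresh) {
                if (!try_fill_recv(vi, rq, GFP_ATOMIC))
                        schedule_delayed_work(&vi->refill, 0);
        }

With min_ring_num_free left at 0 this behaves like the current code; smaller
values trigger the refill earlier, which is what the qemu + dpdk test above
relies on.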
That would be one reason, but I suspect it's not the true one. If you
need more buffers due to jitter, then just increase the queue size. That
would be cleaner.


However, are you sure this is the reason for the packet drops? Do you
see them dropped by dpdk due to lack of space in the ring, as opposed
to by the guest?


Besides those, this patch depends on the user to choose a suitable
threshold, which is not good. You need either a good value with
demonstrated numbers or something smarter.

Thanks
I do however think that we have a problem right now: try_fill_recv can
take a long time, during which the net stack does not run at all. Imagine
a 1K queue - we are talking 512 packets. That's excessive.
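
For reference, the 512 figure comes from the shape of the refill loop:
try_fill_recv() keeps posting buffers until the ring is completely full, so
with a 1024-entry ring and the current "more than half free" trigger a single
call can allocate roughly 512 buffers before NAPI lets the rest of the stack
run. A simplified paraphrase of that loop (buffer-type selection, error
handling and stats omitted):

        /* simplified paraphrase of try_fill_recv(): allocate and post
         * receive buffers until no free descriptors are left
         */
        do {
                err = add_recvbuf_mergeable(vi, rq, gfp); /* or _big()/_small() */
                if (err)
                        break;
        } while (rq->vq->num_free);

        virtqueue_kick(rq->vq); /* then notify the device */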


Yes, we will starve a fast host in this case.


The NAPI poll weight solves a similar problem, so it might make sense
to cap this at napi_poll_weight.

That will allow tweaking it through a module parameter as a side
effect :) Maybe just do NAPI_POLL_WEIGHT. Or maybe NAPI_POLL_WEIGHT/2,
like we do with half the queue ;). Please experiment, measure
performance and let the list know.

We need to be careful though: queues can also be small, and I don't
think we want to exceed queue size / 2, or maybe queue size -
napi_poll_weight. We definitely must not exceed the full queue size.


Looking at Intel, it uses 16 and i40e uses 32. It looks to me that
NAPI_POLL_WEIGHT/2 is better.
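
Concretely, the capped trigger being discussed would look something like the
sketch below (an illustration, not a merged patch): keep the "half of the
ring" behaviour for small rings, but never wait for more than
NAPI_POLL_WEIGHT/2 = 32 used buffers on large rings, so a single refill burst
stays bounded:

        /* cap the refill trigger at NAPI_POLL_WEIGHT/2 (i.e. 32) while
         * never exceeding half of the ring on small queues
         */
        unsigned int thresh = min_t(unsigned int,
                                    virtqueue_get_vring_size(rq->vq) / 2,
                                    NAPI_POLL_WEIGHT / 2);

        if (rq->vq->num_free > thresh) {
                if (!try_fill_recv(vi, rq, GFP_ATOMIC))
                        schedule_delayed_work(&vi->refill, 0);
        }

On a 1K ring the refill then triggers once a bit more than 32 buffers have
been consumed instead of 512, while rings smaller than 64 entries keep the
half-ring threshold and never exceed the queue size.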

Jiang, want to try that and post a new patch?

Thanks



--
MST