Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails

On 2020/12/22 10:24 PM, Willem de Bruijn wrote:
On Mon, Dec 21, 2020 at 11:41 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:

On 2020/12/22 7:07 AM, Willem de Bruijn wrote:
On Wed, Dec 16, 2020 at 3:20 AM wangyunjian <wangyunjian@xxxxxxxxxx> wrote:
From: Yunjian Wang <wangyunjian@xxxxxxxxxx>

Currently we break the loop and wake up the vhost_worker when
sendmsg fails. When the worker wakes up again, we'll meet the
same error.
The patch is based on the assumption that such error cases always
return EAGAIN. Can it not also be ENOMEM, such as from tun_build_skb?

This will cause high CPU load. To fix this issue,
we can skip this description by ignoring the error. When we
exceeds sndbuf, the return value of sendmsg is -EAGAIN. In
the case we don't skip the description and don't drop packet.
the -> that

here and above: description -> descriptor

Perhaps slightly revise to more explicitly state that

1. in the case of persistent failure (i.e., bad packet), the driver
drops the packet
2. in the case of transient failure (e.g,. memory pressure) the driver
schedules the worker to try again later

If we want to go with this approach, we need a better time to wake up the
worker. Otherwise it just produces more stress on the CPU, which is what
this patch tries to avoid.
Perhaps I misunderstood the purpose of the patch: is it to drop
everything, regardless of transient or persistent failure, until the
ring runs out of descriptors?


My understanding is that the main motivation is to avoid high CPU utilization when sendmsg() fails due to a guest-side reason (e.g. a bad packet).
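
For reference, roughly what that split could look like inside the tx loop of handle_tx_copy() in drivers/vhost/net.c. This is only a sketch of the behaviour being debated, not the submitted patch; vhost_discard_vq_desc() and vhost_net_enable_vq() are the existing helpers, the branching itself is illustrative:

	err = sock->ops->sendmsg(sock, &msg, len);
	if (unlikely(err < 0)) {
		if (err == -EAGAIN) {
			/* Transient: sndbuf exceeded.  Keep the descriptor
			 * and let the socket poll wake the worker to retry
			 * later. */
			vhost_discard_vq_desc(vq, 1);
			vhost_net_enable_vq(net, vq);
			break;
		}
		/* Anything else (a bad packet, but also e.g. -ENOMEM from
		 * tun_build_skb()): the descriptor is consumed and the
		 * packet dropped, so the worker no longer spins retrying
		 * the same error. */
	}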



I can understand both a blocking and a drop strategy during memory
pressure. But a partial drop strategy until the ring capacity is exceeded
seems like a peculiar hybrid?


Yes. So I wonder if we can do better when we are under memory pressure. E.g. can we let the socket wake us up instead of rescheduling the worker here? At least in that case we know some memory might have been freed.
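
Purely as a sketch of that idea: a write-space callback on the socket that queues the vhost poll once memory becomes available again. vhost_poll_queue() is the existing helper in drivers/vhost/vhost.c; the name vhost_net_tx_write_space() and the use of sk_user_data to find the poll are assumptions made only for illustration:

	/* Hypothetical hook, for illustration only: called by the socket
	 * when write space (memory) becomes available again. */
	static void vhost_net_tx_write_space(struct sock *sk)
	{
		/* Assumed wiring: the tx vhost_poll was stashed in
		 * sk_user_data when the backend was set up. */
		struct vhost_poll *poll = sk->sk_user_data;

		if (poll)
			vhost_poll_queue(poll);	/* kick handle_tx() to retry */
	}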

Thanks







