Re: error loading xdp program on virtio nic

On Fri, 22 Nov 2019 08:43:50 -0700, David Ahern wrote:
> On 11/21/19 11:09 PM, Jason Wang wrote:
> >> Doubling the number of queues for each tap device adds overhead to the
> >> hypervisor if you only want to allow XDP_DROP or XDP_DIRECT. Am I
> >> understanding that correctly?  
> > 
> > Yes, but it's almost impossible to know whether or not XDP_TX will be
> > used by the program. If we don't use a per-CPU TX queue, transmission
> > must be serialized through locks, and I'm not sure it's worth trying
> > that (not by default, of course).
> 
> This restriction is going to prevent use of XDP in VMs in general cloud
> hosting environments. 2x vhost threads for vcpus is a non-starter.
> 
> If one XDP feature has high resource needs, then we need to subdivide
> the capabilities to let some work and others fail. For example, a flag
> can be added to xdp_buff / xdp_md that indicates supported XDP features.
> If there are insufficient resources for XDP_TX, do not show support for
> it. If a program returns XDP_TX anyway, packets will be dropped.
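
To make the proposal concrete, here is a minimal sketch of a program
keying off such a flag. The "features" field and the XDP_FEATURE_TX bit
are made up for illustration; nothing like this exists in xdp_md today.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical bit: set by the kernel when XDP_TX is backed by real,
 * per-CPU TX queues on this device. */
#define XDP_FEATURE_TX	(1U << 0)

SEC("xdp")
int reflect_or_drop(struct xdp_md *ctx)
{
	__u32 features = 0;	/* would be something like ctx->features */

	/* ... parse / rewrite the packet here ... */

	if (features & XDP_FEATURE_TX)
		return XDP_TX;	/* TX queues are provisioned, bounce it back */

	return XDP_DROP;	/* otherwise XDP_TX would be silently dropped */
}

char _license[] SEC("license") = "GPL";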

This is covered by the always-planned but never-completed work on a
richer queue configuration ABI by yours truly, Magnus, and others.

Once we have better control over queues we can do some "manual mode"
config where queues are not automatically provisioned, are provisioned
even when no program is bound, or are provisioned on a subset of cores
(if all the NICs' IRQs/NAPIs are pinned to those cores, all should be good).
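
Until that ABI exists, the closest thing to "a subset of cores" today is
pinning the NIC's IRQs by hand, roughly like the sketch below. The
procfs/sysfs paths are the standard kernel interfaces; the device name
and CPU list are placeholders.

#include <dirent.h>
#include <stdio.h>

int main(void)
{
	const char *dev  = "eth0";	/* placeholder NIC */
	const char *cpus = "0-3";	/* cores reserved for its IRQs/NAPIs */
	char path[256];
	DIR *d;
	struct dirent *e;

	/* each file name under msi_irqs/ is one IRQ number of the device */
	snprintf(path, sizeof(path), "/sys/class/net/%s/device/msi_irqs", dev);
	d = opendir(path);
	if (!d) {
		perror(path);
		return 1;
	}

	while ((e = readdir(d))) {
		char aff[64];
		FILE *f;

		if (e->d_name[0] == '.')
			continue;
		snprintf(aff, sizeof(aff), "/proc/irq/%s/smp_affinity_list",
			 e->d_name);
		f = fopen(aff, "w");
		if (!f) {
			perror(aff);
			continue;
		}
		fprintf(f, "%s\n", cpus);	/* pin this vector to the chosen cores */
		fclose(f);
	}
	closedir(d);
	return 0;
}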

🤷‍♂️



