Re: error loading xdp program on virtio nic

On 2019/11/21 11:49 PM, David Ahern wrote:
On 11/21/19 12:02 AM, Jason Wang wrote:
By specifying queues property like:

<devices>

   <interface type='network'>
     <source network='default'/>
     <target dev='vnet1'/>
     <model type='virtio'/>
     <driver name='vhost' txmode='iothread' ioeventfd='on'
event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256'>
I cannot check this because libvirt 3.0 does not support
tx_queue_size. It is the multiqueue setting (queues=5 in the example)
that needs to be set to 2*Nvcpus for XDP, correct?


Yes.



       <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
ufo='off' mrg_rxbuf='off'/>
       <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
     </driver>
     </interface>
</devices>


The virtio_net driver suggests the queues are needed for XDP_TX:

        /* XDP requires extra queues for XDP_TX */
        if (curr_qp + xdp_qp > vi->max_queue_pairs) {
                NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
                netdev_warn(dev, "request %i queues but max is %i\n",
                            curr_qp + xdp_qp, vi->max_queue_pairs);
                return -ENOMEM;
        }
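The arithmetic behind the 2*Nvcpus rule can be sketched as follows. This is a simplification of the driver's check, not the actual kernel code; the function name and the assumption that one extra TX queue is reserved per vCPU are illustrative:

```python
def max_queue_pairs_needed(n_vcpus: int, xdp_tx: bool = True) -> int:
    """Sketch of why queues must be 2*Nvcpus for XDP on virtio-net.

    The guest normally uses one queue pair per vCPU. Attaching an XDP
    program additionally reserves one TX queue per CPU so XDP_TX can
    transmit without locking; if curr_qp + xdp_qp exceeds the queue
    pairs the device was configured with, the attach fails (-ENOMEM).
    """
    curr_qp = n_vcpus                    # normal per-vCPU queue pairs
    xdp_qp = n_vcpus if xdp_tx else 0    # extra TX rings reserved for XDP_TX
    return curr_qp + xdp_qp

print(max_queue_pairs_needed(2))   # a 2-vCPU guest needs queues >= 4
```

So for the queues='5' example above, a 2-vCPU guest has one queue pair to spare after the XDP reservation.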

Doubling the number of queues for each tap device adds overhead to the
hypervisor if you only want to allow XDP_DROP or XDP_REDIRECT. Am I
understanding that correctly?


Yes, but it's almost impossible to know in advance whether or not XDP_TX will be used by the program. If we don't use a per-CPU TX queue, transmits must be serialized through locks; I'm not sure it's worth trying that (not by default, of course).
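The trade-off described here, one shared TX ring guarded by a lock versus one ring per CPU, can be sketched in a hypothetical form (class names and the deque-as-ring model are illustrative, not the kernel's data structures):

```python
import threading
from collections import deque

class SharedTxQueue:
    """Fewer queues to provision, but every XDP_TX transmit
    serializes on a single lock."""
    def __init__(self):
        self._ring = deque()
        self._lock = threading.Lock()

    def xmit(self, pkt):
        with self._lock:          # contention point when many CPUs run XDP_TX
            self._ring.append(pkt)

class PerCpuTxQueues:
    """One ring per CPU: lock-free for the owning CPU, at the cost of
    doubling the queue count the hypervisor must provision -- the
    overhead David is asking about."""
    def __init__(self, n_cpus):
        self._rings = [deque() for _ in range(n_cpus)]

    def xmit(self, cpu, pkt):
        self._rings[cpu].append(pkt)   # no lock: the ring is CPU-local
```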

Thanks
