On Wed, 10 Feb 2021 16:40:41 -0500
"Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote:

> On Wed, Jan 13, 2021 at 04:08:57PM +0800, Xuan Zhuo wrote:
> > The number of queues implemented by many virtio backends is limited,
> > especially on machines with a large number of CPUs. In this case, it
> > is often impossible to allocate a separate queue for XDP_TX.
> >
> > This patch allows XDP_TX to run by reusing the existing SQ, holding
> > __netif_tx_lock(), when there are not enough queues.

I'm a little puzzled about the choice of the netdevice TXQ lock,
__netif_tx_lock() / __netif_tx_unlock().  Can you explain this choice
in more detail?

> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> > Reviewed-by: Dust Li <dust.li@xxxxxxxxxxxxxxxxx>
>
> I'd like to get some advice on whether this is ok from some
> XDP experts - previously my understanding was that it is
> preferable to disable XDP for such devices rather than to take
> locks on the XDP fast path.

I think it is acceptable, because ndo_xdp_xmit / virtnet_xdp_xmit
takes a bulk of packets (currently 16). Some drivers already do this.

It would have been nice if we could set a feature flag that allows
users to see that this driver uses locking in the XDP transmit
(ndo_xdp_xmit) function call... but it seems like a pipe dream :-P

Code related to the locking:

> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index ba8e637..7a3b2a7 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
[...]
> > @@ -481,14 +484,34 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
> >  	return 0;
> >  }
> >
> > -static struct send_queue *virtnet_xdp_sq(struct virtnet_info *vi)
> > +static struct send_queue *virtnet_get_xdp_sq(struct virtnet_info *vi)
> >  {
> >  	unsigned int qp;
> > +	struct netdev_queue *txq;
> > +
> > +	if (vi->curr_queue_pairs > nr_cpu_ids) {
> > +		qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
> > +	} else {
> > +		qp = smp_processor_id() % vi->curr_queue_pairs;
> > +		txq = netdev_get_tx_queue(vi->dev, qp);
> > +		__netif_tx_lock(txq, raw_smp_processor_id());
> > +	}
> >
> > -	qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
> >  	return &vi->sq[qp];
> >  }
> >
> > +static void virtnet_put_xdp_sq(struct virtnet_info *vi)
> > +{
> > +	unsigned int qp;
> > +	struct netdev_queue *txq;
> > +
> > +	if (vi->curr_queue_pairs <= nr_cpu_ids) {
> > +		qp = smp_processor_id() % vi->curr_queue_pairs;
> > +		txq = netdev_get_tx_queue(vi->dev, qp);
> > +		__netif_tx_unlock(txq);
> > +	}
> > +}
> > +
> >  static int virtnet_xdp_xmit(struct net_device *dev,
> >  			    int n, struct xdp_frame **frames, u32 flags)
> >  {
> > @@ -512,7 +535,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> >  	if (!xdp_prog)
> >  		return -ENXIO;
> >
> > -	sq = virtnet_xdp_sq(vi);
> > +	sq = virtnet_get_xdp_sq(vi);
> >
> >  	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) {
> >  		ret = -EINVAL;
> > @@ -560,12 +583,13 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> >  		sq->stats.kicks += kicks;
> >  	u64_stats_update_end(&sq->stats.syncp);
> >
> > +	virtnet_put_xdp_sq(vi);
> >  	return ret;
> >  }

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
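
[Editorial note, not part of the original mail: a minimal sketch of how the
two helpers from the patch bracket the transmit loop, condensed from the diff
above to show where the txq lock is held.  virtnet_get_xdp_sq(),
virtnet_put_xdp_sq() and __virtnet_xdp_xmit_one() are the names used in the
patch; sketch_xdp_xmit() and the simplified error handling are made up for
illustration and are not the literal patch code.]

static int sketch_xdp_xmit(struct net_device *dev, int n,
			   struct xdp_frame **frames, u32 flags)
{
	struct virtnet_info *vi = netdev_priv(dev);
	struct send_queue *sq;
	int i, nxmit = 0;

	/* Takes __netif_tx_lock() only in the shared case, i.e. when
	 * curr_queue_pairs <= nr_cpu_ids and the SQ is also used by the
	 * regular transmit path.
	 */
	sq = virtnet_get_xdp_sq(vi);

	/* The lock (when taken) is held across the whole bulk handed in
	 * by the caller -- up to 16 frames per ndo_xdp_xmit() call -- so
	 * the locking cost is amortized over the bulk rather than paid
	 * per frame.
	 */
	for (i = 0; i < n; i++) {
		if (__virtnet_xdp_xmit_one(vi, sq, frames[i]))
			break;
		nxmit++;
	}

	virtnet_put_xdp_sq(vi);	/* drops the txq lock in the shared case */

	return nxmit;
}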