> -----Original Message-----
> From: Jakub Kicinski <kuba@xxxxxxxxxx>
> Sent: July 20, 2023 11:46
> To: Wei Fang <wei.fang@xxxxxxx>
> Cc: davem@xxxxxxxxxxxxx; edumazet@xxxxxxxxxx; pabeni@xxxxxxxxxx;
> ast@xxxxxxxxxx; daniel@xxxxxxxxxxxxx; hawk@xxxxxxxxxx;
> john.fastabend@xxxxxxxxx; Clark Wang <xiaoning.wang@xxxxxxx>; Shenwei
> Wang <shenwei.wang@xxxxxxx>; netdev@xxxxxxxxxxxxxxx; dl-linux-imx
> <linux-imx@xxxxxxx>; linux-kernel@xxxxxxxxxxxxxxx; bpf@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH net-next] net: fec: add XDP_TX feature support
>
> On Mon, 17 Jul 2023 18:37:09 +0800 Wei Fang wrote:
> > -		xdp_return_frame(xdpf);
> > +		if (txq->tx_buf[index].type == FEC_TXBUF_T_XDP_NDO)
> > +			xdp_return_frame(xdpf);
> > +		else
> > +			xdp_return_frame_rx_napi(xdpf);
>
> Are you taking budget into account? When NAPI is called with budget of 0 we
> are *not* in napi / softirq context. You can't be processing any XDP tx under
> such conditions (it may be a netpoll call from IRQ context).

Actually, the fec driver never takes the budget into account when cleaning up
the tx BD ring; the budget is only used for rx.

> > +static int fec_enet_xdp_tx_xmit(struct net_device *ndev,
> > +				struct xdp_buff *xdp)
> > +{
> > +	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
> > +	struct fec_enet_private *fep = netdev_priv(ndev);
> > +	struct fec_enet_priv_tx_q *txq;
> > +	int cpu = smp_processor_id();
> > +	struct netdev_queue *nq;
> > +	int queue, ret;
> > +
> > +	queue = fec_enet_xdp_get_tx_queue(fep, cpu);
> > +	txq = fep->tx_queue[queue];
> > +	nq = netdev_get_tx_queue(fep->netdev, queue);
> > +
> > +	__netif_tx_lock(nq, cpu);
> > +
> > +	ret = fec_enet_txq_xmit_frame(fep, txq, xdpf, false);
> > +
> > +	__netif_tx_unlock(nq);
>
> If you're reusing the same queues as the stack you need to call
> txq_trans_cond_update() at some point, otherwise the stack may print a splat
> complaining the queue got stuck.

Yes, you are absolutely right. I'll add txq_trans_cond_update() in the next
version. Thanks!
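
For context, the pattern the budget question points at usually looks like the
sketch below: the tx completion path is handed the NAPI budget and uses it to
pick the frame-return helper that is safe for the calling context. This is an
illustrative fragment only; my_clean_tx(), struct my_tx_buf and the
MY_TXBUF_T_* constants are invented stand-ins for the driver's own names, not
the fec code.

	struct my_tx_buf {
		enum { MY_TXBUF_T_SKB, MY_TXBUF_T_XDP_TX,
		       MY_TXBUF_T_XDP_NDO } type;
		struct xdp_frame *xdpf;
	};

	/* Called from the driver's NAPI poll handler with its budget.
	 * budget == 0 means netpoll invoked us, possibly from IRQ
	 * context, so the NAPI-only return helper must not be used.
	 */
	static void my_clean_tx(struct my_tx_buf *tx_buf, int budget)
	{
		struct xdp_frame *xdpf = tx_buf->xdpf;

		if (tx_buf->type == MY_TXBUF_T_XDP_NDO || !budget)
			/* Safe from any context, including netpoll/IRQ. */
			xdp_return_frame(xdpf);
		else
			/* Only valid while running in the owning NAPI
			 * (softirq) context, i.e. budget != 0.
			 */
			xdp_return_frame_rx_napi(xdpf);
	}

The branch is worth having because xdp_return_frame_rx_napi() lets page_pool
recycle pages through its lockless per-CPU cache, which is only safe while
actually polling in NAPI context, exactly what a budget of 0 rules out.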
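
As for the queue-stuck splat, the fix amounts to one extra call while the tx
lock is held. A minimal sketch of the placement, based on the quoted
fec_enet_xdp_tx_xmit(); the exact spot is an assumption here, not the final
patch:

	__netif_tx_lock(nq, cpu);

	ret = fec_enet_txq_xmit_frame(fep, txq, xdpf, false);

	/* XDP shares this queue with the regular stack; refresh
	 * trans_start so dev_watchdog() doesn't conclude the queue is
	 * stuck and print a tx timeout splat.
	 */
	txq_trans_cond_update(nq);

	__netif_tx_unlock(nq);

txq_trans_cond_update() only writes trans_start when it differs from the
current jiffies, so calling it on every XDP transmit is cheap.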