On Wed, Mar 05, 2025 at 05:21:31PM +0100, Alexander Lobakin wrote:
> Use libeth XDP infra to implement .ndo_xdp_xmit() in idpf.
> The Tx callbacks are reused from XDP_TX code. XDP redirect target
> feature is set/cleared depending on the XDP prog presence, as for now
> we still don't allocate XDP Tx queues when there's no program.
> 
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
> ---
>  drivers/net/ethernet/intel/idpf/xdp.h      |  2 ++
>  drivers/net/ethernet/intel/idpf/idpf_lib.c |  1 +
>  drivers/net/ethernet/intel/idpf/xdp.c      | 29 ++++++++++++++++++++++
>  3 files changed, 32 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/idpf/xdp.h b/drivers/net/ethernet/intel/idpf/xdp.h
> index fde85528a315..a2ac1b2f334f 100644
> --- a/drivers/net/ethernet/intel/idpf/xdp.h
> +++ b/drivers/net/ethernet/intel/idpf/xdp.h
> @@ -110,5 +110,7 @@ static inline void idpf_xdp_tx_finalize(void *_xdpq, bool sent, bool flush)
>  void idpf_xdp_set_features(const struct idpf_vport *vport);
>  
>  int idpf_xdp(struct net_device *dev, struct netdev_bpf *xdp);
> +int idpf_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
> +		  u32 flags);
>  
>  #endif /* _IDPF_XDP_H_ */
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> index 2d1efcb854be..39b9885293a9 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> @@ -2371,4 +2371,5 @@ static const struct net_device_ops idpf_netdev_ops = {
>  	.ndo_set_features = idpf_set_features,
>  	.ndo_tx_timeout = idpf_tx_timeout,
>  	.ndo_bpf = idpf_xdp,
> +	.ndo_xdp_xmit = idpf_xdp_xmit,
>  };
> diff --git a/drivers/net/ethernet/intel/idpf/xdp.c b/drivers/net/ethernet/intel/idpf/xdp.c
> index abf75e840c0a..1834f217a07f 100644
> --- a/drivers/net/ethernet/intel/idpf/xdp.c
> +++ b/drivers/net/ethernet/intel/idpf/xdp.c
> @@ -357,8 +357,35 @@ LIBETH_XDP_DEFINE_START();
>  LIBETH_XDP_DEFINE_TIMER(static idpf_xdp_tx_timer, idpf_clean_xdp_irq);
>  LIBETH_XDP_DEFINE_FLUSH_TX(idpf_xdp_tx_flush_bulk, idpf_xdp_tx_prep,
>  			   idpf_xdp_tx_xmit);
> +LIBETH_XDP_DEFINE_FLUSH_XMIT(static idpf_xdp_xmit_flush_bulk, idpf_xdp_tx_prep,
> +			     idpf_xdp_tx_xmit);
>  LIBETH_XDP_DEFINE_END();
>  
> +/**
> + * idpf_xdp_xmit - send frames queued by ``XDP_REDIRECT`` to this interface
> + * @dev: network device
> + * @n: number of frames to transmit
> + * @frames: frames to transmit
> + * @flags: transmit flags (``XDP_XMIT_FLUSH`` or zero)
> + *
> + * Return: number of frames successfully sent or -errno on error.
> + */
> +int idpf_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
> +		  u32 flags)
> +{
> +	const struct idpf_netdev_priv *np = netdev_priv(dev);
> +	const struct idpf_vport *vport = np->vport;
> +
> +	if (unlikely(!netif_carrier_ok(dev) || !vport->link_up))
> +		return -ENETDOWN;
> +
> +	return libeth_xdp_xmit_do_bulk(dev, n, frames, flags,
> +				       &vport->txqs[vport->xdp_txq_offset],
> +				       vport->num_xdp_txq,

Have you considered making libeth stateful at some point, so that it could
be given initialization data such as vport->num_xdp_txq, which is rather
constant, instead of having it passed on every call? I got a bit puzzled
here; it took me some digging to realize that it is only used as a bounds
check and that libeth_xdpsq_id() uses the CPU id as the index.
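Roughly something like the below is what I had in mind; all names are made
up on the spot, so treat it as a sketch of the idea rather than a concrete
API proposal:

	/* hypothetical addition to the libeth XDP header */
	struct libeth_xdpsq_cfg {
		void	**xdpsqs;	/* driver's XDP SQ pointer array */
		u32	num_xdpsqs;	/* only bounds the CPU id lookup */
	};

	/* registered once when the XDP SQs are (re)allocated... */
	void libeth_xdpsq_cfg_set(struct net_device *dev,
				  const struct libeth_xdpsq_cfg *cfg);

	/* ...so the hot path wouldn't have to carry the queue array and
	 * its length through every libeth_xdp_xmit_do_bulk() call.
	 */

Storing this per-netdev has its own cost (an extra dereference on the hot
path), so no strong feelings here, just a thought.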
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@xxxxxxxxx>

> +				       idpf_xdp_xmit_flush_bulk,
> +				       idpf_xdp_tx_finalize);
> +}
> +
>  void idpf_xdp_set_features(const struct idpf_vport *vport)
>  {
>  	if (!idpf_is_queue_model_split(vport->rxq_model))
> @@ -417,6 +444,8 @@ idpf_xdp_setup_prog(struct idpf_vport *vport, const struct netdev_bpf *xdp)
>  		cfg->user_config.xdp_prog = old;
>  	}
>  
> +	libeth_xdp_set_redirect(vport->netdev, vport->xdp_prog);
> +
>  	return ret;
>  }
>  
> -- 
> 2.48.1
> 