On Thu, 2022-06-09 at 14:57 -0700, Haiyang Zhang wrote:
> Support XDP_REDIRECT action
>
> Signed-off-by: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>

You really should expand the changelog a little bit...

> ---
>  drivers/net/ethernet/microsoft/mana/mana.h    |  6 ++
>  .../net/ethernet/microsoft/mana/mana_bpf.c    | 64 +++++++++++++++++++
>  drivers/net/ethernet/microsoft/mana/mana_en.c | 13 +++-
>  .../ethernet/microsoft/mana/mana_ethtool.c    | 12 +++-
>  4 files changed, 93 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h
> index f198b34c232f..d58be64374c8 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana.h
> +++ b/drivers/net/ethernet/microsoft/mana/mana.h
> @@ -53,12 +53,14 @@ struct mana_stats_rx {
>  	u64 bytes;
>  	u64 xdp_drop;
>  	u64 xdp_tx;
> +	u64 xdp_redirect;
>  	struct u64_stats_sync syncp;
>  };
>
>  struct mana_stats_tx {
>  	u64 packets;
>  	u64 bytes;
> +	u64 xdp_xmit;
>  	struct u64_stats_sync syncp;
>  };
>
> @@ -311,6 +313,8 @@ struct mana_rxq {
>  	struct bpf_prog __rcu *bpf_prog;
>  	struct xdp_rxq_info xdp_rxq;
>  	struct page *xdp_save_page;
> +	bool xdp_flush;
> +	int xdp_rc; /* XDP redirect return code */
>
>  	/* MUST BE THE LAST MEMBER:
>  	 * Each receive buffer has an associated mana_recv_buf_oob.
> @@ -396,6 +400,8 @@ int mana_probe(struct gdma_dev *gd, bool resuming);
>  void mana_remove(struct gdma_dev *gd, bool suspending);
>
>  void mana_xdp_tx(struct sk_buff *skb, struct net_device *ndev);
> +int mana_xdp_xmit(struct net_device *ndev, int n, struct xdp_frame **frames,
> +		  u32 flags);
>  u32 mana_run_xdp(struct net_device *ndev, struct mana_rxq *rxq,
>  		 struct xdp_buff *xdp, void *buf_va, uint pkt_len);
>  struct bpf_prog *mana_xdp_get(struct mana_port_context *apc);
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> index 1d2f948b5c00..421fd39ff3a8 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> @@ -32,9 +32,55 @@ void mana_xdp_tx(struct sk_buff *skb, struct net_device *ndev)
>  		ndev->stats.tx_dropped++;
>  }
>
> +static int mana_xdp_xmit_fm(struct net_device *ndev, struct xdp_frame *frame,
> +			    u16 q_idx)
> +{
> +	struct sk_buff *skb;
> +
> +	skb = xdp_build_skb_from_frame(frame, ndev);
> +	if (unlikely(!skb))
> +		return -ENOMEM;

... especially considering this implementation choice: converting the
xdp frame to an skb is very bad for performance.

You could implement a mana xmit helper working on top of the xdp_frame
struct, and use it here.

Additionally you could consider revisiting the XDP_TX path: currently
it builds an skb from the xdp_buff to xmit it locally, while it could
resort to a much cheaper xdp_buff to xdp_frame conversion.

The traditional way to handle all of the above is to keep all the
XDP_TX/XDP_REDIRECT bits in the device-specific _run_xdp helper, which
will additionally avoid several conditionals in mana_rx_skb().

The above refactoring would probably require a bit of work, but it
will pay off for sure and will become more costly with time. Your
choice ;)

But at the very least we need a better changelog here.

Cheers,

Paolo
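
P.S. A very rough, untested sketch of the direction I have in mind is
below. mana_xmit_xdp_frame() is a hypothetical helper that would need
to be implemented on top of the existing mana TX queue code, posting
the WQE directly from the frame data instead of building an skb; the
rest only shows how the ndo_xdp_xmit loop and the XDP_TX path could
feed such a helper.

/* hypothetical: map frame->data/frame->len and post it on the TX
 * queue selected by q_idx, no skb involved
 */
static int mana_xmit_xdp_frame(struct mana_port_context *apc,
			       struct xdp_frame *frame, u16 q_idx);

int mana_xdp_xmit(struct net_device *ndev, int n, struct xdp_frame **frames,
		  u32 flags)
{
	struct mana_port_context *apc = netdev_priv(ndev);
	u16 q_idx = smp_processor_id() % ndev->real_num_tx_queues;
	int i, nxmit = 0;

	if (unlikely(flags & ~XDP_XMIT_FLUSH))
		return -EINVAL;

	for (i = 0; i < n; i++) {
		/* stop on the first failure; the core frees the
		 * remaining frames based on the return value
		 */
		if (mana_xmit_xdp_frame(apc, frames[i], q_idx))
			break;
		nxmit++;
	}

	return nxmit;
}

and the XDP_TX branch in the _run_xdp helper could then become
something like (with a local struct xdp_frame *xdpf):

	case XDP_TX:
		xdpf = xdp_convert_buff_to_frame(xdp);
		if (unlikely(!xdpf))
			break;	/* drop */
		if (mana_xmit_xdp_frame(apc, xdpf, rxq->rxq_idx))
			xdp_return_frame(xdpf);
		break;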