On Thu, Jun 06, 2019 at 10:08:50AM +0200, Jesper Dangaard Brouer wrote:
> On Wed, 05 Jun 2019 12:14:50 -0700 (PDT)
> David Miller <davem@xxxxxxxxxxxxx> wrote:
>
> > From: Ivan Khoronzhuk <ivan.khoronzhuk@xxxxxxxxxx>
> > Date: Wed, 5 Jun 2019 16:20:02 +0300
> >
> > > This patchset adds XDP support for the TI cpsw driver and bases it
> > > on the page_pool allocator. It was verified on af_xdp socket drop,
> > > af_xdp l2f, ebpf XDP_DROP, XDP_REDIRECT, XDP_PASS, XDP_TX.
> >
> > Jesper et al., please give this a good once over.
>
> The issue with merging this is that I recently discovered two bugs
> with the page_pool API, when using DMA-mappings, which result in
> missing DMA-unmaps. These bugs are not "exposed" yet, but will get
> exposed now with this driver.
>
> The two bugs are:
>
> #1: in-flight packet-pages can still be on a remote driver's TX queue,
>     while the XDP RX driver manages to unregister the page_pool
>     (waiting 1 RCU period is not enough).
>
> #2: this patchset also introduces page_pool_unmap_page(), which is
>     called before an XDP frame travels into the network stack (as no
>     callback exists, yet). But the CPUMAP redirect *also* needs to
>     call this, else we "leak"/miss a DMA-unmap.
>
> I do have a working prototype that fixes these two bugs. I guess I'm
> under pressure to send this to the list soon...
In the particular "cpsw" case there is no DMA unmap issue, and if there are no changes in the page_pool API, then no changes to the driver are required. page_pool_unmap_page() is used here for consistency, with the idea that it can be inherited/reused by other SoCs where it is relevant. One potential change, as you mentioned, is dropping page_pool_destroy(), after which the code can look like:

@@ -571,7 +571,6 @@ static void cpsw_destroy_rx_pool(struct cpsw_priv *priv, int ch)
 		return;

 	xdp_rxq_info_unreg(&priv->xdp_rxq[ch]);
-	page_pool_destroy(priv->page_pool[ch]);
 	priv->page_pool[ch] = NULL;
 }
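For reference, applying that hunk would leave cpsw_destroy_rx_pool() looking roughly like the sketch below. It is reconstructed from the diff context only: the early-return guard condition is a hypothetical placeholder (the hunk shows only the `return;` line), and the assumption that the pool is then released by the page_pool/xdp_rxq_info unregistration path, rather than an explicit driver call, is mine and not stated in this thread:

```c
static void cpsw_destroy_rx_pool(struct cpsw_priv *priv, int ch)
{
	if (!priv->page_pool[ch])	/* guard assumed; not in the hunk */
		return;

	/* page_pool_destroy() dropped: the pool is presumed released
	 * via the xdp_rxq_info unregistration path instead. */
	xdp_rxq_info_unreg(&priv->xdp_rxq[ch]);
	priv->page_pool[ch] = NULL;
}
```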
From what I know, there is an ongoing change adding switchdev support to cpsw that can change a lot and can require more work to rebase and retest this patchset, so I would like to believe this can be merged before that lands.

--
Regards,
Ivan Khoronzhuk