On Fri, 28 Jul 2023 16:18:27 -0700 Michael Chan wrote:
> From: Somnath Kotur <somnath.kotur@xxxxxxxxxxxx>
>
> The RXBD length field on all bnxt chips is 16-bit and so we cannot
> support a full page when the native page size is 64K or greater.
> The non-XDP (non page pool) code path has logic to handle this but
> the XDP page pool code path does not handle this. Add the missing
> logic to use page_pool_dev_alloc_frag() to allocate 32K chunks if
> the page size is 64K or greater.
>
> Fixes: 9f4b28301ce6 ("bnxt: XDP multibuffer enablement")
> Reviewed-by: Andy Gospodarek <andrew.gospodarek@xxxxxxxxxxxx>
> Signed-off-by: Somnath Kotur <somnath.kotur@xxxxxxxxxxxx>
> Signed-off-by: Michael Chan <michael.chan@xxxxxxxxxxxx>

Fix is a fix... Let's get this into net, first.

> -	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir,
> +	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE, bp->rx_dir,
>  			     DMA_ATTR_WEAK_ORDERING);

this

> -	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, PAGE_SIZE, bp->rx_dir,
> +	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE, bp->rx_dir,
>  			     DMA_ATTR_WEAK_ORDERING);

this

> -	dma_unmap_page_attrs(&pdev->dev, mapping, PAGE_SIZE,
> +	dma_unmap_page_attrs(&pdev->dev, mapping, BNXT_RX_PAGE_SIZE,
>  			     bp->rx_dir,
>  			     DMA_ATTR_WEAK_ORDERING);

and this - unnecessarily go over 80 chars when there's already
a continuation line that could take the last argument.

> @@ -185,7 +185,7 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
>  			struct xdp_buff *xdp)
>  {
>  	struct bnxt_sw_rx_bd *rx_buf;
> -	u32 buflen = PAGE_SIZE;
> +	u32 buflen = BNXT_RX_PAGE_SIZE;

nit: rev xmas tree here

>  	struct pci_dev *pdev;
>  	dma_addr_t mapping;
>  	u32 offset;
--
pw-bot: cr
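
For context on the commit message above, a minimal sketch - not the
actual bnxt patch - of how the XDP page pool path can cap buffers at
BNXT_RX_PAGE_SIZE when the native page is larger; the helper name here
is hypothetical, only page_pool_dev_alloc_frag()/_pages() and the
driver's BNXT_RX_PAGE_SIZE macro are real:

/* Sketch: carve a BNXT_RX_PAGE_SIZE (32K) fragment out of a larger
 * page so the 16-bit RXBD length field can still describe the buffer;
 * on <= 32K page systems, keep handing out whole pages.
 */
static struct page *bnxt_rx_alloc_sketch(struct page_pool *pool,
					 unsigned int *offset)
{
	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE)
		return page_pool_dev_alloc_frag(pool, offset,
						BNXT_RX_PAGE_SIZE);
	*offset = 0;
	return page_pool_dev_alloc_pages(pool);
}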
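
The wrapping asked for above would look something like this
(illustrative only - the last argument moves down to the existing
continuation line, keeping the first line under 80 chars):

	dma_unmap_page_attrs(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
			     bp->rx_dir, DMA_ATTR_WEAK_ORDERING);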
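
And the rev xmas tree nit, for reference - the same declarations from
the hunk above reordered longest line first, since buflen now outgrows
rx_buf:

	u32 buflen = BNXT_RX_PAGE_SIZE;
	struct bnxt_sw_rx_bd *rx_buf;
	struct pci_dev *pdev;
	dma_addr_t mapping;
	u32 offset;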