On 8/2/2023 11:07 AM, Haiyang Zhang wrote:
> Add page pool for RX buffers for faster buffer cycle and reduce CPU
> usage.
>
> The standard page pool API is used.
>
> Signed-off-by: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
> ---
> V5:
> In err path, set page_pool_put_full_page(..., false) as suggested by
> Jakub Kicinski
> V4:
> Add nid setting, remove page_pool_nid_changed(), as suggested by
> Jesper Dangaard Brouer
> V3:
> Update xdp mem model, pool param, alloc as suggested by Jakub Kicinski
> V2:
> Use the standard page pool API as suggested by Jesper Dangaard Brouer
> ---

> diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
> index 024ad8ddb27e..b12859511839 100644
> --- a/include/net/mana/mana.h
> +++ b/include/net/mana/mana.h
> @@ -280,6 +280,7 @@ struct mana_recv_buf_oob {
> 	struct gdma_wqe_request wqe_req;
>
> 	void *buf_va;
> +	bool from_pool; /* allocated from a page pool */

I suggest you use flags and not bools: each bool wastes 7 bits, and the
packing of this struct is already full of holes, made worse by this
patch (see the pahole tool).

>
> 	/* SGL of the buffer going to be sent has part of the work request. */
> 	u32 num_sge;

> @@ -330,6 +331,8 @@ struct mana_rxq {
> 	bool xdp_flush;
> 	int xdp_rc; /* XDP redirect return code */
>
> +	struct page_pool *page_pool;
> +
> 	/* MUST BE THE LAST MEMBER:
> 	 * Each receive buffer has an associated mana_recv_buf_oob.
> 	 */

The rest of the patch looks ok and is remarkably compact for a
conversion to page pool. I'd prefer someone with more page pool
exposure to review this for correctness, but FWIW

Reviewed-by: Jesse Brandeburg <jesse.brandeburg@xxxxxxxxx>
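
A quick illustration of the packing point above (a standalone userspace
demo, not code from the patch; the struct and field names here are made
up). Comparing sizeof(), or running pahole on the object file, shows
the holes that scattered bools create:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* bools scattered between wider members leave padding holes */
	struct with_bools {
		void *buf_va;     /* offset 0, 8 bytes */
		bool from_pool;   /* offset 8, 1 byte + 3 bytes padding */
		uint32_t num_sge; /* offset 12, 4 bytes */
		bool other_flag;  /* offset 16, 1 byte + 7 bytes padding */
		void *next_buf;   /* offset 24, 8 bytes -> total 32 */
	};

	/* a single flags word holds 32 booleans with no holes */
	#define RXBUF_FROM_POOL  (1u << 0)
	#define RXBUF_OTHER_FLAG (1u << 1)

	struct with_flags {
		void *buf_va;     /* offset 0, 8 bytes */
		void *next_buf;   /* offset 8, 8 bytes */
		uint32_t num_sge; /* offset 16, 4 bytes */
		uint32_t flags;   /* offset 20, 4 bytes -> total 24 */
	};

	int main(void)
	{
		printf("with_bools: %zu bytes\n", sizeof(struct with_bools));
		printf("with_flags: %zu bytes\n", sizeof(struct with_flags));
		return 0;
	}

On a typical 64-bit build this prints 32 vs 24 bytes, and the flags
variant still has 30 spare bits for future use.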