RE: [PATCH V3,net-next, 2/4] net: mana: Refactor RX buffer allocation code to prepare for various MTU

> -----Original Message-----
> From: Leon Romanovsky <leon@xxxxxxxxxx>
> Sent: Thursday, April 13, 2023 9:04 AM
> To: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
> Cc: linux-hyperv@xxxxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx; Dexuan Cui
> <decui@xxxxxxxxxxxxx>; KY Srinivasan <kys@xxxxxxxxxxxxx>; Paul Rosswurm
> <paulros@xxxxxxxxxxxxx>; olaf@xxxxxxxxx; vkuznets@xxxxxxxxxx;
> davem@xxxxxxxxxxxxx; wei.liu@xxxxxxxxxx; edumazet@xxxxxxxxxx;
> kuba@xxxxxxxxxx; pabeni@xxxxxxxxxx; Long Li <longli@xxxxxxxxxxxxx>;
> ssengar@xxxxxxxxxxxxxxxxxxx; linux-rdma@xxxxxxxxxxxxxxx;
> daniel@xxxxxxxxxxxxx; john.fastabend@xxxxxxxxx; bpf@xxxxxxxxxxxxxxx;
> ast@xxxxxxxxxx; Ajay Sharma <sharmaajay@xxxxxxxxxxxxx>;
> hawk@xxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH V3,net-next, 2/4] net: mana: Refactor RX buffer allocation
> code to prepare for various MTU
> 
> On Wed, Apr 12, 2023 at 02:16:01PM -0700, Haiyang Zhang wrote:
> > Move out common buffer allocation code from mana_process_rx_cqe() and
> > mana_alloc_rx_wqe() to helper functions.
> > Refactor related variables so they can be changed in one place, and buffer
> > sizes are in sync.
> >
> > Signed-off-by: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
> > Reviewed-by: Jesse Brandeburg <jesse.brandeburg@xxxxxxxxx>
> > ---
> > V3:
> > Refactored into multiple patches for readability. Suggested by Jacob Keller.
> >
> > V2:
> > Refactored into multiple patches for readability. Suggested by Yunsheng Lin.
> >
> > ---
> >  drivers/net/ethernet/microsoft/mana/mana_en.c | 154 ++++++++++--------
> >  include/net/mana/mana.h                       |   6 +-
> >  2 files changed, 91 insertions(+), 69 deletions(-)
> 
> <...>
> 
> > +static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
> > +			     dma_addr_t *da, bool is_napi)
> > +{
> > +	struct page *page;
> > +	void *va;
> > +
> > +	/* Reuse XDP dropped page if available */
> > +	if (rxq->xdp_save_va) {
> > +		va = rxq->xdp_save_va;
> > +		rxq->xdp_save_va = NULL;
> > +	} else {
> > +		page = dev_alloc_page();
> 
> Documentation/networking/page_pool.rst
>    10 Basic use involves replacing alloc_pages() calls with the
>    11 page_pool_alloc_pages() call.  Drivers should use page_pool_dev_alloc_pages()
>    12 replacing dev_alloc_pages().
> 
> General question, is this sentence applicable to all new code or only
> for XDP related paths?

Quote from the context before that sentence --

=============
Page Pool API
=============
The page_pool allocator is optimized for the XDP mode that uses one frame
per-page, but it can fallback on the regular page allocator APIs.
Basic use involves replacing alloc_pages() calls with the
page_pool_alloc_pages() call.  Drivers should use page_pool_dev_alloc_pages()
replacing dev_alloc_pages().

--unquote

So the page pool is optimized for XDP, and that sentence applies to drivers that have
set up a page pool for XDP optimization:
static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)  /* requires a pool to have been set up */

Back to the mana driver: we don't have a page pool set up yet (we will consider it in
the future), so we cannot call page_pool_dev_alloc_pages(pool) here yet.
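For reference, here is a rough sketch of what adopting the page pool would involve.
It is not part of this patch; rxq->page_pool is a hypothetical field, and the pool
sizing and flags are only guesses. It is meant to illustrate that
page_pool_dev_alloc_pages() presupposes a pool created with page_pool_create():

#include <net/page_pool.h>

/* Sketch only: create a per-RXQ pool at queue setup time. */
static int mana_create_page_pool_sketch(struct mana_rxq *rxq, struct device *dev)
{
	struct page_pool_params pprm = {
		.pool_size = rxq->num_rx_buf,	/* roughly one page per RX WQE (guess) */
		.nid	   = NUMA_NO_NODE,
		.dev	   = dev,
		.dma_dir   = DMA_FROM_DEVICE,
		.flags	   = PP_FLAG_DMA_MAP,	/* let the pool handle DMA mapping */
	};
	struct page_pool *pool = page_pool_create(&pprm);

	if (IS_ERR(pool))
		return PTR_ERR(pool);

	rxq->page_pool = pool;			/* hypothetical field */
	return 0;
}

/* Only with such a pool in place could mana_get_rxfrag() switch from
 * dev_alloc_page() to the page pool allocator:
 */
static void *mana_get_rxfrag_pp_sketch(struct mana_rxq *rxq)
{
	struct page *page = page_pool_dev_alloc_pages(rxq->page_pool);

	return page ? page_to_virt(page) : NULL;
}

Until such a pool is created (and torn down with the queue), dev_alloc_page() plus an
explicit DMA mapping remains the straightforward choice here.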

Thanks,
- Haiyang




