Re: Memory providers multiplexing (Was: [PATCH net-next v4 4/5] page_pool: remove PP_FLAG_PAGE_FRAG flag)

On Tue, 20 Jun 2023 17:12:41 +0200 Jesper Dangaard Brouer wrote:
> > The workaround solution I had in mind would be to create a narrower API
> > for just data pages. Since we'd need to sprinkle ifs anyway, pull them
> > up close to the call site. Allowing to switch page pool for a
> > completely different implementation, like the one Jonathan coded up for
> > iouring. Basically
> > 
> > $name_alloc_page(queue)
> > {
> > 	if (queue->pp)
> > 		return page_pool_dev_alloc_pages(queue->pp);
> > 	else if (queue->iouring..)
> > 		...
> > }  
> 
> Yes, this is more the direction I'm thinking.
> In many cases, you don't need this if-statement helper in the driver, as
> driver RX side code will know the API used upfront.

Meaning that the driver "knows" whether it's in the XDP, AF_XDP, io_uring
or "normal" Rx path?  I hope we can avoid extra code in the driver
completely, at least for data pages.
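
FWIW, a rough sketch of the kind of per-queue helper I was describing
above; the rx_queue layout, the mp_type enum and the io_uring branch are
made-up placeholders, only the page_pool call is an existing API:

#include <net/page_pool.h>

/* Hypothetical sketch only: the rx_queue layout, the mp_type enum and
 * the io_uring branch are placeholders, not an existing API.
 */
enum rx_mp_type {
	RX_MP_PAGE_POOL,	/* "normal" data pages from page_pool */
	RX_MP_IOURING,		/* user-registered pages, as in Jonathan's series */
};

struct rx_queue {
	enum rx_mp_type		mp_type;
	struct page_pool	*pp;
	/* provider-specific state (e.g. the io_uring region) would go here */
};

static struct page *drv_alloc_rx_page(struct rx_queue *q)
{
	switch (q->mp_type) {
	case RX_MP_PAGE_POOL:
		return page_pool_dev_alloc_pages(q->pp);
	case RX_MP_IOURING:
		/* pop a pre-registered user page off the provider's ring */
		return NULL;	/* placeholder */
	}
	return NULL;
}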

> The TX completion side will need this kind of multiplexing return
> helper, to return the pages to the correct memory allocator type (e.g.
> page_pool being one).  See concept in [1] __xdp_return().
> 
> Performance wise, function pointers are slow due to RETPOLINE, but
> switch-case statements (below certain size) becomes a jump table, which
> is fast.  See[1].
> 
> [1] https://elixir.bootlin.com/linux/v6.4-rc7/source/net/core/xdp.c#L377

Sounds good!
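
Just to make sure we're picturing the same shape of code, a rough sketch
of such a switch-based return helper on the completion side, mirroring
the __xdp_return() structure; the helper name is made up, and only the
page_pool and order-0 cases correspond to existing mem types:

#include <linux/mm.h>
#include <net/xdp.h>
#include <net/page_pool.h>

/* Sketch only: drv_return_data_page() is a hypothetical helper in the
 * spirit of __xdp_return(); a user-space / io_uring provider would need
 * its own mem type (or equivalent) to hook into the switch.
 */
static void drv_return_data_page(struct page *page, enum xdp_mem_type type,
				 bool napi_direct)
{
	switch (type) {
	case MEM_TYPE_PAGE_POOL:
		/* page->pp remembers the owning pool */
		page_pool_put_full_page(page->pp, page, napi_direct);
		break;
	case MEM_TYPE_PAGE_ORDER0:
		put_page(page);
		break;
	default:
		/* e.g. a future io_uring provider would slot in here */
		WARN_ON_ONCE(1);
		break;
	}
}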

> Regarding room in "struct page", notice that page->pp_magic will have
> plenty room for e.g. storing xdp_mem_type or even xdp_mem_info (which
> also contains an ID).

I was worried about fitting the DMA address, if the pages come from
user space.
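
To put your pp_magic point next to that concern, something along these
lines; the PP_MEM_TYPE_* bits and helpers are purely illustrative (only
PP_SIGNATURE exists today), and they don't answer where the DMA address
would live for user-space pages:

#include <linux/mm.h>	/* PP_SIGNATURE, struct page */

/* Illustration only: pack a memory-provider type into page->pp_magic
 * next to PP_SIGNATURE.  The PP_MEM_TYPE_* layout is hypothetical and
 * the existing pp_magic signature check would have to learn to mask
 * these bits out.  The DMA address is a separate word in the same
 * struct-page union, which is what gets tight for user-space pages.
 */
#define PP_MEM_TYPE_SHIFT	8
#define PP_MEM_TYPE_MASK	(0xffUL << PP_MEM_TYPE_SHIFT)

static inline void pp_set_mem_type(struct page *page, unsigned long type)
{
	page->pp_magic = PP_SIGNATURE | (type << PP_MEM_TYPE_SHIFT);
}

static inline unsigned long pp_get_mem_type(const struct page *page)
{
	return (page->pp_magic & PP_MEM_TYPE_MASK) >> PP_MEM_TYPE_SHIFT;
}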


