On Wed, 4 Dec 2024 09:21:45 -0800 David Wei wrote:
> From: Pavel Begunkov <asml.silence@xxxxxxxxx>
>
> Add a helper that takes an array of pages and initialises passed in
> memory provider's area with them, where each net_iov takes one page.
> It's also responsible for setting up dma mappings.
>
> We keep it in page_pool.c not to leak netmem details to outside
> providers like io_uring, which don't have access to netmem_priv.h
> and other private helpers.

User space will likely give us hugepages. Feels a bit wasteful to map
and manage them 4k at a time. But okay, we can optimize this later.

> diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
> new file mode 100644
> index 000000000000..83d7eec0058d
> --- /dev/null
> +++ b/include/net/page_pool/memory_provider.h
> @@ -0,0 +1,10 @@

nit: missing SPDX

> +#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +
> +int page_pool_mp_init_paged_area(struct page_pool *pool,
> +				 struct net_iov_area *area,
> +				 struct page **pages);
> +void page_pool_mp_release_area(struct page_pool *pool,
> +			       struct net_iov_area *area);
> +
> +#endif

> +static void page_pool_release_page_dma(struct page_pool *pool,
> +				       netmem_ref netmem)
> +{
> +	__page_pool_release_page_dma(pool, netmem);

I'm guessing this is to save text? Because __page_pool_release_page_dma()
is always_inline? Maybe add a comment?

> +}
> +
> +int page_pool_mp_init_paged_area(struct page_pool *pool,
> +				 struct net_iov_area *area,
> +				 struct page **pages)
> +{
> +	struct net_iov *niov;
> +	netmem_ref netmem;
> +	int i, ret = 0;
> +
> +	if (!pool->dma_map)
> +		return -EOPNOTSUPP;
> +
> +	for (i = 0; i < area->num_niovs; i++) {
> +		niov = &area->niovs[i];
> +		netmem = net_iov_to_netmem(niov);
> +
> +		page_pool_set_pp_info(pool, netmem);

Maybe move setting pp down, after we successfully mapped. Technically
it's not a bug to leave it set on netmem, but it would be on a page
struct.

> +		if (!page_pool_dma_map_page(pool, netmem, pages[i])) {
> +			ret = -EINVAL;
> +			goto err_unmap_dma;
> +		}
> +	}
> +	return 0;
> +
> +err_unmap_dma:
> +	while (i--) {
> +		netmem = net_iov_to_netmem(&area->niovs[i]);
> +		page_pool_release_page_dma(pool, netmem);
> +	}
> +	return ret;
> +}
> +
> +void page_pool_mp_release_area(struct page_pool *pool,
> +			       struct net_iov_area *area)
> +{
> +	int i;
> +
> +	if (!pool->dma_map)
> +		return;
> +
> +	for (i = 0; i < area->num_niovs; i++) {
> +		struct net_iov *niov = &area->niovs[i];
> +
> +		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
> +	}
> +}
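
To spell out the SPDX nit above: just prepend the identifier line to the
new header, e.g. (GPL-2.0 is only my guess, use whatever matches the rest
of the series):

	/* SPDX-License-Identifier: GPL-2.0 */
	#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
	#define _NET_PAGE_POOL_MEMORY_PROVIDER_H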
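
On the page_pool_release_page_dma() wrapper, assuming my save-text guess
is right, a comment along these lines (wording is mine, adjust to the
real reason) would answer the question for the next reader:

	/* Out-of-line wrapper: __page_pool_release_page_dma() is
	 * always_inline, so keep a single copy of it here for the
	 * provider unwind/release loops instead of inlining it at
	 * every call site.
	 */
	static void page_pool_release_page_dma(struct page_pool *pool,
					       netmem_ref netmem)
	{
		__page_pool_release_page_dma(pool, netmem);
	}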
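
And for the pp_info ordering point, what I had in mind is roughly this
(untested, same identifiers as the hunk quoted above):

	for (i = 0; i < area->num_niovs; i++) {
		niov = &area->niovs[i];
		netmem = net_iov_to_netmem(niov);

		/* map first, mark the netmem as pp-owned only once
		 * nothing in this iteration can fail any more
		 */
		if (!page_pool_dma_map_page(pool, netmem, pages[i])) {
			ret = -EINVAL;
			goto err_unmap_dma;
		}
		page_pool_set_pp_info(pool, netmem);
	}

That way a failed iteration never leaves pp set on an unmapped entry,
which as noted above would actually matter if this were a page struct
rather than a netmem.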