On Mon, 9 Dec 2024 17:23:07 +0000 Mina Almasry wrote:
> -static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> -					       const struct page *page,
> -					       u32 offset, u32 dma_sync_size)
> +static inline void
> +page_pool_dma_sync_netmem_for_cpu(const struct page_pool *pool,
> +				  const netmem_ref netmem, u32 offset,
> +				  u32 dma_sync_size)
>  {
> +	if (pool->mp_priv)

Let's add a dedicated bit to skip sync. The io-uring support feels
quite close. Let's not force those guys to have to rejig this.

> +		return;
> +
>  	dma_sync_single_range_for_cpu(pool->p.dev,
> -				      page_pool_get_dma_addr(page),
> +				      page_pool_get_dma_addr_netmem(netmem),
>  				      offset + pool->p.offset, dma_sync_size,
>  				      page_pool_get_dma_dir(pool));
>  }
>
> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> +					      struct page *page, u32 offset,
> +					      u32 dma_sync_size)
> +{
> +	page_pool_dma_sync_netmem_for_cpu(pool, page_to_netmem(page), offset,
> +					  dma_sync_size);

I have the feeling Olek won't thank us for this extra condition
and bit clearing. If a driver calls page_pool_dma_sync_for_cpu()
we don't have to check the new bit / mp_priv. Let's copy & paste
the dma_sync_single_range_for_cpu() call directly here.