On Wed, 3 Mar 2021 09:18:25 +0000 Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> wrote:

> On Tue, Mar 02, 2021 at 08:49:06PM +0200, Ilias Apalodimas wrote:
> > On Mon, Mar 01, 2021 at 04:11:59PM +0000, Mel Gorman wrote:
> > > From: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
> > >
> > > In preparation for next patch, move the dma mapping into its own
> > > function, as this will make it easier to follow the changes.
> > >
> > > V2: make page_pool_dma_map return boolean (Ilias)
> > >
> >
> > [...]
> >
> > > @@ -211,30 +234,14 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> > >  	if (!page)
> > >  		return NULL;
> > >
> > > -	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> > > -		goto skip_dma_map;
> > > -
> > > -	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> > > -	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
> > > -	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
> > > -	 * This mapping is kept for lifetime of page, until leaving pool.
> > > -	 */
> > > -	dma = dma_map_page_attrs(pool->p.dev, page, 0,
> > > -				 (PAGE_SIZE << pool->p.order),
> > > -				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> > > -	if (dma_mapping_error(pool->p.dev, dma)) {
> > > +	if (pp_flags & PP_FLAG_DMA_MAP &&
> >
> > Nit pick but can we have if ((pp_flags & PP_FLAG_DMA_MAP) && ...
> >
>
> Done.

Thanks for fixing this nitpick, and carrying the patch.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer