The patch titled
     Subject: dmapool: push new blocks in ascending order
has been added to the -mm mm-unstable branch.  Its filename is
     dmapool-link-blocks-across-pages-fix.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/dmapool-link-blocks-across-pages-fix.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Keith Busch <kbusch@xxxxxxxxxx>
Subject: dmapool: push new blocks in ascending order
Date: Tue, 21 Feb 2023 08:54:00 -0800

Some users of the dmapool need their allocations to happen in ascending
order.  The recent optimizations pushed the blocks in reverse order, so
restore the previous behavior by linking the next available block from
low-to-high.

The reported breakage was in the usb/chipidea/udc.c qh_pool called
"ci_hw_qh".  My initial thought was that dmapool isn't the right API if
you need a specific order when allocating from it, but I can't readily
test any changes to that driver.  Restoring the previous behavior is
easy enough.

Link: https://lkml.kernel.org/r/20230221165400.1595247-1-kbusch@xxxxxxxx
Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
Signed-off-by: Keith Busch <kbusch@xxxxxxxxxx>
Reported-by: Bryan O'Donoghue <bryan.odonoghue@xxxxxxxxxx>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/mm/dmapool.c~dmapool-link-blocks-across-pages-fix
+++ b/mm/dmapool.c
@@ -301,7 +301,7 @@ EXPORT_SYMBOL(dma_pool_create);
 static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
 	unsigned int next_boundary = pool->boundary, offset = 0;
-	struct dma_block *block;
+	struct dma_block *block, *first = NULL, *last = NULL;
 
 	pool_init_page(pool, page);
 	while (offset + pool->size <= pool->allocation) {
@@ -312,11 +312,22 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 		}
 
 		block = page->vaddr + offset;
-		pool_block_push(pool, block, page->dma + offset);
+		block->dma = page->dma + offset;
+		block->next_block = NULL;
+
+		if (last)
+			last->next_block = block;
+		else
+			first = block;
+		last = block;
+
 		offset += pool->size;
 		pool->nr_blocks++;
 	}
 
+	last->next_block = pool->next_block;
+	pool->next_block = first;
+
 	list_add(&page->page_list, &pool->page_list);
 	pool->nr_pages++;
 }
_

Patches currently in -mm which might be from kbusch@xxxxxxxxxx are

dmapool-add-alloc-free-performance-test.patch
dmapool-move-debug-code-to-own-functions.patch
dmapool-rearrange-page-alloc-failure-handling.patch
dmapool-consolidate-page-initialization.patch
dmapool-simplify-freeing.patch
dmapool-dont-memset-on-free-twice.patch
dmapool-link-blocks-across-pages.patch
dmapool-link-blocks-across-pages-fix.patch
dmapool-create-destroy-cleanup.patch
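
For readers who want to see the ordering change in isolation, below is a
minimal user-space sketch of the same idea.  The struct block / struct pool
types, init_page(), and the 64-byte block size are simplified stand-ins
invented for illustration (they are not the kernel's struct dma_block /
struct dma_pool, and boundary handling is omitted); the point is only that
appending through a first/last pair and then splicing the chain onto the
free list hands blocks back low-to-high, which is what the fix restores.

/*
 * Illustrative user-space sketch, not kernel code.  Mirrors the shape of
 * pool_initialise_page() after the fix: build the per-page chain in
 * ascending address order, then splice it in front of the free list.
 */
#include <stdio.h>
#include <stdlib.h>

struct block {
	struct block *next;
};

struct pool {
	struct block *next_block;	/* head of the free list */
	size_t size;			/* per-block size */
};

/* Carve [vaddr, vaddr + allocation) into blocks, keeping ascending order. */
static void init_page(struct pool *pool, void *vaddr, size_t allocation)
{
	struct block *block, *first = NULL, *last = NULL;
	size_t offset = 0;

	while (offset + pool->size <= allocation) {
		block = (struct block *)((char *)vaddr + offset);
		block->next = NULL;

		if (last)
			last->next = block;	/* append: preserves low-to-high */
		else
			first = block;
		last = block;

		offset += pool->size;
	}

	/*
	 * Splice the whole chain in front of whatever was already free.
	 * Like the patch, this assumes at least one block fit in the page.
	 */
	last->next = pool->next_block;
	pool->next_block = first;
}

int main(void)
{
	struct pool pool = { .next_block = NULL, .size = 64 };
	void *page = malloc(4096);

	init_page(&pool, page, 4096);

	/* The first few "allocations" come back in ascending address order. */
	for (int i = 0; i < 4; i++) {
		struct block *b = pool.next_block;

		pool.next_block = b->next;
		printf("alloc %d -> offset %ld\n", i,
		       (long)((char *)b - (char *)page));
	}

	free(page);
	return 0;
}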