The patch titled
     Subject: mm/mempool: replace kmap_atomic() with kmap_local_page()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-mempool-replace-kmap_atomic-with-kmap_local_page.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-mempool-replace-kmap_atomic-with-kmap_local_page.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Fabio M. De Francesco" <fabio.maria.de.francesco@xxxxxxxxxxxxxxx>
Subject: mm/mempool: replace kmap_atomic() with kmap_local_page()
Date: Mon, 20 Nov 2023 15:26:31 +0100

kmap_atomic() has been deprecated in favor of kmap_local_page().
Therefore, replace kmap_atomic() with kmap_local_page().

kmap_atomic() is implemented like a kmap_local_page() which also disables
page-faults and preemption (the latter only in !PREEMPT_RT kernels).  The
kernel virtual addresses returned by these two APIs are only valid in the
context of the callers (i.e., they cannot be handed to other threads).

With kmap_local_page() the mappings are per thread and CPU local as in
kmap_atomic(); however, they can handle page-faults and can be called
from any context (including interrupts).  The tasks that call
kmap_local_page() can be preempted and, when they are scheduled to run
again, the kernel virtual addresses are restored and are still valid.

The code blocks between the mappings and un-mappings don't rely on the
above-mentioned side effects of kmap_atomic(), so a mere replacement of
the old API with the new one is all they require (i.e., there is no need
to explicitly call pagefault_disable() and/or preempt_disable()).

Link: https://lkml.kernel.org/r/20231120142640.7077-1-fabio.maria.de.francesco@xxxxxxxxxxxxxxx
Signed-off-by: Fabio M. De Francesco <fabio.maria.de.francesco@xxxxxxxxxxxxxxx>
Cc: Ira Weiny <ira.weiny@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mempool.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/mm/mempool.c~mm-mempool-replace-kmap_atomic-with-kmap_local_page
+++ a/mm/mempool.c
@@ -64,10 +64,10 @@ static void check_element(mempool_t *poo
 	} else if (pool->free == mempool_free_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_atomic((struct page *)element);
+		void *addr = kmap_local_page((struct page *)element);
 
 		__check_element(pool, addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_atomic(addr);
+		kunmap_local(addr);
 	}
 }
 
@@ -89,10 +89,10 @@ static void poison_element(mempool_t *po
 	} else if (pool->alloc == mempool_alloc_pages) {
 		/* Mempools backed by page allocator */
 		int order = (int)(long)pool->pool_data;
-		void *addr = kmap_atomic((struct page *)element);
+		void *addr = kmap_local_page((struct page *)element);
 
 		__poison_element(addr, 1UL << (PAGE_SHIFT + order));
-		kunmap_atomic(addr);
+		kunmap_local(addr);
 	}
 }
 #else /* CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON */
_

Patches currently in -mm which might be from fabio.maria.de.francesco@xxxxxxxxxxxxxxx are

mm-memory-use-kmap_local_page-in-__wp_page_copy_user.patch
mm-mempool-replace-kmap_atomic-with-kmap_local_page.patch
mm-page_poison-replace-kmap_atomic-with-kmap_local_page.patch
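
For reference, a minimal sketch of the kmap_local_page() usage pattern
described in the changelog above (example_copy_page() is a hypothetical
helper shown for illustration only; it is not part of this patch):

	#include <linux/highmem.h>
	#include <linux/mm.h>
	#include <linux/string.h>

	/*
	 * Illustrative only: copy the contents of a (possibly highmem)
	 * page through a short-lived local mapping.  Unlike a
	 * kmap_atomic() section, the code between kmap_local_page() and
	 * kunmap_local() may fault and may be preempted, so no explicit
	 * pagefault_disable()/preempt_disable() calls are needed.
	 */
	static void example_copy_page(struct page *page, void *dst)
	{
		void *addr = kmap_local_page(page);

		memcpy(dst, addr, PAGE_SIZE);
		kunmap_local(addr);
	}

In real code a helper such as memcpy_from_page() in <linux/highmem.h>
already wraps this map-copy-unmap pattern.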