The quilt patch titled
     Subject: mm/memmap: prevent double scanning of memmap by kmemleak
has been removed from the -mm tree.  Its filename was
     mm-memmap-prevent-double-scanning-of-memmap-by-kmemleak.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Guo Weikang <guoweikang.kernel@xxxxxxxxx>
Subject: mm/memmap: prevent double scanning of memmap by kmemleak
Date: Mon, 6 Jan 2025 10:11:25 +0800

kmemleak explicitly scans the mem_map through the valid struct page
objects.  However, memmap_alloc() was also adding this memory to the
gray object list, causing it to be scanned twice.  Remove memmap_alloc()
from the scan list and add a comment to clarify the behavior.

Link: https://lore.kernel.org/lkml/CAOm6qn=FVeTpH54wGDFMHuCOeYtvoTx30ktnv9-w3Nh8RMofEA@xxxxxxxxxxxxxx/
Link: https://lkml.kernel.org/r/20250106021126.1678334-1-guoweikang.kernel@xxxxxxxxx
Signed-off-by: Guo Weikang <guoweikang.kernel@xxxxxxxxx>
Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Mike Rapoport (Microsoft) <rppt@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/memblock.h |    4 ++++
 mm/mm_init.c             |    8 ++++++--
 mm/sparse-vmemmap.c      |    5 +++--
 3 files changed, 13 insertions(+), 4 deletions(-)

--- a/include/linux/memblock.h~mm-memmap-prevent-double-scanning-of-memmap-by-kmemleak
+++ a/include/linux/memblock.h
@@ -378,6 +378,10 @@ static inline int memblock_get_region_no
 /* Flags for memblock allocation APIs */
 #define MEMBLOCK_ALLOC_ANYWHERE	(~(phys_addr_t)0)
 #define MEMBLOCK_ALLOC_ACCESSIBLE	0
+/*
+ * MEMBLOCK_ALLOC_NOLEAKTRACE avoids kmemleak tracing. It implies
+ * MEMBLOCK_ALLOC_ACCESSIBLE
+ */
 #define MEMBLOCK_ALLOC_NOLEAKTRACE	1
 
 /* We are using top down, so it is safe to use 0 here */
--- a/mm/mm_init.c~mm-memmap-prevent-double-scanning-of-memmap-by-kmemleak
+++ a/mm/mm_init.c
@@ -1585,13 +1585,17 @@ void __init *memmap_alloc(phys_addr_t si
 {
 	void *ptr;
 
+	/*
+	 * Kmemleak will explicitly scan mem_map by traversing all valid
+	 * `struct *page`,so memblock does not need to be added to the scan list.
+	 */
 	if (exact_nid)
 		ptr = memblock_alloc_exact_nid_raw(size, align, min_addr,
-						   MEMBLOCK_ALLOC_ACCESSIBLE,
+						   MEMBLOCK_ALLOC_NOLEAKTRACE,
 						   nid);
 	else
 		ptr = memblock_alloc_try_nid_raw(size, align, min_addr,
-						 MEMBLOCK_ALLOC_ACCESSIBLE,
+						 MEMBLOCK_ALLOC_NOLEAKTRACE,
 						 nid);
 
 	if (ptr && size > 0)
--- a/mm/sparse-vmemmap.c~mm-memmap-prevent-double-scanning-of-memmap-by-kmemleak
+++ a/mm/sparse-vmemmap.c
@@ -31,6 +31,8 @@
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
 
+#include "internal.h"
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
@@ -42,8 +44,7 @@ static void * __ref __earlyonly_bootmem_
 				unsigned long align,
 				unsigned long goal)
 {
-	return memblock_alloc_try_nid_raw(size, align, goal,
-					  MEMBLOCK_ALLOC_ACCESSIBLE, node);
+	return memmap_alloc(size, align, goal, node, false);
 }
 
 void * __meminit vmemmap_alloc_block(unsigned long size, int node)
_

Patches currently in -mm which might be from guoweikang.kernel@xxxxxxxxx are

mm-memblock-add-memblock_alloc_or_panic-interface.patch
arch-s390-save_area_alloc-default-failure-behavior-changed-to-panic.patch