The patch titled
     Subject: kmemleak: powerpc: skip scanning holes in the .bss section
has been added to the -mm tree.  Its filename is
     kmemleak-powerpc-skip-scanning-holes-in-the-bss-section.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/kmemleak-powerpc-skip-scanning-holes-in-the-bss-section.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/kmemleak-powerpc-skip-scanning-holes-in-the-bss-section.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Catalin Marinas <catalin.marinas@xxxxxxx>
Subject: kmemleak: powerpc: skip scanning holes in the .bss section

Commit 2d4f567103ff ("KVM: PPC: Introduce kvm_tmp framework") adds
kvm_tmp[] into the .bss section and then frees the rest of the unused
space back to the page allocator:

kernel_init
  kvm_guest_init
    kvm_free_tmp
      free_reserved_area
        free_unref_page
          free_unref_page_prepare

With DEBUG_PAGEALLOC=y, this unmaps those pages from the kernel.  As a
result, the kmemleak scan triggers a panic when it reaches the unmapped
pages while scanning the .bss section.

This patch creates dedicated kmemleak objects for the .data, .bss and
potentially .data..ro_after_init sections to allow partial freeing via
kmemleak_free_part() in the powerpc kvm_free_tmp() function.

Link: http://lkml.kernel.org/r/20190321171917.62049-1-catalin.marinas@xxxxxxx
Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Reported-by: Qian Cai <cai@xxxxxx>
Acked-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx> (powerpc)
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
Cc: Alexander Graf <agraf@xxxxxxx>
Cc: Avi Kivity <avi@xxxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Radim Krcmar <rkrcmar@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/powerpc/kernel/kvm.c |    7 +++++++
 mm/kmemleak.c             |   16 +++++++++++-----
 2 files changed, 18 insertions(+), 5 deletions(-)

--- a/arch/powerpc/kernel/kvm.c~kmemleak-powerpc-skip-scanning-holes-in-the-bss-section
+++ a/arch/powerpc/kernel/kvm.c
@@ -22,6 +22,7 @@
 #include <linux/kvm_host.h>
 #include <linux/init.h>
 #include <linux/export.h>
+#include <linux/kmemleak.h>
 #include <linux/kvm_para.h>
 #include <linux/slab.h>
 #include <linux/of.h>
@@ -712,6 +713,12 @@ static void kvm_use_magic_page(void)
 
 static __init void kvm_free_tmp(void)
 {
+	/*
+	 * Inform kmemleak about the hole in the .bss section since the
+	 * corresponding pages will be unmapped with DEBUG_PAGEALLOC=y.
+	 */
+	kmemleak_free_part(&kvm_tmp[kvm_tmp_index],
+			   ARRAY_SIZE(kvm_tmp) - kvm_tmp_index);
 	free_reserved_area(&kvm_tmp[kvm_tmp_index],
 			   &kvm_tmp[ARRAY_SIZE(kvm_tmp)], -1, NULL);
 }
--- a/mm/kmemleak.c~kmemleak-powerpc-skip-scanning-holes-in-the-bss-section
+++ a/mm/kmemleak.c
@@ -1529,11 +1529,6 @@ static void kmemleak_scan(void)
 	}
 	rcu_read_unlock();
 
-	/* data/bss scanning */
-	scan_large_block(_sdata, _edata);
-	scan_large_block(__bss_start, __bss_stop);
-	scan_large_block(__start_ro_after_init, __end_ro_after_init);
-
 #ifdef CONFIG_SMP
 	/* per-cpu sections scanning */
 	for_each_possible_cpu(i)
@@ -2071,6 +2066,17 @@ void __init kmemleak_init(void)
 	}
 	local_irq_restore(flags);
 
+	/* register the data/bss sections */
+	create_object((unsigned long)_sdata, _edata - _sdata,
+		      KMEMLEAK_GREY, GFP_ATOMIC);
+	create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
+		      KMEMLEAK_GREY, GFP_ATOMIC);
+	/* only register .data..ro_after_init if not within .data */
+	if (__start_ro_after_init < _sdata || __end_ro_after_init > _edata)
+		create_object((unsigned long)__start_ro_after_init,
+			      __end_ro_after_init - __start_ro_after_init,
+			      KMEMLEAK_GREY, GFP_ATOMIC);
+
 	/*
 	 * This is the point where tracking allocations is safe.  Automatic
 	 * scanning is started during the late initcall.  Add the early logged
_

Patches currently in -mm which might be from catalin.marinas@xxxxxxx are

kmemleak-powerpc-skip-scanning-holes-in-the-bss-section.patch
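
For anyone applying the same pattern elsewhere, here is a minimal sketch
(not part of the patch; the my_pool[] buffer, my_pool_used counter and
my_pool_free_unused() function are hypothetical names) of how a
page-aligned .bss array can hand its unused tail back to the page
allocator once the .bss section is tracked as a single kmemleak object.
The kmemleak_free_part() call must come before free_reserved_area():
with DEBUG_PAGEALLOC=y the freed pages may be unmapped, and a later
kmemleak scan of the .bss object would otherwise fault on them.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/kmemleak.h>
#include <linux/linkage.h>
#include <linux/mm.h>

/* Hypothetical page-aligned scratch buffer placed in .bss. */
static char my_pool[16 * PAGE_SIZE] __page_aligned_bss;
/* Bytes of my_pool actually consumed during early boot. */
static unsigned long my_pool_used;

static __init void my_pool_free_unused(void)
{
	/*
	 * Carve the unused tail out of the kmemleak object covering
	 * .bss so that later scans skip it...
	 */
	kmemleak_free_part(&my_pool[my_pool_used],
			   ARRAY_SIZE(my_pool) - my_pool_used);

	/* ...then return the whole pages in that tail to the allocator. */
	free_reserved_area(&my_pool[my_pool_used],
			   &my_pool[ARRAY_SIZE(my_pool)], -1, NULL);
}

The patch itself makes the same two calls, in the same order, for
kvm_tmp[] in the powerpc kvm_free_tmp() function.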