The patch titled
     Subject: arm/kasan: fix the array size of kasan_early_shadow_pte[]
has been added to the -mm tree.  Its filename is
     arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Hailong Liu <liu.hailong6@xxxxxxxxxx>
Subject: arm/kasan: fix the array size of kasan_early_shadow_pte[]

The size of kasan_early_shadow_pte[] is currently PTRS_PER_PTE, which is
defined as 512 for the arm architecture.  This covers only the Linux pte
entries, not the hardware (HWTABLE) pte entries that follow them on arm.

The reason it currently works is that the symbol kasan_early_shadow_page,
which immediately follows kasan_early_shadow_pte in memory, is page
aligned, which makes kasan_early_shadow_pte effectively behave like a 4KB
array.  But we cannot guarantee that this ordering holds with a different
compiler or linker, nor when more bss symbols are introduced.

We ran a test with QEMU + vexpress: put a 512KB-size symbol with attribute
__section(".bss..page_aligned") after kasan_early_shadow_pte, and poison
it after kasan_early_init().  With CONFIG_KASAN enabled, the kernel then
failed to boot.
Link: https://lkml.kernel.org/r/20210109044622.8312-1-hailongliiu@xxxxxxxx
Signed-off-by: Hailong Liu <liu.hailong6@xxxxxxxxxx>
Signed-off-by: Ziliang Guo <guo.ziliang@xxxxxxxxxx>
Reviewed-by: Linus Walleij <linus.walleij@xxxxxxxxxx>
Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Cc: Russell King <linux@xxxxxxxxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Ard Biesheuvel <ardb@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/kasan.h |    6 +++++-
 mm/kasan/init.c       |    3 ++-
 2 files changed, 7 insertions(+), 2 deletions(-)

--- a/include/linux/kasan.h~arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte
+++ a/include/linux/kasan.h
@@ -35,8 +35,12 @@ struct kunit_kasan_expectation {
 #define KASAN_SHADOW_INIT 0
 #endif
 
+#ifndef PTE_HWTABLE_PTRS
+#define PTE_HWTABLE_PTRS 0
+#endif
+
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
-extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
+extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS];
 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
 extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
--- a/mm/kasan/init.c~arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte
+++ a/mm/kasan/init.c
@@ -64,7 +64,8 @@ static inline bool kasan_pmd_table(pud_t
 	return false;
 }
 #endif
-pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;
+pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS]
+	__page_aligned_bss;
 
 static inline bool kasan_pte_table(pmd_t pmd)
 {
_

Patches currently in -mm which might be from liu.hailong6@xxxxxxxxxx are

mm-page_alloc-add-a-missing-mm_page_alloc_zone_locked-tracepoint.patch
arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte.patch