Patch "powerpc/kasan: Fix CONFIG_KASAN_VMALLOC for 8xx" has been added to the 5.8-stable tree

This is a note to let you know that I've just added the patch titled

    powerpc/kasan: Fix CONFIG_KASAN_VMALLOC for 8xx

to the 5.8-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-kasan-fix-config_kasan_vmalloc-for-8xx.patch
and it can be found in the queue-5.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 2afdcb85c485d57add83927bec1b453a489e1453
Author: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
Date:   Fri Sep 11 05:05:38 2020 +0000

    powerpc/kasan: Fix CONFIG_KASAN_VMALLOC for 8xx
    
    [ Upstream commit 4c42dc5c69a8f24c467a6c997909d2f1d4efdc7f ]
    
    Before the commit identified below, page table allocation was
    performed after the allocation of the final shadow area for linear
    memory. But that commit switched the order, so page tables are
    already allocated by the time the 8xx kasan_init_shadow_8M() is
    called. Because of this, kasan_init_shadow_8M() cannot map the
    needed shadow entries, since page tables already occupy those slots.
    
    kasan_init_shadow_8M() installs huge PMD entries instead of page
    tables. We could at that time free the page tables, but there is no
    point in creating page tables that get freed before being used.
    
    Only book3s/32 hash needs early allocation of page tables. For other
    variants, we can keep the initial order and create remaining page
    tables after the allocation of final shadow memory for linear mem.
    
    Move back the allocation of shadow page tables for
    CONFIG_KASAN_VMALLOC into kasan_init() after the loop which creates
    final shadow memory for linear mem.
    
    Fixes: 41ea93cf7ba4 ("powerpc/kasan: Fix shadow pages allocation failure")
    Signed-off-by: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
    Signed-off-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
    Link: https://lore.kernel.org/r/8ae4554357da4882612644a74387ae05525b2aaa.1599800716.git.christophe.leroy@xxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index 019b0c0bbbf31..ca91d04d0a7ae 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -121,8 +121,7 @@ void __init kasan_mmu_init(void)
 {
 	int ret;
 
-	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
-	    IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
 		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
 		if (ret)
@@ -133,11 +132,11 @@ void __init kasan_mmu_init(void)
 void __init kasan_init(void)
 {
 	struct memblock_region *reg;
+	int ret;
 
 	for_each_memblock(memory, reg) {
 		phys_addr_t base = reg->base;
 		phys_addr_t top = min(base + reg->size, total_lowmem);
-		int ret;
 
 		if (base >= top)
 			continue;
@@ -147,6 +146,13 @@ void __init kasan_init(void)
 			panic("kasan: kasan_init_region() failed");
 	}
 
+	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+		if (ret)
+			panic("kasan: kasan_init_shadow_page_tables() failed");
+	}
+
 	kasan_remap_early_shadow_ro();
 
 	clear_page(kasan_early_shadow_page);


