The patch titled
     Subject: arm64/kasan: add and use kasan_map_populate()
has been removed from the -mm tree.  Its filename was
     arm64-kasan-add-and-use-kasan_map_populate.patch

This patch was dropped because an alternative patch was merged

------------------------------------------------------
From: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Subject: arm64/kasan: add and use kasan_map_populate()

During early boot, kasan uses vmemmap_populate() to establish its shadow
memory.  But that interface is intended for struct page use.

As part of this series, vmemmap will no longer be zeroed during
allocation, while kasan expects its shadow memory to be zeroed.
Therefore, add a new interface, kasan_map_populate(), which allocates and
maps kasan shadow memory and also zeroes it for us.

Link: http://lkml.kernel.org/r/20171013173214.27300-9-pasha.tatashin@xxxxxxxxxx
Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Will Deacon <will.deacon@xxxxxxx>
Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
Cc: Bob Picco <bob.picco@xxxxxxxxxx>
Cc: Christian Borntraeger <borntraeger@xxxxxxxxxx>
Cc: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Sam Ravnborg <sam@xxxxxxxxxxxx>
Cc: Steven Sistare <steven.sistare@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm64/mm/kasan_init.c |   72 ++++++++++++++++++++++++++++++++---
 1 file changed, 66 insertions(+), 6 deletions(-)
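For context on why the zeroing matters: KASAN encodes validity in shadow
bytes, where a shadow byte of 0 means the whole 8-byte granule it covers
is addressable, so shadow pages handed out unzeroed would read as poison
and flag valid accesses.  Below is a minimal userspace sketch of that
shadow-byte convention (illustrative only, with made-up names; it models
the idea rather than the kernel's actual mm/kasan implementation):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define SHADOW_SCALE_SHIFT 3	/* one shadow byte covers 8 bytes */

	/* Toy shadow region covering a 1 KB "mapped" range. */
	static int8_t shadow[128];

	/*
	 * Simplified model of KASAN's check: a shadow byte of 0 means the
	 * whole 8-byte granule is addressable; non-zero reads as poison.
	 */
	static bool granule_is_poisoned(unsigned long addr)
	{
		return shadow[addr >> SHADOW_SCALE_SHIFT] != 0;
	}

	int main(void)
	{
		/* Shadow mapped but never zeroed: stale bytes look poisoned. */
		memset(shadow, 0x5a, sizeof(shadow));
		printf("unzeroed shadow: addr 0x40 poisoned? %d\n",
		       granule_is_poisoned(0x40));	/* 1: false positive */

		/* What kasan_map_populate() guarantees: zeroed shadow. */
		memset(shadow, 0, sizeof(shadow));
		printf("zeroed shadow:   addr 0x40 poisoned? %d\n",
		       granule_is_poisoned(0x40));	/* 0: access allowed */
		return 0;
	}

This is why a bare vmemmap_populate() no longer suffices once the series
stops zeroing vmemmap allocations, and why kasan_map_populate() below
finishes with a memset() over every mapping it actually created.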
diff -puN arch/arm64/mm/kasan_init.c~arm64-kasan-add-and-use-kasan_map_populate arch/arm64/mm/kasan_init.c
--- a/arch/arm64/mm/kasan_init.c~arm64-kasan-add-and-use-kasan_map_populate
+++ a/arch/arm64/mm/kasan_init.c
@@ -28,6 +28,66 @@
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
 
+/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
+static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
+					int node)
+{
+	unsigned long addr, pfn, next;
+	unsigned long long size;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret;
+
+	ret = vmemmap_populate(start, end, node);
+	/*
+	 * We might have partially populated memory, so check for no entries,
+	 * and zero only those that actually exist.
+	 */
+	for (addr = start; addr < end; addr = next) {
+		pgd = pgd_offset_k(addr);
+		if (pgd_none(*pgd)) {
+			next = pgd_addr_end(addr, end);
+			continue;
+		}
+
+		pud = pud_offset(pgd, addr);
+		if (pud_none(*pud)) {
+			next = pud_addr_end(addr, end);
+			continue;
+		}
+		if (pud_sect(*pud)) {
+			/* This is PUD size page */
+			next = pud_addr_end(addr, end);
+			size = PUD_SIZE;
+			pfn = pud_pfn(*pud);
+		} else {
+			pmd = pmd_offset(pud, addr);
+			if (pmd_none(*pmd)) {
+				next = pmd_addr_end(addr, end);
+				continue;
+			}
+			if (pmd_sect(*pmd)) {
+				/* This is PMD size page */
+				next = pmd_addr_end(addr, end);
+				size = PMD_SIZE;
+				pfn = pmd_pfn(*pmd);
+			} else {
+				pte = pte_offset_kernel(pmd, addr);
+				next = addr + PAGE_SIZE;
+				if (pte_none(*pte))
+					continue;
+				/* This is base size page */
+				size = PAGE_SIZE;
+				pfn = pte_pfn(*pte);
+			}
+		}
+		memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
+	}
+	return ret;
+}
+
 /*
  * The p*d_populate functions call virt_to_phys implicitly so they can't be used
  * directly on kernel symbols (bm_p*d). All the early functions are called too
@@ -161,11 +221,11 @@ void __init kasan_init(void)
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
-	vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
-			 pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
+			   pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
 	/*
-	 * vmemmap_populate() has populated the shadow region that covers the
+	 * kasan_map_populate() has populated the shadow region that covers the
 	 * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
 	 * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
 	 * kasan_populate_zero_shadow() from replacing the page table entries
@@ -191,9 +251,9 @@ void __init kasan_init(void)
 		if (start >= end)
 			break;
 
-		vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
-				 (unsigned long)kasan_mem_to_shadow(end),
-				 pfn_to_nid(virt_to_pfn(start)));
+		kasan_map_populate((unsigned long)kasan_mem_to_shadow(start),
+				   (unsigned long)kasan_mem_to_shadow(end),
+				   pfn_to_nid(virt_to_pfn(start)));
 	}
 
 	/*
_

Patches currently in -mm which might be from pasha.tatashin@xxxxxxxxxx are

mm-deferred_init_memmap-improvements.patch
mm-deferred_init_memmap-improvements-fix-3.patch
x86-mm-setting-fields-in-deferred-pages.patch
sparc64-mm-setting-fields-in-deferred-pages.patch
sparc64-simplify-vmemmap_populate.patch
mm-defining-memblock_virt_alloc_try_nid_raw.patch
mm-zero-reserved-and-unavailable-struct-pages.patch
x86-mm-kasan-dont-use-vmemmap_populate-to-initialize-shadow.patch
arm64-mm-kasan-dont-use-vmemmap_populate-to-initialize-shadow.patch
mm-stop-zeroing-memory-during-allocation-in-vmemmap.patch
sparc64-optimized-struct-page-zeroing.patch
mm-broken-deferred-calculation.patch
sparc64-ng4-memset-32-bits-overflow.patch
--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html