The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     58a18fe95e83b8396605154db04d73b08063f31b
Gitweb:        https://git.kernel.org/tip/58a18fe95e83b8396605154db04d73b08063f31b
Author:        Joerg Roedel <jroedel@xxxxxxx>
AuthorDate:    Fri, 14 Aug 2020 17:19:46 +02:00
Committer:     Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Sat, 15 Aug 2020 13:56:16 +02:00

x86/mm/64: Do not sync vmalloc/ioremap mappings

Remove the code to sync the vmalloc and ioremap ranges for x86-64. The
page-table pages are all pre-allocated, so synchronization is no longer
necessary.

This is a patch that already went into the kernel as:

	commit 8bb9bf242d1f ("x86/mm/64: Do not sync vmalloc/ioremap mappings")

but it had to be reverted later because it unveiled a bug in:

	commit 6eb82f994026 ("x86/mm: Pre-allocate P4D/PUD pages for vmalloc area")

The bug in that commit caused the P4D/PUD pages not to be correctly
allocated, making the synchronization still necessary. That issue has
since been fixed upstream by:

	commit 995909a4e22b ("x86/mm/64: Do not dereference non-present PGD entries")

With that fix in place it is safe again to remove the page-table
synchronization for the vmalloc/ioremap ranges on x86-64.

Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20200814151947.26229-2-joro@xxxxxxxxxx
---
 arch/x86/include/asm/pgtable_64_types.h | 2 --
 arch/x86/mm/init_64.c                   | 5 -----
 2 files changed, 7 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 8f63efb..52e5f5f 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -159,6 +159,4 @@ extern unsigned int ptrs_per_p4d;
 
 #define PGD_KERNEL_START	((PAGE_SIZE / 2) / sizeof(pgd_t))
 
-#define ARCH_PAGE_TABLE_SYNC_MASK	(pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
-
 #endif /* _ASM_X86_PGTABLE_64_DEFS_H */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a4ac13c..777d835 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -217,11 +217,6 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
 		sync_global_pgds_l4(start, end);
 }
 
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
-{
-	sync_global_pgds(start, end);
-}
-
 /*
  * NOTE: This function is marked __ref because it calls __init function
  * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
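
For context on why the synchronization can be dropped: once the P4D/PUD
pages covering the vmalloc area are pre-allocated at boot (as done by
commit 6eb82f994026), the top-level entries that each task's page table
copies never change at runtime; later vmalloc/ioremap mappings only touch
lower-level tables that are already shared, so arch_sync_kernel_mappings()
has nothing left to propagate. Below is a minimal user-space sketch of
that idea, not kernel code; the two-level table and all names here are
purely hypothetical stand-ins for the real paging hierarchy.

/*
 * Sketch: each "task" has its own top-level table, but every top-level
 * slot covering the shared region points to the SAME pre-allocated
 * second-level table. New mappings only write the shared lower level,
 * so no per-task synchronization of the top level is ever needed.
 */
#include <stdio.h>
#include <stdlib.h>

#define TOP_ENTRIES   8		/* stand-in for kernel PGD/P4D slots */
#define LEAF_ENTRIES  16	/* stand-in for a lower-level table  */

typedef struct { unsigned long entry[LEAF_ENTRIES]; } leaf_table;

/* Shared lower-level tables, allocated once at "boot". */
static leaf_table *shared_leaf[TOP_ENTRIES];

/* Per-task top level: an array of pointers into the shared tables. */
typedef struct { leaf_table *top[TOP_ENTRIES]; } task_tables;

static void preallocate_shared_levels(void)
{
	for (int i = 0; i < TOP_ENTRIES; i++)
		shared_leaf[i] = calloc(1, sizeof(leaf_table));
}

static void task_init(task_tables *t)
{
	/* Copy the pre-allocated top-level entries once, at creation. */
	for (int i = 0; i < TOP_ENTRIES; i++)
		t->top[i] = shared_leaf[i];
}

int main(void)
{
	task_tables a, b;

	preallocate_shared_levels();
	task_init(&a);
	task_init(&b);

	/* "Map" something via task a: only a shared leaf is written. */
	a.top[3]->entry[5] = 0xdeadbeef;

	/* Task b sees the new mapping without any top-level sync. */
	printf("b sees: %#lx\n", b.top[3]->entry[5]);
	return 0;
}

The point of the sketch is only the structural one: because the top-level
slots are filled before the per-task copies are made and never change
afterwards, removing the sync path (as this patch does) cannot leave any
task with a stale view.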