Re: [PATCH 5/5] x86, pti: filter at vma->vm_page_prot population

Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx> wrote:

> 
> From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> 
> 0day reported warnings at boot on 32-bit systems without NX support:
> 
> [   12.349193] attempted to set unsupported pgprot: 8000000000000025 bits: 8000000000000000 supported: 7fffffffffffffff
> [   12.350792] WARNING: CPU: 0 PID: 1 at arch/x86/include/asm/pgtable.h:540 handle_mm_fault+0xfc1/0xfe0:
> 						check_pgprot at arch/x86/include/asm/pgtable.h:535
> 						 (inlined by) pfn_pte at arch/x86/include/asm/pgtable.h:549
> 						 (inlined by) do_anonymous_page at mm/memory.c:3169
> 						 (inlined by) handle_pte_fault at mm/memory.c:3961
> 						 (inlined by) __handle_mm_fault at mm/memory.c:4087
> 						 (inlined by) handle_mm_fault at mm/memory.c:4124
> 
> The problem was that we stopped massaging page permissions at PTE creation
> time, so vma->vm_page_prot was passed unfiltered to PTE creation.
> 
> To fix it, filter the page protections before they are installed in
> vma->vm_page_prot.
> 
> Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> Reported-by: Fengguang Wu <fengguang.wu@xxxxxxxxx>
> Fixes: fb43d6cb91 ("x86/mm: Do not auto-massage page protections")
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxx>
> Cc: Arjan van de Ven <arjan@xxxxxxxxxxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Cc: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
> Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
> Cc: Juergen Gross <jgross@xxxxxxxx>
> Cc: Kees Cook <keescook@xxxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Nadav Amit <namit@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> ---
> 
> b/arch/x86/Kconfig               |    4 ++++
> b/arch/x86/include/asm/pgtable.h |    5 +++++
> b/mm/mmap.c                      |   11 ++++++++++-
> 3 files changed, 19 insertions(+), 1 deletion(-)
> 
> diff -puN arch/x86/include/asm/pgtable.h~pti-glb-protection_map arch/x86/include/asm/pgtable.h
> --- a/arch/x86/include/asm/pgtable.h~pti-glb-protection_map	2018-04-20 14:10:08.251749151 -0700
> +++ b/arch/x86/include/asm/pgtable.h	2018-04-20 14:10:08.260749151 -0700
> @@ -601,6 +601,11 @@ static inline pgprot_t pgprot_modify(pgp
> 
> #define canon_pgprot(p) __pgprot(massage_pgprot(p))
> 
> +static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
> +{
> +	return canon_pgprot(prot);
> +}
> +
> static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
> 					 enum page_cache_mode pcm,
> 					 enum page_cache_mode new_pcm)
> diff -puN arch/x86/Kconfig~pti-glb-protection_map arch/x86/Kconfig
> --- a/arch/x86/Kconfig~pti-glb-protection_map	2018-04-20 14:10:08.253749151 -0700
> +++ b/arch/x86/Kconfig	2018-04-20 14:10:08.260749151 -0700
> @@ -52,6 +52,7 @@ config X86
> 	select ARCH_HAS_DEVMEM_IS_ALLOWED
> 	select ARCH_HAS_ELF_RANDOMIZE
> 	select ARCH_HAS_FAST_MULTIPLIER
> +	select ARCH_HAS_FILTER_PGPROT
> 	select ARCH_HAS_FORTIFY_SOURCE
> 	select ARCH_HAS_GCOV_PROFILE_ALL
> 	select ARCH_HAS_KCOV			if X86_64
> @@ -273,6 +274,9 @@ config ARCH_HAS_CPU_RELAX
> config ARCH_HAS_CACHE_LINE_SIZE
> 	def_bool y
> 
> +config ARCH_HAS_FILTER_PGPROT
> +	def_bool y
> +
> config HAVE_SETUP_PER_CPU_AREA
> 	def_bool y
> 
> diff -puN mm/mmap.c~pti-glb-protection_map mm/mmap.c
> --- a/mm/mmap.c~pti-glb-protection_map	2018-04-20 14:10:08.256749151 -0700
> +++ b/mm/mmap.c	2018-04-20 14:10:08.261749151 -0700
> @@ -100,11 +100,20 @@ pgprot_t protection_map[16] __ro_after_i
> 	__S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
> };
> 
> +#ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
> +static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
> +{
> +	return prot;
> +}
> +#endif
> +
> pgprot_t vm_get_page_prot(unsigned long vm_flags)
> {
> -	return __pgprot(pgprot_val(protection_map[vm_flags &
> +	pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
> 				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
> 			pgprot_val(arch_vm_get_page_prot(vm_flags)));
> +
> +	return arch_filter_pgprot(ret);
> }
> EXPORT_SYMBOL(vm_get_page_prot);

Wouldn’t it be simpler, or at least cleaner, to change the protection map if
NX is not supported? I presume it could be done in paging_init(), similarly to
the way other architectures (e.g., arm, mips) do it.
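
Something along these lines (just a rough sketch; filter_protection_map() is
a made-up name here, but protection_map[] and __supported_pte_mask already
exist on x86), called once from paging_init():

	static void __init filter_protection_map(void)
	{
		int i;

		/* Drop PTE bits (e.g. _PAGE_NX) this CPU cannot set. */
		for (i = 0; i < ARRAY_SIZE(protection_map); i++)
			protection_map[i] =
				__pgprot(pgprot_val(protection_map[i]) &
					 __supported_pte_mask);
	}

That would leave vm_get_page_prot() untouched, since protection_map[] is
__ro_after_init and therefore still writable at that point.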
