Re: [patch V2 1/5] x86/kaiser: Respect disabled CPU features

> --- a/arch/x86/include/asm/pgtable_64.h
> +++ b/arch/x86/include/asm/pgtable_64.h
> @@ -222,7 +222,8 @@ static inline pgd_t kaiser_set_shadow_pg
>  			 * wrong CR3 value, userspace will crash
>  			 * instead of running.
>  			 */
> -			pgd.pgd |= _PAGE_NX;
> +			if (__supported_pte_mask & _PAGE_NX)
> +				pgd.pgd |= _PAGE_NX;
>  		}

Thanks for catching that.  It's definitely a bug.  Although,
practically, it's hard to hit, right?  I think every 64-bit CPU
supports NX unless the hypervisor hides it or someone booted with
"noexec=off".
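
For reference, the only path I know of that drops _PAGE_NX on 64-bit
is x86_configure_nx() clearing it when CPUID does not report NX or
"noexec=off" was passed.  Roughly (quoting arch/x86/mm/setup_nx.c
from memory, so the exact shape may differ by version):

void x86_configure_nx(void)
{
	if (boot_cpu_has(X86_FEATURE_NX) && !disable_nx)
		__supported_pte_mask |= _PAGE_NX;
	else
		__supported_pte_mask &= ~_PAGE_NX;
}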

>  	} else if (pgd_userspace_access(*pgdp)) {
>  		/*
> --- a/arch/x86/mm/kaiser.c
> +++ b/arch/x86/mm/kaiser.c
> @@ -42,6 +42,8 @@
>  
>  #define KAISER_WALK_ATOMIC  0x1
>  
> +static pteval_t kaiser_pte_mask __ro_after_init = ~(_PAGE_NX | _PAGE_GLOBAL);

Do we need a comment on this, like:

/*
 * The NX and GLOBAL bits are not supported on all CPUs.
 * We will add them back to this mask at runtime in
 * kaiser_init_all_pgds() if supported.
 */

>  /*
>   * At runtime, the only things we map are some things for CPU
>   * hotplug, and stacks for new processes.  No two CPUs will ever
> @@ -244,11 +246,14 @@ static pte_t *kaiser_shadow_pagetable_wa
>  int kaiser_add_user_map(const void *__start_addr, unsigned long size,
>  			unsigned long flags)
>  {
> -	pte_t *pte;
>  	unsigned long start_addr = (unsigned long)__start_addr;
>  	unsigned long address = start_addr & PAGE_MASK;
>  	unsigned long end_addr = PAGE_ALIGN(start_addr + size);
>  	unsigned long target_address;
> +	pte_t *pte;
> +
> +	/* Clear not supported bits */
> +	flags &= kaiser_pte_mask;

Should we warn when we clear bits here?  Seems kinda funky to
silently fix them up.
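
Something like this, maybe (just a sketch, untested):

	/* Scream once if a caller asked for bits the CPU can't do */
	WARN_ON_ONCE(flags & ~kaiser_pte_mask);

	/* Clear not supported bits */
	flags &= kaiser_pte_mask;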

>  	for (; address < end_addr; address += PAGE_SIZE) {
>  		target_address = get_pa_from_kernel_map(address);
> @@ -308,6 +313,11 @@ static void __init kaiser_init_all_pgds(
>  	pgd_t *pgd;
>  	int i;
>  
> +	if (__supported_pte_mask & _PAGE_NX)
> +		kaiser_pte_mask |= _PAGE_NX;
> +	if (boot_cpu_has(X86_FEATURE_PGE))
> +		kaiser_pte_mask |= _PAGE_GLOBAL;

Practically, I guess boot_cpu_has(X86_FEATURE_PGE) == (__read_cr4() &
X86_CR4_PGE).  But, in a slow path like this, is it perhaps better to
just check CR4 directly?
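
i.e. (untested):

	if (__read_cr4() & X86_CR4_PGE)
		kaiser_pte_mask |= _PAGE_GLOBAL;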

Looks functionally fine to me, though.  Feel free to add my Reviewed-by
or Acked-by.
