[PATCH v2] x86/mm/kaiser: Disable global pages by default with KAISER

* Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:

> On Sun, 26 Nov 2017, Ingo Molnar wrote:
> >  * Disable global pages for anything using the default
> >  * __PAGE_KERNEL* macros.
> >  *
> >  * PGE will still be enabled and _PAGE_GLOBAL may still be used carefully
> >  * for a few selected kernel mappings which must be visible to userspace,
> >  * when KAISER is enabled, like the entry/exit code and data.
> >  */
> > #ifdef CONFIG_KAISER
> > #define __PAGE_KERNEL_GLOBAL	0
> > #else
> > #define __PAGE_KERNEL_GLOBAL	_PAGE_GLOBAL
> > #endif
> > 
> > ... and I've added your Reviewed-by tag which I assume now applies?
> 
> Ideally we replace the whole patch with the __supported_pte_mask one which
> I posted as a delta patch.

Yeah, so I squashed these two patches:

  09d76fc407e0: x86/mm/kaiser: Disable global pages by default with KAISER
  bac79112ee4a: x86/mm/kaiser: Simplify disabling of global pages

into the single patch below, with an updated changelog that reflects the
cleanups. I kept Dave's authorship and credited you for the simplification.

Note that the squashed commit had some whitespace noise which I skipped, further 
simplifying the patch.

Is it OK this way? If yes then I'll reshuffle the tree with this variant.

Thanks,

	Ingo

====================>
From 12cffe1598c3ebdad76453c72acb8c606f22a747 Mon Sep 17 00:00:00 2001
From: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Date: Wed, 22 Nov 2017 16:34:41 -0800
Subject: [PATCH] x86/mm/kaiser: Disable global pages by default with KAISER

Global pages stay in the TLB across context switches.  Since all contexts
share the same kernel mapping, these mappings are marked as global pages
so kernel entries in the TLB are not flushed out on a context switch.
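
(For context, the default kernel page protections are where _PAGE_GLOBAL
comes from. A minimal sketch, modeled on the __PAGE_KERNEL* definitions in
arch/x86/include/asm/pgtable_types.h of this era -- the exact bit
composition may differ by tree:

	/* Kernel mappings are global by default: */
	#define __PAGE_KERNEL_EXEC						\
		(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_GLOBAL)
	#define __PAGE_KERNEL		(__PAGE_KERNEL_EXEC | _PAGE_NX)

Every mapping created through these macros is therefore global whenever
the CPU supports PGE.)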

But even having these entries in the TLB opens up something that an
attacker can use, such as the double-page-fault attack:

   http://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf

That means that even when KAISER switches page tables on return to user
space, the global pages would stay in the TLB cache.

Disable global pages so that kernel TLB entries can be flushed before
returning to user space. This way, all accesses to kernel addresses from
userspace result in a TLB miss, independent of whether a kernel mapping
exists.

Suppress global pages via the __supported_pte_mask. The shadow mappings
set _PAGE_GLOBAL for the minimal kernel mappings which are required for
entry/exit. These mappings are set up manually, so the filtering does not
take place.
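
(The mask takes effect whenever page protections are folded into a PTE; a
minimal sketch of the filtering, modeled on massage_pgprot() in
arch/x86/include/asm/pgtable.h:

	static inline pgprotval_t massage_pgprot(pgprot_t pgprot)
	{
		pgprotval_t protval = pgprot_val(pgprot);

		/* Mask out unsupported bits, e.g. _PAGE_GLOBAL under KAISER: */
		if (protval & _PAGE_PRESENT)
			protval &= __supported_pte_mask;

		return protval;
	}

With _PAGE_GLOBAL cleared from __supported_pte_mask, every PTE built via
the common helpers comes out non-global, without touching each call site.)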

[ The __supported_pte_mask simplification was written by Thomas Gleixner. ]

Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Reviewed-by: Borislav Petkov <bp@xxxxxxx>
Reviewed-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Reviewed-by: Rik van Riel <riel@xxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Brian Gerst <brgerst@xxxxxxxxx>
Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
Cc: H. Peter Anvin <hpa@xxxxxxxxx>
Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: daniel.gruss@xxxxxxxxxxxxxx
Cc: hughd@xxxxxxxxxx
Cc: keescook@xxxxxxxxxx
Cc: linux-mm@xxxxxxxxx
Cc: michael.schwarz@xxxxxxxxxxxxxx
Cc: moritz.lipp@xxxxxxxxxxxxxx
Cc: richard.fellner@xxxxxxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/20171123003441.63DDFC6F@xxxxxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 arch/x86/mm/init.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a22c2b95e513..4a2df8babd29 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -161,6 +161,13 @@ struct map_range {
 
 static int page_size_mask;
 
+static void enable_global_pages(void)
+{
+#ifndef CONFIG_KAISER
+	__supported_pte_mask |= _PAGE_GLOBAL;
+#endif
+}
+
 static void __init probe_page_size_mask(void)
 {
 	/*
@@ -179,11 +186,11 @@ static void __init probe_page_size_mask(void)
 		cr4_set_bits_and_update_boot(X86_CR4_PSE);
 
 	/* Enable PGE if available */
+	__supported_pte_mask &= ~_PAGE_GLOBAL;
 	if (boot_cpu_has(X86_FEATURE_PGE)) {
 		cr4_set_bits_and_update_boot(X86_CR4_PGE);
-		__supported_pte_mask |= _PAGE_GLOBAL;
-	} else
-		__supported_pte_mask &= ~_PAGE_GLOBAL;
+		enable_global_pages();
+	}
 
 	/* Enable 1 GB linear kernel mappings if available: */
 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
