On Wed, Feb 14, 2018 at 11:16 AM, Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:
> For boot-time switching between paging modes, we need to be able to
> adjust virtual mask shifts.
>
> The change doesn't affect the kernel image size much:
>
>    text    data     bss      dec     hex filename
> 8628892 4734340 1368064 14731296  e0c820 vmlinux.before
> 8628966 4734340 1368064 14731370  e0c86a vmlinux.after
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/entry/entry_64.S            | 12 ++++++++++++
>  arch/x86/include/asm/page_64_types.h |  2 +-
>  arch/x86/mm/dump_pagetables.c        | 12 ++++++++++--
>  arch/x86/mm/kaslr.c                  |  4 +++-
>  4 files changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index cd216c9431e1..1608b13a0b36 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -260,8 +260,20 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
>  	 * Change top bits to match most significant bit (47th or 56th bit
>  	 * depending on paging mode) in the address.
>  	 */
> +#ifdef CONFIG_X86_5LEVEL
> +	testl	$1, pgtable_l5_enabled(%rip)
> +	jz	1f
> +	shl	$(64 - 57), %rcx
> +	sar	$(64 - 57), %rcx
> +	jmp	2f
> +1:
> +	shl	$(64 - 48), %rcx
> +	sar	$(64 - 48), %rcx
> +2:
> +#else
>  	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
>  	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
> +#endif

Eww.  Can't this be ALTERNATIVE "shl ... sar ...", "shl ... sar ...",
X86_FEATURE_5LEVEL or similar?

--Andy
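
For reference, a minimal sketch of the ALTERNATIVE-based variant being suggested might look like this in entry_64.S. It assumes the replacement is keyed off the existing X86_FEATURE_LA57 bit (or a dedicated feature flag set when 5-level paging is active); the exact flag name here is an assumption, not something the quoted patch defines:

#ifdef CONFIG_X86_5LEVEL
	/*
	 * Default to the 4-level shifts; apply_alternatives() patches in
	 * the 5-level shifts at boot if the feature bit is set.
	 * The feature flag used here is illustrative.
	 */
	ALTERNATIVE "shl $(64 - 48), %rcx; sar $(64 - 48), %rcx", \
		"shl $(64 - 57), %rcx; sar $(64 - 57), %rcx", X86_FEATURE_LA57
#else
	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
#endif

The point of the suggestion is that the syscall path then carries no runtime load-and-branch: the alternatives machinery rewrites the shift counts once at boot based on the detected paging mode.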