On Mon, Aug 24, 2020 at 10:54:40AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@xxxxxxx>
>
> Allocate and map an IST stack and an additional fall-back stack for
> the #VC handler. The memory for the stacks is allocated only when
> SEV-ES is active.
>
> The #VC handler needs to use an IST stack because it could be raised
> from kernel space with an unsafe stack, e.g. in the SYSCALL entry path.
>
> Since the #VC exception can be nested, the #VC handler switches back to
> the interrupted stack when entered from kernel space. If switching back
> is not possible, the fall-back stack is used.
>
> Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
> Link: https://lore.kernel.org/r/20200724160336.5435-45-joro@xxxxxxxxxx
> ---
>  arch/x86/include/asm/cpu_entry_area.h | 33 +++++++++++++++++----------
>  arch/x86/include/asm/page_64_types.h  |  1 +
>  arch/x86/kernel/cpu/common.c          |  2 ++
>  arch/x86/kernel/dumpstack_64.c        |  8 +++++--
>  arch/x86/kernel/sev-es.c              | 33 +++++++++++++++++++++++++++
>  5 files changed, 63 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/include/asm/cpu_entry_area.h b/arch/x86/include/asm/cpu_entry_area.h
> index 8902fdb7de13..f87e4c0c16f4 100644
> --- a/arch/x86/include/asm/cpu_entry_area.h
> +++ b/arch/x86/include/asm/cpu_entry_area.h
> @@ -11,25 +11,29 @@
>  #ifdef CONFIG_X86_64
>
>  /* Macro to enforce the same ordering and stack sizes */
> -#define ESTACKS_MEMBERS(guardsize)				\
> -	char	DF_stack_guard[guardsize];			\
> -	char	DF_stack[EXCEPTION_STKSZ];			\
> -	char	NMI_stack_guard[guardsize];			\
> -	char	NMI_stack[EXCEPTION_STKSZ];			\
> -	char	DB_stack_guard[guardsize];			\
> -	char	DB_stack[EXCEPTION_STKSZ];			\
> -	char	MCE_stack_guard[guardsize];			\
> -	char	MCE_stack[EXCEPTION_STKSZ];			\
> -	char	IST_top_guard[guardsize];			\
> +#define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
> +	char	DF_stack_guard[guardsize];			\
> +	char	DF_stack[EXCEPTION_STKSZ];			\
> +	char	NMI_stack_guard[guardsize];			\
> +	char	NMI_stack[EXCEPTION_STKSZ];			\
> +	char	DB_stack_guard[guardsize];			\
> +	char	DB_stack[EXCEPTION_STKSZ];			\
> +	char	MCE_stack_guard[guardsize];			\
> +	char	MCE_stack[EXCEPTION_STKSZ];			\
> +	char	VC_stack_guard[guardsize];			\
> +	char	VC_stack[optional_stack_size];			\
> +	char	VC2_stack_guard[guardsize];			\
> +	char	VC2_stack[optional_stack_size];			\

So the VC* stuff needs to be ifdeffed and enabled only on
CONFIG_AMD_MEM_ENCRYPT... here and below.

I had that in my previous review too:

"All those things should be under a CONFIG_AMD_MEM_ENCRYPT ifdeffery."

> +	char	IST_top_guard[guardsize];			\
>
>  /* The exception stacks' physical storage. No guard pages required */
>  struct exception_stacks {
> -	ESTACKS_MEMBERS(0)
> +	ESTACKS_MEMBERS(0, 0)
>  };
>
>  /* The effective cpu entry area mapping with guard pages. */
>  struct cea_exception_stacks {
> -	ESTACKS_MEMBERS(PAGE_SIZE)
> +	ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ)
>  };
>
>  /*
> @@ -40,6 +44,8 @@ enum exception_stack_ordering {
>  	ESTACK_NMI,
>  	ESTACK_DB,
>  	ESTACK_MCE,
> +	ESTACK_VC,
> +	ESTACK_VC2,
>  	N_EXCEPTION_STACKS
>  };
>
> @@ -139,4 +145,7 @@ static inline struct entry_stack *cpu_entry_stack(int cpu)
>  #define __this_cpu_ist_top_va(name)				\
>  	CEA_ESTACK_TOP(__this_cpu_read(cea_exception_stacks), name)
>
> +#define __this_cpu_ist_bot_va(name)				\

"bottom" please. I was wondering for a bit, what "bot"? And I know it is
CEA_ESTACK_BOT but that's not readable.
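
To illustrate the ifdeffery point - an untested sketch only, and
VC_STACK_SIZE() is a name I just made up, not something from this patch:
let the #VC stack sizes collapse to zero when SEV-ES support is not
built in, so that the struct layout, the ESTACK_* enum and their users
can stay untouched:

/*
 * VC_STACK_SIZE() is hypothetical: with CONFIG_AMD_MEM_ENCRYPT off,
 * the #VC stacks become zero-sized members, cost no space, and keep
 * every ESTACK_* index valid.
 */
#ifdef CONFIG_AMD_MEM_ENCRYPT
#define VC_STACK_SIZE(optional_stack_size)	(optional_stack_size)
#else
#define VC_STACK_SIZE(optional_stack_size)	0
#endif

#define ESTACKS_MEMBERS(guardsize, optional_stack_size)		\
	char	DF_stack_guard[guardsize];			\
	char	DF_stack[EXCEPTION_STKSZ];			\
	char	NMI_stack_guard[guardsize];			\
	char	NMI_stack[EXCEPTION_STKSZ];			\
	char	DB_stack_guard[guardsize];			\
	char	DB_stack[EXCEPTION_STKSZ];			\
	char	MCE_stack_guard[guardsize];			\
	char	MCE_stack[EXCEPTION_STKSZ];			\
	char	VC_stack_guard[guardsize];			\
	char	VC_stack[VC_STACK_SIZE(optional_stack_size)];	\
	char	VC2_stack_guard[guardsize];			\
	char	VC2_stack[VC_STACK_SIZE(optional_stack_size)];	\
	char	IST_top_guard[guardsize];			\

Zero-sized members are nothing new here - the physical storage variant
already instantiates ESTACKS_MEMBERS(0, 0) - so only the code which
actually maps and uses those stacks needs to check for SEV-ES.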
--
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette