On Tue, Nov 05, 2019 at 03:56:08PM -0800, Sami Tolvanen wrote:
> This change implements shadow stack switching, initial SCS set-up,
> and interrupt shadow stacks for arm64.

Each CPU also has an overflow stack, and two SDEI stacks, which should
presumably be given their own SCS. SDEI is effectively a software-NMI,
so it should almost certainly have the same treatment as IRQ.

> +static __always_inline void scs_save(struct task_struct *tsk)
> +{
> +	void *s;
> +
> +	asm volatile("mov %0, x18" : "=r" (s));
> +	task_set_scs(tsk, s);
> +}

An alternative would be to follow <asm/stack_pointer.h>, and have:

	register unsigned long *current_scs_pointer asm ("x18");

	static __always_inline void scs_save(struct task_struct *tsk)
	{
		task_set_scs(tsk, current_scs_pointer);
	}

... which would avoid the need for a temporary register where this is
used. However, given we only use this in cpu_die(), having this as-is
should be fine.

Maybe the asm volatile is preferable if we use this elsewhere, so that
we know we have a consistent snapshot that the compiler can't reload,
etc.

[...]

> @@ -409,6 +428,10 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
>  	 */
>  	.macro	irq_stack_exit
>  	mov	sp, x19
> +#ifdef CONFIG_SHADOW_CALL_STACK
> +	/* x20 is also preserved */
> +	mov	x18, x20
> +#endif
>  	.endm

Can we please fold this comment into the one above, and have:

	/*
	 * The callee-saved regs (x19-x29) should be preserved between
	 * irq_stack_entry and irq_stack_exit.
	 */
	.macro	irq_stack_exit
	mov	sp, x19
#ifdef CONFIG_SHADOW_CALL_STACK
	mov	x18, x20
#endif
	.endm

Thanks,
Mark.
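
[Editorial sketch, not part of the series or of this reply: one way the
SDEI stacks could be given their own shadow call stacks, following the
same per-CPU allocation pattern the patch uses for the IRQ SCS. The
names sdei_shadow_call_stack_*_ptr and scs_init_sdei are illustrative
only, and the non-VMAP case alone is shown.]

	#include <linux/init.h>
	#include <linux/percpu.h>
	#include <linux/smp.h>
	#include <linux/scs.h>

	/* Per-CPU pointers loaded on entry to the SDEI handlers. */
	DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
	DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);

	/* Statically allocated per-CPU shadow stacks, one per SDEI stack. */
	DEFINE_PER_CPU(unsigned long [SCS_SIZE / sizeof(long)],
		       sdei_shadow_call_stack_normal) __aligned(SCS_SIZE);
	DEFINE_PER_CPU(unsigned long [SCS_SIZE / sizeof(long)],
		       sdei_shadow_call_stack_critical) __aligned(SCS_SIZE);

	void __init scs_init_sdei(void)
	{
		int cpu;

		/* Point each CPU's SDEI SCS pointers at its static stacks. */
		for_each_possible_cpu(cpu) {
			per_cpu(sdei_shadow_call_stack_normal_ptr, cpu) =
				per_cpu(sdei_shadow_call_stack_normal, cpu);
			per_cpu(sdei_shadow_call_stack_critical_ptr, cpu) =
				per_cpu(sdei_shadow_call_stack_critical, cpu);
		}
	}

The SDEI entry path would then swap x18 to the relevant per-CPU pointer
and restore it on exit, analogous to irq_stack_entry/irq_stack_exit
above.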