On Fri, Oct 18, 2019 at 07:12:52PM +0200, Jann Horn wrote:
> On Fri, Oct 18, 2019 at 6:16 PM Sami Tolvanen <samitolvanen@xxxxxxxxxx> wrote:
> > This change implements shadow stack switching, initial SCS set-up,
> > and interrupt shadow stacks for arm64.
> [...]
> > +static inline void scs_save(struct task_struct *tsk)
> > +{
> > +	void *s;
> > +
> > +	asm volatile("mov %0, x18" : "=r" (s));
> > +	task_set_scs(tsk, s);
> > +}
> > +
> > +static inline void scs_load(struct task_struct *tsk)
> > +{
> > +	asm volatile("mov x18, %0" : : "r" (task_scs(tsk)));
> > +	task_set_scs(tsk, NULL);
> > +}
>
> These things should probably be __always_inline or something like
> that? If the compiler decides not to inline them (e.g. when called
> from scs_thread_switch()), stuff will blow up, right?

I think scs_save() would better live in assembly in cpu_switch_to(),
where we switch the stack and current. It shouldn't matter whether
scs_load() is inlined or not, since the x18 value _should_ be invariant
from the PoV of the task.

We just need to add a TSK_TI_SCS to asm-offsets.c, and then insert a
single LDR at the end:

	mov	sp, x9
	msr	sp_el0, x1
#ifdef CONFIG_SHADOW_CALL_STACK
	ldr	x18, [x1, TSK_TI_SCS]
#endif
	ret

Thanks,
Mark.