On Mon, 2022-10-03 at 13:29 -0700, Kees Cook wrote:
> On Thu, Sep 29, 2022 at 03:29:22PM -0700, Rick Edgecombe wrote:
> > [...]
> > +#ifdef CONFIG_X86_SHADOW_STACK
> > +static int update_fpu_shstk(struct task_struct *dst, unsigned long ssp)
> > +{
> > +	struct cet_user_state *xstate;
> > +
> > +	/* If ssp update is not needed. */
> > +	if (!ssp)
> > +		return 0;
> 
> My brain will work to undo the collision of Shadow Stack Pointer with
> Stack Smashing Protection. ;)
> 
> > [...]
> > diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
> > index a0b8d4adb2bf..db4e53f9fdaf 100644
> > --- a/arch/x86/kernel/shstk.c
> > +++ b/arch/x86/kernel/shstk.c
> > @@ -118,6 +118,46 @@ void reset_thread_shstk(void)
> >  	current->thread.features_locked = 0;
> >  }
> >  
> > +int shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
> > +			     unsigned long stack_size, unsigned long *shstk_addr)
> 
> Er, arg 3 is "stack_size". From later:
> 
> > +	ret = shstk_alloc_thread_stack(p, clone_flags, args->flags, &shstk_addr);
>                                                        ^^^^^^^^^^^
> 
> clone_flags and args->flags are identical ... this must be accidentally
> working. I was expecting 0 there.

Oh wow. A stack_size used to be passed into copy_thread(), but I messed
up the rebase badly. Thanks for catching it.

> 
> > +{
> > +	struct thread_shstk *shstk = &tsk->thread.shstk;
> > +	unsigned long addr;
> > +
> > +	/*
> > +	 * If shadow stack is not enabled on the new thread, skip any
> > +	 * switch to a new shadow stack.
> > +	 */
> > +	if (!feature_enabled(CET_SHSTK))
> > +		return 0;
> > +
> > +	/*
> > +	 * clone() does not pass stack_size, which was added to clone3().
> > +	 * Use RLIMIT_STACK and cap to 4 GB.
> > +	 */
> > +	if (!stack_size)
> > +		stack_size = min_t(unsigned long long, rlimit(RLIMIT_STACK), SZ_4G);
> 
> Again, perhaps the clamp should happen in alloc_shstk()?

The map_shadow_stack() is kind of like mmap(). I think it shouldn't get
the rlimit restriction.
But I can pull the shared logic into a helper for the other two cases.

> 
> > +
> > +	/*
> > +	 * For CLONE_VM, except vfork, the child needs a separate shadow
> > +	 * stack.
> > +	 */
> > +	if ((clone_flags & (CLONE_VFORK | CLONE_VM)) != CLONE_VM)
> > +		return 0;
> > +
> > +
> > +	stack_size = PAGE_ALIGN(stack_size);
> 
> Uhm, I think a line went missing here. :P
> 
> "x86/cet/shstk: Introduce map_shadow_stack syscall" adds the missing:
> 
> +	addr = alloc_shstk(0, stack_size, 0, false);
> 
> Please add back the original. :)

Yes, more rebase mangling. Thanks.

> 
> > +	if (IS_ERR_VALUE(addr))
> > +		return PTR_ERR((void *)addr);
> > +
> > +	shstk->base = addr;
> > +	shstk->size = stack_size;
> > +
> > +	*shstk_addr = addr + stack_size;
> > +
> > +	return 0;
> > +}
> > +
> >  void shstk_free(struct task_struct *tsk)
> >  {
> >  	struct thread_shstk *shstk = &tsk->thread.shstk;
> > @@ -126,7 +166,13 @@ void shstk_free(struct task_struct *tsk)
> >  	    !feature_enabled(CET_SHSTK))
> >  		return;
> >  
> > -	if (!tsk->mm)
> > +	/*
> > +	 * When fork() with CLONE_VM fails, the child (tsk) already has a
> > +	 * shadow stack allocated, and exit_thread() calls this function to
> > +	 * free it. In this case the parent (current) and the child share
> > +	 * the same mm struct.
> > +	 */
> > +	if (!tsk->mm || tsk->mm != current->mm)
> >  		return;
> >  
> >  	unmap_shadow_stack(shstk->base, shstk->size);
> > -- 
> > 2.17.1