On Sat, Mar 22, 2025 at 01:00:08AM +0000, Al Viro wrote:
> On Fri, Mar 21, 2025 at 09:45:39AM +0100, Christian Brauner wrote:
> 
> > Afaict, the only way this data race can happen is if we jump to the
> > cleanup label and then reset current->fs->in_exec. If the execve was
> > successful there's no one to race us with CLONE_FS obviously because we
> > took down all other threads.
> 
> Not really.

Yeah, you found it. Thank you!

> 1) A enters check_unsafe_execve(), sets ->in_exec to 1
> 2) B enters check_unsafe_execve(), sets ->in_exec to 1

With 3 threads A, B, and C already running, fs->users == 3, so steps
(1) and (2) happily pass.

> 3) A calls exec_binprm(), fails (bad binary)
> 4) A clears ->in_exec
> 5) C calls clone(2) with CLONE_FS and spawns D - ->in_exec is 0

D's creation bumps fs->users to 4.

> 6) B gets through exec_binprm(), kills A and C, but not D.
> 7) B clears ->in_exec, returns
> 
> Result: B and D share ->fs, B runs suid binary.
> 
> Had (5) happened prior to (2), (2) wouldn't have set ->in_exec;
> had (5) happened prior to (4), clone() would've failed; had
> (5) been delayed past (6), there wouldn't have been a thread
> to call clone().
> 
> But in the window between (4) and (6), clone() doesn't see
> execve() in progress and check_unsafe_execve() has already
> been done, so it hadn't seen the extra thread.
> 
> IOW, it really is racy. It's a counter, not a flag.

Yeah, I would agree. Totally untested patch:

diff --git a/fs/exec.c b/fs/exec.c
index 506cd411f4ac..988b8621c079 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1632,7 +1632,7 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
 		if (p->fs->users > n_fs)
 			bprm->unsafe |= LSM_UNSAFE_SHARE;
 		else
-			p->fs->in_exec = 1;
+			refcount_inc(&p->fs->in_exec);
 		spin_unlock(&p->fs->lock);
 	}
 
@@ -1862,7 +1862,7 @@ static int bprm_execve(struct linux_binprm *bprm)
 	sched_mm_cid_after_execve(current);
 	/* execve succeeded */
-	current->fs->in_exec = 0;
+	refcount_dec(&current->fs->in_exec);
 	current->in_execve = 0;
 	rseq_execve(current);
 	user_events_execve(current);
@@ -1881,7 +1881,7 @@ static int bprm_execve(struct linux_binprm *bprm)
 		force_fatal_sig(SIGSEGV);
 
 	sched_mm_cid_after_execve(current);
-	current->fs->in_exec = 0;
+	refcount_dec(&current->fs->in_exec);
 	current->in_execve = 0;
 
 	return retval;
diff --git a/fs/fs_struct.c b/fs/fs_struct.c
index 64c2d0814ed6..df46b873c425 100644
--- a/fs/fs_struct.c
+++ b/fs/fs_struct.c
@@ -115,7 +115,7 @@ struct fs_struct *copy_fs_struct(struct fs_struct *old)
 	/* We don't need to lock fs - think why ;-) */
 	if (fs) {
 		fs->users = 1;
-		fs->in_exec = 0;
+		refcount_set(&fs->in_exec, 0);
 		spin_lock_init(&fs->lock);
 		seqcount_spinlock_init(&fs->seq, &fs->lock);
 		fs->umask = old->umask;
diff --git a/include/linux/fs_struct.h b/include/linux/fs_struct.h
index 783b48dedb72..aebc0b7aedb9 100644
--- a/include/linux/fs_struct.h
+++ b/include/linux/fs_struct.h
@@ -11,7 +11,7 @@ struct fs_struct {
 	spinlock_t lock;
 	seqcount_spinlock_t seq;
 	int umask;
-	int in_exec;
+	refcount_t in_exec;
 	struct path root, pwd;
 } __randomize_layout;
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 735405a9c5f3..8b427045fd86 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1767,7 +1767,7 @@ static int copy_fs(unsigned long clone_flags, struct task_struct *tsk)
 		/* tsk->fs is already what we want */
 		spin_lock(&fs->lock);
 		/* "users" and "in_exec" locked for check_unsafe_exec() */
-		if (fs->in_exec) {
+		if (refcount_read(&fs->in_exec)) {
 			spin_unlock(&fs->lock);
 			return -EAGAIN;
 		}

-- 
Kees Cook
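
To make the flag-vs-counter distinction concrete, here is a minimal
userspace sketch (plain C, not kernel code) that replays the
interleaving above with both representations; the names fake_fs,
exec_begin and exec_end are invented for illustration only:

/*
 * Illustrative sketch, not kernel code: replay Al Viro's interleaving
 * against a boolean in_exec flag and an in_exec counter.
 */
#include <stdio.h>
#include <stdbool.h>

struct fake_fs {
	bool in_exec_flag;	/* current upstream semantics: a flag  */
	int  in_exec_count;	/* proposed semantics: a counter       */
};

/* check_unsafe_exec() analogue: note that an exec is in flight */
static void exec_begin(struct fake_fs *fs)
{
	fs->in_exec_flag = true;
	fs->in_exec_count++;
}

/* bprm_execve() cleanup analogue: that exec finished, success or failure */
static void exec_end(struct fake_fs *fs)
{
	fs->in_exec_flag = false;
	fs->in_exec_count--;
}

int main(void)
{
	struct fake_fs fs = { 0 };

	exec_begin(&fs);	/* (1) A enters check_unsafe_execve()    */
	exec_begin(&fs);	/* (2) B enters check_unsafe_execve()    */
	exec_end(&fs);		/* (3)+(4) A's exec fails and cleans up  */

	/*
	 * (5) C calls clone(CLONE_FS): copy_fs() should refuse it because
	 * B's exec is still pending, but the flag already reads "idle".
	 */
	printf("flag sees exec in progress:    %d\n", fs.in_exec_flag);
	printf("counter sees exec in progress: %d\n", fs.in_exec_count > 0);
	return 0;
}

Compiled and run, the flag already reads "idle" in the (4)-(6) window
while the counter still reports one pending exec, which is exactly the
state copy_fs() needs to see in order to refuse CLONE_FS.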