On Fri, Mar 21, 2025 at 09:45:39AM +0100, Christian Brauner wrote:
> Afaict, the only way this data race can happen is if we jump to the
> cleanup label and then reset current->fs->in_exec. If the execve was
> successful there's no one to race us with CLONE_FS obviously because we
> took down all other threads.

Not really.

1) A enters check_unsafe_execve(), sets ->in_exec to 1
2) B enters check_unsafe_execve(), sets ->in_exec to 1
3) A calls exec_binprm(), which fails (bad binary)
4) A clears ->in_exec
5) C calls clone(2) with CLONE_FS and spawns D; ->in_exec is 0, so the
   clone succeeds
6) B gets through exec_binprm(), kills A and C, but not D
7) B clears ->in_exec and returns

Result: B and D share ->fs, and B runs a suid binary.

Had (5) happened prior to (2), (2) wouldn't have set ->in_exec; had (5)
happened prior to (4), clone() would've failed; had (5) been delayed past
(6), there wouldn't have been a thread left to call clone(). But in the
window between (4) and (6), clone() doesn't see an execve() in progress,
and check_unsafe_execve() has already been done, so it hasn't seen the
extra ->fs user.

IOW, it really is racy. It needs to be a counter, not a flag.
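
To make the window concrete, here's a userspace toy model of that
interleaving - not kernel code; fs_model, enter_exec(), exec_failed() and
try_clone_fs() are made-up stand-ins for check_unsafe_execve(), the
failed-exec cleanup and copy_fs().  With ->in_exec as a flag, step (4)
wipes out B's claim and the clone in step (5) slips through; with a
counter, A's decrement leaves B's claim in place:

/*
 * Single-threaded walk through steps (1)-(7); deterministic, so it only
 * illustrates the interleaving, it doesn't reproduce the race itself.
 */
#include <stdio.h>
#include <stdbool.h>

struct fs_model {
	int users;	/* tasks sharing this fs_struct */
	int in_exec;	/* a flag today; the point is it should count */
};

/* check_unsafe_execve() stand-in: claim "exec in progress" */
static void enter_exec(struct fs_model *fs, bool as_counter)
{
	if (as_counter)
		fs->in_exec++;
	else
		fs->in_exec = 1;
}

/* cleanup after a failed exec_binprm() */
static void exec_failed(struct fs_model *fs, bool as_counter)
{
	if (as_counter)
		fs->in_exec--;	/* drops only this task's claim */
	else
		fs->in_exec = 0;	/* step (4): wipes out B's claim too */
}

/* copy_fs() stand-in for the CLONE_FS case */
static bool try_clone_fs(struct fs_model *fs)
{
	if (fs->in_exec)
		return false;		/* -EAGAIN */
	fs->users++;
	return true;
}

static void run(bool as_counter)
{
	struct fs_model fs = { .users = 3, .in_exec = 0 };	/* A, B, C */

	enter_exec(&fs, as_counter);	/* (1) A */
	enter_exec(&fs, as_counter);	/* (2) B */
	exec_failed(&fs, as_counter);	/* (3)+(4) A's exec fails */
	/* (5) C: clone(CLONE_FS) while B's exec is still in flight */
	bool cloned = try_clone_fs(&fs);
	printf("%s: clone(CLONE_FS) during B's exec %s\n",
	       as_counter ? "counter" : "flag",
	       cloned ? "SUCCEEDED (D will share ->fs with B's suid image)"
		      : "fails with -EAGAIN");
}

int main(void)
{
	run(false);	/* flag semantics: the (4)..(6) window is open */
	run(true);	/* counter semantics: the window stays closed */
	return 0;
}

With the counter, a failed exec only drops its own reference, so the
clone in (5) still sees a nonzero ->in_exec and bounces.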