Hello,

On Thu, Jun 20, 2024 at 08:47:23PM +0200, Thomas Gleixner wrote:
> One example I very explicitly mentioned back then is the dance around
> fork(). It took me at least an hour last year to grok the convoluted
> logic and it did not get any faster when I stared at it today again.
>
> fork()
>   sched_fork()
>     scx_pre_fork()
>       percpu_down_rwsem(&scx_fork_rwsem);
>
>       if (dl_prio(p)) {
>           ret = -EINVAL;
>           goto cancel;  // required to release the semaphore
>       }
>
>   sched_cgroup_fork()
>     return scx_fork();
>
>   sched_post_fork()
>     scx_post_fork()
>       percpu_up_rwsem(&scx_fork_rwsem);
>
> Plus the extra scx_cancel_fork() which releases the scx_fork_rwsem in
> case that any call after sched_fork() fails.

This part is actually tricky. The sched_cgroup_fork() part is mostly just
me trying to find the right place among the existing hooks. We can either
rename sched_cgroup_fork() to a more generic name or separate out the SCX
hook in the fork path.

When a BPF scheduler attaches, it needs to establish its base operating
condition - i.e. allocate per-task data structures, change the sched
class, and so on. There is a trade-off between how fine-grained the
synchronization can be and how easy things are for the BPF schedulers,
and we really do want to make things easy for the BPF schedulers. So, the
current approach is to just lock things down while attaching, which makes
things a lot simpler for the BPF schedulers.

The locking is through a percpu_rwsem, so it's super heavy on the writer
side but really light on the reader (fork) side. Maybe the overhead can
be further reduced by guarding it with a static_key, but the difference
won't be much and I doubt it'd make any noticeable difference in the fork
path.

Thanks.

--
tejun