On Fri, Feb 16, 2024 at 05:27:38PM -0800, Boqun Feng wrote:
> From: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
>
> Holding a mutex across synchronize_rcu_tasks() and acquiring
> that same mutex in code called from do_exit() after its call to
> exit_tasks_rcu_start() but before its call to exit_tasks_rcu_stop()
> results in deadlock. This is by design, because tasks that are far
> enough into do_exit() are no longer present on the tasks list, making
> it a bit difficult for RCU Tasks to find them, let alone wait on them
> to do a voluntary context switch. However, such deadlocks are becoming
> more frequent. In addition, lockdep currently does not detect such
> deadlocks and they can be difficult to reproduce.
>
> In addition, if a task voluntarily context switches during that time
> (for example, if it blocks acquiring a mutex), then this task is in an
> RCU Tasks quiescent state. And with some adjustments, RCU Tasks could
> just as well take advantage of that fact.
>
> This commit therefore initializes the data structures that will be needed
> to rely on these quiescent states and to eliminate these deadlocks.
>
> Link: https://lore.kernel.org/all/20240118021842.290665-1-chenzhongjin@xxxxxxxxxx/
>
> Reported-by: Chen Zhongjin <chenzhongjin@xxxxxxxxxx>
> Reported-by: Yang Jihong <yangjihong1@xxxxxxxxxx>
> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Tested-by: Yang Jihong <yangjihong1@xxxxxxxxxx>
> Tested-by: Chen Zhongjin <chenzhongjin@xxxxxxxxxx>
> Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
> ---
>  init/init_task.c   | 1 +
>  kernel/fork.c      | 1 +
>  kernel/rcu/tasks.h | 2 ++
>  3 files changed, 4 insertions(+)
>
> diff --git a/init/init_task.c b/init/init_task.c
> index 7ecb458eb3da..4daee6d761c8 100644
> --- a/init/init_task.c
> +++ b/init/init_task.c
> @@ -147,6 +147,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
>  	.rcu_tasks_holdout = false,
>  	.rcu_tasks_holdout_list = LIST_HEAD_INIT(init_task.rcu_tasks_holdout_list),
>  	.rcu_tasks_idle_cpu = -1,
> +	.rcu_tasks_exit_list = LIST_HEAD_INIT(init_task.rcu_tasks_exit_list),
>  #endif
>  #ifdef CONFIG_TASKS_TRACE_RCU
>  	.trc_reader_nesting = 0,
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 0d944e92a43f..af7203be1d2d 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1976,6 +1976,7 @@ static inline void rcu_copy_process(struct task_struct *p)
>  	p->rcu_tasks_holdout = false;
>  	INIT_LIST_HEAD(&p->rcu_tasks_holdout_list);
>  	p->rcu_tasks_idle_cpu = -1;
> +	INIT_LIST_HEAD(&p->rcu_tasks_exit_list);
>  #endif /* #ifdef CONFIG_TASKS_RCU */
>  #ifdef CONFIG_TASKS_TRACE_RCU
>  	p->trc_reader_nesting = 0;
> diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> index b7d5f2757053..4a5d562e3189 100644
> --- a/kernel/rcu/tasks.h
> +++ b/kernel/rcu/tasks.h
> @@ -277,6 +277,8 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
>  		rtpcp->rtpp = rtp;
>  		if (!rtpcp->rtp_blkd_tasks.next)
>  			INIT_LIST_HEAD(&rtpcp->rtp_blkd_tasks);
> +		if (!rtpcp->rtp_exit_list.next)

I assume there can't be a concurrently exiting task at this point during
boot, because kthreadd has just been created, and workqueues as well, but
that's it, right? Or can workqueues die that early? Probably not.

> +			INIT_LIST_HEAD(&rtpcp->rtp_exit_list);

Because if tasks can exit concurrently at this point, then we are in
trouble :-)

Thanks.

>  	}
>
>  	pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d.\n", rtp->name,
> --
> 2.43.0
>
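
P.S. For readers less familiar with RCU Tasks, here is a minimal sketch of
the deadlock pattern the changelog describes. The mutex and the two
functions below are hypothetical; only synchronize_rcu_tasks(), do_exit(),
exit_tasks_rcu_start(), and exit_tasks_rcu_stop() are the real kernel
interfaces named in the patch.

	#include <linux/mutex.h>
	#include <linux/rcupdate.h>

	static DEFINE_MUTEX(my_mutex);	/* hypothetical */

	/* Some updater, e.g. code unregistering a probe. */
	static void updater(void)
	{
		mutex_lock(&my_mutex);
		/*
		 * Waits for all tasks to do a voluntary context switch.
		 * A task far enough into do_exit() has already left the
		 * tasks list, so the grace period instead waits on the
		 * exit_tasks_rcu_start()/_stop() window of its exit path.
		 */
		synchronize_rcu_tasks();
		mutex_unlock(&my_mutex);
	}

	/*
	 * Hypothetical hook called from do_exit(), after
	 * exit_tasks_rcu_start() and before exit_tasks_rcu_stop().
	 */
	static void exit_path_hook(void)
	{
		/*
		 * If updater() holds my_mutex while inside
		 * synchronize_rcu_tasks(), this blocks forever: the
		 * updater waits on the exiting task, and the exiting
		 * task waits on the updater's mutex.
		 */
		mutex_lock(&my_mutex);
		mutex_unlock(&my_mutex);
	}

The irony the changelog notes is that blocking on my_mutex is itself a
voluntary context switch, i.e. an RCU Tasks quiescent state, which is
exactly what the rest of the series teaches RCU Tasks to take advantage of.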