From: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> [ Upstream commit 0607ba8403c4cdb253f8c5200ecf654dfb7790cc ] Ever since cdf7abc4610a ("srcu: Allow use of Tiny/Tree SRCU from both process and interrupt context"), it has been permissible to use SRCU read-side critical sections in interrupt context. This allows __call_srcu() to use SRCU read-side critical sections to prevent a new SRCU grace period from ending before the call to either srcu_funnel_gp_start() or srcu_funnel_exp_start completes, thus preventing SRCU grace-period counter overflow during that time. Note that this does not permit removal of the counter-wrap checks in srcu_gp_end(). These check are necessary to handle the case where a given CPU does not interact at all with SRCU for an extended time period. This commit therefore adds an SRCU read-side critical section to __call_srcu() in order to prevent grace period counter wrap during the funnel-locking process. Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx> Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx> --- kernel/rcu/srcutree.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 1ff17e297f0c..4b0a6e319b2c 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -853,6 +853,7 @@ void __call_srcu(struct srcu_struct *sp, struct rcu_head *rhp, rcu_callback_t func, bool do_norm) { unsigned long flags; + int idx; bool needexp = false; bool needgp = false; unsigned long s; @@ -866,6 +867,7 @@ void __call_srcu(struct srcu_struct *sp, struct rcu_head *rhp, return; } rhp->func = func; + idx = srcu_read_lock(sp); local_irq_save(flags); sdp = this_cpu_ptr(sp->sda); spin_lock_rcu_node(sdp); @@ -887,6 +889,7 @@ void __call_srcu(struct srcu_struct *sp, struct rcu_head *rhp, srcu_funnel_gp_start(sp, sdp, s, do_norm); else if (needexp) srcu_funnel_exp_start(sp, sdp->mynode, s); + srcu_read_unlock(sp, idx); } /** -- 2.19.1