It is better to clear special.b.need_qs either when it is replaced by
special.b.blocked or when it has really fulfilled its goal in
rcu_preempt_deferred_qs_irqrestore(). This makes rcu_qs() easier to
understand, and it also prepares for the percpu rcu_preempt_depth patch,
which requires the rcu_special bits to be cleared in irq-disabled
context.

Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
 kernel/rcu/tree_plugin.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index eb5906c55c8d..e16c3867d2ff 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -264,8 +264,6 @@ static void rcu_qs(void)
 				       __this_cpu_read(rcu_data.gp_seq),
 				       TPS("cpuqs"));
 		__this_cpu_write(rcu_data.cpu_no_qs.b.norm, false);
-		barrier(); /* Coordinate with rcu_flavor_sched_clock_irq(). */
-		WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, false);
 	}
 }
 
@@ -297,6 +295,7 @@ void rcu_note_context_switch(bool preempt)
 		/* Possibly blocking in an RCU read-side critical section. */
 		rnp = rdp->mynode;
 		raw_spin_lock_rcu_node(rnp);
+		t->rcu_read_unlock_special.b.need_qs = false;
 		t->rcu_read_unlock_special.b.blocked = true;
 		t->rcu_blocked_node = rnp;
-- 
2.20.1