Ever since preemption was introduced to the Linux kernel, irq-disabled
spinlocks have always been held with preemption disabled. One reason is
that we sometimes need spin_unlock(), which does preempt_enable(), to
release an irq-disabled spinlock while keeping irqs disabled. So
preempt_count can be used to test whether scheduler locks could
possibly be held.

CC: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
 kernel/rcu/tree_plugin.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 0982e9886103..aba5896d67e3 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -603,10 +603,14 @@ static void rcu_read_unlock_special(struct task_struct *t)
 		      tick_nohz_full_cpu(rdp->cpu);
 		// Need to defer quiescent state until everything is enabled.
 		if (irqs_were_disabled && use_softirq &&
-		    (in_interrupt() ||
-		     (exp && !t->rcu_read_unlock_special.b.deferred_qs))) {
+		    (in_interrupt() || (exp && !preempt_bh_were_disabled))) {
 			// Using softirq, safe to awaken, and we get
 			// no help from enabling irqs, unlike bh/preempt.
+			// in_interrupt(): raise_softirq_irqoff() is
+			// guaranteed not to do a wakeup.
+			// !preempt_bh_were_disabled: scheduler locks cannot
+			// be held, since spinlocks are always held with
+			// preempt_disable(), so the wakeup will be safe.
 			raise_softirq_irqoff(RCU_SOFTIRQ);
 		} else {
 			// Enabling BH or preempt does reschedule, so...
-- 
2.20.1
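
For illustration, a minimal sketch (not part of the patch) of the unlock
pattern the changelog refers to: the lock is taken with
spin_lock_irqsave(), which disables both irqs and preemption, and later
released with plain spin_unlock(), which does preempt_enable() while
leaving irqs disabled. The function example_keep_irqs_off() is a
made-up name for this sketch; the locking calls are standard kernel
APIs.

	#include <linux/spinlock.h>
	#include <linux/irqflags.h>

	/* Hypothetical example, not from the patched file. */
	static void example_keep_irqs_off(spinlock_t *lock)
	{
		unsigned long flags;

		/* Disables irqs and raises preempt_count. */
		spin_lock_irqsave(lock, flags);

		/* ... critical section ... */

		/*
		 * Release only the lock: spin_unlock() does
		 * preempt_enable() but leaves irqs disabled, which is
		 * why the lock side must have disabled preemption in
		 * the first place.
		 */
		spin_unlock(lock);

		/* ... more work with irqs still disabled ... */

		local_irq_restore(flags);
	}

Because every such lock acquisition raises preempt_count, a
preempt_count with neither preemption nor BH disabled implies no
spinlocks (and hence no scheduler locks) can be held, which is what the
new !preempt_bh_were_disabled test relies on.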