On Wed, May 08, 2024 at 05:55:38PM +0300, Dan Carpenter wrote:
> On Wed, May 08, 2024 at 07:33:38AM -0700, Paul E. McKenney wrote:
> > On Wed, May 08, 2024 at 03:29:12PM +0300, Dan Carpenter wrote:
> > > Hello David Woodhouse,
> > >
> > > Commit 82980b1622d9 ("rcu: Kill rnp->ofl_seq and use only
> > > rcu_state.ofl_lock for exclusion") from Feb 16, 2021 (linux-next),
> > > leads to the following Smatch static checker warning:
> > >
> > > 	kernel/rcu/tree.c:1844 rcu_gp_init()
> > > 	warn: mixing irq and irqsave
> >
> > There are actually cases where this does make sense, one example being
> > where some called function (for example, rcu_report_qs_rnp() below)
> > needs a flags argument.
>
> I only found one false positive which was kind of related to that in
> __run_hrtimer().
>
> 1643
> 1644  static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
> 1645                            struct hrtimer_clock_base *base,
> 1646                            struct hrtimer *timer, ktime_t *now,
> 1647                            unsigned long flags) __must_hold(&cpu_base->lock)
>                                               ^^^^^
> 1648  {
>
> ....
>
> 1678          /*
> 1679           * The timer is marked as running in the CPU base, so it is
> 1680           * protected against migration to a different CPU even if the lock
> 1681           * is dropped.
> 1682           */
> 1683          raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
>                                                           ^^^^^
>               We potentially enable IRQs.
>
> 1684          trace_hrtimer_expire_entry(timer, now);
> 1685          expires_in_hardirq = lockdep_hrtimer_enter(timer);
> 1686
> 1687          restart = fn(timer);
> 1688
> 1689          lockdep_hrtimer_exit(expires_in_hardirq);
> 1690          trace_hrtimer_expire_exit(timer);
> 1691          raw_spin_lock_irq(&cpu_base->lock);
>               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>               Then disable them again.
>
> 1692
>
> Of course the other warnings are mostly not "bugs" because the callers
> haven't disabled IRQs.  They're just in need of clean up.

Agreed, and please see below for the proposed cleanup in this case.

							Thanx, Paul

------------------------------------------------------------------------

commit 43e96791e7d1605803093b34fb70217fe29ddaeb
Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
Date:   Wed May 8 10:28:43 2024 -0700

    rcu: Disable interrupts directly in rcu_gp_init()

    Interrupts are enabled in rcu_gp_init(), so this commit switches from
    local_irq_save() and local_irq_restore() to local_irq_disable() and
    local_irq_enable().

    Link: https://lore.kernel.org/all/febb13ab-a4bb-48b4-8e97-7e9f7749e6da@moroto.mountain/
    Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
    Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index d5507ac1bbf19..7560e204198bb 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1841,7 +1841,7 @@ static noinline_for_stack bool rcu_gp_init(void)
 	WRITE_ONCE(rcu_state.gp_state, RCU_GP_ONOFF);
 	/* Exclude CPU hotplug operations. */
 	rcu_for_each_leaf_node(rnp) {
-		local_irq_save(flags);
+		local_irq_disable();
 		arch_spin_lock(&rcu_state.ofl_lock);
 		raw_spin_lock_rcu_node(rnp);
 		if (rnp->qsmaskinit == rnp->qsmaskinitnext &&
@@ -1849,7 +1849,7 @@ static noinline_for_stack bool rcu_gp_init(void)
 			/* Nothing to do on this leaf rcu_node structure. */
 			raw_spin_unlock_rcu_node(rnp);
 			arch_spin_unlock(&rcu_state.ofl_lock);
-			local_irq_restore(flags);
+			local_irq_enable();
 			continue;
 		}
@@ -1886,7 +1886,7 @@ static noinline_for_stack bool rcu_gp_init(void)
 		raw_spin_unlock_rcu_node(rnp);
 		arch_spin_unlock(&rcu_state.ofl_lock);
-		local_irq_restore(flags);
+		local_irq_enable();
 	}
 	rcu_gp_slow(gp_preinit_delay);  /* Races with CPU hotplug. */
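
[Editor's aside, not part of the thread above: a minimal sketch of the two idioms the
commit trades between. The function names and the lock here are hypothetical; only the
interrupt/spinlock primitives are real kernel APIs. local_irq_save()/local_irq_restore()
preserve whatever interrupt state the caller had, while local_irq_disable()/
local_irq_enable() assume interrupts are enabled on entry, which the commit message says
is the case in rcu_gp_init().]

	/*
	 * Illustrative sketch only -- hypothetical functions, not from the patch.
	 * Assumes kernel context with <linux/irqflags.h> and <linux/spinlock.h>.
	 */
	static DEFINE_RAW_SPINLOCK(example_lock);

	/* Correct for any caller: remember the interrupt state, then put it back. */
	static void example_save_restore(void)
	{
		unsigned long flags;

		local_irq_save(flags);		/* Disable IRQs, recording prior state. */
		raw_spin_lock(&example_lock);
		/* ... critical section ... */
		raw_spin_unlock(&example_lock);
		local_irq_restore(flags);	/* Restore the caller's interrupt state. */
	}

	/* Simpler form, valid only when IRQs are known to be enabled on entry. */
	static void example_disable_enable(void)
	{
		local_irq_disable();		/* IRQs were on; turn them off. */
		raw_spin_lock(&example_lock);
		/* ... critical section ... */
		raw_spin_unlock(&example_lock);
		local_irq_enable();		/* Unconditionally turn them back on. */
	}

Because rcu_gp_init() runs with interrupts enabled, the second form does the same job
without carrying a flags value through the hotplug-exclusion loop, which is what the
patch above switches to.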