Re: [PATCH] irq_poll: Use raise_softirq_irqoff() in cpu_dead notifier

On 2021-10-18 03:53:20 [-0700], Christoph Hellwig wrote:
> On Thu, Sep 30, 2021 at 12:37:54PM +0200, Sebastian Andrzej Siewior wrote:
> > __raise_softirq_irqoff() adds a bit to the pending softirq mask and that
> > is all. The softirq won't be handled in a deterministic way but randomly,
> > when an interrupt fires and handles softirqs in its irq_exit() routine,
> > or if something happens to check and handle pending softirqs in the call
> > chain before the CPU goes idle.
> > 
> > Add a local_bh_disable/enable() around the IRQ-off section which will
> > handle pending softirqs.
> 
> This patch leaves me extremely confused, and it would confuse me even
> more if I were just reading the code.  local_irq_disable is supposed to
> disable BHs as well, so the code looks pretty much nonsensical to me.
> But apparently that isn't the point: if I follow your commit message,
> you don't care about an extra level of BH disabling but want to force a
> side-effect of the re-enabling?  Why not directly call the helper to
> schedule the softirq then?
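
For context, the change the quoted commit message describes has roughly
this shape (a sketch based on irq_poll_cpu_dead() in lib/irq_poll.c; the
actual diff may differ in detail):

static int irq_poll_cpu_dead(unsigned int cpu)
{
        /*
         * If a CPU goes away, splice its entries to the current CPU
         * and set the POLL softirq bit. The local_bh_disable()/
         * local_bh_enable() pair makes sure the raised softirq is
         * actually processed once BHs are enabled again.
         */
        local_bh_disable();
        local_irq_disable();
        list_splice_init(&per_cpu(blk_cpu_iopoll, cpu),
                         this_cpu_ptr(&blk_cpu_iopoll));
        __raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
        local_irq_enable();
        local_bh_enable();

        return 0;
}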

This side-effect, local_bh_enable() processing the softirqs that were
raised while BHs were disabled, is actually the way it works most of the
time. There is one exception in the network stack where do_softirq() is
called manually after checking for pending softirqs. I managed to remove
one instance in commit
   fe420d87bbc23 ("net/core: remove explicit do_softirq() from busy_poll_stop()")

but wasn't so lucky with the other one (yet). Other than that, it is
either local_bh_enable() or __irq_exit_rcu() that ensures that pending
softirqs are processed.
It is the same way preempt_enable() ensures that a context switch happens
if the scheduler decided that one is needed while the CPU was in a
preempt-disabled section.
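
What local_bh_enable() boils down to here is roughly the following, a
heavily simplified sketch of __local_bh_enable_ip() in kernel/softirq.c
(!PREEMPT_RT, with the lockdep/accounting details left out):

void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
{
        /* Keep preemption disabled until softirq processing is done. */
        __preempt_count_sub(cnt - 1);

        if (unlikely(!in_interrupt() && local_softirq_pending())) {
                /* Run the softirqs that became pending in the BH-off section. */
                do_softirq();
        }

        preempt_count_dec();
        /* Reschedule if the softirq (or anything else) set NEED_RESCHED. */
        preempt_check_resched();
}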

Anyway. Even with s/__raise_softirq_irqoff/raise_softirq_irqoff/ you are
not much better off. The latter will wake ksoftirqd, but that might set
the NEED_RESCHED bit, which is not checked by local_irq_enable(). So you
end up waiting until some random spin_unlock() which has the needed
check. Nothing in this code path does that, but probably something does
before the CPU-HP code is left.
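
For reference, the two helpers look roughly like this (simplified from
kernel/softirq.c):

void __raise_softirq_irqoff(unsigned int nr)
{
        /* Only marks the vector as pending; nothing processes it here. */
        or_softirq_pending(1UL << nr);
}

inline void raise_softirq_irqoff(unsigned int nr)
{
        __raise_softirq_irqoff(nr);

        /*
         * Additionally wakes ksoftirqd when not in interrupt context.
         * That may end up setting NEED_RESCHED, but neither this function
         * nor local_irq_enable() acts on that flag.
         */
        if (!in_interrupt())
                wakeup_softirqd();
}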

Sebastian


