On Thu, Apr 18, 2024 at 7:49 AM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
>
> On Thu, Apr 18, 2024 at 11:02:06AM +0200, Jesper Dangaard Brouer wrote:
> >
> >
> > On 18/04/2024 04.19, Yosry Ahmed wrote:
> [...]
> > >
> > > I will keep the high-level conversation about using the mutex here in
> > > the cover letter thread, but I am wondering why we are keeping the
> > > lock dropping logic here with the mutex?
> > >
> >
> > I agree that yielding the mutex in the loop makes less sense.
> > Especially since the raw_spin_unlock_irqrestore(cpu_lock, flags) call
> > will be a preemption point for my softirq. But I kept it because we
> > are running a CONFIG_PREEMPT_VOLUNTARY kernel, so I was still worried
> > that there was no sched point for other userspace processes while
> > holding the mutex, though I don't fully know the scheduling
> > implications of holding a mutex.
> >
>
> Are the softirqs you are interested in raised from the same CPU or a
> remote CPU? What about a local_softirq_pending() check in addition to
> the need_resched() and spin_needbreak() checks? If the softirq can only
> be raised on the local CPU, then convert the spin_lock to a non-irq one
> (please correct me if I am wrong, but on return from a hard irq, when
> not within bh or an irq-disabled spin_lock, the kernel will run the
> pending softirqs, right?). Did you get the chance to test these two
> changes, or something similar, in your prod environment?

I tried making the spinlock a non-irq lock before, but Tejun objected [1].

Perhaps we could experiment with always dropping the lock at CPU
boundaries instead? A rough sketch of what I mean is below.

[1] https://lore.kernel.org/lkml/ZBz%2FV5a7%2F6PZeM7S@xxxxxxxxxxxxxxx/
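
To make that concrete, here is a rough, completely untested sketch (not
a real patch) of unconditionally yielding cgroup_rstat_lock at every
per-CPU boundary, assuming the flush loop keeps roughly its current
shape in kernel/cgroup/rstat.c. The per-cgroup flush body is elided,
the helper names (cgroup_rstat_updated_list(), rstat_flush_next,
cgroup_base_stat_flush()) are just how I remember the existing code,
and Shakeel's weaker local_softirq_pending() variant is noted in a
comment:

static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
	__releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
{
	int cpu;

	lockdep_assert_held(&cgroup_rstat_lock);

	for_each_possible_cpu(cpu) {
		struct cgroup *pos = cgroup_rstat_updated_list(cgrp, cpu);

		/* Flush each updated cgroup on this CPU (rest of the
		 * per-cgroup flush body elided for brevity). */
		for (; pos; pos = pos->rstat_flush_next)
			cgroup_base_stat_flush(pos, cpu);

		/*
		 * Always yield at the CPU boundary instead of only when
		 * need_resched() or spin_needbreak() fire.  The window
		 * with the lock dropped and IRQs enabled lets a waiting
		 * locker, the scheduler, and (eventually) any pending
		 * softirq on this CPU make progress.
		 *
		 * A softer variant would keep the conditional yield but
		 * extend its condition to:
		 *
		 *	need_resched() ||
		 *	spin_needbreak(&cgroup_rstat_lock) ||
		 *	local_softirq_pending()
		 */
		spin_unlock_irq(&cgroup_rstat_lock);
		if (!cond_resched())
			cpu_relax();
		spin_lock_irq(&cgroup_rstat_lock);
	}
}

The unlock/relock after the last CPU is redundant but harmless; if it
shows up in profiles, the yield could be skipped for the final
iteration. Again, just to illustrate the idea, not something I have
tested.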