On Wed, 4 Aug 2021 09:49:48 -0600
Jens Axboe <axboe@xxxxxxxxx> wrote:

> > @@ -430,9 +430,9 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
> >  	}
> >
> >  	if (stall_hash != -1U) {
> > -		raw_spin_unlock(&wqe->lock);
> > +		raw_spin_unlock_irq(&wqe->lock);
> >  		io_wait_on_hash(wqe, stall_hash);
> > -		raw_spin_lock(&wqe->lock);
> > +		raw_spin_lock_irq(&wqe->lock);
> >  	}
> >
> >  	return NULL;
> >
> > (this is on top of the patch you sent earlier and that Daniel Cc'd me on,
> > after I checked that the problem/warning still exists).
>
> That'd work on non-RT as well, but it makes it worse on non-RT as well with
> the irq enable/disable dance. While that's not the end of the world, would
> be nice to have a solution that doesn't sacrifice anything, yet doesn't
> make RT unhappy.

We used to have something like local_irq_disable_rt(), which would only
disable irqs when PREEMPT_RT was configured, but it was considered "ugly"
and removed in an effort to use only spin_lock_irq() and
raw_spin_lock_irq(). For this situation, though, it looks like it would do
exactly what you want: sacrifice nothing, yet keep RT happy.

Not sure that's still an acceptable solution. :-/

Perhaps in this situation we could open code it to:

	if (stall_hash != -1U) {
		raw_spin_unlock(&wqe->lock);
		/* On RT the spin_lock_irq() does not disable interrupts */
		if (IS_ENABLED(CONFIG_PREEMPT_RT))
			local_irq_enable();
		io_wait_on_hash(wqe, stall_hash);
		if (IS_ENABLED(CONFIG_PREEMPT_RT))
			local_irq_disable();
		raw_spin_lock(&wqe->lock);
	}

Note, I haven't looked at the rest of the code to know the ripple effect
of this; I'm just suggesting the idea.

-- Steve
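
P.S. For illustration only: going from memory, an RT-only helper like that
boils down to a trivial conditional wrapper. This is just a sketch of the
idea, not the exact macros that were removed, and the enable counterpart
is assumed here for symmetry:

/*
 * Sketch of the idea only, not the old implementation: disable/enable
 * interrupts only when PREEMPT_RT is configured, compile away to a
 * no-op otherwise.
 */
#ifdef CONFIG_PREEMPT_RT
# define local_irq_disable_rt()	local_irq_disable()
# define local_irq_enable_rt()	local_irq_enable()
#else
# define local_irq_disable_rt()	do { } while (0)
# define local_irq_enable_rt()	do { } while (0)
#endif

With something like that, the open-coded hunk above would just wrap
io_wait_on_hash() in local_irq_enable_rt()/local_irq_disable_rt() without
the explicit IS_ENABLED() checks.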