On Wed, Jan 03, 2024 at 10:58:33AM +0800, Aiqun Yu (Maria) wrote:
> On 1/2/2024 5:14 PM, Matthew Wilcox wrote:
> > > > -void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
> > > > +void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock, bool irq)
> > > >  {
> > > >  	int cnts;
> > > > 
> > > > @@ -82,7 +83,11 @@ void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
> > > Also, a new state appears with the current design:
> > > 1. The lock has _QW_WAITING set while irqs are enabled.
> > > 2. And this state will only be seen in interrupt context.
> > > 3. lock->wait_lock is held by the write waiter.
> > > So per my understanding, a different behavior is also needed in
> > > queued_write_lock_slowpath:
> > > when (unlikely(in_interrupt())), get the lock directly.
> > 
> > I don't think so.  Remember that write_lock_irq() can only be called in
> > process context, and when interrupts are enabled.
> 
> In current kernel drivers, I can see the same lock taken with write_lock_irq
> and write_lock_irqsave in different drivers.
> 
> And this is the scenario I am talking about:
> 1. cpu0 has a task running that called write_lock_irq. (Not in interrupt context.)
> 2. cpu0 holds the lock->wait_lock and has re-enabled interrupts.

Oh, I missed that it was holding the wait_lock.  Yes, we also need to
release the wait_lock before spinning with interrupts disabled.

> I was thinking of supporting both write_lock_irq and write_lock_irqsave with
> interrupts enabled together in queued_write_lock_slowpath.
> 
> That's why I am suggesting that in write_lock_irqsave, when (in_interrupt()),
> instead of spinning on lock->wait_lock, we spin to get lock->cnts directly.

Mmm, but the interrupt could come in on a different CPU, and that would
lead to it stealing the wait_lock from the CPU which is merely waiting
for the readers to go away.
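
For anyone following along, here is a rough sketch of what the slowpath
looks like with the quoted hunk applied.  This is my reconstruction from
the diff context above plus kernel/locking/qrwlock.c, not necessarily the
exact posted patch; the point is that interrupts are only re-enabled
around the wait for readers to drain, while lock->wait_lock is still
held, which is precisely the window the two scenarios above worry about.

/*
 * Sketch only -- reconstructed from the quoted hunk, not the exact
 * posted patch.  The "irq" parameter and the placement of
 * local_irq_enable()/local_irq_disable() follow the diff context above.
 */
#include <linux/hardirq.h>
#include <linux/spinlock.h>

void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock, bool irq)
{
	int cnts;

	/* Put the writer into the wait queue */
	arch_spin_lock(&lock->wait_lock);

	/* Try to acquire the lock directly if no reader is present */
	if (!(cnts = atomic_read(&lock->cnts)) &&
	    atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED))
		goto unlock;

	/* Set the waiting flag to notify readers that a writer is pending */
	atomic_or(_QW_WAITING, &lock->cnts);

	/*
	 * When no more readers or writers, set the locked flag.
	 * Re-enable interrupts only while waiting for the readers to
	 * drain; note that lock->wait_lock remains held across this
	 * window, which is the state discussed above.
	 */
	do {
		if (irq)
			local_irq_enable();
		cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
		if (irq)
			local_irq_disable();
	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
unlock:
	arch_spin_unlock(&lock->wait_lock);
}

Whether write_lock_irqsave callers (possibly in interrupt context) can
safely share this path is the open question in the exchange above.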