Re: [PATCH] kernel: Introduce a write lock/unlock wrapper for tasklist_lock

On 1/4/2024 2:18 AM, Matthew Wilcox wrote:
> On Wed, Jan 03, 2024 at 10:58:33AM +0800, Aiqun Yu (Maria) wrote:
>> On 1/2/2024 5:14 PM, Matthew Wilcox wrote:
>>> -void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
>>> +void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock, bool irq)
>>>  {
>>>  	int cnts;
>>> @@ -82,7 +83,11 @@ void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock)
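
(Filling in the elided hunk for readers of the thread: my reading is that
the slowpath would end up roughly like the below. This is a sketch
reconstructed from the current kernel/locking/qrwlock.c plus the signature
change above, not the actual patch body.)

void __lockfunc queued_write_lock_slowpath(struct qrwlock *lock, bool irq)
{
	int cnts;

	/* Put the writer into the wait queue */
	arch_spin_lock(&lock->wait_lock);

	/* Try to acquire the lock directly if no reader is present */
	if (!(cnts = atomic_read(&lock->cnts)) &&
	    atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED))
		goto unlock;

	/* Set the waiting flag to notify readers that a writer is pending */
	atomic_or(_QW_WAITING, &lock->cnts);

	/* When no more readers or writers, set the locked flag */
	do {
		if (irq)
			local_irq_enable();	/* reconstructed: reopen the irq window while spinning */
		cnts = atomic_cond_read_relaxed(&lock->cnts,
						VAL == _QW_WAITING);
		if (irq)
			local_irq_disable();
	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
unlock:
	arch_spin_unlock(&lock->wait_lock);
}
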
>> Also, a new state shows up with the current design:
>> 1. The lock word has _QW_WAITING set while irqs are enabled.
>> 2. And this state will only be observed from interrupt context.
>> 3. lock->wait_lock is held by the write waiter.
>> So per my understanding, a different behavior is also needed in
>> queued_write_lock_slowpath:
>>     when (unlikely(in_interrupt())), get the lock directly.

> I don't think so.  Remember that write_lock_irq() can only be called in
> process context, and when interrupts are enabled.
>> In current kernel drivers, I can see the same lock being taken with
>> write_lock_irq and write_lock_irqsave in different drivers.
>>
>> And this is the scenario I am talking about:
>> 1. cpu0 has a task running that called write_lock_irq (not in interrupt
>>    context).
>> 2. cpu0 holds the lock->wait_lock and re-enables interrupts.
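
To spell out the deadlock this allows (assuming the interrupt handler takes
the same lock for write, e.g. via write_lock_irqsave; names abbreviated):

  cpu0, process context                  cpu0, interrupt context
  ---------------------                  -----------------------
  write_lock_irq(&l)
    queued_write_lock_slowpath(l, true)
      arch_spin_lock(&l->wait_lock)
      local_irq_enable()
      <interrupt fires>
                                         write_lock_irqsave(&l, flags)
                                           queued_write_lock_slowpath(l, false)
                                             arch_spin_lock(&l->wait_lock)
                                               spins forever: wait_lock is
                                               already held by the interrupted
                                               task => deadlock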

> Oh, I missed that it was holding the wait_lock.  Yes, we also need to
> release the wait_lock before spinning with interrupts disabled.

>> I was thinking of supporting both write_lock_irq and write_lock_irqsave,
>> with interrupts enabled while spinning, together in
>> queued_write_lock_slowpath.
>>
>> That's why I am suggesting that in write_lock_irqsave, when
>> in_interrupt(), we spin to get lock->cnts directly instead of spinning
>> for the lock->wait_lock.

> Mmm, but the interrupt could come in on a different CPU and that would
> lead to it stealing the wait_lock from the CPU which is merely waiting
> for the readers to go away.
That's right.

Fairness (the queueing mechanism) wouldn't be ensured, though only for
waiters in interrupt context, if we add the special design of spinning
on lock->cnts directly when in_interrupt(). A later write_lock_irqsave
issued in interrupt context may then get the lock earlier than a
write_lock_irq caller that is not in interrupt context.
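
Concretely, the special case I have in mind would sit at the top of the
slowpath, something like the below (hypothetical and untested; it
deliberately lets an interrupt-context writer jump the queue):

	/* Sketch only: at the top of queued_write_lock_slowpath(),
	 * before the wait_lock is taken. */
	if (unlikely(in_interrupt())) {
		int cnts;

		do {
			/* Wait until no reader holds the lock and no writer
			 * owns it; a queued writer may still have _QW_WAITING
			 * set, which we preserve -- this is the unfair part. */
			cnts = atomic_cond_read_relaxed(&lock->cnts,
							!(VAL & ~_QW_WAITING));
		} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts,
						     cnts | _QW_LOCKED));
		return;
	}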

This is a side effect of the design, but a similar unfairness already
exists on the read lock side. I think it is reasonable to let
in_interrupt() waiters take the lock earlier from the point of view of
whole-system performance.
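
For reference, the reader-side bypass in queued_read_lock_slowpath()
looks like this (excerpted from kernel/locking/qrwlock.c; please check
your tree for the exact text):

	if (unlikely(in_interrupt())) {
		/*
		 * Readers in interrupt context will get the lock immediately
		 * if the writer is just waiting (not holding the lock yet),
		 * so spin with ACQUIRE semantics until the lock is available
		 * without waiting in the queue.
		 */
		atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
		return;
	}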


--
Thx and BRs,
Aiqun(Maria) Yu



