Re: [PATCH V11 04/17] locking/qspinlock: Improve xchg_tail for number of cpus >= 16k

On 9/10/23 23:09, Guo Ren wrote:
On Mon, Sep 11, 2023 at 10:35 AM Waiman Long <longman@xxxxxxxxxx> wrote:

On 9/10/23 04:28, guoren@xxxxxxxxxx wrote:
From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>

The target of xchg_tail is to write the tail to the lock value, so
adding prefetchw can help the subsequent cmpxchg step, which may
reduce the number of cmpxchg retry loops in xchg_tail. Some processors
may utilize this hint to give a forward-progress guarantee, e.g.,
RISC-V XuanTie processors block the snoop channel & irq for several
cycles after a prefetch.w instruction (from the Zicbop extension)
retires, which guarantees the next cmpxchg succeeds.

Signed-off-by: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
Signed-off-by: Guo Ren <guoren@xxxxxxxxxx>
---
   kernel/locking/qspinlock.c | 5 ++++-
   1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index d3f99060b60f..96b54e2ade86 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -223,7 +223,10 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
    */
   static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
   {
-     u32 old, new, val = atomic_read(&lock->val);
+     u32 old, new, val;
+
+     prefetchw(&lock->val);
+     val = atomic_read(&lock->val);

       for (;;) {
               new = (val & _Q_LOCKED_PENDING_MASK) | tail;
That looks a bit weird. You prefetch and then immediately read it. How
much performance gain do you get from this change alone?

Maybe you can define an arch-specific primitive that defaults back to
atomic_read() if not defined.
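
Purely as an illustration of that suggestion, such a fallback could
look like the sketch below; the name atomic_read_prefetchw() is made
up here for illustration and is not an existing kernel primitive:

#ifndef atomic_read_prefetchw
/*
 * Default: no special prefetch-for-write support, just read the value.
 * Architectures that can gain exclusive ownership of the cacheline
 * before the cmpxchg loop may override this with a prefetchw() +
 * atomic_read() sequence.
 */
static __always_inline u32 atomic_read_prefetchw(atomic_t *v)
{
	return atomic_read(v);
}
#endif

The generic xchg_tail() would then simply do
"val = atomic_read_prefetchw(&lock->val);" with no #ifdef in the
generic code.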
Thx for the reply. This is a generic optimization point I would like
to discuss with you.

First, prefetchw() brings the cacheline into the exclusive state and
serves the next cmpxchg loop, which writes the idx_tail part of
arch_spin_lock. atomic_read() only brings the cacheline into the
shared state, which cannot give any guarantee for the next cmpxchg
loop. A micro-architecture can utilize prefetchw() to provide a
strong forward-progress guarantee for xchg_tail, e.g., the T-HEAD
XuanTie processor holds the cacheline in the exclusive state until
the next cmpxchg write succeeds.

In the end, let's go back to the principle: xchg_tail is an atomic
swap operation that eventually performs a write, so issuing a
prefetchw() at the beginning is acceptable for all architectures.
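
For illustration only, an architecture could back prefetchw() with a
store-prefetch hint roughly like the sketch below (this assumes the
assembler understands the Zicbop prefetch.w mnemonic; the actual
RISC-V patch in this series may encode the instruction differently):

#define ARCH_HAS_PREFETCHW

/*
 * Prefetch the cacheline for writing: the core requests exclusive
 * ownership, so a following store/cmpxchg does not need another
 * coherence round trip.
 */
static inline void prefetchw(const void *ptr)
{
	__asm__ __volatile__("prefetch.w 0(%0)" : : "r" (ptr));
}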

I did realize afterward that prefetchw() gets the cacheline in the exclusive state. I would suggest mentioning that in your commit log as well as adding a comment about its purpose in the code.
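
For example (a sketch only, the exact wording is of course up to you):

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	u32 old, new, val;

	/*
	 * Prefetch the lock word for ownership (exclusive cacheline
	 * state) so that the cmpxchg() below is less likely to retry,
	 * and so that architectures which can hold the line exclusive
	 * across the prefetch can provide a forward progress guarantee.
	 */
	prefetchw(&lock->val);
	val = atomic_read(&lock->val);

	for (;;) {
		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
		/*
		 * We can use relaxed semantics since the caller ensures that
		 * the MCS node is properly initialized before updating the
		 * tail.
		 */
		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}
	return old;
}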

Thanks,
Longman

Cheers,
Longman





