On Wed, Jun 29, 2022 at 4:27 PM David Laight <David.Laight@xxxxxxxxxx> wrote:
>
> From: guoren@xxxxxxxxxx
> > Sent: 28 June 2022 09:17
> >
> > From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
> >
> > Remove the unnecessary atomic_read in arch_spin_value_unlocked(lock),
> > because the value is already in lock. This patch prevents
> > arch_spin_value_unlocked from contending for the spin_lock data again.
>
> I'm confused (as usual).
> Isn't atomic_read() pretty much free?

When a cache line is shared among multiple harts, it is not as free as you
think. Avoiding touching contended data is the basic principle here. Also,
atomic_read on alpha is heavy, and alpha could be a potential user of
ticket-lock.

> > ..
> > diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
> > index fdfebcb050f4..f1e4fa100f5a 100644
> > --- a/include/asm-generic/spinlock.h
> > +++ b/include/asm-generic/spinlock.h
> > @@ -84,7 +84,9 @@ static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
> >
> >  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
> >  {
> > -	return !arch_spin_is_locked(&lock);
> > +	u32 val = lock.counter;
> > +
> > +	return ((val >> 16) == (val & 0xffff));
>
> That almost certainly needs a READ_ONCE().
>
> The result is also inherently stale.
> So the uses must be pretty limited.

The caller's earlier READ_ONCE() may fetch 64 bits; this API is then used to
check the 32-bit atomic lock part of that value.

> 	David
>
> >  }
> >
> >  #include <asm/qrwlock.h>
> > --
> > 2.36.1
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
>

--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/