On Sat, Jul 22, 2023 at 10:07:19PM -0400, Waiman Long wrote:
> On 7/19/23 03:00, guoren@xxxxxxxxxx wrote:
> > From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
> >
> > Using arch_spinlock_is_locked would cause another unnecessary memory
> > access to the contended value. Although it won't cause a significant
> > performance gap in most architectures, the arch_spin_value_unlocked
> > argument contains enough information. Thus, remove unnecessary
> > atomic_read in arch_spin_value_unlocked().
>
> AFAICS, only one memory access is needed for the current
> arch_spinlock_is_locked(). So your description isn't quite right. OTOH,

Okay, I will improve the description. What it means is that implementing
arch_spin_value_unlocked() via arch_spinlock_is_locked() would cause "an"
unnecessary memory access.

> caller of arch_spin_value_unlocked() could benefit from this change.
> Currently, the only caller is lockref.

Thanks for the comment; I will mention that in the commit message. The new
version is here:
https://lore.kernel.org/linux-riscv/20230731023308.3748432-1-guoren@xxxxxxxxxx/

>
> Other than that, the patch looks good to me.
>
> Cheers,
> Longman
>
> >
> > Signed-off-by: Guo Ren <guoren@xxxxxxxxxx>
> > Signed-off-by: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
> > Cc: David Laight <David.Laight@xxxxxxxxxx>
> > Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > ---
> > Changelog:
> > This patch is separate from:
> > https://lore.kernel.org/linux-riscv/20220808071318.3335746-1-guoren@xxxxxxxxxx/
> >
> > Peter & David have commented on it:
> > https://lore.kernel.org/linux-riscv/YsK4Z9w0tFtgkni8@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
> > ---
> >  include/asm-generic/spinlock.h | 16 +++++++++-------
> >  1 file changed, 9 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
> > index fdfebcb050f4..90803a826ba0 100644
> > --- a/include/asm-generic/spinlock.h
> > +++ b/include/asm-generic/spinlock.h
> > @@ -68,11 +68,18 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
> >  	smp_store_release(ptr, (u16)val + 1);
> >  }
> >
> > +static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
> > +{
> > +	u32 val = lock.counter;
> > +
> > +	return ((val >> 16) == (val & 0xffff));
> > +}
> > +
> >  static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
> >  {
> > -	u32 val = atomic_read(lock);
> > +	arch_spinlock_t val = READ_ONCE(*lock);
> >
> > -	return ((val >> 16) != (val & 0xffff));
> > +	return !arch_spin_value_unlocked(val);
> >  }
> >
> >  static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
> > @@ -82,11 +89,6 @@ static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
> >  	return (s16)((val >> 16) - (val & 0xffff)) > 1;
> >  }
> >
> > -static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
> > -{
> > -	return !arch_spin_is_locked(&lock);
> > -}
> > -
> >  #include <asm/qrwlock.h>
> >
> >  #endif /* __ASM_GENERIC_SPINLOCK_H */
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@xxxxxxxxxxxxxxxxxxx
> http://lists.infradead.org/mailman/listinfo/linux-riscv