> > > + select ARCH_USE_QUEUED_SPINLOCKS if TOOLCHAIN_HAS_ZABHA
> > 
> > IIUC, we should make sure qspinlocks run with ARCH_WEAK_RELEASE_ACQUIRE,
> > perhaps a similar select for the latter?  (not a kconfig expert)
> 
> Where did you see this dependency? And if that is really a dependency of
> qspinlocks, shouldn't this be under CONFIG_QUEUED_SPINLOCKS? (not a Kconfig
> expert too).

The comment on smp_mb__after_unlock_lock() in include/linux/rcupdate.h (the
barrier is currently only used by the RCU subsystem) recalls:

/*
 * Place this after a lock-acquisition primitive to guarantee that
 * an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
 * if the UNLOCK and LOCK are executed by the same CPU or if the
 * UNLOCK and LOCK operate on the same lock variable.
 */
#ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE
#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
#else /* #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */
#define smp_mb__after_unlock_lock()	do { } while (0)
#endif /* #else #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */

Architectures whose UNLOCK+LOCK implementation does not (already) provide the
required "full barrier" ordering (currently, only powerpc) can opt in to
ARCH_WEAK_RELEASE_ACQUIRE, which replaces the "default"/common definition of
this barrier (a NOP) with smp_mb() and thus restores the ordering.

The (current) "generic" ticket lock implementation provides the full barrier
in its LOCK operations (hence, in particular, in UNLOCK+LOCK pairs), cf.

  arch_spin_trylock() -> atomic_try_cmpxchg()

  arch_spin_lock()    -> atomic_fetch_add()
                      -> atomic_cond_read_acquire(); smp_mb()

but the "UNLOCK+LOCK pairs act as a full barrier" property no longer holds
for riscv (and powerpc) when switching over to queued spinlocks.

OTOH, I see no particular reason for other "users" of queued spinlocks
(notably, x86 and arm64) to select ARCH_WEAK_RELEASE_ACQUIRE.

But does this address your concern?  Let me know if I misunderstood it.

  Andrea
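
P.S.  In case a concrete reference helps, below is a minimal sketch of that
ticket-lock LOCK path; this is a paraphrase of the generic implementation in
include/asm-generic/spinlock.h (trimmed and annotated here, not a verbatim
copy), only meant to show where the full barrier in LOCK comes from:

/* Sketch, modeled on include/asm-generic/spinlock.h (simplified). */
static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
{
	/* Fully ordered RMW: take the next ticket; acts as a full barrier. */
	u32 val = atomic_fetch_add(1 << 16, &lock->val);
	u16 ticket = val >> 16;

	if (ticket == (u16)val)
		return;		/* uncontended: the fetch_add above suffices */

	/*
	 * Contended: wait (with acquire semantics) for our ticket to become
	 * the owner, then upgrade to full ordering with the explicit smp_mb().
	 */
	atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
	smp_mb();
}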