On Wed, Jun 07, 2017 at 03:17:27PM +0200, Peter Zijlstra wrote:

> > +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > +{
> > +	__asm__ __volatile__ (
> > +		"amoswap.w.rl x0, x0, %0"
> > +		: "=A" (lock->lock)
> > +		:: "memory");
> > +}
> > +
> > +static inline int arch_spin_trylock(arch_spinlock_t *lock)
> > +{
> > +	int tmp = 1, busy;
> > +
> > +	__asm__ __volatile__ (
> > +		"amoswap.w.aq %0, %2, %1"
> > +		: "=r" (busy), "+A" (lock->lock)
> > +		: "r" (tmp)
> > +		: "memory");
> > +
> > +	return !busy;
> > +}

One other thing: you need to describe the acquire/release semantics for
your platform.

Is the above lock RCpc or RCsc?

If RCpc, you need to look into adding smp_mb__after_unlock_lock().
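
To illustrate what that would involve -- purely a sketch on my side, not
code from this patch set -- on an architecture whose locks are RCpc the
arch header defines smp_mb__after_unlock_lock() as a real barrier, the
way powerpc does (IIRC in arch/powerpc/include/asm/spinlock.h, because
its lwsync-based lock is only RCpc):

	/* Full ordering for the UNLOCK+LOCK case on an RCpc lock. */
	#define smp_mb__after_unlock_lock()	smp_mb()

Callers that rely on an UNLOCK+LOCK pair acting as a full barrier (RCU
is the main user) then place it right after the lock acquisition:

	spin_unlock(&a->lock);
	/* ... */
	spin_lock(&b->lock);
	smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now implies smp_mb() */

If the lock is RCsc, UNLOCK+LOCK already gives full ordering and the
generic no-op definition is sufficient.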