On Fri, Jun 12, 2015 at 05:45:59PM +0530, Vineet Gupta wrote:

> - arch_spin_lock/unlock were lacking the ACQUIRE/RELEASE barriers.
>   Since ARCv2 only provides load/load, store/store and all/all, we need
>   the full barrier.
>
> - LLOCK/SCOND based atomics, bitops, cmpxchg, which return modified
>   values, were lacking the explicit smp barriers.
>
> - Non LLOCK/SCOND variants don't need the explicit barriers since that
>   is implicitly provided by the spin locks used to implement the
>   critical section (the spin lock barriers in turn are also fixed in
>   this commit, as explained above).

And IIRC you're relying on asm-generic/barrier.h to issue
smp_mb__{before,after}_atomic() as smp_mb(), right?

Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>

Although I'd love to know why you need those extra barriers in your
spinlocks...