On Mon, 2014-10-13 at 15:44 +0200, Jiri Slaby wrote:
> From: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
>
> This patch does NOT apply to the 3.12 stable tree. If you still want
> it applied, please provide a backport.
>
> ===============

>From dfdf7ca58efedb5271f43811e6bd036551c198ef Mon Sep 17 00:00:00 2001
From: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Date: Thu, 7 Aug 2014 15:36:18 +1000
Subject: [PATCH 2/2] powerpc: Add smp_mb()s to arch_spin_unlock_wait()

commit 78e05b1421fa41ae8457701140933baa5e7d9479 upstream.

Similar to the previous commit which described why we need to add a
barrier to arch_spin_is_locked(), we have a similar problem with
spin_unlock_wait().

We need a barrier on entry to ensure any spinlock we have previously
taken is visibly locked prior to the load of lock->slock.

It's also not clear if spin_unlock_wait() is intended to have ACQUIRE
semantics. For now be conservative and add a barrier on exit to give it
ACQUIRE semantics.

Signed-off-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Signed-off-by: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/powerpc/lib/locks.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 0c9c8d7d0734..170a0346f756 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -70,12 +70,16 @@ void __rw_yield(arch_rwlock_t *rw)
 
 void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
+	smp_mb();
+
 	while (lock->slock) {
 		HMT_low();
 		if (SHARED_PROCESSOR)
 			__spin_yield(lock);
 	}
 	HMT_medium();
+
+	smp_mb();
 }
 
 EXPORT_SYMBOL(arch_spin_unlock_wait);
-- 
1.9.1
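
For anyone who wants to see the reordering the entry barrier guards against
without a Power box, below is a rough user-space sketch. It is not from the
patch and every name in it (lock_a, lock_b, cpu_a, cpu_b, TRIALS) is made up
for the illustration: the two lock words are modelled as C11 atomics, the
patch's smp_mb() as a seq_cst fence, and the first read that
arch_spin_unlock_wait() does of lock->slock as a relaxed load. Each trial is
the classic store-buffering pattern, where each side "takes" its own lock and
then looks at the other one.

/*
 * Rough user-space illustration only (not kernel code, not part of the
 * patch): lock words are C11 atomics, smp_mb() is a seq_cst fence, and the
 * read of lock->slock is a relaxed load.
 *
 * With the fences in place, at least one thread must see the other lock
 * held in every trial; without them, both can see it as free, which is
 * the hazard the entry barrier closes.
 */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define TRIALS 1000000

static atomic_int lock_a, lock_b;       /* stand-ins for arch_spinlock_t.slock */
static int saw_b_free, saw_a_free;      /* per-trial observations */
static pthread_barrier_t start_bar, done_bar;

static void *cpu_a(void *unused)
{
	for (int i = 0; i < TRIALS; i++) {
		pthread_barrier_wait(&start_bar);
		atomic_store_explicit(&lock_a, 1, memory_order_relaxed); /* "take" lock A */
		atomic_thread_fence(memory_order_seq_cst);               /* the entry smp_mb() */
		saw_b_free = !atomic_load_explicit(&lock_b, memory_order_relaxed);
		pthread_barrier_wait(&done_bar);
	}
	return NULL;
}

static void *cpu_b(void *unused)
{
	for (int i = 0; i < TRIALS; i++) {
		pthread_barrier_wait(&start_bar);
		atomic_store_explicit(&lock_b, 1, memory_order_relaxed); /* "take" lock B */
		atomic_thread_fence(memory_order_seq_cst);               /* the entry smp_mb() */
		saw_a_free = !atomic_load_explicit(&lock_a, memory_order_relaxed);
		pthread_barrier_wait(&done_bar);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	long both_free = 0;

	pthread_barrier_init(&start_bar, NULL, 3);
	pthread_barrier_init(&done_bar, NULL, 3);
	pthread_create(&a, NULL, cpu_a, NULL);
	pthread_create(&b, NULL, cpu_b, NULL);

	for (int i = 0; i < TRIALS; i++) {
		atomic_store(&lock_a, 0);         /* reset both locks for this trial */
		atomic_store(&lock_b, 0);
		pthread_barrier_wait(&start_bar); /* let both threads run the pattern */
		pthread_barrier_wait(&done_bar);  /* collect their observations */
		if (saw_a_free && saw_b_free)
			both_free++;              /* forbidden once the fences are in */
	}

	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("both sides saw the other lock free in %ld of %d trials\n",
	       both_free, TRIALS);
	return 0;
}

Build with something like gcc -O2 -pthread sb_sketch.c. With the fences the
final count should stay at zero; weakening them to relaxed ordering typically
lets it climb, which mirrors why arch_spin_unlock_wait() needs the barrier
before it first loads lock->slock.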