On Tue, Dec 10, 2013 at 01:37:26PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 09, 2013 at 05:28:02PM -0800, Paul E. McKenney wrote:
> > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > index f89da808ce31..abf645799991 100644
> > --- a/arch/powerpc/include/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -84,4 +84,6 @@ do {								\
> >  	___p1;							\
> >  })
> > 
> > +#define smp_mb__after_unlock_lock()	do { } while (0)
> > +
> >  #endif /* _ASM_POWERPC_BARRIER_H */
> 
> Didn't Ben say ppc actually violates the current unlock+lock assumption,
> and therefore this barrier wouldn't actually be a nop on ppc?

Or, ppc could fix its lock primitives to preserve the unlock+lock
assumption, and avoid subtle breakage across half the kernel.

- Josh Triplett
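
For concreteness, here is a minimal sketch of the pattern being argued
about; the lock names, variables, and function are invented for
illustration and are not taken from Paul's patch. The question is whether
a store before the UNLOCK is ordered against a load after the subsequent
LOCK on the same CPU, which is what decides whether
smp_mb__after_unlock_lock() can legitimately be a no-op. On ppc the unlock
path uses lwsync rather than a full sync, which is why the question is
not trivial there.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);
static DEFINE_SPINLOCK(lock_b);
static int x, y, r1;

static void unlock_lock_sketch(void)
{
	spin_lock(&lock_a);
	x = 1;				/* store before the UNLOCK */
	spin_unlock(&lock_a);

	spin_lock(&lock_b);
	/*
	 * If UNLOCK(lock_a)+LOCK(lock_b) is already a full barrier on this
	 * architecture, this can be a no-op; if not (the concern raised
	 * for ppc), it must emit a real barrier so that the store to x
	 * cannot be reordered past the load from y.
	 */
	smp_mb__after_unlock_lock();
	r1 = y;				/* load after the LOCK */
	spin_unlock(&lock_b);
}

If ppc's UNLOCK+LOCK sequence does not provide that ordering, defining the
primitive as do { } while (0) there, as in the hunk above, silently drops
the guarantee its callers rely on.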