On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> Rather than relying on the core code to use smp_read_barrier_depends()
> as part of the READ_ONCE() definition, instead override __READ_ONCE()
> in the Alpha code so that it is treated the same way as
> smp_load_acquire().
>
> Acked-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> ---
>  arch/alpha/include/asm/barrier.h | 61 ++++----------------------------
>  arch/alpha/include/asm/rwonce.h  | 19 ++++++++++
>  2 files changed, 26 insertions(+), 54 deletions(-)
>  create mode 100644 arch/alpha/include/asm/rwonce.h
>
> diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
> index 92ec486a4f9e..2ecd068d91d1 100644
> --- a/arch/alpha/include/asm/barrier.h
> +++ b/arch/alpha/include/asm/barrier.h
> @@ -2,64 +2,17 @@
>  #ifndef __BARRIER_H
>  #define __BARRIER_H
>
> -#include <asm/compiler.h>
> -
>  #define mb()	__asm__ __volatile__("mb": : :"memory")
>  #define rmb()	__asm__ __volatile__("mb": : :"memory")
>  #define wmb()	__asm__ __volatile__("wmb": : :"memory")
> -#define read_barrier_depends()	__asm__ __volatile__("mb": : :"memory")
> +#define __smp_load_acquire(p)						\
> +({									\
> +	__unqual_scalar_typeof(*p) ___p1 =				\
> +		(*(volatile typeof(___p1) *)(p));			\
> +	compiletime_assert_atomic_type(*p);				\
> +	___p1;								\
> +})

Sorry if I'm being thick, but doesn't this need a barrier after the
volatile access to provide the acquire semantic?

IIUC prior to this commit alpha would have used the asm-generic
__smp_load_acquire, i.e.

| #ifndef __smp_load_acquire
| #define __smp_load_acquire(p)					\
| ({								\
| 	__unqual_scalar_typeof(*p) ___p1 = READ_ONCE(*p);	\
| 	compiletime_assert_atomic_type(*p);			\
| 	__smp_mb();						\
| 	(typeof(*p))___p1;					\
| })
| #endif

... where the __smp_mb() would be alpha's mb() from earlier in the patch
context, i.e.

| #define mb()	__asm__ __volatile__("mb": : :"memory")

...
so don't we need similar before returning ___p1 above in
__smp_load_acquire() (and also matching the old read_barrier_depends())?

[...]

> +#include <asm/barrier.h>
> +
> +/*
> + * Alpha is apparently daft enough to reorder address-dependent loads
> + * on some CPU implementations. Knock some common sense into it with
> + * a memory barrier in READ_ONCE().
> + */
> +#define __READ_ONCE(x)	__smp_load_acquire(&(x))

As above, I don't see a memory barrier implied here, so this doesn't
look quite right.

Thanks,
Mark.

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization