On Tue, 2 Aug 2022, Will Deacon wrote:

> On Mon, Aug 01, 2022 at 12:12:47PM -0400, Mikulas Patocka wrote:
> > On Mon, 1 Aug 2022, Will Deacon wrote:
> > > On Mon, Aug 01, 2022 at 06:42:15AM -0400, Mikulas Patocka wrote:
> > > >
> > > > Index: linux-2.6/arch/x86/include/asm/bitops.h
> > > > ===================================================================
> > > > --- linux-2.6.orig/arch/x86/include/asm/bitops.h	2022-08-01 12:27:43.000000000 +0200
> > > > +++ linux-2.6/arch/x86/include/asm/bitops.h	2022-08-01 12:27:43.000000000 +0200
> > > > @@ -203,8 +203,10 @@ arch_test_and_change_bit(long nr, volati
> > > >
> > > >  static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
> > > >  {
> > > > -	return ((1UL << (nr & (BITS_PER_LONG-1))) &
> > > > +	bool r = ((1UL << (nr & (BITS_PER_LONG-1))) &
> > > >  		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
> > > > +	barrier();
> > > > +	return r;
> > >
> > > Hmm, I find it a bit weird to have a barrier() here given that 'addr' is
> > > volatile and we don't need a barrier() like this in the definition of
> > > READ_ONCE(), for example.
> >
> > gcc doesn't reorder two volatile accesses, but it can reorder non-volatile
> > accesses around volatile accesses.
> >
> > The purpose of the compiler barrier is to make sure that the non-volatile
> > accesses that follow test_bit are not reordered by the compiler before the
> > volatile access to addr.
>
> If we need these accesses to be ordered reliably, then we need a CPU barrier
> and that will additionally prevent the compiler reordering. So I still don't
> think we need the barrier() here.

This is x86-specific code. x86 has strong memory ordering, so we only care
about compiler reordering.

We could use smp_rmb() (or smp_load_acquire()) instead of barrier() here, but
smp_rmb() and smp_load_acquire() on x86 are identical to barrier() anyway.

Mikulas