Re: [PATCH v4 1/2] introduce test_bit_acquire and use it in wait_on_bit

On Mon, Aug 01, 2022 at 12:12:47PM -0400, Mikulas Patocka wrote:
> On Mon, 1 Aug 2022, Will Deacon wrote:
> > On Mon, Aug 01, 2022 at 06:42:15AM -0400, Mikulas Patocka wrote:
> > 
> > > Index: linux-2.6/arch/x86/include/asm/bitops.h
> > > ===================================================================
> > > --- linux-2.6.orig/arch/x86/include/asm/bitops.h	2022-08-01 12:27:43.000000000 +0200
> > > +++ linux-2.6/arch/x86/include/asm/bitops.h	2022-08-01 12:27:43.000000000 +0200
> > > @@ -203,8 +203,10 @@ arch_test_and_change_bit(long nr, volati
> > >  
> > >  static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
> > >  {
> > > -	return ((1UL << (nr & (BITS_PER_LONG-1))) &
> > > +	bool r = ((1UL << (nr & (BITS_PER_LONG-1))) &
> > >  		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
> > > +	barrier();
> > > +	return r;
> > 
> > Hmm, I find it a bit weird to have a barrier() here given that 'addr' is
> > volatile and we don't need a barrier() like this in the definition of
> > READ_ONCE(), for example.
> 
> gcc doesn't reorder two volatile accesses, but it can reorder non-volatile
> accesses around volatile accesses.
> 
> The purpose of the compiler barrier is to make sure that the non-volatile 
> accesses that follow test_bit are not reordered by the compiler before the 
> volatile access to addr.

If we need these accesses to be ordered reliably, then we need a CPU barrier
and that will additionally prevent the compiler reordering. So I still don't
think we need the barrier() here.
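
Concretely, if the ordering does matter then an acquire load on the bit
word covers both the CPU and the compiler, i.e. roughly the sketch below
(an assumption about the shape, not the actual patch):

	static __always_inline bool test_bit_acquire(long nr, const volatile unsigned long *addr)
	{
		unsigned long val = smp_load_acquire(&addr[nr >> _BITOPS_LONG_SHIFT]);

		/* The acquire on the load orders all later accesses after the bit test. */
		return val & (1UL << (nr & (BITS_PER_LONG - 1)));
	}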

> > > Index: linux-2.6/include/linux/wait_bit.h
> > > ===================================================================
> > > --- linux-2.6.orig/include/linux/wait_bit.h	2022-08-01 12:27:43.000000000 +0200
> > > +++ linux-2.6/include/linux/wait_bit.h	2022-08-01 12:27:43.000000000 +0200
> > > @@ -71,7 +71,7 @@ static inline int
> > >  wait_on_bit(unsigned long *word, int bit, unsigned mode)
> > >  {
> > >  	might_sleep();
> > > -	if (!test_bit(bit, word))
> > > +	if (!test_bit_acquire(bit, word))
> > >  		return 0;
> > >  	return out_of_line_wait_on_bit(word, bit,
> > >  				       bit_wait,
> > 
> > Yet another approach here would be to leave test_bit as-is and add a call to
> > smp_acquire__after_ctrl_dep() since that exists already -- I don't have
> > strong opinions about it, but it saves you having to add another stub to
> > x86.
> 
> That would be the same as my previous patch with smp_rmb(), which Linus
> didn't like. But I think smp_rmb() (or smp_acquire__after_ctrl_dep())
> would be correct here.

Right, I saw Linus' objection to smp_rmb() and I'm not sure where
smp_acquire__after_ctrl_dep() fits in with his line of reasoning. On the one
hand, it's talking about acquire ordering, but on the other, it's ugly as
sin :)
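
For reference, what I was suggesting above would look roughly like this
(sketch only, keeping test_bit() as-is):

	static inline int
	wait_on_bit(unsigned long *word, int bit, unsigned mode)
	{
		might_sleep();
		if (!test_bit(bit, word)) {
			/* Upgrade the control dependency on the bit test to acquire ordering. */
			smp_acquire__after_ctrl_dep();
			return 0;
		}
		return out_of_line_wait_on_bit(word, bit, bit_wait, mode);
	}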

Will


