Re: [PATCH v2 3/4] MCS Lock: Barrier corrections

On Tue, 2013-11-05 at 22:18 +0100, Peter Zijlstra wrote:
> On Tue, Nov 05, 2013 at 11:21:57AM -0800, Tim Chen wrote:
> > On Tue, 2013-11-05 at 18:37 +0000, Will Deacon wrote:
> > > On Tue, Nov 05, 2013 at 05:42:36PM +0000, Tim Chen wrote:
> > > > This patch corrects the way memory barriers are used in the MCS lock
> > > > and removes ones that are not needed. Also add comments on all barriers.
> > > 
> > > Hmm, I see that you're fixing up the barriers, but I still don't completely
> > > understand how what you have is correct. Hopefully you can help me out :)
> > > 
> > > > Reviewed-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> > > > Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > > > Signed-off-by: Jason Low <jason.low2@xxxxxx>
> > > > ---
> > > >  include/linux/mcs_spinlock.h |   13 +++++++++++--
> > > >  1 files changed, 11 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
> > > > index 96f14299..93d445d 100644
> > > > --- a/include/linux/mcs_spinlock.h
> > > > +++ b/include/linux/mcs_spinlock.h
> > > > @@ -36,16 +36,19 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> > > >  	node->locked = 0;
> > > >  	node->next   = NULL;
> > > >  
> > > > +	/* xchg() provides a memory barrier */
> > > >  	prev = xchg(lock, node);
> > > >  	if (likely(prev == NULL)) {
> > > >  		/* Lock acquired */
> > > >  		return;
> > > >  	}
> > > >  	ACCESS_ONCE(prev->next) = node;
> > > > -	smp_wmb();
> > > >  	/* Wait until the lock holder passes the lock down */
> > > >  	while (!ACCESS_ONCE(node->locked))
> > > >  		arch_mutex_cpu_relax();
> > > > +
> > > > +	/* Make sure subsequent operations happen after the lock is acquired */
> > > > +	smp_rmb();
> > > 
> > > Ok, so this is an smp_rmb() because we assume that stores aren't speculated,
> > > right? (i.e. the control dependency above is enough for stores to be ordered
> > > with respect to taking the lock)...
> 

The smp_rmb() was put in to make sure that the lock is indeed set
before we start doing speculative reads in the next critical
section.

I wonder if your concern is about the possibility of a write in the next
critical section bleeding into a read in the previous critical section?

If reads and writes are re-ordered in the previous critical section before mcs_spin_unlock(),
it may be possible that the previous critical section is still
reading when it sets the lock for the next MCS node in mcs_spin_unlock().
This would allow the next critical section to start writing prematurely, before
the previous critical section has finished all of its reads.
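That hazard can be sketched as a litmus test (CPU numbering and the
variable X are hypothetical, just to illustrate the interleaving):

```
CPU0 (previous lock holder)            CPU1 (next in queue)
-----------------------------------    -----------------------------------
r1 = X;           /* CS read      */   while (!node->locked) relax();
smp_wmb();        /* stores only  */   /* acquires lock */
next->locked = 1; /* pass lock on */   X = 1;  /* CS write */
```

Since smp_wmb() orders only CPU0's stores, the read of X is not ordered
before the store to next->locked, so CPU1's write to X may become visible
while CPU0's critical section is still reading; smp_mb() would forbid that.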

If this concern is valid, we should change the smp_wmb() to smp_mb()
in the unlock function, to make sure the previous critical section has
completed all of its operations before the next section starts.


> PaulMck completely confused me a few days ago with control dependencies
> etc.. Pretty much saying that C/C++ doesn't do those.

Will appreciate feedback on getting the barriers right.

Thanks.

Tim

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
