On Wed, Jan 14, 2015 at 11:31:47AM +0000, Will Deacon wrote:
> Hi Oleg,
>
> On Tue, Jan 13, 2015 at 06:45:10PM +0000, Oleg Nesterov wrote:
> > On 01/13, Will Deacon wrote:
> > >
> > > 1. Does smp_mb__before_spinlock actually have to order prior loads
> > >    against later loads and stores? Documentation/memory-barriers.txt
> > >    says it does, but that doesn't match the comment
> >
> > The comment says that smp_mb__before_spinlock() + spin_lock() should
> > only serialize STOREs with LOADs. This is because it was added to ensure
> > that the setting of condition can't race with ->state check in ttwu().
>
> Yup, that makes sense. The comment is consistent with the code, and I think
> the code is doing what it's supposed to do.
>
> > But since we use wmb() it obviously serializes STOREs with STORES. I do
> > not know if this should be documented, but we already have another user
> > which seems to rely on this fact: set_tlb_flush_pending().
>
> In which case, it's probably a good idea to document that too.
>
> > As for "prior loads", this doesn't look true...
>
> Agreed. I'd propose something like the diff below, but it also depends on
> my second question since none of this is true for smp_load_acquire.

OK, finally getting to this, apologies for the delay...

It does look like I was momentarily confusing the memory ordering implied
by lock acquisition with that implied by smp_load_acquire().  Your patch
looks good; would you be willing to resend it with a commit log and
Signed-off-by?

							Thanx, Paul

> Will
>
> --->8
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index 70a09f8a0383..9c0e3c45a807 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1724,10 +1724,9 @@ for each construct.  These operations all imply certain barriers:
>
>       Memory operations issued before the ACQUIRE may be completed after
>       the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
> -     combined with a following ACQUIRE, orders prior loads against
> -     subsequent loads and stores and also orders prior stores against
> -     subsequent stores.  Note that this is weaker than smp_mb()!  The
> -     smp_mb__before_spinlock() primitive is free on many architectures.
> +     combined with a following ACQUIRE, orders prior stores against
> +     subsequent loads and stores.  Note that this is weaker than smp_mb()!
> +     The smp_mb__before_spinlock() primitive is free on many architectures.
>
>      (2) RELEASE operation implication:
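
[Editor's note: for readers unfamiliar with why smp_mb__before_spinlock()
exists, here is a condensed, hypothetical sketch of the sleep/wakeup
pattern discussed above.  It is not the actual scheduler code; the real
->state check lives in try_to_wake_up() and does considerably more.  The
demo_* function names and the cond flag are made up for illustration, and
only the barrier placement is the point.]

#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/compiler.h>

/*
 * Condensed sketch of the relevant part of try_to_wake_up().  The
 * caller has already stored the wakeup condition; that STORE must be
 * ordered before the LOAD of p->state below.  The ACQUIRE alone does
 * not guarantee this, hence the barrier.
 */
static int demo_try_to_wake_up(struct task_struct *p)
{
	unsigned long flags;
	int success = 0;

	smp_mb__before_spinlock();	/* prior STOREs vs. later LOADs/STOREs */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	if (p->state & TASK_NORMAL) {
		/* ... actually queue the task and mark it runnable ... */
		success = 1;
	}
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

	return success;
}

/* Waker: publish the condition, then wake the sleeper. */
static void demo_wake(struct task_struct *p, int *cond)
{
	ACCESS_ONCE(*cond) = 1;		/* STORE the condition */
	demo_try_to_wake_up(p);		/* ordered before the ->state LOAD */
}

/*
 * Sleeper: set_current_state() implies a full barrier, ordering the
 * ->state STORE before the condition LOAD, which closes the race from
 * the other side.
 */
static void demo_wait_for_cond(int *cond)
{
	set_current_state(TASK_INTERRUPTIBLE);
	if (!ACCESS_ONCE(*cond))
		schedule();
	__set_current_state(TASK_RUNNING);
}

[Either the sleeper sees *cond == 1 and stays runnable, or the waker sees
the updated ->state and wakes it; if the waker's condition STORE could be
reordered after its ->state LOAD, both sides could see stale values and
the wakeup could be lost.  That is exactly the "prior stores against
subsequent loads and stores" ordering the proposed documentation text
describes.]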