On Fri, Sep 27, 2013 at 10:41:16PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 27, 2013 at 08:15:32PM +0200, Oleg Nesterov wrote:
> > On 09/26, Peter Zijlstra wrote:

[ . . . ]

> > > +static bool cpuhp_readers_active_check(void)
> > > {
> > > +	unsigned int seq = per_cpu_sum(cpuhp_seq);
> > > +
> > > +	smp_mb(); /* B matches A */
> > > +
> > > +	/*
> > > +	 * In other words, if we see __get_online_cpus() cpuhp_seq increment,
> > > +	 * we are guaranteed to also see its __cpuhp_refcount increment.
> > > +	 */
> > >
> > > +	if (per_cpu_sum(__cpuhp_refcount) != 0)
> > > +		return false;
> > >
> > > +	smp_mb(); /* D matches C */
> >
> > It seems that both barriers could be smp_rmb()?  I am not sure the
> > comments from srcu_readers_active_idx_check() can explain the smp_mb();
> > note that __srcu_read_lock() always succeeds, unlike get_online_cpus().
>
> I see what you mean; cpuhp_readers_active_check() is all purely reads;
> there are no writes to order.
>
> Paul, is there any argument for the MB here as opposed to RMB?  And if
> not, should we change both these and SRCU?

Given that these memory barriers execute only on the semi-slow path, why
add the complexity of moving from smp_mb() to either smp_rmb() or
smp_wmb()?  Straight smp_mb() is easier to reason about and more robust
against future changes.

							Thanx, Paul
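[Editor's note: for readers following the barrier pairing, here is a
minimal standalone sketch of the scheme under discussion, written as a
userspace C11 program rather than kernel code.  The per-CPU counters
become a fixed array, smp_mb() becomes
atomic_thread_fence(memory_order_seq_cst), and the tail of the function,
which is elided from the quoted hunk, is completed here with a cpuhp_seq
re-check in the style of SRCU's srcu_readers_active_idx_check().  This
is an illustration of the A/B and C/D pairing only, not the actual
patch.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

static _Atomic unsigned int cpuhp_seq[NR_CPUS];
static _Atomic unsigned int cpuhp_refcount[NR_CPUS];

/* Model of per_cpu_sum(): sum one counter across all "CPUs". */
static unsigned int per_cpu_sum(_Atomic unsigned int *ctr)
{
	unsigned int sum = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		sum += atomic_load_explicit(&ctr[cpu], memory_order_relaxed);
	return sum;
}

/* Reader entry: refcount increment, fence A, then seq increment. */
static void reader_enter(int cpu)
{
	atomic_fetch_add_explicit(&cpuhp_refcount[cpu], 1,
				  memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* A: pairs with B */
	atomic_fetch_add_explicit(&cpuhp_seq[cpu], 1, memory_order_relaxed);
}

/* Reader exit: fence C, then refcount decrement (maybe on another CPU). */
static void reader_exit(int cpu)
{
	atomic_thread_fence(memory_order_seq_cst);	/* C: pairs with D */
	atomic_fetch_sub_explicit(&cpuhp_refcount[cpu], 1,
				  memory_order_relaxed);
}

/*
 * Check side, mirroring cpuhp_readers_active_check(): returns true
 * only when no reader can still be inside its critical section.
 */
static bool readers_active_check(void)
{
	unsigned int seq = per_cpu_sum(cpuhp_seq);

	atomic_thread_fence(memory_order_seq_cst);	/* B: pairs with A */

	if (per_cpu_sum(cpuhp_refcount) != 0)
		return false;

	atomic_thread_fence(memory_order_seq_cst);	/* D: pairs with C */

	/*
	 * If a reader entered between the two sums, its seq increment
	 * is visible here, the sums differ, and we conservatively
	 * report "still active" so the caller retries.
	 */
	return per_cpu_sum(cpuhp_seq) == seq;
}

int main(void)
{
	reader_enter(0);
	printf("readers gone: %d\n", readers_active_check());	/* 0 */
	reader_exit(0);
	printf("readers gone: %d\n", readers_active_check());	/* 1 */
	return 0;
}

Note that readers_active_check() performs only loads, which is exactly
Oleg's observation that smp_rmb() would be functionally sufficient on
the check side; the seq_cst fences above mirror Paul's preference for
plain smp_mb() on this semi-slow path.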