On Tue, Nov 26, 2013 at 03:58:11PM -0800, Linus Torvalds wrote:
> On Tue, Nov 26, 2013 at 2:51 PM, Paul E. McKenney
> <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > Good points, and after_spinlock() works for me from an RCU perspective.
>
> Note that there's still a semantic question about exactly what that
> "after_spinlock()" is: would it be a memory barrier *only* for the CPU
> that actually does the spinlock?  Or is it that "third CPU" order?
>
> IOW, it would still not necessarily make your "unlock+lock" (on
> different CPU's) be an actual barrier as far as a third CPU was
> concerned, because you could still have the "unlock happened after
> contention was going on, so the final unlock only released the MCS
> waiter, and there was no barrier".
>
> See what I'm saying?  We could guarantee that if somebody does
>
>	write A;
>	spin_lock()
>	mb__after_spinlock();
>	read B
>
> then the "write A" -> "read B" would be ordered.  That's one thing.
>
> But your
>
>  - CPU 1:
>
>	write A
>	spin_unlock()
>
>  - CPU 2:
>
>	spin_lock()
>	mb__after_spinlock();
>	read B
>
> ordering as far as a *third* CPU is concerned is a whole different
> thing again, and wouldn't be at all the same thing.
>
> Is it really that cross-CPU ordering you care about?

Cross-CPU ordering.  I have to guarantee the grace period across all
CPUs, and I currently rely on a series of lock acquisitions to provide
that ordering.

On the other hand, I rely only on unlock+lock pairs, so I don't need
any particular lock or unlock operation to be a full barrier in and of
itself.  If that turns out to be problematic, I could of course insert
smp_mb()s everywhere, but they would be redundant on most architectures.

							Thanx, Paul
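
For concreteness, the "third CPU" question above can be written as a
litmus test.  The sketch below is only an illustration: it uses the
herd7/LKMM notation and today's smp_mb__after_spinlock() primitive (the
closest existing relative of the mb__after_spinlock() proposed in this
thread), both of which postdate this discussion, and the lock "lo" and
variables "a", "b", and "flag" are made up for the example.  The
question under discussion is whether the "exists" outcome at the end is
forbidden:

C unlock-lock-third-cpu

{}

P0(int *a, int *flag, spinlock_t *lo)
{
	/* CPU 1: write A inside a critical section, then release the lock. */
	spin_lock(lo);
	WRITE_ONCE(*a, 1);
	WRITE_ONCE(*flag, 1);
	spin_unlock(lo);
}

P1(int *b, int *flag, spinlock_t *lo)
{
	int r0;

	/* CPU 2: acquire the lock, execute the barrier, then write B. */
	spin_lock(lo);
	smp_mb__after_spinlock();
	r0 = READ_ONCE(*flag);
	WRITE_ONCE(*b, 1);
	spin_unlock(lo);
}

P2(int *a, int *b)
{
	int r1;
	int r2;

	/* CPU 3: having seen CPU 2's write to B, must it also see CPU 1's write to A? */
	r1 = READ_ONCE(*b);
	smp_rmb();
	r2 = READ_ONCE(*a);
}

exists (1:r0=1 /\ 2:r1=1 /\ 2:r2=0)

If the unlock (P0) plus lock-and-barrier (P1) pair orders memory as seen
by all CPUs, this outcome is forbidden; if the ordering is guaranteed
only for the acquiring CPU itself, it is not.  That is exactly the
distinction that matters for the grace-period guarantee described above.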