Re: [PATCH v6 4/5] MCS Lock: Barrier corrections

On Tue, Nov 26, 2013 at 11:20 AM, Paul E. McKenney
<paulmck@xxxxxxxxxxxxxxxxxx> wrote:
>
> There are several places in RCU that assume unlock+lock is a full
> memory barrier, but I would be more than happy to fix them up given
> an smp_mb__after_spinlock() and an smp_mb__before_spinunlock(), or
> something similar.

A "before_spinunlock" would actually be expensive on x86.

So I'd *much* rather see the "after_spinlock()" version, if that is
sufficient for all users. And it should be, since that's the
traditional x86 behavior that we had before the MCS lock discussion.
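The pattern being discussed can be sketched in userspace C11, with a pthread mutex standing in for a spinlock and a seq_cst fence standing in for the proposed kernel primitive (smp_mb_after_lock() below is a hypothetical name for this sketch, not the real API):

```c
/* Sketch of the proposed "full barrier after lock" pattern, using
 * C11 atomics and pthreads as stand-ins for kernel primitives.
 * smp_mb_after_lock() is a hypothetical stand-in for the proposed
 * smp_mb__after_spinlock(); it is NOT the kernel implementation. */
#include <stdatomic.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_a, shared_b;

/* On x86 this could compile to nothing, because the locked
 * instruction in the lock acquisition is already a full barrier;
 * weaker architectures would emit a real fence here. */
static inline void smp_mb_after_lock(void)
{
    atomic_thread_fence(memory_order_seq_cst);
}

int read_shared_sum(void)
{
    pthread_mutex_lock(&lock);
    smp_mb_after_lock();    /* upgrade acquire to a full barrier */
    int v = shared_a + shared_b;
    pthread_mutex_unlock(&lock);
    return v;
}
```

Code that relies on unlock+lock being a full barrier (as RCU does in places) would add the fence on the lock side only, which is the cheap side on x86.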

Because it's worth noting that a spin_lock() is still a full memory
barrier on x86, even with the MCS code, *assuming it is done in the
context of the thread needing the memory barrier*. And I suspect that
is much more generally true than just x86. It's the final MCS hand-off
of a lock that is pretty weak with just a local read. The full lock
sequence is always going to be much stronger, if only because it will
contain a write somewhere shared as well.
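The shape of that hand-off can be sketched with a minimal MCS queue lock in C11 atomics (names like mcs_node, mcs_lock, mcs_unlock are made up for this sketch, and the memory orders are illustrative, not the kernel's):

```c
/* Minimal MCS (queue) lock sketch in C11 atomics.  Illustrates why
 * the initial acquisition is strong (an atomic swap on a shared tail
 * pointer) while the hand-off is weak (a plain read of a local node).
 * This is an illustration, not the kernel's MCS implementation. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;
};

void mcs_lock(_Atomic(struct mcs_node *) *tail, struct mcs_node *me)
{
    atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
    atomic_store_explicit(&me->locked, true, memory_order_relaxed);

    /* The swap on the shared tail is the "write somewhere shared":
     * on x86 it is a locked instruction, hence a full barrier. */
    struct mcs_node *prev =
        atomic_exchange_explicit(tail, me, memory_order_acq_rel);
    if (prev) {
        atomic_store_explicit(&prev->next, me, memory_order_release);
        /* Hand-off path: spin on a read of our own local node --
         * only acquire ordering, the "pretty weak" part. */
        while (atomic_load_explicit(&me->locked, memory_order_acquire))
            ;
    }
}

void mcs_unlock(_Atomic(struct mcs_node *) *tail, struct mcs_node *me)
{
    struct mcs_node *next =
        atomic_load_explicit(&me->next, memory_order_acquire);
    if (!next) {
        /* No visible successor: try to swing the tail back to empty. */
        struct mcs_node *expected = me;
        if (atomic_compare_exchange_strong_explicit(
                tail, &expected, NULL,
                memory_order_acq_rel, memory_order_acquire))
            return;
        /* A successor is mid-enqueue; wait for its next-pointer store. */
        while (!(next = atomic_load_explicit(&me->next,
                                             memory_order_acquire)))
            ;
    }
    /* Hand the lock off with a single release store to the
     * successor's local node. */
    atomic_store_explicit(&next->locked, false, memory_order_release);
}
```

The uncontended path always goes through the atomic exchange on the shared tail, which is why the full lock sequence stays strong even when the hand-off itself is just a local read.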

                   Linus
