Re: [PATCH v6 4/5] MCS Lock: Barrier corrections

On Tue, Nov 26, 2013 at 1:59 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> If you now want to weaken this definition, then that needs consideration
> because we actually rely on things like
>
> spin_unlock(l1);
> spin_lock(l2);
>
> being full barriers.

Btw, maybe we should just stop making that assumption. The complexity of this
discussion makes me go "maybe we should stop with subtle assumptions
that happen to be obviously true on x86 due to historical
implementations, but aren't obviously true even *there* any more with
the MCS lock".

We already have a concept of

        smp_mb__before_spinlock();
        spin_lock();

for sequences where we *know* we need to make getting a spin-lock be a
full memory barrier. It's free on x86 (and remains so even with the
MCS lock, regardless of any subtle issues, if only because even the
MCS lock starts out with a locked atomic, never mind the contention
slow-case). Of course, that macro is only used inside the scheduler,
and is actually documented to not really be a full memory barrier, but
it handles the case we actually care about.
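
Roughly the shape of that scheduler case (a sketch with illustrative
names, not the exact try_to_wake_up() code): a store made before taking
the lock must not be reordered against the loads and stores done inside
the locked section.

        condition = 1;                  /* store done before taking the lock */
        smp_mb__before_spinlock();      /* order that store against the section */
        raw_spin_lock_irqsave(&p->pi_lock, flags);
        if (!(p->state & state))        /* loads done inside the locked section */
                goto out;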

IOW, where do we really care about "unlock+lock" being a full memory
barrier? And could we make those places explicit, and then do
something similar to the above to them?
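
Something like the following at the (hopefully few) sites that really
depend on it. Just a sketch: the barrier is spelled as a plain smp_mb()
here since the existing macro isn't documented as a full one; a cheaper
dedicated primitive could be defined the same way smp_mb__before_spinlock()
was.

        spin_unlock(&l1);
        smp_mb();               /* make the full-barrier requirement explicit */
        spin_lock(&l2);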

                       Linus
