On Sat, Jul 08, 2017 at 10:35:43AM +0200, Ingo Molnar wrote:
> 
> * Manfred Spraul <manfred@xxxxxxxxxxxxxxxx> wrote:
> 
> > Hi Ingo,
> > 
> > On 07/07/2017 10:31 AM, Ingo Molnar wrote:
> > > 
> > > There's another, probably just as significant advantage: queued_spin_unlock_wait()
> > > is 'read-only', while spin_lock()+spin_unlock() dirties the lock cache line. On
> > > any bigger system this should make a very measurable difference - if
> > > spin_unlock_wait() is ever used in a performance critical code path.
> > At least for ipc/sem:
> > Dirtying the cacheline (in the slow path) allows to remove a smp_mb() in the
> > hot path.
> > So for sem_lock(), I either need a primitive that dirties the cacheline or
> > sem_lock() must continue to use spin_lock()/spin_unlock().
> 
> Technically you could use spin_trylock()+spin_unlock() and avoid the lock acquire
> spinning on spin_unlock() and get very close to the slow path performance of a
> pure cacheline-dirtying behavior.
> 
> But adding something like spin_barrier(), which purely dirties the lock cacheline,
> would be even faster, right?

Interestingly enough, the arm64 and powerpc implementations of
spin_unlock_wait() were very close to what it sounds like you are
describing.

							Thanx, Paul
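
A minimal sketch of the trylock-based pattern discussed above, for
illustration only: spin_barrier() is hypothetical, the function name is
made up, and this is not the actual ipc/sem code or its exact ordering
requirements.

#include <linux/spinlock.h>

/*
 * Illustrative sketch only: dirty the lock cacheline without the
 * acquire spinning of an unconditional spin_lock().
 */
static inline void sem_cacheline_barrier(spinlock_t *lock)
{
	if (spin_trylock(lock)) {
		/* Lock was free: we already dirtied the line, drop it again. */
		spin_unlock(lock);
	} else {
		/*
		 * Lock is held: wait for the holder the old way, which
		 * also dirties the cacheline.
		 */
		spin_lock(lock);
		spin_unlock(lock);
	}
}

/*
 * A hypothetical spin_barrier(lock) could get the same effect with a
 * single store/cmpxchg on the lock word, without ever entering the
 * critical section.
 */

In the common (uncontended) case the trylock succeeds immediately, so the
only cost is the atomic operation that dirties the lock cacheline.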