On Thu, 21 Jul 2016 19:54:55 +0200 Manfred Spraul <manfred@xxxxxxxxxxxxxxxx> wrote:

> Next update:
> - switch to smp_store_mb() instead of WRITE_ONCE(); smp_mb();
> - introduce SEM_GLOBAL_LOCK instead of magic -1.
> - do not use READ_ONCE() for the unlocked&unordered test:
>   READ_ONCE doesn't make sense for unlocked&unordered code.
> - document why smp_mb() is required after spin_lock().

I assume "ipc/sem.c: remove duplicated memory barriers" is still relevant?

From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Subject: ipc/sem.c: remove duplicated memory barriers

With 2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more"),
memory barriers were added into spin_unlock_wait().  Thus another barrier
in the caller is not required.

And as explained in 055ce0fd1b8 ("locking/qspinlock: Add comments"),
spin_lock() provides a barrier so that reads within the critical section
cannot happen before the write for the lock is visible.  I.e. spin_lock()
provides an acquire barrier after the write of the lock variable; this
barrier pairs with the smp_mb() in complexmode_enter().

Link: http://lkml.kernel.org/r/1468386412-3608-3-git-send-email-manfred@xxxxxxxxxxxxxxxx
Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Davidlohr Bueso <dave@xxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: <1vier1@xxxxxx>
Cc: <felixh@xxxxxxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 ipc/sem.c |   16 ----------------
 1 file changed, 16 deletions(-)

diff -puN ipc/sem.c~ipc-semc-remove-duplicated-memory-barriers ipc/sem.c
--- a/ipc/sem.c~ipc-semc-remove-duplicated-memory-barriers
+++ a/ipc/sem.c
@@ -290,14 +290,6 @@ static void complexmode_enter(struct sem
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
-	/*
-	 * spin_unlock_wait() is not a memory barriers, it is only a
-	 * control barrier. The code must pair with spin_unlock(&sem->lock),
-	 * thus just the control barrier is insufficient.
-	 *
-	 * smp_rmb() is sufficient, as writes cannot pass the control barrier.
-	 */
-	smp_rmb();
 }
 
 /*
@@ -363,14 +355,6 @@ static inline int sem_lock(struct sem_ar
 		 */
 		spin_lock(&sem->lock);
 
-		/*
-		 * See 51d7d5205d33
-		 * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
-		 * A full barrier is required: the write of sem->lock
-		 * must be visible before the read is executed
-		 */
-		smp_mb();
-
 		if (!smp_load_acquire(&sma->complex_mode)) {
 			/* fast path successful! */
 			return sops->sem_num;
_

--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
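
For readers following the barrier argument in the changelog, the pairing it
relies on can be modelled in userspace with C11 atomics.  This is only an
illustrative sketch, not kernel code: the names (slow_path, fast_path,
sem_lock_model), the thread bodies and the use of a pthread spinlock are
invented for the example; only the store-with-full-barrier on the writer side
versus acquire-on-lock on the reader side mirrors the argument above.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_spinlock_t sem_lock_model;	/* stands in for sem->lock */
static atomic_bool complex_mode;		/* stands in for sma->complex_mode */

/* Writer side, in the role of complexmode_enter(): publish the flag with a
 * full barrier, which is what smp_store_mb() provides in the patch. */
static void *slow_path(void *unused)
{
	atomic_store(&complex_mode, true);	/* seq_cst store acts as a full barrier */
	/* the kernel then spins with spin_unlock_wait() on every sem->lock */
	return NULL;
}

/* Reader side, in the role of the sem_lock() fast path: taking the lock is an
 * acquire operation, so the complex_mode read below cannot be reordered before
 * the lock write becomes visible -- hence no extra smp_mb() is needed. */
static void *fast_path(void *unused)
{
	bool complex;

	pthread_spin_lock(&sem_lock_model);	/* acquire semantics, like spin_lock() */
	complex = atomic_load_explicit(&complex_mode, memory_order_acquire);
	pthread_spin_unlock(&sem_lock_model);

	printf("complex_mode observed as %d\n", complex);
	return NULL;
}

int main(void)
{
	pthread_t writer, reader;

	pthread_spin_init(&sem_lock_model, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&writer, NULL, slow_path, NULL);
	pthread_create(&reader, NULL, fast_path, NULL);
	pthread_join(writer, NULL);
	pthread_join(reader, NULL);
	pthread_spin_destroy(&sem_lock_model);
	return 0;
}

Build with "cc -pthread model.c".  If fast_path() observes complex_mode set,
it must also see every write the slow path made before setting it, which is
the property the removed smp_mb()/smp_rmb() were (redundantly) enforcing.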