This is a note to let you know that I've just added the patch titled

    ipc/sem.c: optimize sem_lock()

to the 3.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    ipc-sem.c-optimize-sem_lock.patch
and it can be found in the queue-3.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From 6d07b68ce16ae9535955ba2059dedba5309c3ca1 Mon Sep 17 00:00:00 2001
From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Date: Mon, 30 Sep 2013 13:45:06 -0700
Subject: ipc/sem.c: optimize sem_lock()

From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>

commit 6d07b68ce16ae9535955ba2059dedba5309c3ca1 upstream.

Operations that need access to the whole array must guarantee that there
are no simple operations ongoing.  Right now this is achieved by
spin_unlock_wait(sem->lock) on all semaphores.

If complex_count is nonzero, then this spin_unlock_wait() is not
necessary, because it was already performed in the past by the thread
that increased complex_count, and even though sem_perm.lock was dropped
in between, no simple operation could have started, because simple
operations cannot start when complex_count is non-zero.

Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <bitbucket@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Reviewed-by: Davidlohr Bueso <davidlohr@xxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 ipc/sem.c |    8 ++++++++
 1 file changed, 8 insertions(+)

--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -257,12 +257,20 @@ static void sem_rcu_free(struct rcu_head
  * Caller must own sem_perm.lock.
  * New simple ops cannot start, because simple ops first check
  *	that sem_perm.lock is free.
+ *	that a) sem_perm.lock is free and b) complex_count is 0.
  */
 static void sem_wait_array(struct sem_array *sma)
 {
 	int i;
 	struct sem *sem;
 
+	if (sma->complex_count) {
+		/* The thread that increased sma->complex_count waited on
+		 * all sem->lock locks. Thus we don't need to wait again.
+		 */
+		return;
+	}
+
 	for (i = 0; i < sma->sem_nsems; i++) {
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
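
For readers who want to see the control flow outside of diff context, here
is a minimal userspace sketch of the idea the changelog describes. It is
not kernel code: pthread mutexes stand in for the kernel's spinlocks, a
lock/unlock pair stands in for spin_unlock_wait(), and every identifier
below (sem_array_model, wait_array, perm_lock, NSEMS) is invented for
this illustration.

/* Minimal userspace model of the complex_count fast path -- an
 * illustration only, not kernel code.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define NSEMS 4

struct sem_array_model {
	pthread_mutex_t perm_lock;        /* models sma->sem_perm.lock */
	pthread_mutex_t sem_lock[NSEMS];  /* models per-semaphore sem->lock */
	int complex_count;                /* complex ops queued on the array */
};

/* Models sem_wait_array(): called with perm_lock held; on return, no
 * simple op can still be running under a per-semaphore lock. */
static void wait_array(struct sem_array_model *sma)
{
	int i;

	if (sma->complex_count) {
		/* The fast path the patch adds: a previous complex op
		 * already drained every per-semaphore lock, and simple
		 * ops cannot start while complex_count is non-zero. */
		return;
	}

	for (i = 0; i < NSEMS; i++) {
		/* Stand-in for spin_unlock_wait(&sem->lock): wait for
		 * any current holder to drop the lock. */
		pthread_mutex_lock(&sma->sem_lock[i]);
		pthread_mutex_unlock(&sma->sem_lock[i]);
	}
}

int main(void)
{
	struct sem_array_model sma = { .complex_count = 0 };
	int i;

	pthread_mutex_init(&sma.perm_lock, NULL);
	for (i = 0; i < NSEMS; i++)
		pthread_mutex_init(&sma.sem_lock[i], NULL);

	pthread_mutex_lock(&sma.perm_lock);
	wait_array(&sma);       /* slow path: scans all NSEMS locks */
	sma.complex_count = 1;  /* a complex op is now pending */
	wait_array(&sma);       /* fast path: returns immediately */
	pthread_mutex_unlock(&sma.perm_lock);

	printf("second wait_array() skipped the per-semaphore scan\n");
	return 0;
}

The invariant that makes the early return safe is exactly the one the
patch adds to the comment: simple ops proceed only after checking both
that sem_perm.lock is free and that complex_count is 0.
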
Patches currently in stable-queue which might be from
manfred@xxxxxxxxxxxxxxxx are

queue-3.10/ipc-drop-ipc_lock_by_ptr.patch
queue-3.10/ipc-sem.c-synchronize-the-proc-interface.patch
queue-3.10/ipc-msg-drop-msg_unlock.patch
queue-3.10/ipc-fix-race-with-lsms.patch
queue-3.10/ipc-shm-shorten-critical-region-in-shmctl_down.patch
queue-3.10/ipc-rename-ids-rw_mutex.patch
queue-3.10/ipc-msg.c-fix-lost-wakeup-in-msgsnd.patch
queue-3.10/ipc-shm-introduce-lockless-functions-to-obtain-the-ipc-object.patch
queue-3.10/ipc-sem.c-optimize-sem_lock.patch
queue-3.10/ipc-shm-shorten-critical-region-for-shmat.patch
queue-3.10/ipc-sem.c-fix-race-in-sem_lock.patch
queue-3.10/ipc-sem-separate-wait-for-zero-and-alter-tasks-into-seperate-queues.patch
queue-3.10/ipc-shm-shorten-critical-region-for-shmctl.patch
queue-3.10/ipc-shm-introduce-shmctl_nolock.patch
queue-3.10/ipc-drop-ipcctl_pre_down.patch
queue-3.10/ipc-document-general-ipc-locking-scheme.patch
queue-3.10/ipc-util.c-ipc_rcu_alloc-cacheline-align-allocation.patch
queue-3.10/ipc-shm-drop-shm_lock_check.patch
queue-3.10/ipc-sem.c-cacheline-align-the-semaphore-structures.patch
queue-3.10/ipc-msg-prevent-race-with-rmid-in-msgsnd-msgrcv.patch
queue-3.10/ipc-shm-guard-against-non-existant-vma-in-shmdt-2.patch
queue-3.10/ipc-sem.c-always-use-only-one-queue-for-alter-operations.patch
queue-3.10/ipc-shm-cleanup-do_shmat-pasta.patch
queue-3.10/ipc-drop-ipc_lock_check.patch
queue-3.10/ipc-sem.c-update-sem_otime-for-all-operations.patch
queue-3.10/ipc-sem.c-rename-try_atomic_semop-to-perform_atomic_semop-docu-update.patch
queue-3.10/ipc-shm-make-shmctl_nolock-lockless.patch
queue-3.10/ipc-sem.c-replace-shared-sem_otime-with-per-semaphore-value.patch

--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html