This is a note to let you know that I've just added the patch titled

    ipc,shm: introduce lockless functions to obtain the ipc object

to the 3.11-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ipc-shm-introduce-lockless-functions-to-obtain-the-ipc-object.patch
and it can be found in the queue-3.11 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 8b8d52ac382b17a19906b930cd69e2edb0aca8ba Mon Sep 17 00:00:00 2001
From: Davidlohr Bueso <davidlohr.bueso@xxxxxx>
Date: Wed, 11 Sep 2013 14:26:15 -0700
Subject: ipc,shm: introduce lockless functions to obtain the ipc object

From: Davidlohr Bueso <davidlohr.bueso@xxxxxx>

commit 8b8d52ac382b17a19906b930cd69e2edb0aca8ba upstream.

This is the third and final patchset that deals with reducing the amount
of contention we impose on the ipc lock (kern_ipc_perm.lock).  These
changes mostly deal with shared memory; previous work has already been
done for semaphores and message queues:

  http://lkml.org/lkml/2013/3/20/546 (sems)
  http://lkml.org/lkml/2013/5/15/584 (mqueues)

With these patches applied, a custom shm microbenchmark stressing shmctl
doing IPC_STAT with 4 threads a million times reduces the execution time
by 50%.  A similar run, this time with IPC_SET, reduces the execution
time from 3 mins and 35 secs to 27 seconds.

Patches 1-8: replace blindly taking the ipc lock with a smarter
combination of rcu and ipc_obtain_object, only acquiring the spinlock
when updating.

Patch 9: renames the ids rw_mutex to rwsem, which is what it already was.

Patch 10: is a trivial mqueue leftover cleanup.

Patch 11: adds a brief lock scheme description, requested by Andrew.

This patch:

Add shm_obtain_object() and shm_obtain_object_check(), which will allow
us to get the ipc object without acquiring the lock.  Just as with other
forms of ipc, these functions are basically wrappers around
ipc_obtain_object*().

Signed-off-by: Davidlohr Bueso <davidlohr.bueso@xxxxxx>
Tested-by: Sedat Dilek <sedat.dilek@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 ipc/shm.c |   20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -124,6 +124,26 @@ void __init shm_init (void)
 				IPC_SHM_IDS, sysvipc_shm_proc_show);
 }
 
+static inline struct shmid_kernel *shm_obtain_object(struct ipc_namespace *ns, int id)
+{
+	struct kern_ipc_perm *ipcp = ipc_obtain_object(&shm_ids(ns), id);
+
+	if (IS_ERR(ipcp))
+		return ERR_CAST(ipcp);
+
+	return container_of(ipcp, struct shmid_kernel, shm_perm);
+}
+
+static inline struct shmid_kernel *shm_obtain_object_check(struct ipc_namespace *ns, int id)
+{
+	struct kern_ipc_perm *ipcp = ipc_obtain_object_check(&shm_ids(ns), id);
+
+	if (IS_ERR(ipcp))
+		return ERR_CAST(ipcp);
+
+	return container_of(ipcp, struct shmid_kernel, shm_perm);
+}
+
 /*
  * shm_lock_(check_) routines are called in the paths where the rw_mutex
  * is not necessarily held.
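[Editor's note: the sketch below is illustrative commentary, not part of the
patch above.  It shows how the new helpers are intended to be used by the
later patches in this queue: look the object up under rcu_read_lock() via
shm_obtain_object_check(), and take the per-object spinlock with
ipc_lock_object() only when the segment is actually modified.  The function
name example_shm_stat is made up, and the permission/security checks of the
real shmctl path are omitted.]

/*
 * Illustrative only: a read-mostly, stat-style lookup done locklessly.
 * Assumes it lives in ipc/shm.c, after the helpers introduced above.
 */
static int example_shm_stat(struct ipc_namespace *ns, int shmid,
			    size_t *segsz)
{
	struct shmid_kernel *shp;
	int err = 0;

	rcu_read_lock();
	shp = shm_obtain_object_check(ns, shmid);
	if (IS_ERR(shp)) {
		err = PTR_ERR(shp);
		goto out_unlock;
	}

	/*
	 * Read-mostly fields can be sampled under RCU alone; the
	 * per-object spinlock (ipc_lock_object(&shp->shm_perm)) is
	 * only needed on paths that update the segment.
	 */
	*segsz = shp->shm_segsz;

out_unlock:
	rcu_read_unlock();
	return err;
}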
Patches currently in stable-queue which might be from davidlohr.bueso@xxxxxx are

queue-3.11/ipc-shm-shorten-critical-region-in-shmctl_down.patch
queue-3.11/ipc-shm-introduce-lockless-functions-to-obtain-the-ipc-object.patch
queue-3.11/ipc-shm-shorten-critical-region-for-shmat.patch
queue-3.11/ipc-shm-shorten-critical-region-for-shmctl.patch
queue-3.11/ipc-shm-introduce-shmctl_nolock.patch
queue-3.11/ipc-drop-ipcctl_pre_down.patch
queue-3.11/ipc-shm-cleanup-do_shmat-pasta.patch
queue-3.11/ipc-shm-make-shmctl_nolock-lockless.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html