[merged] ipc-semc-optimize-sem_lock.patch removed from -mm tree

Subject: [merged] ipc-semc-optimize-sem_lock.patch removed from -mm tree
To: manfred@xxxxxxxxxxxxxxxx, bitbucket@xxxxxxxxx, davidlohr@xxxxxx, riel@xxxxxxxxxx, mm-commits@xxxxxxxxxxxxxxx
From: akpm@xxxxxxxxxxxxxxxxxxxx
Date: Thu, 17 Oct 2013 11:12:15 -0700


The patch titled
     Subject: ipc/sem.c: optimize sem_lock()
has been removed from the -mm tree.  Its filename was
     ipc-semc-optimize-sem_lock.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Subject: ipc/sem.c: optimize sem_lock()

Operations that need access to the whole array must guarantee that no
simple operations are ongoing.  Right now this is achieved by calling
spin_unlock_wait(sem->lock) on every semaphore in the array.
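
For context, the pre-patch waiting scheme looks roughly like this
(kernel-style C, condensed: sem_wait_array() matches the context lines
of the diff below, while lock_whole_array() is a made-up wrapper name
for what sem_lock() does inline for complex operations, not mainline
code):

	/* Sketch: a complex operation first takes the global lock, then
	 * spins until every per-semaphore lock that an in-flight simple
	 * op may still hold has been observed free.
	 */
	static void sem_wait_array(struct sem_array *sma)
	{
		int i;
		struct sem *sem;

		for (i = 0; i < sma->sem_nsems; i++) {
			sem = sma->sem_base + i;
			spin_unlock_wait(&sem->lock);
		}
	}

	/* Hypothetical wrapper; in mainline this is inline in sem_lock()
	 * for the nsops != 1 case.
	 */
	static void lock_whole_array(struct sem_array *sma)
	{
		ipc_lock_object(&sma->sem_perm);	/* global sem_perm.lock */
		sem_wait_array(sma);			/* wait out running simple ops */
	}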

If complex_count is non-zero, then this spin_unlock_wait() is not
necessary: it was already performed by the thread that increased
complex_count, and even though sem_perm.lock was dropped in between, no
simple operation could have started, because simple operations cannot
start while complex_count is non-zero.
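
The guarantee that simple operations cannot start while complex_count
is non-zero comes from the single-semaphore fast path in sem_lock().
A condensed sketch of that path follows; the wrapper function name is
made up for illustration (in mainline the logic sits inline in
sem_lock()), and this is not a line-for-line copy:

	/* Hypothetical helper for illustration.  Returns the semaphore
	 * index if only the per-semaphore lock was taken, or -1 if the
	 * caller must fall back to the global sem_perm.lock.
	 */
	static int sem_lock_simple_fastpath(struct sem_array *sma,
					    struct sembuf *sops)
	{
		struct sem *sem = sma->sem_base + sops->sem_num;

		if (sma->complex_count == 0) {
			/* No complex op is queued: try the per-semaphore lock. */
			spin_lock(&sem->lock);

			/* A complex op being set up right now would hold
			 * sem_perm.lock, so recheck both conditions while
			 * holding sem->lock.
			 */
			if (!spin_is_locked(&sma->sem_perm.lock)) {
				smp_mb();
				if (sma->complex_count == 0)
					return sops->sem_num;	/* only sem->lock held */
			}
			spin_unlock(&sem->lock);
		}
		return -1;	/* fall back to taking sma->sem_perm.lock */
	}

So a thread that increments complex_count while holding sem_perm.lock
has already flushed all running simple ops via sem_wait_array(), and no
new simple op can slip in until complex_count drops back to zero, which
is exactly why the re-wait can be skipped.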

Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <bitbucket@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Reviewed-by: Davidlohr Bueso <davidlohr@xxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 ipc/sem.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff -puN ipc/sem.c~ipc-semc-optimize-sem_lock ipc/sem.c
--- a/ipc/sem.c~ipc-semc-optimize-sem_lock
+++ a/ipc/sem.c
@@ -257,12 +257,20 @@ static void sem_rcu_free(struct rcu_head
  * Caller must own sem_perm.lock.
  * New simple ops cannot start, because simple ops first check
  * that sem_perm.lock is free.
+ * that a) sem_perm.lock is free and b) complex_count is 0.
  */
 static void sem_wait_array(struct sem_array *sma)
 {
 	int i;
 	struct sem *sem;
 
+	if (sma->complex_count)  {
+		/* The thread that increased sma->complex_count waited on
+		 * all sem->lock locks. Thus we don't need to wait again.
+		 */
+		return;
+	}
+
 	for (i = 0; i < sma->sem_nsems; i++) {
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
_

Patches currently in -mm which might be from manfred@xxxxxxxxxxxxxxxx are

origin.patch
slab-leaks3-default-y.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



