[tip:locking/core] locking/mutexes: Optimize mutex trylock slowpath

Commit-ID:  72d5305dcb3637913c2c37e847a4de9028e49244
Gitweb:     http://git.kernel.org/tip/72d5305dcb3637913c2c37e847a4de9028e49244
Author:     Jason Low <jason.low2@xxxxxx>
AuthorDate: Wed, 11 Jun 2014 11:37:23 -0700
Committer:  Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Sat, 5 Jul 2014 11:25:42 +0200

locking/mutexes: Optimize mutex trylock slowpath

The mutex_trylock() function calls into __mutex_fastpath_trylock() when
trying to obtain the mutex. On 32-bit x86, in the !__HAVE_ARCH_CMPXCHG
case, __mutex_fastpath_trylock() calls directly into
__mutex_trylock_slowpath(), regardless of whether the mutex is locked.
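
For reference, here is a simplified sketch of that call path. It is
abbreviated from the kernel sources of that era (debug variants and
most comments dropped), so treat it as illustrative rather than exact:

/* kernel/locking/mutex.c (simplified) */
int __sched mutex_trylock(struct mutex *lock)
{
	int ret;

	ret = __mutex_fastpath_trylock(&lock->count, __mutex_trylock_slowpath);
	if (ret)
		mutex_set_owner(lock);

	return ret;
}

/*
 * Fastpath sketch: without __HAVE_ARCH_CMPXCHG there is no cheap
 * atomic probe available, so the fallback is taken unconditionally.
 */
static inline int __mutex_fastpath_trylock(atomic_t *count,
					   int (*fail_fn)(atomic_t *))
{
#ifdef __HAVE_ARCH_CMPXCHG
	if (likely(atomic_cmpxchg(count, 1, 0) == 1))
		return 1;
	return 0;
#else
	return fail_fn(count);	/* always take the slowpath */
#endif
}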

In __mutex_trylock_slowpath() we then acquire the wait_lock spinlock,
xchg() lock->count with -1, set lock->count back to 0 if there are no
waiters, and return true if the previous count was 1.
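
A sketch of the pre-patch slowpath as described above (again
simplified; lockdep annotations are omitted):

static inline int __mutex_trylock_slowpath(atomic_t *lock_count)
{
	struct mutex *lock = container_of(lock_count, struct mutex, count);
	unsigned long flags;
	int prev;

	spin_lock_mutex(&lock->wait_lock, flags);

	prev = atomic_xchg(&lock->count, -1);
	if (likely(prev == 1))
		mutex_set_owner(lock);

	/* Set it back to 0 if there are no waiters: */
	if (likely(list_empty(&lock->wait_list)))
		atomic_set(&lock->count, 0);

	spin_unlock_mutex(&lock->wait_lock, flags);

	return prev == 1;
}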

However, if the mutex is already locked, there is little point in
attempting any of these expensive operations. With this patch, we
attempt the trylock operations above only if the mutex is unlocked.
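
The check added below is cheap: mutex_is_locked() is a single
atomic_read(), as defined in include/linux/mutex.h of the same era,
versus the spinlock round-trip plus xchg() above:

static inline int mutex_is_locked(struct mutex *lock)
{
	/* 1 means unlocked; 0 or negative means locked */
	return atomic_read(&lock->count) != 1;
}

If the owner drops the mutex right after this read, the trylock simply
fails; that is acceptable, since a trylock is allowed to fail under
contention anyway.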

Signed-off-by: Jason Low <jason.low2@xxxxxx>
Reviewed-by: Davidlohr Bueso <davidlohr@xxxxxx>
Signed-off-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: akpm@xxxxxxxxxxxxxxxxxxxx
Cc: tim.c.chen@xxxxxxxxxxxxxxx
Cc: paulmck@xxxxxxxxxxxxxxxxxx
Cc: rostedt@xxxxxxxxxxx
Cc: Waiman.Long@xxxxxx
Cc: scott.norton@xxxxxx
Cc: aswin@xxxxxx
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/1402511843-4721-5-git-send-email-jason.low2@xxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 kernel/locking/mutex.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index e4d997b..11b103d 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -820,6 +820,10 @@ static inline int __mutex_trylock_slowpath(atomic_t *lock_count)
 	unsigned long flags;
 	int prev;
 
+	/* No need to trylock if the mutex is locked. */
+	if (mutex_is_locked(lock))
+		return 0;
+
 	spin_lock_mutex(&lock->wait_lock, flags);
 
 	prev = atomic_xchg(&lock->count, -1);