4.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Davidlohr Bueso <dave@xxxxxxxxxxxx>

commit 1329ce6fbbe4536592dfcfc8d64d61bfeb598fe6 upstream.

Make use of wake-queues and enable the wakeup to occur after releasing the
wait_lock. This is similar to what we do with the rtmutex top waiter,
slightly shortening the critical region and allowing other waiters to
acquire the wait_lock sooner. In low-contention cases it can also help the
recently woken waiter to find the wait_lock available (fastpath) when it
continues execution.

Reviewed-by: Waiman Long <Waiman.Long@xxxxxxx>
Signed-off-by: Davidlohr Bueso <dbueso@xxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Ding Tianhong <dingtianhong@xxxxxxxxxx>
Cc: Jason Low <jason.low2@xxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
Cc: Waiman Long <waiman.long@xxxxxxx>
Cc: Will Deacon <Will.Deacon@xxxxxxx>
Link: http://lkml.kernel.org/r/20160125022343.GA3322@xxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 kernel/locking/mutex.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -719,6 +719,7 @@ static inline void
 __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 {
 	unsigned long flags;
+	WAKE_Q(wake_q);
 
 	/*
 	 * As a performance measurement, release the lock before doing other
@@ -746,11 +747,11 @@ __mutex_unlock_common_slowpath(struct mu
 				   struct mutex_waiter, list);
 
 		debug_mutex_wake_waiter(lock, waiter);
-
-		wake_up_process(waiter->task);
+		wake_q_add(&wake_q, waiter->task);
 	}
 
 	spin_unlock_mutex(&lock->wait_lock, flags);
+	wake_up_q(&wake_q);
 }
 
 /*
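
For anyone reviewing who wants to see the idea outside the kernel: the diff
defers the actual wakeup (wake_up_q()) until after wait_lock is dropped,
while wake_q_add() merely records the task to wake under the lock. Below is
a minimal user-space sketch of that same "record under the lock, wake after
unlock" pattern. It is an analogue only, not kernel code: pthreads stand in
for the kernel's WAKE_Q/wake_q_add/wake_up_q machinery, and struct waiter,
wait_for_wakeup(), and the one-entry wake_q pointer are made-up names for
illustration.

/*
 * User-space analogue of the patch (assumption: pthreads replace the
 * kernel wake-queue API).  While holding wait_lock we only *record*
 * who to wake; the wakeup itself happens after wait_lock is dropped,
 * so the woken thread does not immediately contend on wait_lock.
 */
#include <pthread.h>
#include <stdio.h>

struct waiter {				/* hypothetical, for this demo */
	pthread_mutex_t lock;		/* protects 'woken' */
	pthread_cond_t  cond;
	int             woken;
};

static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER; /* plays lock->wait_lock */
static struct waiter w = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
};

static void *wait_for_wakeup(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&w.lock);
	while (!w.woken)
		pthread_cond_wait(&w.cond, &w.lock);
	pthread_mutex_unlock(&w.lock);
	printf("waiter: woken after wait_lock was already released\n");
	return NULL;
}

int main(void)
{
	pthread_t t;
	struct waiter *wake_q;		/* one-entry stand-in for WAKE_Q() */

	pthread_create(&t, NULL, wait_for_wakeup, NULL);

	pthread_mutex_lock(&wait_lock);		/* critical region begins */
	wake_q = &w;				/* ~ wake_q_add(): record only */
	pthread_mutex_unlock(&wait_lock);	/* critical region ends */

	/* ~ wake_up_q(): the wakeup happens outside the critical region */
	pthread_mutex_lock(&wake_q->lock);
	wake_q->woken = 1;
	pthread_cond_signal(&wake_q->cond);
	pthread_mutex_unlock(&wake_q->lock);

	pthread_join(t, NULL);
	return 0;
}

Built with cc -pthread, the waiter's printf can only run after wait_lock has
been released, mirroring how the patched slowpath lets the freshly woken
waiter find wait_lock uncontended when it resumes.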