The following commit has been merged into the locking/core branch of tip:

Commit-ID:     f9e21aa9e6fb11355e54c8949a390d49ca21cde1
Gitweb:        https://git.kernel.org/tip/f9e21aa9e6fb11355e54c8949a390d49ca21cde1
Author:        Waiman Long <longman@xxxxxxxxxx>
AuthorDate:    Tue, 22 Mar 2022 11:20:57 -04:00
Committer:     Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Tue, 05 Apr 2022 10:24:34 +02:00

locking/rwsem: No need to check for handoff bit if wait queue empty

Since commit d257cc8cb8d5 ("locking/rwsem: Make handoff bit handling
more consistent"), the handoff bit is always cleared if the wait queue
becomes empty. There is no need to check for RWSEM_FLAG_HANDOFF when
the wait list is known to be empty.

Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20220322152059.2182333-2-longman@xxxxxxxxxx
---
 kernel/locking/rwsem.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acde5d6..b077b1b 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -977,12 +977,11 @@ queue:
 	if (list_empty(&sem->wait_list)) {
 		/*
 		 * In case the wait queue is empty and the lock isn't owned
-		 * by a writer or has the handoff bit set, this reader can
-		 * exit the slowpath and return immediately as its
-		 * RWSEM_READER_BIAS has already been set in the count.
+		 * by a writer, this reader can exit the slowpath and return
+		 * immediately as its RWSEM_READER_BIAS has already been set
+		 * in the count.
 		 */
-		if (!(atomic_long_read(&sem->count) &
-		     (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
+		if (!(atomic_long_read(&sem->count) & RWSEM_WRITER_MASK)) {
 			/* Provide lock ACQUIRE */
 			smp_acquire__after_ctrl_dep();
 			raw_spin_unlock_irq(&sem->wait_lock);
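
[Editor's note: to make the reasoning in the changelog concrete, here is a
minimal userspace sketch, not kernel code. The bit constants mirror the
definitions in kernel/locking/rwsem.c; the two helper functions and main()
are purely illustrative names invented for this sketch. It models the
invariant introduced by d257cc8cb8d5: RWSEM_FLAG_HANDOFF is only ever set
while RWSEM_FLAG_WAITERS is set, so with an empty wait list the handoff bit
is guaranteed clear and masking with RWSEM_WRITER_MASK alone is enough.]

/*
 * Illustrative userspace sketch (assumption: bit layout as in
 * kernel/locking/rwsem.c).  Compares the old and new fast-exit checks
 * for a reader that finds the wait list empty.
 */
#include <assert.h>
#include <stdio.h>

#define RWSEM_WRITER_LOCKED	(1UL << 0)	/* write lock held */
#define RWSEM_FLAG_WAITERS	(1UL << 1)	/* wait list non-empty */
#define RWSEM_FLAG_HANDOFF	(1UL << 2)	/* lock handoff requested */
#define RWSEM_READER_SHIFT	8
#define RWSEM_READER_BIAS	(1UL << RWSEM_READER_SHIFT)
#define RWSEM_WRITER_MASK	RWSEM_WRITER_LOCKED

/* Old check: bail out unless neither writer bit nor handoff bit is set. */
static int old_reader_can_exit_slowpath(unsigned long count)
{
	return !(count & (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF));
}

/* New check: the wait list is known empty, so only the writer bit matters. */
static int new_reader_can_exit_slowpath(unsigned long count)
{
	return !(count & RWSEM_WRITER_MASK);
}

int main(void)
{
	/*
	 * Enumerate the low flag bits.  States with the handoff bit set
	 * but the waiters bit clear cannot occur after d257cc8cb8d5, so
	 * they are skipped.  For every reachable empty-wait-list state
	 * the old and new checks must agree.
	 */
	for (unsigned long flags = 0; flags < 8; flags++) {
		unsigned long count = RWSEM_READER_BIAS + flags;

		if (!(count & RWSEM_FLAG_WAITERS) &&
		    (count & RWSEM_FLAG_HANDOFF))
			continue;	/* impossible state, skip */

		if (!(count & RWSEM_FLAG_WAITERS))
			assert(old_reader_can_exit_slowpath(count) ==
			       new_reader_can_exit_slowpath(count));
	}
	printf("old and new checks agree for all empty-wait-list states\n");
	return 0;
}

[The sketch only exercises the reachable states; the kernel patch relies on
the same invariant rather than re-checking RWSEM_FLAG_HANDOFF under
sem->wait_lock.]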