On Tue, 19 Jul 2022 08:30:02 -0700 Doug Anderson wrote:
>
> I haven't done any stress testing other than my test case, though, so
> I can't speak to whether there might be any other unintended issues.

The diff below is prepared for any regressions I can imagine in stress
tests by adding changes to both the read and write acquirer slow paths.
On the read side, lock stealing is made more aggressive; on the write
side, acquirers try to set HANDOFF after a RWSEM_WAIT_TIMEOUT nap to
force reader acquirers to take the slow path.

Hillf

--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -992,13 +992,7 @@ rwsem_down_read_slowpath(struct rw_semap
 	struct rwsem_waiter waiter;
 	DEFINE_WAKE_Q(wake_q);
 
-	/*
-	 * To prevent a constant stream of readers from starving a sleeping
-	 * waiter, don't attempt optimistic lock stealing if the lock is
-	 * currently owned by readers.
-	 */
-	if ((atomic_long_read(&sem->owner) & RWSEM_READER_OWNED) &&
-	    (rcnt > 1) && !(count & RWSEM_WRITER_LOCKED))
+	if (WARN_ON_ONCE(count & RWSEM_FLAG_READFAIL))
 		goto queue;
 
 	/*
@@ -1169,7 +1163,11 @@ rwsem_down_write_slowpath(struct rw_sema
 			goto trylock_again;
 		}
 
-		schedule();
+		if (RWSEM_FLAG_HANDOFF & atomic_long_read(&sem->count))
+			schedule();
+		else
+			schedule_timeout(1 + RWSEM_WAIT_TIMEOUT);
+
 		lockevent_inc(rwsem_sleep_writer);
 		set_current_state(state);
 trylock_again:
--