Hi Sasha,

I don't think this commit should be picked by stable, since the problem
it fixes is caused by commit f611e8cf98ec ("lockdep: Take read/write
status in consideration when generate chainkey"), which just got merged
in the merge window of 5.10. So 5.9 and 5.4 don't have the problem.

Regards,
Boqun

On Tue, Nov 17, 2020 at 07:56:44AM -0500, Sasha Levin wrote:
> From: Boqun Feng <boqun.feng@xxxxxxxxx>
> 
> [ Upstream commit d61fc96a37603384cd531622c1e89de1096b5123 ]
> 
> Chris Wilson reported a problem spotted by check_chain_key(): a chain
> key got changed in validate_chain() because we modify the ->read in
> validate_chain() to skip checks for dependency adding, and ->read is
> taken into calculation for chain key since commit f611e8cf98ec
> ("lockdep: Take read/write status in consideration when generate
> chainkey").
> 
> Fix this by avoiding to modify ->read in validate_chain() based on two
> facts: a) since we now support recursive read lock detection, there is
> no need to skip checks for dependency adding for recursive readers, b)
> since we have a), there is only one case left (nest_lock) where we want
> to skip checks in validate_chain(), we simply remove the modification
> for ->read and rely on the return value of check_deadlock() to skip the
> dependency adding.
> 
> Reported-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Link: https://lkml.kernel.org/r/20201102053743.450459-1-boqun.feng@xxxxxxxxx
> Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
> ---
>  kernel/locking/lockdep.c | 19 +++++++++----------
>  1 file changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 3eb35ad1b5241..f3a4302a1251f 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -2421,7 +2421,9 @@ print_deadlock_bug(struct task_struct *curr, struct held_lock *prev,
>   * (Note that this has to be done separately, because the graph cannot
>   * detect such classes of deadlocks.)
>   *
> - * Returns: 0 on deadlock detected, 1 on OK, 2 on recursive read
> + * Returns: 0 on deadlock detected, 1 on OK, 2 if another lock with the same
> + * lock class is held but nest_lock is also held, i.e. we rely on the
> + * nest_lock to avoid the deadlock.
>   */
>  static int
>  check_deadlock(struct task_struct *curr, struct held_lock *next)
> @@ -2444,7 +2446,7 @@ check_deadlock(struct task_struct *curr, struct held_lock *next)
>  		 * lock class (i.e. read_lock(lock)+read_lock(lock)):
>  		 */
>  		if ((next->read == 2) && prev->read)
> -			return 2;
> +			continue;
>  
>  		/*
>  		 * We're holding the nest_lock, which serializes this lock's
> @@ -3227,16 +3229,13 @@ static int validate_chain(struct task_struct *curr,
>  
>  		if (!ret)
>  			return 0;
> -		/*
> -		 * Mark recursive read, as we jump over it when
> -		 * building dependencies (just like we jump over
> -		 * trylock entries):
> -		 */
> -		if (ret == 2)
> -			hlock->read = 2;
>  		/*
>  		 * Add dependency only if this lock is not the head
> -		 * of the chain, and if it's not a secondary read-lock:
> +		 * of the chain, and if the new lock introduces no more
> +		 * lock dependency (because we already hold a lock with the
> +		 * same lock class) nor deadlock (because the nest_lock
> +		 * serializes nesting locks), see the comments for
> +		 * check_deadlock().
>  		 */
>  		if (!chain_head && ret != 2) {
>  			if (!check_prevs_add(curr, hlock))
> -- 
> 2.27.0
> 
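
[ Not part of the patch above -- just a minimal, hypothetical module
  sketch of the two situations the updated check_deadlock() comment
  distinguishes: a recursive read of the same lock class, and two locks
  of the same class serialized by a nest_lock. All the demo_* names are
  made up for illustration. ]

/*
 * Illustration only: exercise the recursive-read and nest_lock cases
 * that check_deadlock() handles.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>

static DEFINE_RWLOCK(demo_rwlock);
static DEFINE_MUTEX(demo_nest);		/* acts as the nest_lock */
static spinlock_t demo_locks[2];	/* one init site => one lock class */

static int __init lockdep_demo_init(void)
{
	int i;

	for (i = 0; i < 2; i++)
		spin_lock_init(&demo_locks[i]);

	/*
	 * Recursive reader: read_lock() is annotated as a recursive
	 * read lock, so taking the same rwlock for read twice is fine,
	 * and with recursive read detection lockdep no longer needs to
	 * skip the dependency checks for it (check_deadlock() just
	 * continues past the earlier reader).
	 */
	read_lock(&demo_rwlock);
	read_lock(&demo_rwlock);
	read_unlock(&demo_rwlock);
	read_unlock(&demo_rwlock);

	/*
	 * nest_lock: demo_locks[0] and demo_locks[1] share a lock
	 * class, but both are taken under demo_nest, which serializes
	 * the nesting, so check_deadlock() returns 2 and
	 * validate_chain() skips the dependency adding instead of
	 * reporting recursion.
	 */
	mutex_lock(&demo_nest);
	spin_lock_nest_lock(&demo_locks[0], &demo_nest);
	spin_lock_nest_lock(&demo_locks[1], &demo_nest);
	spin_unlock(&demo_locks[1]);
	spin_unlock(&demo_locks[0]);
	mutex_unlock(&demo_nest);

	return 0;
}

static void __exit lockdep_demo_exit(void)
{
}

module_init(lockdep_demo_init);
module_exit(lockdep_demo_exit);
MODULE_LICENSE("GPL");

On a lockdep-enabled kernel this should load without a splat; replacing
the second spin_lock_nest_lock() with a plain spin_lock() while
demo_locks[0] is still held should trigger the "possible recursive
locking detected" report, since both locks share one class.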