On 2021-09-01 11:44:11 [+0200], Peter Zijlstra wrote:
> Subject: locking/rtmutex: Fix ww_mutex deadlock check
> 
> Dan reported that rt_mutex_adjust_prio_chain() can be called with
> .orig_waiter == NULL however commit a055fcc132d4 ("locking/rtmutex:
> Return success on deadlock for ww_mutex waiters") unconditionally
> dereferences it.
> 
> Since both call-sites that have .orig_waiter == NULL don't care for the
> return value, simply disable the deadlock squash by adding the NULL
> check.
> 
> Notably, both callers use the deadlock condition as a termination
> condition for the iteration; once detected, we're sure (de)boosting is
> done. Arguably [3] would be a more natural termination point, but I'm
> not sure adding a third deadlock detection state would improve the code.
> 
> Fixes: a055fcc132d4 ("locking/rtmutex: Return success on deadlock for ww_mutex waiters")
> Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>

It sounds reasonable and I don't see any fallout in the testsuite so

Acked-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

> ---
>  kernel/locking/rtmutex.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> index 8eabdc79602b..6bb116c559b4 100644
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -753,7 +753,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
>  	 * other configuration and we fail to report; also, see
>  	 * lockdep.
>  	 */
> -	if (IS_ENABLED(CONFIG_PREEMPT_RT) && orig_waiter->ww_ctx)
> +	if (IS_ENABLED(CONFIG_PREEMPT_RT) && orig_waiter && orig_waiter->ww_ctx)
> 		ret = 0;
> 
> 	raw_spin_unlock(&lock->wait_lock);

Sebastian