[bug report] locking/rtmutex: Return success on deadlock for ww_mutex waiters

Hello Peter Zijlstra,

This is a semi-automatic email about new static checker warnings.

The patch a055fcc132d4: "locking/rtmutex: Return success on deadlock
for ww_mutex waiters" from Aug 26, 2021, leads to the following
Smatch complaint:

    kernel/locking/rtmutex.c:756 rt_mutex_adjust_prio_chain()
    error: we previously assumed 'orig_waiter' could be null (see line 644)
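
In pattern form, the warning means a pointer is tested for NULL in one
place and then dereferenced unconditionally further down.  A minimal,
hypothetical reconstruction (made-up names, not the kernel code) of
what Smatch matches on:

	#include <stddef.h>

	struct waiter { void *ww_ctx; };

	static int chain_walk(struct waiter *orig_waiter, int lock_unowned)
	{
		/* This check implies orig_waiter may be NULL here ... */
		if (orig_waiter && lock_unowned)
			return 0;

		/* ... but ~100 lines later it is dereferenced unguarded.
		 * If orig_waiter really was NULL, this is the crash. */
		return orig_waiter->ww_ctx != NULL;
	}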

kernel/locking/rtmutex.c
   643		 */
   644		if (orig_waiter && !rt_mutex_owner(orig_lock))
                    ^^^^^^^^^^^
A lot of this code assumes "orig_waiter" can be NULL.

   645			goto out_unlock_pi;
   646	
   647		/*
   648		 * We dropped all locks after taking a refcount on @task, so
   649		 * the task might have moved on in the lock chain or even left
   650		 * the chain completely and blocks now on an unrelated lock or
   651		 * on @orig_lock.
   652		 *
   653		 * We stored the lock on which @task was blocked in @next_lock,
   654		 * so we can detect the chain change.
   655		 */
   656		if (next_lock != waiter->lock)
   657			goto out_unlock_pi;
   658	
   659		/*
   660		 * There could be 'spurious' loops in the lock graph due to ww_mutex,
   661		 * consider:
   662		 *
   663		 *   P1: A, ww_A, ww_B
   664		 *   P2: ww_B, ww_A
   665		 *   P3: A
   666		 *
   667		 * P3 should not return -EDEADLK because it gets trapped in the cycle
   668		 * created by P1 and P2 (which will resolve -- and runs into
   669		 * max_lock_depth above). Therefore disable detect_deadlock such that
   670		 * the below termination condition can trigger once all relevant tasks
   671		 * are boosted.
   672		 *
   673		 * Even when we start with ww_mutex we can disable deadlock detection,
   674		 * since we would suppress a ww_mutex induced deadlock at [6] anyway.
   675		 * Suppressing it here however is not sufficient since we might still
   676		 * hit [6] due to adjustment driven iteration.
   677		 *
   678		 * NOTE: if someone were to create a deadlock between 2 ww_classes we'd
   679		 * utterly fail to report it; lockdep should.
   680		 */
   681		if (IS_ENABLED(CONFIG_PREEMPT_RT) && waiter->ww_ctx && detect_deadlock)
   682			detect_deadlock = false;
   683	
   684		/*
   685		 * Drop out, when the task has no waiters. Note,
   686		 * top_waiter can be NULL, when we are in the deboosting
   687		 * mode!
   688		 */
   689		if (top_waiter) {
   690			if (!task_has_pi_waiters(task))
   691				goto out_unlock_pi;
   692			/*
   693			 * If deadlock detection is off, we stop here if we
   694			 * are not the top pi waiter of the task. If deadlock
   695			 * detection is enabled we continue, but stop the
   696			 * requeueing in the chain walk.
   697			 */
   698			if (top_waiter != task_top_pi_waiter(task)) {
   699				if (!detect_deadlock)
   700					goto out_unlock_pi;
   701				else
   702					requeue = false;
   703			}
   704		}
   705	
   706		/*
   707		 * If the waiter priority is the same as the task priority
   708		 * then there is no further priority adjustment necessary.  If
   709		 * deadlock detection is off, we stop the chain walk. If it's
   710		 * enabled we continue, but stop the requeueing in the chain
   711		 * walk.
   712		 */
   713		if (rt_mutex_waiter_equal(waiter, task_to_waiter(task))) {
   714			if (!detect_deadlock)
   715				goto out_unlock_pi;
   716			else
   717				requeue = false;
   718		}
   719	
   720		/*
   721		 * [4] Get the next lock
   722		 */
   723		lock = waiter->lock;
   724		/*
   725		 * [5] We need to trylock here as we are holding task->pi_lock,
   726		 * which is the reverse lock order versus the other rtmutex
   727		 * operations.
   728		 */
   729		if (!raw_spin_trylock(&lock->wait_lock)) {
   730			raw_spin_unlock_irq(&task->pi_lock);
   731			cpu_relax();
   732			goto retry;
   733		}
   734	
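The trylock-or-back-off at [5] is the standard way to take two locks
against their documented order.  In userspace pthread terms (a rough
sketch of the same pattern, not the kernel primitives; note that any
state guarded by the dropped lock must be re-validated after retaking
it):

	#include <pthread.h>
	#include <sched.h>

	static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

	/* Documented order is A then B, but we already hold B, so we
	 * must not block on A.  Trylock, and on failure drop B and
	 * retry, like the wait_lock/pi_lock dance above. */
	static void lock_a_while_holding_b(void)
	{
		while (pthread_mutex_trylock(&A) != 0) {
			pthread_mutex_unlock(&B);
			sched_yield();		/* ~cpu_relax() */
			pthread_mutex_lock(&B);
		}
	}
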
   735		/*
   736		 * [6] check_exit_conditions_2() protected by task->pi_lock and
   737		 * lock->wait_lock.
   738		 *
   739		 * Deadlock detection. If the lock is the same as the original
   740		 * lock which caused us to walk the lock chain or if the
   741		 * current lock is owned by the task which initiated the chain
   742		 * walk, we detected a deadlock.
   743		 */
   744		if (lock == orig_lock || rt_mutex_owner(lock) == top_task) {
                    ^^^^^^^^^^^^^^^^^
This might mean it's a false positive, but Smatch isn't clever enough to
figure it out.  And I'm stupid too!  Plus lazy...  and ugly.
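
Stepping back, the check at [6] is the classic "did the chain walk come
back to its start" cycle test.  A compressed stand-alone sketch (made-up
structs; the real walk is iterative, drops locks, retries, and is capped
by max_lock_depth):

	#include <stddef.h>

	struct task;

	struct lock {
		struct task *owner;
	};

	struct task {
		struct lock *blocked_on;
	};

	/* Follow owner->blocked_on links from @lock; if we reach the
	 * lock (or the task) that started the walk, there is a cycle. */
	static int would_deadlock(struct lock *lock, struct lock *orig_lock,
				  struct task *top_task)
	{
		int depth;

		for (depth = 0; lock && depth < 1024; depth++) {
			if (lock == orig_lock || lock->owner == top_task)
				return 1;
			lock = lock->owner ? lock->owner->blocked_on : NULL;
		}
		return 0;
	}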

   745			ret = -EDEADLK;
   746	
   747			/*
   748			 * When the deadlock is due to ww_mutex; also see above. Don't
   749			 * report the deadlock and instead let the ww_mutex wound/die
   750			 * logic pick which of the contending threads gets -EDEADLK.
   751			 *
   752			 * NOTE: assumes the cycle only contains a single ww_class; any
   753			 * other configuration and we fail to report; also, see
   754			 * lockdep.
   755			 */
   756			if (IS_ENABLED(CONFIG_PREEMPT_RT) && orig_waiter->ww_ctx)
                                                             ^^^^^^^^^^^^^^^^^^^
Unchecked dereference.

   757				ret = 0;
   758	
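
If the NULL path really is reachable at [6] (the earlier code certainly
allows for it), one possible fix would be to test the pointer before the
dereference.  Untested sketch only; the alternative is that @orig_waiter
can in fact never be NULL by this point, in which case the check at line
644 is the misleading one:

	-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && orig_waiter->ww_ctx)
	+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && orig_waiter && orig_waiter->ww_ctx)
	 		ret = 0;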

regards,
dan carpenter


